CA2288513A1 - Apparatus and Method for an ATM Layer Device

Info

Publication number
CA2288513A1
Authority
CA
Canada
Prior art keywords
queue
layer
atm
manager
conjunction
Prior art date
Legal status
Abandoned
Application number
CA002288513A
Other languages
French (fr)
Inventor
James A. Gilderson
Marc Levesque
Amal Khailtash
Aneesh Dalvi
W. David Cornfield
Current Assignee
Spacebridge Networks Corp
Original Assignee
Spacebridge Networks Corp
Priority date
Filing date
Publication date
Application filed by Spacebridge Networks Corp filed Critical Spacebridge Networks Corp
Priority to CA002288513A priority Critical patent/CA2288513A1/en
Priority to CA002325135A priority patent/CA2325135A1/en
Publication of CA2288513A1 publication Critical patent/CA2288513A1/en
Abandoned legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/30 Peripheral units, e.g. input or output ports
    • H04L49/3081 ATM peripheral units, e.g. policing, insertion or extraction
    • H04L49/50 Overload detection or protection within a single switching element
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5678 Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5681 Buffer or queue management


Abstract

This disclosure presents an ATM layer device addressing ATM layer queuing and routing functionality pre- and post-switching. A layered apparatus is presented as a method of isolating the arrival and departure processes of queuing, while separating network layer communication from ATM layer operation. The structure examined is useful in the ingress and egress segments of input- and/or output-queued ATM switching applications and offers benefits associated with parallel processing and process independence.

Description

TITLE
Apparatus and Method for an ATM layer device.
FIELD OF THE INVENTION
This invention relates to ATM layer devices and is particularly concerned with route management and congestion management via the queuing of ATM cells pre- and post-switching.
BACKGROUND OF THE INVENTION
A key component of any communications network is the switching sub-system that provides interconnectivity between users. Fast packet switching is related to a type of switching sub-system that can provide, in an efficient and fair manner, the various qualities of service (QoS) required by different applications.
Packet switching in modern high-speed telecommunication networks is generally implemented in a switching device comprising an input (ingress) interface card 1, a switch fabric 2, and an output (egress) interface card 3, as illustrated in Figure 1.
In a typical ATM switch, the ingress card is responsible for processing incoming traffic arriving at the input ports 7 for internal routing. Prior to routing traffic through the forward path 6, this card appends additional information onto the header portion of the ATM cell, including its internal identifier (ID), cell type, cell class, and the designated egress card(s).
This information is typically stored in a table that is indexed by the external ID of the cell. In addition, the ingress card performs other functions, such as buffering, scheduling, queuing, monitoring, and discarding of cells, and it communicates these functions with other parts of the switch through the alternate path 4. The ATM switch fabric is primarily responsible for switching all traffic arriving from the ingress card(s). It also performs other functions such as buffering, scheduling and queuing of cells. Finally, the egress card is responsible for processing the traffic received from the switch fabric for onward transmission through the output ports 8.
This process involves the removal of the information appended to the cell header and the reinsertion of new information for delivery to the next destination. In addition, the egress card performs management functions similar to the ingress card and communicates these functions through the feedback path 5.
In the field of fast packet switching, a problem exists for the switching fabric: because all inputs interconnect to each single output, the fabric must handle an aggregate input rate greater than its output rate. Solutions may generally be classified into the following categories:
• Input Queuing - to reduce the aggregate input rate in the event of fabric congestion;
• Fabric Speed-up - to increase throughput to match or exceed the aggregate input rate; and
• Output Queuing - to increase congestion tolerance at the bottleneck.
Typically, some sub-set of the preceding solutions is implemented for a given fast packet switch.
Queuing describes two simultaneously occurring processes: an arrival process and a departure process. The function of the arrival process is to distribute cells into queues on the basis of priority and destination. Since queues have finite depth, feedback is required to determine whether a cell may enter a queue or must be discarded. The function of the departure process is to decide which of the queues is to be served next. The decision must consider service fairness in light of the requested QoS.
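By way of illustration, the following minimal sketch (with hypothetical names; the simple tail-drop rule shown is an assumption, not the embodiment described below) models a single finite-depth queue whose arrival process consults occupancy feedback before admitting a cell, and whose departure process serves the head of the queue:

```python
from collections import deque

class BoundedQueue:
    """Toy model of one traffic queue with an arrival and a departure process."""
    def __init__(self, depth):
        self.depth = depth          # finite queue depth
        self.cells = deque()

    def occupancy(self):
        return len(self.cells)      # feedback consulted by both processes

    def arrive(self, cell):
        # Arrival process: occupancy feedback decides enter vs. discard.
        if self.occupancy() >= self.depth:
            return False            # discard: queue is full
        self.cells.append(cell)
        return True

    def depart(self):
        # Departure process: serve the head of the queue, if any.
        return self.cells.popleft() if self.cells else None

q = BoundedQueue(depth=2)
print(q.arrive("cell-1"), q.arrive("cell-2"), q.arrive("cell-3"))  # True True False
print(q.depart())                                                  # cell-1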
All processes should, in general, be independent of one another, because the dependency of one process on another imposes unnecessary restrictions on the operation of the others.
Furthermore, the spreading of problems from one process to another has a compounding effect, ultimately resulting in larger problems with an unidentifiable source. For example, a delay in one process should not impose any delays in neighbouring processes.
Difficulties arise, however, in that independent processes typically have to communicate with one another. The co-ordination of accurate and reliable communication is no easy task, and effective management of this communication is essential in order to successfully isolate processes. The most common means of achieving such communication is to use a shared storage resource. This is the fundamental concept of a queue. However, queue access is typically subject to contention between processes. Thus the successful negotiation of access becomes key to effective communications management.
In queuing, the arrival and departure processes contend with one another for storage resources. First, access to queue memory has to be time-shared between processes. Secondly, processes contend in a similar manner for access to occupancy information stored in memory. This information is required in the arrival process as feedback for the decision of whether or not a queue should be entered. It is also required by the departure process in the decision of which queue should be serviced next.
The Prior Art does not suitably address the isolation of the arrival and departure processes in respect of contention. An exemplary switching device described in reference [1] fails to mention this aspect of contention. One weakness of this Prior Art device is that there is no scheduling performed across priorities and destinations, nor is there any selective cell discard on the basis of feedback. However, both the arrival and departure processes are present and contend for access to the single queue: ICELL memory. In the arrival process of this device, the cell loader stores cells in the ICELL memory based on pointers found in the free cell FIFO. In the departure process, cells are scheduled for transmission by an active chain managed by the ingress controller in the call table. An active chain is a linked list of cells in ICELL memory. The ingress controller establishes the links as a low-priority function by modifying a next-pointer field of each COM cell stored in ICELL memory. The additional shared memory access required for the establishment of links in the departure process impinges on the operation of the arrival process. The result is that in conditions of heavy traffic, primarily composed of large packets, data will likely be lost in the arrival process as the departure process monopolises the ICELL memory with link establishment.
Another weakness in the said Prior Art is that complex decisions are typically made using a centralised processing unit (CPU) to lower cost. The problem here is that processes must contend for CPU processing time, so the successful negotiation of processing time becomes the key to low-cost decision-making. Note, however, that this holds only if the decision is complex.
Each queuing process requires a decision. In the arrival process, a decision is required to drop or pass a cell. In the departure process, a decision is required to slate the next queue for servicing in a fair manner. These decisions are non-trivial, but very manageable when presented with the appropriate information to assess. For example, a scheduling decision generally requires only information about the present occupancy of queues and the current state of congestion. A typical discard decision requires only information on the occupancy of the destination queue and cell eligibility.
In the said Prior Art these queuing decisions are generally made by a centralised unit.
This approach counteracts the isolation of each of the processes, since the unit becomes another point of resource contention in need of management. The overhead of this management generally causes performance degradation.
Another exemplary ATM layer device is disclosed in reference [2], in which the scope of queuing is limited and selective cell discard is not performed. Here, a centralised unit is presented that "...controls various stages of cell processing in conjunction with other portions of the ATM layer device...". In particular it should be noted that the cell processor is responsible for several functions, such as "...the external ram address look-up, the microprocessor ram arbitration, the microprocessor interface, the microprocessor cell buffer, and the auxiliary cell FIFO buffer." Thus the cell processor must essentially manage all points of contention (across queuing processes and across layers) in addition to performing route re-mapping.
Since process independence, namely unrestricted operation and reduced error propagation, has not been suitably addressed by the Prior Art, there is a need to incorporate in an ATM layer device a feature that will offer greater isolation of the arrival and departure processes.
Prior Art References
[1] US Patent No. 5,528,592 (Schibler et al., June 18, 1996)
[2] US Patent No. 5,889,778 (Huscroft et al., March 30, 1999)
SUMMARY OF THE INVENTION
Accordingly, it is an object of this invention to provide an improved ATM
layer device for use in an ingress or egress configuration that has higher reliability and better performance than many of its predecessor designs. The increase in quality is directly accomplished through greater isolation of queuing processes.
In accordance with an aspect of the present invention there is provided a dual-mode, two-layer, five-point cluster of managers. These managers are responsible for:
route managing tasks (route manager 16); queue occupancy access contention management (queue manager 18); discard decisions (discard manager 17); buffer FIFO access contention management (buffer manager 20); and scheduling decisions (scheduler 19). Layers are provided to differentiate the method of network layer communication from the information exchanged in the communication.
The advantage of the present invention over the prior art is the manner in which the isolation of processes is accomplished. The ATM layer device in this invention addresses the issue of storage resource contention by assigning a manager with the sole purpose of managing said contention. The issue of contention in processing resources is addressed in a similar manner: a manager is assigned to each point of decision-making with the sole purpose of making such decisions. Coupled with a layered structure, this managerial assignment provides greater algorithmic independence in that the manner in which any one manager implements its functionality is completely independent from the way neighbouring managers implement their respective functionality. The ensuing advantages of algorithmic independence are: independent and parallel development; ease of fault detection, isolation and handling pre- and post-deployment; algorithm substitution; and independent tuning pre- and post-deployment.
Furthermore, by assigning two distinct managers to each decision point, the present invention eliminates processing resource contention. This feature offers the advantage of parallel processing in as much as decisions are made concurrently, thereby increasing throughput. Moreover, with the assignment of two distinct managers to each point, the said concurrent decisions can be made in an asynchronous manner. This asynchronous feature eases system development and integration efforts in that front-end and back-end designs remain uncoupled with respect to queuing processes.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a basic block diagram illustrating the traffic flow of a typical ATM
switch.
FIG. 2 is a block diagram illustrating the interconnection of the present ATM
layer device in the context of a fast packet switch, implementing both input and output queuing.
FIG. 3 is a block diagram illustrating the layering structure of the present ATM layer device in the context of the open systems interconnection (OSI) reference model and ATM
forum recommendations.
FIG. 4 is a mapping diagram illustrating the Device Management Sub-Layer in the ATM
layer device.
FIG. 5 is a block diagram illustrating the ATM Processing Sub-Layer in the ATM
layer device.
DESCRIPTION OF THE INVENTION
Acronyms
For convenience, a glossary of acronyms used in this description is given below:
ATM     Asynchronous Transfer Mode
AAL     ATM Adaptation Layer
CAC     Connection Admission Control
CC      Configuration & Control
CLP     Cell Loss Priority
CPU     Centralised Processing Unit
CRC     Cyclical Redundancy Check
DDS     Distributed Database Structure
EPD     Early Packet Discard
FIFO    First-In-First-Out
Gbps    Gigabits/s
HEC     Header Error Control
LRU     Least Recently Used
Mbps    Megabits/s
MIB     Management Information Base
PPD     Partial Packet Discard
OSI     Open Systems Interconnection
QoS     Quality of Service
SAR     Segmentation And Re-assembly
UTOPIA  Universal Test and Operations PHY Interface for ATM
VCI     Virtual Channel Identifier
VPI     Virtual Path Identifier

Terminology
Throughout this description the following terms are used in accordance with their respective definitions given below:
Process
The word "process" as used herein refers to a task requiring certain resources to carry out its intended function.
Edge Device
A device that connects the source or destination of information to the network (i.e., a telephone set is an edge device in a telecommunications network).
Signal Notation
The asterisk (*) following a compound signal name indicates which of the two words applies when the signal is at a specific logic level (i.e., when the high/low* signal is at a logic high, the "high" portion of the compound word applies; at a logic low, the "low*" portion applies).
Service Fairness
The issue of unfairness arises when a small number of traffic queues dominate the common buffer space in the switch. This can occur during periods of heavy traffic when some egress cards are continually congested; traffic queues designated to lightly-congested egress cards will then have limited access to the common buffer space.
Description
In the switching system shown in FIG. 2, the transmission pipes 5 are the physical links between network nodes. Incoming data streams arriving over a transmission pipe 5 are received and converted into ATM cells by a physical layer device 4. Cells are passed to an ATM layer device 2 in ingress mode over a UTOPIA bus 6. The ATM layer device 2 in ingress mode is responsible for queuing in the event of congestion, and for inserting routing information and passing cells to the ATM switch fabric 1 over a cell bus 7.
The ATM switch fabric 1 is responsible for routing cells from an ATM layer device 2 in ingress mode to an ATM layer device 2 in egress mode, as directed by the cell header. Cells are passed to the ATM layer device 2 in egress mode via a cell bus 7. While in egress mode, the ATM layer device 2 is responsible for queuing to reduce the likelihood of congestion and for routing cells to the required physical layer device 4. Cells are passed from the ATM layer device 2 in egress mode to the physical layer device 4 over a UTOPIA bus 6. Outgoing cell streams are framed and transmitted over a transmission pipe 5 by the physical layer device 4. The ATM layer device 2, in this context, whether in ingress or egress mode, is responsible for implementing route management, congestion management and queuing functionality.
In the OSI model of FIG. 3, the arriving and departing data streams are in the physical layer 8. The conversion of data streams into packets is a progression from the physical layer 8 to the data link layer 9. Packets are associated with a destination (or set of destinations, in the case of multicast and broadcast connections) and are switched in the data link layer 9.
For switch nodes, traffic management, namely the acceptance and routing (or rejection) of connections based on loading, occurs in the network layer 10. The higher layers are primarily for reformatting information in edge devices to standardise communication between source and destination. In ATM networks, fixed-length packets called cells are used.
As such, the data link layer 9 is sub-divided into two sub-layers: the ATM Adaptation Layer (AAL) 11 and the ATM Layer 12. The AAL 11 is primarily used in edge devices for the segmentation and re-assembly (SAR) of packets into ATM cells. The ATM layer 12 is used for the processing of cells which mainly involves the implementation of route management, congestion management and queuing functionality. In ATM switches, the ATM
layer 12 also presents loading information to the network layer 10 for the purpose of traffic management.
The ATM layer device 2, in this context, whether in ingress or egress mode, is responsible for presenting a management information base (MIB) to the network layer 10 for traffic management.
Since cell processing is a repetitive local task and traffic management involves complex global decisions, the ATM layer device 2 incorporates a layered organisation falling inside the ATM layer 12. The ATM processing sub-layer 14 is responsible for implementing route management, congestion management, and queuing functionality, while the device management sub-layer 13 is responsible for managing the communications between the network layer 10 and the ATM processing sub-layer 14.
In the mapping diagram shown in FIG. 4, the communication between the ATM processing sub-layer and the ATM device management sub-layer is bi-directional and dual-facetted. For general device management (configuration), a distributed database structure (DDS) 23 is used to communicate error flags upward and tuning parameters downward. For traffic management (control), loading information is communicated upward via MIB statistics integrated into the DDS 23, while downward communication is done via routing table entries. The DDS 23 is a bank of registers that are distributed according to the different managers in the ATM processing sub-layer, with each manager occupying an eighth of a 24-bit address space and only communicating information pertinent to its local operations. (The content of the information exchanged is presented later in conjunction with the ATM processing sub-layer 14 functionality.) A configuration and control (CC) interface 21 is used to present the 24-bit address space to the network layer 10 via a serial bit-stream 22. The message is transmitted with the most significant bit (MSB) first, and formatted to include a start-bit, a read/write bit, a 24-bit address, a 3-bit message length field and a data field generally varying between 0 and 120 bits.
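The following sketch illustrates one possible encoding of such a serial message. It assumes a start-bit value of 1, write asserted as 1, and a length field that counts 24-bit data words (five words giving the 120-bit maximum noted above); none of these particulars are specified in the text.

```python
def encode_cc_frame(write, address, data_words):
    """Encode a CC-interface message as a list of bits, MSB first (a sketch)."""
    assert 0 <= address < (1 << 24) and len(data_words) <= 5
    bits = [1]                                               # start bit (assumed value)
    bits.append(1 if write else 0)                           # read/write bit (assumed polarity)
    bits += [(address >> i) & 1 for i in range(23, -1, -1)]  # 24-bit address, MSB first
    n = len(data_words)
    bits += [(n >> i) & 1 for i in range(2, -1, -1)]         # 3-bit message length field
    for w in data_words:                                     # data field, 24 bits per word
        bits += [(w >> i) & 1 for i in range(23, -1, -1)]
    return bits

frame = encode_cc_frame(write=True, address=0x123456, data_words=[0x000FFF])
print(len(frame))   # 1 + 1 + 24 + 3 + 24 = 53 bits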
Moving now to the block diagram of FIG. 5, since the downward communication of routing table entries contends with the route manager 16 for connection memory 28 access, said route manager 16, having greater priority by default as a result of the layering, is the master of connection memory access. The management of this inter-layer contention is achieved in one of two ways. First, contention management is implemented as a part of the route manager 16, with the route table entries integrated into the DDS 23.
Secondly, as illustrated in FIG. 5, contention management is implemented as a memory access device on the back-end of one eighth of CC interface 21 address space, with the route manager 16 retaining a mutual exclusion signal 26 indicating continued connection memory access.
Optionally, a buffer FIFO memory access device is similarly implemented for added test support.
In FIG. 5, route management, congestion management and queuing functionality are divided amongst a cluster of five managers in the ATM processing sub-layer 14 with the intention of isolating queuing processes. Route management functionality occurs strictly in the route manager 16. Congestion management is a selective discard, based on feedback. The decision to discard is made by the discard manager 17, while feedback is provided in the form of queue occupancy from the queue manager 18. Queuing functionality occurs across three managers. The scheduler 19 decides which queue to service next in the departure process. The decision is based on all queue occupancies presented by the queue manager 18. The buffer manager 20 is responsible for managing the arrival and departure processes that contend for access to buffer FIFOs 38. Note also that the queue manager 18 is responsible for managing contention between the arrival and departure processes for access to queue occupancy information.
The queue manager 18 can also be integrated into the scheduler 19 in the case where congestion management functionality does not require feedback. The use of the discard manager 17 is optional in the case where congestion management functionality is not required at all. Furthermore, in such a case, the route manager 16 can be located after the buffer manager 20 in the departure process.

Optionally, queuing memory utilisation can be increased with sharing. Each queue is only guaranteed a small allocation of memory space, but may exploit an entire shared portion. The shared portion is the memory remaining after queue allocations, or any set of divisions thereof. In the case where sharing is implemented, the queue manager 18 is responsible for ensuring that the shared space does not overflow. This sharing function is not apparent in the reported occupancy and is transparent to the operation of the other managers. To achieve this, a 12-bit minimum queue size in conjunction with a 12-bit maximum share size is held on a per-queue basis in the DDS 23.
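A minimal sketch of such guaranteed-minimum-plus-shared-pool accounting follows. The admission rule and the class names are illustrative assumptions; only the per-queue minimum and maximum-share parameters come from the description, and the departure-side release of space is omitted for brevity.

```python
class SharedBufferPool:
    """Guaranteed-minimum plus shared-pool accounting, as the queue manager
    might track it (a sketch; the actual rule is not given in the text)."""
    def __init__(self, shared_total, queues):
        # queues: {queue_id: (min_alloc, max_share)} per-queue DDS parameters
        self.shared_total = shared_total
        self.shared_used = 0
        self.limits = queues
        self.occupancy = {q: 0 for q in queues}

    def admit(self, q):
        min_alloc, max_share = self.limits[q]
        if self.occupancy[q] < min_alloc:        # inside guaranteed space
            self.occupancy[q] += 1
            return True
        shared_held = self.occupancy[q] - min_alloc
        if shared_held < max_share and self.shared_used < self.shared_total:
            self.occupancy[q] += 1               # borrow from the shared pool
            self.shared_used += 1
            return True
        return False                             # would overflow shared space

pool = SharedBufferPool(shared_total=1, queues={0: (1, 4), 1: (1, 4)})
print([pool.admit(0) for _ in range(3)])  # [True, True, False]: 1 guaranteed + 1 shared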
In the arrival process, a cell is passed from the input interface 27 to the route manager 16 via a cell bus 29. The route manager 16 performs a table look-up based on the VPI/VCI values and maps in the new cell route (R' 31) stored in external connection memory 28. Simultaneously, a request is made by the route manager 16 to determine if the cell should be dropped on the basis of the volume of presently queued traffic for the given service class and output. This request is made via a 5-bit request (E) 34 signal, wherein E is a 4-bit state-vector representing the combined CLP, EPD & PPD eligibility of the current connection. The route manager 16 drops the cell if the discard manager 17 asserts a discard on the 5-bit discard/send* E signal 30. Otherwise it passes the cell via a cell bus 29 to the buffer manager 20 for queuing in the external buffer FIFOs 38. Since discarding is connection-based, the discard manager 17 also returns a modified 4-bit E state-vector in conjunction with the discard/send* E 30 signal for storage in the external connection memory 28 by the route manager 16. The route manager 16 flags erroneous routes to the DDS 23 by storing the least recently used (LRU) invalid VPI/VCI per input port and the LRU parity-erred connection memory 28 address.
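The table look-up can be pictured as follows. The dictionary stands in for the VPI/VCI-indexed external connection memory 28, and the stored fields (new route R' and the 4-bit eligibility state-vector E) follow the description; the entry layout itself is an assumption.

```python
# Hypothetical connection-table entries: (VPI, VCI) -> {new route R', eligibility E}
connection_table = {
    (1, 32): {"route": 0x05, "E": 0b0001},
    (1, 33): {"route": 0x07, "E": 0b0011},
}

def route_lookup(vpi, vci):
    """Return (new_route, eligibility) or None for an invalid VPI/VCI."""
    entry = connection_table.get((vpi, vci))
    if entry is None:
        return None                  # would be flagged to the DDS as an invalid VPI/VCI
    return entry["route"], entry["E"]

print(route_lookup(1, 32))   # (5, 1)
print(route_lookup(9, 99))   # None -> erroneous route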
In order to make a decision, the discard manager 17 requires eligibility information and queue occupancy feedback. Eligibility information is dispatched directly from the route manager 16 as the 4-bit E state-vector portion of the request E 34 signal, while the queue occupancy feedback is dispatched as a 16-bit vector from the queue manager 18 via the Qfeedback 33 signal. The feedback information selected is based on the 16-bit destination queue Q, indicated by the route manager 16 on the query Q signal 36 when the request is initiated. If a cell for a given connection is eligible and the occupancy of the destination queue exceeds some threshold that would otherwise compromise QoS, then the route manager 16 is advised to drop the cell by asserting discard on the discard/send* E 30 signal.
In the event all queues are full, as indicated by an overflow flag asserted by the queue manager 18 in conjunction with the Qfeedback 33 signal, the discard manager 17 automatically asserts discard on the discard/send* E 30 signal. Twelve-bit CLP, EPD & PPD on and off thresholds for each queue are stored in the DDS 23 for tuning purposes. Furthermore, the discard manager 17 stores counts of transmitted cells (total and per port) and discarded cells (total and per queue on the basis of CLP, EPD & PPD eligibility) in the DDS 23 for traffic management purposes.
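One plausible reading of the on and off thresholds is a hysteresis rule: discarding of eligible cells begins when occupancy crosses the "on" threshold and ceases only once it falls below the "off" threshold. The sketch below implements that reading for a single discard mechanism; the actual rule is not spelled out in the text.

```python
def discard_decision(eligible, occupancy, thresholds, discarding, all_full):
    """Sketch of the discard manager's per-cell decision (assumed hysteresis)."""
    if all_full:                      # overflow flag from the queue manager
        return True, discarding
    on, off = thresholds              # 12-bit per-queue values from the DDS
    if discarding and occupancy < off:
        discarding = False            # fell below 'off': stop discarding
    elif not discarding and occupancy >= on:
        discarding = True             # crossed 'on': start discarding
    return (eligible and discarding), discarding

state = False
for occ in (100, 260, 180, 140):      # thresholds: on=250, off=150
    drop, state = discard_decision(eligible=True, occupancy=occ,
                                   thresholds=(250, 150),
                                   discarding=state, all_full=False)
    print(occ, drop)   # 100 False / 260 True / 180 True (still above 'off') / 140 False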
In the case where a cell is not dropped, it is passed from the route manager 16 to the buffer manager 20 for queuing via a cell bus 29. The buffer manager 20 extracts the destination queue and accordingly stores the cell in the buffer FIFOs 38 by writing over the FIFO bus 40. Fifteen-bit cell addresses are manipulated in a FIFO manner, and the cell is written to a common external memory space. Optionally, a CRC field is appended to the cell to protect against memory errors. The change in queue occupancy is dispatched to the queue manager 18 via the ΔQ 37 signal, wherein Q is a 15-bit vector representing the destination queue, asserted in conjunction with an increment/decrement* signal. FIFO boundaries are stored in the DDS 23 to allow for queue depth tailoring at start-up, while buffer FIFO 38 memory errors are flagged to the DDS 23 for tracking purposes as a count in conjunction with the erred address in LRU fashion.
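The optional CRC protection might look like the following, where a CRC-8 (the ATM HEC polynomial, chosen here purely for illustration; the description does not fix a width or polynomial) is appended on the write path and verified on the read path:

```python
def crc8(data, poly=0x07):
    """Bitwise CRC-8 over bytes (polynomial x^8 + x^2 + x + 1, an assumption)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

cell = bytes(range(53))                 # a 53-byte ATM cell
stored = cell + bytes([crc8(cell)])     # write path: append the CRC field
assert crc8(stored) == 0                # read path: zero residue means no memory error
print(hex(stored[-1]))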
Concurrently in the departure process, the buffer manager 20 requests a queue from the scheduler 19 via the request 41 signal. The queue slated for transmission is then supplied by the scheduler 19 via the dispatch Q 43 signal, wherein Q is a 15-bit vector representing the queue slated for transmission. The buffer manager 20 subsequently fetches a cell from the buffer FIFOs 38 via the FIFO bus 40 and then passes the cell to the output interface 44 via a cell bus 29 (Note that the FIFO bus is the point of resource contention between processes).
The change in queue occupancy is then dispatched to the queue manager 18 via the ΔQ signal 37 by asserting a decrement* in conjunction with the 15-bit queue Q. Optionally, the decrement* assertion is also routed to the scheduler 19 as an acknowledgement to prompt recalculation. Furthermore, a CRC check is performed in the case where a CRC field was appended to the cell in the arrival process. As previously mentioned, FIFO boundaries are stored in the DDS 23 to allow for queue depth tailoring at start-up, while buffer FIFO 38 memory errors are flagged to the DDS 23 for tracking purposes as a count in conjunction with the erred address in LRU fashion.
The scheduler 19 decides which queue to service next. To assess fairness in light of requested QoS, the scheduling decision requires prioritisation information in conjunction with the current queue occupancy information. The queue manager 18 permanently presents this per-queue occupancy information to the scheduler 19 via the Qstatus 39 bus. Tuneable service class and output prioritisation parameters are presented as weights stored in the DDS 23. Furthermore, in ingress mode, the decision may consider downstream congestion feedback, as indicated by a one-hot encoded 8-bit congestion 42 signal. The scheduler supplies the next queue slated for transmission on the dispatch Q 43 signal upon assertion of the request 41 signal by the buffer manager 20.
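As an illustration only, the sketch below makes the scheduling decision with a simple weight-times-occupancy rule, masking queues whose one-hot congestion bit is set; the description requires only that weights, occupancy and congestion feedback inform the decision, not this particular rule.

```python
def schedule(occupancy, weights, congestion_mask=0):
    """Pick the next queue to serve (assumed weight-times-occupancy rule)."""
    best, best_score = None, 0
    for q, occ in occupancy.items():
        if occ == 0 or (congestion_mask >> q) & 1:
            continue                      # skip empty or congested-downstream queues
        score = weights[q] * occ          # tunable prioritisation weight from the DDS
        if score > best_score:
            best, best_score = q, score
    return best                           # None when nothing is serviceable

occ = {0: 10, 1: 4, 2: 7}
w = {0: 1, 1: 8, 2: 2}
print(schedule(occ, w))                          # queue 1: 8*4=32 beats 10 and 14
print(schedule(occ, w, congestion_mask=0b010))   # queue 2 next (14 > 10)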
Comparing FIG. 2 with FIG. 5, the input interface 27 and output interface 44 couple the ATM layer device 2 to the ATM switch fabric 1 and a plurality of physical layer devices 4. In ingress mode, the input interface 27 is couplable to a plurality of physical layer devices 4, while the output interface 44 is a high speed cell bus 7 compatible with the ATM switch fabric 1. In egress mode, the input interface 27 is a high speed cell bus 7 couplable to the switch fabric 1, while the output interface 44 is couplable to a plurality of physical layer devices 4. In a typical ATM switch, the interfaces reformat cells to 60 bytes for internal use, compatible with the ATM switch fabric 1. In general, the cell bus 29 can be 30 data bits wide and can be transmitted in conjunction with a 50 MHz clock pulse and two parity bits, providing a total maximal throughput of 1.325 Gbps. In addition, an eighth of the CC interface 21 address space is dedicated to the interfaces for flagging interfacing errors in the DDS 23 and configuring the device mode.
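The quoted 1.325 Gbps figure is consistent with counting only the 53-byte ATM cell carried inside each 60-byte internal cell, an interpretation rather than an explicit statement in the text:

```python
# Worked check of the 1.325 Gbps figure under the 53-of-60-bytes reading.
clock_hz = 50e6
data_bits = 30                            # per clock, excluding the two parity bits
raw = data_bits * clock_hz                # 1.5 Gbps raw on the cell bus
atm_payload = raw * (53 * 8) / (60 * 8)   # 53 of every 60 bytes are the ATM cell
print(raw / 1e9, atm_payload / 1e9)       # 1.5, 1.325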
Of course, numerous variations and adaptations may be made to the particular embodiments of the invention described above, without departing from the spirit and scope of the invention, as defined in the following claims.

Claims (15)

What is claimed is:
1. An ATM layer device for interfacing between a plurality of physical layer devices and an ATM switch fabric, said ATM layer device comprising:
(a) a first sub-layer for performing a first functional set of ATM cell processing functions; and
(b) a second sub-layer for performing a second functional set of network layer communication functions.
2. A device according to claim 1, wherein the first functional set includes a priority based queuing function.
3. A device according to claim 2, wherein the priority based queuing function includes a set of concurrent queuing processes, and wherein the first sub-layer includes a cluster of independent cell processing blocks for isolating said concurrent queuing processes.
4. A device according to claim 1, wherein the second functional set includes an MIB
statistics collection function for traffic management purposes.
5. A device according to claim 1, wherein the first functional set includes a route management function.
6. A device according to claim 1, wherein the first functional set includes a congestion management function.
7. A device according to claim 6, wherein the congestion management is in a form of feedback-based selective discard in the arrival process.
8. A device according to claim 1, wherein the second functional set includes a manipulation function for manipulating performance tuning parameters.
9. A device according to claim 1, wherein the first functional set includes a route management function and a congestion management function in a form of feedback-based selective discard in the arrival process, and wherein the second functional set includes a manipulation function for manipulating performance tuning parameters.
10. A device according to claim 9, wherein the first sub-layer includes a cluster of independent cell processing blocks comprising:
(a) a buffer manager for isolating buffer FIFO contention;
(b) a queue manager for isolating queue occupancy contention;
(c) a discard manager for isolating discarding decisions in the arrival process;
(d) a scheduler for isolating scheduling decisions in the departure process; and
(e) a route manager for performing route management, notwithstanding the limitations imposed by vector size, or active levels and edges.
11. A device according to claim 10, wherein the cluster of independent cell processing blocks communicate with one another by means of at least one of:
a cell bus, a request (E) signal, a discard (E) response, a query (Q) signal, a Qfeedback response, a Δ(Q) signal, a Qstatus bus, a request signal, a dispatch (Q) response, and a congestion signal.
12. A device according to claim 9, wherein the second functional set is facilitated by an address-mapped distributed-database structure in conjunction with a configuration-and-control interface.
13. A device according to claim 10, wherein the network layer communication functions are facilitated by an address-mapped distributed-database structure in conjunction with a configuration-and-control interface.
14. A device according to claim 12, wherein the second sub-layer includes a distributed data structure for maintaining at least one of:
(a) invalid VPI/VCIs in LRU fashion in conjunction with a total count;

(b) parity-erred connection memory addresses in LRU fashion in conjunction with a total count;
(c) on-and-off thresholds on a per-queue basis for CLP, EPD & PPD discarding mechanisms;
(d) counts, in conjunction with a total count, on a per-queue and per-discard mechanism of discarded cells;
(e) parity-erred buffer FIFO memory addresses in LRU fashion in conjunction with a total count;
(f) FIFO boundaries;
(g) prioritisation weights, notwithstanding the limitations imposed by vector size.
15. A device according to claim 14, wherein the second sub-layer performs sharing, and the distributed data structure further maintains minimum queue sizes in conjunction with a maximum share size on a per-queue basis.
CA002288513A 1999-11-05 1999-11-05 Apparatus and method for an atm layer device Abandoned CA2288513A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CA002288513A CA2288513A1 (en) 1999-11-05 1999-11-05 Apparatus and method for an atm layer device
CA002325135A CA2325135A1 (en) 1999-11-05 2000-11-06 Asynchronous transfer mode layer device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA002288513A CA2288513A1 (en) 1999-11-05 1999-11-05 Apparatus and method for an atm layer device

Publications (1)

Publication Number Publication Date
CA2288513A1 true CA2288513A1 (en) 2001-05-05

Family

ID=4164565

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002288513A Abandoned CA2288513A1 (en) 1999-11-05 1999-11-05 Apparatus and method for an atm layer device

Country Status (1)

Country Link
CA (1) CA2288513A1 (en)


Legal Events

Date Code Title Description
FZDE Discontinued