EP0839420A4 - Allocated and dynamic bandwidth management - Google Patents
Allocated and dynamic bandwidth management
- Publication number
- EP0839420A4 (application number EP96924622A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- switch
- input
- bandwidth
- output
- dynamic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/18—End to end
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/163—Interprocessor communication
- G06F15/173—Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
- G06F15/17356—Indirect interconnection networks
- G06F15/17368—Indirect interconnection networks non hierarchical topologies
- G06F15/17375—One dimensional, e.g. linear array, ring
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/46—Interconnection of networks
- H04L12/4604—LAN interconnection over a backbone network, e.g. Internet, Frame Relay
- H04L12/4608—LAN interconnection over ATM networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L12/5602—Bandwidth control in ATM Networks, e.g. leaky bucket
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0896—Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/11—Identifying congestion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/26—Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/26—Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
- H04L47/266—Stopping or restarting the source, e.g. X-on or X-off
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/29—Flow control; Congestion control using a combination of thresholds
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/30—Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/621—Individual queue per connection or flow, e.g. per VC
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/10—Packet switching elements characterised by the switching fabric construction
- H04L49/104—Asynchronous transfer mode [ATM] switching fabrics
- H04L49/105—ATM switching elements
- H04L49/106—ATM switching elements using space switching, e.g. crossbar or matrix
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/10—Packet switching elements characterised by the switching fabric construction
- H04L49/104—Asynchronous transfer mode [ATM] switching fabrics
- H04L49/105—ATM switching elements
- H04L49/107—ATM switching elements using shared medium
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/15—Interconnection of switching modules
- H04L49/1515—Non-blocking multistage, e.g. Clos
- H04L49/153—ATM switching fabrics having parallel switch planes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/15—Interconnection of switching modules
- H04L49/1553—Interconnection of ATM switching modules, e.g. ATM switching fabrics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/15—Interconnection of switching modules
- H04L49/1553—Interconnection of ATM switching modules, e.g. ATM switching fabrics
- H04L49/1576—Crossbar or matrix
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/20—Support for services
- H04L49/201—Multicast operation; Broadcast operation
- H04L49/203—ATM switching fabrics with multicast or broadcast capabilities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/25—Routing or path finding in a switch fabric
- H04L49/253—Routing or path finding in a switch fabric using establishment or release of connections between ports
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/25—Routing or path finding in a switch fabric
- H04L49/253—Routing or path finding in a switch fabric using establishment or release of connections between ports
- H04L49/255—Control mechanisms for ATM switching fabrics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/25—Routing or path finding in a switch fabric
- H04L49/256—Routing or path finding in ATM switching fabrics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/30—Peripheral units, e.g. input or output ports
- H04L49/3081—ATM peripheral units, e.g. policing, insertion or extraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/30—Peripheral units, e.g. input or output ports
- H04L49/3081—ATM peripheral units, e.g. policing, insertion or extraction
- H04L49/309—Header conversion, routing tables or routing tags
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/45—Arrangements for providing or supporting expansion
- H04L49/455—Provisions for supporting expansion in ATM switches
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/55—Prevention, detection or correction of errors
- H04L49/552—Prevention, detection or correction of errors by ensuring the integrity of packets received through redundant connections
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/55—Prevention, detection or correction of errors
- H04L49/555—Error detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/30—Definitions, standards or architectural aspects of layered protocol stacks
- H04L69/32—Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
- H04L69/322—Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions
- H04L69/324—Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the data link layer [OSI layer 2], e.g. HDLC
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q11/00—Selecting arrangements for multiplex systems
- H04Q11/04—Selecting arrangements for multiplex systems for time-division multiplexing
- H04Q11/0428—Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
- H04Q11/0478—Provisions for broadband connections
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W28/00—Network traffic management; Network resource management
- H04W28/02—Traffic management, e.g. flow control or congestion control
- H04W28/10—Flow control between communication endpoints
- H04W28/14—Flow control between communication endpoints using intermediate storage
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J3/00—Time-division multiplex systems
- H04J3/02—Details
- H04J3/06—Synchronising arrangements
- H04J3/0635—Clock or time synchronisation in a network
- H04J3/0682—Clock or time synchronisation in a network by delay compensation, e.g. by compensation of propagation delay or variations thereof, by ranging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J3/00—Time-division multiplex systems
- H04J3/02—Details
- H04J3/06—Synchronising arrangements
- H04J3/0635—Clock or time synchronisation in a network
- H04J3/0685—Clock or time synchronisation in a node; Intranode synchronisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5614—User Network Interface
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5614—User Network Interface
- H04L2012/5616—Terminal equipment, e.g. codecs, synch.
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5625—Operations, administration and maintenance [OAM]
- H04L2012/5627—Fault tolerance and recovery
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5628—Testing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5629—Admission control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5629—Admission control
- H04L2012/5631—Resource management and allocation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5629—Admission control
- H04L2012/5631—Resource management and allocation
- H04L2012/5632—Bandwidth allocation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5629—Admission control
- H04L2012/5631—Resource management and allocation
- H04L2012/5632—Bandwidth allocation
- H04L2012/5634—In-call negotiation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5629—Admission control
- H04L2012/5631—Resource management and allocation
- H04L2012/5632—Bandwidth allocation
- H04L2012/5635—Backpressure, e.g. for ABR
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5638—Services, e.g. multimedia, GOS, QOS
- H04L2012/564—Connection-oriented
- H04L2012/5642—Multicast/broadcast/point-multipoint, e.g. VOD
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5638—Services, e.g. multimedia, GOS, QOS
- H04L2012/564—Connection-oriented
- H04L2012/5643—Concast/multipoint-to-point
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5638—Services, e.g. multimedia, GOS, QOS
- H04L2012/5646—Cell characteristics, e.g. loss, delay, jitter, sequence integrity
- H04L2012/5647—Cell loss
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5638—Services, e.g. multimedia, GOS, QOS
- H04L2012/5646—Cell characteristics, e.g. loss, delay, jitter, sequence integrity
- H04L2012/5647—Cell loss
- H04L2012/5648—Packet discarding, e.g. EPD, PTD
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5638—Services, e.g. multimedia, GOS, QOS
- H04L2012/5646—Cell characteristics, e.g. loss, delay, jitter, sequence integrity
- H04L2012/5649—Cell delay or jitter
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5638—Services, e.g. multimedia, GOS, QOS
- H04L2012/5646—Cell characteristics, e.g. loss, delay, jitter, sequence integrity
- H04L2012/5651—Priority, marking, classes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5638—Services, e.g. multimedia, GOS, QOS
- H04L2012/5646—Cell characteristics, e.g. loss, delay, jitter, sequence integrity
- H04L2012/5652—Cell construction, e.g. including header, packetisation, depacketisation, assembly, reassembly
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5672—Multiplexing, e.g. coding, scrambling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5678—Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
- H04L2012/5679—Arbitration or scheduling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5678—Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
- H04L2012/5681—Buffer or queue management
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5678—Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
- H04L2012/5681—Buffer or queue management
- H04L2012/5682—Threshold; Watermark
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5678—Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
- H04L2012/5681—Buffer or queue management
- H04L2012/5683—Buffer or queue management for avoiding head of line blocking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5685—Addressing issues
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L7/00—Arrangements for synchronising receiver with transmitter
- H04L7/04—Speed or phase control by synchronisation signals
- H04L7/041—Speed or phase control by synchronisation signals using special codes as synchronising signal
- H04L7/046—Speed or phase control by synchronisation signals using special codes as synchronising signal using a dotting sequence
Definitions
- the invention generally relates to the field of telecommunications networks, and specifically to bandwidth allocation and delay management in an asynchronous transfer mode switch.
- Telecommunications networks such as asynchronous transfer mode (“ATM") networks are used for transfer of audio, video and other data.
- ATM networks deliver data by routing data units such as ATM cells from source to destination through switches.
- Switches include input/output ("I/O") ports through which ATM cells are received and transmitted. The appropriate output port for transmission of the cell is determined based on the cell header.
- Such traffic types include the constant bit rate (“CBR”) service class, the variable bit rate (“VBR”) service class, the available bit rate (“ABR”) service class, and the unspecified bit rate (“UBR”) service class.
- Telecommunications network applications such as teleconferencing require deterministic delay bounds, and are typically assigned to the CBR service class.
- Transaction processing applications such as automated teller machines require a "tightly bounded" delay specification to provide acceptable response times.
- Such applications typically are assigned to the VBR service class.
- File transfer applications such as internetwork traffic merely require a "bounded" delay, and thus typically employ the ABR service classes.
- the UBR service class normally provides no delay bound.
- Bandwidth is another consideration in establishing an acceptable switch configuration. Video applications typically have a predictable bandwidth requirement, while file transfer applications are much more aperiodic, or "bursty. "
- Low-delay and complete line utilization are opposing goals when multiplexing asynchronous sources.
- High utilization is achieved by having a set of connections share bandwidth that is unused by connections that need very low delay. This shared bandwidth is known as dynamic bandwidth because it is distributed to connections based on instantaneous operating conditions.
- VBR, ABR and UBR utilize dynamic bandwidth to achieve high line utilization.
- a network switch capable of adaptively accommodating network traffic having such dissimilar delay and bandwidth requirements, and thus providing low-cost, highly efficient integrated services, is required.
- Integrated services is the accommodation of various traffic types, wherein each of the traffic types is characterized by delay bounds and by guaranteed bandwidth, and wherein each of the traffic types receives allocated bandwidth, dynamic bandwidth, or a combination of both.
- The presently disclosed invention is an ATM network switch and method capable of adaptively providing highly efficient, and thus low cost, integrated services therein. In providing such integrated services, if the input rate for a connection is greater than its allocated bandwidth, the connection can optionally use dynamic bandwidth.
- the switch includes at least one input port, at least one output port, and input and output buffers associated with the respective input and output ports.
- Cells enter the switch through the input port and are buffered in the input buffers.
- the cells are then transmitted from the input buffers to the output buffers, under the control of respective port processors and a Bandwidth Arbiter ("BA”) , and then transmitted to the appropriate output port.
- each queue includes multiple buffers, and each switch includes multiple input queues and multiple output queues.
- Upon entering the switch, each cell is loaded into an input cell buffer belonging to a particular input queue for eventual transmission to an output cell buffer belonging to a particular output queue.
- Per-VC queuing enables connection-level flow control, since cells are grouped according to the input and output port pair they traverse.
- Individual queues are then assigned to traffic type groups in order to facilitate traffic type flow control. For example, each queue is dedicated to a particular traffic type (sometimes referred to as a service class) such as the variable bit rate (“VBR”) service class and the available bit rate (“ABR”) service class as described above.
- In addition to the differentiation of cell traffic into the service categories described above, further levels of priority are introduced within each category because different applications within a category may have different sensitivity to delay. For example, a file transfer performed by a back-up application can tolerate longer delays than a file transfer of a medical image to an awaiting physician.
- Flow control can also be implemented on these traffic sub-types, with each queue being assigned to a particular connection, thereby providing flow control on a per-connection basis as well as on a per-service category basis.
- the presently disclosed network switch provides integrated services by transferring input cells to output buffers using bandwidth assigned specifically to such connections (“allocated bandwidth”) , by transferring input cells to output buffers using bandwidth which is instantaneously unallocated by connections requiring allocated bandwidth (“dynamic bandwidth”) , and by transferring input cells to output buffers utilizing a mix of both allocated and dynamic bandwidth.
- Bandwidth arbitration, or the matching of available receivers to transmitters needing to transmit cells to that set of receivers, begins with a determination of what bandwidth is available.
- A To Switch Port Processor ("TSPP") is responsible for receiving a cell from a unidirectional transmission path known as a "link," for analyzing cell header information to identify a connection with which the cell is associated, and for buffering the cell in accordance with the service class and subclass priority associated with the respective connection. Further, the TSPP is responsible for transferring the cell from the buffer to one or more From Switch Port Processors ("FSPPs") using the associated switch fabric.
- the bandwidth employed for such transfer can be either allocated or dynamic, or both, as previously characterized.
- The TSPP employs a time-slotted frame concept through the use of a Switch Allocation Table ("SAT"), described below.
- the TSPP also uses two data structures in managing different resources, a queue and a list.
- a queue is used to manage buffers, and consists of a group of one or more buffered cells organized as a FIFO and manipulated as a linked list using pointers.
- Incoming cells are added (enqueued) to the tail of the queue.
- Cells which are sent to the switch fabric are removed (dequeued) from the head of the queue. Cell ordering is always maintained. For a given connection, the sequence of cells that is sent to the switch fabric is identical to that in which they arrived although the time intervals between each departing cell may be different from the inter-cell arrival times.
- Valid SAT entries provide a pointer to a "scheduling list," in which is maintained a list of queues which may have cells intended for transfer to a particular output port.
- a scheduling list consists of one or more queue numbers organized as a circular list. As with queues, lists are manipulated as a linked-list structure using pointers. Queue numbers are added to the tail of a list and removed from the head of the list. A queue number can appear only once on any given scheduling list. In addition to being added and removed, queue numbers are recirculated on a list by removing from the head and then adding the removed queue number back onto the tail. This results in round-robin servicing of the queues on a particular list.
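The circular-list behaviour described in the preceding paragraph can be sketched in a few lines of C. The fragment below is an illustrative model only, not the patent's implementation; all names are assumptions, and an array of "next" entries stands in for the linked-list pointers. Queue numbers join a scheduling list at the tail, and servicing the list removes the head and recirculates it to the tail, which yields the round-robin order described above.

```c
#include <stdio.h>

#define MAX_QUEUES 16
static int next_qnum[MAX_QUEUES];            /* linked-list "next" per queue number */

typedef struct { int head, tail, len; } sched_list_t;

static void slist_add(sched_list_t *sl, int qn) {    /* queue numbers join at the tail */
    if (sl->len == 0) sl->head = qn;
    else next_qnum[sl->tail] = qn;
    sl->tail = qn;
    sl->len++;
}

static int slist_service(sched_list_t *sl) {          /* remove head, recirculate to tail */
    int qn = sl->head;
    if (sl->len > 1) {
        sl->head = next_qnum[qn];
        next_qnum[sl->tail] = qn;
        sl->tail = qn;
    }
    return qn;                                        /* this queue supplies one cell */
}

int main(void) {
    sched_list_t sl = { 0, 0, 0 };
    slist_add(&sl, 4); slist_add(&sl, 7); slist_add(&sl, 2);
    for (int i = 0; i < 6; i++)                       /* round-robin: 4 7 2 4 7 2 */
        printf("cell time %d: serve queue %d\n", i, slist_service(&sl));
    return 0;
}
```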
- Allocated time slots which cannot be used at a given instant in time or valid SAT entries where there is no cell to send for that connection cause the TSPP to notify the BA that it can use that time slot as a dynamic bandwidth cell time for any of the TSPPs associated with the switch. In this way, service classes requiring either or both of allocated and dynamic bandwidth are accommodated.
- Cells received through the switch fabric are received by the FSPP associated with the appropriate output port. Based upon prioritization information associated with the cell at the TSPP, the cells are prioritized and transmitted, with each cell maintained in the same order, relative to other cells on a connection, in which it was received.
- Fig. 1 is a block diagram of a switch according to the present invention.
- Fig. 2 is a block diagram illustrating point-to-point and point-to-multipoint operation in the switch of Fig. 1;
- Fig. 3 illustrates Switch Allocation Tables according to the present invention
- Fig. 4 illustrates a scheduling list and associated queues according to the present invention
- Fig. 5 illustrates a linked-list structure for multipoint-to-point and point-to-point transfer arbitration according to the present invention
- Fig. 6 illustrates the use of priority lists in the present invention
- Fig. 7 illustrates the relationship between a dynamic bandwidth threshold, allocated bandwidth, and dynamic bandwidth in the present invention
- Fig. 8 illustrates the distribution of unallocated output ports for dynamic bandwidth utilization in the present invention
- Fig. 9 is an exemplary queue as used in the present invention.
- Fig. 10 illustrates the placement of queues on preferred and/or dynamic lists within the FSPP in the present invention.
- Fig. 11 illustrates preferred and dynamic lists in an FSPP according to the present invention.
- The switch 10 includes a plurality of input ports 20, a plurality of output ports 22 and an NxN switch fabric 11, such as a cross point switch, coupled between the input ports 20 and output ports 22.
- Each input port 20 includes a To Switch Port Processor (“TSPP”) ASIC 14 and each output port 22 includes a From Switch Port Processor (“FSPP”) ASIC 16.
- each MTC supports up to four TSPPs 14 or FSPPs 16.
- the switch fabric 11 includes a data crossbar 13 for data cell transport and the bandwidth arbiter 12 and MTCs 18 for control signal transport.
- the Bandwidth Arbiter (“BA") ASIC 12 controls, inter alia, transport of data cells from a TSPP 14 to one or more FSPPs 16 through the data crossbar 13 (i.e., switch port scheduling), including the dynamic scheduling of momentarily unassigned bandwidth (as further described below) .
- Each FSPP 16 receives cells from the data crossbar 13 and schedules transmission of those cells onto network links 30 (i.e., link scheduling).
- Each of the input ports 20 and output ports 22 includes a plurality of input buffers 26 and output buffers 28, respectively (Fig. 2) .
- the buffers 26, 28 are organized into a plurality of input queues 32a-m (referred to herein generally as input queues 32) and a plurality of output queues 34a-m (referred to herein generally as output queues 34) , respectively.
- each input port 20 includes a plurality of input queues 32 and each output port includes a plurality of output queues 34, as shown.
- The input queues 32 are stored in a Control RAM 41 and a Pointer RAM 50 of the input port 20, and the output queues 34 are stored in a CR1 RAM 61 and a CR2 RAM 63 of the output port 22.
- the actual cell buffering occurs in Cell Buffer RAM 17, with the queues 32 having pointers to this buffer RAM 17.
- A data cell 24 enters the switch through an input port 20 and is enqueued on an input queue 32 at the respective TSPP 14. The cell is then transmitted from the input queue 32 to one or more output queues 34 via the data crossbar 13.
- Control signals are transmitted from a TSPP 14 to one or more FSPPs 16 via the respective MTC 18 and the bandwidth arbiter 12.
- data and control signals may be transmitted from an input queue 32 to a particular one of the output queues 34, in the case of a point-to-point connection 40.
- data and control signals may be transmitted from an input queue 32 to a selected set of output queues 34, in the case of a point-to-multipoint connection 42.
- the data cell 24 is transmitted outside of the switch 10, for example, to another switch 21 via a network 30.
- The bandwidth arbiter 12 contains a crossbar controller for the probe, XOFF, and XON control crossbars described below.
- A transfer request message, or "probe" control signal, flows through the probe crossbar and is used to query whether or not sufficient space is available at a destination output queue, or queues 34, to enqueue a cell.
- the request message is considered a "forward" control signal since its direction is from a TSPP 14 to one or more FSPPs 16 (i.e., the same direction as data) .
- A two-bit control signal flows in the reverse direction (from one or more FSPPs to a TSPP) through the XOFF crossbar and responds to the request message query by indicating whether or not the destination output queue, or queues 34, are presently capable of accepting data cells and thus whether or not the transmitting TSPP can transmit cells via the data crossbar 13.
- the XOFF control signal indicates that the queried output queue(s) 34 are not presently capable of receiving data
- Another reverse control signal, which flows through the XON crossbar, notifies the transmitting TSPP once space becomes available at the destination output queue(s) 34.
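The probe/XOFF/XON exchange described in the preceding paragraphs can be summarised with a small sketch. The C fragment below does not reproduce the patent's signal encoding; it simply models, with assumed names, an output queue with a depth limit, a forward probe, a reverse XOFF answer, and a later XON notification once space frees up.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct { int depth, limit; bool waiting_sender; } out_queue_t;

/* forward "probe": can the destination output queue accept one more cell? */
static bool probe(const out_queue_t *oq) { return oq->depth < oq->limit; }

/* reverse answer over the XOFF crossbar: true means "do not send now" */
static bool xoff_response(out_queue_t *oq) {
    if (probe(oq)) return false;              /* space available: cell may be sent   */
    oq->waiting_sender = true;                /* no space: assert XOFF, remember who */
    return true;
}

/* when the FSPP frees a buffer, notify a held-off transmitter over XON */
static void dequeue_and_maybe_xon(out_queue_t *oq) {
    if (oq->depth > 0) oq->depth--;
    if (oq->waiting_sender && probe(oq)) {
        oq->waiting_sender = false;
        printf("XON: destination queue can accept cells again\n");
    }
}

int main(void) {
    out_queue_t oq = { .depth = 2, .limit = 2, .waiting_sender = false };
    printf("XOFF asserted? %s\n", xoff_response(&oq) ? "yes" : "no");  /* full: yes */
    dequeue_and_maybe_xon(&oq);               /* frees one buffer, triggers XON      */
    return 0;
}
```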
- Each output port 22 contains four memories: a Control RAM 1 (CR1 RAM) 61, a Control RAM 2 (CR2 RAM) 63, a Cell Buffer RAM 19, and a Quantum Flow Control RAM (QFC RAM) .
- the Cell Buffer RAM 19 is where the actual cells are buffered while they await transmission.
- The CR1 RAM 61 and the CR2 RAM 63 contain the output queues 34, with each queue 34 containing pointers to cells in the Cell Buffer RAM 19.
- The CR1 RAM 61 contains information required to implement scheduling lists used to schedule link access by the output queues 34 associated with each link 30 supported by the FSPP 16.
- the QFC RAM 67 stores update information for transfer to another switch 29 via a network link 30.
- Update cells are generated in response to the update information provided by a TSPP 14 and specify whether the particular TSPP 14 is presently capable of accepting data cells.
- the buffers 26, 28 are organized into queues 32, 34 respectively and flow control is implemented on a per queue basis.
- Each queue includes multiple buffers, and each switch includes multiple input queues 32 and multiple output queues 34.
- Each cell 24 is loaded into a particular input queue 32 for eventual transmission to a particular output queue 34.
- Connection-level flow control is thereby facilitated.
- queues 32a, 34a could be dedicated to a particular connection.
- nested queues of queues may be employed to provide per subclass flow control.
- each input port includes a TSPP 14, and each output port includes an FSPP 16.
- The TSPPs and FSPPs each include cell buffer RAM which is organized into queues 32, 34, respectively. All cells in a connection 40 pass through a single queue at each port, one at the TSPP and one at the FSPP, for the life of the connection.
- The queues preserve cell ordering. This strategy also allows quality of service ("QoS") guarantees on a per connection basis. In the multipoint-to-point case, two or more queues are established to service the multiple sources.
- the first action performed by the TSPP is to check the cell header for errors and then to check that the cell is associated with a valid connection.
- the VPI/VCI fields specified in each cell header are employed as an index into a translation table known as the VXT which is stored in the Control RAM 41.
- the TSPP first checks to see if this connection is one previously set up by the control software. If recognized, the cell will then be assigned a queue number associated with the connection. At the same time, the cell is converted into an internal cell format by the TSPP.
- the queue number is associated with a queue descriptor which is a table of state information that is unique to that source.
- After a cell is assigned a queue number from the VXT, the TSPP looks at the corresponding queue descriptor for further information on how to process the cell. The next operation is to try to assign a buffer for the cell. If available, the cell buffer number is enqueued to the tail of its respective queue and the cell is written out to external cell buffer RAM 32.
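The acceptance path just described (header check, VXT lookup, buffer assignment, enqueue at the tail) can be sketched as follows. This C fragment is illustrative only; the VXT indexing, table size, and helper functions are assumptions shown as stubs, not the patent's actual Control RAM layout.

```c
#include <stdbool.h>

typedef struct { unsigned vpi, vci; bool header_ok; } cell_t;
typedef struct { bool valid; int queue_number; } vxt_entry_t;

#define VXT_SIZE 4096
static vxt_entry_t vxt[VXT_SIZE];                 /* translation table in Control RAM */

/* stubs standing in for mechanisms described elsewhere in this text */
static int  allocate_cell_buffer(void) { return 7; }
static void enqueue_on_tail(int queue, int buffer) { (void)queue; (void)buffer; }

/* returns the queue number the cell was enqueued on, or -1 if the cell is discarded */
int tspp_accept_cell(const cell_t *c) {
    if (!c->header_ok) return -1;                           /* header error            */
    vxt_entry_t e = vxt[(c->vpi ^ c->vci) % VXT_SIZE];      /* simplified VXT indexing */
    if (!e.valid) return -1;                                /* unknown connection      */
    int buf = allocate_cell_buffer();
    if (buf < 0) return -1;                                 /* no buffer available     */
    enqueue_on_tail(e.queue_number, buf);                   /* per-connection FIFO     */
    return e.queue_number;
}
```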
- In addition to processing and buffering incoming cell streams, the TSPP must transfer the cells from the cell buffer to a group of one or more FSPPs using the switch fabric 11.
- the bandwidth used for such transfer can either be preassigned (i.e., allocated bandwidth) or dynamically assigned (i.e., dynamic bandwidth).
- the allocated bandwidth is assigned by Call Acceptance Control (CAC) software.
- the assignment of dynamic bandwidth depends on the instantaneous utilization of the switch resources, and is controlled by the Bandwidth Arbiter 12.
- each TSPP has a data structure called a Switch Allocation Table ("SAT") 23 which is used to manage the allocated bandwidth.
- All TSPPs in the switch are synchronized such that they are all pointing, using a SAT pointer 25, to the same offset in the SAT at any given cell time.
- Each slot in the SAT is active for 32 clock cycles at 50 MHz, providing approximately 64 Kbps of cell payload bandwidth.
- The pointers scan the SATs approximately every 6 msec, thereby providing a maximum delay for a transmission opportunity of approximately 6 msec.
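As a rough consistency check (assuming the standard 48-byte ATM cell payload, which is not restated here): one SAT slot supplies one cell per frame, so a frame period of roughly 6 msec yields 48 x 8 bits / 6 msec ≈ 64 Kbps of payload bandwidth per slot, matching the figure above; a 32-cycle slot at 50 MHz lasts 640 nsec, so a 6 msec frame corresponds to roughly 9,375 such slots.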
- the CAC software is responsible for assigning allocated bandwidth from TSPPs to FSPPs in a conflict-free manner.
- The TSPP looks at the SAT entry for that cell time.
- A SAT entry is either not valid or points to a list of queues in TSPP Control RAM 41 called a scheduling list 27 (see Fig. 4). Queue descriptors for each of the queues are also stored in the Control RAM 41. If the SAT entry is invalid, that cell time is made available to the Bandwidth Arbiter for use in assigning dynamic bandwidth, as described below. Allocated cell time given up by a particular TSPP may be used as a dynamic bandwidth cell time; it may be used by the TSPP that gave up the slot or it may be given to a different TSPP for use.
- the decision of which TSPP gets a given dynamic cell time is made by the Bandwidth Arbiter. If the SAT entry contains a valid scheduling list number, as illustrated in Fig. 3 as SLIST 4 27, the TSPP will use the first queue on the referenced scheduling list as the source of the cell to be transferred during that cell time. This is accomplished by the scheduling list containing a "head" pointer 29 and a "tail” pointer 31, as shown in Fig. 4.
- the head pointer 29 is a pointer to a first queue 33 having a cell to be transmitted to a particular output port.
- the tail pointer 31 is a pointer to a last queue 35 having a cell to be transmitted to the same output port.
- Each queue associated with this list has a "next" pointer, labelled "N" in Fig. 4, which points to the next queue in a sequence of queues.
- Each queue is a linked list wherein the queue descriptor has a head pointer pointing to the first cell buffered in this queue, and a tail pointer pointing to the last cell buffered in this queue.
- Each buffered cell has a next pointer pointing to the next cell in the queue.
- The SAT for TSPP0 presently indicates that a cell time is available to scheduling list 4 27 (SLIST 4).
- the head pointer 29 of this scheduling list is pointing to queue 4 33, which has four cells ready to be transmitted to the respective output port.
- Queue 4 33 now becomes the last of the three queues associated with SLIST 4 to be selected when the list is next serviced.
- the head pointer of SLIST 4 is modified to point to queue 7
- the tail pointer is modified to point to queue 4
- The header data of queue 2 is modified to point to queue 4. If queue 4 does not have another cell to be transmitted, the queue is dequeued, queue 7 is the next queue, and queue 2 is the last queue.
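The per-cell-time use of the SAT can be sketched as follows. This C fragment is illustrative only; the SAT depth, field names, and helper functions are assumptions (shown as stubs), not the patent's RAM layout. Every cell time the TSPP reads the entry at the shared SAT offset, gives the slot to the Bandwidth Arbiter if the entry is invalid or the referenced scheduling list has nothing to send, and otherwise sends a cell from the head queue of that scheduling list.

```c
#include <stdbool.h>

#define SAT_SLOTS 9375                 /* one frame of ~6 msec, one slot per cell time */

typedef struct { bool valid; int sched_list; } sat_entry_t;

typedef struct {
    sat_entry_t sat[SAT_SLOTS];
    int sat_ptr;                       /* kept in lock-step across all TSPPs */
} tspp_t;

/* stubs standing in for the structures described elsewhere in this text */
static int  scheduling_list_head_queue(int sl)    { return sl; }
static bool queue_has_cell(int q)                 { (void)q; return true; }
static void send_cell_from_queue(int q)           { (void)q; }
static void offer_slot_to_bandwidth_arbiter(void) { }

static void tspp_cell_time(tspp_t *t) {
    sat_entry_t e = t->sat[t->sat_ptr];
    t->sat_ptr = (t->sat_ptr + 1) % SAT_SLOTS;          /* advance the SAT pointer   */

    if (!e.valid) {                                     /* unallocated slot          */
        offer_slot_to_bandwidth_arbiter();
        return;
    }
    int q = scheduling_list_head_queue(e.sched_list);   /* round-robin head queue    */
    if (queue_has_cell(q))
        send_cell_from_queue(q);                        /* allocated bandwidth used  */
    else
        offer_slot_to_bandwidth_arbiter();              /* allocated but unused slot */
}

int main(void) {
    static tspp_t t;                   /* zero-initialised: all SAT entries invalid */
    for (int i = 0; i < 4; i++) tspp_cell_time(&t);
    return 0;
}
```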
- Each entry represents a dynamic bandwidth list for a particular port and priority (discussed below) and has a head pointer-tail pointer pair; the entry illustrated points to scheduling lists for port 0, priority 3.
- The "Dynamic Bandwidth Lists" structure is composed of entries which are themselves lists; in other words, it is a list of lists.
- The head pointer for Port 0, priority 3 points to scheduling list 12 (SLIST 12).
- SLIST 12 is the first of plural scheduling lists in the linked-list data structure called the dynamic bandwidth list for the port and priority.
- The tail pointer for Port 0, priority 3 points to the last entry in this linked-list structure, SLIST 5.
- Each scheduling list in the structure has a pointer to the next scheduling list in the same structure.
- Each of SLISTs 12, 2 and 5 also has a head pointer-tail pointer pair pointing to at least one queue having a linked-list data structure.
- The head pointer of SLIST 12 points to Queue 3 (labelled Q3)
- The tail pointer of SLIST 12 points to the last queue in that queue-level linked list, Queue 11 (labelled Q11).
- The head and tail pointers of SLIST 2 point to a single queue, Queue 8 (Q8)
- The head and tail pointers of SLIST 5 point to Queues 2 and 6, respectively.
- A head pointer for Q3 points to the first buffered cell in the queue, labelled C1, having a pointer to the buffered cell data ("C") and a pointer ("N") to the next cell in the queue.
- For point-to-point transmission, there is a one-to-one correspondence between scheduling list and queue. This is illustrated in Fig. 5 with SLIST 2 and Queue 8. For multipoint-to-point, there can be plural queues per scheduling list. Such is the case with SLIST 12 and Queues 3 and 11, and with SLIST 5 and Queues 2 and 6.
- By implementing this overall "list of lists" structure in the presently disclosed ATM switch, multiple levels of control are provided. For instance, the first time an event occurs which enables one cell to be transmitted to Port 0, priority 3, the first cell in the first queue associated with scheduling list 12 will be selected. This is cell C1 of Queue 3.
- The pointers of the "Dynamic Bandwidth Lists" list and SLISTs 12 and 5 are adjusted such that SLIST 2 is the next scheduling list from which a cell is provided if dynamic bandwidth becomes available for transmission of a cell to output Port 0, priority 3.
- SLIST 5 would be second, and SLIST 12 would then be last.
- Queue 3, having just provided a cell, becomes the last queue to be eligible to provide a cell vis-a-vis SLIST 12, with Queue 11 being the next. This occurs through the manipulation of pointers in SLIST 12 and Queues 3 and 11.
- Round-robin selection is thus enabled between the scheduling lists and the queues, with even bandwidth distribution being provided at each level.
- Other scheduling policies can be implemented if other bandwidth distributions are desired.
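The two-level selection described for Fig. 5 can be illustrated with a compact sketch. The following C fragment is a simplification (arrays stand in for the linked lists, and the names are assumptions): the dynamic bandwidth list rotates over its scheduling lists, and each scheduling list rotates over its queues, reproducing the SLIST 12, 2, 5 and Queue 3, 11 round-robin order described above.

```c
#include <stdio.h>

typedef struct { int *items; int len, head; } ring_t;   /* simple circular list */

static int ring_take(ring_t *r) {             /* return current head, then rotate */
    int v = r->items[r->head];
    r->head = (r->head + 1) % r->len;
    return v;
}

int main(void) {
    int slists[] = { 12, 2, 5 };              /* dynamic bandwidth list for one    */
    ring_t dbl = { slists, 3, 0 };            /* port/priority: SLIST 12, 2, 5     */

    int q12[] = { 3, 11 }, q2[] = { 8 }, q5[] = { 2, 6 };
    ring_t queues_of[13];                     /* indexed by scheduling list number */
    queues_of[12] = (ring_t){ q12, 2, 0 };
    queues_of[2]  = (ring_t){ q2,  1, 0 };
    queues_of[5]  = (ring_t){ q5,  2, 0 };

    for (int i = 0; i < 6; i++) {             /* six dynamic bandwidth grants      */
        int sl = ring_take(&dbl);             /* next scheduling list, round-robin */
        int q  = ring_take(&queues_of[sl]);   /* next queue on that list           */
        printf("grant %d: SLIST %d -> queue %d\n", i, sl, q);
    }
    return 0;
}
```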
- the list of lists approach is applied to the allocation of dynamic bandwidth in the form of Dynamic Bandwidth Lists internal to the TSPP ASIC.
- 260 dynamic bandwidth lists are employed in the TSPP ASIC in a preferred embodiment.
- the first 256 of these lists are used for point- to-point ("P2P") and multipoint-to-point (“M2P”) connections.
- Four lists are assigned to each one of the switch output ports.
- Four other lists are used for point-to-multipoint ("P2M") connections. This is shown at an upper level in Fig. 6, where for each TSPP there is a list of lists structure similar to that of Fig. 5.
- When a queue has no more cells to send, the queue is dropped, or dequeued, from the linked list of queues. Further, when all queues for a particular scheduling list have been dequeued, the scheduling list is removed from the linked list of lists. If all scheduling lists for a particular entry in the linked list are removed, the pointers in the Dynamic Bandwidth List are given null values.
- Another example of the application of the list of lists structure to the present ATM switch is described below with respect to Output Link Scheduling.
- The BA utilizes this priority information to affect the order in which it grants dynamic bandwidth to the TSPPs.
- This prioritization is employed in assigning scheduling lists to one of the four dynamic bandwidth lists.
- cells from the VBR and ABR service categories are subject to being assigned to any of the four priorities, and UBR cells are subject only to being assigned to the lowest priority dynamic bandwidth list.
- Each queue for each connection has a dynamic bandwidth threshold 37 associated therewith, as shown in Fig. 4. If a queue buffer depth exceeds the cell depth indicated by the respective dynamic bandwidth threshold 37, the scheduling list for that queue will be added to the appropriate dynamic bandwidth list corresponding to the appropriate output port and priority. For each output port, the dynamic bandwidth list provides an indication of which if any cells are to be transmitted to the respective output port using dynamic bandwidth.
- a dynamic bandwidth threshold for a queue of CBR cells, or cells requiring a dedicated bandwidth would be established such that the requested bandwidth (labelled "A" in Fig. 7) meets or exceeds the requirement.
- a dynamic bandwidth threshold such as that labelled "B" in Fig. 7 may be suitable, wherein the majority of the traffic is handled by allocated bandwidth, with momentary bursts handled by high-priority dynamic bandwidth. In either case, bandwidth specifically allocated but unused is made available to the BA by the TSPP for dynamic bandwidth allocation.
- the dynamic bandwidth threshold is set above any expected peaks in cell reception. Conversely, for categories of service having no (or low) delay bounds and no guaranteed bandwidth, such as UBR, the dynamic bandwidth threshold is set to zero.
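A minimal sketch of the threshold test, under assumed field names, may make the mechanism concrete: whenever a queue's depth crosses its dynamic bandwidth threshold in either direction, its scheduling list is added to or removed from the dynamic bandwidth list for the corresponding output port and priority.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  depth;                    /* cells currently buffered on the queue           */
    int  dyn_threshold;            /* e.g. 0 for UBR, above any expected peak for CBR */
    bool on_dynamic_list;
    int  out_port, priority;
} queue_state_t;

static void add_to_dynamic_list(int port, int prio) {
    printf("scheduling list added to dynamic bandwidth list, port %d priority %d\n", port, prio);
}
static void remove_from_dynamic_list(int port, int prio) {
    printf("scheduling list removed from dynamic bandwidth list, port %d priority %d\n", port, prio);
}

/* call after every enqueue to or dequeue from the queue */
static void update_dynamic_eligibility(queue_state_t *q) {
    bool excess = q->depth > q->dyn_threshold;           /* cells beyond the threshold */
    if (excess && !q->on_dynamic_list) {
        add_to_dynamic_list(q->out_port, q->priority);
        q->on_dynamic_list = true;
    } else if (!excess && q->on_dynamic_list) {
        remove_from_dynamic_list(q->out_port, q->priority);
        q->on_dynamic_list = false;
    }
}

int main(void) {
    queue_state_t q = { .depth = 0, .dyn_threshold = 2, .on_dynamic_list = false,
                        .out_port = 0, .priority = 3 };
    for (q.depth = 0; q.depth <= 3; q.depth++) update_dynamic_eligibility(&q);  /* crosses at 3 */
    q.depth = 1; update_dynamic_eligibility(&q);                                /* back below   */
    return 0;
}
```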
- each queue is also a linked list, wherein the queue descriptor, resident in the control RAM 41, has a head pointer pointing to the first buffer belonging to the queue and containing a cell, and a tail pointer pointing to the last buffer belonging to the queue and containing a cell.
- the input cells are buffered in the cell buffer RAM 32.
- The linked list that forms the queue is just a chain of pointers.
- The contents of one pointer point to the next pointer, and so on.
- the pointer number is both the logical address of the pointer as well as the logical address of the cell buffer (i.e., the cell buffer number) .
- the majority of the pointers are stored within Pointer RAM 50 along with the SAT.
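The arrangement in which the pointer number doubles as the cell buffer number can be modelled directly as an array indexed by cell number. The C sketch below is illustrative (names are assumptions): the Pointer RAM is a single array whose entry for a cell holds the next cell number in the same queue, so a queue is nothing more than a head/tail pair threading through that array.

```c
#include <stdio.h>

#define NBUFFERS 16
static int pointer_ram[NBUFFERS];   /* pointer_ram[c] = next cell number after cell c */

typedef struct { int head, tail, count; } queue_t;

static void enqueue(queue_t *q, int cell_number) {
    pointer_ram[cell_number] = -1;                /* new tail points at nothing       */
    if (q->count == 0) q->head = cell_number;
    else pointer_ram[q->tail] = cell_number;      /* old tail now points at new cell  */
    q->tail = cell_number;
    q->count++;
}

static int dequeue(queue_t *q) {
    int c = q->head;
    q->head = pointer_ram[c];                     /* follow the chain of pointers     */
    q->count--;
    return c;                                     /* c is also the cell buffer number */
}

int main(void) {
    queue_t q = { 0, 0, 0 };
    enqueue(&q, 5); enqueue(&q, 9); enqueue(&q, 2);      /* cells buffered at 5, 9, 2 */
    while (q.count) printf("transmit from cell buffer %d\n", dequeue(&q));  /* 5 9 2  */
    return 0;
}
```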
- Cell ordering is maintained, since cells are removed from the queue in a first-in-first-out (FIFO) fashion, no matter whether allocated or dynamic bandwidth is used. This is despite the fact that a scheduling list can be granted transmission opportunities by either the SAT or by a dynamic bandwidth list.
- all of the queues in each dynamic bandwidth list share, in round-robin fashion, the available dynamic bandwidth for that port.
- As an example, queue 4 from Fig. 4 is added to one of the scheduling lists on the dynamic bandwidth list of Fig. 5.
- pointers of the dynamic bandwidth list, the respective scheduling list, and any other queues on the scheduling list are adjusted to place a queue on this list; no physical relocation of the queue is involved.
- Assume that no cells are added to queue 4 as illustrated in Fig. 4 and that no cells are removed from the queue as a result of allocated bandwidth being made available. If two cells are transmitted from this queue as a result of dynamic bandwidth being made available over time during this interval, the cell count in the queue would then be below the respective dynamic bandwidth threshold 37.
- the queue would then be removed from the dynamic bandwidth list by adjusting the pointers of the appropriate scheduling list and any other queues associated with that scheduling list.
- For a given cell time, the TSPP is assigned either allocated or dynamic bandwidth.
- the TSPP uses this information in deciding which connection to use in supplying a particular cell to be transferred during that cell time.
- The Bandwidth Arbiter 12 ("BA") distributes unallocated and unused-allocated switch bandwidth, the dynamic bandwidth. The distribution is based on requests and information sent by each TSPP. Each TSPP identifies to the BA output ports which will have cells sent to them for a particular cell time as a result of allocated bandwidth. In addition, each TSPP provides to the BA an indication of which output ports are requested for access via dynamic bandwidth, a product of the dynamic bandwidth lists. If a TSPP does not have an allocation on the SAT for a specific cell time, it may vie for dynamic bandwidth. Each TSPP can have several outstanding requests stored in the BA.
- Each TSPP provides its dynamic bandwidth request(s) for a port(s) to the BA via a serially-communicated request to set the bits for the requested output ports.
- Each TSPP can set or delete bits in its respective request vector, or can change priority with respect to each request - each request has a priority level stored in conjunction therewith.
- These three commands are executed via a three-bit serial command sent from the respective TSPP to the BA.
- Up to 16 ports can be requested by the TSPP.
- each TSPP can request all of the output ports in a switch having sixteen output ports. A request remains set unless it is explicitly deleted by the TSPP.
- a grant in the form of a port number is returned by the BA to the requesting TSPP.
- The BA interprets the requests and stores them in the form of a register bank, one for each priority, with a set bit indicating a requested port.
- These dynamic bandwidth requests of all vying TSPPs are fed into a Dynamic Arbitration Unit 43 of the BA, which tries to match the requests with the available (not allocated or allocated but unused) ports.
- Matched requests are communicated back to the TSPPs, which refer to their dynamic bandwidth lists (described above) in sending cells accordingly. State information is retained by the BA to implement a round-robin service scheme and to determine which was the last TSPP served.
- A TSPP is served when a Free Output Port Vector in the BA is matched to a TSPP request, whereby the requested port is granted and the request is subtracted from the Free Output Port Vector.
- the Free Output Port Vector is then applied to the next TSPP request in an attempt to match unassigned ports to requested ports. Eventually, the Free Output Port Vector will be all or almost all zeroes, and no further match between unassigned ports and requested ports can be found.
- Fig. 8 illustrates the matching process.
- TSPP 0 has provided a serial request for ports 0 and 2.
- the BA indicates that ports 0, 1, and 2 are available for dynamic bandwidth cell transfer via the Free Output Port Vector.
- TSPP 1 has requested ports 0 - 3.
- Ports 1 and 2 match with the left-over available list.
- Port 2 is granted, and the new left-over Free Output Port Vector includes port 1.
- the BA matching process continues until all available ports are granted by the BA, or no unmatched TSPP requests remain.
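The matching step of Fig. 8 can be approximated with a bit-vector sketch. The following C fragment is a simplification: it handles only point-to-point requests, ignores priorities, grants at most one port per TSPP per pass, and picks the lowest-numbered matching port (whereas the example above grants port 2 to TSPP 1). Each TSPP's request vector is ANDed with the Free Output Port Vector, and a granted port is cleared from the free vector before the next TSPP is considered.

```c
#include <stdio.h>
#include <stdint.h>

#define NTSPPS 4

/* grant at most one free, requested port to each TSPP, visiting TSPPs round-robin */
static void arbitrate(uint16_t *free_ports, const uint16_t request[NTSPPS],
                      int grant[NTSPPS], int start_tspp) {
    for (int i = 0; i < NTSPPS; i++) {
        int t = (start_tspp + i) % NTSPPS;          /* round-robin starting point     */
        uint16_t match = (uint16_t)(*free_ports & request[t]);
        grant[t] = -1;
        if (match) {
            int port = 0;
            while (!(match & (1u << port))) port++; /* lowest-numbered matching port  */
            grant[t] = port;
            *free_ports &= (uint16_t)~(1u << port); /* subtract from the free vector  */
        }
    }
}

int main(void) {
    uint16_t free_ports = 0x0007;                   /* ports 0, 1 and 2 unassigned    */
    uint16_t request[NTSPPS] = { 0x0005, 0x000F, 0x0000, 0x0002 };
    int grant[NTSPPS];
    arbitrate(&free_ports, request, grant, 0);
    for (int t = 0; t < NTSPPS; t++)
        printf("TSPP %d granted port %d\n", t, grant[t]);   /* -1 means no grant      */
    return 0;
}
```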
- Matches to P2M requests are sought prior to seeking matches to P2P requests, since it is more difficult to match all requested ports from one TSPP at once.
- Cell transfers are tagged to indicate whether they were above or below their allocated cell rate. The tagging is performed by the BA. If a cell is shipped using a SAT slot, it is tagged as scheduled. If the cell is shipped because it won bandwidth arbitration, it is tagged as not scheduled. This information is employed in FSPP processing, as described below.
- Traffic of different priority levels is supported in the presently disclosed switch through the assignment of requests to one of four priority levels by the originating TSPP.
- the BA separates these four levels into either "high” or “low” priority, and attempts to match all high-priority requests prior to attempting to match all low-priority requests.
- Prior to receiving a cell through the switch fabric, an FSPP receives control information indicative of whether the cell transfer utilizes scheduled bandwidth or dynamic bandwidth.
- the control information further indicates the queue(s) within which the cell is to be enqueued. This information allows the FSPP to determine whether it has sufficient resources such as queue space, buffer space, bandwidth, etc., to receive the cell.
- If the FSPP does not have sufficient resources to receive a cell, it indicates this by asserting an appropriate control signal.
- The absence of this signal means either that the FSPP is able to receive the cell or that the FSPP is not present.
- External to the FSPP 16 are four memories: Control RAM 1 61, Control RAM 2 63, Cell Buffer RAM 19, and QFC RAM 67.
- Control RAM 1, Control RAM 2, and Cell Buffer RAM are used to enqueue and dequeue cells.
- Control RAM 1 and Control RAM 2 contain the information required to implement the queues, dynamic lists and preferred lists (discussed below) necessary to provide the FSPP functions.
- The Cell Buffer RAM is where the actual cells are buffered while they await transmission.
- The QFC RAM primarily contains storage for flow control information received from the TSPP, and is accessed during the generation of flow control update cells.
- the cell buffer pool count register contains the current number of cell buffer locations in use for that pool.
- the cell buffer pool limit register contains the maximum number of cell buffer locations allowed for that pool.
- Cell numbers are manipulated to place cell buffer locations into queues. When a cell buffer location in the cell buffer is written with a cell, the cell number pointing to that cell buffer location is then placed on a queue. Cells are transmitted from the queues in the order in which they were received; the first received is the first transmitted. A logical representation of such a queue is illustrated in Fig. 9. Each queue is implemented as a linked list of cell numbers; each cell on the queue points to the next cell on the queue using its cell number as a pointer, as previously described. Each queue has a separate structure, known as the queue descriptor, maintained in Control RAM 2 to point to the head and tail of the queue.
- the linked list making up a queue is implemented as a set of pointers in Control RAM 1 such that each cell buffer location has one entry.
- the pointers are indexed using the cell number, with each entry containing another cell number. Thus, one cell number can point to another cell number.
- the queue descriptor also contains a count of the cells in the queue.
- a cell Once a cell is placed on a queue, that queue must then be scheduled for transmission. This is done by placing the queue number of that queue on a list.
- Lists are linked lists of queue numbers, similar to the scheduling lists of the TSPP. Each list has a ⁇ eparate ⁇ tructure, known as the list descriptor, maintained internal to the FSPP to point to the head and tail of the list.
- Two types of lists are used for scheduling the two types of traffic: preferred lists and dynamic lists.
- the queue numbers of queues having allocated traffic are placed on the preferred list.
- the queue numbers of queues having dynamic traffic are placed on the dynamic list. Queues can be found on both the preferred list and the dynamic list since each queue may have both scheduled and unscheduled cells, as shown in Fig. 10.
- the first entry in the preferred list is a pointer to queue 7, labelled Q7.
- Q7 is also pointed to by the second entry in the illustrated dynamic list.
- the preferred list will be serviced before the dynamic list.
- a queue has no cells assigned to it, it is obviously on neither the preferred list nor the dynamic list. If the queue receives one cell via dynamic bandwidth, the queue is placed on the dynamic list. If the queue receives a second cell, but this time via allocated bandwidth, the queue is also placed on the preferred list. Since servicing of preferred lists take precedence over servicing of dynamic lists, the first cell received in the queue will be chosen for transmission out of the switch via the preferred list, not the dynamic list. The queue will remain on the dynamic list after being removed from the preferred list until the remaining cell is chosen for transmission. Therefore, even though the queue was first placed on the dynamic list, then the preferred list, the first cell is dequeued via the preferred list. This is neces ⁇ ary to ensure and maintain proper cell ordering.
- the BA is responsible for tagging each cell as either shipped in an allocated SAT slot, or as shipped in an unscheduled dynamic slot. It is thi ⁇ information which is used in assigning queues to preferred and/or dynamic lists.
- Some queues have mixed service traffic with both allocated and dynamic cells. This is a result of providing integrated services whereby a particular connection may have cells to transmit beyond the respective dynamic bandwidth threshold (see discussion pertaining to Fig. 7 above) .
- Cells below the threshold are sent as allocated traffic.
- Cells above the threshold may be sent as dynamic traffic.
- Queue numbers for the allocated traffic are placed on the preferred list, and queue numbers for the dynamic traffic are placed on the dynamic list. Regardless of order of receipt between allocated and dynamic cells, cells from the queue numbers on the preferred list will be scheduled and removed first. The cells are still transmitted in order out of the FSPP, however, since the cell numbers on the queue remain in order and cell numbers are always removed from the head of the queue. Therefore, even if an individual cell at the head of an output queue was received in the FSPP as an unscheduled, dynamic cell, it will be transmitted first, even if the queue is identified next on a preferred list.
- a queue number Once a queue number has been added to a list, either a preferred list or a dynamic list, it remains on that list until the queue has no more cells of the appropriate type.
- a queue number makes it to the head of the list, that queue becomes the next queue within the list from which a cell is tran ⁇ mitted.
- the queue number is removed from the head of the list and the count of either allocated cells for a preferred list or dynamic cells for a dynamic list is decremented within the queue descriptor. If the decremented counter is non-zero, the queue number is returned to the tail of the list. Otherwi ⁇ e it is dropped from the list.
- the queues within a list receive round-robin scheduling.
- each output link is scheduled independently, so there is no interaction between the preferred lists for different links. Newly received cells in a higher priority list are transmitted before previously received cells in a lower priority preferred list.
- All the preferred lists with allocated traffic for a link are scheduled with a priority above dynamic lists with dynamic traffic for that link.
- VBR virtual resource pool
- ABR ABR
- UBR Universal Resource Block
- Each type of list is permanently assigned to each output link.
- the four priorities for each of the VBR and ABR dynamic lists are further divided among two priority levels assigned by the BA: high (bandwidth not met) ; and low (bandwidth exceeded) . These two levels enable the VBR service class to achieve a preselected percentage of dynamic bandwidth before allowing a lower priority service class, ABR, to share in the dynamic bandwidth. Once ABR has achieved its preselected percentage, the remaining dynamic bandwidth is shared among VBR, ABR and UBR.
- VBR list ⁇ has a lower latency, but may have cell loss.
- VBR dynamic list ⁇ are guaranteed a minimum bandwidth on an output link. Once the VBR dynamic lists with traffic have received their guaranteed bandwidth, the ABR dynamic list transmits if its minimum bandwidth has not been reached. When the minimum bandwidth for both VBR and ABR have both been sati ⁇ fied, the UBR, VBR and ABR dynamic lists vie in round robin fashion for the remaining bandwidth.
- ABR also provides four levels of priority. It differs from VBR in that it guarantees no cell loss because flow control is utilized.
- ABR dynamic list ⁇ are also guaranteed minimum bandwidth on an output link. A ⁇ noted above, once ABR minimum bandwidth has been satisfied, UBR, VBR, and ABR all vie in round-robin fashion for remaining bandwidth.
- the list of lists structure introduced with respect to Figs. 5 and 7 is also applicable to the processing of cells at the FSPP. Specifically, with regard to Fig. 10 once again, each "cell" illustrated on one of the preferred and dynamic lists is actually a pair of pointers to a queue having one or more cells to be transmitted from the respective port at the respective priority. Each queue is entered only once on a particular list.
- each list entry points to a linked list of cells to be transmitted - it is a list of lists. Fairness is provided between queues of like priority, prioritization between lists is enabled, and cell prioritization i ⁇ maintained.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Computer Security & Cryptography (AREA)
- Computer Hardware Design (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
An ATM network switch and method of utilization for adaptively providing integrated services therein is disclosed. In providing such integrated services, if the allocated bandwidth for one connection has been consumed, or if the connection is not entitled to allocated bandwidth, the connection can optionally use dynamic bandwidth arbitration, or a combination of both allocated and dynamic. The switch includes an input port processor (14), a bandwidth arbiter (12), and an output port processor (16). Cells are transmitted from the input to the output, under the control of respective port processors and the bandwidth arbiter. Flow control is implemented on a per-connection basis. Individual queues are then assigned to traffic type groups in order to provide traffic type flow control. Based upon prioritization information associated with the cell at the input, cells are prioritized and transmitted from the output, with each cell maintained in the same order, relative to other cells on a connection, in which it was received.
Description
ALLOCATED AND DYNAMIC BANDWIDTH MANAGEMENT
RELATED APPLICATION This application claims benefit of U.S. Provisional
Application Serial No. 60/001,498, filed July 19, 1995.
FIELD OF THE INVENTION The invention generally relates to the field of telecommunications networks, and specifically to bandwidth allocation and delay management in an asynchronous transfer mode switch.
BACKGROUND OF THE INVENTION Telecommunications networks such as asynchronous transfer mode ("ATM") networks are used for transfer of audio, video and other data. ATM networks deliver data by routing data units such as ATM cells from source to destination through switches. Switches include input/output ("I/O") ports through which ATM cells are received and transmitted. The appropriate output port for transmission of the cell is determined based on the cell header.
In configuring a network element such as a switch for the optimal transfer of various traffic types (sometimes referred to as service classes) supported by ATM networks, multiple factors such as throughput, delay, and desired bandwidth must be considered. Such traffic types, each having its own delay and bandwidth requirements, include the constant bit rate ("CBR") service class, the variable bit rate ("VBR") service class, the available bit rate ("ABR") service class, and the unspecified bit rate ("UBR") service class.
The primary differentiator between the service classes is delay. Telecommunications network applications such as teleconferencing require deterministic delay bounds, and are typically assigned to the CBR service class. Transaction
processing applications such as automated teller machines require a "tightly bounded" delay specification to provide acceptable response times. Such applications typically are assigned to the VBR service class. File transfer applications such as internetwork traffic merely require a "bounded" delay, and thus typically employ the ABR service class. The UBR service class normally provides no delay bound.
Bandwidth is another consideration in establishing an acceptable switch configuration. Video applications typically have a predictable bandwidth requirement, while file transfer applications are much more aperiodic, or "bursty. "
Low-delay and complete line utilization are opposing goals when multiplexing asynchronous sources. High utilization is achieved by having a set of connections share bandwidth that is unused by connections that need very low delay. This shared bandwidth is known as dynamic bandwidth because it is distributed to connections based on instantaneous operating conditions. VBR, ABR and UBR utilize dynamic bandwidth to achieve high line utilization.
A network switch capable of adaptively accommodating network traffic having such dissimilar delay and bandwidth requirements, and thus providing low-cost, highly efficient integrated services, is required.
SUMMARY OF THE INVENTION Integrated services is the accommodation of various traffic types, wherein each of the traffic types is characterized by delay bounds and by guaranteed bandwidth, and wherein each of the traffic types receives allocated bandwidth, dynamic bandwidth, or a combination of both. The presently disclosed invention is an ATM network switch and method capable of adaptively providing highly efficient, and thus low cost, integrated services therein. In providing such integrated services, if the input rate for a connection
is greater than its allocated bandwidth, the connection can optionally use dynamic bandwidth.
In general overview, the switch includes at least one input port, at least one output port, and input and output buffers associated with the respective input and output ports. Cells enter the switch through the input port and are buffered in the input buffers. The cells are then transmitted from the input buffers to the output buffers, under the control of respective port processors and a Bandwidth Arbiter ("BA") , and then transmitted to the appropriate output port.
In order to provide both connection and traffic type isolation, the buffers are grouped into queues and flow control is implemented on a per queue basis. Each queue includes multiple buffers, and each switch includes multiple input queues and multiple output queues. Upon entering the switch, each cell is loaded into an input cell buffer belonging to a particular input queue for eventual transmission to an output cell buffer belonging to a particular output queue. Per VC queuing enables connection-level flow control, since cells are grouped according to the input and output port pair they traverse. Individual queues are then assigned to traffic type groups in order to facilitate traffic type flow control. For example, each queue is dedicated to a particular traffic type (sometimes referred to as a service class) such as the variable bit rate ("VBR") service class and the available bit rate ("ABR") service class as described above.
In addition to the differentiation of cell traffic into the service categories described above, further levels of priority are introduced within each category because different applications within a category may have different sensitivity to delay. For example, a file transfer performed by a back-up application can tolerate longer delays than a file transfer of a medical image to an awaiting physician. Flow control can also be implemented on these traffic sub-
types, with each queue being assigned to a particular connection, thereby providing flow control on a per-connection basis as well as on a per-service category basis.
It is then possible for the presently disclosed network switch to provide integrated services by transferring input cells to output buffers using bandwidth assigned specifically to such connections ("allocated bandwidth") , by transferring input cells to output buffers using bandwidth which is instantaneously unallocated by connections requiring allocated bandwidth ("dynamic bandwidth") , and by transferring input cells to output buffers utilizing a mix of both allocated and dynamic bandwidth.
Bandwidth arbitration, or the matching of available receivers to transmitters needing to transmit cells to that set of receivers, begins with a determination of what bandwidth is available.
A To Switch Port Processor ("TSPP") is responsible for receiving a cell from a unidirectional transmission path known as a "link," for analyzing cell header information to identify a connection with which the cell is associated, and for buffering the cell in accordance with the service class and subclass priority associated with the respective connection. Further, the TSPP is responsible for transferring the cell from the buffer to one or more From Switch Port Processors ("FSPPs") using the associated switch fabric. The bandwidth employed for such transfer can be either allocated or dynamic, or both, as previously characterized.
To manage the allocated bandwidth, the TSPP employs a time slotted frame concept through the use of a Switch
Allocation Table ("SAT") . The TSPP also uses two data structures in managing different resources, a queue and a list. A queue is used to manage buffers, and consists of a group of one or more buffered cells organized as a FIFO and manipulated as a linked list using pointers. Incoming cells are added (enqueued) to the tail of the queue. Cells which
are sent to the switch fabric are removed (dequeued) from the head of the queue. Cell ordering is always maintained. For a given connection, the sequence of cells that is sent to the switch fabric is identical to that in which they arrived although the time intervals between each departing cell may be different from the inter-cell arrival times.
Valid SAT entries provide a pointer to a "scheduling list," in which is maintained a list of queues which may have cells intended for transfer to a particular output port. A scheduling list consists of one or more queue numbers organized as a circular list. As with queues, lists are manipulated as a linked-list structure using pointers. Queue numbers are added to the tail of a list and removed from the head of the list. A queue number can appear only once on any given scheduling list. In addition to being added and removed, queue numbers are recirculated on a list by removing from the head and then adding the removed queue number back onto the tail. This results in round-robin servicing of the queues on a particular list. Allocated time slots which cannot be used at a given instant in time or valid SAT entries where there is no cell to send for that connection cause the TSPP to notify the BA that it can use that time slot as a dynamic bandwidth cell time for any of the TSPPs associated with the switch. In this way, service classes requiring either or both of allocated and dynamic bandwidth are accommodated.
Cells received through the switch fabric are received by the FSPP associated with the appropriate output port. Based upon prioritization information associated with the cell at the TSPP, the cells are prioritized and transmitted, with each cell maintained in the same order, relative to other cells on a connection, in which it was received.
BRIEF DESCRIPTION OF THE DRAWINGS The invention will be more fully understood by reference to the following description and accompanying drawings of
which:
Fig. 1 is a block diagram of a switch according to the present invention;
Fig. 2 is a block diagram illustrating point-to-point and point-to-multipoint operation in the switch of Fig. 1;
Fig. 3 illustrates Switch Allocation Tables according to the present invention;
Fig. 4 illustrates a scheduling list and associated queues according to the present invention; Fig. 5 illustrates a linked-list structure for multipoint-to-point and point-to-point transfer arbitration according to the present invention;
Fig. 6 illustrates the use of priority lists in the present invention; Fig. 7 illustrates the relationship between a dynamic bandwidth threshold, allocated bandwidth, and dynamic bandwidth in the present invention;
Fig. 8 illustrates the distribution of unallocated output ports for dynamic bandwidth utilization in the present invention;
Fig. 9 is an exemplary queue as used in the present invention;
Fig. 10 illustrates the placement of queues on preferred and/or dynamic lists within the FSPP in the present invention; and
Fig. 11 illustrates preferred and dynamic lists in an FSPP according to the present invention.
DETAILED DESCRIPTION Referring now to Fig. 1, the presently disclosed switch
10 includes a plurality of input ports 20, a plurality of output ports 22 and an NxN switch fabric 11, such as a cross point switch, coupled between the input ports 20 and output ports 22. Each input port 20 includes a To Switch Port Processor ("TSPP") ASIC 14 and each output port 22 includes a From Switch Port Processor ("FSPP") ASIC 16. A Multipoint
Topology Controller ("MTC") ASIC 18 is coupled between each TSPP 14 and a bandwidth arbiter ("BA") ASIC 12, as well as between the bandwidth arbiter 12 and each FSPP 16, as shown. In one embodiment, each MTC supports up to four TSPPs 14 or FSPPs 16.
The switch fabric 11 includes a data crossbar 13 for data cell transport and the bandwidth arbiter 12 and MTCs 18 for control signal transport. The Bandwidth Arbiter ("BA") ASIC 12 controls, inter alia, transport of data cells from a TSPP 14 to one or more FSPPs 16 through the data crossbar 13 (i.e., switch port scheduling), including the dynamic scheduling of momentarily unassigned bandwidth (as further described below) . Each FSPP 16 receives cells from the data crossbar 13 and schedules transmission of those cells onto network links 30 (i.e., link scheduling).
Each of the input ports 20 and output ports 22 includes a plurality of input buffers 26 and output buffers 28, respectively (Fig. 2) . The buffers 26, 28 are organized into a plurality of input queues 32a-m (referred to herein generally as input queues 32) and a plurality of output queues 34a-m (referred to herein generally as output queues 34) , respectively. More particularly, each input port 20 includes a plurality of input queues 32 and each output port includes a plurality of output queues 34, as shown. The input queues 32 are stored in a Control RAM 41 and a Pointer
RAM 50 of the input port 20 and the output queues 34 are stored in a CR1 RAM 61 and a CR2 RAM 63 of the output port 22. The actual cell buffering occurs in Cell Buffer RAM 17, with the queues 32 having pointers to this buffer RAM 17. To traverse the switch 10, a data cell 24 enters the switch through an input port 20 and is enqueued on an input queue 32 at the respective TSPP 14. The cell is then transmitted from the input queue 32 to one or more output queues 34 via the data crossbar 13. Control signals are transmitted from a TSPP 14 to one or more FSPPs 16 via the respective MTC 18 and the bandwidth arbiter 12. In
particular, data and control signals may be transmitted from an input queue 32 to a particular one of the output queues 34, in the case of a point-to-point connection 40. Alternatively, data and control signals may be transmitted from an input queue 32 to a selected set of output queues 34, in the case of a point-to-multipoint connection 42. From the output queue(s) 34, the data cell 24 is transmitted outside of the switch 10, for example, to another switch 21 via a network 30. The bandwidth arbiter 12 contains a crossbar controller
15 which includes a probe crossbar, an XOFF crossbar and an XON crossbar, each of which is an NxN switch. A transfer request message, or probe control signal, flows through the probe crossbar and is used to query whether or not sufficient space is available at a destination output queue, or queues 34, to enqueue a cell. The request message is considered a "forward" control signal since its direction is from a TSPP 14 to one or more FSPPs 16 (i.e., the same direction as data). A two-bit control signal flows in the reverse direction (from one or more FSPPs to a TSPP) through the XOFF crossbar and responds to the request message query by indicating whether or not the destination output queue, or queues 34, are presently capable of accepting data cells and thus, whether or not the transmitting TSPP can transmit cells via the data crossbar 13. In the event that the XOFF control signal indicates that the queried output queue(s) 34 are not presently capable of receiving data, another reverse control signal, which flows through the XON crossbar, notifies the transmitting TSPP once space becomes available at the destination output queue(s) 34.
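The probe/XOFF/XON exchange described above can be pictured as a small request-response protocol between a transmitting TSPP and a destination FSPP queue. The following C sketch is illustrative only; the structure, the depth/limit space test and the xon_armed flag are assumptions made for the example and do not correspond to the actual ASIC signal names.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical, simplified model of one output queue at an FSPP. */
    typedef struct {
        int depth;      /* cells currently enqueued                      */
        int limit;      /* maximum cells this queue may hold             */
        bool xon_armed; /* a TSPP is waiting for an XON from this queue  */
    } OutQueue;

    /* Forward "probe": can the destination queue accept one more cell? */
    bool probe(const OutQueue *q) { return q->depth < q->limit; }

    /* Reverse path: an XOFF told the TSPP to hold the cell; when space
     * later frees up, an XON notifies the waiting TSPP.                 */
    void dequeue_cell(OutQueue *q) {
        q->depth--;
        if (q->xon_armed && q->depth < q->limit) {
            q->xon_armed = false;
            printf("XON: destination queue can accept cells again\n");
        }
    }

    int main(void) {
        OutQueue q = { .depth = 4, .limit = 4, .xon_armed = false };
        if (!probe(&q)) {           /* probe crossbar: query for space    */
            q.xon_armed = true;     /* XOFF crossbar: reply "not now"     */
            printf("XOFF: hold the cell\n");
        }
        dequeue_cell(&q);           /* space frees up -> XON crossbar     */
        return 0;
    }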
Each output port 22 contains four memories: a Control RAM 1 (CR1 RAM) 61, a Control RAM 2 (CR2 RAM) 63, a Cell Buffer RAM 19, and a Quantum Flow Control RAM (QFC RAM). The Cell Buffer RAM 19 is where the actual cells are buffered while they await transmission. The CR1 RAM 61 and the CR2 RAM 63 contain the output queues 34, with each queue 34
containing pointers to cells in the Cell Buffer RAM 19. The CR1 RAM 61 contains information required to implement scheduling lists used to schedule link access by the output queues 34 associated with each link 30 supported by the FSPP 16. The QFC RAM 67 stores update information for transfer to another switch 29 via a network link 30. Update cells are generated in response to the update information provided by a TSPP 14 and specify whether the particular TSPP 14 is presently capable of accepting data cells. In order to provide both connection and traffic type isolation, the buffers 26, 28 are organized into queues 32, 34 respectively and flow control is implemented on a per queue basis. Each queue includes multiple buffers, and each switch includes multiple input queues 32 and multiple output queues 34. Upon entering the switch, each cell 24 is loaded into a particular input queue 32 for eventual transmission to a particular output queue 34. By organizing input cells in queues by received (input) port and destination (output) port, connection level flow control is facilitated. For example, queues 32a, 34a could be dedicated to a particular connection. In addition, nested queues of queues may be employed to provide per subclass flow control.
Referring again to Fig. 1, the invention will now be described in greater detail. In the preferred architecture each input port includes a TSPP 14, and each output port includes an FSPP 16. The TSPPs and FSPPs each include cell buffer RAM which is organized into queues 32, 34, respectively. All cells in a connection 40 pass through a single queue at each port, one at the TSPP and one at the FSPP, for the life of the connection. The queues preserve cell ordering. This strategy also allows quality of service ("QoS") guarantees on a per connection basis. In the multipoint-to-point case, two or more queues are established to service the multiple sources. As a cell is received into the TSPP 14, the first action performed by the TSPP is to check the cell header for errors
and then to check that the cell is associated with a valid connection. To do this, the VPI/VCI fields specified in each cell header are employed as an index into a translation table known as the VXT which is stored in the Control RAM 41. The TSPP first checks to see if this connection is one previously set up by the control software. If recognized, the cell will then be assigned a queue number associated with the connection. At the same time, the cell is converted into an internal cell format by the TSPP. The queue number is associated with a queue descriptor which is a table of state information that is unique to that source. After a cell is assigned a queue number from the VXT, the TSPP looks at the corresponding queue descriptor for further information on how to process the cell. The next operation is to try to assign a buffer for the cell. If available, the cell buffer number is enqueued to the tail of its respective queue and the cell is written out to external cell buffer RAM 32.
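As a rough sketch of the receive path just described (header check, VXT lookup, queue assignment, buffer assignment, enqueue), the C fragment below models the decision sequence. The table size, the hash used to index the VXT and the function names are assumptions made purely for illustration and are not the actual VXT format.

    #include <stdbool.h>
    #include <stdint.h>

    #define VXT_SIZE 1024   /* assumed translation table size */

    typedef struct { bool valid; int queue_number; } VxtEntry;
    typedef struct { int head, tail, count; } QueueDescriptor;

    static VxtEntry        vxt[VXT_SIZE];     /* VPI/VCI -> queue number */
    static QueueDescriptor queue_desc[256];   /* per-connection state    */

    /* Illustrative index into the translation table. */
    static unsigned vxt_index(uint16_t vpi, uint16_t vci) {
        return (((unsigned)vpi << 6) ^ vci) % VXT_SIZE;
    }

    /* Returns the queue the cell was enqueued on, or -1 if it is not
     * associated with a valid connection or no buffer is available.    */
    int tspp_receive(uint16_t vpi, uint16_t vci, int *free_buffers) {
        VxtEntry *e = &vxt[vxt_index(vpi, vci)];
        if (!e->valid)            /* not a connection set up by control software */
            return -1;
        if (*free_buffers == 0)   /* no cell buffer can be assigned */
            return -1;
        (*free_buffers)--;
        queue_desc[e->queue_number].count++;  /* enqueue at the tail of its queue */
        return e->queue_number;
    }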
In addition to processing and buffering incoming cell streams, the TSPP must transfer the cells from the cell buffer to a group of one or more FSPPs using the switch fabric 11. The bandwidth used for such transfer can either be preassigned (i.e., allocated bandwidth) or dynamically assigned (i.e., dynamic bandwidth). The allocated bandwidth is assigned by Call Acceptance Control (CAC) software. The assignment of dynamic bandwidth depends on the instantaneous utilization of the switch resources, and is controlled by the Bandwidth Arbiter 12.
Allocated bandwidth is managed using a time slotted frame concept. With regard to Fig. 3, each TSPP has a data structure called a Switch Allocation Table ("SAT") 23 which is used to manage the allocated bandwidth. All TSPPs in the switch are synchronized such that they are all pointing, using a SAT pointer 25, to the same offset in the SAT at any given cell time. In a preferred embodiment, each slot in the SAT is active for 32 clock cycles at 50 MHz, providing
approximately 64 Kbps of cell payload bandwidth. Given a SAT depth of 8192, the pointers scan the SATs approximately every 6 msec, thereby providing a maximum delay for a transmission opportunity of approximately 6 msec. The CAC software is responsible for assigning allocated bandwidth from TSPPs to FSPPs in a conflict-free manner.
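The quoted figures can be reconciled as follows, assuming a 48-byte ATM cell payload (an assumption of this example, not a statement from the specification): one SAT slot spans 32 cycles of a 50 MHz clock, an 8192-entry SAT therefore repeats roughly every 5 to 6 milliseconds, and one payload per frame corresponds to roughly 64 Kbps.

    #include <stdio.h>

    int main(void) {
        const double clock_hz  = 50e6;               /* 50 MHz fabric clock          */
        const double slot_s    = 32.0 / clock_hz;    /* 32 cycles per SAT slot       */
        const double frame_s   = 8192.0 * slot_s;    /* one pass over the SAT        */
        const double payload_b = 48.0 * 8.0;         /* assumed 48-byte cell payload */

        printf("slot  = %.2f us\n", slot_s * 1e6);   /* ~0.64 us                     */
        printf("frame = %.2f ms\n", frame_s * 1e3);  /* ~5.24 ms; the text rounds
                                                        this to approximately 6 ms   */
        printf("rate  = %.1f Kbps\n", payload_b / frame_s / 1e3);
                                                     /* ~73 Kbps here, ~64 Kbps if
                                                        the frame is taken as 6 ms   */
        return 0;
    }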
Each cell time, the TSPP looks at the SAT entry for that cell time. A SAT entry is either not valid or points to a list of queues in TSPP Control RAM 41 called a scheduling list 27 (see Fig. 4). Queue descriptors for each of the queues are also stored in the Control RAM 41. If the SAT entry is invalid, that cell time is made available to the Bandwidth Arbiter for use in assigning dynamic bandwidth, as described below. Allocated cell time given up by a particular TSPP may be used as a dynamic bandwidth cell time; it may be used by the TSPP that gave up the slot or it may be given to a different TSPP for use. The decision of which TSPP gets a given dynamic cell time is made by the Bandwidth Arbiter. If the SAT entry contains a valid scheduling list number, as illustrated in Fig. 3 as SLIST 4 27, the TSPP will use the first queue on the referenced scheduling list as the source of the cell to be transferred during that cell time. This is accomplished by the scheduling list containing a "head" pointer 29 and a "tail" pointer 31, as shown in Fig. 4. The head pointer 29 is a pointer to a first queue 33 having a cell to be transmitted to a particular output port. The tail pointer 31 is a pointer to a last queue 35 having a cell to be transmitted to the same output port. Further, each queue associated with this list has a "next" pointer labelled "N" in Fig. 4 which points to the next queue in a sequence of queues. Though not illustrated in Fig. 4, each queue is a linked list, wherein the queue descriptor has a head pointer pointing to the first cell buffered in this queue, and a tail pointer pointing to the last cell buffered in this queue. Each buffered cell has a next pointer
pointing to the next cell in the queue. Thus, as illustrated, the SAT for TSPP 0 presently indicates that a cell time is available to scheduling list 4 27 (SLIST 4). The head pointer 29 of this scheduling list is pointing to queue 4 33, which has four cells ready to be transmitted to the respective output port. After the first cell from queue 4 has been transmitted through the switch fabric and the internal pointers of queue 4 have been modified to point to the second cell as the next cell for transmission, queue 4 33 now becomes the last of the three queues associated with SLIST 4 to be selected next time. Specifically, the head pointer of SLIST 4 is modified to point to queue 7, the tail pointer is modified to point to queue 4, and the header data of queue 2 is modified to point to queue 4. If queue 4 does not have another cell to be transmitted, the queue is dequeued, queue 7 is the next queue, and queue 2 is the last queue.
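A compact sketch of this per-cell-time decision is given below in C. The structure and function names are assumptions made for the example; a NULL return stands for the cases, described next, in which the slot is handed to the Bandwidth Arbiter for use as dynamic bandwidth.

    #include <stddef.h>

    /* Assumed, simplified shapes for a SAT entry, a scheduling list and a queue. */
    typedef struct Queue { struct Queue *next; int cell_count; } Queue;
    typedef struct { Queue *head, *tail; } SchedulingList;
    typedef struct { int valid; SchedulingList *slist; } SatEntry;

    /* Returns the queue that supplies a cell this slot, or NULL if the slot
     * is given up to the Bandwidth Arbiter.                                  */
    Queue *service_sat_slot(SatEntry *entry) {
        if (!entry->valid || entry->slist == NULL || entry->slist->head == NULL)
            return NULL;                     /* "unallocated" or "allocated, unused" */

        SchedulingList *sl = entry->slist;
        Queue *q = sl->head;                 /* first queue on the referenced list   */
        q->cell_count--;                     /* one cell ships through the fabric    */

        sl->head = q->next;                  /* rotate: the head advances            */
        q->next = NULL;
        if (q->cell_count > 0) {             /* still has cells: requeue at the tail */
            if (sl->head == NULL) sl->head = q;
            else sl->tail->next = q;
            sl->tail = q;
        } else if (sl->head == NULL) {
            sl->tail = NULL;                 /* list emptied: the queue is dequeued  */
        }
        return q;
    }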
Cell times are made available to the Bandwidth Arbiter for assignment as dynamic bandwidth under the following conditions:
1) if the scheduling list identified by the SAT has no queue entry available, this case being referred to as "allocated, unused;" or
2) if the SAT has no scheduling list specified for a particular cell time slot, this case being referred to as "unallocated." A further condition exists in the case where a pacing scheme is implemented in the TSPP to minimize initial delay in transferring a cell using allocated bandwidth. If a SAT slot for a particular scheduling list is indicated, but the pacing counter for that list has not reached the appropriate value, a cell from an associated queue is prevented from being transferred, and the slot becomes available for dynamic bandwidth transfer. Dynamic bandwidth cell times are managed by taking advantage of a nested set of pointers, or what is referred
to as a "list of lists" technique. In general, such a structure is presented in Fig. 5. A set of lists, labelled Dynamic Bandwidth Lists, has plural entries, labelled Port
0₁, Port 0₂, Port 0₃, Port 0₄, Port 1₁, Port 1₂, Port 1₃, and so on. Each entry represents a dynamic bandwidth list for each port and priority (discussed below), and has a head pointer-tail pointer pair; the pair for Port 0₃, for example, points to scheduling lists for port 0, priority 3. Thus, "Dynamic Bandwidth Lists" is comprised of entries which are themselves lists, or in other words, is a list of lists. The head pointer for Port 0₃ points to scheduling list 12 (SLIST 12). SLIST 12 is the first of plural scheduling lists in the linked-list data structure called the dynamic bandwidth list for the port and priority. The tail pointer for Port 0₃ points to the last entry in this linked-list structure, SLIST 5. Each scheduling list in the structure has a pointer to the next scheduling list in the same structure.
Each of SLISTs 12, 2 and 5 also has a head pointer-tail pointer pair pointing to at least one queue having a linked-list data structure. Specifically, the head pointer of SLIST 12 points to Queue 3 (labelled Q3), and the tail pointer of SLIST 12 points to the last queue in that queue-level linked list, Queue 11 (labelled Q11). Similarly, the head and tail pointers of SLIST 2 point to a single queue, Queue 8 (Q8), and the head and tail pointers of SLIST 5 point to Queues 2 and 6, respectively.
At the queue level, a head pointer for Q3 points to the first buffered cell in the queue, labelled C1, having a pointer to the buffered cell data ("C"), and a pointer ("N") to the next cell in the queue.
For point-to-point transmission, there is a one-to-one correspondence between scheduling list and queue. This is illustrated in Fig. 5 with SLIST 2 and Queue 8. For multipoint-to-point, there can be plural queues per scheduling list. Such is the case with SLIST 12 and Queues 3 and 11, and with SLIST 5 and Queues 2 and 6.
By implementing this overall "list of lists" structure in the presently disclosed ATM switch, multiple levels of control are provided. For instance, the first time an event occurs which enables one cell to be transmitted to Port 0₃, the first cell in the first queue associated with scheduling list 12 will be selected. This is cell C1 of Queue 3. The pointers of the "Dynamic Bandwidth Lists" list and SLISTs 12 and 5 are adjusted such that SLIST 2 is the next scheduling list from which a cell is provided if dynamic bandwidth becomes available for transmission of a cell to output Port 0₃. SLIST 5 would be second, and SLIST 12 would then be last. Similarly, Queue 3, having just provided a cell, becomes the last queue to be eligible to provide a cell vis a vis SLIST 12, with Queue 11 being the next. This occurs through the manipulation of pointers in SLIST 12 and Queues 3 and 11. Finally, cell C1, having been transmitted, is dequeued from Queue 3, meaning the pointers of Queue 3 are readjusted to point to C2 as next to be transmitted. Only if another cell is received into Queue 3 will another cell fall into line behind cell 4.
Round-robin selection is thus enabled between the scheduling lists and the queues, with even bandwidth distribution being provided at each level. Other scheduling policies can be implemented if other bandwidth distributions are desired.
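The nesting itself can be sketched with three levels of linked structures, as below. The type names are assumptions made for the example; the selection simply reads the head of the head of the head, and after a cell ships the queue and the scheduling list are recirculated to the tails of their respective lists in the same manner as the SAT servicing sketch above, yielding the round-robin behaviour described.

    #include <stddef.h>

    /* Assumed shapes for the three nested levels of Fig. 5. */
    typedef struct Cell  { struct Cell  *next; } Cell;
    typedef struct Queue { struct Queue *next; Cell  *head, *tail; } Queue;
    typedef struct SList { struct SList *next; Queue *head, *tail; } SList;
    typedef struct       { SList *head, *tail; } DynBwList;  /* one per port and priority */

    /* The next dynamic cell for a port and priority is the first cell of the
     * first queue of the first scheduling list on that dynamic bandwidth list. */
    Cell *peek_next_dynamic_cell(const DynBwList *d) {
        if (d->head == NULL) return NULL;    /* null pointers: nothing to send */
        SList *sl = d->head;                 /* e.g. SLIST 12                  */
        Queue *q  = sl->head;                /* e.g. Queue 3                   */
        return q ? q->head : NULL;           /* e.g. cell C1                   */
    }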
As shown, the list of lists approach is applied to the allocation of dynamic bandwidth in the form of Dynamic Bandwidth Lists internal to the TSPP ASIC. 260 dynamic bandwidth lists are employed in the TSPP ASIC in a preferred embodiment. The first 256 of these lists are used for point-to-point ("P2P") and multipoint-to-point ("M2P") connections. Four lists are assigned to each one of the switch output ports. Four other lists are used for point-to-multipoint ("P2M") connections. This is shown at an upper-level in Fig. 6, where for each TSPP, there is a list of lists structure similar to that of Fig. 5.
In either the P2P or M2P case, when enough cells have been removed from a queue to reach a dynamic bandwidth threshold (discussed subsequently) , the queue is dropped, or dequeued, from the linked list of queues. Further, when all queues for a particular scheduling list have been dequeued, the scheduling list is removed from the linked list of lists. If all scheduling lists for a particular entry in the linked list are removed, the pointers in the Dynamic Bandwidth List are given null values. Another example of the application of the list of lists structure to the present ATM switch is described below with respect to Output Link Scheduling.
The priority of the scheduling lists is transmitted to the BA. The BA utilizes this priority information to effect the order in which it grants dynamic bandwidth to the TSPP. This prioritization is employed in assigning scheduling lists to one of the four dynamic bandwidth lists. In one embodiment of the disclosed switch, illustrated in Fig. 6, cells from the VBR and ABR service categories are subject to being assigned to any of the four priorities, and UBR cells are subject only to being assigned to the lowest priority dynamic bandwidth list.
Each queue for each connection has a dynamic bandwidth threshold 37 associated therewith, as shown in Fig. 4. If a queue buffer depth exceeds the cell depth indicated by the respective dynamic bandwidth threshold 37, the scheduling list for that queue will be added to the appropriate dynamic bandwidth list corresponding to the appropriate output port and priority. For each output port, the dynamic bandwidth list provides an indication of which, if any, cells are to be transmitted to the respective output port using dynamic bandwidth. The dynamic bandwidth threshold is established at call setup time. In a further embodiment of the present switch, however, the threshold value is adjusted dynamically based upon an empirical analysis of traffic through the switch.
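A minimal sketch of the threshold test, under assumed field names, is shown below; in the switch itself the addition to and removal from the dynamic bandwidth list are performed by the pointer adjustments described above rather than by a boolean flag.

    #include <stdbool.h>

    /* Assumed per-queue state: current depth and the dynamic bandwidth
     * threshold fixed at call setup (or adjusted empirically).           */
    typedef struct {
        int  depth;            /* cells currently buffered                       */
        int  dyn_threshold;    /* depth above which dynamic bandwidth is sought  */
        bool on_dynamic_list;  /* scheduling list is linked onto the dynamic
                                  bandwidth list for its port and priority       */
    } QueueState;

    /* Called whenever the queue's depth changes; returns true if the queue's
     * scheduling list should now be on the dynamic bandwidth list.            */
    bool update_dynamic_membership(QueueState *q) {
        q->on_dynamic_list = q->depth > q->dyn_threshold;
        return q->on_dynamic_list;
    }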
With regard to Fig. 7, a dynamic bandwidth threshold for a queue of CBR cells, or cells requiring a dedicated bandwidth, would be established such that the requested bandwidth (labelled "A" in Fig. 7) meets or exceeds the requirement. For other applications which may be more bursty but which still require tightly bounded delay, a dynamic bandwidth threshold such as that labelled "B" in Fig. 7 may be suitable, wherein the majority of the traffic is handled by allocated bandwidth, with momentary bursts handled by high-priority dynamic bandwidth. In either case, bandwidth specifically allocated but unused is made available to the BA by the TSPP for dynamic bandwidth allocation.
Note that for categories of service which rely solely on allocated bandwidth, the dynamic bandwidth threshold is set above any expected peaks in cell reception. Conversely, for categories of service having no (or low) delay bounds and no guaranteed bandwidth, such as UBR, the dynamic bandwidth threshold is set to zero.
As discussed, each queue is also a linked list, wherein the queue descriptor, resident in the control RAM 41, has a head pointer pointing to the first buffer belonging to the queue and containing a cell, and a tail pointer pointing to the last buffer belonging to the queue and containing a cell. The input cells are buffered in the cell buffer RAM 32. The queue header of an empty queue has a head pointer = 0 and a tail pointer = head. Thus, the linked list that forms the queue is just a chain of pointers. The contents of one pointer points to the next pointer, etc. The pointer number is both the logical address of the pointer as well as the logical address of the cell buffer (i.e., the cell buffer number). There is a one-to-one mapping of a cell pointer and its corresponding cell buffer. The majority of the pointers are stored within Pointer RAM 50 along with the SAT.
Cell ordering is preserved since cells are removed from the queue in a first-in-first-out (FIFO) fashion, no matter whether allocated or dynamic bandwidth is used. This is
despite the fact that a scheduling list can be granted transmission opportunities by either the SAT or by a dynamic bandwidth list. In a first embodiment, all of the queues in each dynamic bandwidth list share, in round-robin fashion, the available dynamic bandwidth for that port.
Assume that queue 4 from Fig. 4 is added to one of the scheduling lists on the dynamic bandwidth list of Fig. 5. In actuality, only pointers of the dynamic bandwidth list, the respective scheduling list, and any other queues on the scheduling list are adjusted to place a queue on this list; no physical relocation of the queue is involved. Assume that over an interval no cells are added to queue 4 as illustrated in Fig. 4 and no cells are removed from the queue as a result of allocated bandwidth being made available. If two cells are transmitted from this queue as a result of dynamic bandwidth being made available over time during this interval, the cell count in the queue would then be below the respective dynamic bandwidth threshold 37. The queue would then be removed from the dynamic bandwidth list by adjusting the pointers of the appropriate scheduling list and any other queues associated with that scheduling list. The opposite is true for a queue which receives cells above its respective dynamic bandwidth threshold. Note that the first cell to be buffered within a queue will always be the first to be transmitted, whether such transmission is via allocated or dynamic bandwidth. This is necessary to preserve the proper ordering of cells.
Therefore, at each cell time, the TSPP is assigned either allocated or dynamic bandwidth. The TSPP uses this information in deciding which connection to use in supplying a particular cell to be transferred during that cell time.
The Bandwidth Arbiter 12 ("BA") distributes unallocated and unused-allocated switch bandwidth, the dynamic bandwidth. The distribution is based on requests and information sent by each TSPP. Each TSPP identifies to the BA output ports which will have cells sent to them for a particular cell time
as a result of allocated bandwidth. In addition, each TSPP provides to the BA an indication of which output ports are requested for access via dynamic bandwidth, a product of the dynamic bandwidth lists. If a TSPP does not have an allocation on the SAT for a specific cell time, it may vie for dynamic bandwidth. Each TSPP can have several outstanding requests stored in the BA.
Each TSPP provides its dynamic bandwidth request(s) for a port(s) to the BA via a serially-communicated request to set the bits for the requested output ports. Each TSPP can set or delete bits in its respective request vector, or can change priority with respect to each request - each request has a priority level stored in conjunction therewith. These three commands are executed via a three-bit serial command sent from the respective TSPP to the BA. Up to 16 ports can be requested by the TSPP. In other words, each TSPP can request all of the output ports in a switch having sixteen output ports. A request remains set unless it is explicitly deleted by the TSPP. In the case where a request is matched by the BA with an available output port, a grant in the form of a port number is returned by the BA to the requesting TSPP. The BA interprets the requests and stores them in the form of a register bank, one for each priority with a set bit indicating a requested port. These dynamic bandwidth requests of all vying TSPPs are fed into a Dynamic Arbitration Unit 43 of the BA, which tries to match the requests with the available (not allocated or allocated but unused) ports. Matched requests are communicated back to the TSPPs, which refer to their dynamic bandwidth lists (described above) in sending cells accordingly. State information is retained by the BA to implement a round-robin service scheme and to determine which was the last TSPP served. A TSPP is served when a Free Output Port Vector in the BA is matched to a TSPP request, whereby the requested port is granted and the request is subtracted from the Free Output Port Vector. The Free Output
Port Vector is then applied to the next TSPP request in an attempt to match unassigned ports to requested ports. Eventually, the Free Output Port Vector will be all or almost all zeroes, and no further match between unassigned ports and requested ports can be found.
Fig. 8 illustrates the matching process. Here, TSPP 0 has provided a serial request for ports 0 and 2. The BA indicates that ports 0, 1, and 2 are available for dynamic bandwidth cell transfer via the Free Output Port Vector. Assuming TSPP 0 is first in the round-robin list of TSPPs to be matched, a match for ports 0 and 2 is indicated. In point-to-point ("P2P") communication, only one matching port is granted to TSPP 0, in the illustration, port 0. The BA now has ports 1 and 2 left in the Free Output Port Vector for matching with the next requesting TSPP. TSPP 1 has requested ports 0 - 3. Ports 1 and 2 match with the left-over available list. Port 2 is granted, and the new left-over Free Output Port Vector includes port 1. The BA matching process continues until all available ports are granted by the BA, or no unmatched TSPP requests remain.
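The matching of Fig. 8 can be modelled with bit vectors, one bit per output port, as in the sketch below. This is an approximation: the round-robin starting point, the four-TSPP size and the choice among multiple matching ports (here the lowest-numbered port is granted, whereas the figure grants port 2 to TSPP 1) are assumptions made for the example.

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_TSPPS 4   /* assumed switch size; bit i of a vector = output port i */

    /* Grant at most one port per TSPP per cell time (the P2P case), starting
     * from the TSPP after the last one served.                                */
    void arbitrate(uint16_t request[NUM_TSPPS], uint16_t *free_ports, int *last_served)
    {
        for (int n = 0; n < NUM_TSPPS && *free_ports != 0; n++) {
            int t = (*last_served + 1 + n) % NUM_TSPPS;    /* round robin  */
            uint16_t match = request[t] & *free_ports;     /* any overlap? */
            if (match == 0)
                continue;
            uint16_t grant = (uint16_t)(match & -match);   /* one matching port */
            *free_ports &= (uint16_t)~grant;               /* subtract it from the
                                                              Free Output Port Vector */
            *last_served = t;
            int port = 0;
            while (((grant >> port) & 1u) == 0) port++;
            printf("TSPP %d granted port %d\n", t, port);
        }
    }

    int main(void) {
        /* Fig. 8: TSPP 0 requests ports 0 and 2, TSPP 1 requests ports 0-3,
         * and ports 0, 1 and 2 are free.                                     */
        uint16_t req[NUM_TSPPS] = { 0x0005, 0x000F, 0x0000, 0x0000 };
        uint16_t free_ports = 0x0007;
        int last = NUM_TSPPS - 1;
        arbitrate(req, &free_ports, &last);
        return 0;
    }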
In an embodiment supporting point-to-multipoint ("P2M") cell transfer, matches to P2M requests are sought prior to seeking matches to P2P requests, since it is more difficult to match all requested ports from one TSPP at once. To allow the use of allocated and shared (i.e., dynamic) resources at the output port by a single connection, cell transfers are tagged to indicate whether they were above or below their allocated cell rate. The tagging is performed by the BA. If a cell is shipped using a SAT slot, it is tagged as scheduled. If the cell is shipped because it won bandwidth arbitration, it is tagged as not scheduled. This information is employed in FSPP processing, as described below.
Traffic of different priority levels is supported in the presently disclosed switch through the assignment of requests to one of four priority levels by the originating TSPP. The
BA separates these four levels into either "high" or "low" priority, and attempts to match all high-priority requests prior to attempting to match all low-priority requests.
Prior to receiving a cell through the switch fabric, an FSPP receives control information indicative of whether the cell transfer utilizes scheduled bandwidth or dynamic bandwidth. The control information further indicates the queue(s) within which the cell is to be enqueued. This information allows the FSPP to determine whether it has sufficient resources such as queue space, buffer space, bandwidth, etc., to receive the cell.
If the FSPP does not have sufficient resources to receive a cell, it indicates this by asserting an appropriate control signal. The assertion of this signal means the FSPP is unable to receive the cell or the FSPP is not present.
As illustrated in Fig. 1, external to the FSPP 16 are four memories, Control RAM 1 61, Control RAM 2 63, Cell Buffer RAM 19, and QFC RAM 67. Control RAM 1, Control RAM 2, and Cell Buffer RAM are used to enqueue and dequeue cells. Control RAM 1 and Control RAM 2 contain the information required to implement the queues, dynamic lists and preferred lists (discussed below) necessary to provide the FSPP functions. The Cell Buffer RAM is where the actual cells are buffered while they await transmission. The QFC RAM primarily contains storage for flow control information received from the TSPP, and is accessed during the generation of flow control update cells.
The cell buffer is divided into cell buffer locations, each capable of holding a single cell. Cell buffer locations within the cell buffer are pointed to using cell numbers. The starting address of a cell buffer location is derived from the cell number; each cell buffer location has a unique cell number pointing to that cell buffer location within the cell buffer. The total number of cell buffer locations is divided among plural cell buffer pools, each dedicated to an internal
cell scheduling resource. Each pool is implemented using two internal registers. The cell buffer pool count register contains the current number of cell buffer locations in use for that pool. The cell buffer pool limit register contains the maximum number of cell buffer locations allowed for that pool.
Cell numbers are manipulated to place cell buffer locations into queues. When a cell buffer location in the cell buffer is written with a cell, the cell number pointing to that cell buffer location is then placed on a queue. Cells are transmitted from the queues in the order in which they were received; the first received is the first transmitted. A logical representation of such a queue is illustrated in Fig. 9. Each queue is implemented as a linked list of cell numbers; each cell on the queue points to the next cell on the queue using its cell number as a pointer, as previously described. Each queue has a separate structure, known as the queue descriptor, maintained in Control RAM 2 to point to the head and tail of the queue. The linked list making up a queue is implemented as a set of pointers in Control RAM 1 such that each cell buffer location has one entry. The pointers are indexed using the cell number, with each entry containing another cell number. Thus, one cell number can point to another cell number. The queue descriptor also contains a count of the cells in the queue.
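Combining the pool registers and the cell-number chains described above, a simplified enqueue and dequeue might look like the following C sketch; the array sizes and function names are assumptions made for illustration.

    #include <stdbool.h>

    #define NUM_CELLS 4096   /* assumed number of cell buffer locations */

    /* Control RAM 1: one "next cell number" entry per cell buffer location. */
    static int next_cell[NUM_CELLS];

    /* Control RAM 2: one descriptor per queue. */
    typedef struct { int head, tail, count; } QueueDescriptor;

    /* One cell buffer pool: a count register and a limit register. */
    typedef struct { int count, limit; } CellPool;

    /* Enqueue the cell buffer location identified by cell_number, provided
     * the pool it draws from has not reached its limit.                     */
    bool fspp_enqueue(QueueDescriptor *q, CellPool *pool, int cell_number) {
        if (pool->count >= pool->limit) return false;   /* insufficient resources */
        pool->count++;
        next_cell[cell_number] = -1;                    /* new tail of the chain  */
        if (q->count == 0) q->head = cell_number;
        else next_cell[q->tail] = cell_number;
        q->tail = cell_number;
        q->count++;
        return true;
    }

    /* Dequeue from the head: the first cell received is the first transmitted. */
    int fspp_dequeue(QueueDescriptor *q, CellPool *pool) {
        if (q->count == 0) return -1;
        int cell_number = q->head;
        q->head = next_cell[cell_number];
        q->count--;
        pool->count--;
        return cell_number;
    }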
Once a cell is placed on a queue, that queue must then be scheduled for transmission. This is done by placing the queue number of that queue on a list. There are different types and priorities of lists within the FSPP. Lists are linked lists of queue numbers, similar to the scheduling lists of the TSPP. Each list has a separate structure, known as the list descriptor, maintained internal to the FSPP to point to the head and tail of the list. There are two categories of traffic to be scheduled for transmission: allocated traffic and dynamic traffic. The
control information associated with each cell received in the FSPP indicates to which of these two categories the cell belongs.
Two types of lists are used for scheduling the two types of traffic: preferred lists and dynamic lists. The queue numbers of queues having allocated traffic are placed on the preferred list. The queue numbers of queues having dynamic traffic are placed on the dynamic list. Queues can be found on both the preferred list and the dynamic list since each queue may have both scheduled and unscheduled cells, as shown in Fig. 10. Here, the first entry in the preferred list is a pointer to queue 7, labelled Q7. Note that Q7 is also pointed to by the second entry in the illustrated dynamic list. The preferred list will be serviced before the dynamic list.
If a queue has no cells assigned to it, it is obviously on neither the preferred list nor the dynamic list. If the queue receives one cell via dynamic bandwidth, the queue is placed on the dynamic list. If the queue receives a second cell, but this time via allocated bandwidth, the queue is also placed on the preferred list. Since servicing of preferred lists takes precedence over servicing of dynamic lists, the first cell received in the queue will be chosen for transmission out of the switch via the preferred list, not the dynamic list. The queue will remain on the dynamic list after being removed from the preferred list until the remaining cell is chosen for transmission. Therefore, even though the queue was first placed on the dynamic list, then the preferred list, the first cell is dequeued via the preferred list. This is necessary to ensure and maintain proper cell ordering.
The BA is responsible for tagging each cell as either shipped in an allocated SAT slot, or as shipped in an unscheduled dynamic slot. It is this information which is used in assigning queues to preferred and/or dynamic lists.
Some queues have mixed service traffic with both
allocated and dynamic cells. This is a result of providing integrated services whereby a particular connection may have cells to transmit beyond the respective dynamic bandwidth threshold (see discussion pertaining to Fig. 7 above) . Cells below the threshold are sent as allocated traffic. Cells above the threshold may be sent as dynamic traffic. Queue numbers for the allocated traffic are placed on the preferred list, and queue numbers for the dynamic traffic are placed on the dynamic list. Regardless of order of receipt between allocated and dynamic cells, cells from the queue numbers on the preferred list will be scheduled and removed first. The cells are still transmitted in order out of the FSPP, however, since the cell numbers on the queue remain in order and cell numbers are always removed from the head of the queue. Therefore, even if an individual cell at the head of an output queue was received in the FSPP as an unscheduled, dynamic cell, it will be transmitted first, even if the queue is identified next on a preferred list.
Once a queue number has been added to a list, either a preferred list or a dynamic list, it remains on that list until the queue has no more cells of the appropriate type. When a queue number makes it to the head of the list, that queue becomes the next queue within the list from which a cell is transmitted. When the cell is transmitted, the queue number is removed from the head of the list and the count of either allocated cells for a preferred list or dynamic cells for a dynamic list is decremented within the queue descriptor. If the decremented counter is non-zero, the queue number is returned to the tail of the list. Otherwise it is dropped from the list.
By servicing the queue number from the head of the list and returning it to the tail of the list, the queues within a list receive round-robin scheduling.
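The servicing step just described might be sketched as follows. The descriptor fields are assumptions made for the example; in particular a single link field is shown, whereas a queue that sits on both a preferred list and a dynamic list would need a separate link for each.

    #include <stddef.h>

    /* Assumed queue descriptor fields relevant to list servicing. */
    typedef struct QDesc {
        struct QDesc *next_on_list;
        int allocated_cells;   /* counted for the preferred list */
        int dynamic_cells;     /* counted for the dynamic list   */
    } QDesc;

    typedef struct { QDesc *head, *tail; } ListDescriptor;

    /* Service one entry of a preferred (is_preferred != 0) or dynamic list:
     * the queue at the head supplies the next transmitted cell, its count of
     * the matching type is decremented, and it returns to the tail only if
     * cells of that type remain.                                             */
    QDesc *service_list(ListDescriptor *l, int is_preferred) {
        QDesc *q = l->head;
        if (q == NULL) return NULL;

        l->head = q->next_on_list;
        q->next_on_list = NULL;

        int *count = is_preferred ? &q->allocated_cells : &q->dynamic_cells;
        (*count)--;                      /* the cell itself leaves the head of its
                                            queue, preserving arrival order        */
        if (*count > 0) {                /* round robin: back to the tail          */
            if (l->head == NULL) l->head = q;
            else l->tail->next_on_list = q;
            l->tail = q;
        } else if (l->head == NULL) {
            l->tail = NULL;
        }
        return q;
    }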
With reference to Fig. 11, four priorities of preferred lists are provided for each output link to provide delay bounds through the switch for differentiation of different
levels of service. Each output link is scheduled independently, so there is no interaction between the preferred lists for different links. Newly received cells in a higher priority list are transmitted before previously received cells in a lower priority preferred list.
All the preferred lists with allocated traffic for a link are scheduled with a priority above dynamic lists with dynamic traffic for that link.
There are three types of dynamic lists, with four priorities for each type: VBR, ABR, UBR, as shown in Fig. 10. Each type of list is permanently assigned to each output link. In a preferred embodiment of the presently disclosed switch, the four priorities for each of the VBR and ABR dynamic lists are further divided among two priority levels assigned by the BA: high (bandwidth not met) ; and low (bandwidth exceeded) . These two levels enable the VBR service class to achieve a preselected percentage of dynamic bandwidth before allowing a lower priority service class, ABR, to share in the dynamic bandwidth. Once ABR has achieved its preselected percentage, the remaining dynamic bandwidth is shared among VBR, ABR and UBR.
Four priorities exist for VBR lists, VBR0-3. VBR has a lower latency, but may have cell loss. VBR dynamic lists are guaranteed a minimum bandwidth on an output link. Once the VBR dynamic lists with traffic have received their guaranteed bandwidth, the ABR dynamic list transmits if its minimum bandwidth has not been reached. When the minimum bandwidths for both VBR and ABR have been satisfied, the UBR, VBR and ABR dynamic lists vie in round robin fashion for the remaining bandwidth.
ABR also provides four levels of priority. It differs from VBR in that it guarantees no cell loss because flow control is utilized. ABR dynamic lists are also guaranteed minimum bandwidth on an output link. As noted above, once ABR minimum bandwidth has been satisfied, UBR, VBR, and ABR all vie in round-robin fashion for remaining bandwidth.
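The resulting per-link selection order, preferred lists first, then VBR and ABR up to their guaranteed minimums, then a round robin among VBR, ABR and UBR, might be outlined as below. The four priorities within each dynamic type and the BA's high/low split are omitted for brevity, and all names are assumptions made for the example.

    #include <stdbool.h>

    /* Assumed per-link view of the FSPP scheduler state. */
    typedef struct {
        bool preferred_nonempty[4];   /* preferred lists, priority 0..3           */
        bool vbr_nonempty, abr_nonempty, ubr_nonempty;
        bool vbr_min_bw_met, abr_min_bw_met;
        int  rr_next;                 /* round-robin pointer: 0=VBR, 1=ABR, 2=UBR */
    } LinkSched;

    typedef enum { SRC_NONE, SRC_PREFERRED, SRC_VBR, SRC_ABR, SRC_UBR } Source;

    /* Choose which class of list supplies the next cell on this output link. */
    Source pick_source(LinkSched *s) {
        for (int p = 0; p < 4; p++)                   /* allocated traffic first  */
            if (s->preferred_nonempty[p]) return SRC_PREFERRED;

        if (s->vbr_nonempty && !s->vbr_min_bw_met) return SRC_VBR;  /* guaranteed */
        if (s->abr_nonempty && !s->abr_min_bw_met) return SRC_ABR;  /* minimums   */

        for (int i = 0; i < 3; i++) {                 /* leftover bandwidth shared */
            int c = (s->rr_next + i) % 3;             /* round robin VBR/ABR/UBR   */
            if ((c == 0 && s->vbr_nonempty) ||
                (c == 1 && s->abr_nonempty) ||
                (c == 2 && s->ubr_nonempty)) {
                s->rr_next = (c + 1) % 3;
                return c == 0 ? SRC_VBR : (c == 1 ? SRC_ABR : SRC_UBR);
            }
        }
        return SRC_NONE;
    }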
The list of lists structure introduced with respect to Figs. 5 and 7 is also applicable to the processing of cells at the FSPP. Specifically, with regard to Fig. 10 once again, each "cell" illustrated on one of the preferred and dynamic lists is actually a pair of pointers to a queue having one or more cells to be transmitted from the respective port at the respective priority. Each queue is entered only once on a particular list. If that queue has more than one cell to be transmitted, the queue is put on the list again, but behind all other queues already on the list. Round-robin queue servicing is thus enabled. In summary, each list entry points to a linked list of cells to be transmitted - it is a list of lists. Fairness is provided between queues of like priority, prioritization between lists is enabled, and cell prioritization is maintained.
In this fashion, allocated traffic is transmitted first. Any remaining bandwidth is transmitted according to a prioritized scheme. Thus, multiple classes of service can be provided through the same switch, enabling a customer to pay for the level of service desired (in terms of bandwidth and latency), while maximizing the utilization of the switch bandwidth.
Having described preferred embodiments of the invention, it will be apparent to those skilled in the art that other embodiments incorporating the concepts may be used. For instance, simple variations on the data rates specified herein are to be considered within the scope of this disclosure.
These and other examples of the invention illustrated above are intended by way of example and the actual scope of the invention is to be limited solely by the scope and spirit of the following claims.
Claims
AMENDED CLAIMS
[received by the International Bureau on 11 December 1996 (11.12.96); original claims 1 and 2 replaced by new claims 1-51 (11 pages)]
1. A data switch providing integrated services to data cells transmitted therethrough, each data cell identified as belonging to one of multiple service classes, said switch comprising: plural input port processors, each for buffering and enqueuing input data cells and for transmitting said input data cells through said switch using either allocated or dynamic intra-switch bandwidth; a bandwidth arbiter, in communication with said input port processors, for arbitrating said dynamic intra-switch bandwidth among said input port processors, and for tagging said input data cells as allocated or dynamic intra-switch bandwidth transmitted data cells; plural output port processors, in communication with said input port processors and said bandwidth arbiter, each for buffering and enqueuing said tagged and transmitted data cells, according to said tag and data cell service class information, in a prioritized manner; and a switch fabric enabling said communication between said input port processors, said bandwidth arbiter, and said output port processors.
2. The switch of claim 1 wherein each of said input processors is further for generating scheduling lists comprised of linked lists of queues of said input data cells, each of said scheduling lists
having associated therewith data cells belonging to a common connection.
3. The switch of claim 2 wherein each of said input processors is further for maintaining a switch allocation table having plural entries, each of said entries identifying one of said scheduling lists, said switch allocation table enabling the transmission of said input data cells using said allocated intra-switch bandwidth.
4. The switch of claim 3 wherein each of said input processors is further for maintaining plural dynamic bandwidth lists, each relating to a respective one of said output processors and comprising a linked list of at least one scheduling list, said dynamic bandwidth lists enabling transmission of said input data cells using said dynamic intra-switch bandwidth.
5. The switch of claim 4 wherein each of said input processors is further for maintaining a dynamic port list for conveyance to said bandwidth arbiter, said dynamic port list indicating which of said output processors are requested by said dynamic bandwidth list scheduling lists, and at what priority.
6. The switch of claim 5 wherein said bandwidth arbiter is further for identifying which of said output processors is not scheduled for receipt of an input data cell via said allocated intra-switch bandwidth from an input processor, and for matching said output processors not so scheduled with requested output processors on said dynamic port list based upon said priority.
7. The switch of claim 6 wherein said bandwidth arbiter is further for conveying to said input processor an available output processor which corresponds to a request by said input
processor.
8. The switch of claim 1 wherein each of said output processors is further for receiving an identification of an input data cell to be transmitted by one of said input processors and for providing an indication to said input processor to inhibit said transmission of said input data cell based upon said identification.
9. The switch of claim 2 wherein each of said output processors is further for maintaining allocated and dynamic list structures for each of said data cells received from said input processors via said allocated and dynamic intra-switch bandwidth, respectively.
10. The switch of claim 9 wherein each of said output processors is further for transmitting data cells associated with said allocated list structures out of said switch before data cells associated with said dynamic list structures.
11. The switch of claim 9 wherein each of said output processors is further for providing said allocated and dynamic list structures for each of said multiple service classes.
12. The switch of claim 11 wherein each of said output processors is further for providing said multiple service class list structures for each of plural priorities.
13. A communications switch providing integrated services to data cells transmitted therethrough, each data cell identified as belonging to one of multiple service classes, said switch having at least one input port in communication with an input communications link, at least one output port in communication with an output communications link, and a switch fabric therebetween, said switch comprising:
an input processor, associated with each of said at least one input ports, for buffering, enqueuing and transmitting said data cells received from said input communications link according to a determination, made by said input processor, of whether allocated or dynamic intra-switch bandwidth may be utilized in transmitting said enqueued data cells across said switch fabric; a bandwidth arbiter, in communication with said at least one input processor, for arbitrating said dynamic intra-switch bandwidth among said input processors, and for tagging said data cells with a tag indicative of whether a respective data cell has been transmitted across said switch fabric utilizing said allocated or said dynamic intra-switch bandwidth; and an output processor, associated with each of said at least one output ports, for buffering and enqueuing said data cells transmitted by at least one of said input processors, tagged by said bandwidth arbiter, and received across said switch fabric, said output processor utilizing said bandwidth arbiter tag and data cell service class information in enqueuing said data cells for transmission to said output communications link in a prioritized manner.
14. The switch of claim 13 wherein said input processor is further capable of verifying valid header data format associated with each of said data cells.
15. The switch of claim 13 wherein said input processor is further capable of verifying a valid connection identifier from header data for each of said data cells.
16. The switch of claim 13 wherein said input processor is further capable of analyzing a connection identifier from header data for each of said data cells in associating said data cell with a queue for the respective connection.
17. The switch of claim 13 wherein said input processor enqueues said data cells for a particular connection in a single queue in order of data cell reception from said input communications link.
18. The switch of claim 13 wherein said input processor is further capable of reformatting said data cells received from said input communications link into a switch-internal format.
19. The switch of claim 13 wherein said input processor is further capable of generating scheduling lists comprised of linked lists of queues of said data cells identified as belonging to a common connection.
20. The switch of claim 19 wherein said input processor maintains a respective switch allocation table comprising plural entries, each entry identifying a respective one of said scheduling lists, said switch allocation table enabling transmission of said enqueued data cells through said switch employing said allocated intra-switch bandwidth.
21. The switch of claim 20 wherein said input processor maintains a dynamic bandwidth list for each of said output ports, each said dynamic bandwidth list comprising a linked list of at least one of said scheduling lists, said dynamic bandwidth lists enabling transmission of said enqueued data cells through said switch employing said dynamic intra-switch bandwidth.
22. The switch of claim 21 wherein said input processor maintains a dynamic port list for conveyance to said bandwidth arbiter, said dynamic port list providing an indication to said bandwidth arbiter of which output ports are requested by said scheduling lists associated with said input processor, and at what priority.
23. The switch of claim 22 wherein said bandwidth arbiter is further capable of identifying which of said output ports are not scheduled for receipt of data cells via said allocated intra-switch bandwidth by said input processor, and of matching said output ports not scheduled with said requested output ports on said dynamic port list based upon said priority.
24. The switch of claim 23 wherein said bandwidth arbiter further conveys to said input processor an available output port which matches one of said output ports requested by said input processor.
25. The switch of claim 24 wherein said bandwidth arbiter is further capable of maintaining a history of which input processors were provided with a requested output port in providing a fairness in output port distribution.
26. The switch of claim 13 wherein said bandwidth arbiter is further capable of receiving from said input processor a queue vector which identifies a destination output processor for ones of said data cells to be transmitted through said switch.
27. The switch of claim 13 wherein said input processor identifies, to at least one of said output processors via said switch fabric, a data cell to be transmitted to said at least one output processor, and said at least one output processor is further capable of identifying, for said data cell to be transmitted, via said switch fabric, said bandwidth arbiter-provided tag and a respective queue for buffering said data cell at said output port.
28. The switch of claim 27 wherein said output processor is further capable of providing a signal to said input processor
providing said data cell to be transmitted, via said switch fabric, said signal inhibiting transmission of said data cell until receipt by said input processor of a further signal from said output processor.
29. The switch of claim 13 wherein said output processor is further capable of maintaining list structures for each of data cells received via said allocated and dynamic intra-switch bandwidths, said list structures comprising linked lists of queues having data cells for transmission to said output communications link.
30. The switch of claim 29 wherein each of said queues is restricted, by said output processor, to one entry within said linked list of queues for each of said list structures.
31. The switch of claim 29 wherein said output processor is further capable of transmitting data cells from queues found on said allocated intra-switch bandwidth list structure before transmitting data cells from queues found on said dynamic intra-switch bandwidth list structure.
32. The switch of claim 29 wherein said output processor is further capable of providing said allocated and dynamic intra-switch bandwidth list structures for each of said multiple service classes.
33. The switch of claim 32 wherein said output processor is further capable of providing said multiple service class list structures for each of plural priorities.
34. A method of forwarding a data cell through a communications switch having an input port, an input processor associated with said input port, an output port, an output processor associated with said output port, a switch fabric disposed between said input port and said
output port, and a bandwidth arbiter associated with said switch fabric, said method comprising the steps of: input buffering and enqueuing a data cell received at said input port by said input processor according to whether said data cell is to be forwarded, via said switch fabric, to said output port via allocated or dynamic intra-switch bandwidth; tagging said buffered and enqueued data cell by said bandwidth arbiter as a data cell to be forwarded over said switch fabric via said allocated or dynamic intra-switch bandwidth; forwarding said tagged data cell over said switch fabric by said input processor to said output processor; and output buffering and enqueuing said forwarded data cell, by said output processor at said output port, according to said tag.
35. The method of claim 34 wherein said input buffering and enqueuing step further comprises error checking, by said input processor, a header associated with said data cell.
36. The method of claim 34 wherein said input buffering and enqueuing step further comprises identification, by said input processor, of a connection to which said data cell corresponds from header data associated with said data cell.
37. The method of claim 34 wherein said input buffering and enqueuing step further comprises converting said data cell to a switch-internal data cell format by said input processor.
38. The method of claim 34 wherein said input buffering and enqueuing step further comprises the step of generating, by said input processor, scheduling lists comprised of linked lists of queues identified as belonging to a common connection.
39. The method of claim 38 wherein said input buffering and enqueuing step further comprises the step of maintaining, by said input processor, a switch allocation table comprising plural entries, each entry identifying a respective one of said scheduling lists, said switch allocation table enabling transmission of said enqueued data cell through said switch employing said allocated intra-switch bandwidth.
40. The method of claim 38 wherein said input buffering and enqueuing step further comprises the step of maintaining, by said input processor, a dynamic bandwidth list for said output port, said dynamic bandwidth list comprising a linked list of at least one of said scheduling lists and enabling transmission of said enqueued data cell through said switch employing said dynamic intra-switch bandwidth.
41. The method of claim 40 wherein said input buffering and enqueuing step further comprises the step of maintaining a dynamic port list, by said input processor, said dynamic port list to provide an indication to said bandwidth arbiter of whether said output port is requested by said scheduling lists associated with said input processor, and at what priority.
42. The method of claim 41 wherein said tagging step further comprises the step of identifying, by said bandwidth arbiter, whether said output port is scheduled for receipt of said data cell via said allocated intra-switch bandwidth by said input processor during a first interval, and of matching, by said bandwidth arbiter, said output port not scheduled during said first interval with said output port request on said dynamic port list based upon said priority.
43. The method of claim 42 wherein said step of matching further comprises the step of conveying, by said bandwidth arbiter, said matched output port to said requesting input
processor.
44. The method of claim 42 wherein said step of matching further comprises the step of maintaining a history of matched output port requests.
45. The method of claim 34 wherein said step of tagging further comprises the step of receiving, by said bandwidth arbiter, a queue vector, from said input processor, for identifying a destination output processor for said enqueued data cell to be transmitted through said switch.
46. The method of claim 34 wherein said step of forwarding said tagged data cell further comprises the step of providing, by said output processor, a signal to said input processor inhibiting said forwarding of said tagged data cell until said output processor provides a second signal to said input processor.
47. The method of claim 34 wherein said step of output buffering and enqueuing further comprises the step of maintaining, by said output processor, list structures for each of data cells received from said input processor via said allocated and dynamic intra-switch bandwidth, said list structures comprising linked lists of queues each having data cells for transmission from said output port.
48. The method of claim 47 wherein said step of maintaining further comprises entering, by said output processor, a queue, having a data cell for transmission from said output port, on said linked list of queues only once for each of said list structures.
49. The method of claim 47 wherein said step of maintaining further comprises transmitting data cells from queues found on said allocated intra-switch bandwidth list structure
before transmitting data cells from queues found on said dynamic intra-switch bandwidth list structure.
50. The method of claim 47 wherein said data cell is associated with one of plural service classes and said step of maintaining further comprises providing plural allocated and dynamic intra-switch bandwidth list structures for each of said multiple service classes.
51. The method of claim 50 wherein said step of maintaining further comprises providing said multiple service class list structures for each of plural priorities.
AMENDED SHEET (ARTICLE 19)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US149895P | 1995-07-19 | 1995-07-19 | |
US1498P | 1995-07-19 | ||
PCT/US1996/011943 WO1997004564A1 (en) | 1995-07-19 | 1996-07-18 | Allocated and dynamic bandwidth management |
Publications (2)
Publication Number | Publication Date |
---|---|
EP0839420A1 EP0839420A1 (en) | 1998-05-06 |
EP0839420A4 true EP0839420A4 (en) | 2001-07-18 |
Family
ID=38659695
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP96924622A Withdrawn EP0839420A4 (en) | 1995-07-19 | 1996-07-18 | Allocated and dynamic bandwidth management |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP0839420A4 (en) |
JP (1) | JPH11510010A (en) |
AU (1) | AU6502496A (en) |
WO (1) | WO1997004564A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3471588B2 (en) | 1997-10-30 | 2003-12-02 | 株式会社エヌ・ティ・ティ・ドコモ | Packet data bandwidth control method and packet switching network system in packet switching network |
US6044061A (en) | 1998-03-10 | 2000-03-28 | Cabletron Systems, Inc. | Method and apparatus for fair and efficient scheduling of variable-size data packets in an input-buffered multipoint switch |
US6421348B1 (en) * | 1998-07-01 | 2002-07-16 | National Semiconductor Corporation | High-speed network switch bus |
US7215678B1 (en) | 2000-04-10 | 2007-05-08 | Switchcore, A.B. | Method and apparatus for distribution of bandwidth in a switch |
US7408959B2 (en) * | 2002-06-10 | 2008-08-05 | Lsi Corporation | Method and apparatus for ensuring cell ordering in large capacity switching systems and for synchronizing the arrival time of cells to a switch fabric |
US9942027B2 (en) | 2016-03-23 | 2018-04-10 | Rockley Photonics Limited | Synchronization and ranging in a switching system |
US10091784B1 (en) | 2016-12-31 | 2018-10-02 | Sprint Communications Company L.P. | Device-to-device (D2D) scheduling control in orthogonal frequency division multiplexing (OFDM) wireless system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2015248C (en) * | 1989-06-30 | 1996-12-17 | Gerald R. Ash | Fully shared communications network |
GB2266033B (en) * | 1992-03-09 | 1995-07-12 | Racal Datacom Ltd | Communications bus and controller |
JP3042267B2 (en) * | 1993-07-22 | 2000-05-15 | ケイディディ株式会社 | Adaptive call connection regulation control apparatus and method |
US5485455A (en) * | 1994-01-28 | 1996-01-16 | Cabletron Systems, Inc. | Network having secure fast packet switching and guaranteed quality of service |
US5526344A (en) * | 1994-04-15 | 1996-06-11 | Dsc Communications Corporation | Multi-service switch for a telecommunications network |
1996
- 1996-07-18 AU AU65024/96A patent/AU6502496A/en not_active Abandoned
- 1996-07-18 EP EP96924622A patent/EP0839420A4/en not_active Withdrawn
- 1996-07-18 JP JP9506880A patent/JPH11510010A/en active Pending
- 1996-07-18 WO PCT/US1996/011943 patent/WO1997004564A1/en not_active Application Discontinuation
Non-Patent Citations (3)
Title |
---|
FAN R ET AL: "EXPANDABLE ATOM SWITCH ARCHITECTURE (XATOM) FOR ATM LANS", NEW ORLEANS, MAY 1 - 5, 1994,NEW YORK, IEEE,US, vol. -, 1 May 1994 (1994-05-01), pages 402 - 409, XP000438948 * |
NOBORU ENDO: "SHARED BUFFER MEMORY SWITCH FOR AN ATM EXCHANGE", IEEE TRANSACTIONS ON COMMUNICATIONS,US,IEEE INC. NEW YORK, vol. 41, no. 1, 1993, pages 237 - 245, XP000367768, ISSN: 0090-6778 * |
See also references of WO9704564A1 * |
Also Published As
Publication number | Publication date |
---|---|
WO1997004564A1 (en) | 1997-02-06 |
EP0839420A1 (en) | 1998-05-06 |
AU6502496A (en) | 1997-02-18 |
JPH11510010A (en) | 1999-08-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5748629A (en) | Allocated and dynamic bandwidth management | |
US6438134B1 (en) | Two-component bandwidth scheduler having application in multi-class digital communications systems | |
US6064677A (en) | Multiple rate sensitive priority queues for reducing relative data transport unit delay variations in time multiplexed outputs from output queued routing mechanisms | |
US6377583B1 (en) | Rate shaping in per-flow output queued routing mechanisms for unspecified bit rate service | |
US5926459A (en) | Rate shaping in per-flow queued routing mechanisms for available bit rate service | |
US6295295B1 (en) | Scheduler for an information packet switch | |
US6064650A (en) | Rate shaping in per-flow output queued routing mechanisms having output links servicing multiple physical layers | |
US6038217A (en) | Rate shaping in per-flow output queued routing mechanisms for available bit rate (ABR) service in networks having segmented ABR control loops | |
EP0839422B1 (en) | Linked-list structures for multiple levels of control in an atm switch | |
EP1111851B1 (en) | A scheduler system for scheduling the distribution of ATM cells | |
EP0839420A4 (en) | Allocated and dynamic bandwidth management | |
EP0817435B1 (en) | A switch for a packet communication system | |
EP0817431A2 (en) | A packet switched communication system | |
EP0817434B1 (en) | A packet switched communication system and traffic shaping process | |
EP0817432A2 (en) | A packet switched communication system | |
WO1997004561A1 (en) | Link scheduling | |
WO1997004542A2 (en) | Multipoint-to-point arbitration in a network switch | |
WO1997004562A1 (en) | Point-to-multipoint arbitration | |
WO1997004570A1 (en) | Controlling bandwidth allocation using a pace counter | |
WO1997004565A9 (en) | Priority arbitration for point-to-point and multipoint transmission | |
EP1183833A1 (en) | Apparatus and method for traffic shaping in a network switch | |
WO1997004565A1 (en) | Priority arbitration for point-to-point and multipoint transmission | |
WO1997004541A2 (en) | Multipoint to multipoint processing in a network switch having data buffering queues | |
WO1997004568A1 (en) | Asynchronous transfer mode based service consolidation switch |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| 17P | Request for examination filed | Effective date: 19980216 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): DE FR GB |
| A4 | Supplementary search report drawn up and despatched | Effective date: 20010607 |
| AK | Designated contracting states | Kind code of ref document: A4; Designated state(s): DE FR GB |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 18D | Application deemed to be withdrawn | Effective date: 20050202 |