WO1997004557A1 - Minimum guaranteed cell rate method and apparatus

Info

Publication number: WO1997004557A1 (application PCT/US1996/011936)
Authority: WO (WIPO/PCT)
Prior art keywords: link, cell, counter, connection, buffer
Other languages: French (fr)
Inventors: Thomas A. Manning, Stephen A. Caldara, Stephen A. Hauser
Original Assignees: Fujitsu Network Communications, Inc.; Fujitsu Limited
Application filed by Fujitsu Network Communications, Inc. and Fujitsu Limited
Priority: JP9506877A (published as JPH11510008A), PCT/US1996/011936, AU65020/96A (published as AU6502096A)
Publication of WO1997004557A1

Classifications

    • H04L47/18 End to end
    • G06F15/17375 One dimensional, e.g. linear array, ring
    • H04L12/4608 LAN interconnection over ATM networks
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L12/5602 Bandwidth control in ATM Networks, e.g. leaky bucket
    • H04L47/10 Flow control; Congestion control
    • H04L47/11 Identifying congestion
    • H04L47/266 Stopping or restarting the source, e.g. X-on or X-off
    • H04L47/267 Flow control; Congestion control using explicit feedback to the source, e.g. choke packets sent by the destination endpoint
    • H04L47/29 Flow control; Congestion control using a combination of thresholds
    • H04L47/30 Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
    • H04L47/621 Individual queue per connection or flow, e.g. per VC
    • H04L47/70 Admission control; Resource allocation
    • H04L47/745 Reaction in network
    • H04L47/805 QOS or priority aware
    • H04L47/822 Collecting or measuring resource availability data
    • H04L49/106 ATM switching elements using space switching, e.g. crossbar or matrix
    • H04L49/107 ATM switching elements using shared medium
    • H04L49/153 ATM switching fabrics having parallel switch planes
    • H04L49/1553 Interconnection of ATM switching modules, e.g. ATM switching fabrics
    • H04L49/1576 Crossbar or matrix
    • H04L49/203 ATM switching fabrics with multicast or broadcast capabilities
    • H04L49/253 Routing or path finding in a switch fabric using establishment or release of connections between ports
    • H04L49/255 Control mechanisms for ATM switching fabrics
    • H04L49/256 Routing or path finding in ATM switching fabrics
    • H04L49/3081 ATM peripheral units, e.g. policing, insertion or extraction
    • H04L49/309 Header conversion, routing tables or routing tags
    • H04L49/455 Provisions for supporting expansion in ATM switches
    • H04L49/552 Prevention, detection or correction of errors by ensuring the integrity of packets received through redundant connections
    • H04L49/555 Error detection
    • H04Q11/0478 Provisions for broadband connections
    • H04J3/0632 Synchronisation of packets and cells, e.g. transmission of voice via a packet network, circuit emulation service [CES]
    • H04J3/0682 Clock or time synchronisation in a network by delay compensation, e.g. by compensation of propagation delay or variations thereof, by ranging
    • H04J3/0685 Clock or time synchronisation in a node; Intranode synchronisation
    • H04L2012/5614 User Network Interface
    • H04L2012/5616 Terminal equipment, e.g. codecs, synch.
    • H04L2012/5627 Fault tolerance and recovery
    • H04L2012/5628 Testing
    • H04L2012/5629 Admission control
    • H04L2012/5631 Resource management and allocation
    • H04L2012/5632 Bandwidth allocation
    • H04L2012/5634 In-call negotiation
    • H04L2012/5635 Backpressure, e.g. for ABR
    • H04L2012/5642 Multicast/broadcast/point-multipoint, e.g. VOD
    • H04L2012/5643 Concast/multipoint-to-point
    • H04L2012/5647 Cell loss
    • H04L2012/5648 Packet discarding, e.g. EPD, PTD
    • H04L2012/5649 Cell delay or jitter
    • H04L2012/5651 Priority, marking, classes
    • H04L2012/5652 Cell construction, e.g. including header, packetisation, depacketisation, assembly, reassembly
    • H04L2012/5672 Multiplexing, e.g. coding, scrambling
    • H04L2012/5679 Arbitration or scheduling
    • H04L2012/5681 Buffer or queue management
    • H04L2012/5682 Threshold; Watermark
    • H04L2012/5683 Buffer or queue management for avoiding head of line blocking
    • H04L2012/5685 Addressing issues
    • H04L69/324 Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the data link layer [OSI layer 2], e.g. HDLC
    • H04L7/046 Speed or phase control by synchronisation signals using a dotting sequence

Definitions

  • This application relates to communications methods and apparatus in a distributed switching architecture, and in particular to bandwidth management in a distributed switching architecture.
  • One known protocol, FCVC (Flow Controlled Virtual Connection), involves a credit-based flow control system, where a number of connections exist within the same link with the necessary buffers established and flow control monitored on a per-connection basis. Buffer usage over a known time interval, the link round-trip time, is determined in order to calculate the per-connection bandwidth. A trade-off is established between maximum bandwidth and buffer allocation per connection. Such per-connection feedback and subsequent flow control at the transmitter avoids data loss from an inability of the downstream element to store data cells sent from the upstream element.
  • The flow control protocol isolates each connection, ensuring lossless cell transmission for that connection.
  • Connection-level flow control results in a trade-off between update frequency and the realized bandwidth for the connection.
  • A high update frequency minimizes situations in which a large number of receiver cell buffers are available although the transmitter incorrectly believes them to be unavailable, and thus reduces the number of buffers that must be set aside for a connection.
  • However, where a large number of connections exist in the same link, a high update frequency used to control a traffic flow requires a high utilization of reverse-direction bandwidth to carry the necessary flow control buffer update information. Since transmission systems are typically symmetrical, with traffic and flow control buffer update information flowing in both directions, a high update frequency is wasteful of link bandwidth.
  • The presently claimed invention provides, in a link-level and virtual-connection flow controlled environment, the ability to guarantee a minimum bandwidth to a connection through a link, the ability to employ shared bandwidth above that minimum, and the ability to guarantee no cell loss due to buffer overflows at the receiver, while providing a high level of link utilization efficiency.
  • The amount of the bandwidth guarantee is individually programmable for each connection.
  • A buffer resource in a receiver, downstream of a transmitter, is logically divided into first buffers dedicated to allocated-bandwidth cell traffic and second buffers shared among dynamic-bandwidth cell traffic.
  • The invention utilizes elements in both the transmitter and the receiver, at both the connection level and the link level, to enable buffer state flow control at the link level, otherwise known as link flow control, in addition to flow control on a per-connection basis.
  • Link flow control enables receiver cell buffer sharing while maintaining a per-connection bandwidth guarantee. A higher and thus more efficient utilization of receiver cell buffers is achieved. No cell loss due to buffer overflows at the receiver is guaranteed, leading to high link utilization in a frame traffic environment, as well as low delay in the absence of cell retransmission.
  • Link flow control may have a high update frequency, while connection flow control information may have a low update frequency.
  • The link can be defined either as a physical link or as a logical grouping comprising plural logical connections.
  • The resultant system adds more capability than is defined in the presently known art. It eliminates the excessive waste of link bandwidth that results from reliance on a per-connection flow control mechanism alone, while taking advantage of both a high update frequency at the link level and buffer sharing to minimize the buffer requirements of the receiver. Yet this flow control mechanism still ensures the same lossless transmission of cells as the prior art.
  • A judicious use of the counters associated with the link-level and connection-level flow control mechanisms allows easy incorporation of a dynamic buffer allocation mechanism to control the number of buffers allocated to each connection, further reducing the buffer requirements.
  • Additional counters associated with the link-level and connection-level flow control mechanisms at the transmitter therefore provide the ability to guarantee a minimum, allocated bandwidth to a connection through a link, the ability to transmit dynamically distributed bandwidth in conjunction therewith, and the ability to guarantee no cell loss, while providing a high level of link utilization efficiency. Any given connection can be flow controlled below the guaranteed minimum, but only by the receiver as a result of congestion downstream on the same connection; congestion on other connections does not result in bandwidth reduction below the allocated rate.
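The interaction between allocated and shared bandwidth can be pictured with a small sketch. The names below (alloc_credit, shared_credit) are hypothetical and do not correspond to the patent's counters; the sketch only illustrates the stated property that a connection draws first on its own guaranteed allocation and only then competes for the shared pool, so congestion on other connections cannot push it below its allocated rate.

```c
/* Illustrative sketch only: the patent's additional transmitter counters for the
 * guaranteed-minimum mechanism are not named in this excerpt, so "alloc_credit"
 * and the link-wide "shared_credit" are hypothetical placeholders. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned alloc_credit;   /* remaining guaranteed (allocated) cell credits */
} Connection;

typedef struct {
    unsigned shared_credit;  /* remaining dynamically shared cell credits for the link */
} Link;

/* A cell uses an allocated credit first; otherwise it competes for shared credit. */
static bool can_send_cell(Connection *c, Link *l)
{
    if (c->alloc_credit > 0) {   /* guaranteed minimum: never blocked by other connections */
        c->alloc_credit--;
        return true;
    }
    if (l->shared_credit > 0) {  /* dynamic bandwidth above the minimum, shared link-wide */
        l->shared_credit--;
        return true;
    }
    return false;                /* hold the cell; no loss, only back-pressure */
}

int main(void)
{
    Connection c = { .alloc_credit = 2 };
    Link l = { .shared_credit = 1 };
    for (int i = 0; i < 4; i++)
        printf("cell %d: %s\n", i, can_send_cell(&c, &l) ? "sent" : "held");
    return 0;
}
```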
  • The presently disclosed mechanism may further be combined with a mechanism for prioritized access to a shared buffer resource.
  • Fig. 1 is a block diagram of a connection-level flow control apparatus as known in the prior art;
  • Fig. 2 is a block diagram of a link-level flow control apparatus according to the present invention;
  • Figs. 3A and 3B are flow diagram representations of counter initialization and preparation for cell transmission within a flow control method according to the present invention;
  • Fig. 4 is a flow diagram representation of cell transmission within the flow control method according to the present invention;
  • Figs. 5A and 5B are flow diagram representations of update cell preparation and transmission within the flow control method according to the present invention;
  • Figs. 6A and 6B are flow diagram representations of an alternative embodiment of the update cell preparation and transmission of Figs. 5A and 5B;
  • Figs. 7A and 7B are flow diagram representations of update cell reception within the flow control method according to the present invention;
  • Figs. 8A, 8B and 8C are flow diagram representations of check cell preparation, transmission and reception within the flow control method according to the present invention;
  • Figs. 9A, 9B and 9C are flow diagram representations of an alternative embodiment of the check cell preparation, transmission and reception of Figs. 8A, 8B and 8C;
  • Fig. 10 illustrates a cell buffer pool according to the present invention as viewed from an upstream element;
  • Fig. 11 is a block diagram of a link-level flow control apparatus in an upstream element providing prioritized access to a shared buffer resource in a downstream element according to the present invention;
  • Figs. 12A and 12B are flow diagram representations of counter initialization and preparation for cell transmission within a prioritized access method according to the present invention;
  • Figs. 13A and 13B illustrate alternative embodiments of cell buffer pools according to the present invention as viewed from an upstream element;
  • Fig. 14 is a block diagram of a flow control apparatus in an upstream element providing guaranteed minimum bandwidth and prioritized access to a shared buffer resource in a downstream element according to the present invention;
  • Figs. 15A and 15B are flow diagram representations of counter initialization and preparation for cell transmission within a guaranteed minimum bandwidth mechanism employing prioritized access according to the present invention;
  • Fig. 16 is a block diagram representation of a transmitter, a data link, and a receiver in which the presently disclosed joint flow control mechanism is implemented;
  • Fig. 17 illustrates data structures associated with queues in the receiver of Fig. 16.
  • With regard to connection-level flow control, the resources required for connection-level flow control are presented first.
  • An upstream transmitter element 12, also known as a UP subsystem, is coupled through a link 10 to a downstream receiver element 14, also known as a DP subsystem.
  • Each element 12, 14 can act as a switch between other network elements.
  • For example, the upstream element 12 in Fig. 1 can receive data from a PC (not shown). This data is communicated through the link 10 to the downstream element 14, which in turn can forward the data to a device such as a printer (not shown).
  • Alternatively, the illustrated network elements 12, 14 can themselves be network end-nodes.
  • The essential function of the presently described arrangement is the transfer of data cells from the upstream element 12 via a connection 20 in the link 10 to the downstream element 14, where the data cells are temporarily held in cell buffers 28.
  • Cell format is known, and is further described in "Quantum Flow Control", Version 1.5.1, dated June 27, 1995 and subsequently published in a later version by the Flow Control Consortium.
  • The block labelled Cell Buffers 28 represents a set of cell buffers dedicated to the respective connection 20. Data cells are released from the buffers 28, either through forwarding to another link beyond the downstream element 14, or through cell utilization within the downstream element 14. The latter event can include the construction of data frames from the individual data cells if the downstream element 14 is an end-node such as a work station.
  • Each of the upstream and downstream elements 12, 14 is controlled by a respective processor, labelled UP (Upstream Processor) 16 and DP (Downstream Processor) 18.
  • Associated with each of the processors 16, 18 are sets of buffer counters for implementing the connection-level flow control. These buffer counters are each implemented as an increasing counter/limit register set to facilitate resource usage changes.
  • The counters of Fig. 1, described in further detail below, are implemented in a first embodiment in UP internal RAM.
  • The discussion of the prior art uses some of the same counter names as the presently disclosed flow control method and apparatus. This is merely to indicate the presence of a similar function or element in the prior art with respect to counters, registers, or like elements now disclosed.
  • Within the link 10, which in a first embodiment is a copper conductor, multiple virtual connections 20 are provided.
  • In another embodiment, the link 10 is a logical grouping of plural virtual connections 20.
  • The number of connections 20 implemented within the link 10 depends upon the needs of the respective network elements 12, 14, as well as the required bandwidth per connection. In Fig. 1, only one connection 20 and associated counters are illustrated for simplicity.
  • First, with respect to the upstream element 12 of Fig. 1, two buffer state controls are provided, BS_Counter 22 and BS_Limit 24.
  • Each is implemented as a fourteen-bit counter/register, allowing a connection to have 16,383 buffers. This number would support, for example, 139 Mbps, 10,000 kilometer round-trip service.
  • The buffer state counters 22, 24 are employed only if the connection 20 in question is flow-control enabled. That is, a bit in a respective connection descriptor, or queue descriptor, of the UP 16 is set indicating the connection 20 is flow-control enabled.
  • BS_Counter 22 is incremented by the UP 16 each time a data cell is transferred out of the upstream element 12 and through the associated connection 20.
  • Periodically, as described below, this counter 22 is adjusted during an update event based upon information received from the downstream element 14.
  • BS_Counter 22 thus presents an indication of the number of data cells either currently being transmitted in the connection 20 between the upstream and downstream elements 12, 14, or yet unreleased from buffers 28 in the downstream element 14.
  • BS_Limit 24 is set at connection configuration time to reflect the number of buffers 28 available within the receiver 14 for this connection 20. For instance, if BS_Counter 22 for this connection 20 indicates that twenty data cells have been transmitted and BS_Limit 24 indicates that this connection 20 is limited to twenty receiver buffers 28, the UP 16 will inhibit further transmission from the upstream element 12 until an indication is received from the downstream element 14 that further buffer space 28 is available for that connection 20.
  • Tx_Counter 26 is used to count the total number of data cells transmitted by the UP 16 through this connection 20.
  • This is a twenty-eight bit counter which rolls over at 0xFFFFFFF.
  • Tx_Counter 26 is used during a check event to account for errored cells for this connection 20.
  • The DP 18 also manages a set of counters for each connection 20.
  • Buffer_Limit 30 performs a policing function in the downstream element 14 to protect against misbehaving transmitters.
  • The Buffer_Limit register 30 indicates the maximum number of cell buffers 28 in the receiver 14 which this connection 20 can use.
  • BS_Limit 24 is equal to Buffer_Limit 30.
  • This function is coordinated by network management software. To avoid the "dropping" of data cells in transmission, an increase in buffers per connection is reflected first in Buffer_Limit 30 prior to BS_Limit 24. Conversely, a reduction in the number of receiver buffers per connection is reflected first in BS_Limit 24 and thereafter in Buffer_Limit 30.
  • Buffer_Counter 32 provides an indication of the number of buffers 28 in the downstream element 14 which are currently being used for the storage of data cells. As described subsequently, this value is used in providing the upstream element 12 with a more accurate picture of buffer availability in the downstream element 14. Both the Buffer_Limit 30 and Buffer_Counter 32 are fourteen bits wide in the first embodiment.
  • N2_Limit 34 determines the frequency of connection flow-rate communication to the upstream transmitter 12. A cell containing such flow-rate information is sent upstream every time the receiver element 14 forwards a number of cells equal to N2_Limit 34 out of the receiver element 14. This updating activity is further described subsequently.
  • N2_Limit 34 is six bits wide.
  • The DP 18 uses N2_Counter 36 to keep track of the number of cells which have been forwarded out of the receiver element 14 since the last time the N2_Limit 34 was reached.
  • N2_Counter 36 is six bits wide.
  • In a first embodiment, the DP 18 maintains Fwd_Counter 38, which keeps a running count of the cells forwarded out of the receiver element 14 through this connection. The total number of cells received by the receiver element 14 can be derived by adding Buffer_Counter 32 to Fwd_Counter 38. Fwd_Counter 38 is employed in correcting the transmitter element 12 for errored cells during the check event, as described below.
  • Fwd_Counter 38 is twenty-eight bits wide in the first embodiment.
  • In a second embodiment, the DP 18 maintains Rx_Counter 40, a counter which is incremented each time the downstream element 14 receives a data cell through the respective connection 20. The value of this counter 40 is then usable directly in response to check cells and in the generation of an update cell, both of which will be described further below. Similar to Fwd_Counter 38, Rx_Counter 40 is twenty-eight bits wide in this second embodiment.
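As a compact summary of the per-connection counters just described, the following C sketch models the upstream (UP) and downstream (DP) state with the widths given in the text. It is an illustrative model only, not the patent's register layout.

```c
/* Illustrative model of the per-connection counters; field widths (14/6/28 bits)
 * follow the first embodiment described in the text. */
#include <stdint.h>
#include <stdio.h>

typedef struct {                 /* upstream (UP) side, per connection */
    uint16_t bs_counter;         /* 14 bits: cells in flight or still buffered downstream */
    uint16_t bs_limit;           /* 14 bits: receiver buffers granted to this connection */
    uint32_t tx_counter;         /* 28 bits: total cells transmitted, rolls over at 0xFFFFFFF */
} UpConn;

typedef struct {                 /* downstream (DP) side, per connection */
    uint16_t buffer_limit;       /* 14 bits: policing limit on buffers this connection may hold */
    uint16_t buffer_counter;     /* 14 bits: buffers currently occupied */
    uint8_t  n2_limit;           /*  6 bits: forwarded cells needed to trigger an update */
    uint8_t  n2_counter;         /*  6 bits: cells forwarded since the last update */
    uint32_t fwd_counter;        /* 28 bits: total cells forwarded (first embodiment) */
    uint32_t rx_counter;         /* 28 bits: total cells received (second embodiment) */
} DpConn;

int main(void)
{
    UpConn up = {0};
    DpConn dp = {0};
    printf("UP state %zu bytes, DP state %zu bytes\n", sizeof up, sizeof dp);
    return 0;
}
```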
  • There are two events in addition to a steady state condition in the connection-level flow controlled protocol: update and check.
  • In the steady state condition, data cells are transmitted from the transmitter element 12 to the receiver element 14.
  • In the update event, buffer occupancy information is returned upstream by the receiver element 14 to correct counter values in the transmitter element 12.
  • Check mode is used to check for cells lost or injected due to transmission errors between the upstream transmitter and downstream receiver elements 12, 14.
  • Connection-level counters are augmented with "[i]" to indicate association with one connection [i] of plural possible connections.
  • Prior to any activity, counters in the upstream and downstream elements 12, 14 are initialized, as illustrated in Fig. 3A. Initialization includes zeroing counters, and providing initial values to limit registers such as Link_BS_Limit and Link_Buffer_Limit.
  • Buffer_Limit[i] is shown being initialized to (RTT*BW) + N2, which represents the round-trip time times the virtual connection bandwidth, plus accommodation for delays in processing the update cell.
  • Link_N2_Limit is initialized to a value "X", where "X" represents the buffer state update frequency for the link, and N2_Limit[i] is initialized to a value "Y", where "Y" represents the buffer state update frequency for each connection.
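The initialization rule Buffer_Limit[i] = (RTT*BW) + N2 can be checked against the 139 Mbps, 10,000 kilometer round-trip example given earlier. The sketch below is just that arithmetic; the assumed propagation delay (about 5 microseconds per kilometer), the 53-byte cell size, and the N2 allowance of 40 are illustrative assumptions, not values taken from the patent.

```c
/* Worked arithmetic for Buffer_Limit[i] = RTT*BW + N2 using the 139 Mbps,
 * 10,000 km round-trip example. Propagation speed, cell size, and the N2
 * allowance are assumptions for illustration only. */
#include <stdio.h>

int main(void)
{
    double round_trip_km = 10000.0;          /* round-trip distance from the example */
    double us_per_km     = 5.0;              /* assumed propagation delay per km */
    double rtt_s         = round_trip_km * us_per_km / 1e6;   /* about 0.05 s */
    double bw_bps        = 139e6;            /* 139 Mbps service */
    double cell_bits     = 53 * 8;           /* 53-byte ATM cell */
    double n2            = 40;               /* assumed allowance for update-cell processing delay */

    double buffers = rtt_s * bw_bps / cell_bits + n2;
    /* Prints roughly 16,400 buffers, close to the 16,383 ceiling of a 14-bit counter. */
    printf("Buffer_Limit[i] ~ %.0f cell buffers\n", buffers);
    return 0;
}
```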
  • The UP 16 of the transmitter element 12 determines which virtual connection 20 (VC) has a non-zero cell count (i.e. has a cell ready to transmit), a BS_Counter value less than the BS_Limit, and an indication that the VC is next to send (also in Figs. 3A and 3B).
  • The UP 16 increments BS_Counter 22 and Tx_Counter 26 whenever the UP 16 transmits a data cell over the respective connection 20, assuming flow control is enabled (Fig. 4).
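A minimal sketch of this transmit-side gating, assuming a flow-control-enabled connection, follows. Field names mirror the counters in the text; the cells_queued field and the 28-bit masking are simplifications.

```c
/* Sketch of transmit gating for one flow-control-enabled connection: a cell is
 * sent only while BS_Counter < BS_Limit, and each transmission bumps BS_Counter
 * and the 28-bit Tx_Counter. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TX_MASK 0xFFFFFFFu          /* 28-bit roll-over, per the text */

typedef struct {
    unsigned cells_queued;          /* non-zero cell count means a cell is ready to send */
    unsigned bs_counter;            /* cells in flight or unreleased downstream */
    unsigned bs_limit;              /* receiver buffers granted to this connection */
    uint32_t tx_counter;            /* total cells transmitted */
} UpConn;

/* Returns true if a cell was transmitted for this connection. */
static bool try_transmit(UpConn *c)
{
    if (c->cells_queued == 0 || c->bs_counter >= c->bs_limit)
        return false;                                /* nothing ready, or downstream buffers exhausted */
    c->cells_queued--;
    c->bs_counter++;                                 /* one more cell outstanding downstream */
    c->tx_counter = (c->tx_counter + 1) & TX_MASK;   /* 28-bit running total */
    return true;
}

int main(void)
{
    UpConn c = { .cells_queued = 3, .bs_limit = 2 };
    while (try_transmit(&c))
        ;
    printf("sent %u, still queued %u (blocked at BS_Limit)\n", c.bs_counter, c.cells_queued);
    return 0;
}
```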
  • When a data cell is forwarded out of the receiver element 14, Buffer_Counter 32 is decremented. Buffer_Counter 32 should never exceed Buffer_Limit 30 when the connection-level flow control protocol is enabled, with the exception of when BS_Limit 24 has been decreased and the receiver element 14 has yet to forward sufficient cells to bring Buffer_Counter 32 below Buffer_Limit 30.
  • A buffer state update occurs when the receiver element 14 has forwarded a number of data cells equal to N2_Limit 34 out of the receiver element 14.
  • In one embodiment, the update involves the transfer of the value of Fwd_Counter 38 from the receiver element 14 back to the transmitter element 12 in an update cell, as in Fig. 6A.
  • In the other embodiment, the value of Rx_Counter 40 minus Buffer_Counter 32 is conveyed in the update cell, as in Fig. 5A.
  • The update cell is used to update the value in BS_Counter 22, as shown for the two embodiments in Fig. 7A.
  • Since BS_Counter 22 is independent of buffer allocation information, buffer allocation can be changed without impacting the performance of this aspect of connection-level flow control. Update cells require an allocated bandwidth to ensure a bounded delay. This delay needs to be accounted for, as a component of round-trip time, to determine the buffer allocation for the respective connection.
  • The amount of bandwidth allocated to the update cells is a function of a counter, Max_Update_Counter (not illustrated), at an associated downstream transmitter element (not illustrated). This counter forces the scheduling of update and check cells, the latter to be discussed subsequently.
  • There is also a Min_Update_Interval counter (not shown) in the downstream transmitter element, which controls the spacing between update cells. Normal cell packing is seven records per cell, and Min_Update_Interval is similarly set to seven. Since the UP 16 can only process one update record per cell time, back-to-back, fully packed update cells received at the UP 16 would cause some records to be dropped.
  • An update event occurs as follows, with regard to Figs. 1, 5A and 6A.
  • When the receiver element 14 forwards a cell, Buffer_Counter 32 is decremented and N2_Counter 36 and Fwd_Counter 38 are incremented.
  • When N2_Counter 36 is equal to N2_Limit 34, the DP 18 prepares an update cell for transmission back to the upstream element 12 and N2_Counter 36 is set to zero.
  • A connection indicator taken from the cell forwarded by the downstream element 14 identifies which connection 20 is to be updated at the upstream element 12.
  • In one embodiment, the DP 18 causes the Fwd_Counter 38 value to be inserted into an update record payload (Fig. 6A).
  • In the other embodiment, the DP 18 causes the Rx_Counter 40 value minus the Buffer_Counter 32 value to be inserted into the update record payload (Fig. 5A).
  • The update cell is then transmitted to the upstream element 12.
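The receiver-side bookkeeping for the update event can be sketched as follows. Both embodiments of the update record are shown; the send_update function and the connection id are placeholders standing in for the update-cell construction described later in the text.

```c
/* Sketch of receiver-side update generation. The record carries either
 * Fwd_Counter (Fig. 6A style) or Rx_Counter - Buffer_Counter (Fig. 5A style). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MASK28 0xFFFFFFFu

typedef struct {
    unsigned buffer_counter;   /* buffers currently occupied by this connection */
    unsigned n2_counter;       /* cells forwarded since the last update */
    unsigned n2_limit;         /* update frequency threshold */
    uint32_t fwd_counter;      /* total cells forwarded (first embodiment) */
    uint32_t rx_counter;       /* total cells received (second embodiment) */
    bool     use_fwd_counter;  /* choose which embodiment the record carries */
} DpConn;

static void send_update(unsigned conn_id, uint32_t record)
{
    /* Placeholder for handing the record to the paired transmitter for packing. */
    printf("update for connection %u: %u\n", conn_id, (unsigned)record);
}

/* Called when the receiver element forwards one cell out of its buffers. */
static void on_cell_forwarded(unsigned conn_id, DpConn *c)
{
    c->buffer_counter--;
    c->fwd_counter = (c->fwd_counter + 1) & MASK28;
    if (++c->n2_counter >= c->n2_limit) {
        uint32_t record = c->use_fwd_counter
            ? c->fwd_counter
            : ((c->rx_counter - c->buffer_counter) & MASK28);
        send_update(conn_id, record);
        c->n2_counter = 0;
    }
}

int main(void)
{
    DpConn c = { .buffer_counter = 5, .n2_limit = 3, .rx_counter = 5, .use_fwd_counter = false };
    for (int i = 0; i < 5; i++)
        on_cell_forwarded(7, &c);   /* connection id 7 is arbitrary */
    return 0;
}
```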
  • The UP 16 receives the connection indicator from the update record to identify the transmitter connection, and extracts the Fwd_Counter 38 value or the Rx_Counter 40 minus Buffer_Counter 32 value from the update record.
  • BS_Counter 22 is reset to the value of Tx_Counter 26 minus the update record value (Fig. 7A). If this connection was disabled from transmitting due to BS_Counter 22 being equal to or greater than BS_Limit 24, this condition should now be reversed, and if so the connection should again be enabled for transmitting.
  • The update event provides the transmitting element 12 with an indication of how many cells originally transmitted by it have now been released from buffers within the receiving element 14, and thus gives the transmitting element 12 a more accurate indication of receiver element 14 buffer 28 availability for that connection 20.
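At the transmitter, handling of an arriving update record reduces to the subtraction described above. The sketch below assumes a tx_enabled flag to represent the re-enabling of a connection previously blocked at BS_Limit; the masked 28-bit subtraction is an assumption about how roll-over of Tx_Counter would be tolerated.

```c
/* Sketch of update handling at the transmitter: BS_Counter becomes Tx_Counter
 * minus the update record value, and a connection blocked at BS_Limit is
 * re-enabled if the new value falls below the limit. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MASK28 0xFFFFFFFu

typedef struct {
    uint32_t bs_counter;
    uint32_t bs_limit;
    uint32_t tx_counter;
    bool     tx_enabled;
} UpConn;

static void on_update_record(UpConn *c, uint32_t record)
{
    c->bs_counter = (c->tx_counter - record) & MASK28;   /* cells still outstanding downstream */
    if (c->bs_counter < c->bs_limit)
        c->tx_enabled = true;                            /* unblock a connection held at its limit */
}

int main(void)
{
    UpConn c = { .bs_counter = 20, .bs_limit = 20, .tx_counter = 120, .tx_enabled = false };
    on_update_record(&c, 105);   /* receiver reports 105 cells forwarded (or received minus buffered) */
    printf("BS_Counter=%u enabled=%d\n", (unsigned)c.bs_counter, c.tx_enabled);   /* 15, 1 */
    return 0;
}
```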
  • The buffer state check event serves two purposes: 1) it provides a mechanism to calculate and compensate for cell loss or cell insertion due to transmission errors; and 2) it provides a mechanism to start (or restart) a flow if update cells were lost or if enough data cells were lost that N2_Limit 34 is never reached.
  • One timer (not shown) in the UP subsystem 16 serves all connections. The connections are enabled or disabled on a per connection basis as to whether to send check cells from the upstream transmitter element 12 to the downstream receiver element 14.
  • The check process in the transmitter element 12 involves searching all of the connection descriptors to find one which is check enabled (see Figs. 8A, 9A).
  • The check cell is forwarded to the receiver element 14 and the next check enabled connection is identified.
  • The spacing between check cells for the same connection is a function of the number of active flow-controlled connections times the mandated spacing between check cells for all connections.
  • Check cells have priority over update cells.
  • The check event occurs as follows, with regard to Figs. 8A through 8C and 9A through 9C.
  • Each transmit element 12 connection 20 is checked after a timed check interval is reached. If the connection is flow-control enabled and the connection is valid, then a check event is scheduled for transmission to the receiver element 14.
  • A buffer state check cell is generated using the Tx_Counter 26 value for that connection 20 in the check cell payload, and is transmitted using the connection indicator from the respective connection descriptor (Figs. 8A and 9A).
  • A calculation of errored cells is made at the receiver element 14 by summing Fwd_Counter 38 with Buffer_Counter 32, and subtracting this value from the contents of the transmitted check cell record, the value of Tx_Counter 26 (Fig. 9B).
  • The value of Fwd_Counter 38 is increased by the errored cell count.
  • An update record with the new value for Fwd_Counter 38 is then generated. This updated Fwd_Counter 38 value subsequently updates the BS_Counter 22 value in the transmitter element 12.
  • The check event thus enables accounting for cells transmitted by the transmitter element 12, through the connection 20, but either dropped or not received by the receiver element 14.
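The receiver-side check calculation, for the Fwd_Counter embodiment, is a single subtraction followed by a correction, as sketched below. The function name and the masked 28-bit arithmetic are illustrative assumptions.

```c
/* Sketch of check-cell handling at the receiver (Fwd_Counter embodiment): the
 * errored-cell count is the transmitter's Tx_Counter, carried in the check cell,
 * minus the cells the receiver can account for (forwarded plus still buffered).
 * Fwd_Counter absorbs the difference and the corrected value is returned for
 * the resulting update record. */
#include <stdint.h>
#include <stdio.h>

#define MASK28 0xFFFFFFFu

typedef struct {
    uint32_t buffer_counter;
    uint32_t fwd_counter;
} DpConn;

static uint32_t on_check_cell(DpConn *c, uint32_t tx_counter_from_check)
{
    uint32_t accounted = (c->fwd_counter + c->buffer_counter) & MASK28;
    uint32_t errored   = (tx_counter_from_check - accounted) & MASK28;  /* cells lost in transit */
    c->fwd_counter     = (c->fwd_counter + errored) & MASK28;
    return c->fwd_counter;          /* value carried back in the resulting update record */
}

int main(void)
{
    DpConn c = { .buffer_counter = 4, .fwd_counter = 90 };
    uint32_t update = on_check_cell(&c, 97);   /* transmitter has sent 97 cells; 3 never arrived */
    printf("corrected Fwd_Counter = %u\n", (unsigned)update);   /* 93 */
    return 0;
}
```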
  • A "no cell loss" guarantee is enabled using buffer state accounting at the connection level, since the transmitter element 12 has an up-to-date account of the number of buffers 28 in the receiver element 14 available for receipt of data cells, and has an indication of when data cell transmission should be ceased due to the absence of available buffers 28 downstream.
  • Link-level flow control, also known as link-level buffer state accounting, is added to connection-level flow control. It is possible for such link-level flow control to be implemented without connection-level flow control. However, a combination of the two is preferable since without connection-level flow control there would be no restriction on the number of buffers a single connection might consume.
  • Link-level flow control enables cell buffer sharing at a receiver element while maintaining the "no cell loss" guarantee afforded by connection-level flow control.
  • Buffer sharing results in the most efficient use of a limited number of buffers. Rather than provide a number of buffers equal to bandwidth times RTT for each connection, a smaller number of buffers is employable in the receiver element 14, since not all connections require a full complement of buffers at any one time.
  • A further benefit of link-level buffer state accounting is that each connection is provided with an accurate representation of downstream buffer availability without necessitating increased reverse bandwidth for each connection.
  • The upstream transmitter element 12' (FSPP subsystem) partially includes a processor labelled From Switch Port Processor (FSPP) 16'.
  • The upstream element 12' includes per-connection counters such as BS_Limit 24' and Tx_Counter 26', each having the same function on a per-connection basis as those described with respect to Fig. 1.
  • Fig. 2 further includes a set of resources added to the upstream and downstream elements 12', 14' which enable link-level buffer accounting. These resources provide similar functions as those utilized on a per-connection basis, yet they operate on the link level.
  • Link_BS_Counter 50 tracks all cells in flight between the FSPP 16' and elements downstream of the receiver element 14', including cells in transit between the transmitter 12' and the receiver 14' and cells stored within receiver 14' buffers 28'.
  • Link_BS_Counter 50 is modified during a link update event by subtracting either the Link_Fwd_Counter 68 value or the difference between Link_Rx_Counter 70 and Link_Buffer_Counter 62 from the Link_Tx_Counter 54 value.
  • The link-level counters are implemented in external RAM associated with the FSPP processor 16'.
  • Link_BS_Limit 52 limits the number of shared downstream cell buffers 28' in the receiver element 14' to be shared among all of the flow-control enabled connections 20'.
  • Link_BS_Counter 50 and Link_BS_Limit 52 are both twenty bits wide.
  • Link_Tx_Counter 54 tracks all cells transmitted onto the link 10'. It is used during the link-level update event to calculate a new value for Link_BS_Counter 50.
  • Link_Tx_Counter 54 is twenty-eight bits wide in the first embodiment.
  • In the downstream element 14', a To Switch Port Processor (TSPP) 18' is provided.
  • The TSPP 18' also manages a set of counters for each link 10', in the same fashion as the commonly illustrated counters in Figs. 1 and 2.
  • The TSPP 18' further includes a Link_Buffer_Limit 60 which performs a function in the downstream element 14' similar to Link_BS_Limit 52 in the upstream element 12' by indicating the maximum number of cell buffers 28' in the receiver 14' available for use by all connections 20'. In most cases, Link_BS_Limit 52 is equal to Link_Buffer_Limit 60.
  • Link_Buffer_Counter 62 provides an indication of the number of buffers in the downstream element 14' which are currently being used by all connections for the storage of data cells. This value is used in a check event to correct the Link_Fwd_Counter 68 (described subsequently).
  • The Link_Buffer_Counter 62 is twenty bits wide in the first embodiment.
  • Link_N2_Limit 64 and Link_N2_Counter 66 are used to generate link update records, which are intermixed with connection-level update records.
  • Link_N2_Limit 64 establishes a threshold number for triggering the generation of a link-level update record (Figs. 5B and 6B).
  • Link_N2_Counter 66 and Link_Fwd_Counter 68 are incremented each time a cell is released out of a buffer cell in the receiver element 14'.
  • N2_Limit 34' and Link_N2_Limit 64 are both static once initially configured.
  • Alternatively, each is dynamically adjustable based upon measured bandwidth.
  • For instance, Link_N2_Limit 64 could be adjusted down to cause more frequent link-level update record transmission; any forward bandwidth impact would be considered minimal. Lower forward bandwidth would enable the raising of Link_N2_Limit 64, since the unknown availability of buffers 28' in the downstream element 14' is less critical.
  • Link_Fwd_Counter 68 tracks all cells released from buffer cells 28' in the receiver element 14' that came from the link 10' in question.
  • Link_Rx_Counter 70 is employed in an alternative embodiment in which Link_Fwd_Counter 68 is not employed. It is also twenty-eight bits wide in an illustrative embodiment and tracks the number of cells received across all connections 20' in the link 10'. With regard to Figs. 2 et seq., a receiver element buffer sharing method is described.
  • The update event at the link level involves the generation of a link update record when the value in Link_N2_Counter 66 reaches (equals or exceeds) the value in Link_N2_Limit 64, as shown in Figs. 5B and 6B. In a first embodiment, Link_N2_Limit 64 is set to forty.
  • The link update record, the value taken from Link_Fwd_Counter 68 in the embodiment of Fig. 6B, is mixed with the per-connection update records (the value of Fwd_Counter 38') in update cells transferred to the FSPP 16'.
  • In the other embodiment, the value of Link_Rx_Counter 70 minus Link_Buffer_Counter 62 is mixed with the per-connection update records.
  • When the upstream element 12' receives the update cell having the link update record, it sets Link_BS_Counter 50 equal to the value of Link_Tx_Counter 54 minus the value in the update record (Fig. 7B).
  • Link_BS_Counter 50 in the upstream element 12' is thus reset to reflect the number of data cells transmitted by the upstream element 12' but not yet released in the downstream element 14'.
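The link-level update mirrors the connection-level one. A minimal sketch of the upstream correction follows; the example values are arbitrary and the masked 28-bit arithmetic is an assumption.

```c
/* Sketch of the link-level update at the upstream element: the link update
 * record carries either Link_Fwd_Counter or Link_Rx_Counter minus
 * Link_Buffer_Counter, and Link_BS_Counter is reset to Link_Tx_Counter minus
 * that value, i.e. the cells sent onto the link but not yet released downstream. */
#include <stdint.h>
#include <stdio.h>

#define MASK28 0xFFFFFFFu

typedef struct {
    uint32_t link_bs_counter;   /* cells in flight or buffered anywhere on the link */
    uint32_t link_bs_limit;     /* shared downstream buffers available to all connections */
    uint32_t link_tx_counter;   /* all cells transmitted onto the link */
} UpLink;

static void on_link_update(UpLink *l, uint32_t record)
{
    l->link_bs_counter = (l->link_tx_counter - record) & MASK28;
}

int main(void)
{
    UpLink l = { .link_bs_counter = 950, .link_bs_limit = 1000, .link_tx_counter = 5000 };
    on_link_update(&l, 4100);   /* downstream has released 4100 of the 5000 cells sent */
    printf("Link_BS_Counter = %u of limit %u\n",
           (unsigned)l.link_bs_counter, (unsigned)l.link_bs_limit);   /* 900 of 1000 */
    return 0;
}
```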
  • The actual implementation of the transfer of an update record recognizes that for each TSPP subsystem 14' there is an associated FSPP processor (not illustrated), and for each FSPP subsystem 12' there is also an associated TSPP processor (not illustrated).
  • The TSPP 18' conveys the update record to the associated FSPP (not illustrated), which constructs an update cell.
  • The cell is conveyed from the associated FSPP to the TSPP (not illustrated) associated with the upstream FSPP subsystem 12'.
  • The associated TSPP strips out the update record from the received update cell, and conveys the record to the upstream FSPP subsystem 12'.
  • The check event at the link level involves the transmission of a check cell having the Link_Tx_Counter 54 value by the FSPP 16' every "W" check cells (Figs. 8A and 9A).
  • W is equal to four.
  • The TSPP 18' performs the previously described check functions at the connection level, as well as increasing the Link_Fwd_Counter 68 value by an amount equal to the check record contents, Link_Tx_Counter 54, minus the sum of Link_Buffer_Counter 62 plus Link_Fwd_Counter 68, in the embodiment of Fig. 9C.
  • In the alternative embodiment, Link_Rx_Counter 70 is modified to equal the contents of the check record (Link_Tx_Counter 54). This is an accounting for errored cells on a link-wide basis. An update record is then generated having a value taken from the updated Link_Fwd_Counter 68 or Link_Rx_Counter 70 values (Figs. 8C and 9C).
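A sketch of the link-level check handling at the receiver is given below, covering both counter embodiments. The on_link_check function and the returned update-record value are illustrative; in particular, returning Link_Rx_Counter minus Link_Buffer_Counter for the alternative embodiment is an assumption consistent with the update-record format described earlier.

```c
/* Sketch of link-level check handling at the receiver. In the Link_Fwd_Counter
 * embodiment the counter is increased by the discrepancy between the check
 * record (Link_Tx_Counter) and the cells accounted for locally; in the
 * alternative embodiment Link_Rx_Counter is overwritten with the check record. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MASK28 0xFFFFFFFu

typedef struct {
    uint32_t link_buffer_counter;
    uint32_t link_fwd_counter;   /* first embodiment */
    uint32_t link_rx_counter;    /* alternative embodiment */
} DpLink;

static uint32_t on_link_check(DpLink *l, uint32_t link_tx_from_check, bool use_fwd)
{
    if (use_fwd) {
        uint32_t accounted = (l->link_fwd_counter + l->link_buffer_counter) & MASK28;
        uint32_t errored   = (link_tx_from_check - accounted) & MASK28;
        l->link_fwd_counter = (l->link_fwd_counter + errored) & MASK28;
        return l->link_fwd_counter;                       /* value for the link update record */
    }
    l->link_rx_counter = link_tx_from_check;              /* adopt the transmitter's count wholesale */
    return (l->link_rx_counter - l->link_buffer_counter) & MASK28;
}

int main(void)
{
    DpLink l = { .link_buffer_counter = 10, .link_fwd_counter = 4000, .link_rx_counter = 4010 };
    printf("link update record = %u\n", (unsigned)on_link_check(&l, 4025, true));   /* 4015 */
    return 0;
}
```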
  • the BS_Limit value equals the Buffer_Limit value for both the connections and the link.
  • Even though BS_Limit 24' and Buffer_Limit 30' are both equal to twenty, and there are 100 connections in this link, there are only 1000 buffers 28' in the downstream element, as reflected by Link_BS_Limit 52 and Link_Buffer_Limit 60. This is because of the buffer pool sharing enabled by link-level feedback. Link-level flow control can be disabled, should the need arise, by not incrementing Link_BS_Counter, Link_N2_Counter, and Link_Buffer_Counter, and by disabling link-level check cell transfer. No updates will occur under these conditions.
  • the presently described invention can be further augmented with a dynamic buffer allocation scheme, such as previously described with respect to N2_Limit 34 and Link_N2_Limit 64.
  • This scheme includes the ability to dynamically adjust limiting parameters such as BS_Limit 24, Link_BS_Limit 52, Buffer_Limit 30, and Link_Buffer_Limit 60, in addition to N2_Limit 34 and Link_N2_Limit 64.
  • Such adjustment is in response to measured characteristics of the individual connections or the entire link in one embodiment, and is established according to a determined priority scheme in another embodiment.
  • Dynamic buffer allocation thus provides the ability to prioritize one or more connections or links given a limited buffer resource.
  • the Link_N2_Limit is set according to the desired accuracy of buffer accounting. On a link-wide basis, as the number of connections within the link increases, it may be desirable to decrease Link_N2_Limit, since accurate buffer accounting allows greater buffer sharing among many connections. Conversely, if the number of connections within the link decreases, Link_N2_Limit may be increased, since the criticality of sharing limited resources among a relatively small number of connections is decreased.
  • the counters previously described as being reset to zero and counting up to a limit can be implemented in a further embodiment as starting at the limit and counting down to zero.
  • the transmitter and receiver processors interpret the limits as starting points for the respective counters, and decrement upon detection of the appropriate event. For instance, if Buffer_Counter (or Link_Buffer_Counter) is implemented as a decrementing counter, each time a data cell is allocated to a buffer within the receiver, the counter would decrement.
  • a further enhancement of the foregoing zero cell loss, link-level flow control technique includes providing a plurality of shared cell buffers 28" in a downstream element 14" wherein the cell buffers 28" are divided into N prioritized cell buffer subsets, Priority 0 108a, Priority 1 108b, Priority 2 108c, and Priority 3 108d, by N - 1 threshold level(s), Threshold(1) 102, Threshold(2) 104, and Threshold(3) 106.
  • Such a cell buffer pool 28" is illustrated in Fig. 10, in which four priorities labelled Priority 0 through Priority 3 are illustrated as being defined by three thresholds labelled Threshold(1) through Threshold(3).
  • This prioritized buffer pool enables the transmission of high priority connections while lower priority connections are "starved" or prevented from transmitting cells downstream during periods of link congestion.
  • Cell priorities are identified on a per-connection basis.
  • the policy by which the thresholds are established is defined according to a predicted model of cell traffic in a first embodiment, or, in an alternative embodiment, is dynamically adjusted. Such dynamic adjustment may be in response to observed cell traffic at an upstream transmitting element, or according to empirical cell traffic data as observed at the prioritized buffer pool in the downstream element.
  • This modified upstream element 12", viewed in Fig. 11, has at least one Link_BS_Threshold(n) 102, 104, 106 established in association with a Link_BS_Counter 50" and Link_BS_Limit 52", as described above, for characterizing a cell buffer pool 28" in a downstream element 14".
  • Link_BS_Thresholds 102, 104, 106 define a number of cell buffers in the pool 28" which are allocatable to cells of a given priority, wherein the priority is identified by a register 108 associated with the BS_Counter 22" counter and BS_Limit 24" register for each connection 20".
  • the Priorities 108a, 108b, 108c, 108d illustrated in Fig. 11 are identified as Priority 0 through Priority 3, Priority 0 being the highest.
  • With Link_BS_Counter 50 being less than Link_BS_Threshold(1) 102 in Figs. 10 and 11, flow-controlled connections of any priority can transmit.
  • connection-level flow control can still prevent a high-priority connection from transmitting, if the path that connection is intended for is severely congested.
  • Link_BS_Counter 50 is periodically updated based upon a value contained within a link-level update record transmitted from the downstream element 14" to the upstream element 12". This periodic updating is required in order to ensure accurate function of the prioritized buffer access of the present invention.
  • If threshold levels 102, 104, 106 are modified dynamically, either as a result of tracking the priority associated with cells received at the upstream transmitter element or based upon observed buffer usage in the downstream receiver element, it is necessary for the FSPP 16" to have an accurate record of the state of the cell buffers 28", as afforded by the update function.
  • the multiple priority levels enable different degrees of service, in terms of delay bounds, to be offered within a single quality of service.
  • highest priority to shared buffers is typically given to connection/network management traffic, as identified by the cell header.
  • Second highest priority is given to low bandwidth, small burst connections, and third highest to bursty traffic. With prioritization allocated as described, congestion within any one of the service categories will not prevent connection/management traffic from having the lowest cell delay.
  • Initialization of the upstream element 12" as depicted in Fig. 11 is illustrated in Fig. 12A.
  • the same counters and registers are set as viewed in Fig. 3A for an upstream element 12' not enabling prioritized access to a shared buffer resource, with the exception that Link_BS_Threshold 102, 104, 106 values are initialized to a respective buffer value T.
  • these threshold buffer values can be pre-established and static, or can be adjusted dynamically based upon empirical buffer usage data.
  • Fig. 12B represents many of the same tests employed prior to forwarding a cell from the upstream element 12" to the downstream element 14" as shown in Fig. 3B, with the exception that an additional test is added for the provision of prioritized access to a shared buffer resource.
  • the FSPP 16" uses the priority value 108 associated with a cell to be transferred to determine a threshold value 102, 104, 106 above which the cell cannot be transferred to the downstream element 14". Then, a test is made to determine whether the Link_BS_Counter 50" value is greater than or equal to the appropriate threshold value 102, 104, 106. If so, the data cell is not transmitted (a sketch of this test appears at the end of this list).
  • connection-level congestion tests are executed, as previously described.
  • more or less than four priorities can be implemented with the appropriate number of thresholds, wherein the fewest number of priorities is two, and the corresponding fewest number of thresholds is one. For every N priorities, there are N - 1 thresholds.
  • In a further embodiment, flow control is provided solely at the link level, and not at the connection level, though it is still necessary for each connection to provide some form of priority indication akin to the priority field 108 illustrated in Fig. 11.
  • the link level flow controlled protocol as previously described can be further augmented in yet another embodiment to enable a guaranteed minimum cell rate on a per-connection basis with zero cell loss.
  • This minimum cell rate is also referred to as guaranteed bandwidth.
  • the connection can be flow-controlled below this minimum, allocated rate, but only by the receiver elements associated with this connection. Therefore, the minimum rate of one connection is not affected by congestion within other connections.
  • Cells present at the upstream element associated with the FSPP 116 must be identified by whether they are to be transmitted from the upstream element using allocated bandwidth, or whether they are to be transmitted using dynamic bandwidth.
  • the cells may be provided in queues associated with a list labelled "preferred," indicative of cells requiring allocated bandwidth.
  • the cells may be provided in queues associated with a list labelled "dynamic," indicative of cells requiring dynamic bandwidth.
  • In a frame relay setting, the present mechanism is used to monitor and limit both dynamic and allocated bandwidth. In a setting involving purely internet traffic, only the dynamic portions of the mechanism may be of significance. In a setting involving purely CBR flow, only the allocated portions of the mechanism would be employed. Thus, the presently disclosed method and apparatus enables the maximized use of mixed scheduling connections - from those requiring all allocated bandwidth to those requiring all dynamic bandwidth, and connections therebetween.
  • a downstream cell buffer pool 128, akin to the pool 28' of Fig. 2, is logically divided between an allocated portion 300 and a dynamic portion 301, whereby cells identified as to receive allocated bandwidth are buffered within this allocated portion 300, and cells identified as to receive dynamic bandwidth are buffered in the dynamic portion 301.
  • Although Fig. 13A shows the two portions 300, 301 as distinct entities, the allocated portion is not a physically distinct block of memory, but represents a number of individual cell buffers, located anywhere in the pool 128.
  • the presently disclosed mechanism for guaranteeing minimum bandwidth is applicable to a mechanism providing prioritized access to downstream buffers, as previously described in conjunction with Figs. 10 and 11.
  • With regard to Fig. 13B, a downstream buffer pool 228 is logically divided among an allocated portion 302 and a dynamic portion 208, the latter logically subdivided by threshold levels 202, 204, 206 into prioritized cell buffer subsets 208a-d.
  • the division of the buffer pool 228 is a logical, not physical, division.
  • Elements required to implement this guaranteed minimum bandwidth mechanism are illustrated in Fig. 14, where like elements from Figs. 2 and 11 are provided with like reference numbers, increased by 100 or 200. Note that no new elements have been added to the downstream element; the presently described guaranteed minimum bandwidth mechanism is transparent to the downstream element.
  • D_BS_Counter 122 highlights resource consumption by tracking the number of cells, scheduled using dynamic bandwidth, transmitted downstream to the receiver 114. This counter has essentially the same function as BS_Counter 22' found in Fig. 2, where there was no differentiation between allocated and dynamically scheduled cell traffic.
  • D_BS_Limit 124, used to provide a ceiling on the number of downstream buffers available to store cells from the transmitter 112, finds a corresponding function in BS_Limit 24' of Fig. 2.
  • the dynamic bandwidth can be statistically shared; the actual number of buffers available for dynamic cell traffic can be over-allocated.
  • RTT includes delays incurred in processing the update cell.
  • A_BS_Counter 222 and A_BS_Limit 224 also track and limit, respectively, the number of cells a connection can transmit by comparing a transmitted number with a limit on buffers available. However, these values apply strictly to allocated cells; allocated cells are those identified as requiring allocated bandwidth (the guaranteed minimum bandwidth) for transmission.
  • Limit information is set up at connection initialization time and can be raised and lowered as the guaranteed minimum bandwidth is changed. If a connection does not have an allocated component, the A_BS_Limit 224 will be zero.
  • the A_BS_Counter 222 and A_BS_Limit 224 are in addition to the D_BS_Counter 122 and D_BS_Limit 124 described above.
  • the amount of "A" buffers dedicated to a connection is equal to the RTT times the allocated bandwidth plus N2.
  • the actual number of buffers dedicated to allocated traffic cannot be over-allocated. This ensures that congestion on other connections does not impact the guaranteed minimum bandwidth.
  • the condition of not having further "A" buffer states inhibits the intra-switch transmission of further allocated cell traffic for that connection.
  • Link_A_BS_Counter 250 is added to the FSPP 116. It tracks all cells identified as requiring allocated bandwidth that are "in-flight" between the FSPP 116 and the downstream switch fabric, including cells in the TSPP 118 cell buffers 128, 228. The counter 250 is decreased by the same amount as the A_BS_Counter 222 for each connection when a connection level update function occurs (discussed subsequently).
  • Link_BS_Limit 152 reflects the total number of buffers available to dynamic cells only, and does not include allocated buffers.
  • Link_BS_Counter 150 reflects a total number of allocated and dynamic cells transmitted. Thus, connections are not able to use their dynamic bandwidth when Link_BS_Counter 150 (all cells in-flight, buffered, or in downstream switch fabric) minus Link_A_BS_Counter 250 (all allocated cells transmitted) is greater than Link_BS_Limit 152 (the maximum number of dynamic buffers available). This is necessary to ensure that congestion does not impact the allocated bandwidth (a sketch of this test also appears at the end of this list).
  • the sum of all individual A_BS_Limit 224 values, or the total per-connection allocated cell buffer space 300, 302, is in one embodiment less than the actually dedicated allocated cell buffer space in order to account for the potential effect of stale (i.e., low frequency) connection-level updates.
  • Update and check events are also implemented in the presently disclosed allocated/dynamic flow control mechanism.
  • the downstream element 114 transmits connection level update cells either when a preferred list and a VBR-priority 0 list are empty and an update queue is fully packed, or when a "max_update_interval" (not illustrated) has been reached.
  • When the update cell is analyzed to identify the appropriate queue, the FSPP 116 adjusts the A_BS_Counter 222 and D_BS_Counter 122 for that queue, returning cell buffers to "A" first then "D", as described above, since the FSPP 116 cannot distinguish between allocated and dynamic buffers.
  • the number of "A" buffers returned to individual connections is subtracted from Link_A_BS_Counter 250.
  • link level elements used in association with the presently disclosed minimum guaranteed bandwidth mechanism, such as Link_Tx_Counter 154, function as described in the foregoing discussion of link level flow control.
  • a further embodiment of the presently described mechanism functions with a link level flow control scenario incorporating prioritized access to the downstream buffer resource 228 through the use of thresholds 202, 204, 206. The function of these elements is as described in the foregoing.
  • In one example, the downstream element has 3000 buffers.
  • For dynamic cells, this occurs by the queue being removed from the dynamic list, preventing the queue from being scheduled for transmit using dynamic bandwidth.
  • For allocated cells, a check is made when each cell is enqueued to determine whether the cell, plus other enqueued cells, plus A_BS_Counter, is a number greater than A_BS_Limit. If not, the cell is enqueued and the queue is placed on the preferred list. Else, the connection is prevented from transmitting further cells through the upstream element 112 switch fabric.
  • Initialization of the upstream element 112 as depicted in Fig. 14 is illustrated in Fig. 15A. Essentially, the same counters and registers are set as in Fig. 3A for an upstream element 12' (when prioritized access to a shared buffer resource is not enabled), and in Fig. 12A for an upstream element 12" (when prioritized access is enabled). Exceptions include: Link_A_BS_Counter 250 initialized to zero; connection-level allocated and dynamic BS_Counters 122, 222 set to zero; and connection-level allocated and dynamic BS_Limits 124, 224 set to their respective configured values. Similarly, corresponding initialization occurs on the downstream end at the connection level.
  • Fig. 15B represents many of the same tests employed prior to forwarding a cell from the upstream element 112 to the downstream element 114 as shown in Figs. 3B and 12B, with the following exceptions.
  • Over-allocation of buffer states per connection is checked for dynamic traffic only and is calculated by subtracting Link_A_BS_Counter from Link_BS_Counter and comparing the result to Link_BS_Limit.
  • over-allocation at the downstream element is tested for both allocated and dynamic traffic at the connection level.
  • connection-level flow control as known in the art relies upon discrete control of each individual connection.
  • the control is from transmitter queue to receiver queue.
  • In the situation illustrated in Fig. 16, in which a single queue Q A in a transmitter element is the source of data cells for four queues Q w , Q x , Q y , and Q z associated with a single receiver processor, the prior art does not define any mechanism to handle this situation.
  • the transmitter element 10 is an FSPP element having an FSPP 11 associated therewith, and the receiver element 12 is a TSPP element having a TSPP 13 associated therewith.
  • the FSPP 11 and TSPP 13 as employed in Fig. 16 selectively provide the same programmable capabilities as described above, such as link-level flow control, prioritized access to a shared, downstream buffer resource, and guaranteed minimum cell rate on a connection level, in addition to a connection-level flow control mechanism. Whether one or more of these enhanced capabilities are employed in conjunction with the connection-level flow control is at the option of the system configurator.
  • Yet another capability provided by the FSPP and TSPP according to the present disclosure is the ability to treat a group of receiver queues jointly for purposes of connection-level flow control.
  • the presently disclosed mechanism utilizes one connection 16 in a link 14, terminating in four separate queues Q w , Q x , Q y , and Q z , though the four queues are treated essentially as a single, joint entity for purposes of connection-level flow control.
  • This is needed because some network elements need to use a flow controlled service but cannot handle the bandwidth of processing update cells when N2 is set to a low value, 10 or less (see above for a discussion of the update event in connection-level flow control).
  • Setting N2 to a large value, such as 30, for a large number of connections requires large amounts of downstream buffering because of buffer orphaning, where buffers are not in-use but are accounted for up-stream as in-use because of the lower frequency of update events.
  • This mechanism is also useful to terminate Virtual Channel Connections (VCC) within a Virtual Path Connection (VPC) , where flow control is applied to the VPC.
  • In Fig. 17, queue descriptors for the queues in the receiver are illustrated. Specifically, the descriptors for queues Q w , Q x , and Q y are provided on the left, and in general have the same characteristics.
  • One of the first fields pertinent to the present disclosure is a bit labelled "J." When set, this bit indicates that the associated queue is being treated as part of a joint connection in a receiver. Instead of maintaining all connection-level flow control information in each queue descriptor for each queue in the group, certain flow control elements are maintained only in one of the queue descriptors for the group. In the illustrated case, that one queue is queue Q z .
  • a "Joint Number” field provides an offset or pointer to a set of flow control elements in the descriptor for queue Q z .
  • This pointer field may provide another function when the "J" bit is not set.
  • Joint_Buffer_Counter (labelled "Jt_Buff_Cntr"), Joint_N2_Counter (labelled "Jt_N2_Cntr"), and Joint_Forward_Counter (labelled "Jt_Fwd_Cntr") are maintained in the descriptor for queue Q z for all of the queues in the group.
  • the same counters in the descriptors for queues Q w , Q x , and Q y go unused.
  • the joint counters perform the same function as the individual counters, such as those illustrated in the foregoing figures.
  • Joint_Buffer_Counter is updated whenever a buffer cell receives a data cell or releases a data cell in association with any of the group queues.
  • In an alternative embodiment, Forward_Counter is replaced with Receive_Counter, and Joint_Forward_Counter is replaced with Joint_Receive_Counter, depending upon which is maintained in each of the group queues. Only the embodiment including Forward_Counter and Joint_Forward_Counter is illustrated.
  • Buffer_Limit (labelled "Buff_Limit" in Fig. 17) is set and referred to on a per-queue basis. Thus, Joint_Buffer_Counter is compared against the Buffer_Limit of a respective queue.
  • the Buffer_Limit could be Joint_Buffer_Limit, instead of maintaining individual, common limits. The policy is to set the same Buffer_Limit in all the TSPP queues associated with a single Joint_Buffer_Counter.
  • An update event is triggered, as previously described, when the Joint_N2_Counter reaches the queue-level N2_Limit.
  • the policy is to set all of the N2_Limits equal to the same value for all the queues associated with a single joint flow control connection.
  • the level of indirection provided by the Joint_Number is applicable to both data cells and check cells.
  • In the transmitter element 10, only one set of upstream flow control elements is maintained.
  • the joint connection is set up as a single, point-to-point connection, as far as the upstream elements are concerned. Therefore, instead of maintaining four sets of upstream elements for the embodiment of Fig. 16, the presently disclosed mechanism only requires one set of elements (Tx_Counter, BS_Counter, BS_Limit, all having the functionality as previously described).
  • each new queue must have the same N2_Limit and Buffer_Limit values.
  • the queues for the additional connections will reference the common Joint_N2_Counter and either Joint_Forward_Counter or Joint_Receive_Counter.
  • the Joint_Number field is used as an offset to the group descriptor.
  • the Joint_Number for the group descriptor is set to itself, as shown in Fig. 17 with regard to the descriptor for queue Q z . This is also the case in point-to-point connections (VCC to VCC rather than the VPC to VCC, as illustrated in Fig. 16), where each Joint_Number points to its own descriptor.
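To make the transmit-eligibility tests referenced above more concrete, the following C sketch restates them in code form. It is an illustrative paraphrase, not the patented implementation: the structure layouts, function names, the four-priority array, and the assumed mapping of priorities onto Threshold(1) through Threshold(3) are assumptions, while the counter and limit names mirror the reference numerals used above (Link_BS_Counter 150, Link_A_BS_Counter 250, Link_BS_Limit 152, A_BS_Counter 222 / A_BS_Limit 224, D_BS_Counter 122 / D_BS_Limit 124).

    /* Illustrative sketch only: counter and limit names mirror the bullets
     * above; the structures, function names, and the priority-to-threshold
     * mapping are assumptions.                                             */
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_PRIORITIES 4                   /* Priority 0 (highest) .. 3 */

    struct link_state {
        uint32_t link_bs_counter;              /* Link_BS_Counter 150        */
        uint32_t link_a_bs_counter;            /* Link_A_BS_Counter 250      */
        uint32_t link_bs_limit;                /* Link_BS_Limit 152          */
        uint32_t link_bs_threshold[NUM_PRIORITIES - 1]; /* Threshold(1..3)   */
    };

    struct conn_state {
        uint32_t a_bs_counter, a_bs_limit;     /* allocated ("A") accounting */
        uint32_t d_bs_counter, d_bs_limit;     /* dynamic ("D") accounting   */
        unsigned priority;                     /* 0 = highest                */
    };

    /* Prioritized access (Figs. 10-12B).  Threshold(1) is taken to be the
     * lowest threshold, so the lowest priority is starved first as the
     * shared pool fills; Priority 0 is bounded only by the link limit.     */
    static bool priority_allows_send(const struct link_state *ls, unsigned p)
    {
        if (p == 0)
            return true;
        /* priority 3 is guarded by Threshold(1), priority 1 by Threshold(3) */
        return ls->link_bs_counter < ls->link_bs_threshold[NUM_PRIORITIES - 1 - p];
    }

    /* Dynamic-bandwidth gating: dynamic traffic is held off when the cells
     * not covered by allocated accounting exceed the dynamic buffer pool,
     * or when the connection's own dynamic budget is exhausted.            */
    static bool dynamic_send_ok(const struct link_state *ls,
                                const struct conn_state *cs)
    {
        if (ls->link_bs_counter - ls->link_a_bs_counter > ls->link_bs_limit)
            return false;                  /* link-wide dynamic over-allocation */
        if (cs->d_bs_counter >= cs->d_bs_limit)
            return false;                  /* per-connection dynamic limit      */
        return priority_allows_send(ls, cs->priority);
    }

    /* Allocated-bandwidth gating on enqueue: the guaranteed minimum depends
     * only on this connection's own "A" budget, never on other connections. */
    static bool allocated_enqueue_ok(const struct conn_state *cs,
                                     uint32_t cells_already_enqueued)
    {
        return cs->a_bs_counter + cells_already_enqueued + 1 <= cs->a_bs_limit;
    }

Because allocated_enqueue_ok consults only the connection's own counters, congestion elsewhere on the link can never push a connection below its guaranteed minimum, which is the property the bullets above emphasize.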

Abstract

A method and apparatus for providing a minimum per-connection bandwidth guarantee and the ability to employ shared bandwidth thereabove in an environment having both virtual-connection and link-level flow control. A buffer pool (28) downstream of a transmitter (12) and disposed in a receiver (14) is divided among a first portion dedicated for allocated bandwidth cell traffic (300) and a second portion for dynamic bandwidth cell traffic (128). Link flow control enables the receiver buffer (28) sharing while maintaining the per-connection bandwidth guarantee. No cell loss due to buffer overflows at the receiver (14) is also guaranteed, resulting in high link utilization in a frame traffic environment, as well as low delay in the absence of cell retransmission. A higher and thus more efficient utilization of receiver cell buffers (28) is achieved.

Description

MINIMUM GUARANTEED CELL RATE METHOD AND APPARATUS
RELATED APPLICATION This application claims benefit of U.S. Provisional Application Serial No. 60/001,498, filed July 19, 1995.
FIELD OF THE INVENTION This application relates to communications methods and apparatus in a distributed switching architecture, and in particular to bandwidth management in a distributed switching architecture.
BACKGROUND OF THE INVENTION A Flow Controlled Virtual Connection (FCVC) protocol for use in a distributed switching architecture is presently known in the art, and is briefly discussed below with reference to Fig. 1. This protocol involves communication of status (buffer allocation and current state) on a per virtual connection, such as a virtual channel connection or virtual path connection, basis between upstream and downstream network elements to provide a "no cell loss" guarantee. A cell is the unit of data to be transmitted. Each cell requires a buffer to store it.
One example of this protocol involves a credit-based flow control system, where a number of connections exist within the same link with the necessary buffers established and flow control monitored on a per-connection basis. Buffer usage over a known time interval, the link round-trip time, is determined in order to calculate the per-connection bandwidth. A trade-off is established between maximum bandwidth and buffer allocation per connection. Such per-connection feedback and subsequent flow control at the transmitter avoids data loss from an inability of the downstream element to store data cells sent from the upstream element. The flow control protocol isolates each connection, ensuring lossless cell transmission for that connection. However, since buffers reserved for a first connection cannot be made available for (that is, shared with) a second connection without risking cell loss in the first connection, the cost of the potentially enormous number of cell buffers required for long-haul, high-bandwidth links, each supporting a large number of connections, quickly becomes of great significance.
Connection-level flow control results in a trade-off between update frequency and the realized bandwidth for the connection. High update frequency has the effect of minimizing situations in which a large number of receiver cell buffers are available, though the transmitter incorrectly believes the buffers to be unavailable. Thus it reduces the number of buffers that must be set aside for a connection. However, a high update frequency to control a traffic flow will require a high utilization of bandwidth (in the reverse direction) to supply the necessary flow control buffer update information where a large number of connections exist in the same link. Realizing that transmission systems are typically symmetrical with traffic flowing in both directions, and flow control buffer update information likewise flowing in both directions, it is readily apparent that a high update frequency is wasteful of the bandwidth of the link. On the other hand, using a lower update frequency to lower the high cost of this bandwidth loss in the link, in turn requires that more buffers be set aside for each connection. This trade-off can thus be restated as being between more efficient receiver cell buffer usage and a higher cell transmission rate. In practice, given a large number of connections in a given link, it turns out that any compromise results in both too high a cost for buffers and too much bandwidth wasted in the link.
Therefore, presently known cell transfer flow control protocols fail to provide for efficient use of a minimized receiver cell buffer pool and a high link data transfer efficiency, while simultaneously maintaining a "no cell loss" guarantee on a per-connection basis when a plurality of connections exist in the same link.
Other protocols that use end-to-end flow control require information regarding newly available bandwidth to travel all the way back to the origin of the connection in order to take advantage of such newly available bandwidth at any one point in a series of network elements. The response delay may result in under-utilization of the link. Prior mechanisms were not defined for ensuring no cell loss in conjunction with a minimum bandwidth guarantee on a per-connection basis.
SUMMARY OF THE INVENTION The presently claimed invention provides, in a link-level and virtual-connection flow controlled environment, the ability to guarantee a minimum bandwidth to a connection through a link, the ability to employ shared bandwidth thereabove, and the ability to guarantee no cell-loss due to buffer overflows at the receiver, while providing a high level of link utilization efficiency. The amount of the bandwidth guarantee is individually programmable for each connection. A buffer resource in a receiver, downstream of a transmitter, is logically divided into first buffers dedicated to allocated bandwidth cell traffic and buffers shared among dynamic bandwidth cell traffic. The invention utilizes elements in both the transmitter and the receiver, at both the connection level and link level, necessary for enabling the provision of buffer state flow control at the link level, otherwise known as link flow control, in addition to flow control on a per-connection basis. Link flow control enables receiver cell buffer sharing while maintaining a per-connection bandwidth guarantee. A higher and thus more efficient utilization of receiver cell buffers is achieved. No cell-loss due to buffer overflows at the receiver is guaranteed, leading to high link utilization in a frame traffic environment, as well as low delay in the absence of cell retransmission. In such a system, link flow control may have a high update frequency, whereas connection flow control information may have a low update frequency. The end result is a low effective update frequency since link level flow control exists only on a per-link basis whereas the link typically has many connections within it, each needing its own flow control. This minimizes the wasting of link bandwidth to transmit flow control update information. However, since the whole link now has a flow control mechanism ensuring lossless transmission for it and thus for all of the connections within it, buffers may be allocated from a pool of buffers and thus connections may share in access to available buffers. Sharing buffers means that fewer buffers are needed since the projected buffers required for a link in the defined known time interval may be shown to be less than the projected buffers that would be required if independently calculated and summed for all of the connections within the link for the same time interval. Furthermore, the high update frequency that may be used on the link level flow control without undue wasting of link bandwidth, allows further minimization of the buffers that must be assigned to a link. Minimizing the number of cell buffers at the receiver significantly decreases net receiver cost.
The link can be defined either as a physical link or as a logical grouping comprised of logical connections.
The resultant system adds more capability than is defined in the presently known art. It eliminates the excessive wasting of link bandwidth that results from reliance on a per-connection flow control mechanism alone, while taking advantage of both a high update frequency at the link level and buffer sharing to minimize the buffer requirements of the receiver. Yet this flow control mechanism still ensures the same lossless transmission of cells as would the prior art. As an additional advantage of this invention, a judicious use of the counters associated with the link level and connection level flow control mechanisms, allows easy incorporation of a dynamic buffer allocation mechanism to control the number of buffers allocated to each connection, further reducing the buffer requirements. Additional counters associated with the link level and connection level flow control mechanisms at the transmitter therefore provide the ability to guarantee a minimum, allocated bandwidth to a connection through a link, the ability to transmit dynamically distributed bandwidth in conjunction therewith, and the ability to guarantee no cell-loss, while providing a high level of link utilization efficiency. Any given connection can be flow controlled below the guaranteed minimum, but only by the receiver as a result of congestion downstream on the same connection; congestion on other connections does not result in bandwidth reduction below the allocated rate.
The presently disclosed mechanism may further be combined with a mechanism for prioritized access to a shared buffer resource.
BRIEF DESCRIPTION OF THE DRAWINGS The above and further advantages may be more fully understood by referring to the following description and accompanying drawings of which: Fig. 1 is a block diagram of a connection-level flow control apparatus as known in the prior art;
Fig. 2 is a block diagram of a link-level flow control apparatus according to the present invention;
Figs. 3A and 3B are flow diagram representations of counter initialization and preparation for cell transmission within a flow control method according to the present invention;
Fig. 4 is a flow diagram representation of cell transmission within the flow control method according to the present invention;
Figs. 5A and 5B are flow diagram representations of update cell preparation and transmission within the flow control method according to the present invention;
Figs. 6A and 6B are flow diagram representations of an alternative embodiment of the update cell preparation and transmission of Figs. 5A and 5B;
Figs. 7A and 7B are flow diagram representations of update cell reception within the flow control method according to the present invention;
Figs. 8A, 8B and 8C are flow diagram representations of check cell preparation, transmission and reception within the flow control method according to the present invention;
Figs. 9A, 9B and 9C are flow diagram representations of an alternative embodiment of the check cell preparation, transmission and reception of Figs. 8A, 8B and 8C; Fig. 10 illustrates a cell buffer pool according to the present invention as viewed from an upstream element;
Fig. 11 is a block diagram of a link-level flow control apparatus in an upstream element providing prioritized access to a shared buffer resource in a downstream element according to the present invention;
Figs. 12A and 12B are flow diagram representations of counter initialization and preparation for cell transmission within a prioritized access method according to the present invention; Figs. 13A and 13B illustrate alternative embodiments of cell buffer pools according to the present invention as viewed from an upstream element;
Fig. 14 is a block diagram of a flow control apparatus in an upstream element providing guaranteed minimum bandwidth and prioritized access to a shared buffer resource in a downstream element according to the present invention;
Figs. 15A and 15B are flow diagram representations of counter initialization and preparation for cell transmission within a guaranteed minimum bandwidth mechanism employing prioritized access according to the present invention;
Fig. 16 is a block diagram representation of a transmitter, a data link, and a receiver in which the presently disclosed joint flow control mechanism is implemented; and
Fig. 17 illustrates data structures associated with queues in the receiver of Fig. 16.
DETAILED DESCRIPTION In Fig. 1, the resources required for connection-level flow control are presented. As previously stated, the illustrated configuration of Fig. 1 is presently known in the art. However, a brief discussion of a connection-level flow control arrangement will facilitate an explanation of the presently disclosed link-level flow control method and apparatus. One link 10 is shown providing an interface between an upstream transmitter element 12, also known as an UP subsystem, and a downstream receiver element 14, also known as a DP subsystem. Each element 12, 14 can act as a switch between other network elements. For instance, the upstream element 12 in Fig. 1 can receive data from a PC (not shown) . This data is communicated through the link 10 to the downstream element 14, which in turn can forward the data to a device such as a printer (not shown) . Alternatively, the illustrated network elements 12, 14 can themselves be network end-nodes.
The essential function of the presently described arrangement is the transfer of data cells from the upstream element 12 via a connection 20 in the link 10 to the downstream element 14, where the data cells are temporarily held in cell buffers 28. Cell format is known, and is further described in "Quantum Flow Control", Version 1.5.1, dated June 27, 1995 and subsequently published in a later version by the Flow Control Consortium. In Fig. 1, the block labelled Cell Buffers 28 represents a set of cell buffers dedicated to the respective connection 20. Data cells are released from the buffers 28, either through forwarding to another link beyond the downstream element 14, or through cell utilization within the downstream element 14. The latter event can include the construction of data frames from the individual data cells if the downstream element 14 is an end-node such as a work station.
Each of the upstream and downstream elements 12 , 14 are controlled by respective processors, labelled UP (Upstream Processor) 16 and DP (Downstream Processor) 18. Associated with each of the processors 16, 18 are sets of buffer counters for implementing the connection-level flow control. These buffer counters are each implemented as an increasing counter/limit register set to facilitate resource usage changes. The counters of Fig. 1, described in further detail below, are implemented in a first embodiment in UP internal RAM. The counter names discussed and illustrated for the prior art utilize some of the same counter names as used with respect to the presently disclosed flow control method and apparatus. This is merely to indicate the presence of a similar function or element in the prior art with respect to counters, registers, or like elements now disclosed.
Within the link 10, which in a first embodiment is a copper conductor, multiple virtual connections 20 are provided. In an alternative embodiment, the link 10 is a logical grouping of plural virtual connections 20. The number of connections 20 implemented within the link 10 depends upon the needs of the respective network elements 12, 14, as well as the required bandwidth per connection. In Fig. 1, only one connection 20 and associated counters are illustrated for simplicity. First, with respect to the upstream element 12 of Fig.
1, two buffer state controls are provided, BS_Counter 22 and BS_Limit 24. In a first embodiment, each is implemented as a fourteen bit counter/register, allowing a connection to have 16,383 buffers. This number would support, for example, 139 Mbps, 10,000 kilometer round-trip service. The buffer state counters 22, 24 are employed only if the connection 20 in question is flow-control enabled. That is, a bit in a respective connection descriptor, or queue descriptor, of the UP 16 is set indicating the connection 20 is flow-control enabled. BS_Counter 22 is incremented by the UP 16 each time a data cell is transferred out of the upstream element 12 and through the associated connection 20. Periodically, as described below, this counter 22 is adjusted during an update event based upon information received from the downstream element 14. BS_Counter 22 thus presents an indication of the number of data cells either currently being transmitted in the connection 20 between the upstream and downstream elements 12, 14, or yet unreleased from buffers 28 in the downstream element 14. BS_Limit 24 is set at connection configuration time to reflect the number of buffers 28 available within the receiver 14 for this connection 20. For instance, if BS_Counter 22 for this connection 20 indicates that twenty data cells have been transmitted and BS_Limit 24 indicates that this connection 20 is limited to twenty receiver buffers 28, the UP 16 will inhibit further transmission from the upstream element 12 until an indication is received from the downstream element 14 that further buffer space 28 is available for that connection 20. Tx_Counter 26 is used to count the total number of data cells transmitted by the UP 16 through this connection 20. In the first embodiment, this is a twenty-eight bit counter which rolls over at 0xFFFFFFF. As described later, Tx_Counter 26 is used during a check event to account for errored cells for this connection 20.
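The per-connection counters just described can be pictured as a small data structure maintained by the UP 16. The C sketch below is illustrative only and is not taken from the patent; the struct and function names are assumptions, while the fields mirror BS_Counter 22, BS_Limit 24, and Tx_Counter 26 and the widths given for the first embodiment.

    #include <stdbool.h>
    #include <stdint.h>

    /* Per-connection buffer-state accounting kept by the Upstream Processor
     * (UP 16).  Field widths follow the first embodiment in the text:
     * fourteen-bit buffer-state values and a twenty-eight-bit rolling
     * transmit count.                                                      */
    struct up_conn {
        uint16_t bs_counter;   /* BS_Counter 22: cells in flight or unreleased */
        uint16_t bs_limit;     /* BS_Limit 24: receiver buffers for this VC    */
        uint32_t tx_counter;   /* Tx_Counter 26: total cells sent (mod 2^28)   */
        bool     fc_enabled;   /* flow-control enable bit in the descriptor    */
    };

    /* Transmission is inhibited once BS_Counter reaches BS_Limit.           */
    static bool up_may_send(const struct up_conn *c)
    {
        return !c->fc_enabled || c->bs_counter < c->bs_limit;
    }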
In the downstream element 14, the DP 18 also manages a set of counters for each connection 20. Buffer_Limit 30 performs a policing function in the downstream element 14 to protect against misbehaving transmitters. Specifically, the buffer_limit register 30 indicates the maximum number of cell buffers 28 in the receiver 14 which this connection 20 can use. In most cases, BS_Limit 24 is equal to Buffer_Limit 30. At some point, though, it may be necessary to adjust the maximum number of cell buffers 28 for this connection 20 up or down. This function is coordinated by network management software. To avoid the "dropping" of data cells in transmission, an increase in buffers per connection is reflected first in Buffer_Limit 30 prior to BS_Limit 24. Conversely, a reduction in the number of receiver buffers per connection is reflected first in BS_Limit 24 and thereafter in Buffer_Limit 30.
Buffer_Counter 32 provides an indication of the number of buffers 28 in the downstream element 14 which are currently being used for the storage of data cells. As described subsequently, this value is used in providing the upstream element 12 with a more accurate picture of buffer availability in the downstream element 14. Both the Buffer_Limit 30 and Buffer_Counter 32 are fourteen bits wide in the first embodiment.
N2_Limit 34 determines the frequency of connection flow-rate communication to the upstream transmitter 12. A cell containing such flow-rate information is sent upstream every time the receiver element 14 forwards a number of cells equal to N2_Limit 34 out of the receiver element 14. This updating activity is further described subsequently. In the first embodiment, N2_Limit 34 is six bits wide.
The DP 18 uses N2_Counter 36 to keep track of the number of cells which have been forwarded out of the receiver element 14 since the last time the N2_Limit 34 was reached. In the first embodiment, N2_Counter 36 is six bits wide. In a first embodiment, the DP 18 maintains Fwd_Counter
38 to maintain a running count of the total number of cells forwarded through the receiver element 14. This includes buffers released when data cells are utilized for data frame construction in an end-node. When the maximum count for this counter 38 is reached, the counter rolls over to zero and continues. The total number of cells received by the receiver element 14 can be derived by adding Buffer_Counter 32 to Fwd_Counter 38. The latter is employed in correcting the transmitter element 12 for errored cells during the check event, as described below. Fwd_Counter 38 is twenty-eight bits wide in the first embodiment.
In a second embodiment, the DP 18 maintains Rx_Counter 40, a counter which is incremented each time the downstream element 14 receives a data cell through the respective connection 20. The value of this counter 40 is then usable directly in response to check cells and in the generation of an update cell, both of which will be described further below. Similar to the Fwd_Counter 38, Rx_Counter 40 is twenty-eight bits wide in this second embodiment.
There are two events in addition to a steady state condition in the connection-level flow controlled protocol: update; and check. In steady state, data cells are transmitted from the transmitter element 12 to the receiver element 14. In update, buffer occupancy information is returned upstream by the receiver element 14 to correct counter values in the transmitter element 12. Check mode is used to check for cells lost or injected due to transmission errors between the upstream transmitter and downstream receiver elements 12, 14.
In the accompanying figures, connection level counters are augmented with "[i]" to indicate association with one connection [i] of plural possible connections.
Prior to any activity, counters in the upstream and downstream elements 12, 14 are initialized, as illustrated in Fig. 3A. Initialization includes zeroing counters, and providing initial values to limit registers such as Link_BS_Limit and Link_Buffer_Limit. In Fig. 3A, Buffer_Limit[i] is shown being initialized to (RTT*BW) + N2, which represents the round-trip time times the virtual connection bandwidth, plus accommodation for delays in processing the update cell. As for Link_N2_Limit, "X" represents the buffer state update frequency for the link, and for N2_Limit[i], "Y" represents the buffer state update frequency for each connection.
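As a minimal sketch of the initialization rule shown in Fig. 3A (the formula itself comes from the text; the function name and argument units are assumptions):

    #include <stdint.h>

    /* Buffer_Limit[i] = (RTT * BW) + N2, per Fig. 3A: round-trip time times
     * the virtual connection bandwidth, plus accommodation for update-cell
     * processing delay.  Argument units (seconds, cells/second) are assumed. */
    static uint32_t init_buffer_limit(double rtt_seconds,
                                      double bw_cells_per_second, uint32_t n2)
    {
        return (uint32_t)(rtt_seconds * bw_cells_per_second) + n2;
    }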
In steady state operation, the UP 16 of the transmitter element 12 determines which virtual connection 20 (VC) has a non-zero cell count (i.e. has a cell ready to transmit), a BS_Counter value less than the BS_Limit, and an indication that the VC is next to send (also in Figs. 3A and 3B).
The UP 16 increments BS_Counter 22 and Tx_Counter 26 whenever the UP 16 transmits a data cell over the respective connection 20, assuming flow control is enabled (Fig. 4) . Upon receipt of the data cell, the DP 18 checks whether Buffer_Counter 32 equals or exceeds Buffer_Limit 30, which would be an indication that there are no buffers available for receipt of the data cell. If Buffer_Counter >= Buffer_Limit, the data cell is discarded (Fig. 3B) . Otherwise, the DP 18 increments Buffer_Counter 32 and Rx_Counter 40 and the data cell is deposited in a buffer cell 28, as in Fig. 4. The Tx_Counter 26 and the Rx_Counter 40 roll over when they reach their maximum. If flow control is not enabled, none of the presently described functionality is implemented. Connections that do not utilize flow control on the link can coexist with connections using link flow control. The flow control accounting is not employed when cells from non-flow controlled connections are transmitted and received. This includes both connection level accounting and link level accounting. Thereby, flow control and non-flow control connections can be active simultaneously.
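Continuing the illustrative up_conn structure sketched earlier, the steady-state accounting of Figs. 3B and 4 might be expressed as follows; the dp_conn structure and the function names are assumptions, while the counter behavior, including the 28-bit rollover, follows the text.

    /* Steady-state accounting, paraphrasing Figs. 3B and 4 and continuing
     * the up_conn sketch above.                                            */
    struct dp_conn {
        uint16_t buffer_counter;   /* Buffer_Counter 32 */
        uint16_t buffer_limit;     /* Buffer_Limit 30   */
        uint32_t rx_counter;       /* Rx_Counter 40     */
    };

    static void up_on_send(struct up_conn *c)
    {
        if (c->fc_enabled) {
            c->bs_counter++;                                  /* cells believed downstream */
            c->tx_counter = (c->tx_counter + 1) & 0x0FFFFFFF; /* 28-bit rollover */
        }
    }

    /* Returns true if the cell is accepted into a buffer, false if discarded. */
    static bool dp_on_receive(struct dp_conn *c)
    {
        if (c->buffer_counter >= c->buffer_limit)
            return false;                                     /* policing: discard */
        c->buffer_counter++;
        c->rx_counter = (c->rx_counter + 1) & 0x0FFFFFFF;
        return true;
    }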
When a data cell is forwarded out of the receiver element 14, Buffer_Counter 32 is decremented. Buffer_Counter 32 should never exceed Buffer_Limit 30 when the connection-level flow control protocol is enabled, with the exception of when BS_Limit 24 has been decreased and the receiver element 14 has yet to forward sufficient cells to bring Buffer_Counter 32 below Buffer_Limit 30.
A buffer state update occurs when the receiver element 14 has forwarded a number of data cells equal to N2_Limit 34 out of the receiver element 14. In the first embodiment in which the DP 18 maintains Fwd_Counter 38, update involves the transfer of the value of Fwd_Counter 38 from the receiver element 14 back to the transmitter element 12 in an update cell, as in Fig. 6A. In the embodiment employing Rx_Counter 40 in the downstream element 14, the value of Rx_Counter 40 minus Buffer_Counter 32 is conveyed in the update cell, as in Fig. 5A. At the transmitter 12, the update cell is used to update the value in BS_Counter 22, as shown for the two embodiments in Fig. 7A. Since BS_Counter 22 is independent of buffer allocation information, buffer allocation can be changed without impacting the performance of this aspect of connection-level flow control. Update cells require an allocated bandwidth to ensure a bounded delay. This delay needs to be accounted for, as a component of round-trip time, to determine the buffer allocation for the respective connection.
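The value carried in the update record differs between the two embodiments just described. A hedged sketch follows; the function names are assumed, and the arithmetic is masked to the 28-bit counter width used in the text.

    #include <stdint.h>

    #define COUNTER_MASK 0x0FFFFFFFu    /* 28-bit rolling counters */

    /* Fig. 6A embodiment: the update record simply carries Fwd_Counter 38.  */
    static uint32_t update_value_fwd(uint32_t fwd_counter)
    {
        return fwd_counter & COUNTER_MASK;
    }

    /* Fig. 5A embodiment: the record carries Rx_Counter 40 minus
     * Buffer_Counter 32, i.e. cells received that have since been released. */
    static uint32_t update_value_rx(uint32_t rx_counter, uint32_t buffer_counter)
    {
        return (rx_counter - buffer_counter) & COUNTER_MASK;
    }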
The amount of bandwidth allocated to the update cells is a function of a counter, Max_Update_Counter (not illustrated) at an associated downstream transmitter element (not illustrated). This counter forces the scheduling of update and check cells, the latter to be discussed subsequently. There is a corresponding Min_Update_Interval counter (not shown) in the downstream transmitter element, which controls the space between update cells. Normal cell packing is seven records per cell, and Min_Update_Interval is similarly set to seven. Since the UP 16 can only process one update record per cell time, back-to-back, fully packed update cells received at the UP 16 would cause some records to be dropped.
An update event occurs as follows, with regard to Figs. 1, 5A and 6A. When the downstream element 14 forwards (releases) a cell, Buffer_Counter 32 is decremented and N2_Counter 36 and Fwd_Counter 38 are incremented. When the N2_Counter 36 is equal to N2_Limit 34, the DP 18 prepares an update cell for transmission back to the upstream element 12 and N2_Counter 36 is set to zero. The upstream element 12 receives a connection indicator from the downstream element 14 forwarded cell to identify which connection 20 is to be updated. In the first embodiment, the DP 18 causes the Fwd_Counter 38 value to be inserted into an update record payload (Fig. 6A). In the second embodiment, the DP 18 causes the Rx_Counter 40 value minus the Buffer_Counter 32 value to be inserted into the update record payload (Fig. 5A). When an update cell is fully packed with records, or as the minimum bandwidth pacing interval is reached, the update cell is transmitted to the upstream element 12.
Once received upstream, the UP 16 receives the connection indicator from the update record to identify the transmitter connection, and extracts the Fwd_Counter 38 value or the Rx_Counter 40 minus Buffer_Counter 32 value from the update record. BS_Counter 22 is reset to the value of Tx_Counter 26 minus the update record value (Fig. 7A). If this connection was disabled from transmitting due to BS_Counter 22 being equal to or greater than BS_Limit 24, this condition should now be reversed, and if so the connection should again be enabled for transmitting.
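At the transmitter, applying the update record reduces to the subtraction just described. A minimal sketch, continuing the up_conn structure from the earlier example; the boolean result, indicating whether transmission may resume, is an assumption about how a surrounding scheduler might use it.

    /* Applying an update record at the transmitter (Fig. 7A).  In normal
     * operation the masked difference is small (at most BS_Limit), so the
     * narrow assignment is safe.                                           */
    static bool up_on_update(struct up_conn *c, uint32_t update_value)
    {
        c->bs_counter = (uint16_t)((c->tx_counter - update_value) & 0x0FFFFFFF);
        return c->bs_counter < c->bs_limit;     /* re-enable if below the limit */
    }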
In summary, the update event provides the transmitting element 12 with an indication of how many cells originally transmitted by it have now been released from buffers within the receiving element 14, and thus provides the transmitting element 12 with a more accurate indication of receiver element 14 buffer 28 availability for that connection 20.
The buffer state check event serves two purposes: 1) it provides a mechanism to calculate and compensate for cell loss or cell insertion due to transmission errors; and 2) it provides a mechanism to start (or restart) a flow if update cells were lost or if enough data cells were lost that N2_Limit 34 is never reached. One timer (not shown) in the UP subsystem 16 serves all connections. The connections are enabled or disabled on a per connection basis as to whether to send check cells from the upstream transmitter element 12 to the downstream receiver element 14. The check process in the transmitter element 12 involves searching all of the connection descriptors to find one which is check enabled (see Figs. 8A, 9A). Once a minimum pacing interval has elapsed (the check interval), the check cell is forwarded to the receiver element 14 and the next check enabled connection is identified. The spacing between check cells for the same connection is a function of the number of active flow-controlled connections times the mandated spacing between check cells for all connections. Check cells have priority over update cells.
The check event occurs as follows, with regard to Figs. 8A through 8C and 9A through 9C. Each transmit element 12 connection 20 is checked after a timed check interval is reached. If the connection is flow-control enabled and the connection is valid, then a check event is scheduled for transmission to the receiver element 14. A buffer state check cell is generated using the Tx_Counter 26 value for that connection 20 in the check cell payload, and is transmitted using the connection indicator from the respective connection descriptor (Figs. 8A and 9A).
In the first embodiment, a calculation of errored cells is made at the receiver element 14 by summing Fwd_Counter 38 with Buffer_Counter 32, and subtracting this value from the contents of the transmitted check cell record, the value of Tx_Counter 26 (Fig. 9B). The value of Fwd_Counter 38 is increased by the errored cell count. An update record with the new value for Fwd_Counter 38 is then generated. This updated Fwd_Counter 38 value subsequently updates the BS_Counter 22 value in the transmitter element 12.
In the second embodiment, illustrated in Fig. 8B, the same is accomplished by resetting the Rx_Counter 40 value equal to the check cell payload value (Tx_Counter 26). A subsequent update record is established using the difference between the values of Rx_Counter 40 and Buffer_Counter 32.
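The two check-event embodiments can likewise be sketched as small reconciliation helpers. This is an illustrative paraphrase of Figs. 8B and 9B, not the actual implementation; the function shapes and the modular arithmetic are assumptions.

    #include <stdint.h>

    /* Check-cell reconciliation at the receiver.  Both helpers return the
     * value that would go into the follow-up update record.                */

    /* First embodiment (Fig. 9B): fold the errored-cell count into
     * Fwd_Counter 38.                                                      */
    static uint32_t check_fwd_embodiment(uint32_t tx_check, uint32_t *fwd_counter,
                                         uint32_t buffer_counter)
    {
        uint32_t errored = (tx_check - (*fwd_counter + buffer_counter)) & 0x0FFFFFFF;
        *fwd_counter = (*fwd_counter + errored) & 0x0FFFFFFF;
        return *fwd_counter;                    /* new update-record value */
    }

    /* Second embodiment (Fig. 8B): resynchronize Rx_Counter 40 to the check
     * cell and report the difference from Buffer_Counter 32.               */
    static uint32_t check_rx_embodiment(uint32_t tx_check, uint32_t *rx_counter,
                                        uint32_t buffer_counter)
    {
        *rx_counter = tx_check;
        return (*rx_counter - buffer_counter) & 0x0FFFFFFF;
    }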
Thus, the check event enables accounting for cells transmitted by the transmitter element 12, through the connection 20, but either dropped or not received by the receiver element 14.
A "no cell loss" guarantee is enabled using buffer state accounting at the connection level since the transmitter element 12 haε an up-to-date account of the number of buffers 28 in the receiver element 14 available for receipt of data cells, and has an indication of when data cell transmiεsion should be ceased due to the absence of available buffers 28 downstream.
In order to augment the foregoing protocol with a receiver element buffer sharing mechanism, link-level flow control, also known as link-level buffer state accounting, is added to connection-level flow control. It is possible for such link-level flow control to be implemented without connection-level flow control. However, a combination of the two is preferable since without connection-level flow control there would be no restriction on the number of buffers a single connection might consume.
It is desirable to perform buffer state accounting at the link level, in addition to the connection level, for the following reasons. Link-level flow control enables cell buffer sharing at a receiver element while maintaining the "no cell loss" guarantee afforded by connection-level flow control. Buffer sharing results in the most efficient use of a limited number of buffers. Rather than provide a number of buffers equal to bandwidth times RTT for each connection, a smaller number of buffers is employable in the receiver element 14 since not all connections require a full complement of buffers at any one time.
A further benefit of link-level buffer state accounting is that each connection is provided with an accurate representation of downstream buffer availability without necessitating increased reverse bandwidth for each connection. A high-frequency link-level update does not significantly affect overall per-connection bandwidth.
Link-level flow control is described now with regard to Fig. 2. Like elements found in Fig. 1 are given the same reference numbers in Fig. 2, with the addition of a prime. Once again, only one virtual connection 20' is illustrated in the link 10', though the link 10' would normally host multiple virtual connections 20'. Once again, the link 10' is a physical link in a first embodiment, and a logical grouping of plural virtual connections in a second embodiment.
The upstream transmitter element 12' (FSPP subsystem) partially includes a processor labelled From Switch Port Processor (FSPP) 16'. The FSPP processor 16' is provided with two buffer state counters, BS_Counter 22' and BS_Limit
24', and a Tx_Counter 26' each having the same function on a per-connection basis as those described with respect to Fig. 1.
The embodiment of Fig. 2 further includes a set of resources added to the upstream and downstream elements 12', 14' which enable link-level buffer accounting. These resources provide similar functions as those utilized on a per-connection basis, yet they operate on the link level.
For instance, Link_BS_Counter 50 tracks all cells in flight between the FSPP 16' and elements downstream of the receiver element 14', including cells in transit between the transmitter 12' and the receiver 14' and cells stored within receiver 14' buffers 28'. As with the update event described above with respect to connection-level buffer accounting, Link_BS_Counter 50 is modified during a link update event by subtracting either the Link_Fwd_Counter 68 value or the difference between Link_Rx_Counter 70 and Link_Buffer_Counter 62 from the Link_Tx_Counter 54 value. In a first embodiment, the link-level counters are implemented in external RAM associated with the FSPP processor 16'.
Link_BS_Limit 52 limits the number of shared downstream cell buffers 28' in the receiver element 14' to be shared among all of the flow-control enabled connections 20'. In a first embodiment, Link_BS_Counter 50 and Link_BS_Limit 52 are both twenty bits wide. Link_Tx_Counter 54 tracks all cells transmitted onto the link 10'. It is used during the link-level update event to calculate a new value for Link_BS_Counter 50. Link_Tx_Counter 54 is twenty-eight bits wide in the first embodiment.

In the downstream element 14', the To Switch Port Processor (TSPP) 18' also manages a set of counters for each link 10', in the same fashion as the commonly illustrated counters in Figs. 1 and 2. The TSPP 18' further includes a Link_Buffer_Limit 60 which performs a function in the downstream element 14' similar to Link_BS_Limit 52 in the upstream element 12' by indicating the maximum number of cell buffers 28' in the receiver 14' available for use by all connections 20'. In most cases, Link_BS_Limit 52 is equal to Link_Buffer_Limit 60. The effect of adjusting the number of buffers 28' available up or down on a link-wide basis is the same as that described above with respect to adjusting the number of buffers 28 available for a particular connection 20. Link_Buffer_Limit 60 is twenty bits wide in the first embodiment. Link_Buffer_Counter 62 provides an indication of the number of buffers in the downstream element 14' which are currently being used by all connections for the storage of data cells. This value is used in a check event to correct the Link_Fwd_Counter 68 (described subsequently). The Link_Buffer_Counter 62 is twenty bits wide in the first embodiment.
Link_N2_Limit 64 and Link_N2_Counter 66, each eight bits wide in the first embodiment, are used to generate link update records, which are intermixed with connection-level update records. Link_N2_Limit 64 establishes a threshold number for triggering the generation of a link-level update record (Figs. 5B and 6B), and Link_N2_Counter 66 and Link_Fwd_Counter 68 are incremented each time a cell is released out of a buffer cell in the receiver element 14'. In a first embodiment, N2_Limit 34' and Link_N2_Limit 64 are both static once initially configured.
However, in a further embodiment of the present invention, each is dynamically adjustable based upon measured bandwidth. For instance, if forward link bandwidth is relatively high, Link_N2_Limit 64 could be adjusted down to cause more frequent link-level update record transmission. Any forward bandwidth impact would be considered minimal. Lower forward bandwidth would enable the raising of Link_N2_Limit 64 since the unknown availability of buffers 28' in the downstream element 14' is less critical.

Link_Fwd_Counter 68 tracks all cells released from buffer cells 28' in the receiver element 14' that came from the link 10' in question. It is twenty-eight bits wide in a first embodiment, and is used in the update event to recalculate Link_BS_Counter 50. Link_Rx_Counter 70 is employed in an alternative embodiment in which Link_Fwd_Counter 68 is not employed. It is also twenty-eight bits wide in an illustrative embodiment and tracks the number of cells received across all connections 20' in the link 10'.

With regard to Figs. 2 et seq., a receiver element buffer sharing method is described. Normal data transfer by the FSPP 16' in the upstream element 12' to the TSPP 18' in the downstream element 14' is enabled across all connections 20' in the link 10' as long as the Link_BS_Counter 50 is less than or equal to Link_BS_Limit 52, as in Fig. 3B. This test prevents the FSPP 16' from transmitting more data cells than it believes are available in the downstream element 14'. The accuracy of this belief is maintained through the update and check events, described next. A data cell is received at the downstream element 14' if neither the connection-level nor the link-level buffer limit is exceeded (Fig. 3B). If a limit is exceeded, the cell is discarded.
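These transmit and admission tests can be summarized in a short sketch. The following C fragment is illustrative only: the structure and function names are hypothetical, and the boundary condition is an assumption (the sketch blocks once a counter has reached its limit, in the manner of the "greater than or equal" tests described later for Fig. 12B).

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical counter state; the fields mirror the elements of Figs. 1 and 2. */
struct conn_tx { uint32_t bs_counter, bs_limit; };
struct link_tx { uint32_t link_bs_counter, link_bs_limit; };
struct conn_rx { uint32_t buffer_counter, buffer_limit; };
struct link_rx { uint32_t link_buffer_counter, link_buffer_limit; };

/* Upstream (FSPP) side: a cell may be launched only if neither the
 * per-connection nor the link-wide buffer-state account is exhausted.
 * On an actual transmission, BS_Counter and Link_BS_Counter would then
 * both be incremented. */
static bool may_transmit(const struct conn_tx *c, const struct link_tx *l)
{
    if (c->bs_counter >= c->bs_limit) return false;          /* connection out of credit */
    if (l->link_bs_counter >= l->link_bs_limit) return false; /* shared pool exhausted    */
    return true;
}

/* Downstream (TSPP) side: a received cell is buffered only if neither the
 * connection-level nor the link-level buffer limit would be exceeded;
 * otherwise it is discarded. */
static bool admit_cell(struct conn_rx *c, struct link_rx *l)
{
    if (c->buffer_counter >= c->buffer_limit) return false;
    if (l->link_buffer_counter >= l->link_buffer_limit) return false;
    c->buffer_counter++;        /* cell accepted into a connection buffer  */
    l->link_buffer_counter++;   /* and counted against the shared pool     */
    return true;
}
```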
The update event at the link level involves the generation of a link update record when the value in Link_N2_Counter 66 reaches (equals or exceeds) the value in Link_N2_Limit 64, as shown in Figs. 5B and 6B. In a first embodiment, Link_N2_Limit 64 is set to forty.
The link update record, the value taken from Link_Fwd_Counter 68 in the embodiment of Fig. 6B, is mixed with the per-connection update records (the value of Fwd_Counter 38') in update cells transferred to the FSPP 16'. In the embodiment of Fig. 5B, the value of Link_Rx_Counter 70 minus Link_Buffer_Counter 62 is mixed with the per-connection update records. When the upstream element 12' receives the update cell having the link update record, it sets the Link_BS_Counter 50 equal to the value of Link_Tx_Counter 54 minus the value in the update record (Fig. 7B). Thus, Link_BS_Counter 50 in the upstream element 12' is reset to reflect the number of data cells transmitted by the upstream element 12', but not yet released in the downstream element 14'.
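The two sides of the link-level update event can be sketched as follows. This C fragment is illustrative only; the names are hypothetical and the counters are modeled as plain unsigned integers.

```c
#include <stdint.h>

/* Receiver (TSPP) side: when Link_N2_Counter reaches Link_N2_Limit, build a
 * link update record carrying Link_Fwd_Counter (Fig. 6B) or
 * Link_Rx_Counter - Link_Buffer_Counter (Fig. 5B). */
struct link_rx_state {
    uint32_t link_n2_counter, link_n2_limit;
    uint32_t link_fwd_counter, link_rx_counter, link_buffer_counter;
};

static int maybe_build_link_update(struct link_rx_state *s, int use_fwd_counter,
                                   uint32_t *record_out)
{
    if (s->link_n2_counter < s->link_n2_limit)
        return 0;                                    /* no link update due yet */
    *record_out = use_fwd_counter ? s->link_fwd_counter
                                  : s->link_rx_counter - s->link_buffer_counter;
    s->link_n2_counter = 0;                          /* restart the update interval */
    return 1;
}

/* Transmitter (FSPP) side, on receipt of the link update record (Fig. 7B):
 * Link_BS_Counter is reset to the number of cells transmitted onto the link
 * but not yet released downstream. */
static void apply_link_update(uint32_t *link_bs_counter, uint32_t link_tx_counter,
                              uint32_t record)
{
    *link_bs_counter = link_tx_counter - record;
}
```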
The actual implementation of the transfer of an update record, in a first embodiment, recognizes that for each TSPP subsystem 14', there is an associated FSPP processor (not illustrated), and for each FSPP subsystem 12', there is also an associated TSPP processor (not illustrated). Thus, when an update record is ready to be transmitted by the TSPP subsystem 14' back to the upstream FSPP subsystem 12', the TSPP 18' conveys the update record to the associated FSPP (not illustrated), which constructs an update cell. The cell is conveyed from the associated FSPP to the TSPP (not illustrated) associated with the upstream FSPP subsystem 12'. The associated TSPP strips out the update record from the received update cell, and conveys the record to the upstream FSPP subsystem 12'.
The check event at the link level involves the transmission of a check cell having the Link_Tx_Counter 54 value by the FSPP 16' every "W" check cells (Figs. 8A and 9A). In a first embodiment, W is equal to four. At the receiver element 14', the TSPP 18' performs the previously described check functions at the connection level, as well as increasing the Link_Fwd_Counter 68 value by an amount equal to the check record contents, Link_Tx_Counter 54, minus the sum of Link_Buffer_Counter 62 plus Link_Fwd_Counter 68 in the embodiment of Fig. 9C. In the embodiment of Fig. 8C, Link_Rx_Counter 70 is modified to equal the contents of the check record (Link_Tx_Counter 54). This is an accounting for errored cells on a link-wide basis. An update record is then generated having a value taken from the updated Link_Fwd_Counter 68 or Link_Rx_Counter 70 values (Figs. 8C and 9C).
It is necessary to perform the check event at the link level in addition to the connection level in order to readjust the Link_Fwd_Counter 68 value (Fig. 9C) or Link_Rx_Counter 70 value (Fig. 8C) quickly in the case of large transient link failures.
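The link-level check processing at the receiver mirrors the connection-level case. The sketch below is illustrative only; names are hypothetical and the returned value is the payload of the update record generated immediately after the check.

```c
#include <stdint.h>

/* Link-level check handling at the receiver.  Every W check cells the FSPP
 * sends Link_Tx_Counter; the TSPP folds any link-wide cell loss into
 * Link_Fwd_Counter (Fig. 9C) or resynchronizes Link_Rx_Counter (Fig. 8C). */
struct link_check_state {
    uint32_t link_fwd_counter, link_rx_counter, link_buffer_counter;
};

static uint32_t link_check_fwd(struct link_check_state *s, uint32_t link_tx_counter)
{
    uint32_t errored = link_tx_counter
                     - (s->link_buffer_counter + s->link_fwd_counter);
    s->link_fwd_counter += errored;      /* account for cells lost on the link */
    return s->link_fwd_counter;          /* carried in the resulting update record */
}

static uint32_t link_check_rx(struct link_check_state *s, uint32_t link_tx_counter)
{
    s->link_rx_counter = link_tx_counter;    /* resynchronize to the check record */
    return s->link_rx_counter - s->link_buffer_counter;
}
```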
Again with regard to Fig. 2, the following are exemplary initial values for the illustrated counters in an embodiment having 100 connections in one link.
BS_Limit (24') = 20
Buffer_Limit (30') = 20
N2_Limit (34') = 3
Link_BS_Limit (52) = 1000
Link_Buffer_Limit (60) = 1000
Link_N2_Limit (64) = 40
The BS_Limit value equals the Buffer_Limit value for both the connections and the link. Though BS_Limit 24' and Buffer_Limit 30' are both equal to twenty, and there are 100 connections in this link, there are only 1000 buffers 28' in the downstream element, as reflected by Link_BS_Limit 52 and Link_Buffer_Limit 60. This is because of the buffer pool sharing enabled by link-level feedback.

Link-level flow control can be disabled, should the need arise, by not incrementing Link_BS_Counter, Link_N2_Counter, and Link_Buffer_Counter, and by disabling link-level check cell transfer. No updates will occur under these conditions.

The presently described invention can be further augmented with a dynamic buffer allocation scheme, such as previously described with respect to N2_Limit 34 and Link_N2_Limit 64. This scheme includes the ability to dynamically adjust limiting parameters such as BS_Limit 24, Link_BS_Limit 52, Buffer_Limit 30, and Link_Buffer_Limit 60, in addition to N2_Limit 34 and Link_N2_Limit 64. Such adjustment is in response to measured characteristics of the individual connections or the entire link in one embodiment, and is established according to a determined priority scheme in another embodiment. Dynamic buffer allocation thus provides the ability to prioritize one or more connections or links given a limited buffer resource.
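The example initialization listed above can be captured in a small configuration sketch. The C fragment below is illustrative only: the structure and field names are hypothetical, the values are simply those listed above, and the N2-related value is interpreted as Link_N2_Limit 64, consistent with the earlier statement that it is set to forty.

```c
#include <stdint.h>

/* Illustrative initial configuration for one link carrying 100 flow-controlled
 * connections, using the example values given above. */
struct conn_cfg { uint32_t bs_limit, buffer_limit, n2_limit; };
struct link_cfg { uint32_t link_bs_limit, link_buffer_limit, link_n2_limit; };

static const struct conn_cfg per_connection = {
    .bs_limit = 20, .buffer_limit = 20, .n2_limit = 3
};
static const struct link_cfg per_link = {
    .link_bs_limit = 1000, .link_buffer_limit = 1000, .link_n2_limit = 40
};

/* Note the over-allocation that buffer sharing permits: 100 connections times
 * 20 per-connection buffer credits is 2000 nominal credits, backed by only
 * 1000 real buffers accounted for at the link level. */
```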
The Link_N2_Limit is set according to the desired accuracy of buffer accounting. On a link-wide basis, as the number of connections within the link increases, it may be desirable to decrease Link_N2_Limit, since accurate buffer accounting allows greater buffer sharing among many connections. Conversely, if the number of connections within the link decreases, Link_N2_Limit may be increased, since the criticality of sharing limited resources among a relatively small number of connections is decreased.
In addition to adjusting the limits on a per-link basis, it may also be desirable to adjust limits on a per-connection basis in order to change the maximum sustained bandwidth for the connection.
The presently disclosed dynamic allocation schemes are implemented during link operation, based upon previously prescribed performance goals. In a first embodiment of the present invention, incrementing logic for all counters is disposed within the FSPP processor 16'.

Related thereto, the counters previously described as being reset to zero and counting up to a limit can be implemented in a further embodiment as starting at the limit and counting down to zero. The transmitter and receiver processors interpret the limits as starting points for the respective counters, and decrement upon detection of the appropriate event. For instance, if Buffer_Counter (or Link_Buffer_Counter) is implemented as a decrementing counter, each time a data cell is allocated to a buffer within the receiver, the counter would decrement. When a data cell is released from the respective buffer, the counter would increment. In this manner, the counter reaching zero would serve as an indication that all available buffers have been allocated. Such implementation is less easily employed in a dynamic bandwidth allocation scheme since dynamic adjustment of the limits must be accounted for in the non-zero counts.
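A minimal sketch of the count-down variant just described follows; the names are hypothetical and only the basic increment/decrement behavior is shown.

```c
#include <stdbool.h>
#include <stdint.h>

/* Count-down counter: initialized to the configured limit (e.g. Buffer_Limit),
 * decremented when a buffer is consumed, incremented when one is released;
 * zero indicates that all available buffers have been allocated. */
struct down_counter { uint32_t remaining; };

static bool buffer_alloc(struct down_counter *c)
{
    if (c->remaining == 0) return false;   /* all available buffers in use */
    c->remaining--;
    return true;
}

static void buffer_release(struct down_counter *c)
{
    c->remaining++;
}
```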
A further enhancement of the foregoing zero cell loss, link-level flow control technique includes providing a plurality of shared cell buffers 28" in a downstream element 14" wherein the cell buffers 28" are divided into N prioritized cell buffer subsets, Priority 0 108a, Priority 1 108b, Priority 2 108c, and Priority 3 108d, by N - 1 threshold level(s), Threshold(1) 102, Threshold(2) 104, and Threshold(3) 106. Such a cell buffer pool 28" is illustrated in Fig. 10, in which four priorities labelled Priority 0 through Priority 3 are illustrated as being defined by three thresholds labelled Threshold(1) through Threshold(3).
This prioritized buffer pool enables the transmission of high priority connections while lower priority connections are "starved" or prevented from transmitting cells downstream during periods of link congestion. Cell priorities are identified on a per-connection basis. The policy by which the thresholds are established is defined according to a predicted model of cell traffic in a first embodiment, or, in an alternative embodiment, is dynamically adjusted. Such dynamic adjustment may be in response to observed cell traffic at an upstream transmitting element, or according to empirical cell traffic data as observed at the prioritized buffer pool in the downstream element. For example, in an embodiment employing dynamic threshold adjustment, it may be advantageous to lower the number of buffers available to data cells having a priority less than Priority 0, or conversely to increase the number of buffers above Threshold(3), if a significantly larger quantity of Priority 0 traffic is detected.
The cell buffer pool 28" depicted in Fig. 10 iε taken from the vantage point of a modified version 12" of the foregoing link-level flow control upstream element 12', the pool 28" being resident within a corresponding downstream element 14". This modified upstream element 12", viewed in Fig. 11, has at least one Link_BS_Threshold(n) 100, 102, 104 established in association with a Link_BS_Counter 50" and Link_BS_Limit 52", as described above, for characterizing a cell buffer pool 28" in a downstream element 14". These Link_BS_Thresholds 102, 104, 106 define a number of cell buffers in the pool 28" which are allocatable to cells of a given priority, wherein the priority is identified by a register 108 asεociated with the BS_Counter 22" counter and BS_Limit 24" register for each connection 20". The Priorities 108a, 108b, 108c, 108d illustrated in Fig. 11 are identified as Priority 0 through Priority 3, Priority 0 being the higheεt. When there is no congestion, as reflected by Link_BS_Counter 50" being less than Link_BS_Threεhold(1) 102 in Figε. 10 and 11, flow-controlled connectionε of any priority can tranεmit. As congestion occurs, as indicated by an increasing value in the Link_BS_Counter 50", lower priority connections are denied access to downstream buffers, in effect disabling their transmiεsion of cellε. In the case of εevere congestion, only cells of the higheεt priority are allowed to tranεmit. For inεtance, with respect again to Fig. 10, only cells of Priority o 108a are enabled for transmission from the upstream element 12" to the downstream element 14" if the link-level Link_BS_Threεhold(3) 106 haε been reached downεtream. Thuε, higher priority connectionε are less effected by the state of the network because they have first accesε to the εhared downstream buffer pool. Note, however, that connection-level flow control can still prevent a high-priority connection from transmitting, if the path that connection is intended for is severely congested. As above, Link_BS_Counter 50" is periodically updated based upon a value contained within a link-level update record transmitted from the downstream element 14" to the upstream element 12". This periodic updating is required in order to ensure accurate function of the prioritized buffer access of the present invention. In an embodiment of the present invention in which the Threshold levels 102, 104, 106 are modified dynamically, either aε a result of tracking the priority asεociated with cells received at the upstream transmitter element or based upon observed buffer usage in the downstream receiver element, it is necessary for the FSPP 16" to have an accurate record of the state of the cell bufferε 28", aε afforded by the update function.
The multiple priority levels enable different categories of service, in terms of delay bounds, to be offered within a single quality of service. Within each quality of service, highest priority to shared buffers is typically given to connection/network management traffic, as identified by the cell header. Second highest priority is given to low bandwidth, small burst connections, and third highest for bursty traffic. With prioritization allocated as described, congestion within any one of the service categories will not prevent connection/management traffic from having the lowest cell delay.
Initialization of the upstream element 12" as depicted in Fig. 11 is illuεtrated in Fig. 12A. Essentially, the same counters and registers are εet as viewed in Fig. 3A for an upstream element 12' not enabling prioritized access to a shared buffer resource, with the exception that Link_BS_Threshold 102, 104, 106 values are initialized to a respective buffer value T. As discussed, these threshold buffer values can be pre-established and static, or can be adjusted dynamically based upon empirical buffer usage data.
Fig. 12B represents many of the same tests employed prior to forwarding a cell from the upstream element 12" to the downstream element 14" as shown in Fig. 3B, with the exception that an additional test is added for the provision of prioritized access to a shared buffer resource. Specifically, the FSPP 16" uses the priority value 108 associated with a cell to be transferred to determine a threshold value 102, 104, 106 above which the cell cannot be transferred to the downstream element 14". Then, a test is made to determine whether the Link_BS_Counter 50" value is greater than or equal to the appropriate threshold value 102, 104, 106. If so, the data cell is not transmitted. Otherwise, the cell is transmitted and connection-level congestion tests are executed, as previously described.

In alternative embodiments, more or fewer than four priorities can be implemented with the appropriate number of thresholds, wherein the fewest number of priorities is two, and the corresponding fewest number of thresholds is one. For every N priorities, there are N - 1 thresholds. In yet a further embodiment, flow control is provided solely at the link level, and not at the connection level, though it is still necessary for each connection to provide some form of priority indication akin to the priority field 108 illustrated in Fig. 11.

The link-level flow controlled protocol as previously described can be further augmented in yet another embodiment to enable a guaranteed minimum cell rate on a per-connection basis with zero cell loss. This minimum cell rate is also referred to as guaranteed bandwidth. The connection can be flow-controlled below this minimum, allocated rate, but only by the receiver elements associated with this connection. Therefore, the minimum rate of one connection is not affected by congestion within other connections.
It is a requirement of the presently disclosed mechanism that cells present at the upstream element associated with the FSPP 116 be identified by whether they are to be transmitted from the upstream element using allocated bandwidth, or whether they are to be transmitted using dynamic bandwidth. For instance, the cells may be provided in queues associated with a list labelled "preferred," indicative of cells requiring allocated bandwidth. Similarly, the cells may be provided in queues associated with a list labelled "dynamic," indicative of cells requiring dynamic bandwidth.
In a frame relay setting, the present mechanism is used to monitor and limit both dynamic and allocated bandwidth. In a setting involving purely internet traffic, only the dynamic portions of the mechanism may be of significance. In a setting involving purely CBR flow, only the allocated portions of the mechanism would be employed. Thus, the presently disclosed method and apparatus enables the maximized use of mixed scheduling connections, from those requiring all allocated bandwidth to those requiring all dynamic bandwidth, and connections therebetween.
In the present mechanism, a downstream cell buffer pool 128, akin to the pool 28' of Fig. 2, is logically divided between an allocated portion 300 and a dynamic portion 301, whereby cells identified as to receive allocated bandwidth are buffered within this allocated portion 300, and cells identified as to receive dynamic bandwidth are buffered in the dynamic portion 301. Fig. 13A shows the two portions 300, 301 as distinct entities; the allocated portion is not a physically distinct block of memory, but represents a number of individual cell buffers, located anywhere in the pool 128.

In a further embodiment, the presently disclosed mechanism for guaranteeing minimum bandwidth is applicable to a mechanism providing prioritized access to downstream buffers, as previously described in conjunction with Figs. 10 and 11. With regard to Fig. 13B, a downstream buffer pool 228 is logically divided among an allocated portion 302 and a dynamic portion 208, the latter logically subdivided by threshold levels 202, 204, 206 into prioritized cell buffer subsets 208a-d. As with Fig. 13A, the division of the buffer pool 228 is a logical, not physical, division.
Elements required to implement this guaranteed minimum bandwidth mechanism are illustrated in Fig. 14, where like elements from Figs. 2 and 11 are provided with like reference numbers, increased by 100 or 200. Note that no new elements have been added to the downstream element; the presently described guaranteed minimum bandwidth mechanism is transparent to the downstream element.
New aspects of flow control are found at both the connection and link levels. With respect first to the connection level additions and modifications, D_BS_Counter 122 highlights resource consumption by tracking the number of cells scheduled using dynamic bandwidth transmitted downstream to the receiver 114. This counter has essentially the same function as BS_Counter 22' found in Fig. 2, where there was no differentiation between allocated and dynamically scheduled cell traffic. Similarly, D_BS_Limit 124, used to provide a ceiling on the number of downstream buffers available to store cells from the transmitter 112, finds a corresponding function in BS_Limit 24' of Fig. 2. As discussed previously with respect to link level flow control, the dynamic bandwidth can be statistically shared; the actual number of buffers available for dynamic cell traffic can be over-allocated. The amount of "D" buffers provided to a connection is equal to the RTT times the dynamic bandwidth plus N2. RTT includes delays incurred in processing the update cell.

A_BS_Counter 222 and A_BS_Limit 224 also track and limit, respectively, the number of cells a connection can transmit by comparing a transmitted number with a limit on buffers available. However, these values apply strictly to allocated cells; allocated cells are those identified as requiring allocated bandwidth (the guaranteed minimum bandwidth) for transmission. Limit information is set up at connection initialization time and can be raised and lowered as the guaranteed minimum bandwidth is changed. If a connection does not have an allocated component, the A_BS_Limit 224 will be zero. The A_BS_Counter 222 and A_BS_Limit 224 are in addition to the D_BS_Counter 122 and D_BS_Limit 124 described above. The amount of "A" buffers dedicated to a connection is equal to the RTT times the allocated bandwidth plus N2. The actual number of buffers dedicated to allocated traffic cannot be over-allocated. This ensures that congestion on other connections does not impact the guaranteed minimum bandwidth.
A connection loses, or runs out of, its allocated bandwidth through the associated upstream switch once it has enqueued a cell but has no more "A" buffers as reflected by A_BS_Counter 222 and A_BS_Limit 224. If a connection is flow controlled below its allocated rate, it loses a portion of its allocated bandwidth in the switch until the congestion condition is alleviated. Such may be the case in multipoint-to-point (M2P) switching, where plural sources on the same connection, all having a minimum guaranteed rate, converge on a single egress point whose rate is less than the sum of the source rates. In an embodiment of the presently disclosed mechanism in which the transmitter element is a portion of a switch having complementary switch flow control, the condition of not having further "A" buffer states inhibits the intra-switch transmission of further allocated cell traffic for that connection.
The per-connection buffer return policy is to return buffers to the allocated pool first, until the A_BS_Counter 222 equals zero. Then buffers are returned to the dynamic pool, decreasing D_BS_Counter 122. Tx_Counter 126 and Priority 208 are provided as described above with respect to connection-level flow control and prioritized access.
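The return policy can be sketched in a few lines of C. The names are hypothetical; the returned value represents the portion credited back to the allocated pool, which, as described below, is also subtracted from Link_A_BS_Counter.

```c
#include <stdint.h>

/* Hypothetical per-connection upstream state for the allocated/dynamic split. */
struct ad_conn {
    uint32_t a_bs_counter;   /* A_BS_Counter 222: allocated cells outstanding */
    uint32_t d_bs_counter;   /* D_BS_Counter 122: dynamic cells outstanding   */
};

/* Credit 'released' buffers reported by an update record: the allocated pool
 * is refilled first, until A_BS_Counter reaches zero, then the remainder goes
 * to the dynamic pool.  The upstream side cannot tell which kind of buffer
 * was actually freed downstream, so this ordering is the stated policy. */
static uint32_t return_buffers(struct ad_conn *c, uint32_t released)
{
    uint32_t to_allocated = (released < c->a_bs_counter) ? released : c->a_bs_counter;
    c->a_bs_counter -= to_allocated;
    c->d_bs_counter -= (released - to_allocated);
    return to_allocated;     /* amount also subtracted from Link_A_BS_Counter */
}
```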
On the link level, the following elements are added to enable guaranteed minimum cell rate on a per-connection basis. Link_A_BS_Counter 250 is added to the FSPP 116. It tracks all cells identified as requiring allocated bandwidth that are "in-flight" between the FSPP 116 and the downstream switch fabric, including cells in the TSPP 118 cell buffers 128, 228. The counter 250 is decreased by the same amount as the A_BS_Counter 222 for each connection when a connection level update function occurs (discussed subsequently).
Link_BS_Limit 152 reflects the total number of buffers available to dynamic cells only, and does not include allocated buffers. Link_BS_Counter 150, however, reflects a total number of allocated and dynamic cells transmitted. Thus, connections are not able to use their dynamic bandwidth when Link_BS_Counter 150 (all cells in-flight, buffered, or in downstream switch fabric) minus Link_A_BS_Counter 250 (all allocated cells transmitted) is greater than Link_BS_Limit 152 (the maximum number of dynamic buffers available). This is necessary to ensure that congestion does not impact the allocated bandwidth. The sum of all individual A_BS_Limit 224 values, or the total per-connection allocated cell buffer space 300, 302, is in one embodiment less than the actually dedicated allocated cell buffer space in order to account for the potential effect of stale (i.e., low frequency) connection-level updates.
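The link-wide gate on dynamic traffic is a single comparison. The C sketch below is illustrative only; the parameter names are hypothetical stand-ins for the counters just described.

```c
#include <stdbool.h>
#include <stdint.h>

/* Dynamic transmission is permitted only while the dynamic share of in-flight
 * cells (all cells minus the allocated ones) has not exceeded the number of
 * buffers reserved for dynamic traffic.  Allocated traffic is not gated here,
 * so congestion cannot erode the guaranteed minimum bandwidth. */
static bool dynamic_may_transmit(uint32_t link_bs_counter,
                                 uint32_t link_a_bs_counter,
                                 uint32_t link_bs_limit)
{
    return (link_bs_counter - link_a_bs_counter) <= link_bs_limit;
}
```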
Update and check events are also implemented in the presently disclosed allocated/dynamic flow control mechanism. The downstream element 114 transmits connection level update cells either when a preferred list and a VBR-priority 0 list are empty and an update queue is fully packed, or when a "max_update_interval" (not illustrated) has been reached. At the upstream end 112, the update cell is analyzed to identify the appropriate queue, and the FSPP 116 adjusts the A_BS_Counter 222 and D_BS_Counter 122 for that queue, returning cell buffers to "A" first and then "D", as described above, since the FSPP 116 cannot distinguish between allocated and dynamic buffers. The number of "A" buffers returned to individual connections is subtracted from Link_A_BS_Counter 250.
Other link level elements used in association with the presently disclosed minimum guaranteed bandwidth mechanism, such as Link_Tx_Counter 154, function as described in the foregoing discussion of link level flow control. Also, as previously noted, a further embodiment of the presently described mechanism functions with a link level flow control scenario incorporating prioritized access to the downstream buffer resource 228 through the use of thresholds 202, 204, 206. The functions of these elements are as described in the foregoing.
The following is an example of a typical initialization in a flow controlled link according to the present disclosure:

Downstream element has 3000 buffers;
Link is short haul, so RTT*bandwidth equals one cell;
100 allocated connections requiring 7 "A" buffers each, consuming 700 buffers total;
3000 - 700 = 2300 "D" buffers to be shared among 512 connections having zero allocated bandwidth;
Link_BS_Limit = 2300.
If D_BS_Counter >= D_BS_Limit, then the queue is prevented from indicating that it has a cell ready to transmit. In the embodiment referred to above in which the upstream element is a switch having composite bandwidth, this occurs by the queue being removed from the dynamic list, preventing the queue from being scheduled for transmit using dynamic bandwidth. For allocated cells, a check is made when each cell is enqueued to determine whether the cell, plus other enqueued cells, plus A_BS_Counter, is a number greater than A_BS_Limit. If not, the cell is enqueued and the queue is placed on the preferred list. Else, the connection is prevented from transmitting further cells through the upstream element 112 switch fabric.
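These two per-queue eligibility checks can be sketched as follows. The structure and function names are hypothetical; the enqueued-cell count and list handling are simplified stand-ins for the preferred/dynamic list mechanism described above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-queue state for the scheduling checks described above. */
struct sched_queue {
    uint32_t d_bs_counter, d_bs_limit;   /* dynamic buffer-state account   */
    uint32_t a_bs_counter, a_bs_limit;   /* allocated buffer-state account */
    uint32_t enqueued_allocated;         /* allocated cells already queued */
};

/* A queue may advertise a cell for transmission on dynamic bandwidth only
 * while its dynamic account still has credit. */
static bool dynamic_ready(const struct sched_queue *q)
{
    return q->d_bs_counter < q->d_bs_limit;
}

/* Enqueue test for an allocated cell: the new cell, the cells already
 * enqueued, and the allocated cells in flight must together stay within
 * A_BS_Limit; otherwise the connection holds the cell back. */
static bool allocated_enqueue(struct sched_queue *q)
{
    if (1 + q->enqueued_allocated + q->a_bs_counter > q->a_bs_limit)
        return false;                    /* would exceed the allocated share */
    q->enqueued_allocated++;             /* cell queued; queue joins the preferred list */
    return true;
}
```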
Initialization of the upstream element 112 as depicted in Fig. 14 is illustrated in Fig. 15A. Essentially, the same counters and registers are set as in Fig. 3A for an upstream element 12' (when prioritized access to a shared buffer resource is not enabled), and as in Fig. 12A for an upstream element 12" (when prioritized access is enabled). Exceptions include: Link_A_BS_Counter 250 initialized to zero; connection-level allocated and dynamic BS_Counters 122, 222 set to zero; and connection-level allocated and dynamic BS_Limits 124, 224 set to respective values of NA and ND. Similarly, on the downstream end at the connection level, the allocated and dynamic Buffer_Limits and Buffer_Counters are set, with the Buffer_Limits employing a bandwidth value for the respective traffic type (i.e., BWA = allocated cell bandwidth and BWD = dynamic cell bandwidth). Further, each cell to be transmitted is identified as either requiring allocated or dynamic bandwidth as the cell is received from the switch fabric.
Fig. 15B represents many of the same tests employed prior to forwarding a cell from the upstream element 112 to the downstream element 114 as shown in Figs. 3B and 12B, with the following exceptions. Over-allocation of buffer states per connection is checked for dynamic traffic only and is calculated by subtracting Link_A_BS_Counter from Link_BS_Counter and comparing the result to Link_BS_Limit. Over-allocation on a link-wide basis is calculated from a summation of Link_BS_Counter (which tracks both allocated and dynamic cell traffic) and Link_A_BS_Counter against the Link_BS_Limit. Similarly, over-allocation at the downstream element is tested for both allocated and dynamic traffic at the connection level. As previously indicated, the presently disclosed mechanism for providing guaranteed minimum bandwidth can be utilized with or without the prioritized access mechanism, though aspects of the latter are illustrated in Figs. 15A and 15B for completeness.

As discussed, connection-level flow control as known in the art relies upon discrete control of each individual connection. In particular, between network elements such as a transmitting element and a receiving element, the control is from transmitter queue to receiver queue. Thus, even in the situation illustrated in Fig. 16 in which a single queue QA in a transmitter element is the source of data cells for four queues QW, QX, QY, and QZ associated with a single receiver processor, the prior art does not define any mechanism to handle this situation.

In Fig. 16, the transmitter element 10 is an FSPP element having an FSPP 11 associated therewith, and the receiver element 12 is a TSPP element having a TSPP 13 associated therewith. The FSPP 11 and TSPP 13 as employed in Fig. 16 selectively provide the same programmable capabilities as described above, such as link-level flow control, prioritized access to a shared, downstream buffer resource, and guaranteed minimum cell rate on a connection level, in addition to a connection-level flow control mechanism. Whether one or more of these enhanced capabilities are employed in conjunction with the connection-level flow control is at the option of the system configurator.
Yet another capability provided by the FSPP and TSPP according to the present disclosure is the ability to treat a group of receiver queues jointly for purposes of connection-level flow control. In Fig. 16, instead of utilizing four parallel connections, the presently disclosed mechanism utilizes one connection 16 in a link 14, terminating in four separate queues QW, QX, QY, and QZ, though the four queues are treated essentially as a single, joint entity for purposes of connection-level flow control. This is needed because some network elements need to use a flow controlled service but cannot handle the bandwidth of processing update cells when N2 is set to a low value, 10 or less (see above for a discussion of the update event in connection-level flow control). Setting N2 to a large value, such as 30, for a large number of connections requires large amounts of downstream buffering because of buffer orphaning, where buffers are not in use but are accounted for upstream as in use because of the lower frequency of update events. This mechanism is also useful to terminate Virtual Channel Connections (VCC) within a Virtual Path Connection (VPC), where flow control is applied to the VPC.
This ability to group receiver queues is a result of manipulations of the queue descriptor associated with each of the receiver queues QW, QX, QY, and QZ. With reference to Fig. 17, queue descriptors for the queues in the receiver are illustrated. Specifically, the descriptors for queues QW, QX, and QY are provided on the left, and in general have the same characteristics. One of the first fields pertinent to the present disclosure is a bit labelled "J." When set, this bit indicates that the associated queue is being treated as part of a joint connection in a receiver. Instead of maintaining all connection-level flow control information in each queue descriptor for each queue in the group, certain flow control elements are maintained only in one of the queue descriptors for the group. In the illustrated case, that one queue is queue QZ.
In each of the descriptors for queues QW, QX, and QY, a "Joint Number" field provides an offset or pointer to a set of flow control elements in the descriptor for queue QZ. This pointer field may provide another function when the "J" bit is not set. While Buffer_Limit (labelled "Buff_Limit" in Fig. 17) and N2_Limit are maintained locally within each respective descriptor, Joint_Buffer_Counter (labelled "Jt_Buff_Cntr"), Joint_N2_Counter (labelled "Jt_N2_Cntr"), and Joint_Forward_Counter (labelled "Jt_Fwd_Cntr") are maintained in the descriptor for queue QZ for all of the queues in the group. The same counters in the descriptors for queues QW, QX, and QY go unused.

The joint counters perform the same function as the individual counters, such as those illustrated in Fig. 2 at the connection level, but are advanced or decremented as appropriate by actions taken in association with the individual queues. Thus, for example, Joint_Buffer_Counter is updated whenever a buffer cell receives a data cell or releases a data cell in association with any of the group queues. The same applies to Joint_N2_Counter and Joint_Forward_Counter. In an alternate embodiment of the previously described flow control mechanism, each Forward_Counter is replaced with Receive_Counter. Similarly, in an alternative embodiment of the presently disclosed mechanism, Joint_Forward_Counter is replaced with Joint_Receive_Counter, depending upon which is maintained in each of the group queues. Only the embodiment including Forward_Counter and Joint_Forward_Counter is illustrated.

Not all of the per-queue descriptor elements are superseded by functions in a common descriptor. Buffer_Limit (labelled "Buff_Limit" in Fig. 17) is set and referred to on a per-queue basis. Thus, Joint_Buffer_Counter is compared against the Buffer_Limit of a respective queue. Optionally, the Buffer_Limit could be a Joint_Buffer_Limit, instead of maintaining individual, common limits. The policy is to set the same Buffer_Limit in all the TSPP queues associated with a single Joint_Buffer_Counter.
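The level of indirection provided by the "Joint Number" field can be sketched as follows. This C fragment is illustrative only: the field widths, layout, and function names are hypothetical and do not reflect the actual descriptor format of Fig. 17.

```c
#include <stdint.h>

/* Hypothetical queue descriptor carrying the joint flow control fields. */
struct queue_desc {
    unsigned j : 1;            /* set when the queue belongs to a joint group      */
    uint16_t joint_number;     /* index of the descriptor holding the joint counters */
    uint32_t buffer_limit;     /* Buff_Limit: kept per queue                       */
    uint32_t n2_limit;         /* N2_Limit: kept per queue                         */
    uint32_t jt_buff_cntr;     /* Joint_Buffer_Counter: valid in group descriptor  */
    uint32_t jt_n2_cntr;       /* Joint_N2_Counter: valid in group descriptor      */
    uint32_t jt_fwd_cntr;      /* Joint_Forward_Counter: valid in group descriptor */
};

/* Resolve the descriptor that actually carries the shared counters.  For the
 * group descriptor itself, and for ordinary point-to-point queues, the
 * Joint_Number points back at the queue's own descriptor. */
static struct queue_desc *joint_desc(struct queue_desc *table, uint16_t self)
{
    return &table[table[self].joint_number];
}

/* Releasing a buffered cell on any queue of the group advances the shared
 * counters held in the group descriptor. */
static void on_cell_released(struct queue_desc *table, uint16_t self)
{
    struct queue_desc *g = joint_desc(table, self);
    g->jt_buff_cntr--;
    g->jt_n2_cntr++;
    g->jt_fwd_cntr++;
}
```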
An update event is triggered, as previously described, when the Joint_N2_Counter reaches the queue-level N2_Limit. The policy is to set all of the N2_Limits equal to the same value for all the queues associated with a single joint flow control connection.
When a check cell is received for a connection, an effort to modify the Receive_Counter associated with the receiving queue results in a modification of the Joint_Receive_Counter. Thus, the level of indirection provided by the Joint_Number is applicable to both data cells and check cells.
At the transmitter element 10, only one set of upstream flow control elements is maintained. At connection set-up time, the joint connection is set up as a single, point-to-point connection, as far as the upstream elements are concerned. Therefore, instead of maintaining four sets of upstream elements for the embodiment of Fig. 16, the presently disclosed mechanism only requires one set of elements (Tx_Counter, BS_Counter, BS_Limit, all having the functionality as previously described).
Once a joint flow control entity has been established, other TSPP queues for additional connections may be added. To do so, each new queue must have the same N2_Limit and Buffer_Limit values. The queues for the additional connections will reference the common Joint_N2_Counter and either Joint_Forward_Counter or Joint_Receive_Counter.
As previously noted, when J = 1, the Joint_Number field is used as an offset to the group descriptor. The Joint_Number for the group descriptor is set to itself, as shown in Fig. 17 with regard to the descriptor for queue QZ. This is also the case in point-to-point connections (VCC to VCC rather than the VPC to VCC, as illustrated in Fig. 16), where each Joint_Number points to its own descriptor.
Implementation for each of point-to-point and the presently described point-to-multipoint connections is thus simplified.
Having described preferred embodiments of the invention, it will be apparent to those skilled in the art that other embodiments incorporating the concepts may be used. These and other examples of the invention illustrated above are intended by way of example and the actual scope of the invention is to be determined from the following claims.

Claims

What is claimed is:
1. A method for providing a minimum guaranteed bandwidth to a connection transmitting a cell over a link from a transmitter to a receiver for storage of said cell in a buffer resource in said receiver, said method comprising the steps of:
determining, by said transmitter, whether said cell to be transmitted requires allocated bandwidth or dynamic bandwidth;
generating, in said transmitter, a first count indicative of a number of cells transmitted to said receiver over said link for storage in said buffer resource using allocated bandwidth;
generating, in said transmitter, a second count indicative of a number of cells transmitted to said receiver over said link for storage in said buffer resource using dynamic bandwidth;
storing, in said transmitter, a limit to the number of cells requiring allocated bandwidth which can be stored in said buffer resource;
storing, in said transmitter, a limit to the number of cells requiring dynamic bandwidth which can be stored in said buffer resource;
identifying, by said transmitter, whether said limits are equaled or exceeded by a respective one of said first or second counts; and
disabling transmission of said cell, by said transmitter, if said limit corresponding to said required bandwidth of said cell to be transmitted is equaled or exceeded by a respective one of said first or second counts.
2. An apparatus providing guaranteed minimum per-connection bandwidth between a telecommunications network transmitter and a telecommunications network receiver, comprising:
a communications medium having transmitter and receiver ends;
said transmitter at said transmitter end of said communications medium for transmitting a data cell over said communications medium; and
said receiver at said receiver end of said communications medium for receiving said data cell, said receiver having a shared buffer resource associated therewith for storing said data cell,
wherein said shared buffer resource includes an allocated portion and a dynamic portion,
wherein said transmitter analyzes said data cell to determine whether said cell is to be transmitted using allocated or dynamic bandwidth, and
wherein said transmitter transmits said data cell to said receiver via said communications medium for storage in said shared buffer resource if buffers in said shared buffer resource are available for storing data cells in either said allocated portion or said dynamic portion.
PCT/US1996/011936 1995-07-19 1996-07-18 Minimum guaranteed cell rate method and apparatus WO1997004557A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP9506877A JPH11510008A (en) 1995-07-19 1996-07-18 Minimum guaranteed cell rate method and apparatus
PCT/US1996/011936 WO1997004557A1 (en) 1995-07-19 1996-07-18 Minimum guaranteed cell rate method and apparatus
AU65020/96A AU6502096A (en) 1995-07-19 1996-07-18 Minimum guaranteed cell rate method and apparatus

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US149895P 1995-07-19 1995-07-19
US60/001,498 1995-07-19
PCT/US1996/011936 WO1997004557A1 (en) 1995-07-19 1996-07-18 Minimum guaranteed cell rate method and apparatus

Publications (1)

Publication Number Publication Date
WO1997004557A1 true WO1997004557A1 (en) 1997-02-06

Family

ID=38659686

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/011936 WO1997004557A1 (en) 1995-07-19 1996-07-18 Minimum guaranteed cell rate method and apparatus

Country Status (3)

Country Link
JP (1) JPH11510008A (en)
AU (1) AU6502096A (en)
WO (1) WO1997004557A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009001954A1 (en) * 2007-06-27 2008-12-31 Nippon Shokubai Co., Ltd. Method of producing water-absorbent resin

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101367825B1 (en) * 2006-06-30 2014-02-26 닛폰 이타가라스 가부시키가이샤 Glass substrate for reflecting mirror, reflecting mirror having the glass substrate, glass substrate for liquid crystal panel, and liquid crystal panel having the glass substrate
CN108011845A (en) * 2016-10-28 2018-05-08 深圳市中兴微电子技术有限公司 A kind of method and apparatus for reducing time delay

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4603382A (en) * 1984-02-27 1986-07-29 International Business Machines Corporation Dynamic buffer reallocation
US5093912A (en) * 1989-06-26 1992-03-03 International Business Machines Corporation Dynamic resource pool expansion and contraction in multiprocessing environments
US5483526A (en) * 1994-07-20 1996-01-09 Digital Equipment Corporation Resynchronization method and apparatus for local memory buffers management for an ATM adapter implementing credit based flow control
US5533009A (en) * 1995-02-03 1996-07-02 Bell Communications Research, Inc. Bandwidth management and access control for an ATM network

Also Published As

Publication number Publication date
JPH11510008A (en) 1999-08-31
AU6502096A (en) 1997-02-18

Similar Documents

Publication Publication Date Title
US6002667A (en) Minimum guaranteed cell rate method and apparatus
WO1997003549A2 (en) Prioritized access to shared buffers
US6456590B1 (en) Static and dynamic flow control using virtual input queueing for shared memory ethernet switches
EP0719012B1 (en) Traffic management and congestion control for packet-based network
US5787071A (en) Hop-by-hop flow control in an ATM network
JP2753468B2 (en) Digital communication controller
JP2693266B2 (en) Data cell congestion control method in communication network
US6625121B1 (en) Dynamically delisting and relisting multicast destinations in a network switching node
US4769810A (en) Packet switching system arranged for congestion control through bandwidth management
CA2118471C (en) Upc-based traffic control framework for atm networks
US6785236B1 (en) Packet transmission scheduling with threshold based backpressure mechanism
EP0693840B1 (en) Traffic control system having distributed rate calculation and link-based flow control
US4769811A (en) Packet switching system arranged for congestion control
EP0647081B1 (en) Method and apparatus for controlling congestion in a communication network
US6717912B1 (en) Fair discard system
US7054269B1 (en) Congestion control and traffic management system for packet-based networks
CA2214838C (en) Broadband switching system
US6005843A (en) Virtual source/virtual destination device of asynchronous transfer mode network
AU719514B2 (en) Broadband switching system
US5617409A (en) Flow control with smooth limit setting for multiple virtual circuits
US6526062B1 (en) System and method for scheduling and rescheduling the transmission of cell objects of different traffic types
EP0894380A1 (en) Method for flow controlling atm traffic
WO1997004546A1 (en) Method and apparatus for reducing information loss in a communications network
WO1997004557A1 (en) Minimum guaranteed cell rate method and apparatus
WO1997004556A1 (en) Link buffer sharing method and apparatus

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref country code: JP

Ref document number: 1997 506877

Kind code of ref document: A

Format of ref document f/p: F

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA