WO2001080504A1 - Packet switch including usage monitor and scheduler - Google Patents

Packet switch including usage monitor and scheduler

Info

Publication number
WO2001080504A1
Authority
WO
WIPO (PCT)
Prior art keywords
queue
customer
allotment
customers
network
Prior art date
Application number
PCT/US2001/004180
Other languages
English (en)
Inventor
Harsh Kapoor
Paul Gallo
Douglas Walker
Brian Myrick
Original Assignee
Appian Communications, Inc.
Priority date
Filing date
Publication date
Application filed by Appian Communications, Inc. filed Critical Appian Communications, Inc.
Priority to AU2001236810A priority Critical patent/AU2001236810A1/en
Publication of WO2001080504A1 publication Critical patent/WO2001080504A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/62Queue scheduling characterised by scheduling criteria
    • H04L47/625Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/627Queue scheduling characterised by scheduling criteria for service slots or service orders policing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/11Identifying congestion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/29Flow control; Congestion control using a combination of thresholds
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/52Queue scheduling by attributing bandwidth to queues
    • H04L47/522Dynamic queue service slot or variable bandwidth allocation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/62Queue scheduling characterised by scheduling criteria
    • H04L47/621Individual queue per connection or flow, e.g. per VC
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/10Packet switching elements characterised by the switching fabric construction
    • H04L49/103Packet switching elements characterised by the switching fabric construction using a shared central buffer; using a shared memory
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/20Support for services
    • H04L49/205Quality of Service based
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/30Peripheral units, e.g. input or output ports
    • H04L49/3018Input queuing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/30Peripheral units, e.g. input or output ports
    • H04L49/3027Output queuing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/35Switches specially adapted for specific applications
    • H04L49/351Switches specially adapted for specific applications for local area network [LAN], e.g. Ethernet switches

Definitions

  • This invention relates to packet switches for communication networks, and in particular to packet switches for allocating network access among customers.
  • A network typically includes a number of customers, all of whom share a common transmission line, path, or trunk. Since only one customer can use the trunk at any instant, a procedure must exist for permitting one customer to use the trunk while excluding all other customers.
  • In its simplest form, this allocation procedure operates as follows.
  • A customer who wants to send a packet on the trunk determines whether the trunk is in use. If the trunk is not in use, the customer places a packet on the trunk. If the trunk is already in use, the customer waits and tries again.
  • A disadvantage of this conventional allocation procedure is its unpredictability. As the number of customers using a network grows, traffic increases and waits for network access become progressively longer. In addition, with even a small number of customers it is possible for a single user to monopolize the network for extended periods. Consequently, with the conventional allocation procedure, it is not possible to guarantee to any one customer a fixed amount of network access.
  • A conventional approach to guaranteeing a customer a fixed amount of network access is to allocate specific time slots to each customer.
  • In such a time-division multiplexed system, a customer takes a turn at using the network for a limited time.
  • When that customer's time slot comes around again, that customer takes another turn at using the network for another limited time.
  • Although time-division multiplexing does succeed in guaranteeing a lower bound on a customer's access to the network, it also imposes an upper bound on that access.
  • A customer is always precluded from using the network during a competing customer's time slot. It is immaterial, in such a system, whether or not a competing customer actually needs to use the network when it is his turn to do so. Because data communication occurs in bursts, with long periods of silence between bursts, there is a significant probability in such a network that time slots will remain unused, and hence wasted.
  • The present invention addresses the disadvantages of the prior art by providing a packet switch for allocating network access among a plurality of network users, each of whom has an allotment of guaranteed access to a network.
  • The packet switch includes a queuing unit for maintaining a plurality of queue-sets.
  • Each queue-set corresponds to a user from the plurality of network users.
  • The queue-set corresponding to a particular user accepts data packets from that user to the exclusion of other users.
  • The packet switch further includes a usage monitor that tracks the extent to which each user has depleted his allotment of guaranteed access. On the basis of this usage information, a scheduler, in communication with both the queuing unit and the usage monitor, selects a queue-set and retrieves from that queue-set a data packet for transmission on the network.
  • The scheduler first selects packets from queue-sets associated with customers who have not depleted their allotments of guaranteed network access.
  • Once all guaranteed allotments have been satisfied, the scheduler grants network access on the basis of a supplemental allotment provided to each customer. Customers who have been allocated higher supplemental allotments receive proportionately more network access than customers who have been allocated lower supplemental allotments.
  • Each queue-set includes a plurality of queues, each corresponding to a different data packet priority.
  • A data classifier causes data packets to be placed in the correct priority queue.
  • The scheduler selects data packets among the queues according to customer-provided queue weights for each queue. If, as a result of network congestion, a particular packet must be dropped, the scheduler preferentially drops those in a lower-priority queue before those in a higher-priority queue. In this way, the packet switch ensures that the most important packets are most likely to be transmitted on the network even though they may have been queued later than the less important packets.
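  • The queue-set arrangement summarized above can be pictured, for illustration only, as one FIFO per priority level together with a congestion rule that discards from the lowest-priority occupied queue first. The Python sketch below is an assumption about one reasonable realization (the class name `QueueSet`, the four priority levels, and the customer identifiers are invented for the example); it is not the patent's implementation.

```python
from collections import deque

class QueueSet:
    """Per-customer queue-set: one FIFO queue per packet priority.

    Illustrative sketch only. Priority 0 is assumed to be the highest.
    """

    def __init__(self, num_priorities=4):
        self.queues = [deque() for _ in range(num_priorities)]

    def enqueue(self, packet, priority):
        # The data classifier places each packet in the queue that matches
        # the priority carried in the packet's header.
        self.queues[priority].append(packet)

    def drop_one(self):
        # Under congestion, discard from the lowest-priority occupied queue
        # first, so the most important packets are the most likely to survive.
        for queue in reversed(self.queues):
            if queue:
                return queue.pop()
        return None

    def is_empty(self):
        return all(not q for q in self.queues)


# One queue-set per customer, keyed by an (invented) customer identifier.
queuing_unit = {name: QueueSet() for name in ("16a", "16b", "16c", "16d")}
queuing_unit["16a"].enqueue(packet=b"payload", priority=2)
```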
  • FIG. 1 shows a network having a packet switch incorporating the subject matter of the invention;
  • FIG. 2 is a block diagram of the architecture of a selected packet switch from FIG. 1;
  • FIG. 3 shows the steps implemented by the input scheduler shown in FIG. 2;
  • FIG. 4 shows the steps implemented by the output scheduler shown in FIG. 2 in satisfying a customer's GBR; and
  • FIG. 5 shows the steps implemented by the output scheduler shown in FIG. 2 in satisfying a customer's MBR.
  • A packet switch 10 incorporating the principles of the invention has a local area network (LAN) interface 12 and a wide area network (WAN) interface 14.
  • The LAN interface 12 is in communication with a plurality of local customers 16a-d on a packet-switched network.
  • The wide area network interface 14 is in communication with a trunk 18 serving a wide area network.
  • Additional packet switches 20, 22, 24, each of which is likewise in communication with a plurality of remote customers 26, 28, 30, are also connected to the trunk 18.
  • An example of a packet-switched network suitable for connection to the LAN interface 12 is an Ethernet network.
  • A network suitable for use as a wide area network 18 is a telecommunication network such as a SONET (Synchronous Optical Network) ring.
  • Each local customer 16a-d is guaranteed an allotment of network usage time.
  • This allotment, which translates into a guaranteed bit rate, is assigned by a service provider on the basis of how much network access the customer is willing to pay for and on how much network access the service provider can guarantee.
  • The amount of network access that the service provider can guarantee depends on the difference between the maximum bit rate of the wide area network trunk 18 and the extent to which that maximum bit rate has already been committed to other customers.
  • The sum of the guaranteed bit rates for all the customers, both local customers 16a-d and remote customers 26, 28, 30, is less than or equal to the bandwidth of the trunk 18 serving the wide area network.
  • Each customer 16a-d is also provided with a supplemental allotment of network access that translates into a maximum burst rate.
  • This maximum burst rate lies between the guaranteed bit rate and the bandwidth, or carrying capacity, of the wide area network trunk 18.
  • The service provider assigns a maximum burst rate on the basis of how much network access the customer is willing to buy and the available bandwidth of the wide area network trunk 18.
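  • The two constraints just stated (the guaranteed bit rates may not together exceed the trunk bandwidth, and each maximum burst rate lies between that customer's guaranteed bit rate and the trunk bandwidth) amount to a simple admission check. The sketch below is illustrative only; the function name and the numeric example are invented.

```python
def can_admit(new_gbr, new_mbr, committed_gbrs, trunk_bandwidth):
    """Check whether a new customer's requested allotments can be granted.

    new_gbr         -- requested guaranteed bit rate (bits/s)
    new_mbr         -- requested maximum burst rate (bits/s)
    committed_gbrs  -- guaranteed bit rates already promised to other customers
    trunk_bandwidth -- carrying capacity of the wide area network trunk (bits/s)
    """
    # The sum of all guaranteed bit rates must not exceed the trunk bandwidth.
    if sum(committed_gbrs) + new_gbr > trunk_bandwidth:
        return False
    # The maximum burst rate lies between the guaranteed bit rate and the
    # carrying capacity of the trunk.
    return new_gbr <= new_mbr <= trunk_bandwidth


# Invented example: a 155 Mb/s trunk with 100 Mb/s already committed.
print(can_admit(20e6, 60e6, [40e6, 60e6], 155e6))   # True
print(can_admit(30e6, 200e6, [40e6, 60e6], 155e6))  # False: burst rate exceeds the trunk
```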
  • Each packet switch 10, 20, 22, 24 shown in FIG. 1 guarantees that each customer will obtain his guaranteed allotment of network usage. If a customer has already depleted his allotment, the packet switches 10, 20, 22, 24 determine if all the competing customers have had their guaranteed allotments satisfied. If this is the case, and if the network is idle, the packet switches 10, 20, 22, 24 grant each customer supplemental network access. The amount of supplemental network access granted to each customer depends on that customer's maximum burst rate, the maximum burst rates of all competing customers, and the unused capacity of the trunk 18.
  • Both the guaranteed allotment and the supplemental allotment can be changed through software, without the need to alter existing hardware.
  • This feature of the invention reduces the cost of altering service to any particular customer and also enhances the customer's flexibility. Because both allotments can easily be changed by software, a customer can experiment with different combinations in order to find a combination suitable for his needs.
  • A typical packet switch 10, shown in greater detail in FIG. 2, includes a first packet classifier 32 in communication with the local customers 16a-d on a packet-switched network 34.
  • Each packet transmitted by a customer 16a includes a header that contains information identifying the customer 16a and information indicating the priority that the customer 16a has assigned to the packet.
  • An input queuing unit 36 maintains separate input queue-sets 38a-d for each customer 16a-d on the network 34.
  • Each input queue-set 38a-d includes four queues corresponding to four priority levels.
  • However, a system incorporating the invention can have any number of priority levels, or only one priority level.
  • Based on the customer identified in the packet header, the first packet classifier 32 instructs an input DMA (direct memory access) module 40 to place the incoming packet into the input queue-set corresponding to that customer.
  • The first packet classifier 32 further instructs the input DMA module 40 to place the incoming packet in the particular queue, from that customer's input queue-set, that corresponds to the packet's priority.
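  • For illustration, the classification step just described (read the customer identity and priority from the packet header, then pick the matching queue) might look like the sketch below; the header layout and the `QueueSet`-style object with a `queues` list are assumptions carried over from the earlier example, not details given in the patent.

```python
from collections import namedtuple

# Invented header layout: the patent says only that the header identifies the
# customer and carries the customer-assigned priority, not how it is encoded.
PacketHeader = namedtuple("PacketHeader", ["customer_id", "priority"])

def classify(header, queue_sets):
    """First packet classifier: choose the queue for an incoming packet."""
    queue_set = queue_sets[header.customer_id]   # per-customer queue-set
    return queue_set.queues[header.priority]     # per-priority queue within it

# Usage (with the illustrative QueueSet sketched earlier):
#   queue = classify(PacketHeader("16a", 1), queuing_unit)
#   queue.append(b"payload")
```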
  • At the beginning of each scheduling cycle, each customer is allotted a number of bits guaranteed to be transmitted during that cycle. This number is communicated to each of the packet switches 10, 20, 22, 24 on the network. In each packet switch, this number is placed in a corresponding location in an allocated guaranteed bit rate (GBR) array 42 in communication with an input scheduler 44.
  • GBR allocated guaranteed bit rate
  • Each customer is also allotted a supplemental number of bits that may be, but need not be, transmitted during that cycle. This number is likewise communicated to each of the packet switches 10, 20, 22, 24 on the network. In each packet switch, this number is placed in a corresponding allocated maximum burst rate (MBR) array 46, also in communication with the input scheduler 44.
  • MBR allocated maximum burst rate
  • During each cycle, the input scheduler 44 selects from the input queue-sets 38a-d those data packets that are to be transmitted during that cycle.
  • The procedure used by the input scheduler 44 is a weighted round-robin in which each customer's data packets are selected on the basis of that customer's maximum burst rate, the maximum burst rates of all other customers, and the overall bandwidth of the trunk 18.
  • FIG. 3 shows the weighted round-robin procedure 48 followed by the input scheduler in selecting data packets for transmission.
  • The input scheduler first determines the bandwidth of the trunk serving the wide area network (step 50).
  • Next, the input scheduler looks up the guaranteed bit rate for each local customer (step 52). Since these bit rates are guaranteed, the input scheduler must accept the data packets offered by each local customer, to the extent that the number of such data packets does not cause that customer to exceed his guaranteed bit rate (step 54).
  • Although the input scheduler 44 could service a particular local customer completely before moving on to the next customer, such an algorithm presents several disadvantages. It would be unfair, for example, for a customer who only needs to send one data packet during a cycle to have to wait until another customer has finished sending hundreds of data packets. As a result, in the preferred embodiment, the input scheduler 44 accepts only a limited number of data packets from each customer during each iteration of the round-robin.
  • Having satisfied the guaranteed bit rates, the input scheduler determines how much trunk bandwidth is left over (step 56). In no case is this residual trunk bandwidth less than the trunk bandwidth reduced by the sum of the guaranteed bit rates for all customers, both local and remote. In fact, because data communication tends to occur in short bursts, there may be many cycles during which only a few customers offer data packets for transmission. During such cycles, the residual trunk bandwidth can be considerably greater.
  • The next step is to allocate this residual trunk bandwidth equitably among all customers (step 58).
  • To do so, the input scheduler first looks up the maximum burst rate for each customer (step 60). Then, to the extent that there exist data packets offered for transmission, the input scheduler selects from each customer's queue-set an equitable number of data packets (step 62). In the preferred embodiment, this equitable number is proportional to the ratio of a particular customer's maximum burst rate to the sum of the maximum burst rates of all local customers, weighted by the residual trunk bandwidth.
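  • The two phases of FIG. 3 — accept each local customer's packets up to the guaranteed bit rate, then divide the residual trunk bandwidth in proportion to the customers' maximum burst rates — can be sketched roughly as follows. The per-cycle bit budgets, the per-pass packet limit, and the data layout are assumptions made for the example; this is a sketch, not the patented algorithm itself.

```python
def input_schedule_cycle(offered, gbr, mbr, trunk_bits, per_pass_limit=4):
    """One cycle of the input scheduler's weighted round-robin (FIG. 3 sketch).

    offered    -- {customer: list of (packet, size_in_bits)} awaiting transmission
    gbr, mbr   -- {customer: bits per cycle} guaranteed / maximum burst allotments
    trunk_bits -- trunk capacity available this cycle, in bits
    """
    selected = []
    used = {c: 0 for c in offered}

    # Steps 52-54: accept packets up to each customer's guaranteed bit rate,
    # a few packets per pass so no customer waits behind another's long burst.
    budget = dict(gbr)
    progressed = True
    while progressed:
        progressed = False
        for customer, packets in offered.items():
            taken = 0
            while packets and taken < per_pass_limit and packets[0][1] <= budget[customer]:
                packet, size = packets.pop(0)
                budget[customer] -= size
                used[customer] += size
                selected.append(packet)
                taken += 1
                progressed = True

    # Steps 56-62: divide the residual trunk bandwidth among the customers in
    # proportion to their maximum burst rates.
    residual = trunk_bits - sum(used.values())
    total_mbr = sum(mbr.values()) or 1
    for customer, packets in offered.items():
        share = residual * mbr[customer] / total_mbr
        while packets and packets[0][1] <= share:
            packet, size = packets.pop(0)
            share -= size
            selected.append(packet)

    return selected
```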
  • When the input scheduler selects a data packet from a particular customer's queue-set, it must decide from which of the individual queues within the queue-set the data packet is to be retrieved. This decision is made whether the data packet is being selected to consume that customer's guaranteed bit rate (step 54) or to consume residual bandwidth (step 62).
  • The order in which individual queues from a queue-set are selected, and the number of data packets to be selected from each queue, can be adjusted by the customer.
  • The customer does so by adjusting the queue weights in a weighted round-robin implemented by the input scheduler 44.
  • The input scheduler performs this weighted round-robin procedure as part of selecting data packets to meet the customer's guaranteed bit rate (step 54) and as part of selecting data packets to consume residual trunk bandwidth (step 62).
  • The number of weights available for adjustment in the weighted round-robin is equal to the number of queues in each queue-set.
  • For example, the customer can specify that all data packets from high-priority queues within the queue-set are to be sent before any data packets from lower-priority queues are sent.
  • Alternatively, the customer can specify that data packets from lower-priority queues can be sent once a specified number of data packets from higher-priority queues have been sent. This feature is useful when network congestion results in the need to drop certain data packets.
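  • The customer-adjustable queue weights described above behave like a weighted round-robin across the queues of a single queue-set: each pass visits the queues in priority order and takes at most the configured number of packets from each. A very large weight on the highest-priority queue approximates the strict-priority behaviour mentioned first, while moderate weights give the interleaved behaviour mentioned second. The sketch below, including the fixed packet size and the example weights, is an illustrative assumption rather than the patent's literal procedure.

```python
from collections import deque

def select_from_queue_set(queues, weights, budget_bits, packet_bits=512 * 8):
    """Weighted round-robin within one customer's queue-set (illustrative).

    queues      -- list of deques, highest priority first
    weights     -- packets allowed from each queue per round-robin pass
    budget_bits -- bits the customer may still send (GBR share or residual share)
    packet_bits -- assumed fixed packet size, to keep the sketch simple
    """
    chosen = []
    while budget_bits >= packet_bits and any(queues):
        sent_this_pass = 0
        for queue, weight in zip(queues, weights):
            for _ in range(weight):
                if not queue or budget_bits < packet_bits:
                    break
                chosen.append(queue.popleft())
                budget_bits -= packet_bits
                sent_this_pass += 1
        if sent_this_pass == 0:   # remaining traffic is weighted out; stop
            break
    return chosen


# A large first weight approximates strict priority for the highest queue.
queues = [deque(["h1", "h2"]), deque(["m1"]), deque(["l1", "l2"]), deque()]
print(select_from_queue_set(queues, weights=[1000, 2, 1, 1], budget_bits=4 * 512 * 8))
# ['h1', 'h2', 'm1', 'l1']
```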
  • The input scheduler 44 sends the selected data packets to a second packet classifier 64.
  • This second packet classifier 64 also receives data packets transmitted by other packet switches 20, 22, 24 on the trunk 18 serving the wide area network.
  • The second packet classifier 64 determines the destination of each data packet that it receives. If the destination of that packet is one of the local customers 16a-d, the second packet classifier 64 routes that packet to a local queuing unit 66. If the destination of the packet is a remote customer, the second packet classifier 64 routes the packet to a network queuing unit 68.
  • The network queuing unit 68 maintains as many queue-sets 70 as there are customers in all local area networks serviced by all the packet switches 10, 20, 22, 24. Each such queue-set has as many queues as there are priority levels.
  • The network queuing unit 68 therefore maintains a queue structure identical to that maintained by the input queuing unit 36, with the exception that the number of queue-sets is equal to the sum of the number of local customers 16a-d and the number of remote customers 26, 28, 30.
  • A network output scheduler 72 selects packets from the queue-sets maintained by the network queuing unit 68 and transmits those packets onto the trunk 18 serving the wide area network.
  • The network output scheduler 72 is in communication with a usage monitor 74 that maintains two counter arrays: a guaranteed bit rate (GBR) counter array 76 and a maximum burst rate (MBR) counter array 78. Like the input scheduler 44, the network output scheduler 72 selects data packets from the queue-sets 70 on the basis of the guaranteed bit rate and maximum burst rate of each customer. Each element of the GBR counter array 76 is initialized to the corresponding value in the allocated GBR array 42. Similarly, each element of the MBR counter array 78 is initialized to the corresponding value in the allocated MBR array 46.
  • GBR guaranteed bit rate
  • MBR maximum burst rate
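  • A usage monitor of the kind just described, holding one GBR counter and one MBR counter per customer, re-initialized from the allocated arrays at each cycle boundary and decremented as packets are sent, could be sketched as below. The class and method names are invented, and the choice to charge guaranteed and supplemental usage to separate counters is an assumption made for clarity.

```python
class UsageMonitor:
    """Tracks how far each customer has depleted his allotments (illustrative)."""

    def __init__(self, allocated_gbr, allocated_mbr):
        self.allocated_gbr = dict(allocated_gbr)   # cf. allocated GBR array 42
        self.allocated_mbr = dict(allocated_mbr)   # cf. allocated MBR array 46
        self.start_cycle()

    def start_cycle(self):
        # The GBR counter array (76) and MBR counter array (78) begin each
        # cycle at the allocated per-cycle values.
        self.gbr_counter = dict(self.allocated_gbr)
        self.mbr_counter = dict(self.allocated_mbr)

    def gbr_depleted(self, customer):
        return self.gbr_counter[customer] <= 0

    def mbr_depleted(self, customer):
        return self.mbr_counter[customer] <= 0

    def record_guaranteed(self, customer, bits):
        # Charged while the scheduler is satisfying the guaranteed bit rate.
        self.gbr_counter[customer] -= bits

    def record_supplemental(self, customer, bits):
        # Charged while the scheduler is consuming the supplemental allotment.
        self.mbr_counter[customer] -= bits
```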
  • The local queuing unit 66 maintains one queue-set 80a-d for each local customer, with each queue-set having one queue for each priority.
  • The queue structure maintained by the local queuing unit 66 is therefore identical to that maintained by the input queuing unit 36.
  • A local output scheduler 82 is likewise in communication with the usage monitor 74.
  • The operation of the local output scheduler 82 and that of the network output scheduler 72 are essentially identical and best understood with reference to FIGS. 4 and 5. Because of the similarity in the operation of both output schedulers, the following discussion is written as it applies to the network output scheduler 72. It will be understood by one of ordinary skill in the art that the local output scheduler 82 operates in a like manner.
  • Referring to FIG. 4, the network output scheduler 72 determines if all customers have depleted their respective allotments of guaranteed network usage (step 83). If so, the network output scheduler 72 begins the process of depleting the customers' allotments of supplemental network access (step 84), as discussed below in connection with FIG. 5. Otherwise, the network output scheduler 72 proceeds to the next customer (step 85) and polls that customer's queue-set for traffic (step 86). If the network output scheduler 72 detects an empty queue-set, it proceeds to the next customer (step 87).
  • If the network output scheduler 72 detects traffic on the queue-set for a particular customer, it interrogates the usage monitor 74 to determine if the corresponding element of the GBR counter array 76 indicates that the customer's allotment of guaranteed network usage has been depleted (step 88).
  • If that allotment has not been depleted, the network output scheduler 72 buffers selected packets from that customer's queue-set for transmission on the trunk 18 (step 90). The network output scheduler 72 then updates the corresponding element of the GBR counter array 76 to reflect the customer's network usage (step 92).
  • If the allotment has been depleted, the network output scheduler 72 proceeds to the next customer and repeats the process. Any packets remaining on the bypassed customer's queue-sets will remain there until all competing customers have passed through the loop and depleted their guaranteed allotments of network usage.
  • Once all customers have depleted their guaranteed allotments, the network output scheduler 72 apportions the remaining time in that cycle among the customers in a manner that is proportional to their respective allotted maximum burst rates. The network output scheduler 72 does so by dividing a particular customer's maximum burst rate by the sum of all the customers' maximum burst rates, thereby generating a ratio indicative of that customer's priority relative to all other customers. The scheduler 72 then multiplies this ratio by the time remaining in the cycle. This results in the most equitable manner of sharing the remaining trunk bandwidth.
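  • In other words, each customer's slice of the remaining cycle time is the remaining time multiplied by that customer's maximum burst rate divided by the sum of all the maximum burst rates. A one-line helper and an invented numeric example:

```python
def supplemental_share(remaining_time, customer_mbr, all_mbrs):
    """Slice of the remaining cycle time granted to one customer."""
    return remaining_time * customer_mbr / sum(all_mbrs)


# Invented example: 6 ms of the cycle remains; customers with maximum burst
# rates of 30, 20 and 10 Mb/s receive 3 ms, 2 ms and 1 ms respectively.
mbrs = [30e6, 20e6, 10e6]
print([supplemental_share(6e-3, m, mbrs) for m in mbrs])   # [0.003, 0.002, 0.001]
```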
  • The network output scheduler 72 then proceeds with the allocation of residual bandwidth, as shown in FIG. 5. It does so by first determining whether all customers have depleted their supplemental allotments of network access (step 93). If so, the network output scheduler 72 waits for the beginning of the next cycle (step 94). Otherwise, the network output scheduler 72 proceeds to the next customer (step 95) and again polls each queue-set for traffic (step 96). If the network output scheduler 72 detects an empty queue-set, it proceeds to the next customer (step 97).
  • If the network output scheduler 72 detects traffic on the queue-set for a particular customer, it interrogates the usage monitor 74 to determine if the corresponding element of the MBR counter array 78 indicates that the customer's allotment of supplemental network usage has been depleted (step 98).
  • If that allotment has not been depleted, the network output scheduler 72 buffers selected packets from that customer's queue-set for transmission on the trunk 18 (step 100). The network output scheduler 72 then updates the corresponding element of the MBR counter array 78 to reflect the customer's network usage (step 102). This procedure is repeated until the beginning of the next interval, at which point each customer receives a new allotment of guaranteed network access and a new allotment of supplemental network access.
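  • Read together, FIGS. 4 and 5 describe two polling loops run in sequence: a guaranteed phase bounded by the GBR counters followed by a supplemental phase bounded by the MBR counters. The simplified sketch below assumes a flattened queue per customer, a fixed packet size, and a usage monitor like the one sketched earlier; the clause that also ends a phase when the queues drain is a simplification needed for an offline example, since in the real switch packets keep arriving.

```python
def run_output_cycle(customers, queues, monitor, transmit, packet_bits=512 * 8):
    """One cycle of the network output scheduler (sketch of FIGS. 4 and 5).

    customers -- iterable of customer identifiers
    queues    -- {customer: deque of packets}, standing in for the queue-sets 70
    monitor   -- object offering gbr_depleted / mbr_depleted / record_* methods
    transmit  -- callable that places a packet on the trunk 18
    """
    # Phase 1 (FIG. 4, steps 83-92): satisfy the guaranteed allotments.
    while not all(monitor.gbr_depleted(c) or not queues[c] for c in customers):
        for c in customers:                                  # steps 85-87
            if queues[c] and not monitor.gbr_depleted(c):    # step 88
                transmit(queues[c].popleft())                # step 90
                monitor.record_guaranteed(c, packet_bits)    # step 92

    # Phase 2 (FIG. 5, steps 93-102): consume the supplemental allotments.
    while not all(monitor.mbr_depleted(c) or not queues[c] for c in customers):
        for c in customers:                                  # steps 95-97
            if queues[c] and not monitor.mbr_depleted(c):    # step 98
                transmit(queues[c].popleft())                # step 100
                monitor.record_supplemental(c, packet_bits)  # step 102
```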
  • When the network output scheduler 72 retrieves a data packet from a particular customer's queue-set, it must decide from which of the individual queues within the queue-set the data packet is to be retrieved. This decision is made whether the data packet is being selected to consume that customer's guaranteed bit rate (step 90) or to consume residual bandwidth (step 98).
  • As with the input scheduler, the order in which individual queues from a queue-set are selected, and the number of data packets to be selected from each queue, can be adjusted by the customer.
  • The customer does so by adjusting the queue weights, and hence the queue priorities, in a weighted round-robin implemented by the network output scheduler 72.
  • The network output scheduler 72 performs this weighted round-robin procedure as part of selecting data packets to meet the customer's guaranteed bit rate (step 90) and as part of selecting data packets to consume residual trunk bandwidth (step 98).
  • The number of weights available for adjustment in the weighted round-robin is equal to the number of queues in each queue-set.
  • For example, the customer can specify that all data packets from high-priority queues within the queue-set be sent before any data packets from lower-priority queues are sent.
  • Alternatively, the customer can specify that data packets from lower-priority queues can be sent once a specified number of data packets from higher-priority queues have been sent.
  • The operation of the local output scheduler 82 is identical to that of the network output scheduler 72 as described above. Data packets selected by the local output scheduler as described in connection with FIGS. 4 and 5 proceed to an output DMA 104. If the local area network 34 is not busy, the output DMA 104 places the packet onto the local area network 34. Otherwise, the output DMA 104 passes the data packet to an output queuing unit 106 for placement in the queue-sets 108a-d pending eventual transmission onto the local area network 34.
  • A data packet sent from one remote customer to another remote customer is scheduled by the network output scheduler 72.
  • A data packet sent from a remote customer to a local customer is scheduled by the local output scheduler 82.
  • A data packet sent by a local customer to a remote customer is scheduled by the network output scheduler 72.
  • A data packet sent by a local customer to another local customer is scheduled by the local output scheduler 82.
  • The scheduling method of the invention therefore does not depend on either the source or the destination of the data packet upon which it operates.
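  • The four cases above reduce to a single rule: the choice of output scheduler depends only on whether the destination is a local customer, while the scheduling algorithm applied is the same in either case. A trivial illustrative sketch (identifiers invented):

```python
def choose_scheduler(destination, local_customers):
    """Pick the output scheduler for a packet based only on its destination."""
    if destination in local_customers:
        return "local output scheduler 82"    # queued by the local queuing unit 66
    return "network output scheduler 72"      # queued by the network queuing unit 68


locals_ = {"16a", "16b", "16c", "16d"}
print(choose_scheduler("16b", locals_))   # local output scheduler 82
print(choose_scheduler("26", locals_))    # network output scheduler 72
```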

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention concerns a packet switch for allocating network communication access among a plurality of customers, each of whom has been assigned an allotment of guaranteed access to the network. The switch comprises a queuing unit for maintaining a plurality of queue-sets, each of which accepts data packets from a corresponding customer. The switch also comprises a usage monitor for tracking the extent to which each customer has depleted his allotment of guaranteed access. The usage monitor and the queuing unit communicate with a scheduler that retrieves a data packet for transmission on the network. The queue-set is selected on the basis of the usage information stored by the usage monitor.
PCT/US2001/004180 2000-04-10 2001-02-09 Packet switch including usage monitor and scheduler WO2001080504A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001236810A AU2001236810A1 (en) 2000-04-10 2001-02-09 Packet switch including usage monitor and scheduler

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US54609000A 2000-04-10 2000-04-10
US09/546,090 2000-04-10

Publications (1)

Publication Number Publication Date
WO2001080504A1 (fr) 2001-10-25

Family

ID=24178818

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/004180 WO2001080504A1 (fr) Packet switch including usage monitor and scheduler

Country Status (2)

Country Link
AU (1) AU2001236810A1 (fr)
WO (1) WO2001080504A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5392280A (en) * 1994-04-07 1995-02-21 Mitsubishi Electric Research Laboratories, Inc. Data transmission system and scheduling protocol for connection-oriented packet or cell switching networks
EP0817436A2 (fr) * 1996-06-27 1998-01-07 Xerox Corporation Système de communication à commutation par paquets
EP0901301A2 (fr) * 1997-09-05 1999-03-10 Nec Corporation Planification dynamique, basée sur le débit, pour réseaux ATM

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10491531B2 (en) 2016-09-13 2019-11-26 Gogo Llc User directed bandwidth optimization
US10511680B2 (en) 2016-09-13 2019-12-17 Gogo Llc Network profile configuration assistance tool
US10523524B2 (en) 2016-09-13 2019-12-31 Gogo Llc Usage-based bandwidth optimization
US11038805B2 (en) 2016-09-13 2021-06-15 Gogo Business Aviation Llc User directed bandwidth optimization
US11296996B2 (en) 2016-09-13 2022-04-05 Gogo Business Aviation Llc User directed bandwidth optimization

Also Published As

Publication number Publication date
AU2001236810A1 (en) 2001-10-30

Similar Documents

Publication Publication Date Title
KR100212104B1 (ko) Method for allocating transmission capacity in a network
US5675573A (en) Delay-minimizing system with guaranteed bandwidth delivery for real-time traffic
US7123622B2 (en) Method and system for network processor scheduling based on service levels
CN101057481B (zh) Method and apparatus for scheduling packets, using implicit determination of packets to be handled with priority, for routing in a network
USRE44119E1 (en) Method and apparatus for packet transmission with configurable adaptive output scheduling
US6909691B1 (en) Fairly partitioning resources while limiting the maximum fair share
US5831971A (en) Method for leaky bucket traffic shaping using fair queueing collision arbitration
US8189597B2 (en) Pipeline scheduler with fairness and minimum bandwidth guarantee
CA2366269C (fr) Method and apparatus for integrating best-effort and guaranteed-bandwidth traffic in a packet-switched network
US7159219B2 (en) Method and apparatus for providing multiple data class differentiation with priorities using a single scheduling structure
US6646986B1 (en) Scheduling of variable sized packet data under transfer rate control
US6721796B1 (en) Hierarchical dynamic buffer management system and method
US7321554B1 (en) Method and apparatus for preventing blocking in a quality of service switch
US7764703B1 (en) Apparatus and method for dynamically limiting output queue size in a quality of service network switch
US20030223453A1 (en) Round-robin arbiter with low jitter
KR100463697B1 (ko) Method and system for a network processor to schedule output through detach/reattach of flow queues
JP2000512442A (ja) Event-driven cell scheduler in a communication network and method for supporting multiple service categories
US6952424B1 (en) Method and system for network processor scheduling outputs using queueing
JP4163044B2 (ja) Bandwidth control method and bandwidth control apparatus
US7894347B1 (en) Method and apparatus for packet scheduling
US7619971B1 (en) Methods, systems, and computer program products for allocating excess bandwidth of an output among network users
EP1335540B1 (fr) Communication system and method using a device for performing per-service queuing
WO2001080504A1 (fr) Packet switch including usage monitor and scheduler
EP2063580B1 (fr) Low-complexity scheduler with generalized processor sharing (GPS)-like scheduling performance
US8467401B1 (en) Scheduling variable length packets

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA CN IL JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP