CA2482430C - Scheduling a shared resource among synchronous and asynchronous packet flows - Google Patents


Info

Publication number
CA2482430C
CA2482430C
Authority
CA
Canada
Prior art keywords
synchronous
service
flows
value
flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CA2482430A
Other languages
French (fr)
Other versions
CA2482430A1 (en)
Inventor
Luciano Lenzini
Enzo Mingozzi
Enrico Scarrone
Giovanni Stea
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telecom Italia SpA
Original Assignee
Telecom Italia SpA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telecom Italia SpA filed Critical Telecom Italia SpA
Publication of CA2482430A1 publication Critical patent/CA2482430A1/en
Application granted granted Critical
Publication of CA2482430C publication Critical patent/CA2482430C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Links

Classifications

    • H04L47/56 Queue scheduling implementing delay-aware scheduling
    • H04L12/6418 Hybrid transport
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/50 Queue scheduling
    • H04L47/521 Static queue service slot or fixed bandwidth allocation
    • H04L47/522 Dynamic queue service slot or variable bandwidth allocation
    • H04L47/562 Attaching a time tag to queues
    • H04L47/621 Individual queue per connection or flow, e.g. per VC
    • H04L47/6225 Fixed service order, e.g. Round Robin
    • H04L47/6265 Queue scheduling based on past bandwidth allocation
    • H04L2012/6421 Medium of transmission, e.g. fibre, cable, radio, satellite
    • H04L2012/6445 Admission control
    • H04L2012/6448 Medium Access Control [MAC]
    • H04L2012/6451 Deterministic, e.g. Token, DQDB
    • H04L2012/6456 Channel and bandwidth allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Communication Control (AREA)

Abstract

Each synchronous flow (i=1, 2, ..., NS) is associated with a respective synchronous capacity value (Hi) that is related to the period of time for which a synchronous flow can be serviced before the server moves on. This value can be selected according to either a local or a global allocation criterion. Each asynchronous flow (j=1, 2, ..., NA) is associated with a respective first value indicating the delay to be made up so that the respective queue has the right to be serviced, and with another value indicating the instant at which the server visited the respective queue in the previous cycle. Each queue associated with a synchronous flow is then serviced for a period of time that is related to the aforesaid synchronous capacity value, while each queue associated with an asynchronous flow is serviced only if the server's visit occurs before the expected instant. The server's visit (10) to the synchronous queues preferably takes place in two successive cycles in order to optimise the use of the resources available.

Description

SCHEDULING A SHARED RESOURCE AMONG SYNCHRONOUS AND ASYNCHRONOUS PACKET FLOWS
TECHNICAL FIELD
This invention relates to packet communication systems, and in particular to the criteria for scheduling a shared resource, i.e. the criteria used to select the packet to which the resource is to be assigned each time an assignment occurs.
The solution according to the invention has been developed both for radio resource scheduling (e.g. MAC or Medium Access Control level scheduling) and for the scheduling of computational and transmission resources in network nodes, for example for the scheduling of flows with different service qualities on Internet Protocol (IP) routers. The following description is based mainly on the latter application, and is given purely as an example that does not limit the scope of the invention.
INTRODUCTION
For several years now, the widespread application and rapid evolution of the packet networks have given rise to the problem of integrating the traditional services offered by the old generation packet networks (electronic mail, web surfing, etc.) and the new services previously reserved for circuit switching networks (real-time video, telephony, etc.) into the so-called integrated services networks.
Systems like UMTS, for example, for which a fixed packet network component (core network) is envisaged, must simultaneously handle voice and data services, and offer support for the development of new services be they real-time or not.
The integrated services networks must therefore be able to handle traffic flows with different characteristics and to offer each type of flow a suitable service quality, a set of performance indexes negotiated between user and service provider, which must be guaranteed within the terms agreed upon.
One of the key elements in providing the requested service quality is the scheduling system implemented on the network nodes, i.e. the system used to select the packet to be transmitted from those present on the node. This system must combine contrasting characteristics: flexibility, in terms of the capacity to provide different types of service; simplicity, which makes it usable in environments that require high transmission speeds and the handling of numerous transmission flows; and efficiency in the use of the shared resource (e.g. the transmission medium).
The need to guarantee a given level of service quality (or QoS) in packet networks is constantly increasing, as can be seen for example in documents US-A-6 091 709, US-A-6 147 970 or EP-A-1 035 751.
This invention is in fact a development of the solution described in Italian patent application TO2000A001000 and in the corresponding application PCT/IT01/00536.
The previous solution basically applies to the scheduling of a service resource shared between several information packet flows in which the flows generate respective associated queues and are serviced when the server gives permission to transmit.
The flows are divided into synchronous flows, which require a minimum service rate guarantee, and into asynchronous flows, which use the service capacity of the resource that is left unused by the synchronous flows. The solution in question includes the following:
- provides a server that visits the queues associated with the flows in successive cycles, on the basis of a target token rotation time, called TTRT, which identifies the time required for the server to complete a visiting cycle (or "revolution") over the queues;
- associates each synchronous flow with a synchronous capacity value indicating the maximum time for which the synchronous flow can be serviced before its transmission permission is revoked by the server;
- associates each asynchronous flow with a first value (lateness(i)) indicating the delay that must be made up for the respective queue to have the right to be serviced, plus another value (last_token_time) indicating the moment the server visited the respective queue in the previous cycle, which determines the time elapsed since the server's previous visit;
- services each queue associated with a synchronous flow for a maximum period of time equal to the above-mentioned synchronous capacity value; and
- services each queue associated with an asynchronous flow only if the server's visit occurs before the expected instant, i.e. in advance.
This advance is computed by subtracting from the aforesaid TTRT value both the time that has elapsed since the server's previous visit and the accumulated delay.
If this difference is positive it defines the maximum service time for each queue associated to an asynchronous flow.
The solution referred to above has proved to be completely satisfactory from an operational point of view.
The experience gained by the Applicant has however shown that the solution can be further developed and improved, as illustrated by this invention.
This applies particularly to the following aspects:
- the possibility of offering different types of service while keeping computational costs low: an important feature for computer network applications that must guarantee service quality to their users, such as IP networks with Intserv (Integrated Services, as per the IETF specification) or Diffserv (Differentiated Services, as per the IETF specification), or for radio resource scheduling systems such as MAC level scheduling algorithms (W-LAN systems, third generation radio-mobile services);
- the possibility of guaranteeing the bit rate of the various flows, the maximum queuing delay and the maximum occupation of the buffers of each flow for synchronous traffic;
- flexibility, in terms of capacity to provide two different types of services at the same time, rate-guaranteed (suitable for synchronous flows) and fair queuing (suitable for asynchronous flows), especially in service integration networks;
- the possibility of isolating transmission flows, i.e. making the service offered to a single flow independent of the presence and behaviour of other flows;
- low computational complexity in terms of the number of operations necessary to select the packet to be transmitted; this feature makes the solution usable in environments that require high transmission speeds and the handling of numerous transmission flows, also in view of a possible hardware implementation;
- adaptability, in the sense that the solution can handle a change in the operating parameters (e.g. the number of flows present) by redistributing its resources without having to resort to complex procedures; and
- analytic describability, i.e. the solution admits a complete analytic description of the system's behaviour, which makes it possible to relate the service quality measurements to the system parameters.
Another important aspect is equity, i.e. the possibility to manage in the same way both the transmission flows that receive a rate-guaranteed service, and those that receive a fair-queuing service, giving each one a level of service that is proportional to that requested, even in the presence of packets of different lengths.
DESCRIPTION OF THE INVENTION
The aim of this invention is to develop even further the already known solution referred to previously with special attention to the aforesaid aspects.
According to this invention, this aim can be reached by using a scheduling procedure having the characteristics referred to specifically in the following claims.
The invention also refers to the relative system.
Briefly, the solution according to the invention implements a scheduling system that can be designated by the name introduced in this patent application: Packet Timed Token Service Discipline, or PTTSD.
At present, this scheduling system is designed to work on a switching node of a packet computer network and is able to multiplex several transmission flows onto a single transmission channel.
The system offers two different types of service: rate-guaranteed service, suitable for transmission flows (henceforth, "synchronous flows") that require a guaranteed minimum service rate, and a fair-queueing service, suitable for transmission flows (henceforth "asynchronous flows") that do not require any guarantee on the minimum service rate, but which benefit from the greater transmission capacity available. The system provides the latter, however, with an equal sharing of the transmission capacity not used by the synchronous flows.
The traffic from each transmission flow input on the node is inserted in its own queue (synchronous or asynchronous queues) from which it will be taken to be transmitted. The server visits the queues in a fixed cyclic order and grants each queue a service time established according to precise timing constraints at each visit.
The server initially visits the synchronous queues twice during a revolution, thus completing a major cycle and a minor or recovery cycle, and then moves on to visit the asynchronous queues.
BRIEF DESCRIPTION OF THE FIGURE
The following description of the invention is given as a non-limiting example, with reference to the annexed drawing, which includes a single block diagram figure that illustrates the operating criteria of a system working according to the invention.
DESCRIPTION OF A PREFERRED FORM OF EXECUTION
A scheduling system according to the invention is able to multiplex several transmission flows onto a single transmission channel.
The system offers two different types of service: a rate-guaranteed service, suitable for transmission flows (henceforth synchronous flows i, where i = 1, 2, ..., NS) that require a guaranteed minimum service rate, and a best-effort service, suitable for transmission flows (henceforth asynchronous flows j, where j = 1, 2, ..., NA) that do not require any guarantee on the service rate. The system provides the latter, however, with an equal sharing of the transmission capacity not used by the synchronous flows.
It is supposed that NS and NA are non-negative integers, that each synchronous flow i = 1...NS requires a service rate equal to ri, and that the sum of the service rates requested by the synchronous flows does not exceed the capacity C of the channel.
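As a minimal sketch of this admission condition (the function and variable names are illustrative, not taken from the patent), the requirement that the synchronous rates fit within the channel capacity can be checked as follows:

```python
def admit_synchronous(rates, capacity):
    """Hypothetical admission check: a set of synchronous flows is
    schedulable only if the sum of their requested service rates
    does not exceed the channel capacity C."""
    return sum(rates) <= capacity

# Two flows asking for 2 and 3 Mbit/s on a 10 Mbit/s channel fit;
# two flows asking for 6 Mbit/s each do not.
print(admit_synchronous([2.0, 3.0], 10.0))  # True
print(admit_synchronous([6.0, 6.0], 10.0))  # False
```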
The traffic from each transmission flow input on the node is inserted in its own queue (synchronous or asynchronous, as will be discussed later) from which it will be taken for transmission. The server 10 visits the queues in a fixed cyclic order (ideally illustrated in the figure of the drawing by trajectory T and arrow A), granting each queue, at each visit, a service time established according to precise timing constraints.
The procedure referred to in the invention includes an initialisation stage followed by cyclic visits to the queues.
These procedures will be discussed below.
Initialisation
First of all, it is necessary to give the system the information relating to the working conditions: how many synchronous flows there are (in general: NS), the transmission rate requested by each synchronous flow, how many asynchronous flows there are, and the target rotation time (TTRT), i.e. how long a complete cycle, during which the server visits all the queues once, is to last.
Synchronous flows
Each synchronous flow i, i = 1...NS, is associated, according to an appropriate allocation policy, with a variable Hi (synchronous capacity), which measures the maximum time for which the traffic of the synchronous flow can be transmitted before the server takes the transmission permission away. The possible allocation policies will be described below. A variable ∆i, initially nil, is also associated with each synchronous flow, and stores the amount of transmission time available to the flow.
Asynchronous flows
Each asynchronous flow j, j = 1...NA, is associated with two variables, Lj and last_visit_time_j; the first variable stores the delay or lag that must be made up for the asynchronous queue j to have the right to be serviced; the second variable stores the instant at which the server visited the asynchronous queue j in the previous cycle. These variables are respectively initialised to zero and to the starting instant of the revolution in progress when the flow is activated.
This way of proceeding means that the asynchronous flows can be activated at any moment, not necessarily at system startup.
Visit to a generic synchronous queue i, i = 1...NS, during the major cycle
A synchronous queue can be serviced for a period of time at most equal to the value of the variable ∆i. This variable is incremented by Hi (a value decided during initialisation) when the queue is visited in the major cycle, and decremented by the transmission time of each packet transmitted.
The service of a queue during the major cycle ends either when the queue is empty (in which case the variable ∆i is reset), or when the time available (represented by the current value of ∆i) is not sufficient to transmit the packet that is at the front of the queue.
Visit to a generic synchronous queue i, i = 1...NS, during the minor cycle
During the minor (or recovery) cycle a synchronous queue can transmit only one packet, provided the variable ∆i has a strictly positive value. If transmission takes place, the variable ∆i is decremented by the transmission time.
Visit to a generic asynchronous queue j, j = 1...NA
An asynchronous queue can only be serviced if the server's visit takes place before the expected instant. To calculate whether the server's visit is in advance, the time that has elapsed since the previous visit and the accumulated delay Lj are subtracted from the target rotation time TTRT.
If this difference is positive, it is the period of time for which the asynchronous queue j has the right to be serviced, and in this case the variable Lj is reset.
If the difference is negative, the server is late and the queue j cannot be serviced; in this case the delay is stored in the variable Lj. The asynchronous queue service ends when the queue is empty, or when the time available (which is decremented each time a packet is transmitted) is not sufficient to transmit the packet that is at the front of the queue.
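The visit rule above can be sketched in a few lines, assuming a virtual clock; all names are illustrative:

```python
def async_visit(ttrt, lag, now, last_visit_time):
    """Decide whether an asynchronous queue may be serviced.

    Returns (budget, new_lag): a positive earliness becomes the
    service budget and clears the lag; a late visit yields no budget
    and stores the delay to be made up on later visits.
    """
    earliness = ttrt - lag - (now - last_visit_time)
    if earliness > 0:
        return earliness, 0.0
    return 0.0, -earliness

# Early visit: 80 time units elapsed against a TTRT of 100 -> budget 20.
budget, lag = async_visit(100.0, 0.0, now=130.0, last_visit_time=50.0)
# Late visit: 120 units elapsed -> no budget, a lag of 20 is stored.
budget2, lag2 = async_visit(100.0, 0.0, now=170.0, last_visit_time=50.0)
```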
Visit sequence during a revolution
A double scan is made over all the synchronous queues (major and minor cycles) during one revolution, and then the asynchronous queues are visited. The minor cycle ends the moment one of the following events takes place:
- the last synchronous queue has been visited;
- a period of time that is equal to or greater than the sum of the capacity of all the synchronous queues has elapsed since the beginning of the major cycle.
Analytic guarantees
The synchronous capacities are linked to the target rotation time TTRT and to the transmission duration τmax of the longest packet by the following inequality, which must always be verified:

Σ(i=1..NS) Hi + τmax ≤ TTRT    (1)

Minimum transmission rate for synchronous flows
Under hypothesis (1), the system illustrated herein guarantees each synchronous flow i the following normalised transmission rate:

ρi = (NA + 1)·Xi / (NA + Σ(n=1..NS) Xn + α)

with:

Xi = Hi / TTRT
α = τmax / TTRT

It is also possible to guarantee that, given any period of time [t1, t2] in which the generic synchronous queue i is never empty, the service time Wi(t1, t2) received by the queue i in [t1, t2] verifies the following inequality:

ρi·(t2 - t1) - Wi(t1, t2) ≤ Φi    (2)

where:

Φi = Hi·(2 - ρi) + (1 + ρi)·τi    if Hi ≥ τi
Φi = 2·Hi + τi                    if Hi < τi

and τi is the transmission time of the longest packet for the flow i. Expression (2) establishes that the service supplied to the synchronous flow i by a system of the type described here does not differ by more than Φi from the service that the same flow would experience if it were the only owner of a private transmission channel with a capacity equal to ρi times that of the channel managed by the system illustrated in this invention. Φi therefore represents the maximum service difference with respect to an ideal situation.
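The guaranteed normalised rate can be evaluated numerically. The helper below (illustrative names; the formula is the reconstruction given above) also checks that inequality (1) holds before evaluating the rate:

```python
def normalised_rate(H, i, ttrt, n_async, tau_max):
    """Guaranteed normalised rate of synchronous flow i, given the
    synchronous capacities H, the target rotation time TTRT, the
    number of asynchronous flows and the longest packet time."""
    assert sum(H) + tau_max <= ttrt          # inequality (1) must hold
    X = [h / ttrt for h in H]                # X_i = H_i / TTRT
    alpha = tau_max / ttrt                   # alpha = tau_max / TTRT
    return (n_async + 1) * X[i] / (n_async + sum(X) + alpha)

# H = [20, 30], TTRT = 100, 2 asynchronous flows, longest packet 5:
# rho_0 = 3 * 0.2 / (2 + 0.5 + 0.05), roughly 0.235.
rho0 = normalised_rate([20.0, 30.0], 0, 100.0, 2, 5.0)
```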
A synchronous flow can therefore be characterised by a parameter, called latency, which is calculated as follows:

Θi = (2 + τi/Hi)·(NA·TTRT + τmax + Σ(s=1..NS) Hs)/(NA + 1) + τi - Hi    if Hi ≥ τi

Θi = (2 + τi/Hi)·(NA·TTRT + τmax + Σ(s=1..NS) Hs)/(NA + 1)    if Hi < τi

or, as an upper bound Θ'i ≥ Θi that holds for any NA:

Θ'i = (2 + τi/Hi)·TTRT + τi - Hi    if Hi ≥ τi

Θ'i = (2 + τi/Hi)·TTRT    if Hi < τi
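Under the reconstructed formulas above, the latency and its NA-independent upper bound can be sketched numerically (all names are illustrative):

```python
def latency(H_i, tau_i, H_sum, ttrt, n_async, tau_max):
    """Latency Theta_i of a synchronous flow (reconstructed formula)."""
    base = (n_async * ttrt + tau_max + H_sum) / (n_async + 1)
    theta = (2 + tau_i / H_i) * base
    return theta + tau_i - H_i if H_i >= tau_i else theta

def latency_bound(H_i, tau_i, ttrt):
    """Upper bound Theta'_i, independent of the number of async flows."""
    theta = (2 + tau_i / H_i) * ttrt
    return theta + tau_i - H_i if H_i >= tau_i else theta

# TTRT = 100, capacities 20 and 30, longest packet 5, two async flows:
th = latency(20.0, 5.0, 50.0, 100.0, 2, 5.0)   # 2.25 * 85 - 15 = 176.25
bound = latency_bound(20.0, 5.0, 100.0)        # 2.25 * 100 - 15 = 210.0
# Whenever inequality (1) holds, the bound dominates the latency.
```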
Hr Given a switching node that implements the solution described herein, if the traffic input on a synchronous flow on that node is limited by a so-called "leaky-bucket" of parameters ~6,p~, the following guarantees can be given:
a) Maximum delay on a single node for a synchronous flow Each packet has, a delay that is not greater than:
D=6/p+O;
b) Maximum memory occupation on a node for a synchronous flow The amount of memory occupied by packets in a synchronous flow packet is:
B=~-+p?O;
c) Maximum delay on a route of N nodes for a synchronous flow
Let S1...SN be N switching nodes that implement the system described herein; let Θ(j) be the latencies calculated on each of the nodes Sj and let:

Θtot = Σ(j=1..N) Θ(j)

In this case it is possible to define an upper limit for the maximum delay taken by a packet to cross the N nodes, provided that the traffic input on the first node is limited by a leaky bucket of parameters (σ, ρ); this limit is:

DN = σ/ρ + Θtot

The value Θ'i ≥ Θi can be employed in each of the three guarantees a), b), c); in this way, limits that do not depend on the number of active asynchronous flows can be calculated.
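Assuming a leaky bucket (σ, ρ) and per-node latencies as above, the three guarantees can be sketched as follows (illustrative names; the numeric values are arbitrary):

```python
def node_delay(sigma, rho, theta):
    """a) Maximum per-node delay: D = sigma / rho + Theta_i."""
    return sigma / rho + theta

def node_buffer(sigma, rho, theta):
    """b) Maximum per-node memory occupation: B = sigma + rho * Theta_i."""
    return sigma + rho * theta

def path_delay(sigma, rho, thetas):
    """c) Maximum delay over N nodes: D_N = sigma / rho + sum of latencies."""
    return sigma / rho + sum(thetas)

# Burst 10, rate 2, latency 176.25 on one node; three identical nodes in c).
d = node_delay(10.0, 2.0, 176.25)          # 5 + 176.25 = 181.25
b = node_buffer(10.0, 2.0, 176.25)         # 10 + 352.5 = 362.5
dn = path_delay(10.0, 2.0, [176.25] * 3)   # 5 + 528.75 = 533.75
```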
Parameter selection
The ability to guarantee that the synchronous flows receive a minimum service rate no lower than that requested is subordinate to a correct selection of the synchronous capacities Hi, i = 1...NS. Assuming that each synchronous flow i requires a minimum transmission rate ri, it is necessary to allocate the synchronous capacities so as to verify the following inequality:

ρi ≥ ri/C    (3)
The solution described herein allocates the synchronous capacities according to two different schemes called local and global allocation respectively.
Local allocation
The synchronous capacities are selected as follows:

Hi = ri·TTRT / C

In this way, the inequality (1) is verified if the requested transmission rates verify the following inequality:

Σ(n=1..NS) rn/C ≤ 1 - α    (4)

Each synchronous flow is then guaranteed a normalised service rate equal to:

ρi = (NA + 1)·(ri/C) / (NA + Σ(n=1..NS) rn/C + α)    (5)

The value of ρi given by expression (5) verifies the inequality (3).
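A sketch of the local scheme, with a numerical check that the rate (5) satisfies (3) when (4) holds (illustrative names; formulas as reconstructed above):

```python
def local_allocation(rates, capacity, ttrt):
    """Local scheme: H_i = r_i * TTRT / C for each synchronous flow."""
    return [r * ttrt / capacity for r in rates]

def local_rate(rates, capacity, i, n_async, alpha):
    """Normalised service rate (5) under local allocation."""
    x = [r / capacity for r in rates]
    return (n_async + 1) * x[i] / (n_async + sum(x) + alpha)

# C = 10, TTRT = 100, rates 2 and 3, alpha = 0.05, two async flows.
H = local_allocation([2.0, 3.0], 10.0, 100.0)    # [20.0, 30.0]
rho0 = local_rate([2.0, 3.0], 10.0, 0, 2, 0.05)
# (4) holds (0.5 <= 0.95), so rho0 >= r_0 / C = 0.2 as required by (3).
```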
Global allocation
According to this scheme, which requires NA ≥ 1, the synchronous capacities are selected as follows:

Hi = (NA + α)·(ri/C) / (NA + 1 - Σ(n=1..NS) rn/C) · TTRT

In the global allocation scheme the sum of the requested transmission rates must also verify the inequality (4). If (4) is verified, the normalised service rate of a synchronous flow is exactly ρi = ri/C.
The global scheme guarantees greater use of the channel's transmission capacity than the local scheme, in that it allocates less capacity to the synchronous flows, leaving more bandwidth for the asynchronous flow transmission.
On the other hand, the use of the global scheme means that all the synchronous capacities are to be recalculated each time the number of flows (synchronous or asynchronous) present in the system changes; the use of the local scheme, instead, means that the capacities can be established independently of the number of flows in the system.
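Under the reconstructed global formula, the allocated capacities shrink relative to the local scheme while the guaranteed rate becomes exactly ri/C. A numerical sketch (illustrative names):

```python
def global_allocation(rates, capacity, ttrt, n_async, tau_max):
    """Global scheme: H_i = (N_A + alpha) * (r_i / C)
    / (N_A + 1 - sum(r_n / C)) * TTRT, requiring N_A >= 1."""
    assert n_async >= 1
    alpha = tau_max / ttrt
    s = sum(r / capacity for r in rates)
    return [(n_async + alpha) * (r / capacity) / (n_async + 1 - s) * ttrt
            for r in rates]

# Same scenario as the local example: C = 10, TTRT = 100, rates 2 and 3,
# two asynchronous flows, longest packet 5 (alpha = 0.05).
Hg = global_allocation([2.0, 3.0], 10.0, 100.0, 2, 5.0)
# Hg sums to 41, against 50 under local allocation: more bandwidth is
# left for the asynchronous flows. The resulting normalised rate is
# exactly r_i / C:
X = [h / 100.0 for h in Hg]
rho0 = (2 + 1) * X[0] / (2 + sum(X) + 0.05)   # equals r_0 / C = 0.2
```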
Selection of TTRT
The following criterion can be given for the selection of TTRT in the solution according to the invention.
Given a set of synchronous flows with requested transmission rates that verify the inequality:

Σ(n=1..NS) rn/C < 1

TTRT must be selected according to the following inequality:

TTRT ≥ τmax / (1 - Σ(n=1..NS) rn/C)
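A small helper (illustrative names, selection rule as reconstructed above) that computes the minimum admissible TTRT:

```python
def min_ttrt(rates, capacity, tau_max):
    """Smallest target rotation time satisfying the selection rule
    TTRT >= tau_max / (1 - sum(r_n / C))."""
    load = sum(r / capacity for r in rates)
    assert load < 1, "synchronous load must stay below channel capacity"
    return tau_max / (1 - load)

# Rates 2 and 3 on a channel of capacity 10, longest packet time 5:
# load 0.5, so TTRT must be at least 5 / 0.5 = 10.
print(min_ttrt([2.0, 3.0], 10.0, 5.0))  # 10.0
```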
The pseudo-code illustrated below analytically describes the behaviour of a system as given in the invention.
Flow initialisation

Sync_Flow_Init (synchronous flow i) {
    select synchronous capacity Hi;
    ∆i = 0;
}

Async_Flow_Init (asynchronous flow j) {
    Lj = 0;
    last_visit_time_j = start_of_curr_revolution;
}

Visit to a generic synchronous queue i, i = 1...NS, during the major cycle

Major_Cycle_Visit (synchronous flow i) {
    ∆i += Hi;
    q = first_packet_transmission_time;
    while ((∆i >= q) and (q > 0)) {
        transmit_packet (q);
        ∆i -= q;
        elapsed_time += q;
        q = first_packet_transmission_time;
    }
    if (q == 0) ∆i = 0;
}

Visit to a generic synchronous queue i, i = 1...NS, during the minor cycle

Minor_Cycle_Visit (synchronous flow i) {
    q = first_packet_transmission_time;
    if (q > 0) {
        transmit_packet (q);
        ∆i -= q;
        elapsed_time += q;
    }
    if (q == 0) ∆i = 0;
}

Visit to a generic asynchronous queue j, j = 1...NA

Async_Flow_Visit (asynchronous flow j) {
    t = current_time;
    earliness = TTRT - Lj - (t - last_visit_time_j);
    if (earliness > 0) {
        Lj = 0;
        transmit_time = earliness;
        q = first_packet_transmission_time;
        while ((transmit_time >= q) and (q > 0)) {
            transmit_packet (q);
            transmit_time -= q;
            q = first_packet_transmission_time;
        }
    }
    else Lj = -earliness;
    last_visit_time_j = t;
}

Visit sequence during a revolution

PTTSD_revolution () {
    elapsed_time = 0;
    for (i = 1 to NS) Major_Cycle_Visit (i);
    i = 1;
    while ((elapsed_time < sum(Hi)) and (i <= NS)) {
        if (∆i > 0) Minor_Cycle_Visit (i);
        i++;
    }
    for (j = 1 to NA) Async_Flow_Visit (j);
}
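The pseudocode above can be exercised with a compact Python sketch. Packets are represented only by their transmission times and a virtual clock replaces real time; all names are illustrative and the model simplifies PTTSD to one revolution at a time:

```python
class PTTSD:
    """Toy single-channel PTTSD model: queues hold packet transmission
    times; a virtual clock stands in for the wall clock."""

    def __init__(self, ttrt, capacities):
        self.ttrt = ttrt
        self.H = list(capacities)          # synchronous capacities H_i
        self.delta = [0.0] * len(self.H)   # per-flow credit Delta_i
        self.sync = [[] for _ in self.H]   # synchronous queues
        self.asyn, self.lag, self.last = [], [], []
        self.clock = 0.0

    def add_async_flow(self, packets=()):
        # An asynchronous flow may be activated at any moment.
        self.asyn.append(list(packets))
        self.lag.append(0.0)
        self.last.append(self.clock)

    def _drain(self, queue, budget):
        # Transmit head-of-line packets while the budget suffices.
        while queue and budget >= queue[0]:
            p = queue.pop(0)
            budget -= p
            self.clock += p
        return budget

    def revolution(self):
        start = self.clock
        # Major cycle: top up the credit and serve within it.
        for i, q in enumerate(self.sync):
            self.delta[i] = self._drain(q, self.delta[i] + self.H[i])
            if not q:
                self.delta[i] = 0.0        # empty queue: credit is reset
        # Minor (recovery) cycle: at most one packet per queue.
        i = 0
        while self.clock - start < sum(self.H) and i < len(self.sync):
            q = self.sync[i]
            if self.delta[i] > 0 and q:
                p = q.pop(0)
                self.delta[i] -= p
                self.clock += p
            if not q:
                self.delta[i] = 0.0
            i += 1
        # Asynchronous queues: serviced only if the visit is early.
        for j, q in enumerate(self.asyn):
            t = self.clock
            earliness = self.ttrt - self.lag[j] - (t - self.last[j])
            if earliness > 0:
                self.lag[j] = 0.0
                self._drain(q, earliness)
            else:
                self.lag[j] = -earliness
            self.last[j] = t

sched = PTTSD(ttrt=100.0, capacities=[10.0, 10.0])
sched.sync[0] = [4.0, 4.0, 4.0]            # 12 units of backlog vs H_0 = 10
sched.add_async_flow([5.0, 5.0])
sched.revolution()
# The major cycle sends two packets (8 units), the minor cycle recovers
# the third, and the early asynchronous visit drains the remaining queue.
```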
Obviously the details of how this is done can be altered with respect to what has been described, without however, leaving the context of this invention.

Claims (34)

1. A method for the scheduling of a service resource shared among several information packet flows that generate respective associated queues, said flows including synchronous flows that require a guaranteed minimum service rate and asynchronous flows that use the service capacity of said resource that is left unused by the synchronous flows, the method making use of a server and comprising the following steps:
(a) causing said server to visit the respective queues associated to said flows in successive cycles on the basis of a target rotation time value, which identifies the time necessary for the server to complete a visit cycle on said respective queues;
(b) associating each synchronous flow with a respective synchronous capacity value indicating a maximum period of time for which the respective synchronous flow can be serviced before the server moves on;
(c) associating each asynchronous flow with a first respective delay value, indicating a delay that must be made up for the respective queue to have the right to be serviced, and a second respective value that indicates an instant at which the server visited the respective queue in a previous cycle, thus determining, for said respective queue, a time that has elapsed since the server's previous visit;
(d) servicing each queue associated to a synchronous flow for a maximum service time relative to said respective value of synchronous capacity;
(e) servicing each queue associated to an asynchronous flow only if the server's visit occurs before the expected instant, the advance being determined as the difference between said target rotation time value and the time that has elapsed since the server's previous visit and the accumulated delay, this difference, if positive, defining a maximum service time for each asynchronous queue; and
(f) defining said respective synchronous capacity value for the queue associated to the i-th synchronous flow by ensuring that:
(f1) the sum of the synchronous capacity values for said synchronous flows plus the duration of the longest packet serviced by said shared service resource does not exceed said target rotation time value; and
(f2) said target rotation time value is not lower than the ratio of the duration of the longest packet serviced by said shared service resource to the complement to one of the sum over said synchronous flows of the minimum service rates required by said synchronous flows normalized to the service capacity of said shared service resource.
2. The method defined in claim 1 which includes the step of defining said respective synchronous capacity value for the queue associated to the i-th synchronous flow as the product of the minimum service rate required by said i-th synchronous flow and said target rotation time value normalized to the service capacity of said shared service resource.
3. The method defined in claim 1 which includes the step of defining said respective synchronous capacity value for the queue associated to the i-th synchronous flow by:
defining a factor such that the sum over said synchronous flows of the minimum service rates required by said synchronous flows normalized to the service capacity of said shared service resource is not larger than the complement to one of said factor; and defining said respective synchronous capacity value for the queue associated to the i-th synchronous flow as said target rotation time value times the ratio of a first and a second parameter, wherein:
said first parameter is the sum of the number of said asynchronous flows and said factor, said sum times the minimum service rates required by said synchronous flows normalized to the service capacity of said shared service resource, and said second parameter is the sum of the number of said asynchronous flows plus the complement to one of the sum over said synchronous flows of the minimum service rates required by said synchronous flows normalized to the service capacity of said shared service resource.
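Read as a formula, claim 3 allocates H_i = TTRT · (N_A + α) · (r_i / C) / (N_A + (1 − Σ_j r_j / C)), with the factor α chosen so that Σ_j r_j / C ≤ 1 − α. A hedged sketch of that computation (names and numbers are illustrative, not the patent's):

```python
def sync_capacity(i, rates, capacity, ttrt, n_async, alpha):
    # Normalized synchronous load sum(r_j / C); claim 3 requires it to be
    # no larger than the complement to one of the factor alpha.
    load = sum(r / capacity for r in rates)
    assert load <= 1.0 - alpha, "factor alpha violates the stated bound"
    first = (n_async + alpha) * (rates[i] / capacity)   # first parameter
    second = n_async + (1.0 - load)                     # second parameter
    return ttrt * first / second

# Illustrative: 3 synchronous flows, 4 asynchronous flows, alpha = 0.2.
H0 = sync_capacity(0, [20e6, 30e6, 10e6], 100e6, ttrt=1e-3, n_async=4, alpha=0.2)
```

Compared with claim 2's proportional allocation, this formula scales the capacities down as the number of asynchronous flows grows, leaving more of each rotation to the asynchronous traffic.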
4. The method defined in claim 1 which includes the step of ensuring that the sum over said synchronous flows of the minimum service rates required by said synchronous flows normalized to the service capacity of said shared service resource does not exceed unity.
5. The method defined in claim 1 wherein said respective synchronous capacity value for the queue associated to the i-th synchronous flow is defined by satisfying:
i) the expressions

Σ_j H_j + T_max ≤ TTRT    and    TTRT ≥ T_max / (1 − Σ_j r_j / C)

ii) as well as at least one of the following expressions

H_i = (r_i / C) · TTRT    or    H_i = TTRT · (N_A + α) · (r_i / C) / (N_A + 1 − Σ_j r_j / C)

where:
H_i is said respective synchronous capacity value for the queue associated to the i-th synchronous flow, the summations are extended to all the synchronous flows, equal in number to N_S, N_A is the number of said asynchronous flows, T_max is the duration of the longest packet service by said shared service resource, TTRT is said target rotation time value, C is the service capacity of said shared service resource, r_i is the minimum service rate required by the i-th synchronous flow, and α is a parameter that gives Σ_j r_j / C ≤ 1 − α.
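Claim 5 combines the capacity constraints of claim 1, conditions (f1) and (f2), with the two candidate allocations of claims 2 and 3. An illustrative numerical check that both allocations respect those constraints (all names and values are assumptions, not the patent's):

```python
# Illustrative parameters: N_S = 3 synchronous flows, N_A = 4 asynchronous.
N_A, alpha = 4, 0.2
C, TTRT, T_max = 100e6, 1e-3, 0.12e-3      # bit/s, s, s
r = [20e6, 30e6, 10e6]                     # minimum synchronous rates
load = sum(x / C for x in r)               # normalized load = 0.6 <= 1 - alpha

# Candidate allocation per claim 2 (proportional) and per claim 3.
H_prop = [(x / C) * TTRT for x in r]
H_alt = [TTRT * (N_A + alpha) * (x / C) / (N_A + (1.0 - load)) for x in r]

# Both candidates respect claim 1's conditions (f1) and (f2).
for H in (H_prop, H_alt):
    assert sum(H) + T_max <= TTRT          # (f1)
assert TTRT >= T_max / (1.0 - load)        # (f2)
```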
6. The method defined in claim 1 wherein during each of said successive cycles, said server performs a double scan on all the queues associated to said synchronous flows and then visits the queues associated to said asynchronous flows.
7. The method defined in claim 6 which includes the following steps:
associating with each synchronous flow a further value indicating the amount of service time that is available to the respective flow, during a major cycle of said double scan servicing each queue associated to a synchronous flow for a period of time at most equal to said further value, and during a minor cycle of said double scan servicing only one packet of each queue associated to a synchronous flow, provided that said further value is strictly positive.
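Claims 7 to 14 (and their system counterparts) describe a major/minor double scan driven by a per-flow time budget. A minimal sketch of one double scan, representing queued packets by their service times (all names are illustrative, not the patent's):

```python
from collections import deque

def major_cycle(queues, budget, H):
    """Major cycle: top up each budget by the synchronous capacity H[i]
    (claim 8), service packets while the budget covers the head of the
    queue, decrementing per packet (claim 9), and reset the budget when
    the queue empties (claim 11)."""
    served = []
    for i, q in enumerate(queues):
        budget[i] += H[i]
        while q and q[0] <= budget[i]:
            budget[i] -= q[0]
            served.append((i, q.popleft()))
        if not q:
            budget[i] = 0.0
    return served

def minor_cycle(queues, budget):
    """Minor cycle: at most one packet per queue, only while the budget
    is strictly positive (claims 7 and 12)."""
    served = []
    for i, q in enumerate(queues):
        if q and budget[i] > 0.0:
            budget[i] -= q[0]
            served.append((i, q.popleft()))
    return served

# Two synchronous flows; entries are packet service times (arbitrary units).
queues = [deque([0.3, 0.3]), deque([0.5])]
budget = [0.0, 0.0]
H = [0.4, 0.4]
print(major_cycle(queues, budget, H))  # [(0, 0.3)] - 0.5 exceeds flow 1's budget
print(minor_cycle(queues, budget))     # [(0, 0.3), (1, 0.5)] - budgets still > 0
```

In the example, flow 1's 0.5-unit packet does not fit its 0.4-unit budget in the major cycle, but the minor cycle lets it send one packet anyway because its budget is still strictly positive.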
8. The method defined in claim 7 which includes the step of incrementing said further value by said respective synchronous capacity value when the queue is visited during the major cycle of said double scan.
9. The method defined in claim 7 or claim 8 which includes the operation of decrementing said further value by the transmission time of each packet serviced.
10. The method defined in any one of claims 7 to 9 wherein the servicing of each queue associated to a synchronous flow during the major cycle of said double scan ends when one of the following conditions occurs:
the queue is empty; or the time available, represented by said further value, is not sufficient to service the packet at the front of the queue.
11. The method defined in claim 10 which includes the operation of resetting said further value when the respective queue is empty.
12. The method defined in any one of claims 7 to 11 which includes the step of decrementing said further value by the amount of service time in the presence of a service given during the minor cycle of said double scan.
13. The method defined in any one of claims 7 to 12 wherein during said double scan of all the queues associated to said synchronous flows, said minor cycle ends when one of the following conditions is satisfied:
the last queue associated to a synchronous flow has been visited; or a period of time not less than the sum of the capacities of all of the queues associated to said synchronous flows has elapsed since the beginning of said major cycle of said double scan.
14. The method defined in any one of claims 7 to 13 which includes the step of initializing said further value to zero.
15. The method defined in any one of claims 1 to 14 wherein, in the case that said difference is negative, each said queue associated to an asynchronous flow is not serviced and the value of said difference is accumulated with said delay.
16. The method defined in any one of claims 1 to 15 wherein the service of a queue associated to an asynchronous flow ends when one of the following conditions is satisfied:
the queue is empty; or the time available is not sufficient to transmit the packet that is at the front of the queue.
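The asynchronous rule of claims 1(e), 15 and 16 is the classical timed-token early/late test. A hedged sketch (the names and the choice to clear the lateness on an early visit are assumptions consistent with timed-token operation, not quoted from the patent):

```python
from collections import deque

def async_visit(now, last_visit, lateness, ttrt, queue):
    """diff = TTRT - (time elapsed since the previous visit) - accumulated
    lateness.  A positive diff is the maximum service time (claim 1(e));
    a non-positive diff means the queue is skipped and |diff| adds to the
    accumulated lateness (claim 15).  Service stops when the head packet
    no longer fits in the remaining time (claim 16)."""
    diff = ttrt - (now - last_visit) - lateness
    if diff <= 0.0:
        return [], lateness - diff, now     # skip; accumulate the delay
    sent, remaining = [], diff
    while queue and queue[0] <= remaining:
        remaining -= queue[0]
        sent.append(queue.popleft())
    return sent, 0.0, now                   # early visit clears the lateness

# Illustrative visit: TTRT = 1.0, 0.6 elapsed, 0.1 lateness carried over.
q = deque([0.2, 0.2, 0.5])
sent, lateness, last = async_visit(now=1.4, last_visit=0.8, lateness=0.1,
                                   ttrt=1.0, queue=q)
# diff = 1.0 - 0.6 - 0.1 = 0.3, so exactly one 0.2 packet fits
```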
17. The method defined in any one of claims 1 to 16 wherein said first respective value and said second respective value are respectively initialized to zero and to the moment of startup of the current cycle when the flow is activated.
18. A system for the scheduling of a service resource shared among several information packet flows that generate respective associated queues, said flows including synchronous flows that require a guaranteed minimum service rate and asynchronous flows destined to use the service capacity of said resource left unused by the synchronous flows, the system including a server able to visit the respective queues associated to said flows in successive cycles, which is configured to perform the following operations:
determine a target rotation time value that identifies the time necessary for the server to complete a visiting cycle of said respective queues;
associate to each synchronous flow a respective synchronous capacity value indicating the maximum amount of time for which a synchronous flow can be serviced before moving on to the next;
associate to each asynchronous flow a first respective delay value that identifies the delay that must be made up for the respective queue to have the right to be serviced, and a second respective value that indicates the instant in which, in the previous cycle, the server visited the respective queue, determining for said respective queue the time that has elapsed since the server's previous visit;
service each queue associated to a synchronous flow for a maximum period of time relating to said respective synchronous capacity value; and
service each queue associated to an asynchronous flow only if the server's visit occurs before the expected instant, said advance being determined as the difference between said target rotation time value and the time that has elapsed since the server's previous visit and the accumulated delay, this difference, if positive, defining the maximum service time for each said asynchronous queue;
the system being configured to define said respective synchronous capacity value for the queue associated to the i-th synchronous flow by ensuring that:
the sum of the synchronous capacity values for said synchronous flows plus the duration of the longest packet serviced by said shared service resource does not exceed said target rotation time value; and said target rotation time value is not lower than the ratio of the service duration of said longest packet serviced by said shared service resource to the complement to one of the sum over said synchronous flows of the minimum service rates required by said synchronous flows normalized to the service capacity of said shared service resource.
19. The system defined in claim 18 which is configured for defining said respective synchronous capacity value for the queue associated to the i-th synchronous flow as the product of the minimum service rate required by said i-th synchronous flow and said target rotation time value normalized to the service capacity of said shared service resource.
20. The system defined in claim 18 which is configured for defining said respective synchronous capacity value for the queue associated to the i-th synchronous flow by:
defining a factor such that the sum over said synchronous flows of the minimum service rates required by said synchronous flows normalized to the service capacity of said shared service resource is not larger than the complement to one of said factor;

defining said respective synchronous capacity value for the queue associated to the i-th synchronous flow as said target rotation time value times the ratio of a first and a second parameter, wherein:
said first parameter is the sum of the number of said asynchronous flows and said factor, said sum times the minimum service rates required by said synchronous flows normalized to the service capacity of said shared service resource, and said second parameter is the sum of the number of said asynchronous flows plus the complement to one of the sum over said synchronous flows of the minimum service rates required by said synchronous flows normalized to the service capacity of said shared service resource.
21. The system defined in claim 18 which is configured for ensuring that the sum over said synchronous flows of the minimum service rates required by said synchronous flows normalized to the service capacity of said shared service resource does not exceed unity.
22. The system defined in claim 18 which is configured to define said respective synchronous capacity value for the queue associated to the i-th synchronous flow by ensuring that the following are satisfied:
i) the expressions

Σ_j H_j + T_max ≤ TTRT    and    TTRT ≥ T_max / (1 − Σ_j r_j / C)

ii) as well as at least one of the following expressions

H_i = (r_i / C) · TTRT    or    H_i = TTRT · (N_A + α) · (r_i / C) / (N_A + 1 − Σ_j r_j / C)

where:
H_i is said respective synchronous capacity value for the queue associated to the i-th synchronous flow, the summations are extended to all the synchronous flows, equal in number to N_S, N_A is the number of said asynchronous flows, T_max is the service duration of the longest packet by said shared service resource, TTRT is said target rotation time value, C is the service capacity of said shared service resource, r_i is the minimum service rate requested by the i-th synchronous flow, and α is a parameter that gives Σ_j r_j / C ≤ 1 − α.
23. The system defined in claim 18 wherein, during each of the said successive cycles, said server performs a double scan on all the queues associated to said synchronous flows and then visits the queues associated to said asynchronous flows.
24. The system defined in claim 23 wherein:
a further value, indicating the amount of service time available to the respective flow, is associated to each synchronous flow, during a major cycle of said double scan each queue associated to a synchronous flow is serviced for a period of time at most equal to said further value, and during a minor cycle of said double scan the system services only one packet of each queue associated to a synchronous flow, provided said further value is strictly positive.
25. The system defined in claim 24 wherein said further value is incremented by said respective synchronous capacity value when the queue is visited during the major double scan cycle.
26. The system defined in claim 24 or claim 25 wherein said further value is decremented by the transmission time of each packet serviced.
27. The system defined in any one of claims 24 to 26 which is configured so that the service of each queue associated to a synchronous flow during the major cycle of said double scan ends when one of the following conditions occurs:
the queue is empty; or the time available, represented by said further value, is not sufficient to serve the packet at the front of the queue.
28. The system defined in claim 27 wherein said further value is reset when the respective queue is empty.
29. The system defined in any one of claims 24 to 28 wherein in the presence of a service given during the minor cycle of said double scan, said further value is decremented by the amount of service time.
30. The system defined in any one of claims 24 to 29 wherein during said double scan on all the queues associated to said synchronous flows, said minor cycle ends when one of the following conditions is satisfied:
the last queue associated to a synchronous flow has been visited; or a period of time not less than the sum of the capacities of all of the queues associated to said synchronous flows has elapsed since the beginning of said major cycle of said double scan.
31. The system defined in any one of claims 24 to 30 wherein said further value is initialized to zero.
32. The system defined in any one of claims 24 to 31 wherein, if said difference is negative, each said queue associated to an asynchronous flow is not serviced and the value of said difference is accumulated with said delay.
33. The system defined in any one of claims 24 to 32 wherein the service of a queue associated to an asynchronous flow ends when one of the following conditions is satisfied:
the queue is empty; or the time available is not sufficient to transmit the packet that is at the front of the queue.
34. The system defined in any one of claims 24 to 33 wherein said first respective value and said second respective value are respectively initialized to zero and to the moment of startup of the current cycle when the flow is activated.
CA2482430A 2002-04-12 2002-07-01 Scheduling a shared resource among synchronous and asynchronous packet flows Expired - Lifetime CA2482430C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
IT2002TO000326A ITTO20020326A1 (en) 2002-04-12 2002-04-12 Method and system for scheduling a resource shared among a plurality of information packet flows
ITTO02A000326 2002-04-12
PCT/IT2002/000430 WO2003088605A1 (en) 2002-04-12 2002-07-01 Scheduling a shared resource among synchronous and asynchronous packet flows

Publications (2)

Publication Number Publication Date
CA2482430A1 CA2482430A1 (en) 2003-10-23
CA2482430C true CA2482430C (en) 2013-10-01

Family

ID=27639005

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2482430A Expired - Lifetime CA2482430C (en) 2002-04-12 2002-07-01 Scheduling a shared resource among synchronous and asynchronous packet flows

Country Status (10)

Country Link
US (1) US7336610B2 (en)
EP (1) EP1495600B1 (en)
JP (1) JP3973629B2 (en)
KR (1) KR100908287B1 (en)
AT (1) ATE352156T1 (en)
AU (1) AU2002318035A1 (en)
CA (1) CA2482430C (en)
DE (1) DE60217728T2 (en)
IT (1) ITTO20020326A1 (en)
WO (1) WO2003088605A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100588174C (en) 2004-04-26 2010-02-03 意大利电信股份公司 Method and system for scheduling synchronous and asynchronous data packets over same network
CN100488165C (en) * 2005-07-06 2009-05-13 华为技术有限公司 Stream scheduling method
US8660319B2 (en) * 2006-05-05 2014-02-25 Parham Aarabi Method, system and computer program product for automatic and semi-automatic modification of digital images of faces
US8010947B2 (en) * 2006-05-23 2011-08-30 International Business Machines Corporation Discovering multi-component software products based on weighted scores
CN102111891B (en) * 2007-01-18 2016-06-29 华为技术有限公司 Share the methods, devices and systems of Internet resources
CN101227714B (en) * 2007-01-18 2011-04-06 华为技术有限公司 System, apparatus and method for sharing network resource
GB2448762B (en) * 2007-04-27 2009-09-30 Nec Corp Scheduling information method and related communications devices
EP2134037B1 (en) * 2008-06-12 2011-05-11 Alcatel Lucent Method and apparatus for scheduling data packet flows
US10313208B2 (en) * 2014-12-17 2019-06-04 Telefonaktiebolaget Lm Ericsson (Publ) Flexible assignment of network functions for radio access
WO2016148617A1 (en) 2015-03-18 2016-09-22 Telefonaktiebolaget Lm Ericsson (Publ) Apparatus and methods for paging
EP3289803B1 (en) 2015-04-30 2019-06-12 Telefonaktiebolaget LM Ericsson (PUBL) Relaxed measurement reporting with control plane dual connectivity
CN113612700B (en) * 2021-08-12 2023-11-14 北京邮电大学 Low-delay zero-jitter mixed time sensitive flow scheduling method and device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5404424A (en) 1992-09-22 1995-04-04 The Texas A&M University System Normalized proportional synchronous bandwidth allocation in a token ring network by setting a maximum message transmission time
US6147970A (en) 1997-09-30 2000-11-14 Gte Internetworking Incorporated Quality of service management for aggregated flows in a network system
US6469991B1 (en) * 1997-10-14 2002-10-22 Lucent Technologies Inc. Method for overload control in a multiple access system for communication networks
US6091709A (en) 1997-11-25 2000-07-18 International Business Machines Corporation Quality of service management for packet switched networks
US7116679B1 (en) * 1999-02-23 2006-10-03 Alcatel Multi-service network switch with a generic forwarding interface
US6594268B1 (en) 1999-03-11 2003-07-15 Lucent Technologies Inc. Adaptive routing system and method for QOS packet networks
US6570883B1 (en) * 1999-08-28 2003-05-27 Hsiao-Tung Wong Packet scheduling using dual weight single priority queue
ITTO20001000A1 (en) * 2000-10-23 2002-04-23 Cselt Ct Studi E Lab T Lecomun PROCEDURE AND SYSTEM FOR THE SCHEDULING OF A SHARED RESOURCE BETWEEN A MULTIPLE OF INFORMATION PACKAGE FLOWS.

Also Published As

Publication number Publication date
CA2482430A1 (en) 2003-10-23
WO2003088605A1 (en) 2003-10-23
KR20040102083A (en) 2004-12-03
JP3973629B2 (en) 2007-09-12
ATE352156T1 (en) 2007-02-15
DE60217728T2 (en) 2007-10-18
KR100908287B1 (en) 2009-07-17
EP1495600B1 (en) 2007-01-17
US7336610B2 (en) 2008-02-26
ITTO20020326A0 (en) 2002-04-12
EP1495600A1 (en) 2005-01-12
DE60217728D1 (en) 2007-03-08
US20050147030A1 (en) 2005-07-07
ITTO20020326A1 (en) 2003-10-13
AU2002318035A1 (en) 2003-10-27
JP2005522946A (en) 2005-07-28

Similar Documents

Publication Publication Date Title
Semeria Supporting differentiated service classes: queue scheduling disciplines
Sayenko et al. Ensuring the QoS requirements in 802.16 scheduling
US6633585B1 (en) Enhanced flow control in ATM edge switches
US7039013B2 (en) Packet flow control method and device
CA2482430C (en) Scheduling a shared resource among synchronous and asynchronous packet flows
Wischhof et al. Packet scheduling for link-sharing and quality of service support in wireless local area networks
JP3878553B2 (en) Procedure and system for scheduling shared resources between multiple information packet flows
Guo G-3: An O (1) time complexity packet scheduler that provides bounded end-to-end delay
Jiwasurat et al. Hierarchical shaped deficit round-robin scheduling
Agharebparast et al. Efficient fair queuing with decoupled delay-bandwidth guarantees
Schmitt Optimal network service curves under bandwidth-delay decoupling
Xiaohui et al. Two simple implementation algorithms of WFQ and their performance analysis
Anelli et al. Differentiated services over shared media
Koubaa et al. SBM protocol for providing real-time QoS in Ethernet LANs
Majoor Quality of service in the internet Age
Sharafeddine et al. A dimensioning strategy for almost guaranteed quality of service in voice over IP networks
Hwang et al. The economics of QoS allocation strategies in the Internet: An empirical study
Moorman Quality of service support for heterogeneous traffic across hybrid wired and wireless networks
LEE et al. Design of a Label Switch Controller for Differentiated Services in IP and ATM Integrated Networks
Larsson et al. Capacity management for Internet traffic
Aghdaee et al. An enhanced bandwidth allocation algorithms for QoS provision in IEEE 802.16 BWA
Aghdaee Quality of service support in IEEE 802.16 broadband wireless access networks
Osali et al. MEEAC: An enhanced scheme for supporting QoS granularity by multipath explicit endpoint admission control
Bin-Abbas Adaptive capacity allocation in MPLS networks
Lucenius The application of RSVP

Legal Events

Date Code Title Description
EEER Examination request
MKEX Expiry

Effective date: 20220704