WO2006082548A1 - A scheduler for a request-grant procedure - Google Patents

A scheduler for a request-grant procedure

Info

Publication number
WO2006082548A1
WO2006082548A1 (PCT/IB2006/050301)
Authority
WO
WIPO (PCT)
Prior art keywords: transmitting, requests, data, channel, channel capacity
Application number
PCT/IB2006/050301
Other languages
French (fr)
Inventor
Theodorus J. J. Denteneer
Johannes S. H. Van Leeuwaarden
Original Assignee
Koninklijke Philips Electronics N.V.
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2006082548A1 publication Critical patent/WO2006082548A1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 72/00 - Local resource management
    • H04W 72/12 - Wireless traffic scheduling
    • H04W 72/1263 - Mapping of traffic onto schedule, e.g. scheduled allocation or multiplexing of flows
    • H04W 72/1268 - Mapping of traffic onto schedule, e.g. scheduled allocation or multiplexing of flows of uplink data flows
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 72/00 - Local resource management
    • H04W 72/20 - Control channels or signalling for resource management
    • H04W 72/21 - Control channels or signalling for resource management in the uplink direction of a wireless link, i.e. towards the network


Abstract

A method for scheduling use of a communication channel 5 shared by stations 3 for transmitting data and for transmitting requests to use the channel 5 for transmission of data, comprises, in the scheduler, the step of allocating channel capacity for transmitting the requests. It further comprises receiving respective requests from the stations 3, and allocating channel capacity for transmitting data during selected time intervals to individual stations 3 in dependence on the requests received. Moreover, the step of allocating channel capacity for transmitting the requests is performed in dependence on a predefined minimal delay between the moment of transmitting a request and a corresponding time interval comprising channel capacity allocated for transmitting data.

Description

A scheduler for a request-grant procedure
The invention relates to a method for scheduling use of a communication channel shared by stations for transmitting data and for transmitting requests to use the channel for transmission of data, the method comprising, in the scheduler: allocating channel capacity for transmitting the requests, receiving respective requests from the stations, and allocating channel capacity for transmitting data during selected time intervals to individual stations in dependence on the requests received.
The invention also relates to a scheduler for scheduling use of a communication channel shared by stations for transmitting data. The invention also relates to a communication network including a shared communication channel, a plurality of stations, and a scheduler. The invention also relates to a headend with the scheduler for use in a communication network. The invention also relates to a computer program product.
In the prior art document "A Review of Contention Resolution Algorithms for IEEE 802.14 Networks", by N. Golmie et al., IEEE Communications Surveys, First Quarter 1999, referred to hereinafter as "Golmie et al.", methods are reviewed for broadcast environments where a contention resolution algorithm is needed in order to share a multiaccess medium among various nodes. Bidirectional common access television networks (CATV networks) are a good example of such broadcast environments. In a cable network, which can be characterized by a hybrid fiber-coax (HFC) structure, data may be transmitted from a headend to the stations and from the stations to the headend. Data transfer from the headend to the stations shall be referred to as downstream communication, while data transfer from the stations to the headend shall be referred to as upstream communication. All such data transmissions share a single channel with a limited capacity or bandwidth. The channel's capacity can be divided by means of frequency division multiple access (FDMA), or time division multiple access (TDMA), or both. The channel may be divided into several frequency channels, some channels dedicated to downstream communication, and others dedicated to upstream communication. For upstream communication to be successful, it is necessary that at most one station transmits data at a given time and frequency.
In many known data transmission channels, time is divided into sequential periods called frames, said frames are divided into slots, and said slots are sometimes divided into minislots. The duration of a slot or minislot is chosen such that it corresponds to the transmission time of a data packet with a predefined size. The headend allocates selected data slots or minislots to individual stations for upstream communication. During the slots allocated to a specific station, only that specific station is allowed to transmit data to the headend. In order for the headend to be able to allocate channel capacity to those stations that have data ready for transmission, the stations can transmit requests to the headend. Preferably, the request contains information about the amount of data that a station needs to transmit, but, for example, it is also possible that a request always relates to a fixed, predetermined amount of data.
In Golmie et al., the headend designates certain slots or minislots as contention slots. During contention slots, the channel is shared among a plurality of stations, allowing all stations to transmit requests for reserved data slots. During a contention slot, it is possible that a plurality of stations transmits a request to the headend simultaneously. Such an event is called a collision, and the data transmitted during such an event cannot be received by the headend. After a collision, requests are retransmitted according to a collision resolution algorithm. After the request has been transmitted successfully, it enters the waiting queue and remains there until the station has been able to perform the requested data transmission. As an alternative to contention, the headend could poll the stations sequentially for potential requests. Other alternative methods for communication of requests are possible.
The reservation cycle, or request/feedback cycle, is defined as the time that elapses between request transmission and feedback reception at the station that is farthest away from the headend. In order to ensure that all stations have the same opportunity to submit requests and to prevent unfairness due to the relative location of the stations with respect to the headend, it is necessary to take the reservation cycle into account when processing the requests. In Golmie et al., this is done by collecting the requests received during a single frame before allocating corresponding data slots. With a given set of request messages, the headend can schedule the corresponding grants according to a known scheduling strategy. Possible scheduling strategies include "first come, first served" and "round robin". These and other scheduling strategies are well known from the literature and are not described here in further detail. A certain portion of the channel's capacity has to be reserved for transmitting requests, thereby limiting the portion of the channel's capacity that can be allocated to stations for transmitting data. Since the channel has a limited capacity, the size of the portion of the channel's capacity reserved for transmitting requests has immediate consequences for the amount of data that can be transmitted per time unit. According to Golmie et al., a certain minimal portion of the channel's capacity is required in contention mode to obtain sufficient requests to allocate all available data slots. If the headend allocates more than said minimal portion of the channel's capacity to contention mode, a smaller portion of the channel's capacity can be allocated to data transmission, so that fewer data can be transmitted over the data channel in a given time unit, which may lead to longer waiting queues.
In Golmie et al., a method is presented to determine a portion of the channel's capacity to be used in contention mode based on the maximum number of data slots in a frame, the number of minislots that a data slot contains, the average number of data slots that can be requested at a time, and the total number of data slots requested but not yet allocated by the headend.
This method leads to relatively long waiting queues.
It is an object of the invention disclosed herein to provide a method, network, and scheduler of the kind set forth, wherein the mean queue size can be reduced.
According to one aspect of the invention, the step of allocating channel capacity for transmitting the requests is performed in dependence on a predefined lower bound on a delay between a moment of transmitting a request to use the channel for transmission of data and a moment a corresponding time interval starts, comprising channel capacity allocated for transmitting the data.
Using this inventive method leads to reduced mean queue size. In many networks there is a non-negligible delay between the time a station transmits a request and the time the headend has received and processed the request. Likewise, there is a delay between the time the headend transmits a grant and the time the grant is received and processed by a station. These two delays can depend on the distance between the headend and the station. In a typical example network, the sum of the two said delays for the station furthest away from the headend is in the range of 2 to 3 frames. By taking knowledge of these delays into account, it is possible to allocate channel capacity more efficiently. Another aspect of the invention further includes the step of storing information about allocated channel capacity during an immediately preceding period, the period corresponding to the predefined lower bound, and wherein the step of allocating of channel capacity for transmitting requests also depends on said stored information. This aspect of the invention leads to a further reduction of the mean queue size. Moreover, the method leads to a reduced cyclic behavior of the queue size.
According to another aspect of the invention, said allocating of channel capacity for transmitting requests also depends on an expected amount of data requested during a predefined time duration with a predefined capacity allocated for transmitting requests. Said amount can be computed as the product of the expected number of requests and the average amount of data requested in a single request. By incorporating said amount, it becomes possible to estimate the required capacity for transmitting requests in order to avoid an empty queue. This allows further reduction of the queue size. The expected amount can be determined by computing an average amount. The expected amount can be determined once, but preferably the expected amount is updated periodically based on recent network traffic.
According to another aspect of the invention, said allocating of channel capacity for transmitting requests also depends on an average amount of data (Λ) requested in a fixed time duration. By taking this quantity into account, it is possible to define a target for the average channel capacity that is needed for transmitting data and that cannot be used for transmitting requests. Taking this into account leads to a further reduction of the queue size. Preferably, Λ is updated regularly based on the number of requests and the amount of data requested in an immediately preceding period that is substantially longer than the fixed time duration.
According to another aspect of the invention, said allocating of channel capacity for transmitting requests comprises minimizing an expected product of the spare channel capacity within a predefined time duration starting around the beginning of the immediately preceding period and the amount of data requested that cannot be allocated within the predefined time duration starting at the current time. This aspect of the invention also results in a reduction of the mean queue size. The minimizing of the expected product can be performed numerically or by means of an adaptive scheduling technique disclosed in the detailed description of the invention.
According to another aspect of the invention, the channel capacity allocated for transmitting requests is expressed as a number of slots in a frame, the predefined lower bound is expressed as a number of frames d, and the allocating of channel capacity for transmitting requests also depends on the number of slots allocated for transmitting requests during the last d frames.
According to this aspect the invention is particularly easy to implement.
According to another aspect of the invention, the predefined lower bound is at least 2 frames. The queue size reduction is greater for larger lower bounds on the delay.
According to another aspect of the invention, the step of allocating channel capacity for transmitting the requests comprises allocating a lower bound on the portion of the channel capacity for transmitting requests, and periodically determining said lower bound on the portion of the channel capacity in dependence on a variance of the amount of data requested during a predefined time duration with a predefined capacity allocated for transmitting requests.
This aspect of the invention also results in a reduction of the mean queue size.
According to another aspect of the invention, the determining of said lower bound on the portion of the channel capacity is performed by minimizing an expected amount of data requested but not yet allocated in dependence on said variance, an average amount of data requested during a predefined time duration with a predefined capacity allocated for transmitting requests, and said lower bound on the delay. This results in a further reduction of the mean queue size.
According to another aspect of the invention, the periodic determining of said lower bound on the portion of the channel capacity is performed at least two times per hour. This allows the method to adapt regularly to changing data traffic conditions.
According to another aspect of the invention, relating to a scheduler for scheduling use of a communication channel shared by stations for transmitting data and for transmitting requests to use the channel for transmission of data, the scheduler comprises: means for allocating channel capacity for transmitting the requests, means for receiving respective requests, the requests being transmitted by stations, and means for allocating channel capacity during selected time intervals to individual stations in dependence on the requests received, wherein the means for allocating channel capacity for transmitting the requests is arranged to operate in dependence on a predefined lower bound on a delay between a moment of transmitting a request to use the channel for transmission of data and a moment a corresponding time interval starts, comprising channel capacity allocated for transmitting data.
According to another aspect of the invention, relating to a communication network, the communication network includes a shared communication channel, a plurality of stations, and a scheduler as described above.
According to another aspect of the invention, relating to a headend for use in a communication network, the communication network further comprising a plurality of stations, a shared upstream channel used for communication from the stations to the headend, and a downstream channel used for communication from the headend to the stations, the headend comprises a scheduler as described above.
According to another aspect of the invention, relating to a computer program product, the computer program product includes instructions for causing a processor to execute the method described above.
These and other aspects of the method of the invention will be further elucidated and described with reference to the drawing, in which:
Fig. 1 depicts a diagram of a data channel in a common access television network;
Fig. 2 depicts how time can be divided into frames, how frames can be divided into slots, and how slots can be subdivided into minislots;
Fig. 3 depicts how selected slots within a frame can be allocated for transmitting requests or for transmitting data;
Fig. 4 depicts an example of the time of transmitting a request and the time of transmitting corresponding data;
Fig. 5 depicts an embodiment of the method implemented on the scheduler;
Fig. 6 depicts an embodiment of the scheduler;
Fig. 7 illustrates the cyclic behavior of the waiting queue;
Figs. 8 and 9 illustrate how the mean queue length varies as a function of the traffic intensity;
Fig. 10 illustrates how the variance of the mean queue length varies as a function of the traffic intensity;
Fig. 11 illustrates how a correlation term varies as a function of the traffic intensity;
Fig. 12 illustrates how the mean idle time term varies as a function of the traffic intensity;
Figs. 13, 14 and 15 illustrate how the mean queue length and a correlation term vary as a function of a delay parameter.
Figure 1 shows a schematic diagram of a part of a communication network 1. A headend 2 communicates with stations 3 using a shared data channel 4. The channel can use different media, such as for example fiber cable, coax cable, a combination of both fiber and coax, or, in the case of a wireless network, the ether. The channel can also comprise amplifiers. Using FDMA, the channel can be subdivided into multiple frequency bands, each frequency band being used for upstream communication 5 from the stations to the headend and/or downstream communication 6 from the headend to the stations. Hereinafter, it will be assumed that the inventive scheduler is part of the headend. However, it will be appreciated by the skilled artisan that other configurations are possible. For example, the scheduler could be part of one of the stations using the channel.
In a preferred embodiment, every station uses only one frequency band at a time for upstream communication and monitors one frequency band for downstream communication, as allocated by the headend. One frequency band can be allocated to a plurality of stations, for example around 300 stations, in which case the stations have to share the frequency band. In this case, that particular frequency band as used on the physical channel can also be referred to as a 'channel' in the sense of the invention. In this embodiment, the headend allocates selected time slots to individual stations. The headend can also designate selected time slots for shared channel use, for example in contention mode. The headend regularly transmits announcements to the stations comprising information about allocated channel capacity, for example about allocated frequency bands and/or time slots. In particular, an announcement transmitted using the downstream channel can comprise information about allocated channel capacity regarding the upstream channel.
In another embodiment, the allocation of a frequency to a station is not fixed, but can vary depending on the amount of data transmitted by the individual stations. In another embodiment, a station can be instructed to use a plurality of frequency bands simultaneously. A grant to use channel capacity could comprise directions to use a specified frequency band or frequency bands. It will be appreciated by the man skilled in the art that the above-mentioned and other variations are all within the scope of the invention.
Figure 2 illustrates how time can be divided into a contiguous sequence of periods 51, called slots. A sequence of a predefined number of slots defines a frame 50, while a slot 51 can be subdivided into a sequence of minislots 52. Usually, the duration of a slot is such that exactly one data packet can be transferred in each slot. The duration of a minislot is usually such that exactly one request message can be transferred in each minislot. However, this is not limitative of the invention. It is possible to design a system in which frames, slots, and minislots have variable length, or in which the concept of frames, slots, and minislots is not used. In the latter case, some other structure may be present that allows allocation of channel capacity during selected time intervals. Such a structure is within the scope of the invention.
Figure 3 shows an example of how selected slots within a frame 50 can be allocated as request slots 55 and how selected slots can be allocated as data slots 56. During request slots, stations may transmit requests to the headend. For example, the request slots can operate in contention mode as described above. It is possible that not only requests, but also other types of information, including data packets, can be transmitted during request slots, depending on the implementation of the network. During data slots, at least part of the capacity of the channel is reserved for data transmission by a single station. It is also possible that some of the slots are allocated for purposes not mentioned herein.
Figure 4 depicts a first frame 50 comprising a first slot 60, the slot 60 comprising a minislot 61, during which a station transmits a request to transmit a data packet. The Figure further depicts a second slot 62, within a second frame 63, and a third frame in between the first frame 50 and the second frame 63. Said second slot 62 is allocated to said station for data transmission. Due to the transmission and processing times of the request and corresponding grant, there is a minimal delay between the transmission of the request 61 and the allocated slot 62. In many implementations, the duration of a frame is adapted to the propagation delay of a message between the headend and the station furthest away from the headend. In known CATV networks, this propagation delay is approximately 3 milliseconds. Often this means that there are at least one or two frames in between the first frame 50 and the second frame 63.
Figure 5 depicts an embodiment of the method implemented in a scheduler. The Figure shows, respectively, after the method is started 70, that a portion of the channel capacity is allocated 71 for transmitting requests and a portion of the channel capacity is allocated 71 for transmitting data, that an announcement is transmitted to the stations informing them about channel allocation details 72, and that the requests transmitted by the stations using the allocated channel capacity are received 73, after which the steps are repeated. It will be clear to the man skilled in the art that this sequence of events is only one exemplary embodiment of the inventive scheduler. The above-mentioned steps can, preferably, be implemented to operate in parallel. For example, allocation of channel capacity 71 and transmitting of the announcement 72 can be implemented to be performed while receiving requests 73 according to channel capacity that was allocated previously. Moreover, the allocation of channel capacity 71 can be performed while transmitting a previously prepared announcement 72, possibly in combination with transmitting other information, such as for example digital video or web pages, from the headend to the stations.
Figure 6 shows another view of a possible embodiment of a scheduler according to the invention, comprising a central processing unit, or processor, PROC and a memory MEM. The processor PROC is provided with instructions, said instructions possibly residing in memory MEM, for causing the processor PROC to perform the inventive method. The scheduler further comprises data transmission means Tx and data reception means Rx, the data transmission means Tx being capable of transmitting announcements and/or data to the stations, and the data reception means Rx being capable of receiving requests and/or data transmitted by the stations. The elements MEM, PROC, Tx, and Rx, making up the embodiment of the scheduler, may be set up to perform not only the function of a scheduler, but also other functions, in particular functions generally performed by a CATV network headend, such as transmitting, receiving, and forwarding data such as video signals and web pages.
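For illustration only, the allocate/announce/receive loop of Figure 5 could be organized along the lines of the following sketch. This is not the patent's implementation; the class, method, and field names (Scheduler, FrameAllocation, backlog) are invented for the example, and the downstream transmission step is left as a stub.

```python
from dataclasses import dataclass, field

@dataclass
class FrameAllocation:
    """Allocation map for one upstream frame (illustrative)."""
    request_slots: list = field(default_factory=list)   # slot indices usable for requests
    data_slots: dict = field(default_factory=dict)       # slot index -> station id

class Scheduler:
    def __init__(self, slots_per_frame):
        self.f = slots_per_frame
        self.backlog = {}   # station id -> packets requested but not yet granted

    def allocate(self, n_request_slots):
        """Step 71: reserve request slots, grant remaining slots to backlogged stations."""
        alloc = FrameAllocation(request_slots=list(range(n_request_slots)))
        slot = n_request_slots
        for station in list(self.backlog):               # e.g. first come, first served
            while self.backlog[station] > 0 and slot < self.f:
                alloc.data_slots[slot] = station
                self.backlog[station] -= 1
                slot += 1
        return alloc

    def announce(self, alloc):
        """Step 72: transmit the allocation map on the downstream channel (stub)."""
        pass

    def receive(self, requests):
        """Step 73: register (station id, packets) requests received in request slots."""
        for station, amount in requests:
            self.backlog[station] = self.backlog.get(station, 0) + amount
```

A driver would call allocate(), announce(), and receive() once per frame; how the number of request slots is chosen each frame is exactly the subject of the scheduling strategies analysed below.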
Model description and overview
The inventive method can be motivated by analyzing the number of data packets requested, but not yet allocated by the headend. This quantity can be analyzed using a model. An existing example of such a model is the discrete bulk service queue, a fixed boundary model, defined by the recursion
X_{t+1} = (X_t - s)^+ + A_t. (1)
Here, the superscript operator + is defined by x^+ := max(0, x), X_t denotes the queue length at the beginning of frame t, A_t denotes the number of newly arriving packets during frame t, and s denotes the maximum number of packets that can be transmitted in one frame. Packets that arrive at the queue in frame t can be transmitted at the earliest in frame t + 1. To better reflect the detail of data transmission in aforementioned communication networks, two modifications to the basic bulk service queue are introduced in the following. Firstly, the arrivals A_t are coupled to the queue length so that the arrival intensity is relatively high if the queue size is small and the arrival intensity is relatively low if the queue size is large.
Secondly, a delay parameter is introduced in the model so that there is a fixed minimum delay between the instant of issuing a request and the instant that a corresponding data packet or corresponding data packets can be transmitted. These two modifications lead to the delayed flexible boundary model according to the invention, defined by the recursion
X_{t+1} = (X_t - s)^+ + Σ_{i=1}^{c + (s - X_{t-d})^+} Y_{t-d,i}. (2)
Here, X_t denotes the size of the data queue at the beginning of frame t, Y_{t,i} denotes the random variable distributed as the number of arriving packets during the i-th request slot of frame t, d represents the transmission delay such that a request made in frame t can be scheduled at the earliest in frame t + d + 1, and c denotes the number of forced request slots per frame, i.e., the minimal number of slots allocated for transmission of requests in each frame. For the remaining s = f - c slots in a frame (the frame length is f slots), it is assumed that data transmission takes precedence over reservation. Then, the delayed flexible boundary model serves as a model for the data queue. The quantity c + (s - X_{t-d})^+ can be interpreted as the number of slots used for handling request messages in frame t - d. The actual data transmissions for these requests cannot be scheduled before frame t + 1, so that the data packets associated with these requests join the data queue at the beginning of frame t + 1. The sum in Equation 2 thus represents the total number of new data packets that can be transmitted. Hereinafter, the flexible boundary model shall refer to the model described by Equation 2 with d = 0, as opposed to the delayed flexible boundary model, which refers to the model with d > 0. The Y_{t,i} are assumed to be independent identically distributed copies of some integer-valued random variable Y. The delayed flexible boundary model incorporates many of the key characteristics of cable networks regulated by a request-grant mechanism.
Scheduling parameter c and transmission delay
For the fixed and flexible boundary models, there is no obvious choice of c. For the delayed flexible boundary model, this choice is even less obvious. In "Adaptive control mechanisms for cable modem MAC protocols", by D. Sala, J. Limb, and S. Khaunte, in Proceedings INFOCOM 98, Vol. 3, pages 1392-1399, San Francisco, CA, 1998, hereinafter referred to as "Sala et al.", a strategy is investigated in which priority is given to the data queue (c = 0) by simulating a cable access network with transmission delay, in which data transfer was organized by a reservation mechanism. In Sala et al., it is observed that this type of scheduling results in a very bursty arrival process and a cyclic queue behavior. Sala et al. compared this priority strategy with c = 0 to strategies that reduce the cycle length by forcing capacity to handle requests (c > 0). These strategies, which guarantee some of the capacity to the request queue, lead to a smoother process and to shorter delays. In the following, a model will be derived which will aid in appreciating the advantages of the inventive method in relation to the transmission delay d, through a mathematical analysis of the delayed flexible boundary model.
Approach
In the following, approximating bounds are derived for the mean stationary queue length. The mean queue length is expressed in terms of moments of the arrival distribution, a term related to the idle time, and a correlation term. After that a technique is used to bound the term related to the idle time. Finally, the correlation term is approximated.
The bounds and the approximation together yield approximations for the mean stationary queue length.
These approximations suggest some properties of the mean queue length. These properties can be used to advantage to develop an adaptive scheduling strategy that is designed to take into account the transmission delay. The properties suggested by the approximation are stated after that, followed by the adaptive scheduling algorithm.
Simulation results are presented as examples to illustrate the advantages of the invention.
Mean queue length
The mean and variance of the random variable Y are hereinafter denoted by μ_Y and σ_Y^2, respectively. To get more insight into the mean queue length for general d ≥ 0, a method described in "Inequalities in the theory of queues", by J.F.C. Kingman, in J. Royal Statist. Soc., Ser. B, Volume 32, pages 102-110, is applied. This method is based on the manipulation of the variables
P_t := (X_t - s)^+ and M_t := (s - X_t)^+. (3)
For these variables, the following obvious relations hold:
X_t - s = P_t - M_t, (4)
P_t M_t = 0. (5)
The notation X^d will be used to denote a random variable distributed according to the stationary distribution of the queue length process as defined by Equation 2, and the notation M^d will be used to denote a random variable that follows the same distribution as (s - X^d)^+. Using these notations, the mean queue length in the delayed flexible boundary model with delay parameter d can be expressed as
E(X^d) = c σ_Y^2 / (2(s - cμ_Y)) + σ_Y^2 / (2(1 + μ_Y)) + (s + cμ_Y)/2 + (μ_Y^2 - 1) E((M^d)^2) / (2(s - cμ_Y)) + μ_Y E(R^d) / (s - cμ_Y), (6)
where
E(R^d) = lim_{t→∞} E(P_t M_{t-d}). (7)
Equation 6 for E(X^d) contains two unknown terms: a term E((M^d)^2) related to the idle time and a correlation term E(R^d). The idle time term
E((M^d)^2) = Σ_{j=0}^{s} P(X^d = j)(s - j)^2 (8)
can be satisfactorily bounded in the following way. Since
( Σ_{j=0}^{s} P(X^d = j)(s - j) )^2 ≤ Σ_{j=0}^{s} P(X^d = j)(s - j)^2 ≤ s Σ_{j=0}^{s} P(X^d = j)(s - j), (9)
and
E(M^d) = (s - cμ_Y) / (1 + μ_Y), (10)
it follows that
( (s - cμ_Y) / (1 + μ_Y) )^2 ≤ E((M^d)^2) ≤ s (s - cμ_Y) / (1 + μ_Y). (11)
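A small numeric helper makes these bounds concrete; it simply evaluates Equations 10 and 11 as given above, and the function and variable names are illustrative assumptions rather than part of the patent.

```python
def idle_time_bounds(f, c, mu_y):
    """E(M^d) from Equation 10 and the bracket around E((M^d)^2) from Equation 11."""
    s = f - c
    e_m = (s - c * mu_y) / (1.0 + mu_y)        # Equation 10
    return e_m, (e_m ** 2, s * e_m)            # (lower, upper) bounds of Equation 11

# Example: f = 18 slots, c = 2 forced request slots, mu_y = 2 packets per request slot
# gives E(M^d) = 4.0 and 16.0 <= E((M^d)^2) <= 64.0.
```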
In case d = 0, it is obvious that R^d = 0. In that case, combining Equation 6 with Equation 11 yields bounds for the mean queue length. The bounds are sharp for the heavy-traffic case in which cμ_Y → s. For d ≥ 1, a fluid approximation for E(R^d) is derived in the following.
An approximation for the correlation term
In this section an approximation for E(R^d) is disclosed and motivated by means of a heuristic argument. Said approximation, together with the bounds given in Equation 11, yields approximations for E(X^d). The argument is based on the inspection of the sample paths of various realizations of the process defined by Equation 2.
One such sample path is shown in Figure 7, where f = 18, d = 100, c = 0, and Y is geometrically distributed with μ_Y = 1.25. The Figure shows the queue length X_t as a function of time t after a long initial warm-up period. The sample path has settled on a cyclic pattern. Each cycle can be subdivided into three distinct parts. First, there is an interval, of length d + 1, in which the queue length equals 0. In the second interval, also of length d + 1, the queue length increases. Finally, in the third interval (the length of this interval is specified below), the data queue is drained until it hits zero. Thereafter, a new cycle starts. It is conjectured that this is the typical behavior of the sample paths in case μ_Y > 1 and d > 0, irrespective of the actual distribution of Y. This conjecture makes clear that it is possible to construct a deterministic approximation of the sample path. A heuristic approximation of E(R^d) is then obtained by evaluating E(R^d) for this deterministic approximation. The deterministic process x_t is defined via Equation 2 with Y_{t,i} replaced by its expected value:
x_{t+1} = (x_t - f + c)^+ + Σ_{i=1}^{c + (f - c - x_{t-d})^+} μ_Y. (12)
Given initial values x_1 = . . . = x_{d+1} = cμ_Y, it follows from Equation 12 that, for j = 1, . . . , d + 1,
x_{d+1+j} = j(f - cμ_Y)μ_Y - (j - 1)(f - c),
because in this period those packets join the data queue that were generated in the f - cμ_Y request slots d + 1 frames earlier, while packets are transmitted from the queue at the maximum rate of f - c packets per frame. At the end of this period, the queue has built up to the level (d + 1)(f - cμ_Y)μ_Y - d(f - c), after which the queue is drained at rate f - c - cμ_Y per frame. This yields
x_{2(d+1)+j} = (d + 1)(f - cμ_Y)μ_Y - (d + j)(f - c) + jcμ_Y,
for j = 1, . . . , L*. Here L* is the smallest value l for which x_{2(d+1)+l} hits cμ_Y. Consequently, L* can be calculated from x_{2(d+1)+L*} = cμ_Y, i.e.
L* = ( (d + 1)(f - cμ_Y)μ_Y - d(f - c) ) / (f - c - cμ_Y).
After instant 2(d + 1) + L* - 1 the sequence repeats itself. Hence the cycle length equals L = 2(d + 1) + L* - 1 = (d + 1)(μ_Y + 1). E(R^d) can be approximated as follows:
E(R^d) ≈ lim_{T→∞} (1/T) Σ_{t=1}^{T} (x_t - s)^- (x_{t+d} - s)^+ ≈ (1/L) Σ_{t=1}^{L} (x_t - f + c)^- (x_{t+d} - f + c)^+, (14)
where x^- := -min{0, x}. For μ_Y > 1, the second sum in Equation 14 can be approximated by the terms t = 2, . . . , d + 1, so that
E(R^d) ≈ (1/L) Σ_{j=1}^{d} (cμ_Y - f + c)^- ( j(f - cμ_Y)μ_Y - (j - 1)(f - c) - f + c )^+
       = ( d(d + 1) / (2L) ) (f - c - cμ_Y) ( (f - cμ_Y)μ_Y - f + c )^+ (15)
       = ( d / (2(1 + μ_Y)) ) (f - c - cμ_Y) ( (f - cμ_Y)μ_Y - f + c )^+.
Substituting Equation 15 into Equation 6 yields the following approximation for E(X^d):
E(X^d) ≈ c σ_Y^2 / (2(s - cμ_Y)) + σ_Y^2 / (2(1 + μ_Y)) + (s + cμ_Y)/2 + (μ_Y^2 - 1) E((M^d)^2) / (2(s - cμ_Y)) + ( dμ_Y / (2(μ_Y + 1)) ) ( (f - cμ_Y)μ_Y - f + c )^+. (16)
The bounds in Equation 11 for E((Md)2) can again be used to obtain explicit expressions. The approximation in Equation 16 of the mean queue length is in general sharp, but breaks down in heavy-traffic conditions for c > 0. The latter is because the deterministic approximation of the sample path is less suitable for c > 0 and heavy-traffic conditions.
Equation 16 suggests various interesting properties for E(Xd). Most importantly, E(Xd) can be considered as a function of c. In order to set c such that the mean queue length is minimized, there are two considerations: The smaller c, the quicker the data queue is emptied, while the larger c, the more the arrival process is smoothened. Equation 16 can be used to strike the proper balance between these two considerations.
Adaptive scheduling
One may conclude from the above that transmission delay results in a cyclic behavior and a strongly correlated arrival process. This might have severe consequences for the mean queue length (see Equation 6), since the correlation term E(R^d) becomes dominant in high-load situations. One objective of the invention is to provide means to smoothen the arrival process and to reduce the correlation of the arrival process. One way to realize this objective is to choose c in response to current traffic characteristics by minimizing Equation 16 with respect to c. This scheduling technique will be referred to as partly adaptive scheduling hereinafter. Current traffic characteristics can for example be determined by collecting the information contained in requests arriving in a predefined time duration, and computing the traffic characteristics, such as the average and variance of the amount of data requested during a predefined time duration with a predefined capacity allocated for transmitting requests, from said information. The minimizing of Equation 16 with respect to c can be performed, for example, numerically by computing E(X^d) for multiple values of c, as sketched below. Since a frame has a finite number of slots, only a finite number of values of c need to be computed. It is also possible to determine c as a non-integer value, in which case a root-finding technique can be used. Root-finding techniques, such as the Newton-Raphson method, are well known in the art. Another way to realize this objective is to introduce a scheduling strategy that does not only allow the number of request slots per frame to vary (as for the delayed flexible boundary model), but also allows the number of request slots in a frame to depend on the queue length at the beginning of the frame and the number of request slots scheduled in the previous d frames. This strategy will be referred to hereinafter as adaptive scheduling.
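A minimal sketch of this partly adaptive selection of c, assuming the form of Equation 16 as reproduced above and using the upper bound of Equation 11 for the idle-time term; the function and variable names are illustrative and not taken from the patent.

```python
def approx_mean_queue_length(c, f, d, mu_y, var_y):
    """Approximate E(X^d) along the lines of Equation 16, with the upper
    bound of Equation 11 substituted for E((M^d)^2) (illustrative sketch)."""
    s = f - c
    if s <= c * mu_y:                          # denominator would vanish or go negative
        return float("inf")
    e_m = (s - c * mu_y) / (1.0 + mu_y)        # Equation 10
    e_m2 = s * e_m                             # upper bound from Equation 11
    idle = (mu_y ** 2 - 1.0) * e_m2 / (2.0 * (s - c * mu_y))
    corr = d * mu_y * max((f - c * mu_y) * mu_y - f + c, 0.0) / (2.0 * (mu_y + 1.0))
    return (c * var_y / (2.0 * (s - c * mu_y))
            + var_y / (2.0 * (1.0 + mu_y))
            + (s + c * mu_y) / 2.0
            + idle + corr)

def partly_adaptive_c(f, d, mu_y, var_y):
    """Partly adaptive scheduling: pick the integer c in {0, ..., f - 1}
    that minimises the approximate mean queue length."""
    return min(range(f), key=lambda c: approx_mean_queue_length(c, f, d, mu_y, var_y))
```

Because c can only take the values 0, . . . , f - 1, the minimisation is a simple enumeration; the same enumeration could equally be driven by any other estimate of the mean queue length.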
Denote by c_t the number of request slots scheduled in frame t. The evolution equation of the queue length at frame boundaries then becomes
X_{t+1} = (X_t - (f - c_t))^+ + Σ_{i=1}^{c_{t-d}} Y_{t-d,i}. (17)
The cyclic behavior, as disclosed herein, can be explained as follows. The arrival process is coupled to the queue length such that more packets arrive when the queue is small, and fewer packets arrive when the queue is long. This type of control is expected to lead to a smoother distribution of the number of arriving packets over time. However, the transmission delay upsets the balance. The impact of a corrective decision, like more arrivals if the system is less busy, is only seen d frames later. If the system is busier d frames later, the extra arrivals might have just the opposite effect. This phenomenon of control decisions that have the opposite effect as one had in mind can be captured by the correlation term:
E(R^d) = lim_{t→∞} E(P_t M_{t-d})
       = lim_{t→∞} Σ_{j=0}^{s} P(M_{t-d} = j) Σ_{k=0}^{∞} P(P_t = k | M_{t-d} = j) j k
       = lim_{t→∞} Σ_{j=0}^{s} P(X_{t-d} = s - j) Σ_{k=0}^{∞} P(X_t = s + k | X_{t-d} = s - j) j k.
E(R^d) might be viewed as a measure for the performance of a scheduling strategy: a high value of E(R^d) indicates that the scheduling strategy balances the input poorly, and ideally E(R^d) equals zero. Obviously, it holds that the larger the transmission delay d, the less likely it is that the relatively simple scheduling adopted by the flexible boundary model balances the input well. It is one objective of the invention to reduce the mean queue length by choosing an adaptive scheduling strategy that balances the input properly despite a substantial delay. Denote by c* the mean number of request slots per frame, given by
c* = f / (1 + μ_Y). (18)
The mean number of arriving data packets per frame, denoted by Λ, is then given by
Λ = c* μ_Y = f μ_Y / (1 + μ_Y). (19)
In balancing the input, it would be desirable to have Λ packets arrive at the queue per frame. This is not feasible, since the arrival process has stochastic characteristics, but it might serve as a guiding principle. At the beginning of frame t, c_{t-d} + c_{t-d+1} + . . . + c_{t-1} request slots have been allocated in the previous d frames and the number of request slots c_t in frame t must still be allocated, which means that the number of arriving packets in the next d frames is given by
Σ_{k=1}^{d} Σ_{i=1}^{c_{t-k}} Y_{t-k,i} + Σ_{i=1}^{c_t} Y_{t,i}. (20)
Ideally, there will be f - c_t packets at the beginning of each frame, so that in each frame all waiting packets can be transmitted. In that case, the following would hold:
X_t = f - c_t;   Σ_{k=1}^{d} Σ_{i=1}^{c_{t-k}} Y_{t-k,i} + Σ_{i=1}^{c_t} Y_{t,i} = (d + 1)Λ. (21)
In reality, this will not be the case, but these values can be used as a benchmark. It is advantageous to choose c_t such that
X_t - (f - c_t) + Σ_{k=1}^{d} Σ_{i=1}^{c_{t-k}} Y_{t-k,i} + Σ_{i=1}^{c_t} Y_{t,i} ≈ (d + 1)Λ. (22)
This benchmark provides a useful scheduling strategy when the Y_{t,i} in Equation 22 are replaced by their expectation μ_Y. Some rewriting then gives the value c̃_t as a target level for c_t, i.e.
c̃_t = ( (d + 1)Λ + f - X_t - μ_Y Σ_{k=1}^{d} c_{t-k} ) / (1 + μ_Y). (23)
Here, d represents the time interval between the moment of transmitting a request and the first possible moment at which corresponding channel capacity can be allocated, c_{t-k}, for all k = 1, . . . , d, represents the previously allocated resource capacity for issuing requests during said time interval d, μ_Y is the average number of requested packets in a resource slot allocated for transmitting requests, and Λ is the average number of requested packets in a frame.
To make sure that c_t is integer-valued, and that all unused data slots are turned into request slots, it is advantageous to choose c_t according to
c_t = max{0, ⌊c̃_t⌋, f - X_t}. (24)
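As a concrete illustration, Equations 23 and 24 translate into a few lines of code. This is a sketch under the reconstructions given above; the function and argument names are invented for the example, and the final clamp to at most f request slots is an added safety guard that is not part of Equation 24.

```python
import math

def adaptive_request_slots(x_t, recent_c, f, d, mu_y, lam):
    """Number of request slots c_t for frame t (Equations 23 and 24).

    x_t      : queue length X_t at the beginning of frame t
    recent_c : c_{t-1}, ..., c_{t-d}, request slots scheduled in the last d frames
    mu_y     : average number of packets requested per request slot
    lam      : average number of requested packets per frame (Lambda)
    """
    assert len(recent_c) == d
    target = ((d + 1) * lam + f - x_t - mu_y * sum(recent_c)) / (1.0 + mu_y)  # Equation 23
    c_t = max(0, math.floor(target), f - x_t)                                 # Equation 24
    return min(c_t, f)   # safety clamp, not part of Equation 24
```

In operation the scheduler would call this once per frame, append the returned c_t to its record of the last d values, and allocate the remaining f - c_t slots as data slots; Λ and μ_Y can be fixed values chosen for heavy-traffic conditions or estimates updated from recent traffic, as in the embodiments described below.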
Numerical evaluation
In order to assess the merit of various scheduling strategies, the results of a number of simulation studies are shown in Figure 8 through Figure 15. Two different distributions for the arrival process Y are considered: the geometric and the Poisson distribution. The frame length has been set to f = 18 and the transmission delay has been set to d = 3. The number of forced request slots c and the traffic intensity Λ were varied. In all simulation results described hereinafter, the performance measures have been evaluated on an interval of 1,000,000 frames, after an initial warm-up period of 200,000 frames.
In the Figures, different curves can be identified by means of labels. The curves labeled "c = 0", "c = 1", "c = 2", "c = 3", "c = 4" apply to the case of regular scheduling, as in the delayed flexible boundary model with c set to the respective labeled value. The curves labeled "Golmie" apply to the scheduling method described in Golmie et al., and the curves labeled "c*" apply to the inventive adaptive scheduling method, according to Equation 24.
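For orientation, a setting like the one just described can be driven with the mean_queue_length sketch given after Equation 2 (geometric arrivals only). The loop below is a hypothetical driver; it uses Λ = fμ_Y/(1 + μ_Y) from Equation 19, i.e. μ_Y = Λ/(f - Λ), to translate a target traffic intensity into a value of μ_Y.

```python
f, d = 18, 3
for lam in (10.0, 12.0, 13.5):          # target traffic intensities below f - c
    mu_y = lam / (f - lam)              # invert Equation 19
    for c in (0, 1, 2, 3, 4):           # regular scheduling with c forced request slots
        ex = mean_queue_length(f=f, c=c, d=d, mu_y=mu_y)
        print(f"Lambda={lam:5.1f}  c={c}  E(X^d) ~ {ex:8.2f}")
```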
Figure 8 shows the mean stationary queue length E(Xd) as a function of the traffic intensity Λ, in the case of Poisson distributed arrivals, for seven different scheduling strategies, using the labels identifying different curves defined above. It can be well appreciated, by comparing the curves labeled c* and "Golmie", that the method according to the invention leads to substantial reduction of the mean queue length for many values of Λ. The curves obtained from regular scheduling all have an asymptote at Λ = f - c. For most values of Λ , c = 0 results in the largest mean queue length, while the adaptive scheduling results in the smallest mean queue length. For regular scheduling the non-monotonic behavior is clearly visible for c = 1. This behavior, though remarkable, is not uncommon in systems that involve control and feedback delay. Situations in which these characteristics lead to unwanted oscillations and increased delay may occur if the traffic dynamics can be expressed via a difference equation or differential equation that involves a delayed response.
Figure 9 shows the mean stationary queue length E(Xd) as a function of the traffic intensity Λ, in the case of geometrically distributed arrivals, for seven different scheduling strategies, using the labels identifying different curves described above. It can be well appreciated, by comparing the curves labeled c* and "Golmie", that the method according to the invention leads to substantial reduction of the mean queue length for many values of Λ.
In both Figure 8 and Figure 9, the mean stationary queue length E(X^d) as a function of the traffic intensity Λ resulting from the partly adaptive scheduling method is at most equal to the minimum of the mean stationary queue lengths associated with regular scheduling with c = 0, c = 1, c = 2, c = 3, or c = 4. It can be well appreciated, by comparing said minimum to the curve labeled "Golmie", that the method according to the invention leads to a substantial reduction of the mean queue length for many values of Λ. It can also be appreciated, by comparing said minimum to any one of the curves labeled c = 0, c = 1, c = 2, c = 3, or c = 4, that the method according to the invention leads to a substantial reduction of the mean queue length for many values of Λ.
Figure 10 displays the variance of the stationary queue length as a function of the traffic intensity Λ, in the case of Poisson distributed arrivals, for six different scheduling strategies, using the labels identifying different curves defined above. What is striking is that the Figure is quite similar to Figure 8, including the good performance of adaptive scheduling and the non-monotonic behavior. Figure 11 displays the correlation term. As Λ approaches its maximum sustainable value, the correlation term decreases for regular scheduling with c ≥ 1. The adaptive scheduling succeeds in keeping the correlation term small, except for high values of Λ. As mentioned before, a small correlation term indicates that the scheduling strategy balances the input well.
Figure 12 displays the idle-time term. Clearly, the term is bounded and decreases for increasing values of c. It is confirmed that the idle-time term vanishes when Λ approaches its maximum sustainable value.
Adaptive scheduling and delay
The performance of adaptive scheduling in relation to the transmission delay and the arrival process can be illustrated as follows. Figure 13, Figure 14 and Figure 15 show plots of E(X^d) and E(R^d) for d = 1, 10, and 25, respectively, and static and adaptive scheduling.
First note that in all cases the adaptive scheduling performs well, in the sense that it minimizes the mean queue length for almost all values of Λ. An exception is for Λ ranging from 16.7 to 17 for d = 1. In this case, regular scheduling with c = 1 gives a smaller mean queue length. For increasing values of d, the relative performance of the adaptive scheduling becomes better. The reason for this can be seen from the figures that display the correlation term. The higher the transmission delay becomes, the more (relatively) the correlation term is lowered by adaptive scheduling. For d = 25, the correlation term for adaptive scheduling is almost negligible. This can be explained as follows. The adaptive scheduling determines the appropriate number of request slots by estimating the number of packets that will arrive in the future. Denote the total number of request slots scheduled in the d previous frames by m. The estimated number of future arrivals is then mμy . Hence, the larger d, the larger m, and the more precise the estimation of the number of future arrivals will be. A similar argument can be used for describing the influence of the arrival distribution. The more volatile the distribution, the less accurate the estimation of the future arrivals will be.
Embodiments
In an example embodiment, the scheduler allocates selected slots in a frame t, the frame comprising f slots in total, for transmitting requests, the number of slots allocated for transmitting requests being determined by c_t in Equation 24. The scheduler allocates the remaining slots to individual stations for transmission of data. When all slots in a frame have been allocated, an announcement is transmitted from the headend to the stations, the announcement comprising information about the allocated time slots. The scheduler does not allocate data slots corresponding to a request transmitted in frame t in any frame before frame t + d + 1, where d is a fixed delay parameter related to the reservation cycle. Moreover, in order to evaluate the expression in Equation 24, the scheduler stores the values c_{t-k}, for k = 0, ..., d, being the numbers of slots allocated for transmitting requests in each of the last d frames, and replaces Λ and μ_Y by predefined values. Said predefined values can be derived from estimates of data traffic. Preferably, Λ and μ_Y are given values corresponding to heavy data traffic conditions.
A preferred embodiment is different from the previously described embodiment in that the values Λ and/or μ_Y are updated regularly on the basis of statistical estimation, the estimation being based on the data slots requested during frames t - p, t - p + 1, ..., t - 1, the value p being a predefined population size. Preferably, both Λ and μ_Y are updated as described.
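A sliding-window estimator of the kind described in this embodiment might look as follows. This is a minimal sketch under assumptions not stated in the patent (per-frame bookkeeping of the number of request slots used and the number of packets requested); the class and method names are invented for the example.

```python
from collections import deque

class TrafficEstimator:
    """Sliding-window estimates of mu_Y (requested packets per request slot)
    and Lambda (requested packets per frame) over the last p frames."""

    def __init__(self, population_size):
        self.window = deque(maxlen=population_size)   # (request_slots, packets) per frame

    def record_frame(self, request_slots, packets_requested):
        self.window.append((request_slots, packets_requested))

    def mu_y(self, default=1.0):
        slots = sum(r for r, _ in self.window)
        packets = sum(p for _, p in self.window)
        return packets / slots if slots else default

    def lam(self, default=0.0):
        if not self.window:
            return default
        return sum(p for _, p in self.window) / len(self.window)
```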

Claims

CLAIMS:
1. A method for scheduling use of a communication channel (5) shared by stations (3) for transmitting data and for transmitting requests to use the channel for transmission of data, the method comprising, in the scheduler (2): allocating (71) channel capacity for transmitting the requests (55), receiving (73) respective requests from the stations, and allocating (71) channel capacity for transmitting data (56) during selected time intervals to individual stations in dependence on the requests received, wherein the step of allocating channel capacity for transmitting the requests is performed in dependence on a predefined lower bound on a delay between a moment of transmitting a request (61) to use the channel for transmission of data and a moment a corresponding time interval (62) starts, comprising channel capacity allocated for transmitting the data.
2. A method according to claim 1, further including the step of storing information about allocated channel capacity during an immediately preceding period, the period corresponding to the predefined lower bound, and wherein the step of allocating of channel capacity for transmitting requests also depends on said stored information.
3. A method according to claim 2, wherein said allocating of channel capacity for transmitting requests also depends on an expected amount of data requested during a predefined time duration with a predefined capacity allocated for transmitting requests.
4. A method according to claim 2, wherein said allocating of channel capacity for transmitting requests also depends on an average amount of data (Λ) requested in a fixed time duration.
5. A method according to claim 2, wherein said allocating of channel capacity for transmitting requests comprises minimizing an expected product of the spare channel capacity within a predefined time duration starting around the beginning of the immediately preceding period and the amount of data requested that cannot be allocated within the predefined time duration starting at the current time.
6. A method according to claim 1, wherein the channel capacity allocated for transmitting requests is expressed as a number of slots (60) in a frame (50), the predefined lower bound is expressed as a number of frames d, and the allocating of channel capacity for transmitting requests also depends on the number of slots allocated for transmitting requests (55) during the last d frames.
7. A method according to claim 6, wherein the predefined lower bound is at least
2 frames.
8. A method according to claim 1 , wherein the step of allocating channel capacity for transmitting the requests comprises allocating a lower bound on the portion of the channel capacity for transmitting requests, and periodically determining said lower bound on the portion of the channel capacity in dependence on a variance of the amount of data requested during a predefined time duration with a predefined capacity allocated for transmitting requests.
9. A method according to claim 8, wherein the determining of said lower bound on the portion of the channel capacity is performed by minimizing an expected amount of data requested but not yet allocated in dependence on said variance, an average amount of data requested during a predefined time duration with a predefined capacity allocated for transmitting requests, and said lower bound on the delay.
10. A method according to claim 9, wherein the determining of said lower bound on the portion of the channel capacity is performed at least two times per hour.
11. A scheduler (2) for scheduling use of a communication channel (5) shared by stations (3) for transmitting data and for transmitting requests to use the channel for transmission of data, the scheduler comprising: means for allocating channel capacity for transmitting the requests (55), means for receiving respective requests, the requests being transmitted by stations, means for allocating channel capacity (56) during selected time intervals to individual stations in dependence on the requests received, wherein the means for allocating channel capacity for transmitting the requests is arranged to operate in dependence on a predefined lower bound on a delay between a moment of transmitting a request (61) to use the channel for transmission of data and a moment a corresponding time interval starts, comprising channel capacity allocated for transmitting data.
12. A communication network (1) including a shared communication channel (4), a plurality of stations (3), and a scheduler (2) according to the previous claim.
13. A headend (2) for use in a communication network (1), the communication network further comprising a plurality of stations (3), a shared upstream channel (5) used for communication from the stations to the headend, and a downstream channel (6) used for communication from the headend to the stations, wherein the headend comprises a scheduler according to claim 11.
14. A computer program product including instructions for causing a processor (PROC) to execute the method according to claim 1.
PCT/IB2006/050301 2005-02-04 2006-01-27 A scheduler for a request-grant procedure WO2006082548A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP05100802 2005-02-04
EP05100802.7 2005-02-04

Publications (1)

Publication Number Publication Date
WO2006082548A1 (en) 2006-08-10

Family

ID=36581764

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/050301 WO2006082548A1 (en) 2005-02-04 2006-01-27 A scheduler for a request-grant procedure

Country Status (1)

Country Link
WO (1) WO2006082548A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4498169A (en) * 1983-03-14 1985-02-05 Scientific Atlanta, Inc. Multiaccess broadcast communication system
EP0804006A2 (en) * 1996-04-23 1997-10-29 International Business Machines Corporation Medium access control scheme for a wireless access to an ATM network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06710772

Country of ref document: EP

Kind code of ref document: A1

WWW Wipo information: withdrawn in national office

Ref document number: 6710772

Country of ref document: EP