CA2398755A1 - Scheduler for a shared channel
- Publication number: CA2398755A1
- Authority
- CA
- Canada
- Prior art keywords
- subscriber
- base station
- subscriber stations
- station
- subscriber station
- Prior art date
- Legal status: Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2416—Real-time traffic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2425—Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
- H04L47/2433—Allocation of priorities to traffic types
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W28/00—Network traffic management; Network resource management
- H04W28/02—Traffic management, e.g. flow control or congestion control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W72/00—Local resource management
- H04W72/50—Allocation or scheduling criteria for wireless resources
- H04W72/535—Allocation or scheduling criteria for wireless resources based on resource usage policies
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W84/00—Network topologies
- H04W84/02—Hierarchically pre-organised networks, e.g. paging networks, cellular networks, WLAN [Wireless Local Area Network] or WLL [Wireless Local Loop]
- H04W84/04—Large scale networks; Deep hierarchical networks
- H04W84/08—Trunked mobile radio systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W28/00—Network traffic management; Network resource management
- H04W28/16—Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
- H04W28/18—Negotiating wireless communication parameters
- H04W28/22—Negotiating communication rate
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W72/00—Local resource management
- H04W72/50—Allocation or scheduling criteria for wireless resources
- H04W72/52—Allocation or scheduling criteria for wireless resources based on load
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W72/00—Local resource management
- H04W72/50—Allocation or scheduling criteria for wireless resources
- H04W72/54—Allocation or scheduling criteria for wireless resources based on quality criteria
- H04W72/543—Allocation or scheduling criteria for wireless resources based on quality criteria based on requested quality, e.g. QoS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W72/00—Local resource management
- H04W72/50—Allocation or scheduling criteria for wireless resources
- H04W72/56—Allocation or scheduling criteria for wireless resources based on priority criteria
- H04W72/566—Allocation or scheduling criteria for wireless resources based on priority criteria of the information or information source or recipient
- H04W72/569—Allocation or scheduling criteria for wireless resources based on priority criteria of the information or information source or recipient of the traffic information
ABSTRACT
A method, system and apparatus for scheduling data to subscriber stations from a base station over a shared channel. Data destined for each subscriber station is placed into queues at the base station. The base station allocates a portion of the shared channel to each particular queue based upon that queue's priority value. The priority value for each queue is determined by an adjusted QoS value and an adjusted throughput value. The QoS value indicates whether a subscriber station has been receiving data according to an agreed-upon QoS
level. The throughput value indicates the data rate that can be achieved by transmitting to that subscriber station. These two values are examined by a scheduling policy at the base station. Scheduling policies can include scheduling data to emphasize fairness between subscriber stations, scheduling data to improve overall throughput, and scheduling data to achieve a balance between fairness and throughput.
Description
Scheduler for a Shared Channel
FIELD OF THE INVENTION
The present invention relates to a system, method and apparatus for scheduling data in a network. More specifically, the present invention relates to a system, method and apparatus for scheduling data traffic over a shared channel.
BACKGROUND OF THE INVENTION
In a network including a shared channel to deliver data traffic to multiple receiving stations from a single transmitting station, a transmitting station must determine how to allocate its downlink capacity among receiving stations. Examples of such a network include CATV-based data networks and wireless networks such as the AMOSPHERETM system sold by the assignee of the present invention, etc. In the latter system, a base station transceiver will service a plurality of subscriber stations through an air interface that provides both shared and dedicated downlink (base station to subscriber station) channels. Because the transmission capacity of such systems is limited (typically by the available bandwidth), allocating the available capacity among the users to ensure efficient use of the transmission capacity and providing acceptable service levels can be difficult. Accordingly, such systems can benefit from appropriately scheduling transmissions over the shared link(s).
One of the simplest scheduling methods is round-robin scheduling. Round-robin scheduling provides each subscriber station with an equal amount of transmission time over the shared channel. While this can be an advantageous method in some circumstances, in many networks, such as those employing radio-based links, not all subscriber stations will have the same data reception rates, due to factors such as different signal-to-noise ratios (SNR). Thus, round-robin sharing does not actually provide an equal delivery of data over the shared downlink to each subscriber station. This inequality can result in dissatisfied subscribers, particularly for those subscribers using subscriber stations at the edges of the service area who have substantially lower average data rates than subscribers with subscriber stations located close to the base station. Furthermore, this inequality requires the service provider to be conservative when advertising the performance capabilities offered by the system.
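For illustration, the short Python sketch below compares the data delivered under equal-airtime round-robin when reception rates differ; the per-station rates are hypothetical and are not taken from this disclosure.

```python
# Hypothetical per-station reception rates (bits per slot); not from the patent.
rates = {"near_station": 2400, "mid_station": 1200, "edge_station": 300}
slots_per_frame = 30  # equal airtime: 10 slots per station

def round_robin_delivery(rates, slots_per_frame):
    """Bits delivered to each station when airtime is split equally."""
    share = slots_per_frame // len(rates)
    return {name: share * rate for name, rate in rates.items()}

print(round_robin_delivery(rates, slots_per_frame))
# {'near_station': 24000, 'mid_station': 12000, 'edge_station': 3000}
# Equal airtime, yet the edge station receives one eighth of the near station's data.
```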
Another known scheduling method, proportionally fair sharing, provides each subscriber station with an adjusted amount of channel capacity on the shared link, where each subscriber station receives a channel share that is adjusted by their data reception rates, so that each subscriber station receives approximately the same average amount of data.
While proportional fair sharing can provide a better degree of equality between subscriber stations, it can also lead to an overall drop in total system throughput, as the base station must devote a large amount of channel capacity to service a small minority of subscriber stations with poor SNRs.
In contrast to the above two methods, which focus on providing equal service to subscriber stations, some other methods focus on achieving optimal throughput across the entire system, at the expense of fairness between subscriber stations. In their article "CDMA/HDR: A Bandwidth-Efficient High-Speed Wireless Data Service for Nomadic Users" (IEEE Communications Magazine, July 2000, pp. 70-78), Bender et al. demonstrate how unequal latency between subscriber stations with different data reception capabilities (i.e., different SNRs) can increase total throughput on the downlink in the network. By providing a greater portion of channel capacity on the shared link to subscriber stations with better instantaneous SNRs, the base station can transmit more traffic overall. This method increases the total throughput of the system, potentially clearing backlogs and improving overall network performance, but also creates latency and a lower data rate for subscriber stations with poorer average SNRs. To ensure that all subscriber stations receive at least a tolerable individual data rate, the system limits the maximum permitted latency for subscriber stations with poorer SNRs.
Liu et al. in their article "Opportunistic Transmission Scheduling with Resource-Sharing Constraints in Wireless Networks" (IEEE Journal on Selected Areas in Communication, Vol. 19, No. 10, October 2001, pp. 2053-2064) discuss the potential of improving wireless resource efficiency by exploiting variances in channel conditions, while still maintaining a level of fairness between subscriber stations. In their model, each subscriber station is allocated a fraction of transmission time. Data packets are stored in traffic queues for each subscriber station until their scheduled transmission time occurs, like the methods described above. However, the base station continuously measures the quality of the link (which can vary over time) at each subscriber station, and transmits to the best possible subscriber station at that moment while still providing a required average data rate to each subscriber station. While opportunistic transmission scheduling can provide an increase in both total and individual throughput, it is not without its disadvantages. One disadvantage is that opportunistic scheduling can increase latency for subscriber stations as the base station waits for the opportune time to transmit to them.
Another disadvantage is that this method assumes a constant level of data to be transmitted, such as a WAP session on a cellular phone. Nor does the opportunistic method contemplate intentional differences in the treatment of subscribers, such as those provided by different qualities of service (QoS), or different treatments for different types of media data. For example, a base station may service subscriber stations with different priorities, so that one subscriber using a latency-intolerant Voice-over-IP (VoIP) service receives a guaranteed service, while another subscriber is web surfing and will be provided only with a best-effort service by the base station.
In their article "Providing Quality of Service over a Shared Wireless Link" (IEEE Communications Magazine, February 2001, pp. 50-54), Andrews et al. describe a new scheduling algorithm, Modified Largest Weighted Delay First (M-LWDF), which ameliorates some of the above disadvantages. M-LWDF is similar to opportunistic scheduling in that it schedules transmissions to take advantage of SNR fluctuations. However, particular traffic flows (traveling on the downlink to their respective subscriber stations) are given preferential treatment over other traffic flows by virtue of their QoS level. For each time slot (t), the base station calculates a value for each traffic queue, this value being the product of the packet delay, the channel capacity for that subscriber station and an arbitrary weight. The traffic queue with the highest derived value is scheduled. Mathematically, each time slot (t) serves the waiting packet queue (j) for which the function γ_j·W_j(t)·r_j(t) is maximal. W_j(t) is the current waiting time for the packets stored in queue j, r_j(t) is the channel capacity, or data rate, for data flow j, and γ_j is an arbitrary weight. If γ is the same for each packet queue, then all subscriber stations have the same QoS level; a packet queue with a higher value for γ thus has a higher level of service than a packet queue with a lower value for γ. If r(t) is the same for each packet queue, then all subscriber stations have the same data rate; a packet queue with a higher value for r(t) thus transmits at a higher average data rate than another packet queue with the same γ value.
While M-LWDF provides some advantage over the prior art, it also has its limitations. A key disadvantage is that M-LWDF provides no means to implement a policy of fairness between subscriber stations with different channel quality. Another disadvantage is that M-LWDF can schedule traffic for only a single subscriber station per time slot, which creates a significant amount of latency (a number of timeslots) for all other subscribers waiting for packet delivery. Furthermore, since each timeslot bears data traffic for a single subscriber station, it is possible that some of the capacity of that timeslot is wasted due to internal fragmentation.
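As a point of comparison, the following sketch illustrates the M-LWDF selection rule described above for a single time slot. The queue states (weight, waiting time, rate) are invented for illustration and are not taken from the cited article.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    gamma: float  # per-queue QoS weight (the arbitrary value gamma_j)
    wait: float   # W_j(t): current head-of-line waiting time, seconds
    rate: float   # r_j(t): current achievable data rate, bits per second

# Hypothetical queue states for one time slot.
flows = {"voip": Flow(gamma=3.0, wait=0.020, rate=400_000),
         "web":  Flow(gamma=1.0, wait=0.500, rate=1_200_000)}

def mlwdf_pick(flows):
    """Serve the queue j maximizing gamma_j * W_j(t) * r_j(t)."""
    return max(flows, key=lambda j: flows[j].gamma * flows[j].wait * flows[j].rate)

print(mlwdf_pick(flows))  # 'web': 1.0*0.5*1.2e6 = 600000 beats 3.0*0.02*4e5 = 24000
```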
It is therefore desired to provide a data transmission scheduler that provides QoS-differentiated service to subscriber stations while offering service providers the ability to implement scheduling policies designed to provide improved sector throughput, fairness between subscribers, or some combination of the two. It is also desirable to implement a data transmission scheduler which is opportunistic and can take advantage of variance in the data reception characteristics of subscriber stations served by the transmitter. Finally, the scheduling mechanism should be flexible enough to accommodate a wide variety of computational capabilities at the base station.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a novel data transmission scheduler which obviates or mitigates at least some of the above-identified disadvantages of the prior art.
According to a first aspect of the present invention, there is provided a method of scheduling data for transmission from a base station over a shared channel to a plurality of subscriber stations, said method comprising:
determining a fairness factor from a range of possible fairness factors, where a first end of said range indicates a policy of scheduling data with maximum fairness between said plurality of subscriber stations and a second end of said range indicates a policy of scheduling data for maximum data traffic over said shared channel;
for each subscriber station in said plurality of subscriber stations that said base station has data to be delivered to, determining a quality of service priority value, said quality of service priority value indicating a priority of said subscriber station relative to other subscriber stations in said plurality of subscriber stations;
determining a throughput value, said throughput value indicating the quantity of data to be moved to said subscriber station if data is scheduled to said subscriber station;
determining a total priority value, said total priority value being the sum of said quality of service priority value adjusted according to said fairness factor and said throughput value conversely adjusted according to said fairness factor; and scheduling data on a portion of said shared channel to at least one of said plurality of subscriber stations, beginning with the subscriber station with the highest total priority value.
According to another aspect of the present invention, there is provided a system for transmitting data, comprising:
a plurality of subscriber stations each having a processor, a modem, a radio and an antenna, each subscriber station operable to transmit a request for a dedicated data channel from a base station; and a base station having a processor, a modem, a radio and an antenna, and operable to receive said requests for a dedicated data channel from said subscriber stations and to schedule data for transmission to said plurality of subscriber stations over a shared channel in accordance with a scheduling policy that varies priorities between scheduling for fairness between subscriber stations and for improved throughput to said plurality of subscriber stations.
The present invention provides a method, system and apparatus for scheduling data to a plurality of subscriber stations from a base station over a shared channel.
Data destined for each subscriber station is placed into individual queues at the base station. The base station allocates a portion of the shared channel to transmit the data in each particular queue based upon a priority value it assigns to that queue. The priority value for each queue is determined by a QoS value and a throughput value, where each of these two values is adjusted by a fairness factor. The QoS
value indicates whether a subscriber station has been receiving data from the base station according to an agreed-upon QoS level. The throughput value indicates the data rate that can be achieved by the base station transmitting to that subscriber station. The fairness factor represents a scheduling policy at the base station. Scheduling policies can include scheduling data to emphasize fairness between subscriber stations at a particular QoS level, scheduling data to maximize throughput over the shared channel, and scheduling data to achieve a balance between fairness and maximum throughput.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention will now be described, by way of example only, with reference to the attached Figures, wherein:
Figure 1 is a schematic representation of a wireless network in accordance with an embodiment of the invention;
Figure 2 is a representation of a communications link as shown in Figure 1, comprised of multiple channels;
Figure 3 is a schematic representation of the base station shown in Figure 1;
Figure 4 is a schematic representation of one of the subscriber stations shown in Figure 1;
Figure 5 is a schematic representation of a scheduler for a shared channel running on the base station shown in Figure 3;
Figure 6 is a flowchart showing how the broadcast downlink channel scheduler shown in Figure 5 manages the scheduling of backlogged traffic flows to the shared channel.
DETAILED DESCRIPTION OF THE INVENTION
Referring now to Figure 1, a wireless network for transmitting data is indicated generally at 20. Network 20 includes a radio base station 24 and a plurality of subscriber stations 28a, 28b ... 28n. In a presently preferred embodiment, radio base station 24 is connected to at least one data telecommunications network (not shown), such as a land line-based switched data network, a packet network, etc., by an appropriate gateway and one or more backhauls (not shown), such as a T1, T3, E1, E3, OC3 or other suitable land line link, or can be a satellite or other radio or microwave channel link or any other link suitable for operation as a backhaul as will occur to those of skill in the art.
Base station 24 communicates with subscriber stations 28 which can be fixed, nomadic or mobile devices. The number 'n' of subscriber stations serviced by a base station 24 can vary depending upon the amount of radio bandwidth available and/or the configuration and requirements of the subscriber stations 28.
As illustrated in Figure 1, the geographic distribution of subscriber stations 28 with respect to base station 24 need not be symmetric nor will subscriber stations which are physically located close to one another necessarily experience the same or similar data reception rates, due to varying signal to noise ratios (SNRs) experienced at the subscriber stations 28 due to a variety of factors including the geographic environment (the presence or absence of buildings which can reflect or mask signals), the radio environment (the presence or absence of radio noise sources), etc. Thus, in most circumstances subscriber stations 28 served by a base station 24 can have significantly different SNRs and these SNRs can change over time.
As known to those of skill in the art, subscriber stations 28 can be geographically divided into different sectors 36, formed via directional antennas at base station 24 to increase the number of subscriber stations 28 that can be served from a single base station location. In such a case, each sector 36 essentially acts as a different base station and base station 24 can manage the network resources in each sector 36 independent of each other sector 36.
While Figure 1 shows only one base station 24, it will further be apparent to those of skill in the art that network 20 can contain multiple, geographically distributed base stations 24, with overlapping sector 36 coverage of subscriber stations 28, and where each subscriber station 28 in an overlapping sector 36 coverage area can select which base station 24 it will be serviced by.
A communication link 32 is established in each sector 36 between base station 24 and each subscriber station 28 in the sector 36 via radio. Communication link 32a carries information to be transferred between base station 24 and subscriber station 28b, communication link 32b carries information to be transferred between base station 24 and subscriber stations 28c and 28d, etc. Communication link 32 can be implemented using a variety of multiple access techniques, including TDMA, FDMA, CDMA or hybrid systems such as GSM, etc. In a present embodiment, data transmitted over communication link 32 is transmitted using CDMA as a multiple access technology and the data is in the form of packets, transmitted within slotted time frames, the details of which will be discussed in greater detail below.
As used herein, the terms "package", "packaged" and "packaging" refer to the overall arrangement of the transmission of the packaged data for its reception at an intended destination receiver. Packaging of data can include, without limitation, applying different levels of forward error correcting (FEC) codes (from no coding to high levels of coding and/or different coding methods), employing various levels of symbol repetition, employing different modulation schemes (4-QAM, 16-QAM, 64-QAM, etc.) and any other techniques or methods for arranging data transmission with a selection of the amount of radio (or other physical layer) resources required, the data rate and the probability of transmission errors which are appropriate for the transmission. For example, data can be packaged with rate 1/4 FEC coding (each data bit is transmitted as four coded bits) and 16-QAM modulation for transmission to a first intended receiver, and packaged with rate 1/2 FEC coding and 64-QAM modulation for transmission to a second intended receiver which has better reception quality than the first.
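The data-rate impact of those two packagings follows from simple arithmetic: the information bits per symbol equal the coded bits per symbol (log2 of the constellation size) multiplied by the FEC rate. The sketch below assumes square QAM constellations and is illustrative only.

```python
from math import log2

def info_bits_per_symbol(constellation_size: int, fec_rate: float) -> float:
    """Coded bits per symbol (log2 of constellation size) times the FEC code rate."""
    return log2(constellation_size) * fec_rate

print(info_bits_per_symbol(16, 1 / 4))  # 1.0 bit/symbol: robust packaging for a poor link
print(info_bits_per_symbol(64, 1 / 2))  # 3.0 bits/symbol: three times the rate for a good link
```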
Communications link 32 operates in both an uplink (from a subscriber station 28 to base station 24) and a downlink direction (from base station 24 to subscriber stations 28). The method of providing both uplink and downlink direction is not particularly limited, and in the present embodiment communications link 32 operates by frequency division duplexing (FDD).
However, other methods of providing both an uplink and downlink direction, such as time division duplexing (TDD) and hybrid schemes are within the scope of the invention.
Referring now to Figure 2, in the current embodiment, communications link 32 is comprised of a plurality of channels which, in the present CDMA implementation, are achieved with orthogonal coding of link 32. In the downlink direction, base station 24 uses a shared channel, referred to as the broadcast data channel (BDCH) 38, to carry variable-rate and bursty traffic (consisting primarily of signaling and Internet traffic) across a sector 36. BDCH 38 makes use of adaptive FEC and modulation to maximize downlink capacity and contains multiple packets or, more commonly, segments of packets of data for various subscriber stations 28, all time-multiplexed together into a single frame. In the present embodiment, BDCH 38 can be configured with spreading factor 4, wherein eight blocks of data can be sent within a ten millisecond frame, spreading factor 8, wherein four blocks of data can be sent within a frame, or spreading factor 16, wherein two blocks of data can be sent within a frame.
Also in the present embodiment, base station 24 can support one or more BDCH 38 per sector 36 at any one time.
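For reference, the relationship between spreading factor and scheduling opportunities stated above can be captured in a small lookup. The mapping below uses only the figures given in the text (eight, four or two blocks per ten millisecond frame).

```python
# Blocks per ten millisecond BDCH frame for each spreading factor stated above.
BLOCKS_PER_FRAME = {4: 8, 8: 4, 16: 2}

def blocks_per_second(spreading_factor: int, frame_ms: float = 10.0) -> float:
    """Number of schedulable blocks per second for the chosen spreading factor."""
    return BLOCKS_PER_FRAME[spreading_factor] * (1000.0 / frame_ms)

print(blocks_per_second(4))  # 800.0 blocks per second
```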
In the uplink direction, data traffic is carried from subscriber station 28 to base station 24 using a dedicated data channel (DDCH) 44. A separate DDCH 44 is set up between base station 24 and each subscriber station 28 with an active communications link 32.
Signaling traffic is carried from subscriber station 28 to base station 24, typically inband using DDCH 44.
Subscriber stations 28 measure their received SNR, or other metric of their ability to receive data from base station 24, and report this information back to base station 24 on a regular basis over their DDCH 44 using an upper layer signaling protocol. Subscriber stations 28 with high SNRs require less channel coding and can use higher order modulation than subscriber stations 28 with lower SNRs, and thus each block transmitted on BDCH 38 can use a different block type (i.e., different packaging of FEC type, FEC rate, modulation, etc.).
Figure 3 shows an example of a base station 24 in greater detail. For the sake of clarity, base station 24 is shown as a single-sector base station; however, multi-sector base stations 24 are also within the scope of the invention. Base station 24 comprises an antenna 50, or antennas, for receiving and transmitting radio communications over communications link 32. In turn, antenna 50 is connected to a radio 52 and a modem 54. Modem 54 is connected to a microprocessor-router assembly 56, such as an Intel Corporation Pentium processor based system using a conventional operating system such as Linux. Microprocessor-router assembly 56 is responsible for traffic scheduling of all subscriber stations 28 within its sector 36 and for radio resource management. It will be understood that assembly 56 can include multiple microprocessors, as desired, and/or that the router can be provided as a separate unit, if desired. The router within microprocessor-router assembly 56 is connected to a backhaul 58 in any suitable manner, which in turn connects base station 24 to a data network (not shown).
Referring now to Figure 4, an example of a subscriber station 28 is shown in greater detail. Subscriber station 28 comprises an antenna 60, or antennas, for receiving and transmitting radio-communications over communications link 32. In turn, antenna 60 is connected to a radio 64 and a modem 68, which in turn is connected to a microprocessor-assembly 72.
Microprocessor-assembly 72 can include, for example, a StrongARM processor manufactured by Intel Corporation, which performs a variety of functions, including implementing A/D-D/A conversion, filters, encoders, decoders, data compressors, de-compressors and/or packet disassembly. Microprocessor-assembly 72 also includes buffers 74 which store queued data traffic waiting for transport up communications link 32.
As shown in Figure 4, microprocessor-assembly 72 interconnects modem 68 and a data port 76, for connecting subscriber station 28 to a data client device (not shown), such as a personal computer, personal digital assistant or the like which is operable to use data received over communications link 32. Accordingly, microprocessor-assembly 72 is operable to process data between data port 76 and modem 68. Microprocessor-assembly 72 is also interconnected to at least one telephony port 80, for connecting subscriber station 28 to a telephony device (not shown) such as a telephone. In some cases, particularly in the case of a mobile subscriber station 28, the data client device can be integrated into the subscriber station 28.
Referring now to Figure 5, the logical architecture of a scheduler for a shared channel, such as BDCH 38, is shown generally at 100. Scheduler 100 is responsible for assigning queued packets of data intended to be transmitted from base station 24 to subscriber stations 28 into the bitstream of BDCH 38, while maintaining any agreed-upon QoS terms for each subscriber station 28 and implementing a scheduling policy based upon a fairness factor provided by a network operator (and discussed in greater detail below) in order to provide varying degrees of prioritization of fairness between subscriber stations 28 and overall throughput on BDCH 38. A method for scheduler 100 to implement a scheduling policy and schedule queued packets is described further below with reference to Figure 6. In the current embodiment, scheduler 100 is a software program running within base station 24 on microprocessor-router assembly 56. However, other implementations, such as a hardware or firmware implementation, are also within the scope of the invention.
Data 102 bound for each subscriber station 28 is queued in traffic queues 104 before being sent downstream on BDCH 38 to various subscriber stations 28. Each flow of data 102 can contain a variety of different types of data, such as web pages, FTP data, streaming media, voice over IP data, or other data types as will occur to those of skill in the art. Traffic queues 104 for subscriber stations 28 are established for each subscriber station 28 that is known to, and is connected to, base station 24 over communication link 32. The example in Figure 5 shows a scenario with four traffic queues 104, each servicing a flow to a corresponding subscriber station 28 (e.g., traffic queue 104a holds traffic bound for subscriber station 28a, etc.). In the example shown in the Figure, traffic queue 104a has five queued packets, traffic queue 104b has no queued packets, traffic queue 104c has three queued packets, and traffic queue 104d has four queued packets.
Traffic queues 104a, 104c, and 104d therefore have backlogged (e.g. - non-zero length queues) flows while traffic queue 104b has no backlogged flow.
Besides holding traffic queues 104n, scheduler 100 stores a link quality parameter 108n, a negotiated service share parameter 112n, and a measured service share parameter 116n for each active subscriber station 28n with a traffic flow. Furthermore, scheduler 100 stores a fairness factor 120. As described in further detail below, there is at least one instance of fairness factor 120 per sector 36.
The link quality parameter 108n holds a suitable measurement of the reception quality experienced at subscriber station 28n. In a present embodiment, the value of link quality parameter 108 is the signal-to-noise ratio (SNR) estimate of at least one suitable channel, γ_s = E_c / N_t, where E_c represents the per-chip BDCH channel signal energy at the antenna of subscriber station 28, and N_t represents the total noise received at antenna 60 of subscriber station 28, the total noise equaling the sum of the average noise density, interference from other interfering cells and sectors, plus multi-path interference. In the present embodiment, each subscriber station 28n periodically updates its value for link quality parameter 108 by transmitting its received SNR over an uplink channel such as DDCH 44.
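A minimal sketch of this SNR estimate follows; the function name and the input values are assumptions made for illustration, with N_t formed from the noise terms listed above.

```python
from math import log10

def snr_estimate(ec: float, noise_density: float, other_cell: float, multipath: float):
    """gamma_s = Ec / Nt, with Nt the sum of the noise terms described above.
    Inputs are linear power quantities in arbitrary but consistent units."""
    nt = noise_density + other_cell + multipath
    gamma_s = ec / nt
    return gamma_s, 10 * log10(gamma_s)  # linear value and its dB equivalent

# Hypothetical measurement, for illustration only.
print(snr_estimate(ec=2.0, noise_density=0.5, other_cell=0.4, multipath=0.1))
# (2.0, 3.0102999566398116) -> gamma_s of 2, roughly +3 dB
```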
The negotiated service share parameter 112n stores the value of an agreed-upon quality of service level (φ_f) for subscriber station 28n. In the present embodiment, the negotiated service share parameter represents a guaranteed data rate (bits/s); however, other definitions of a negotiated service share, such as a maximum delay before transmitting a waiting packet, or a combination of guaranteed data rate and maximum delay, are within the scope of the invention.
Subscriber stations 28 with higher values for negotiated service share parameter 112 will receive better service than subscriber stations 28 with lower values. In the present embodiment, negotiated service share parameter 112 is negotiated between base station 24 and each subscriber station 28 when the subscriber station 28 connects to base station 24. However, the means of determining negotiated service share parameter 112 are not particularly limited. For example, negotiated service share parameter 112 can be determined by the service provider based upon the media type about to be transmitted to subscriber station 28, a monthly subscription agreement for subscriber station 28, a fee for service, etc. Other methods of determining negotiated service share parameter 112 will occur to those of skill in the art.
Measured service share parameter 116n stores the value of the measured service share (φ_f^meas) for subscriber station 28n. The measured service share is the portion of BDCH 38 that has carried packets destined for that particular subscriber station 28n. Thus, a larger value for measured service share parameter 116n indicates a higher average data rate delivered to that subscriber station 28n. Measured service share parameter 116n can be equated to the average bit rate over a fixed interval of time to a particular subscriber station 28n.
Finally, fairness factor 120 is an adjustable parameter (F) that represents the scheduling policy and this parameter controls the trade-off between individual flow fairness versus overall throughput over communications link 32. There can be either one instance of fairness factor 120 per sector 36, which is set by a network operator, or there can be one instance of fairness factor 120 per BDCH 38 (in cases where there is more than one BDCH 38 per sector 36).
In the current embodiment, F is normalized and ranges from zero to one. A setting of zero indicates a policy that schedules data flows 102 to prioritize throughput on the downlink by scheduling data to subscriber stations 28 with the best SNRs, without regard to providing fairness between subscriber stations 28. A setting of one indicates a policy that schedules flows 102 to provide fairness between subscriber stations 28, so that all subscriber stations 28 at the same QoS level will receive the same data rate, regardless of their respective SNRs.
Once packets in a traffic queue 104 are scheduled by scheduler 100, they are moved into the blocks 128 of frames 124 of BDCH 38. Normally the spreading factor of a BDCH 38 is predetermined by a network operator and is fixed for every subscriber station 28 that is being serviced by a particular BDCH 38. In a current embodiment, a spreading factor of 4 is preferred (thus providing eight blocks 128 per frame 124).
As known to those of skill in the art, the structure of a block 128 can vary according to differences in modulation order, symbol repetition, etc. The number of bits of information carried in each block 128 can vary according to the block structure used. In a current embodiment, each block 128 can carry between 320 and 9,744 bits of information. Generally, blocks 128 with smaller information payloads use lower order modulation and higher symbol repetition to provide greater redundancy for noisy or otherwise poor communication links 32 experienced by subscriber stations 28 with low SNRs. Communication to subscriber stations 28 with better SNRs can employ a block structure that carries more bits of information. In a current embodiment, blocks 128 with different block structures can be carried in the same frame 124.
Referring now to Figure 6, a flowchart for a method of scheduling data transmissions for a BDCH 38 is shown beginning at 200. In the current embodiment, the method described below is run once per scheduled frame 124. However, the frequency of the method is not particularly limited and it can be run more or less frequently than described here, if desired.
At step 200, a scheduling policy is determined by setting a value for fairness factor 120 for sector 36 (between 0 and 1) in base station 24. As described above, a value of 0 indicates a scheduling policy where scheduler 100 prioritizes sector throughput over fairness between subscriber stations 28 and a value of 1 indicates a scheduling policy where scheduler 100 transmits queued packets with maximum fairness to subscriber stations 28.
It is contemplated that, in the majority of deployments, a fairness factor 120 set somewhere between the two bounds of this range (zero and one) will be preferable. For example, a setting of 0.5 would provide a policy which provides a reasonable degree of fairness to most subscriber stations 28, while still taking advantage of SNR variances to maximize downlink throughput. It should be noted that even when fairness factor 120 is set to one (indicating maximum fairness), subscriber stations 28 with different QoS levels 112 will be scheduled differently according to their QoS levels, as fairness factor 120 determines fairness in scheduling only with regard to different SNRs. In order to provide totally equal service, QoS levels 112 should be the same for all subscriber stations 28.
At step 204, scheduler 100 receives the link quality parameter 108 for each subscriber station 28 with backlogged traffic (i.e., any subscriber station 28n whose corresponding traffic queue 104n has at least one packet in it). Link quality parameter 108 can be updated on each iteration through the method, or can be updated at appropriate longer intervals. At step 208, scheduler 100 calculates a value for the QoS-based priority (q_f) for each traffic queue 104 having at least one packet in the queue for this frame. q_f is a positive number between 0 and 1, with higher values indicating a higher priority for that queue 104. A value for q_f greater than 0.5 indicates that the data in queue 104 is lagging (i.e., the measured service share parameter 116 is less than the negotiated service share in negotiated service share parameter 112) and a q_f value less than 0.5 indicates that that particular queue 104 is leading in service (i.e., the measured service share parameter 116 is greater than the negotiated service share in negotiated service share parameter 112). A higher value of q_f implies that the data in queue 104 requires service sooner.
The value for q_f is derived by subtracting the measured service share φ_f^meas from the required service share φ_f^req, dividing the difference by two, and then adding 0.5:

q_f = (φ_f^req − φ_f^meas) / 2 + 0.5

Both φ_f^req and φ_f^meas (described further below) are defined as ranging between zero and one, so q_f is always a number between zero and one.
The required service share (φ_f^req) represents the amount of the service share required for negotiated service share parameter 112 to be met. For each frame that queue 104 is backlogged, the required service share increases. In the current embodiment, φ_f^req is calculated as follows:

φ_f^req = ( |{n ∈ w_f : f ∈ B(n)}| · φ_f ) / ( Σ_{n ∈ w_f} Σ_{j ∈ B(n)} φ_j )

where the numerator is the product of the number of backlogged frames for flow 102 over a sliding interval of frames (w_f) and φ_f, the negotiated service share parameter 112 negotiated at setup. The absolute value notation is used to indicate the length of that set, and B(n) is the set of indices of the flows which are backlogged during frame n. The denominator is the sum of the negotiated service shares of all backlogged queues 104 over the same sliding interval of frames.
The measured service share (φ_f^meas) is determined by taking all the bits transmitted over BDCH 38 for the selected queue 104 during a sliding window interval, and dividing that total by all the bits transmitted over BDCH 38 during the same sliding window interval for every queue 104. In the current embodiment, φ_f^meas is calculated as follows:

φ_f^meas = ( Σ_{n ∈ w_f} b_f(n) ) / ( Σ_{n ∈ w_f} Σ_{j ∈ B(n)} b_j(n) )

where b_f(n) is the number of bits transmitted from flow f during frame n.
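The following sketch computes the required share, the measured share and q_f over a sliding window, following the formulas above. The window, backlog sets, negotiated shares and bit counts are hypothetical, as are the helper names.

```python
# Sliding-window computation of phi_req, phi_meas and q_f per the formulas above.
# backlog[n] is the set of flows backlogged in frame n of window w_f; bits[n][f]
# is the bits sent to flow f in frame n; phi[f] is the negotiated share.

def required_share(f, window, backlog, phi):
    num = sum(1 for n in window if f in backlog[n]) * phi[f]
    den = sum(phi[j] for n in window for j in backlog[n])
    return num / den if den else 0.0

def measured_share(f, window, backlog, bits):
    num = sum(bits[n].get(f, 0) for n in window)
    den = sum(bits[n].get(j, 0) for n in window for j in backlog[n])
    return num / den if den else 0.0

def qos_priority(f, window, backlog, phi, bits):
    """q_f = (phi_req - phi_meas) / 2 + 0.5, always in [0, 1]."""
    return (required_share(f, window, backlog, phi)
            - measured_share(f, window, backlog, bits)) / 2 + 0.5

window = [0, 1]
backlog = {0: {"a", "b"}, 1: {"a", "b"}}
phi = {"a": 0.75, "b": 0.25}                                   # negotiated shares
bits = {0: {"a": 1000, "b": 1000}, 1: {"a": 1000, "b": 1000}}  # bits actually sent
print(round(qos_priority("a", window, backlog, phi, bits), 3))  # 0.625 -> flow "a" is lagging
```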
At step 212, scheduler 100 computes the throughput priority (t_f) for each backlogged queue 104 for this frame. t_f is a normalized interpretation of the SNR for a particular flow 102 (γ_f), yielding a value between 0 and 1, where 0 corresponds to the minimum SNR and 1 corresponds to the maximum possible SNR. Mathematically,

t_f = ( b(γ_f) − b(γ_min) ) / ( b(γ_max) − b(γ_min) )

where b(x) is the number of bits in block 128 at SNR x, as determined by the block structure, and γ_min and γ_max are the minimum and maximum SNRs supported by the available block structures.
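A small sketch of t_f follows. The block-structure lookup b() and the SNR extremes are hypothetical stand-ins; only the shape of the formula comes from the text.

```python
# Normalized throughput priority t_f using a hypothetical block-structure table.
TABLE = [(-2.0, 320), (2.0, 1200), (6.0, 2800), (10.0, 5600), (14.0, 9744)]

def b(snr_db: float) -> int:
    """Information bits per block 128 at SNR snr_db (hypothetical block structures)."""
    usable = [bits for threshold, bits in TABLE if snr_db >= threshold]
    return usable[-1] if usable else 0

GAMMA_MIN_DB, GAMMA_MAX_DB = -2.0, 14.0  # assumed supported SNR range

def throughput_priority(snr_db: float) -> float:
    """t_f = (b(snr) - b(gamma_min)) / (b(gamma_max) - b(gamma_min)), in [0, 1]."""
    return (b(snr_db) - b(GAMMA_MIN_DB)) / (b(GAMMA_MAX_DB) - b(GAMMA_MIN_DB))

print(round(throughput_priority(7.5), 3))   # 0.263
print(round(throughput_priority(14.0), 3))  # 1.0
```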
At step 216, scheduler 100 computes the total priority function (p_f) for each backlogged queue 104 by adding the QoS priority (q_f, as determined at step 208) to the throughput value (t_f, as determined at step 212), where each value is weighted by fairness factor 120 or by its complement, respectively. In the current embodiment, p_f is calculated as follows:

p_f = F·q_f + t_f·(1 − F), f ∈ B

where B is the set of all backlogged queues 104.
In the above equation, F is the fairness factor 120. When fairness factor 120 is zero, the product of F and q_f is zero and the term t_f·(1 − F), which is the throughput value, is maximal. When fairness factor 120 is one, q_f receives full weight and the throughput term is multiplied by zero.
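To see the effect of the fairness factor, the sketch below ranks two illustrative queues under F = 0, 0.5 and 1; the q_f and t_f values are invented for the example.

```python
# Ranking two illustrative queues under different fairness factors F.
def total_priority(q_f: float, t_f: float, fairness: float) -> float:
    """p_f = F*q_f + (1 - F)*t_f."""
    return fairness * q_f + (1.0 - fairness) * t_f

queues = {"lagging_low_snr": (0.8, 0.1),   # behind on QoS, poor channel
          "leading_high_snr": (0.3, 0.9)}  # ahead on QoS, good channel

for F in (0.0, 0.5, 1.0):
    ranked = sorted(queues, key=lambda q: total_priority(*queues[q], F), reverse=True)
    print(F, ranked)
# 0.0 ['leading_high_snr', 'lagging_low_snr']   pure throughput policy
# 0.5 ['leading_high_snr', 'lagging_low_snr']   balanced (0.60 vs 0.45 here)
# 1.0 ['lagging_low_snr', 'leading_high_snr']   pure fairness policy
```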
At step 220, all the backlogged queues 104 are sorted in descending order of the total priority function p_f determined in step 216, yielding an ordered list p'(1), p'(2), ..., where p'(i) ≥ p'(j) for i < j.
At step 224, starting with the queue 104 with the highest total priority value p'(1), scheduler 100 computes the maximum number of blocks 128, referred to herein as m'(i), that will be allocated to this queue 104 by calculating the percentage of blocks 128 available to it, based upon the priority of this queue 104 relative to the sum of the total priority values of all backlogged queues. More specifically, scheduler 100 multiplies the number of blocks 128 in frame 124 (M) by the priority value p'(i) for this queue 104 (as determined in step 216), then divides the result by the sum of the priority values (also determined in step 216) of all backlogged queues. In the current embodiment, the maximum number of blocks 128, m'(i), is determined as follows:

m'(i) = round( M · p'(i) / Σ_{f ∈ B} p_f )

where the round() operator rounds the number of scheduled blocks, m'(i), to the closest integer.
At step 228, starting with the highest priority queue 104 (p'(1)), scheduler 100 allocates up to m'(1) blocks 128. Fewer blocks 128 are allocated if the number of bits available for scheduling to queue 104 (p'(1)) requires fewer than m'(1) blocks 128. The allocation is repeated for all remaining backlogged queues 104 (p'(2), p'(3), etc.), until either all backlogged queues 104 are cleared or all the blocks 128 in frame 124 are scheduled. Once the frame is filled or the backlog is exhausted, the method returns to step 204 to schedule the next frame 124. The method continues whenever data is present to be scheduled for transmission.
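Putting steps 220 through 228 together, a compact per-frame allocation might look like the sketch below. The queue priorities, backlogs and block payloads are hypothetical, and the ceiling division stands in for allocating fewer blocks when the backlog needs fewer.

```python
# Per-frame allocation following steps 220-228: sort by total priority, size each
# allocation with the rounding formula, and stop when the frame is full.
def allocate_frame(priorities, queued_bits, bits_per_block, blocks_in_frame):
    """Return {queue: blocks allocated} for one frame 124 of the shared channel."""
    total_p = sum(priorities.values())
    allocation, blocks_left = {}, blocks_in_frame
    for q in sorted(priorities, key=priorities.get, reverse=True):
        if blocks_left <= 0:
            break
        m = round(blocks_in_frame * priorities[q] / total_p)  # m'(i)
        needed = -(-queued_bits[q] // bits_per_block[q])      # ceiling division
        allocation[q] = min(m, needed, blocks_left)
        blocks_left -= allocation[q]
    return allocation

priorities = {"a": 0.7, "b": 0.5, "c": 0.2}        # total priority p_f per queue
queued_bits = {"a": 6000, "b": 500, "c": 9000}     # backlog in bits
bits_per_block = {"a": 2800, "b": 1200, "c": 320}  # per-station block payloads
print(allocate_frame(priorities, queued_bits, bits_per_block, blocks_in_frame=8))
# {'a': 3, 'b': 1, 'c': 1} -> queue b needs only one block despite its priority
```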
While the embodiments discussed herein are directed to specific implementations of the invention, it will be understood that combinations, sub-sets and variations of the embodiments are within the scope of the invention. For example, in the current embodiment, only a single queue is scheduled for each traffic block 128. However, multiple queues can be scheduled in a traffic block 128 and are within the scope of the invention. If multiple subscriber stations 28 are scheduled in the same block 128, the block structure must be suitable to satisfy the SNR
requirements of all subscriber stations 28 scheduled. One way to ensure this, for example, is to only allocate subscribers that have an SNR that is greater than the first subscriber that is allocated to the block.
It is contemplated that scheduler 100 may run less frequently when the total amount of traffic in all queues 104 is below a certain threshold (representing a low amount of data traffic volume), so as to reduce latency for microprocessor-router assembly 56.
Alternatively, when the total amount of traffic is below a certain threshold, scheduler 100 may elect to suspend its scheduling method (as described with regard to Figure 6) and instead schedule traffic according to some other scheduling method, such as FIFO, in order to reduce latency for each queue 104. It is further contemplated that scheduler 100 can run more frequently than once per frame 124, where the scheduling interval is an integer multiple of the duration of a block 128. This will permit the scheduler to perform well in situations where the coherence time of the channel is lower, such as when the carrier frequency is higher or if there is a greater amount of motion of pedestrians or other traffic near to the subscriber or the base station.
It is also not intended that the present invention be limited to use with the particular radio based system described above, nor to radio-based systems in general and it is believed that the present invention can be advantageously employed with any system for scheduling data for transmission from a single node to one or more of a plurality of other nodes over a shared communications link. Use with optical, wireline and other shared communication links is contemplated.
The above-described embodiments of the invention are intended to be examples of the present invention and alterations and modifications may be effected thereto, by those of skill in the art, without departing from the scope of the invention which is defined solely by the claims appended hereto.
While proportional fair sharing can provide a better degree of equality between subscriber stations, it can also lead to an overall drop in total system throughput, as the base station must devote a large amount of channel capacity to service a small minority of subscriber stations with poor SNRs.
In contrast to the above two methods which focus on providing equal service to subscriber stations, some other methods focus on achieving optimal throughput across the entire system, at the expense of fairness between subscriber stations. In their article "CDMA/HDR: A
Bandwidth-Efficient High-Speed Wireless Data Service for Nomadic Users" (IEEE
Communication Magazine, July 2000, pp. 70-78), Bender et al. demonstrate how unequal latency between subscriber stations with different data reception capabilities (i.e. - different SNRs) can increase total throughput on the downlink in the network. By providing a greater portion of channel capacity on the shared link to subscriber stations with better instantaneous SNRs, the base station can transmit more traffic overall. This method increases the total throughput of the system, potentially clearing backlogs and improving overall network performance, but also creates latency and a lower data rate for subscriber stations with poorer average SNRs. To ensure that all subscriber stations possess at least a tolerable individual data rate, the system limits the maximum permitted latency value to subscriber stations with poorer SNR ratios.
Liu et al. in their article "Opportunistic Transmission Scheduling with Resource-Sharing Constraints in Wireless Networks" (IEEE Journal on Selected Areas in Communication, Vol. 19, No. 10, October 2001, pp. 2053-2064) discuss the potential of improving wireless resource efficiency by exploiting variances in channel conditions, while still maintaining a level of fairness between subscriber stations. In their model, each subscriber station is allocated a fraction of transmission time. Data packets are stored in traffic queues for each subscriber station until their scheduled transmission time occurs, like the methods described above. However, the base station continuously measures the quality of the link (which can vary over time) at each subscriber station, and transmits to the best possible subscriber station at that moment while still providing a required average data rate to each subscriber station. While opportunistic transmission scheduling can provide an increase in both total and individual throughput, it is not without its disadvantages. One disadvantage is that opportunistic scheduling can increase latency for subscriber stations as the base station waits for the opportune time to transmit to them.
Another disadvantage is that this method assumes a constant level of data to be transmitted, such as a WAP session on a cellular phone. Nor does the opportunistic method contemplate intentional differences in the treatment of subscribers, such as those provided by different qualities of service (QoS), or for treatments of different types of media data. For example, a base station may service subscriber stations with different priorities, so that one subscriber using a latency-intolerant Voice-over-IP (VoIP) service receives a guaranteed service, while another subscriber is web surfing and will be provided only with a best-effort service by the base station.
In their article "Providing Quality of Service over a Shared Wireless Link"
(IEEE
Communications Magazine Feb 2001, pp. 50-54), Andrews et al., describe a new scheduling algorithm, Modified Largest Weighted Delays First (M-LWDF) which ameliorates some of the above disadvantages. M-LWDF is similar to opportunistic scheduling in that it schedules transmissions to take advantage of SNR fluctuations. However, particular traffic flows (traveling on the downlink to their respective subscriber stations) are given preferential treatment over other traffic flows by virtue of their QoS level. For each time slot (t), the base station calculates a value for each traffic queue, this value consisting of the product of the packet delay multiplied by the channel capacity for that subscriber station multiplied by an arbitrary value. The traffic queue with the highest derived value is scheduled. Mathematically, each time slot (t) serves a waiting packet queue (j) for which the function y~W~~(t)r~(t) is maximal.
Wj(t) is the current waiting time for the packets stored in queue j, r~(t) is the channel capacity, or data rate, for data flow j, and y~ is an arbitrary value. If y is the same for each packet queue, then all subscriber stations have the same QoS level. A packet queue with a higher value for y thus has a higher level of service than a packet queue with a lower value for y. If r(t) is the same for each packet queue, then all subscriber stations have the same data rate. A packet queue with a higher value for r(t) thus transmits at a higher average data rate than another packet queue which has the same y value. While M-LWDF provides some advantage over the prior art, it also has its limitations. A
key disadvantage is that M-LWDF provides no means to provide a policy of fairness between subscriber stations with different channel quality. Another disadvantage is that the M-LWDF can schedule traffic for only a single subscriber station per time slot. This creates a significant amount of latency (x number of timeslots) for all other subscribers waiting for packet delivery.
Furthermore, since each timeslot bears data traffic for a single subscriber station, it is possible that some of the capacity of that timeslot is wasted due to internal fragmentation.
It is therefore desired to provide a data transmission scheduler that provides QoS
differentiated service to subscriber stations while offering service providers the ability to implement scheduling policies designed to provide improved sector-throughput, fairness between subscribers, or some combination of the two. It is also desirable to implement a data transmission scheduler which is opportunistic and can take advantage of variance in the data reception characteristics of subscriber stations served by the transmitter.
Finally the scheduling mechanism should be flexible enough to accommodate a wide variety of computational capabilities at the base station.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a novel data transmission scheduler which obviates or mitigates at least some of the above-identified disadvantages of the prior art.
According to a first aspect of the present invention, there is provided a method of scheduling data for transmission from a base station over a shared channel to a plurality of subscriber stations, said method comprising:
determining a fairness factor from a range of possible fairness factors, where a first end of said range indicates a policy of scheduling data with maximum fairness between said plurality of subscriber stations and a second end of said range indicates a policy of scheduling data for maximum data traffic over said shared channel;
for each subscriber station in said plurality of subscriber stations that said base station has data to be delivered to determining a quality of service priority value, said quality of service priority value indicating a priority of said subscriber station relative to other subscriber stations in said plurality of subscriber stations;
determining a throughput value, said throughput value indicating the quantity of data to be moved to said subscriber station if data is scheduled to said subscriber station;
determining a total priority value, said total priority value is the sum of said quality of service priority adjusted according to said fairness factor and said throughput value conversely adjusted according to said fairness factor; and scheduling data on a portion of said shared channel to at least one of said plurality of subscriber stations, beginning with the subscriber station with the highest total priority value.
According to another aspect of the present invention, there is provided a system for transmitting data, comprising:
a plurality of subscriber stations each having a processor, a modem, a radio and an antenna, each subscriber station operable to transmit a request for a dedicated data channel from a base station; and a base station having a processor, a modem, a radio and an antenna, and operable to receive said requests for a dedicated data channel from said subscriber stations and to schedule data for transmission to said plurality of subscriber stations over a shared channel in accordance with a scheduling policy that varies priorities between scheduling for fairness between subscriber stations and for improved throughput to said plurality of subscriber stations.
The present invention provides a method, system and apparatus for scheduling data to a plurality of subscriber stations from a base station over a shared channel.
Data destined for each subscriber station is placed into individual queues at the base station. The base station allocates a portion of the shared channel to transmit the data in each particular queue based upon a priority value it assigns to that queue. The priority value for each queue is determined by a QoS value and a throughput value, where each of these two values is adjusted by a fairness factor. The QoS
value indicates whether a subscriber station has been receiving data from the base station according to an agreed-upon QoS level. The throughput value indicates the data rate that can be achieved by the base station transmitting to that subscriber station. The fairness factor represents a scheduling policy at the base station. Scheduling policies can include scheduling data to emphasize fairness between subscriber stations at a particular QoS level, scheduling data to maximize throughput over the shared channel, and scheduling data to achieve a balance between fairness and maximum throughput.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention will now be described, by way of example only, with reference to the attached Figures, wherein:
Figure 1 is a schematic representation of a wireless network in accordance with an embodiment of the invention;
Figure 2 is a representation of a communications link as shown in Figure 1, comprised of multiple channels;
Figure 3 is a schematic representation of the base station shown in Figure 1;
Figure 4 is a schematic representation of one of the subscriber stations shown in Figure 1;
Figure 5 is a schematic representation of a scheduler for a shared channel running on the base station shown in Figure 3;
Figure 6 is a flowchart showing how the broadcast downlink channel scheduler shown in Figure 5 manages the scheduling of backlogged traffic flows to the shared channel.
DETAILED DESCRIPTION OF THE INVENTION
Referring now to Figure 1, a wireless network for transmitting data is indicated generally at 20. Network 20 includes a radio base station 24 and a plurality of subscriber stations 28a, 28b ... 28n. In a presently preferred embodiment, radio base station 24 is connected to at least one data telecommunications network (not shown), such as a land line-based switched data network, a packet network, etc., by an appropriate gateway and one or more backhauls (not shown), such as a T1, T3, E1, E3, OC3 or other suitable land line link, or can be a satellite or other radio or microwave channel link or any other link suitable for operation as a backhaul as will occur to those of skill in the art.
Base station 24 communicates with subscriber stations 28 which can be fixed, nomadic or mobile devices. The number 'n' of subscriber stations serviced by a base station 24 can vary depending upon the amount of radio bandwidth available and/or the configuration and requirements of the subscriber stations 28.
As illustrated in Figure 1, the geographic distribution of subscriber stations 28 with respect to base station 24 need not be symmetric, nor will subscriber stations which are physically located close to one another necessarily experience the same or similar data reception rates. The varying signal to noise ratios (SNRs) experienced at the subscriber stations 28 arise from a variety of factors, including the geographic environment (the presence or absence of buildings which can reflect or mask signals), the radio environment (the presence or absence of radio noise sources), etc. Thus, in most circumstances subscriber stations 28 served by a base station 24 can have significantly different SNRs and these SNRs can change over time.
As known to those of skill in the art, subscriber stations 28 can be geographically divided into different sectors 36, formed via directional antennas at base station 24, to increase the number of subscriber stations 28 that can be served from a single base station location. In such a case, each sector 36 essentially acts as a different base station and base station 24 can manage the network resources in each sector 36 independently of each other sector 36.
While Figure 1 shows only one base station 24, it will further be apparent to those of skill in the art that network 20 can contain multiple, geographically distributed base stations 24, with overlapping sector 36 coverage of subscriber stations 28, and where each subscriber station 28 in an overlapping sector 36 coverage area can select which base station 24 it will be serviced by. A communication link 32 is established in each sector 36 between base station 24 and each subscriber station 28 in the sector 36 via radio. Communication link 32a carries information to be transferred between base station 24 and subscriber station 28b, communication link 32b carries information to be transferred between base station 24 and subscriber stations 28c and 28d, etc. Communication link 32 can be implemented using a variety of multiple access techniques, including TDMA, FDMA, CDMA or hybrid systems such as GSM, etc. In a present embodiment, data transmitted over communication link 32 is transmitted using CDMA as a multiple access technology and the data is in the form of packets, transmitted within slotted time frames, the details of which will be discussed in greater detail below.
As used herein, the terms "package", "packaged" and "packaging" refer to the overall arrangement of the transmission of the packaged data for its reception at an intended destination receiver. Packaging of data can include, without limitation, applying different levels of forward error correcting (FEC) codes (from no coding to high levels of coding and/or different coding methods), employing various levels of symbol repetition, employing different modulation schemes (4-QAM, 16-QAM, 64-QAM, etc.) and any other techniques or methods for arranging data transmission with a selection of the amount of radio (or other physical layer) resources required, the data rate and the probability of transmission errors which are appropriate for the transmission. For example, data can be packaged with rate 1/4 FEC coding (each data bit is transmitted as 4 coded bits) and 16-QAM modulation for transmission to a first intended receiver and packaged with rate 1/2 FEC coding and 64-QAM modulation for transmission to a second intended receiver which has a better reception quality than the first.
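To make the trade-off concrete, the following sketch compares the information carried per modulation symbol for the two packagings named in the example above. It assumes ideal coding with no framing overhead; the function name and the computed figures are illustrative only and are not taken from this document.

```python
# Illustrative sketch only: information bits per modulation symbol for two
# hypothetical packagings, assuming ideal FEC with no framing overhead.
import math

def info_bits_per_symbol(fec_rate, modulation_order):
    # Each symbol of an M-ary modulation carries log2(M) coded bits,
    # of which only a fraction fec_rate are information bits.
    return fec_rate * math.log2(modulation_order)

# Rate-1/4 FEC with 16-QAM, for a receiver with poor reception quality.
print(info_bits_per_symbol(0.25, 16))   # 1.0 information bit per symbol
# Rate-1/2 FEC with 64-QAM, for a receiver with better reception quality.
print(info_bits_per_symbol(0.5, 64))    # 3.0 information bits per symbol
```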
Communications link 32 operates in both an uplink (from a subscriber station 28 to base station 24) and a downlink direction (from base station 24 to subscriber stations 28). The method of providing both uplink and downlink direction is not particularly limited, and in the present embodiment communications link 32 operates by frequency division duplexing (FDD).
However, other methods of providing both an uplink and downlink direction, such as time division duplexing (TDD) and hybrid schemes are within the scope of the invention.
Referring now to Figure 2, in the current embodiment, communications link 32 is comprised of a plurality of channels, which in the present CDMA
implementation, is achieved with orthogonal coding of link 32. In the downlink direction, base station 24 uses a shared channel, referred to as the broadcast data channel (BDCH) 38 to carry variable-rate and bursty traffic (consisting primarily of signaling and Internet traffic) across a sector 36. BDCH 38 makes use of adaptive FEC and modulation to maximize downlink capacity and contains multiple packets or, more commonly, segments of packets of data for various subscriber stations 28 all time-multiplexed together into a single frame. In the present embodiment, BDCH
38 can be configured with spreading factor 4, wherein eight blocks of data can be sent within a ten millisecond frame, spreading factor 8, wherein four blocks of data can be sent within a frame, or spreading factor 16, wherein two blocks of data can be sent within a frame.
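As a quick illustration of the configuration just described, the mapping below reflects only the three spreading factors named in the text; any other values would be assumptions.

```python
# Illustrative sketch only: blocks 128 available per ten millisecond BDCH
# frame 124 for the spreading factors described in the text.
BLOCKS_PER_FRAME = {4: 8, 8: 4, 16: 2}

def blocks_per_frame(spreading_factor):
    return BLOCKS_PER_FRAME[spreading_factor]

print(blocks_per_frame(4))  # 8 blocks per frame at the preferred setting
```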
Also in the present embodiment, base station 24 can support one or more BDCH 38 per sector 36 at any one time.
In the uplink direction, data traffic is carried from subscriber station 28 to base station 24 using a dedicated data channel (DDCH) 44. A separate DDCH 44 is set up between base station 24 and each subscriber station 28 with an active communications link 32.
Signaling traffic is carried from subscriber station 28 to base station 24, typically inband using DDCH 44.
Subscriber stations 28 measure their received SNR, or other metric of their ability to receive data from base station 24, and report this information back to base station 24 on a regular basis over their DDCH 44 using an upper layer signaling protocol. Subscriber stations 28 with high SNRs require less channel coding and can use higher order modulation than subscriber stations 28 with lower SNRs and thus, each block transmitted on BDCH 38 can use a different block type (i.e., a different packaging of FEC type, FEC rate, modulation, etc.). Figure 3 shows an example of a base station 24 in greater detail. For the sake of clarity, base station 24 shows an example of a single sector base station. However, multi-sector base stations 24 are also within the scope of the invention. Base station 24 comprises an antenna 50, or antennas, for receiving and transmitting radio-communications over communications link 32. In turn, antenna 50 is connected to a radio 52 and a modem 54. Modem 54 is connected to a microprocessor-router assembly 56, such as an Intel Corporation Pentium processor-based system using a conventional operating system such as Linux. Microprocessor-router assembly 56 is responsible for traffic scheduling of all subscriber stations 28 within its sector 36 and for radio resource management. It will be understood that assembly 56 can include multiple microprocessors, as desired, and/or that the router can be provided as a separate unit, if desired. The router within microprocessor-router assembly 56 is connected to a backhaul 58 in any suitable manner, which in turn connects base station 24 to a data network (not shown).
Referring now to Figure 4, an example of a subscriber station 28 is shown in greater detail. Subscriber station 28 comprises an antenna 60, or antennas, for receiving and transmitting radio-communications over communications link 32. In turn, antenna 60 is connected to a radio 64 and a modem 68, which in turn is connected to a microprocessor-assembly 72.
Microprocessor-assembly 72 can include, for example, a StrongARM processor manufactured by Intel Corporation, that performs a variety of functions, including implementing A/D-D/A conversion, filters, encoders, decoders, data compressors, de-compressors and/or packet disassembly. Microprocessor-assembly 72 also includes buffers 74 which store queued data traffic waiting for transport up communications link 32.
As shown in Figure 4, microprocessor-assembly 72 interconnects modem 68 and a data port 76, for connecting subscriber station 28 to a data client device (not shown), such as a personal computer, personal digital assistant or the like which is operable to use data received over communications link 32. Accordingly, microprocessor-assembly 72 is operable to process data between data port 76 and modem 68. Microprocessor-assembly 72 is also interconnected to at least one telephony port 80, for connecting subscriber station 28 to a telephony device (not shown) such as a telephone. In some cases, particularly in the case of a mobile subscriber station 28, the data client device can be integrated into the subscriber station 28.
Referring now to Figure 5, the logical architecture of a scheduler for a shared channel, such as a BDCH scheduler, is shown generally at 100. Scheduler 100 is responsible for assigning queued packets of data intended to be transmitted from base station 24 to subscriber stations 28 into the bitstream of BDCH 38 while maintaining any agreed-upon QoS terms for each subscriber station 28 and implementing a scheduling policy based upon a fairness factor provided by a network operator (and discussed in greater detail below) in order to provide varying degrees of prioritization of fairness between subscriber stations 28 and overall throughput on BDCH 38. A method for scheduler 100 to implement a scheduling policy and schedule queued packets is described further below with reference to Figure 6. In the current embodiment, scheduler 100 is a software program running within base station 24 on microprocessor-router assembly 56. However, other implementations, such as a hardware or firmware implementation, are also within the scope of the invention.
Data 102 bound for each subscriber station 28 is queued in traffic queues 104 before being sent downstream on BDCH 38 to various subscriber stations 28. Each flow of data 102 can contain a variety of different types of data, such as web pages, FTP data, streaming media, voice over IP data, or other data types as will occur to those of skill in the art. Traffic queues 104 for subscriber stations 28 are established for each subscriber station 28 that is known to, and is connected to, base station 24 over communication link 32. The example in Figure 5 shows a scenario with four traffic queues 104, each servicing a flow to a corresponding subscriber station 28 (e.g. traffic queue 104a holds traffic bound for subscriber station 28a, etc.). In the example shown in the Figure, traffic queue 104a has five queued packets, traffic queue 104b has no queued packets, traffic queue 104c has three queued packets, and traffic queue 104d has four queued packets.
Traffic queues 104a, 104c, and 104d therefore have backlogged (i.e., non-zero length) flows, while traffic queue 104b has no backlogged flow.
Besides holding traffic queues 104n, scheduler 100 stores a link quality parameter 108n, a negotiated service share parameter 112n, and a measured service share parameter 116n for each active subscriber station 28n with a traffic flow. Furthermore, scheduler 100 stores a fairness factor 120. As described in further detail below, there is at least one instance of fairness factor 120 per sector 36.
The link quality parameter 108n holds a suitable measurement of the reception quality experienced at subscriber station 28n. In a present embodiment, the value of link quality parameter 108 is the signal to noise ratio (SNR) estimate of at least one suitable channel,

$$\gamma = \frac{E_c}{N_t}$$

where Ec represents the per-chip BDCH channel signal energy at the antenna of subscriber station 28, and Nt represents the total noise received at antenna 60 of subscriber station 28, the total noise equaling the sum of the average noise density, interference from other interfering cells and sectors, plus multi-path interference. In the present embodiment, each subscriber station 28n periodically updates its value for link quality parameter 108 by transmitting its received SNR over an uplink channel such as DDCH 44.
The negotiated service share parameter 112n stores the value for an agreed-upon quality of service level (φneg) for subscriber station 28n. In the present embodiment, the negotiated service share parameter represents a guaranteed data rate (bits/s); however, other definitions of a negotiated service share, such as a maximum delay before transmitting a waiting packet, or a combination of guaranteed data rate and maximum delay, are within the scope of the invention.
Subscriber stations 28 with higher values for negotiated service share parameter 112 will receive better service than subscriber stations 28 with lower values for negotiated service share parameter 112. In the present embodiment, negotiated service share parameter 112 is negotiated between base station 24 and each subscriber station 28 when the subscriber station 28 connects to base station 24. However, the means of determining negotiated service share parameter 112 are not particularly limited. For example, negotiated service share parameter 112 can be determined by the service provider based upon the media type about to be transmitted to subscriber station 28, a monthly subscription agreement for subscriber station 28, a fee for service, etc. Other methods of determining negotiated service share parameter 112 will occur to those of skill in the art.
Measured service share parameter 116n stores the value of the measured service share (φmeas) for subscriber station 28n. The measured service share is the portion of BDCH 38 that has carried packets destined for that particular subscriber station 28n. Thus, a larger value for the measured service share parameter 116n indicates a higher average data rate delivered to that subscriber station 28n. Measured service share parameter 116n can be equated to the average bit rate over a fixed interval of time to a particular subscriber station 28n.
Finally, fairness factor 120 is an adjustable parameter (F) that represents the scheduling policy and this parameter controls the trade-off between individual flow fairness versus overall throughput over communications link 32. There can be either one instance of fairness factor 120 per sector 36, which is set by a network operator, or there can be one instance of fairness factor 120 per BDCH 38 (in cases where there is more than one BDCH 38 per sector 36).
In the current embodiment, F is normalized and ranges from zero to one. A setting of zero indicates a policy that schedules data flows 102 to maximize throughput on the downlink by scheduling data to subscriber stations 28 with the best SNRs, without regard to providing fairness between subscriber stations 28. A setting of one indicates a policy that schedules flows 102 to provide fairness between subscriber stations 28, so that all subscriber stations 28 at the same QoS level will receive the same data rate, regardless of their respective SNRs.
Once packets in a traffic queue 104 are scheduled by scheduler 100, they are moved into the blocks 128 of frames 124 of BDCH 38. Normally the spreading factor of a BDCH 38 is predetermined by a network operator and is fixed for every subscriber station 28 that is being serviced by a particular BDCH 38. In a current embodiment, a spreading factor of 4 is preferred (thus providing eight blocks 128 per frame 124).
As known to those of skill in the art, the structure of a block 128 can vary according to differences in modulation order, symbol repetition, etc. The number of bits of information carried in each block 128 can vary according to the block structure used. In a current embodiment, each block 128 can carry between 320 and 9,744 bits of information. Generally, blocks 128 with smaller information payloads use lower order modulation and higher symbol repetition to provide greater redundancy for noisy or otherwise poor communication links 32 experienced by subscriber stations 28 with low SNRs. Communication to subscriber stations 28 with better SNRs can employ a block structure that carries more bits of information. In a current embodiment, blocks 128 with different block structures can be carried in the same frame 124.
Referring now to Figure 6, a flowchart for a method of scheduling data transmissions for a BDCH 38 is shown beginning at 200. In the current embodiment, the method described below is run once per scheduled frame 124. However, the frequency of the method is not particularly limited and it can be run more or less frequently than described here, if desired.
At step 200, a scheduling policy is determined by setting a value for fairness factor 120 for sector 36 (between 0 and 1) in base station 24. As described above, a value of 0 indicates a scheduling policy where scheduler 100 prioritizes sector throughput over fairness between subscriber stations 28 and a value of 1 indicates a scheduling policy where scheduler 100 transmits queued packets with maximum fairness to subscriber stations 28.
It is contemplated that, in the majority of deployments, a fairness factor 120 set somewhere between the two bounds of this range (zero and one) will be preferable. For example, a setting of 0.5 would provide a policy which provides a reasonable degree of fairness to most subscriber stations 28, while still taking advantage of SNR variances to maximize downlink throughput. It should be noted that even when fairness factor 120 is set to one (indicating maximum fairness), subscriber stations 28 with different QoS levels 112 will be scheduled differently according to their QoS levels, as fairness factor 120 determines fairness in scheduling only with regard to different SNRs. In order to provide totally equal service, QoS levels 112 should be the same for all subscriber stations 28.
At step 204, scheduler 100 receives the link quality parameter 108 for each subscriber station 28 with backlogged traffic (i.e., any subscriber station 28n whose corresponding traffic queue 104n has at least one packet in it). Link quality parameter 108 can be updated on each iteration through the method, or can be updated at appropriate longer intervals. At step 208, scheduler 100 calculates a value for the QoS-based priority (qf) for each traffic queue 104 having at least one packet in the queue for this frame. qf is a positive number between 0 and 1, with higher values indicating a higher priority for that queue 104. A value for qf greater than 0.5 indicates that the data in queue 104 is lagging (i.e., the measured service share parameter 116 is less than the negotiated service share in negotiated service share parameter 112) and a qf value less than 0.5 indicates that that particular queue 104 is leading in service (i.e., the measured service share parameter 116 is greater than the negotiated service share in negotiated service share parameter 112). A higher value of qf implies that the data in queue 104 requires service sooner.
The value for qf is derived by first subtracting the measured service share φf_meas from the required service share φf_req, dividing the difference by two, then adding 0.5:

$$q_f = \frac{\phi_f^{req} - \phi_f^{meas}}{2} + 0.5$$

Both φf_req and φf_meas (described further below) are defined as ranging between zero and one, so qf is always a number between zero and one.
The required service share (φf_req) represents the amount of the service share required for negotiated service share parameter 112 to be met. For each frame that queue 104 is backlogged, the required service share increases. In the current embodiment φf_req is calculated as follows:

$$\phi_f^{req} = \frac{\bigl|\{\, n \in W_f : f \in B(n) \,\}\bigr| \; \phi_f}{\sum_{n \in W_f} \sum_{j \in B(n)} \phi_j}$$

where the numerator is the product of the number of frames, over a sliding interval of frames (Wf), during which flow 102 is backlogged and φf, the negotiated service share parameter 112 negotiated at setup. The absolute value notation is used to indicate the length of the vector, and B(n) is the set of indices of the flows which are backlogged during frame n. The denominator is the sum of the negotiated service shares of all backlogged queues 104 over the same sliding interval of frames.
The measured service share (φf_meas) is determined by taking all the bits transmitted over BDCH 38 for the selected queue 104 during a sliding window interval, and dividing that by all the bits transmitted over BDCH 38 during the same sliding window interval for every queue 104. In the current embodiment, φf_meas is calculated as follows:

$$\phi_f^{meas} = \frac{\sum_{n \in W_f} b_f(n)}{\sum_{n \in W_f} \sum_{j \in B(n)} b_j(n)}$$

where bf(n) is the number of bits transmitted from flow f during frame n.
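The following sketch shows one way the two service shares and the resulting QoS priority could be computed over a sliding window, as reconstructed from the formulas above; the data structures, argument names and the guards against empty denominators are assumptions made for illustration and are not taken from this document.

```python
# Illustrative sketch only: QoS-based priority q_f from the required and
# measured service shares over a sliding window of frames.
def qos_priority(f, window, negotiated_share, bits_sent, backlogged):
    """
    f                : index of the flow being scored
    window           : iterable of frame indices in the sliding interval
    negotiated_share : dict flow -> negotiated service share (phi)
    bits_sent        : dict (flow, frame) -> bits transmitted in that frame
    backlogged       : dict frame -> set of flows backlogged in that frame
    """
    # Required share: frames in which f was backlogged, weighted by its phi,
    # over the negotiated shares of every backlogged flow in the window.
    num_req = sum(1 for n in window if f in backlogged[n]) * negotiated_share[f]
    den_req = sum(negotiated_share[j] for n in window for j in backlogged[n])
    phi_req = num_req / den_req if den_req else 0.0

    # Measured share: bits sent to f over bits sent to all backlogged flows.
    num_meas = sum(bits_sent.get((f, n), 0) for n in window)
    den_meas = sum(bits_sent.get((j, n), 0) for n in window for j in backlogged[n])
    phi_meas = num_meas / den_meas if den_meas else 0.0

    # q_f > 0.5 means the flow is lagging its negotiated share.
    return (phi_req - phi_meas) / 2 + 0.5
```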
At step 212, scheduler 100 computes the throughput priority (tf) for each backlogged queue 104 for this frame. tf is a normalized interpretation of the SNR for a particular flow 102 (γf), yielding a value between 0 and 1, where 0 indicates the minimum SNR and 1 indicates the maximum possible SNR. Mathematically,

$$t_f = \frac{b(\gamma_f) - b(\gamma_{min})}{b(\gamma_{max}) - b(\gamma_{min})}$$

where b(x) is the number of bits in block 128 at SNR x, as determined by the block structure.
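By way of illustration, the sketch below normalizes a flow's SNR into tf through a hypothetical block-structure lookup b(SNR); the SNR thresholds and intermediate bit counts are invented for the example, and only the 320 to 9,744 bit payload range is taken from the text.

```python
# Illustrative sketch only: throughput priority t_f from a hypothetical
# block-structure table mapping SNR to bits per block 128.
import bisect

# Hypothetical (SNR dB threshold, bits per block) pairs, lowest to highest.
BLOCK_TABLE = [(-2.0, 320), (2.0, 1024), (6.0, 2560), (10.0, 5120), (14.0, 9744)]

def bits_per_block(snr_db):
    """Largest block payload receivable at the given SNR (floor lookup)."""
    thresholds = [t for t, _ in BLOCK_TABLE]
    i = bisect.bisect_right(thresholds, snr_db) - 1
    return BLOCK_TABLE[max(i, 0)][1]

def throughput_priority(snr_db):
    b_min = BLOCK_TABLE[0][1]
    b_max = BLOCK_TABLE[-1][1]
    return (bits_per_block(snr_db) - b_min) / (b_max - b_min)

print(throughput_priority(-2.0))  # 0.0 at the minimum supported SNR
print(throughput_priority(14.0))  # 1.0 at the maximum supported SNR
```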
At step 216, scheduler 100 computes the total priority function (pf) for each backlogged queue 104 by calculating a value derived from adding the QoS priority (qf, as determined at step 208) to the throughput value (tf, as determined at step 212), where each priority is multiplied by fairness factor 120 or an inverse of the fairness factor 120, respectively.
In the current embodiment, pf is calculated as follows:

$$p_f = F\,q_f + t_f\,(1 - F), \qquad f \in B$$
where B is the set of all backlogged queues 104.
In the above equation, F is the fairness factor 120. When fairness factor 120 is zero, the product of F and qf is zero and the term tf(1 - F), which is the throughput term, is maximal. When fairness factor 120 is one, the qf term is maximal and the throughput term tf(1 - F) is zero.
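A minimal sketch of this blending, assuming only the equation above; the function name and the sample values are illustrative.

```python
# Illustrative sketch only: total priority p_f as a fairness-weighted blend
# of the QoS priority q_f and the throughput priority t_f.
def total_priority(q_f, t_f, fairness_factor):
    assert 0.0 <= fairness_factor <= 1.0
    return fairness_factor * q_f + t_f * (1.0 - fairness_factor)

# F = 0 ranks purely by achievable throughput; F = 1 purely by QoS lag;
# an intermediate setting such as 0.5 trades off the two.
print(total_priority(q_f=0.8, t_f=0.2, fairness_factor=0.5))  # 0.5
```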
At step 220, all the backlogged queues 104 are sorted in descending order by the total priority function p' determined in step 216, so that p'(i) ≥ p'(j) for i < j.
At step 224, starting with the queue 104 with the highest total priority value p'(1), scheduler 100 computes the maximum number of blocks 128, referred to herein as m'(i), that will be allocated to this queue 104 by calculating the percentage of blocks 128 available to it, based upon the priority of this queue 104 relative to the sum of the total priority values of all backlogged queues. More specifically, scheduler 100 multiplies the number of blocks 128 in frame 124 (M) by the total priority function for this queue 104 (as determined in step 216), then divides the result by the sum of the total priority functions (also determined in step 216) of all backlogged queues. In the current embodiment the maximum number of blocks 128, m'(i), is determined as follows:

$$m'(i) = \operatorname{round}\!\left(\frac{M\,p'(i)}{\sum_{f \in B} p'(f)}\right)$$
where the round() operator rounds the number of scheduled blocks, m'(i), to the closest integer.
At step 228, starting with the highest priority queue 104 (p'(1)), scheduler 100 allocates up to m'(1) blocks 128. Fewer blocks 128 are allocated if the number of bits available for scheduling to queue 104 (p'(1)) requires fewer than m'(1) blocks 128. Step 228 is repeated for all backlogged queues 104 (p'(2), p'(3), etc.), until either all backlogged queues 104 are cleared or all the blocks 128 in frame 124 are scheduled. Once step 228 is completed, the method returns to step 204 to schedule the next frame 124. The method continues whenever data is present to be scheduled for transmission.
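Putting steps 220 through 228 together, the sketch below performs one scheduling pass over a frame of M blocks; the queue bookkeeping, the per-queue block payloads and the example figures are assumptions made for illustration rather than details taken from this document.

```python
# Illustrative sketch only: one scheduling pass over a frame of M blocks,
# combining the sort of step 220 with the proportional allocation of steps
# 224 and 228.
def schedule_frame(priorities, bits_waiting, bits_per_block, blocks_in_frame):
    """
    priorities     : dict queue -> total priority p_f (backlogged queues only)
    bits_waiting   : dict queue -> bits currently queued for that queue
    bits_per_block : dict queue -> payload of one block at that queue's SNR
    blocks_in_frame: M, number of blocks 128 in frame 124
    Returns dict queue -> number of blocks allocated in this frame.
    """
    total_p = sum(priorities.values())
    remaining = blocks_in_frame
    allocation = {}
    # Visit queues in descending order of total priority.
    for q in sorted(priorities, key=priorities.get, reverse=True):
        if remaining == 0:
            break
        # Maximum share of the frame this queue may claim.
        max_blocks = round(blocks_in_frame * priorities[q] / total_p)
        # Never allocate more blocks than the queued bits actually need,
        # nor more than remain in the frame.
        needed = -(-bits_waiting[q] // bits_per_block[q])  # ceiling division
        allocation[q] = min(max_blocks, needed, remaining)
        remaining -= allocation[q]
    return allocation

# Example with a spreading factor of 4 (eight blocks per frame).
print(schedule_frame(
    priorities={"a": 0.7, "c": 0.5, "d": 0.3},
    bits_waiting={"a": 6000, "c": 2000, "d": 9000},
    bits_per_block={"a": 2560, "c": 1024, "d": 320},
    blocks_in_frame=8,
))  # {'a': 3, 'c': 2, 'd': 2}
```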
While the embodiments discussed herein are directed to specific implementations of the invention, it will be understood that combinations, sub-sets and variations of the embodiments are within the scope of the invention. For example, in the current embodiment, only a single queue is scheduled for each traffic block 128. However, multiple queues can be scheduled in a traffic block 128 and are within the scope of the invention. If multiple subscriber stations 28 were scheduled in the same block 128, the block structure must be suitable to satisfy the SNR
requirements of all subscriber stations 28 scheduled. One way to ensure this, for example, is to only allocate subscribers that have an SNR that is greater than the first subscriber that is allocated to the block.
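A minimal sketch of that admission rule, assuming only the check described above; the function name and the use of SNR in dB are illustrative.

```python
# Illustrative sketch only: admit a further queue into an already-started
# block 128 only if its SNR exceeds that of the first queue placed in the
# block, so the block structure chosen for the first queue stays decodable.
def can_share_block(candidate_snr_db, first_snr_db):
    return candidate_snr_db > first_snr_db
```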
It is contemplated that scheduler 100 may run less frequently when the total amount of traffic in all queues 104 is below a certain threshold (representing a low amount of data traffic volume), so as to reduce the processing load on microprocessor-router assembly 56.
Alternatively, when the total amount of traffic is below a certain threshold, scheduler 100 may elect to suspend its scheduling method (as described with regards to Figure 6), and instead schedule traffic according to some other scheduling method, such as FIFO, in order to reduce latency for each queue 104. It is further contemplated that scheduler 100 can run more frequently than once per frame 124, where the scheduling interval is an integer multiple of the duration of a block 128. This will permit the scheduler to perform well in situations where the coherence time of the channel is lower, such as when the carrier frequency is higher or if there is a greater amount of motion of pedestrians or other traffic near the subscriber station or the base station.
It is also not intended that the present invention be limited to use with the particular radio based system described above, nor to radio-based systems in general and it is believed that the present invention can be advantageously employed with any system for scheduling data for transmission from a single node to one or more of a plurality of other nodes over a shared communications link. Use with optical, wireline and other shared communication links is contemplated.
The above-described embodiments of the invention are intended to be examples of the present invention and alterations and modifications may be effected thereto, by those of skill in the art, without departing from the scope of the invention which is defined solely by the claims appended hereto.
Claims (25)
- What is claimed is:
A method of scheduling data for transmission from a base station over a shared channel to a plurality of subscriber stations, said method comprising:
determining a fairness factor from a range of possible fairness factors, where a first end of said range indicates a policy of scheduling data with maximum fairness between said plurality of subscriber stations and a second end of said range indicates a policy of scheduling data for maximum data traffic over said shared channel;
for each subscriber station in said plurality of subscriber stations that said base station has data to be delivered to, determining a quality of service priority value, said quality of service priority value indicating a priority of said subscriber station relative to other subscriber stations in said plurality of subscriber stations;
determining a throughput value, said throughput value indicating the quantity of data to be moved to said each subscriber station if data is scheduled to said subscriber station;
determining a total priority value, said total priority value being the sum of said quality of service priority value adjusted according to said fairness factor and said throughput value conversely adjusted according to said fairness factor; and scheduling data on a portion of said shared channel to at least one subscriber station of said plurality of subscriber stations, beginning with the subscriber station with the highest total priority value. - 2. The method of claim 1, where said quality of service priority value is determined by comparing said negotiated service share to said measured service share, where said quality of service priority is higher when said negotiated service share is greater than said measured share and said quality of service priority value is lower when said measured share is greater than said negotiated service share.
- 3. The method of claim 1, where said throughput value is calculated by determining the largest block format receivable at said each subscriber station in said plurality of subscriber stations.
- 4. The method of claim 3, where said determining the largest block format receivable at said each subscriber station occurs by determining the signal to noise ratio received at said each subscriber station of said plurality of subscriber stations.
- 5. The method of claim 4, where said portion of said shared channel scheduled to said at least one subscriber station of said plurality of subscriber stations is proportional to said total priority value for said at least one subscriber station.
- 6. The method of claim 5, where said method occurs at least once per frame on said shared channel.
- 7. A system for transmitting data, comprising:
a plurality of subscriber stations, each having a processor, a modem, a radio and an antenna, each subscriber station operable to receive data traffic from a base station; and a base station having a processor, a modem, a radio and an antenna, and operable to receive said requests for a dedicated data channel from said subscriber stations and to schedule data for transmission to said plurality of subscriber stations over a shared channel in accordance with a scheduling policy that varies priorities between scheduling for fairness between subscriber stations and for improved throughput to said plurality of subscriber stations. - 8. The system of claim 7, where each subscriber station of said plurality of subscriber stations is operable to negotiate with said at least one base station a negotiated service share of said shared channel for data traffic bound to said each subscriber station.
- 9. The system of claim 8, where said base station determines a quality of service priority value for each subscriber station in said plurality of subscriber stations by comparing a negotiated service share for said each subscriber station to a measured service share for said each subscriber station.
- 10. The system of claim 9, where said base station determines a throughput value for said each subscriber station in said plurality of subscriber stations by determining the largest block format receivable at said each subscriber station in said plurality of subscriber stations.
- 11. The system of claim 10, where said base station determines the largest block format receivable at said each subscriber station occurs by determining the signal to noise ratio received at said each subscriber station of said plurality of subscriber stations.
- 12. The system of claim 11, where said each subscriber station of said plurality of subscriber stations transmits an indication of its signal to noise ratio to said base station.
- 13. A system for transmitting data, comprising:
a plurality of subscriber stations, each having a processor, a modem, a radio and an antenna, each subscriber station operable to receive data traffic from a base station; and at least one base station, said at least one base station having a processor, a modem, a radio and an antenna, operable to schedule data traffic to each of said plurality of subscriber stations over a shared channel in accordance with the method described in claim 1. - 14. The system of claim 13, where each subscriber station of said plurality of subscriber stations is operable to negotiate with said at least one base station a negotiated service share of said shared channel for data traffic bound to said each subscriber station.
- 15. The system of claim 14, where said base station determines a quality of service priority value for each subscriber station in said plurality of subscriber stations by comparing a negotiated service share for said each subscriber station to a measured service share for said each subscriber station.
- 16. The system of claim 15, where said base station determines a throughput value for said each subscriber station in said plurality of subscriber stations by determining the largest block format receivable at said each subscriber station in said plurality of subscriber stations.
- 17. The system of claim 16, where said base station determines the largest block format receivable at said each subscriber station occurs by determining the signal to noise ratio received at said each subscriber station of said plurality of subscriber stations.
- 18. The system of claim 17, where said each subscriber station of said plurality of subscriber stations transmits an indication of its signal to noise ratio to said base station.
- 19. The system of claim 18, where said base station schedules data traffic to each of said plurality of subscriber stations over a shared channel in accordance with the method described in claim 1 whenever the total amount of data traffic reaches at least a predetermined threshold and schedules data traffic to each of said plurality of subscriber stations in accordance with another method whenever the total amount of data traffic is below said predetermined threshold.
- 20. A base station having a microprocessor, a modem, a radio and an antenna, and operable to schedule data traffic to a plurality of subscriber stations over a shared channel in accordance with the method described in claim 1.
- 21. The base station of claim 20, where said base station determines a quality of service priority value for each subscriber station in said plurality of subscriber stations by comparing a negotiated service share for said each subscriber station to a measured service share for each subscriber station.
- 22. The base station of claim 21, where said base station determines a throughput value for said each subscriber station in said plurality of subscriber stations by determining the largest block format receivable at said each subscriber station in said plurality of subscriber stations.
- 23. The base station of claim 22, where said base station determines the largest block format receivable at said each subscriber station occurs by determining the signal to noise ratio received at said each subscriber station of said plurality of subscriber stations.
- 24. The base station of claim 23, where said each subscriber station of said plurality of subscriber stations transmits an indication of its signal to noise ratio to said base station.
- 25. The base station of claim 24, where said base station schedules data traffic to each of said plurality of subscriber stations over a shared channel in accordance with the method described in claim 1 whenever the total amount of data traffic reaches at least a predetermined threshold and schedules data traffic to each of said plurality of subscriber stations in accordance with another method whenever the total amount of data traffic is below said predetermined threshold.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002398755A CA2398755A1 (en) | 2002-08-19 | 2002-08-19 | Scheduler for a shared channel |
PCT/CA2003/001256 WO2004017650A2 (en) | 2002-08-19 | 2003-08-14 | Scheduler for a shared channel |
CNB038195674A CN1330153C (en) | 2002-08-19 | 2003-08-14 | Scheduler for a shared channel |
JP2004528217A JP2005536129A (en) | 2002-08-19 | 2003-08-14 | Shared communication channel scheduler |
AU2003257341A AU2003257341A1 (en) | 2002-08-19 | 2003-08-14 | Scheduler for a shared channel |
MXPA05002000A MXPA05002000A (en) | 2002-08-19 | 2003-08-14 | Scheduler for a shared channel. |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002398755A CA2398755A1 (en) | 2002-08-19 | 2002-08-19 | Scheduler for a shared channel |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2398755A1 true CA2398755A1 (en) | 2004-02-19 |
Family
ID=31722361
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002398755A Abandoned CA2398755A1 (en) | 2002-08-19 | 2002-08-19 | Scheduler for a shared channel |
Country Status (6)
Country | Link |
---|---|
JP (1) | JP2005536129A (en) |
CN (1) | CN1330153C (en) |
AU (1) | AU2003257341A1 (en) |
CA (1) | CA2398755A1 (en) |
MX (1) | MXPA05002000A (en) |
WO (1) | WO2004017650A2 (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4537390B2 (en) * | 2004-02-27 | 2010-09-01 | 三菱電機株式会社 | Base station and scheduling method |
KR100896399B1 (en) * | 2004-05-05 | 2009-05-08 | 콸콤 인코포레이티드 | An apparatus for scheduling transmission instances |
US8331377B2 (en) | 2004-05-05 | 2012-12-11 | Qualcomm Incorporated | Distributed forward link schedulers for multi-carrier communication systems |
US7715341B2 (en) * | 2005-01-28 | 2010-05-11 | Nortel Networks Limited | Optimized scheduling method for delay-sensitive traffic on high speed shared packet data channels |
CN100401847C (en) * | 2005-08-01 | 2008-07-09 | 华为技术有限公司 | Different service realizing method |
FI20055703A0 (en) * | 2005-12-28 | 2005-12-28 | Nokia Corp | Package controller in radio system |
US8228920B2 (en) * | 2006-05-01 | 2012-07-24 | Agere Systems Inc. | High-throughput scheduler with guaranteed fairness for wireless networks and other applications |
KR100911304B1 (en) * | 2007-06-18 | 2009-08-11 | 엘지전자 주식회사 | Method for transmitting data of radio bearer having priority in wirelss communication system |
CN103188165B (en) * | 2013-03-12 | 2016-06-01 | 神州数码网络(北京)有限公司 | A kind of router multipath exports method and the router of intelligent load equilibrium |
CN103152284B (en) * | 2013-03-18 | 2016-06-01 | 神州数码网络(北京)有限公司 | A kind of router multipath exports method and the router of intelligent load equilibrium |
CN103780532B (en) * | 2014-01-16 | 2018-01-05 | 广东省电信规划设计院有限公司 | Upgoing O FDM system subcarriers and power distribution method and system |
CN105939280A (en) * | 2015-08-27 | 2016-09-14 | 杭州迪普科技有限公司 | Flow scheduling method and device |
CN114945965A (en) * | 2020-01-20 | 2022-08-26 | 日本电信电话株式会社 | Secret reciprocal calculation system, secret normalization system, methods thereof, secret calculation device, and program |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1312885C (en) * | 2000-02-25 | 2007-04-25 | 艾利森电话股份有限公司 | Flow control between transmitter and receiver entities in communications system |
US7215678B1 (en) * | 2000-04-10 | 2007-05-08 | Switchcore, A.B. | Method and apparatus for distribution of bandwidth in a switch |
US6847629B2 (en) * | 2000-11-30 | 2005-01-25 | Qualcomm Incorporated | Method and apparatus for scheduling packet data transmissions in a wireless communication system |
IT1319611B1 (en) * | 2000-12-22 | 2003-10-20 | Siemens Inf & Comm Networks | PROCEDURE FOR SCHEDULING PERMITS OF DATA TRANSMISSION APACCHETTO ON RADIO CHANNELS SHARED BY MOBILE STATIONS IN SYSTEMS |
CN1347253A (en) * | 2001-11-23 | 2002-05-01 | 杨大成 | Algorithm for scheduling and distributing packet data service resources in mobile environment |
US20030135632A1 (en) * | 2001-12-13 | 2003-07-17 | Sophie Vrzic | Priority scheduler |
-
2002
- 2002-08-19 CA CA002398755A patent/CA2398755A1/en not_active Abandoned
-
2003
- 2003-08-14 CN CNB038195674A patent/CN1330153C/en not_active Expired - Fee Related
- 2003-08-14 MX MXPA05002000A patent/MXPA05002000A/en unknown
- 2003-08-14 AU AU2003257341A patent/AU2003257341A1/en not_active Abandoned
- 2003-08-14 WO PCT/CA2003/001256 patent/WO2004017650A2/en active Application Filing
- 2003-08-14 JP JP2004528217A patent/JP2005536129A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN1330153C (en) | 2007-08-01 |
WO2004017650A3 (en) | 2004-05-06 |
WO2004017650A2 (en) | 2004-02-26 |
CN1692663A (en) | 2005-11-02 |
MXPA05002000A (en) | 2005-05-27 |
AU2003257341A1 (en) | 2004-03-03 |
JP2005536129A (en) | 2005-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8194556B2 (en) | Latency-aware adaptive bandwidth request mechanism for real-time communication in WiMAX | |
AU2003246474B2 (en) | System, apparatus and method for uplink resource allocation | |
AU2001249379B2 (en) | Method and apparatus for controlling transmissions of a communications system | |
US20070026868A1 (en) | Licensed/unlicensed frequency management in a wireless wide-area network | |
US8180359B2 (en) | Data transmission method, system, base station and subscriber station, a data processing unit, computer program product, computer program distribution medium and baseband module | |
WO1997013388A1 (en) | System and method for flexible coding, modulation, and time slot allocation in a radio telecommunications network | |
KR20040012947A (en) | Adaptive scheduling for multi-carrier system | |
US20080205275A1 (en) | Communication Resource Scheduling | |
JP2010522452A (en) | Wireless communication system | |
CA2398755A1 (en) | Scheduler for a shared channel | |
US20040228296A1 (en) | Data transmission method, system, base station and subscriber station | |
US8379520B2 (en) | Group based packet scheduling algorithm | |
Mohr | Spectrum demand for systems beyond IMT‐2000 based on data rate estimates | |
Indumathi et al. | A cross-layer resource scheduling with QoS guarantees using adaptive token bank fair queuing algorithm in wireless networks | |
Kramling et al. | Interaction of power control and link adaptation for capacity enhancement and QoS assistance | |
Wang et al. | QoS. Services. in. Wireless. Metropolitan. Area. Networks | |
AU2007216880A1 (en) | System, apparatus and method for uplink resource allocation | |
Gutiérrez et al. | New prioritization function for user scheduling in OFDMA Systems | |
Niyato | Radio resource management in broadband wireless access networks | |
Chen et al. | A channel state aware uplink bandwidth allocation algorithm for wireless networks | |
GB2382945A (en) | Scheduling data packets for transmission over a shared channel | |
Goyal et al. | Optimal resource allocation policies for a multiaccess fading channel with a quality of service constraint |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FZDE | Discontinued |