GB2561677A - Improved contention mechanism for access to random resource units in an 802.11 Channel - Google Patents


Publication number: GB2561677A
Application number: GB1802654.2A
Authority: GB (United Kingdom)
Prior art keywords: data, cwo, access point, random, transmission
Legal status: Granted; Active
Other versions: GB2561677B, GB201802654D0
Inventors: Stéphane Baron, Patrice Nezou, Romain Guignard, Pascal Viger
Current Assignee: Canon Inc
Original Assignee: Canon Inc
Application filed by Canon Inc; priority to GB1802654.2A; priority claimed from GB1603515.6A (GB2540450B)
Publication of GB201802654D0 and GB2561677A; application granted; publication of GB2561677B

Classifications

    • H04W 74/008: Transmission of channel access control information with additional processing of random access related information at receiving side
    • H04W 74/006: Transmission of channel access control information in the downlink, i.e. towards the terminal
    • H04W 74/0833: Non-scheduled or contention based access, e.g. random access, ALOHA, CSMA, using a random access procedure
    • H04W 74/0808: Non-scheduled or contention based access using carrier sensing, e.g. as in CSMA
    • H04W 84/12: WLAN [Wireless Local Area Networks]

Abstract

In an 802.11ax network with an access point, a trigger frame offers random resource units to nodes for data uplink communication to the access point. To dynamically adapt the contention mechanism used by the nodes to access the random resource units, the AP updates a correcting TBD parameter at each new TXOP and includes the updated adjusting parameter in the trigger frame for the next TXOP. The nodes use the TBD parameter to generate a local random RU backoff value from a contention window range, for contending for access to the random resource units. The TBD parameter may directly impact the contention window size CWO or boundaries values of a selection range from which CWO is selected. The backoff value may be re-decided after the successful transmission of data to the access point. Further, the backoff value may be re-decided after a failed attempt to transmit data to the access point using a different method to when the transmission is successful.

Description

(56) Documents Cited: US 20150139209 A1
(62) Divided from Application No. 1603515.6 under section 15(9) of the Patents Act 1977
(58) Field of Search: INT CL H04W; Other: WPI, EPODOC
(71) Applicant(s): Canon Kabushiki Kaisha, 30-2 Shimomaruko 3-chome, Ohta-ku, Tokyo 1468501, Japan
(72) Inventor(s): Stephane Baron, Patrice Nezou, Romain Guignard, Pascal Viger
(74) Agent and/or Address for Service: Santarelli, 49, Avenue des Champs-Elysees, Paris 75008, France (including Overseas Departments and Territories)
(54) Title of the Invention: Improved contention mechanism for access to random resource units in an 802.11 Channel
Abstract Title: Contention mechanism for access to random resource units in an 802.11 channel
(57) Abstract: as reproduced above.
[Drawing sheets 1/15 to 15/15: Figures 1 to 18, described in the "Brief Description of the Drawings" section below. Recoverable text from the sheets includes, for Figure 15, the steps "Build a trigger frame with RUs information and new TBD value" and "Send the trigger frame(s), to cause one or more nodes to transmit in Random RUs during the TXOP", and, for Figure 18, the legend "CWO locally driven by the node".]
IMPROVED CONTENTION MECHANISM FOR ACCESS TO RANDOM RESOURCE UNITS IN AN 802.11 CHANNEL
FIELD OF THE INVENTION
The present invention relates generally to communication networks and more specifically to the contention-based access to channels and to the sub-channels (or Resource Units) into which they are split, which are available to a group of nodes.
The invention finds application in wireless communication networks, in particular in the access to an 802.11ax composite channel and to the OFDMA Resource Units forming, for instance, such an 802.11ax composite channel for uplink communication. One application of the method regards wireless data communication over a wireless communication network using Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), the network being accessible by a plurality of node devices.
BACKGROUND OF THE INVENTION
The IEEE 802.11 MAC standard defines the way wireless local area networks (WLANs) must work at the physical and medium access control (MAC) level. Typically, the 802.11 MAC (Medium Access Control) operating mode implements the well-known Distributed Coordination Function (DCF) which relies on a contention-based mechanism based on the so-called "Carrier Sense Multiple Access with Collision Avoidance" (CSMA/CA) technique.
The 802.11 medium access protocol standard or operating mode is mainly directed to the management of communication nodes waiting for the wireless medium to become idle so as to try to access the wireless medium.
The network operating mode defined by the IEEE 802.11ac standard provides very high throughput (VHT) by, among other means, moving from the 2.4GHz band, which is deemed to be highly susceptible to interference, to the 5GHz band, thereby allowing wider contiguous frequency channels of 80MHz to be used, two of which may optionally be combined to obtain a 160MHz channel as the operating band of the wireless network.
The 802.11ac standard also tweaks control frames such as the Request-To-Send (RTS) and Clear-To-Send (CTS) frames to allow for composite channels of varying and predefined bandwidths of 20, 40 or 80MHz, the composite channels being made of one or more channels that are contiguous within the operating band. The 160MHz composite channel is possible by the combination of two 80MHz composite channels within the 160MHz operating band. The control frames specify the channel width (bandwidth) for the targeted composite channel.
A composite channel therefore consists of a primary channel on which a given node performs EDCA backoff procedure to access the medium, and of at least one secondary channel, of for example 20MHz each.
EDCA defines traffic categories and four corresponding access categories that make it possible to handle differently high-priority traffic compared to low-priority traffic.
Implementation of EDCA in the nodes can be made using a plurality of traffic queues for serving data traffic at different priorities, with which a respective plurality of queue backoff engines is associated. The queue backoff engines are configured to compute respective queue backoff values when the associated traffic queue stores data to transmit.
Thanks to the EDCA backoff procedure, the node can thus access the communication network using contention type access mechanism based on the computed queue backoff values.
The primary channel is used by the communication nodes to sense whether or not the channel is idle, and the primary channel can be extended using the secondary channel or channels to form a composite channel.
Given a tree breakdown of the operating band into elementary 20MHz channels, some secondary channels are named tertiary or quaternary channels.
In 802.11ac, all the transmissions, and thus the possible composite channels, include the primary channel. This is because the nodes perform full Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) and Network Allocation Vector (NAV) tracking on the primary channel only. The other channels are assigned as secondary channels, on which the nodes have only capability of CCA (clear channel assessment), i.e. detection of an idle or busy state/status of said secondary channel.
An issue with the use of composite channels as defined in 802.11n or 802.11ac (or 802.11ax) is that the 802.11n and 802.11ac-compliant nodes (i.e. HT nodes, standing for High Throughput nodes) and the other legacy nodes (i.e. non-HT nodes compliant only with, for instance, 802.11a/b/g) have to co-exist within the same wireless network and thus have to share the 20MHz channels.
To cope with this issue, the 802.11n and 802.11ac standards provide the possibility to duplicate control frames (e.g. RTS/CTS or CTS-to-Self or ACK frames to acknowledge correct or erroneous reception of the sent data) on each 20MHz channel in an 802.11a legacy format (called "non-HT") to establish a protection of the requested TXOP over the whole composite channel.
This ensures that any legacy 802.11a node that uses any of the 20MHz channels involved in the composite channel is aware of on-going communications on that 20MHz channel. As a result, the legacy node is prevented from initiating a new transmission until the end of the current composite channel TXOP granted to an 802.11n/ac node.
As originally proposed by 802.11n, a duplication of the conventional 802.11a or "non-HT" transmission is provided to allow the two identical 20MHz non-HT control frames to be sent simultaneously on both the primary and secondary channels forming the used composite channel.
This approach has been widened for 802.11ac to allow duplication over the channels forming an 80MHz or 160MHz composite channel. In the remainder of the present document, the "duplicated non-HT frame" or "duplicated non-HT control frame" or "duplicated control frame" means that the node device duplicates the conventional or "non-HT" transmission of a given control frame over the secondary 20MHz channel(s) of the (40MHz, 80MHz or 160MHz) operating band.
In practice, to request a composite channel (equal to or greater than 40MHz) for a new TXOP, an 802.11n/ac node does an EDCA backoff procedure in the primary 20MHz channel as mentioned above. In parallel, it performs a channel sensing mechanism, such as a Clear-Channel-Assessment (CCA) signal detection, on the secondary channels to detect the secondary channel or channels that are idle (channel state/status is “idle”) during a PIFS interval before the start of the new TXOP (i.e. before any queue backoff counter expires).
More recently, the Institute of Electrical and Electronics Engineers (IEEE) officially approved the 802.11ax task group, as the successor of 802.11ac. The primary goal of the 802.11ax task group is to seek an improvement in data speed to wireless communicating devices used in dense deployment scenarios.
Recent developments in the 802.11ax standard sought to optimize usage of the composite channel by multiple nodes in a wireless network having an access point (AP). Indeed, typical content involves large amounts of data, for instance related to high-definition audio-visual real-time and interactive content. Furthermore, it is well known that the performance of the CSMA/CA protocol used in the IEEE 802.11 standard deteriorates rapidly as the number of nodes and the amount of traffic increase, i.e. in dense WLAN scenarios.
In this context, multi-user transmission has been considered to allow multiple simultaneous transmissions to/from different users in both downlink and uplink directions. In the uplink to the AP, multi-user transmissions can be used to mitigate the collision probability by allowing multiple nodes to simultaneously transmit.
To actually perform such multi-user transmission, it has been proposed to split a granted 20MHz channel into sub-channels, also referred to as resource units (RUs), that are shared in the frequency domain by multiple users, based for instance on Orthogonal Frequency Division Multiple Access (OFDMA) technique. Each RU may be defined by a number of tones, the 20MHz channel containing up to 242 usable tones.
OFDMA is a multi-user variation of OFDM which has emerged as a new key technology to improve efficiency in advanced infrastructure-based wireless networks. It combines OFDM on the physical layer with Frequency Division Multiple Access (FDMA) on the MAC layer, allowing different subcarriers to be assigned to different nodes in order to increase concurrency. Adjacent sub-carriers often experience similar channel conditions and are thus grouped to sub-channels: an OFDMA sub-channel or RU is thus a set of sub-carriers.
The multi-user feature of OFDMA allows the AP to assign different RUs to different nodes in order to increase concurrency. This may help to reduce contention and collisions inside 802.11 networks.
As currently envisaged, the granularity of such OFDMA sub-channels is finer than the original 20MHz channel band. Typically, a 2MHz or 5MHz sub-channel may be contemplated as a minimal width, therefore defining for instance 9 sub-channels or resource units within a single 20MHz channel.
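As a rough illustration of this granularity (a hedged sketch, not part of the patent: the 26/52/106/242-tone RU sizes below follow the 802.11ax tone plan, and the helper name is ours):

```python
# Illustrative only: how many RUs of a given size fit in one 20 MHz channel,
# assuming 242 usable tones per 20 MHz channel as stated above.
RU_TONES = {"RU26": 26, "RU52": 52, "RU106": 106, "RU242": 242}

def max_rus_in_20mhz(ru_type: str, usable_tones: int = 242) -> int:
    """Rough upper bound on the number of RUs of a given size per 20 MHz channel."""
    return usable_tones // RU_TONES[ru_type]

print(max_rus_in_20mhz("RU26"))   # 9, matching the "9 sub-channels" example above
print(max_rus_in_20mhz("RU106"))  # 2
```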
To support multi-user uplink, i.e. uplink transmission to the 802.11ax access point (AP) during the granted TxOP, the 802.11ax AP has to provide signalling information for the legacy nodes (non-802.11ax nodes) to set their NAV and for the 802.11ax nodes to determine the allocation of the resource units RUs.
It has been proposed for the AP to send a trigger frame (TF) to the 802.11ax nodes to trigger uplink communications.
The document IEEE 802.11-15/0365 proposes that a ‘Trigger’ frame (TF) is sent by the AP to solicit the transmission of uplink (UL) Multi-User (OFDMA) PPDU from multiple nodes. In response, the nodes transmit UL MU (OFDMA) PPDU as immediate responses to the Trigger frame. All transmitters can send data at the same time, but using disjoint sets of RUs (i.e. of frequencies in the OFDMA scheme), resulting in transmissions with less interference.
The bandwidth or width of the targeted composite channel is signalled in the TF frame, meaning that the 20, 40, 80 or 160 MHz value is added. The TF frame is sent over the primary 20MHz channel and duplicated (replicated) on each of the other 20MHz channels forming the targeted composite channel, if appropriate. As described above for the duplication of control frames, it is expected that every nearby legacy node (non-HT or 802.11ac node) receiving the TF on its primary channel then sets its NAV to the value specified in the TF frame. This prevents these legacy nodes from accessing the channels of the targeted composite channel during the TXOP.
A resource unit RU can be reserved for a specific node, in which case the AP indicates, in the TF, the node to which the RU is reserved. Such RU is called Scheduled RU. The indicated node does not need to perform contention on accessing a scheduled RU reserved to it.
In order to better improve the efficiency of the system with regard to un-managed traffic to the AP (for example, uplink management frames from associated nodes, unassociated nodes intending to reach an AP, or simply unmanaged data traffic), the document IEEE 802.11-15/0604 proposes a new trigger frame (TF-R) on top of the previous UL MU procedure, allowing random access onto the OFDMA TXOP. In other words, the resource unit RU can be randomly accessed by more than one node (of the group of nodes registered with the AP). Such RU is called Random RU and is indicated as such in the TF. Random RUs may serve as a basis for contention between nodes willing to access the communication medium for sending data.
A random resource selection procedure is defined in document IEEE 802.11-15/1105. According to this procedure, each 802.11ax node maintains a dedicated backoff engine, referred to below as the OFDMA or RU (for resource unit) backoff engine, to contend for access to the random RUs. The dedicated OFDMA or RU backoff, also called OBO, is randomly assigned in a contention window range [0, CWO] wherein CWO is the contention window size defined in a range [CWOmin, CWOmax].
Once the OFDMA or RU backoff value reaches zero in a node (it is decremented at each new TF-R frame by the number of random RUs defined therein for instance), the node becomes eligible for RU access and thus randomly selects one RU from among all the random RUs defined in the received trigger frame. It then uses the selected RU to transmit data of at least one of the traffic queues.
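The following sketch illustrates how such an OBO value could be drawn and maintained; the function names are ours, and the doubling/reset policy for CWO is an assumption of conventional backoff behaviour rather than something mandated by the cited documents:

```python
import random

def draw_obo(cwo: int) -> int:
    # The RU backoff value OBO is drawn uniformly in the contention window range [0, CWO].
    return random.randint(0, cwo)

def next_cwo_after_failure(cwo: int, cwo_max: int) -> int:
    # Assumed policy: enlarge CWO (binary exponential style) after a failed RU transmission.
    return min(2 * cwo + 1, cwo_max)

def next_cwo_after_success(cwo_min: int) -> int:
    # Assumed policy: reset CWO to its minimum after a successful RU transmission.
    return cwo_min
```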
The management of the OFDMA or RU backoff engine is not optimal.
SUMMARY OF INVENTION
As the nodes access the RUs on a random basis, there is a high risk that nodes collide on the same RU, that some RUs are not used at all, or both.
For instance, there is no guarantee that the Scheduled and Random RUs will be used by the nodes.
It is particularly the case for the Random RUs because any rule used by the nodes to select a Random RU may result in having RUs not allocated at all to any node. Also, the AP does not know whether or not some nodes need bandwidth. In addition, some RUs provided by the AP may not be accessible for some nodes because of hidden legacy nodes.
It is also the case for the Scheduled RUs (which are reserved by the AP because some nodes have explicitly requested bandwidth) if the specified nodes do not send data.
As a result, the channel bandwidth is not optimally used.
On the other hand, depending on the contention procedure used by the nodes to randomly access the Random RUs, it may happen that nodes select the same RUs and thus collide.
To reduce the risk, a desired access rule may be deployed over the nodes to drive the random access as desired. For instance, the same mapping may be implemented in each node to map a local random value, such as the conventional local backoff counter or the OBO value, onto the RU having the same index value in the composite channel (for instance based on an ordering index of the RUs within the composite channel), which mapped RU is thus selected for access by the node.
However, the use of an access rule may not be satisfactory to efficiently reduce the risk, in particular because the network evolves: the number of nodes registered with the AP evolves over time, as does the number of nodes having data to upload to the AP, etc. Due to such network evolution, an access rule relevant at a first time may prove not to be relevant at a later time.
The present invention has been devised to overcome one or more of the foregoing limitations, in particular to provide a communication device as defined in Claim 1, a communication method as defined in Claim 9 and a non-transitory computer-readable medium as defined in Claim 10. They provide more efficient usage of the network bandwidth (of the RUs) with limited risks of collisions.
The invention can be applied to any wireless network in which an access point provides the registered nodes with a plurality of sub-channels (or resource units) forming a communication channel and that can be accessed by the nodes using a contention scheme. The communication channel is the elementary channel on which the nodes perform sensing to determine whether it is idle or busy.
The invention is especially suitable for data uplink transmission to the AP of an IEEE 802.11ax network (and future versions).
At least parts of the methods according to the invention may be computer implemented. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a circuit, module or system. Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible carrier medium may comprise a storage medium such as a hard disk drive, a magnetic tape device or a solid state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
BRIEF DESCRIPTION OF THE DRAWINGS
Further advantages of the present invention will become apparent to those skilled in the art upon examination of the drawings and detailed description. Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings.
Figure 1 illustrates a typical wireless communication system in which embodiments of the invention may be implemented;
Figure 2 is a timeline schematically illustrating a conventional communication mechanism according to the IEEE 802.11 standard;
Figures 3a, 3b and 3c illustrate the IEEE 802.11e EDCA involving access categories;
Figure 4 illustrates 802.11ac channel allocation that supports channel bandwidths of 20 MHz, 40 MHz, 80 MHz or 160 MHz as known in the art;
Figure 5 illustrates an example of an 802.11ax uplink OFDMA transmission scheme, wherein the AP issues a Trigger Frame for reserving a transmission opportunity of OFDMA sub-channels (resource units) on an 80 MHz channel as known in the art;
Figure 6 shows a schematic representation of a communication device or station in accordance with embodiments of the present invention;
Figure 7 shows a schematic representation of a wireless communication device in accordance with embodiments of the present invention;
Figure 8 illustrates an exemplary transmission block of a communication node according to embodiments of the invention;
Figure 9 illustrates, using a flowchart, main steps performed by a MAC layer of a node, when receiving new data to transmit, in first embodiments of the invention;
Figure 10 illustrates, using a flowchart, main steps for setting an RU backoff parameter, namely contention window size CWO for OFDMA contention, in first embodiments of the invention;
Figure 11 illustrates, using a flowchart, steps of accessing the medium based on the conventional EDCA medium access scheme, in first embodiments of the invention;
Figure 12 illustrates, using a flowchart, exemplary steps for updating RU backoff parameters and value upon receiving a positive or negative acknowledgment of a multi-user OFDMA transmission, in first embodiments of the invention;
Figure 13 illustrates, using a flowchart, first exemplary embodiments of accessing the medium based on the OFDMA medium access scheme, and of locally updating the RU backoff parameters, such as the contention window size CWO, when a new trigger frame is received;
Figure 14 illustrates, using a flowchart, second exemplary embodiments of accessing the medium based on the OFDMA medium access scheme, and of updating the RU backoff parameters, either locally or based on a received TBD parameter, when a new trigger frame is received;
Figure 15 illustrates, using a flowchart, steps of a wireless communication method at the access point;
Figure 15a illustrates a variant of the process of Figure 15;
Figure 16 illustrates an exemplary format for an Information Element dedicated to the transmission of parameter values from the AP to the nodes in the first exemplary embodiments of the invention;
Figure 17 illustrates, using a flowchart, third exemplary embodiments of accessing the medium based on the OFDMA medium access scheme, and of updating the RU backoff parameters, either locally or based on a received TBD parameter, when a new trigger frame is received; and
Figure 18 illustrates, using curves obtained by simulation, the evolution of a random-RU efficiency metric depending on the number of the nodes contending for accessing the random RUs.
DETAILED DESCRIPTION
The invention will now be described by means of specific non-limiting exemplary embodiments and by reference to the figures.
Figure 1 illustrates a communication system in which several communication nodes (or stations) 101-107 exchange data frames over a radio transmission channel 100 of a wireless local area network (WLAN), under the management of a central station, or access point (AP) 110. The radio transmission channel 100 is defined by an operating frequency band constituted by a single channel or a plurality of channels forming a composite channel.
Access to the shared radio medium to send data frames is based on the CSMA/CA technique, for sensing the carrier and avoiding collision by separating concurrent transmissions in space and time.
Carrier sensing in CSMA/CA is performed by both physical and virtual mechanisms. Virtual carrier sensing is achieved by transmitting control frames to reserve the medium prior to transmission of data frames.
Next, a source or transmitting node first attempts, through the physical mechanism, to sense a medium that has been idle for at least one DIFS (standing for DCF InterFrame Spacing) time period, before transmitting data frames.
However, if it is sensed that the shared radio medium is busy during the DIFS period, the source node continues to wait until the radio medium becomes idle.
To access the medium, the node starts a countdown backoff counter designed to expire after a number of timeslots, chosen randomly in the contention window range [0, CW], CW (integer) being also referred to as the Contention Window size and defining the upper boundary of the backoff selection interval (contention window range). This backoff mechanism or procedure is the basis of the collision avoidance mechanism that defers the transmission time for a random interval, thus reducing the probability of collisions on the shared channel. After the backoff time period, the source node may send data or control frames if the medium is idle.
One problem of wireless data communications is that it is not possible for the source node to listen while sending, thus preventing the source node from detecting data corruption due to channel fading or interference or collision phenomena. A source node remains unaware of the corruption of the data frames sent and continues to transmit the frames unnecessarily, thus wasting access time.
The Collision Avoidance mechanism of CSMA/CA thus provides positive acknowledgement (ACK) of the sent data frames by the receiving node if the frames are received with success, to notify the source node that no corruption of the sent data frames occurred.
The ACK is transmitted at the end of reception of the data frame, immediately after a period of time called Short InterFrame Space (SIFS).
If the source node does not receive the ACK within a specified ACK timeout or detects the transmission of a different frame on the channel, it may infer data frame loss. In that case, it generally reschedules the frame transmission according to the above-mentioned backoff procedure.
To improve the Collision Avoidance efficiency of CSMA/CA, a four-way handshaking mechanism is optionally implemented. One implementation is known as the RTS/CTS exchange, defined in the 802.11 standard.
The RTS/CTS exchange consists in exchanging control frames to reserve the radio medium prior to transmitting data frames during a transmission opportunity called TXOP in the 802.11 standard as described below, thus protecting data transmissions from any further collisions.
Figure 2 illustrates the behaviour of three groups of nodes during a conventional communication over a 20 MHz channel of the 802.11 medium: transmitting or source node 20, receiving or addressee or destination node 21 and other nodes 22 not involved in the current communication.
Upon starting the backoff process 270 prior to transmitting data, a station, e.g. source node 20, initializes its backoff time counter to a random value as explained above. The backoff time counter is decremented once every time slot interval 260 for as long as the radio medium is sensed idle (countdown starts from T0, 23 as shown in the Figure).
Channel sensing is for instance performed using Clear-Channel-Assessment (CCA) signal detection, which is a WLAN carrier sense mechanism defined in the IEEE 802.11-2007 standard.
The time unit in the 802.11 standard is the slot time, called the 'aSlotTime' parameter. This parameter is specified by the PHY (physical) layer (for example, aSlotTime is equal to 9µs for the 802.11n standard). All dedicated space durations (e.g. backoff) add multiples of this time unit to the SIFS value.
The backoff time counter is ‘frozen’ or suspended when a transmission is detected on the radio medium channel (countdown is stopped at T1, 24 for other nodes 22 having their backoff time counter decremented).
The countdown of the backoff time counter is resumed or reactivated when the radio medium is sensed idle anew, after a DIFS time period. This is the case for the other nodes at T2, 25 as soon as the transmission opportunity TXOP granted to source node 20 ends and the DIFS period 28 elapses. DIFS 28 (DCF inter-frame space) thus defines the minimum waiting time for a source node before trying to transmit some data. In practice, DIFS = SIFS + 2 * aSlotTime.
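For illustration, the interframe-space arithmetic just described gives the following, using the values cited elsewhere in this document for the 802.11n PHY (a simple numeric check, not a normative definition):

```python
A_SLOT_TIME_US = 9   # aSlotTime for the 802.11n PHY, as cited above
SIFS_US = 16         # SIFS for the 802.11n PHY, as cited in this document

DIFS_US = SIFS_US + 2 * A_SLOT_TIME_US
print(DIFS_US)       # 34 microseconds
```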
When the backoff time counter reaches zero (26) at T1, the timer expires, the corresponding node 20 requests access onto the medium in order to be granted a TXOP, and the backoff time counter is reinitialized 29 using a new random backoff value.
In the example of the Figure implementing the RTS/CTS scheme, at T1, the source node 20 that wants to transmit data frames 230 sends a special short frame or message acting as a medium access request to reserve the radio medium, instead of the data frames themselves, just after the channel has been sensed idle for a DIFS or after the backoff period as explained above.
The medium access request is known as a Request-To-Send (RTS) message or frame. The RTS frame generally includes the addresses of the source and receiving nodes (destination 21) and the duration for which the radio medium is to be reserved for transmitting the control frames (RTS/CTS) and the data frames 230.
Upon receiving the RTS frame and if the radio medium is sensed as being idle, the receiving node 21 responds, after a SIFS time period 27 (for example, SIFS is equal to 16 µs for the 802.11n standard), with a medium access response, known as a Clear-To-Send (CTS) frame. The CTS frame also includes the addresses of the source and receiving nodes, and indicates the remaining time required for transmitting the data frames, computed from the time point at which the CTS frame starts to be sent.
The CTS frame is considered by the source node 20 as an acknowledgment of its request to reserve the shared radio medium for a given time duration.
Thus, the source node 20 expects to receive a CTS frame 220 from the receiving node 21 before sending data 230 using unique and unicast (one source address and one addressee or destination address) frames.
The source node 20 is thus allowed to send the data frames 230 upon correctly receiving the CTS frame 220 and after a new SIFS time period 27, in a transmission opportunity that is thus granted to it thanks to the RTS/CTS exchange.
An ACK frame 240 is sent by the receiving node 21 after having correctly received the data frames sent, after a new SIFS time period 27.
If the source node 20 does not receive the ACK 240 within a specified ACK Timeout (generally within the TXOP), or if it detects the transmission of a different frame on the radio medium, it reschedules the frame transmission using the backoff procedure anew.
Since the RTS/CTS four-way handshaking mechanism 210/220 is optional in the 802.11 standard, it is possible for the source node 20 to send data frames 230 immediately upon its backoff time counter reaching zero (i.e. at T1).
The requested time duration for transmission defined in the RTS and CTS frames defines the length of the granted transmission opportunity TXOP, and can be read by any listening node (other nodes 22 in Figure 2) in the radio network.
To do so, each node has in memory a data structure known as the network allocation vector or NAV to store the time duration for which it is known that the medium will remain busy. When listening to a control frame (RTS 210 or CTS 220) not addressed to itself, a listening node 22 updates its NAVs (NAV 255 associated with RTS and NAV 250 associated with CTS) with the requested transmission time duration specified in the control frame. The listening nodes 22 thus keep in memory the time duration for which the radio medium will remain busy.
Access to the radio medium for the other nodes 22 is consequently deferred 30 by suspending 31 their associated timer and then by later resuming 32 the timer when the NAV has expired.
This prevents the listening nodes 22 from transmitting any data or control frames during that period.
It is possible that receiving node 21 does not receive RTS frame 210 correctly due to a message/frame collision or to fading. Even if it does receive it, receiving node 21 may not always respond with a CTS 220 because, for example, its NAV is set (i.e. another node has already reserved the medium). In any case, the source node 20 enters into a new backoff procedure.
The RTS/CTS four-way handshaking mechanism is very efficient in terms of system performance, in particular with regard to large frames since it reduces the length of the messages involved in the contention process.
In detail, assuming perfect channel sensing by each communication node, collision may only occur when two (or more) frames are transmitted within the same time slot after a DIFS 28 (DCF inter-frame space) or when their own back-off counter has reached zero nearly at the same time T1. If both source nodes use the RTS/CTS mechanism, this collision can only occur for the RTS frames. Fortunately, such collision is early detected by the source nodes since it is quickly determined that no CTS response has been received.
Figures 3a, 3b and 3c illustrate the IEEE 802.11e EDCA involving access categories, in order to improve the quality of service (QoS). In the original DCF standard, a communication node includes only one transmission queue/buffer. However, since a subsequent data frame cannot be transmitted until the transmission/retransmission of a preceding frame ends, the delay in transmitting/retransmitting the preceding frame prevents the communication from having QoS.
The IEEE 802.11e has overturned this deficiency in providing quality of service (QoS) enhancements to make more efficient use of the wireless medium.
This standard relies on a coordination function, called hybrid coordination function (HCF), which has two modes of operation: enhanced distributed channel access (EDCA) and HCF controlled channel access (HCCA).
EDCA enhances or extends functionality of the original access DCF method: EDCA has been designed for support of prioritized traffic similar to DiffServ (Differentiated Services), which is a protocol for specifying and controlling network traffic by class so that certain types of traffic get precedence.
EDCA is the dominant channel access mechanism in WLANs because it features a distributed and easily deployed mechanism.
The above deficiency of failing to have satisfactory QoS due to delay in frame retransmission has been solved with a plurality of transmission queues/buffers.
QoS support in EDCA is achieved with the introduction of four Access Categories (ACs), and thereby of four corresponding transmission/traffic queues or buffers (310). Of course, another number of traffic queues may be contemplated.
Each AC has its own traffic queue/buffer to store corresponding data frames to be transmitted on the network. The data frames, namely the MSDUs, incoming from an upper layer of the protocol stack are mapped onto one of the four AC queues/buffers and thus input in the mapped AC buffer.
Each AC also has its own set of channel access parameters, or "backoff parameters", and is associated with a priority value, thus defining traffic of higher or lower priority of MSDUs. Thus, there is a plurality of traffic queues for serving data traffic at different priorities.
That means that each AC (and corresponding buffer) acts as an independent DCF contending entity including its respective queue backoff engine 311. Thus, each queue backoff engine 311 is associated with a respective traffic queue for computing a respective queue backoff value to be used to contend for access to at least one communication channel in order to transmit data stored in the respective traffic queue.
It results that the ACs within the same communication node compete with each other to access the wireless medium and to obtain a transmission opportunity, using the contention mechanism as explained above with reference to Figure 2 for example.
Service differentiation between the ACs is achieved by setting different queue backoff parameters between the ACs, such as different contention window parameters (CWmin, CWmax), different arbitration interframe spaces (AIFS), and different transmission opportunity duration limits (TXOP_Limit).
With EDCA, high priority traffic has a higher chance of being sent than low priority traffic: a node with high priority traffic waits a little less (low CW) before it sends its packet, on average, than a node with low priority traffic.
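The sketch below shows how such differentiation works in practice; the numeric values are the EDCA defaults commonly quoted for an 802.11a/g/n PHY and are given here only as an assumption for illustration (the actual parameter set is advertised by the AP):

```python
SIFS_US, SLOT_US = 16, 9

# Access category -> (CWmin, CWmax, AIFSN); assumed typical defaults, for illustration.
EDCA_PARAMS = {
    "AC_VO": (3, 7, 2),      # voice, highest priority
    "AC_VI": (7, 15, 2),     # video
    "AC_BE": (15, 1023, 3),  # best effort
    "AC_BK": (15, 1023, 7),  # background, lowest priority
}

def aifs_us(ac: str) -> int:
    # AIFS[AC] = SIFS + AIFSN[AC] * aSlotTime
    aifsn = EDCA_PARAMS[ac][2]
    return SIFS_US + aifsn * SLOT_US

print({ac: aifs_us(ac) for ac in EDCA_PARAMS})  # AC_VO waits less than AC_BK
```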
The four AC buffers (310) are shown in Figure 3a.
Buffers AC3 and AC2 are usually reserved for real-time applications (e.g., voice or video transmission). They have, respectively, the highest priority and the last-but-one highest priority.
Buffers AC1 and AC0 are reserved for best effort and background traffic. They have, respectively, the last-but-one lowest priority and the lowest priority.
Each data unit, MSDU, arriving at the MAC layer from an upper layer (e.g. Link layer) with a priority is mapped into an AC according to mapping rules. Figure 3b shows an example of mapping between the eight priorities of traffic class (User Priorities or UP, 0-7 according to IEEE 802.1D) and the four ACs. The data frame is then stored in the buffer corresponding to the mapped AC.
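A minimal sketch of such a UP-to-AC mapping is given below; this particular table follows the usual IEEE 802.1D interpretation and is an assumption, since the patent's Figure 3b may differ in detail:

```python
# Assumed UP -> AC mapping (illustrative, in the spirit of Figure 3b).
UP_TO_AC = {
    1: "AC_BK", 2: "AC_BK",  # background
    0: "AC_BE", 3: "AC_BE",  # best effort
    4: "AC_VI", 5: "AC_VI",  # video
    6: "AC_VO", 7: "AC_VO",  # voice
}

def map_msdu_to_queue(user_priority: int) -> str:
    # Returns the AC buffer in which an MSDU with this User Priority is stored.
    return UP_TO_AC[user_priority]
```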
When the backoff procedure for a traffic queue (or an AC) ends, the MAC controller (reference 704 in Figure 7 below) of the transmitting node transmits a data frame from this traffic queue to the physical layer for transmission onto the wireless communication network.
Since the ACs operate concurrently in accessing the wireless medium, it may happen that two ACs of the same communication node have their backoff ending simultaneously. In such a situation, a virtual collision handler (312) of the MAC controller operates a selection of the AC having the highest priority (as shown in Figure 3b) between the conflicting ACs, and gives up transmission of data frames from the ACs having lower priorities.
Then, the virtual collision handler commands those ACs having lower priorities to start again a backoff operation using an increased CW value.
Figure 3c illustrates configurations of a MAC data frame and a QoS control field (300) included in the header of the IEEE 802.11e MAC frame.
The MAC data frame also includes, among other fields, a Frame Control header (301) and a frame body (302).
As represented in the Figure, the QoS control field 300 is made of two bytes, including the following information items:
- Bits B0 to B3 are used to store a traffic identifier (TID) which identifies a traffic stream. The traffic identifier takes the value of the transmission priority value (User Priority UP, value between 0 and 7 - see Figure 3b) corresponding to the data conveyed by the data frame or takes the value of a traffic stream identifier (TSID, value between 8 and 15) for other data streams;
- Bit B4 is set to 1 and is not detailed here;
- Bits B5 and B6 define the ACK policy subfield which specifies the acknowledgment policy associated with the data frame. This subfield is used to determine how the data frame has to be acknowledged by the receiving node: normal ACK, no ACK or Block ACK.
“Normal ACK” refers to the case where the transmitting node or source node requires a conventional acknowledgment to be sent (by the receiving node) for each data frame, after a short interframe space (SIFS) period following the transmission of the data frame.
“No ACK” refers to the case where the source node does not require acknowledgment. That means that the receiving node takes no action upon receipt of the data frame.
“Block ACK” refers to an acknowledgment per block of MSDUs. The Block Ack scheme allows two or more data frames 230 to be transmitted before a Block ACK frame is returned to acknowledge the receipt of the data frames. The Block ACK increases communication efficiency since only one signalling ACK frame is needed to acknowledge a block of frames, while every ACK frame originally used has a significant overhead for radio synchronization. The receiving node takes no action immediately upon receiving the last data frame, except the action of recording the state of reception in its scoreboard context. With such a value, the source node is expected to send a Block ACK request (BAR) frame, to which the receiving node responds using the procedure described below;
Bit B7 is reserved (not used by the current 802.11 standards); and
Bits B8-B15 indicate the amount of buffered traffic for a given TID at the non-AP station sending this frame. The AP may use this information to determine the next TXOP duration it will grant to the station. A queue size of 0 indicates the absence of any buffered traffic for that TID.
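A small decoding sketch of the field layout listed above is given below; the bit numbering (B0 as least-significant bit) and the helper names are assumptions made for illustration:

```python
ACK_POLICY = {0b00: "Normal ACK", 0b01: "No ACK", 0b11: "Block ACK"}  # values per the text above

def parse_qos_control(field: int) -> dict:
    """Decode a 16-bit QoS Control field (300) into its subfields."""
    return {
        "tid": field & 0x000F,                                        # B0-B3: traffic identifier
        "ack_policy": ACK_POLICY.get((field >> 5) & 0b11, "other"),   # B5-B6
        "queue_size": (field >> 8) & 0xFF,                            # B8-B15: buffered traffic
    }

print(parse_qos_control(0x2A15))  # purely illustrative input value
```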
To meet the ever-increasing demand for faster wireless networks to support bandwidth-intensive applications, 802.11ac is targeting larger bandwidth transmission through multi-channel operations. Figure 4 illustrates the 802.11ac channel allocation that supports composite channel bandwidths of 20 MHz, 40 MHz, 80 MHz or 160 MHz.
IEEE 802.11ac introduces support of a restricted number of predefined subsets of 20MHz channels to form the sole predefined composite channel configurations that are available for reservation by any 802.11ac node on the wireless network to transmit data.
The predefined subsets are shown in the Figure and correspond to 20 MHz, 40 MHz, 80 MHz, and 160 MHz channel bandwidths, compared to only 20 MHz and 40 MHz supported by 802.11n. Indeed, the 20 MHz component channels 300-1 to 300-8 are concatenated to form wider communication composite channels.
In the 802.11ac standard, the channels of each predefined 40MHz, 80MHz or 160MHz subset are contiguous within the operating frequency band, i.e. no hole (missing channel) in the composite channel as ordered in the operating frequency band is allowed.
The 160 MHz channel bandwidth is composed of two 80 MHz channels that may or may not be frequency contiguous. The 80 MHz and 40 MHz channels are respectively composed of two frequency-adjacent or contiguous 40 MHz and 20 MHz channels. However, the present invention may have embodiments with either composition of the channel bandwidth, i.e. including only contiguous channels or formed of non-contiguous channels within the operating band.
A node is granted a TxOP through the enhanced distributed channel access (EDCA) mechanism on the “primary channel” (300-3). Indeed, for each composite channel having a bandwidth, 802.11ac designates one channel as “primary” meaning that it is used for contending for access to the composite channel. The primary 20MHz channel is common to all nodes (STAs) belonging to the same basic set, i.e. managed by or registered to the same local Access Point (AP).
However, to make sure that no other legacy node (i.e. not belonging to the same set) uses the secondary channels, it is provided that the control frames (e.g. RTS frame/CTS frame) reserving the composite channel are duplicated over each 20MHz channel of such composite channel.
As addressed earlier, the IEEE 802.11ac standard enables up to four, or even eight, 20 MHz channels to be bound. Because of the limited number of channels (19 in the 5 GHz band in Europe), channel saturation becomes problematic. Indeed, in densely populated areas, the 5 GHz band will surely tend to saturate even with a 20 or 40 MHz bandwidth usage per Wireless-LAN cell.
Developments in the 802.11ax standard seek to enhance efficiency and usage of the wireless channel for dense environments.
In this perspective, one may consider multi-user transmission features, allowing multiple simultaneous transmissions to different users in both downlink and uplink directions. In the uplink, multi-user transmissions can be used to mitigate the collision probability by allowing multiple nodes to simultaneously transmit.
To actually perform such multi-user transmission, it has been proposed to split a granted 20MHz channel (300-1 to 300-4) into sub-channels 410 (elementary sub-channels), also referred to as sub-carriers or resource units (RUs), that are shared in the frequency domain by multiple users, based for instance on Orthogonal Frequency Division Multiple Access (OFDMA) technique.
This is illustrated with reference to Figure 5.
The multi-user feature of OFDMA allows the AP to assign different RUs to different nodes in order to increase concurrency. This may help to reduce contention and collisions inside 802.11 networks.
Contrary to downlink OFDMA wherein the AP can directly send multiple data to multiple stations (supported by specific indications inside the PLCP header), a trigger mechanism has been adopted for the AP to trigger uplink communications from various nodes.
To support an uplink multi-user transmission (during a pre-empted TxOP), the 802.11ax AP has to provide signalling information for both legacy stations (non-802.11ax nodes) to set their NAV and for 802.11ax nodes to determine the Resource Units allocation.
In the following description, the term legacy refers to non-802.11ax nodes, meaning 802.11 nodes of previous technologies that do not support OFDMA communications.
As shown in the example of Figure 5, the AP sends a trigger frame (TF) 430 to the targeted 802.11ax nodes. The bandwidth or width of the targeted composite channel is signalled in the TF frame, meaning that the 20, 40, 80 or 160 MHz value is added. The TF frame is sent over the primary 20MHz channel and duplicated (replicated) on each of the other 20MHz channels forming the targeted composite channel. As described above for the duplication of control frames, it is expected that every nearby legacy node (non-HT or 802.11ac node) receiving the TF on its primary channel then sets its NAV to the value specified in the TF frame. This prevents these legacy nodes from accessing the channels of the targeted composite channel during the TXOP.
Based on an AP's decision, the trigger frame TF may define a plurality of resource units (RUs) 410, or "Random RUs", which can be randomly accessed by the nodes of the network. In other words, Random RUs designated or allocated by the AP in the TF may serve as a basis for contention between nodes willing to access the communication medium for sending data. A collision occurs when two or more nodes attempt to transmit at the same time over the same RU.
A trigger frame that can be randomly accessed is referred to as a trigger frame for random access (TF-R). A TF-R may be emitted by the AP to allow multiple nodes to perform UL MU (UpLink Multi-User) random access to obtain an RU for their UL transmissions.
The trigger frame TF may also designate Scheduled resource units, in addition or in replacement of the Random RUs. Scheduled RUs may be reserved by the AP for certain nodes in which case no contention for accessing such RUs is needed for these nodes. Such RUs and their corresponding scheduled nodes are indicated in the trigger frame. For instance, a node identifier, such as the Association ID (AID) assigned to each node upon registration, is added in association with each Scheduled RU in order to explicitly indicate the node that is allowed to use each Scheduled RU.
An AID equal to 0 may be used to identify random RUs.
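The sketch below illustrates how a node could sort the RUs advertised in a trigger frame using this convention; the (aid, ru_index) tuple representation is ours and is not a frame format:

```python
def classify_rus(ru_entries, my_aid):
    """Split advertised RUs into those scheduled for this node and the random ones."""
    scheduled_for_me, random_rus = [], []
    for aid, ru_index in ru_entries:
        if aid == 0:
            random_rus.append(ru_index)        # AID 0: random RU, open to contention
        elif aid == my_aid:
            scheduled_for_me.append(ru_index)  # reserved for this node, no contention needed
    return scheduled_for_me, random_rus

# Example: RUs 0 and 3 are random, RU 1 is scheduled for AID 12, RU 2 for AID 7.
print(classify_rus([(0, 0), (12, 1), (7, 2), (0, 3)], my_aid=12))
```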
Also, the AP may assign Random RUs to a specific group of nodes, which thus contend for access to these Random RUs. For instance, the AP may specify a node group ID, such as a BSSID (standing for "Basic Service Set Identification") in case the AP handles a plurality of BSSs.
In the example of Figure 5, each 20MHz channel (400-1, 400-2, 400-3 or 400-4) is sub-divided in the frequency domain into four sub-channels or RUs 410, typically of size 5 MHz.
Of course, the number of RUs splitting a 20MHz channel may be different from four. For instance, from two to nine RUs may be provided (thus each having a size between 10MHz and about 2MHz).
Once the nodes have used the RUs to transmit data to the AP, the AP responds with an acknowledgment (not shown in the Figure) to acknowledge the data on each RU.
Document IEEE 802.11-15/1105 provides an exemplary random allocation procedure that may be used by the nodes to access the Random RUs indicated in the TF. This random allocation procedure is based on a new backoff counter, referred to below as the OFDMA or RU backoff value (or OBO), inside the 802.11ax nodes for allowing a dedicated contention when accessing an RU to send data.
The OFDMA backoff value OBO to contend for access to the random RUs is randomly selected within the contention window range [0, CWO], wherein CWO is the contention window size and is defined in a selection range [CWOmin, CWOmax].
The RU backoff counter may for instance be the same as a conventional backoff counter, i.e. be a simple copy thereof.
Each node STA1 to STAn is a transmitting node with regard to the receiving AP, and as a consequence, each node has an active RU backoff engine, separate from the one or more queue backoff engines, for computing an RU backoff value (OBO) to be used to contend for access to at least one random resource unit splitting a transmission opportunity granted on the communication channel, in order to transmit data stored in one or another traffic queue AC.
Below, "RU backoff" and "OBO backoff" are synonymous and refer to the same backoff engine used to contend for access to the Random RUs.
The random allocation procedure comprises, for a node of a plurality of nodes having an active RU backoff value OBO, a first step of determining from the trigger frame the sub-channels or RUs of the communication medium available for contention, a second step of verifying whether the value of the active RU backoff value OBO local to the considered node is not greater than the number of detected-as-available random RUs, and then, in case of successful verification, a third step of randomly selecting an RU among the detected-as-available RUs for sending data. If the verification of the second step fails, a fourth step (instead of the third) is performed in order to decrement the RU backoff value OBO by the number of detected-as-available RUs.
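A condensed sketch of these four steps is given below (the helper names are ours; the eligibility test and the decrement rule follow the description just above):

```python
import random

def on_trigger_frame(obo: int, available_rus: list) -> tuple:
    """Return (new_obo, selected_ru); selected_ru is None when the node must wait."""
    if obo <= len(available_rus):                  # step 2: eligibility check
        return 0, random.choice(available_rus)     # step 3: pick one available random RU
    return obo - len(available_rus), None          # step 4: decrement OBO and wait

# Example: OBO = 7 with 4 available random RUs -> not yet eligible, OBO becomes 3.
print(on_trigger_frame(7, [2, 5, 6, 8]))
```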
As shown in the Figure, some Resource Units may not be used (410u) because no node with an RU backoff value OBO less than the number of available random RUs has randomly selected one of these RUs, whereas some others are collided (for example 410c) because two of these nodes have randomly selected the same RU.
The conventional handling of random RUs is not satisfactory. There is a need to provide fair use of the network in dense wireless environments with more efficient allocation schemes used to allocate the OFDMA RUs to the nodes.
Also the coexistence of OFDMA (or RU) backoff scheme and EDCA queue backoff scheme for CSMA/CA contention may make the handling of Random RUs more difficult.
The present invention provides improved wireless communications with more efficient use of the OFDMA Random RUs while limiting the risks of collisions on these RUs. All of this is preferably kept compliant with the 802.11 standards.
An exemplary wireless network is an IEEE 802.11ac network (and upper versions). However, the invention applies to any wireless network comprising an access point AP 110 and a plurality of nodes 101-107 transmitting data to the AP through a multi-user transmission. The invention is especially suitable for data transmission in an IEEE 802.11ax network (and future versions) requiring better use of bandwidth.
An exemplary management of multi-user transmission in such a network has been described above with reference to Figures 1 to 4.
First embodiments provide a dynamic control by the AP of parameters used by the nodes to contend for access to the Random RUs. Following one or more trigger frames reserving one or more transmission opportunities on at least one communication channel of the wireless network, each trigger frame defining resource units forming the communication channel and including a plurality of random resource units that the nodes access using a contention scheme, the wireless communication method according to the first embodiments has specific steps.
At the access point AP, they include:
determining statistics on random resource units not used by the nodes during the one or more transmission opportunities and/or random resource units on which nodes collide during the one or more transmission opportunities;
determining a correcting or “TBD” parameter based on the determined statistics;
sending, to the nodes, a next trigger frame for reserving a next transmission opportunity, the next trigger frame including the determined TBD parameter.
At the nodes, they include:
determining, based on the received TBD parameter and on one random parameter local to the node, one of the random resource units (this step corresponds to the way the nodes contend for access to the random resource units according to the first embodiments of the invention);
transmitting data to the access point using the determined random resource unit.
All of this shows that a correcting or TBD parameter is exchanged between the access point and the nodes. On one hand, it is used by the nodes to adjust how the local random parameter impacts the choice of the random RUs to be used. On the other hand, this TBD parameter is calculated by the access point based on statistics related to the use of the Random RUs (unused or collided RUs) in one or more previous transmission opportunities. This is because the access point has an overall view of the network, as the nodes only communicate with it.
As a result, the contention scheme used by the nodes to access the Random RUs can be dynamically adapted to the network environment. As a consequence, more efficient usage of the network bandwidth (of the RUs) with limited risks of collisions can be achieved.
Figure 6 schematically illustrates a communication device 600 of the radio network 100, configured to implement at least one embodiment of the present invention. The communication device 600 may preferably be a device such as a micro-computer, a workstation or a light portable device. The communication device 600 comprises a communication bus 613 to which there are preferably connected:
• a central processing unit 611, such as a microprocessor, denoted CPU;
• a read only memory 607, denoted ROM, for storing computer programs for implementing the invention;
• a random access memory 612, denoted RAM, for storing the executable code of methods according to embodiments of the invention as well as the registers adapted to record variables and parameters necessary for implementing methods according to embodiments of the invention; and
• at least one communication interface 602 connected to the radio communication network 100 over which digital data packets or frames or control frames are transmitted, for example a wireless communication network according to the 802.11ax protocol. The frames are written from a FIFO sending memory in RAM 612 to the network interface for transmission or are read from the network interface for reception and writing into a FIFO receiving memory in RAM 612 under the control of a software application running in the CPU 611.
Optionally, the communication device 600 may also include the following components:
• a data storage means 604 such as a hard disk, for storing computer programs for implementing methods according to one or more embodiments of the invention;
• a disk drive 605 for a disk 606, the disk drive being adapted to read data from the disk 606 or to write data onto said disk;
• a screen 609 for displaying decoded data and/or serving as a graphical interface with the user, by means of a keyboard 610 or any other pointing means.
The communication device 600 may be optionally connected to various peripherals, such as for example a digital camera 608, each being connected to an input/output card (not shown) so as to supply data to the communication device 600.
Preferably the communication bus provides communication and interoperability between the various elements included in the communication device 600 or connected to it. The representation of the bus is not limiting and in particular the central processing unit is operable to communicate instructions to any element of the communication device 600 directly or by means of another element of the communication device 600.
The disk 606 may optionally be replaced by any information medium such as for example a compact disk (CD-ROM), rewritable or not, a ZIP disk, a USB key or a memory card and, in general terms, by an information storage means that can be read by a microcomputer or by a microprocessor, integrated or not into the apparatus, possibly removable and adapted to store one or more programs whose execution enables a method according to the invention to be implemented.
The executable code may optionally be stored either in read only memory 607, on the hard disk 604 or on a removable digital medium such as for example a disk 606 as described previously. According to an optional variant, the executable code of the programs can be received by means of the communication network 603, via the interface 602, in order to be stored in one of the storage means of the communication device 600, such as the hard disk 604, before being executed.
The central processing unit 611 is preferably adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to the invention, which instructions are stored in one of the aforementioned storage means. On powering up, the program or programs that are stored in a non-volatile memory, for example on the hard disk 604 or in the read only memory 607, are transferred into the random access memory 612, which then contains the executable code of the program or programs, as well as registers for storing the variables and parameters necessary for implementing the invention.
In a preferred embodiment, the apparatus is a programmable apparatus which uses software to implement the invention. However, alternatively, the present invention may be implemented in hardware (for example, in the form of an Application Specific Integrated Circuit or ASIC).
Figure 7 is a block diagram schematically illustrating the architecture of a communication device or node 600, either the AP 110 or one of nodes 101-107, adapted to carry out, at least partially, the invention. As illustrated, node 600 comprises a physical (PHY) layer block 703, a MAC layer block 702, and an application layer block 701.
The PHY layer block 703 (here an 802.11 standardized PHY layer) has the task of formatting, modulating on or demodulating from any 20MHz channel or the composite channel, and thus sending or receiving frames over the radio medium used 100, such as 802.11 frames, for instance medium access trigger frames TF 430 to reserve a transmission slot, MAC data and management frames based on a 20 MHz width to interact with legacy 802.11 stations, as well as MAC data frames of OFDMA type having a smaller width than 20 MHz legacy (typically 2 or 5 MHz), to/from that radio medium.
The MAC layer block or controller 702 preferably comprises a MAC 802.11 layer 704 implementing conventional 802.11ax MAC operations, and an additional block 705 for carrying out, at least partially, the invention. The MAC layer block 702 may optionally be implemented in software, which software is loaded into RAM 612 and executed by CPU 611.
Preferably, the additional block, referred to as random RU procedure module 705 for controlling access to OFDMA resource units (sub-channels), implements the part of the invention that regards node 600, i.e. transmitting operations for a source node, receiving operations for a receiving node, or operations for the AP.
For instance and not exhaustively, the operations for the AP may include gathering statistics on use of the Random RUs, computing a correcting “TBD” parameter and optionally a time window size, and adjusting the number of Random RUs; the operations for a node different from the AP may include using such information from the AP to compute a contention window size and thus to contend for access to the RUs, calculating a local RU backoff value for such contention, and sensing use or not of the Random RUs before accessing one of them.
MAC 802.11 layer 704 and random RU procedure module 705 interact with one another in order to provide management of the queue backoff engines and RU backoff engines.
At the top of the Figure, application layer block 701 runs an application that generates and receives data packets, for example data packets of a video stream. Application layer block 701 represents all the stack layers above the MAC layer according to ISO standardization.
Embodiments of the present invention are now illustrated using various exemplary implementations. Although the proposed examples use the trigger frame 430 (see Figure 5a) sent by an AP for multi-user uplink transmissions, equivalent mechanisms can be used in a centralized or an ad hoc environment (i.e. without an AP).
Such a trigger frame may be dedicated to a specific data traffic, in which case it includes a reference to a type of data traffic, for instance any priority or AC as shown in Figure 3b.
As a consequence, the management of the RU backoff value may be performed with respect to a single type of data traffic or, more generally, with respect to any data regardless of the data traffic type. In other words, the contention for access to the Random RUs according to the invention can be conducted regardless of the ACs.
More generally, the invention may apply for transmitting data, regardless of the ACs, meaning that a general transmission buffer is used instead of a plurality of AC queues. In such a case, the references below to “active AC” are meaningless, and only refer to such general transmission buffer.
However, for illustrative purposes, specific implementations taking into account the ACs are described below.
Figure 8 illustrates an exemplary transmission block of a communication node 600 according to illustrative embodiments of the invention.
The node includes:
- a plurality of traffic queues 310 for serving data traffic at different priorities;
- a plurality of queue backoff engines 311, each associated with a respective traffic queue for computing a respective queue backoff value to be used to contend for access to at least one communication channel in order to transmit data stored in the respective traffic queue. This is the EDCA; and
- an RU backoff engine 800 separate from the queue backoff engines, for computing an RU backoff value to be used to contend for access to the OFDMA resources defined in a received TF (sent by the AP for instance), in order to transmit data stored in either traffic queue in an OFDMA RU. The RU backoff engine 800 belongs to a more general module, namely Random RU procedure module 705, which also includes a transmission module, referred to as OFDMA muxer 801.
The conventional AC queue back-off registers 311 drive the medium access request according to the EDCA protocol, while in parallel, the RU backoff engine 800 drives the medium access request according to the OFDMA multi-user protocol.
As these two contention schemes coexist, the source node implements a medium access mechanism with collision avoidance based on a computation of backoff values:
- a queue backoff counter value corresponding to a number of time-slots the node waits, after the communication medium has been detected to be idle, before accessing the medium. This is EDCA;
- an RU backoff counter value (OBO) corresponding to a number of idle RUs the node detects, after a TxOP has been granted to the AP over a composite channel formed of RUs, before accessing the medium. This is OFDMA.
RU backoff engine 800 computes the RU backoff value OBO by randomly selecting a value within a contention (or congestion) window range [0, CWO], wherein the contention window size CWO is selected from the selection range [CWOmin, CWOmax].
OFDMA muxer 801 is in charge, when the RU backoff value OBO reaches zero, of selecting data to be sent from one or more AC queues 310 (or the general transmission buffer in a more general context). Various ways to select the data to be sent from the one or more queues can be implemented. As it is not the core of the present invention, such selection approaches are not further detailed here.
One main advantage of embodiments of the present invention is that the OBO/RU backoff engine can still use a classical hardware state-machine of the standard back-off mechanism, in particular the basic mechanism whereby a medium access is requested when a back-off value reaches zero. Adjusting the back-off parameters (backoff value, contention window min and max) is implemented simply by overwriting registers.
Upon receiving a Trigger Frame 430, the contention procedure for counting down the OBO backoff may consist in decreasing the OBO backoff count value by the number of detected-as-available RUs in the received trigger frame, or in a variant in decreasing the OBO backoff count value each elementary time unit (which may be different in size, in particular shorter, compared to the time units used when contending for access to the 20MHz communication channels).
The medium access to be requested when OBO is down to zero (or less) may consist in applying a random selection of an RU among the detected-as-available RUs for sending data (according to the example of Figure 5). In a variant, the random RUs may be indexed from 1 to NbRU, and the selected random RU is the one having the RU backoff value OBO (before the above decrementing by the number of detected-as-available RUs) as index.
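As a hedged sketch of the indexed variant (Python; select_ru_by_index and available_rus are illustrative names, and the 1-based indexing follows the description above):

```python
def select_ru_by_index(obo, available_rus):
    """Return the RU whose index (1..NbRU) equals the pre-decrement OBO value, if any."""
    nb_ru = len(available_rus)
    if 1 <= obo <= nb_ru:
        return available_rus[obo - 1]   # RU indexed by the OBO value
    return None                         # keep contending; OBO is decremented elsewhere
```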
According to first embodiments of the invention, the Trigger Frame 430 includes a correcting “TBD” parameter calculated by the AP, and one random resource unit is determined from the detected-as-available random resource units, based on the received TBD parameter and on one random parameter local to the node. The random parameter local to the node is for instance the OBO backoff value randomly selected from the contention window range [0, CWO].
According to second embodiments of the invention, the contention window range [0, CWO] from which any new OBO backoff value is randomly selected is updated depending on a success or failure in transmitting the data during the previous RU access.
Embodiments of the invention are now described with reference to Figures 9 to 18.
Figure 9 illustrates, using a flowchart, main steps performed by MAC layer 702 of node 600, when receiving new data to transmit.
At the very beginning, no traffic queue (or the general transmission buffer) stores data to transmit. As a consequence, no queue backoff value has been computed. The corresponding queue backoff engine or corresponding AC (Access Category) is said to be inactive. As soon as data are stored in a traffic queue, a queue backoff value is computed (from corresponding queue backoff parameters), and the associated queue backoff engine or AC is said to be active.
At step 901, new data is received from an application running locally on the device (from application layer 701 for instance), from another network interface, or from any other data source. The new data are ready to be sent by the node.
At step 902, conventional 802.11 AC backoff computation is performed by the queue backoff engine corresponding to the type of the received data.
If the AC queue corresponding to the type (Access Category) of the received data is empty (i.e. the AC is originally inactive), then there is a need to compute a queue backoff value for the corresponding backoff counter.
The node then computes the queue backoff value as being equal to a random value selected in range [0, CW] + AIFS, where CW is the current value of the contention window size for the Access Category considered (as defined in 802.11 standard and updated for instance in step 1170 below), and AIFS is an offset value which depends on the AC of the data (all the AIFS values being defined in the 802.11 standard) and which is designed to implement the relative priority of the different access categories.
As a result the AC is made active.
Next to step 902, step 903 computes the RU backoff value OBO if needed.
An RU backoff value OBO needs to be computed if the RU backoff engine 800 was inactive (for instance because there were no data in the traffic queues / general transmission buffer until previous step 901) and if new data to be addressed to the AP have been received. This step 903 is thus a step of initializing OBO.
It first includes initializing the Contention Window size CWO (note that CW refers to the conventional contention window size for the ACs while CWO refers to the contention window size for the RU/OBO backoff, specific to embodiments of the invention) as explained below with reference to Figure 10, and then computing RU backoff value OBO from CWO.
In particular, RU backoff value OBO may be determined as a random integer selected from contention window range [0, CWO], uniformly distributed: OBO = random[0, CWO]. This is why the random RUs selected by the nodes for transmission are based on one random parameter local to the node.
In variants, RU backoff value OBO may be determined by adding, to a value randomly selected from contention window range [0, CWO] uniformly distributed, a value computed from one or more arbitration interframe spaces, AIFS:
OBO = random[0, CWO] + AIFS[AC].
For instance, AIFS[AC] is either the lowest AIFS value among the EDCA AIFS value or values of the active AC or ACs in the considered node 600, or an average of those EDCA AIFS values.
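For illustration only, this initialization may be sketched as follows (Python; init_obo, aifs_by_active_ac and use_average are illustrative names; rounding the average AIFS to an integer is an assumption):

```python
import random

def init_obo(cwo, aifs_by_active_ac, use_average=False):
    """Draw OBO from [0, CWO] and add an AIFS-derived offset for the active ACs."""
    if aifs_by_active_ac:
        values = list(aifs_by_active_ac.values())
        aifs = sum(values) / len(values) if use_average else min(values)
    else:
        aifs = 0
    return random.randint(0, cwo) + int(round(aifs))
```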
According to the first embodiments of the invention, RU backoff value OBO is determined based on a correcting parameter TBD, such as an RU collision and unuse factor, received from the AP (in the Trigger Frame), for instance because CWO itself may be computed from the TBD parameter. The RU collision and unuse factor TBD is further explained below. It is an adjustment parameter transmitted by the AP to drive node 600 to adjust its RU backoff value OBO. This adjustment parameter preferably reflects the AP point of view of collisions on RUs and/or of unuse of RUs, in the overall 802.11ax network.
Thus, the RU collision and unuse factor TBD is preferably a function of the number of unused random resource units and of the number of collided random resource units in one or more previous trigger frames, as detected by the AP.
Symmetrically, it may also be a function of a number of random resource units that are used by the nodes and that do not experience collision during the one or more transmission opportunities.
Next to step 903, the process of Figure 9 ends.
For completeness of description, an exemplary determination of TBD parameter is provided. It takes place at the AP upon providing random RUs in trigger frames. The number of RUs in the trigger frame may also evolve simultaneously.
Figure 15 illustrates, using a flowchart, general steps of a wireless communication method at the AP adapted to compute the TBD parameter or RU collision and unuse factor TBD. Such information is encapsulated inside a new Trigger Frame (TF) sent by the AP, for instance as shown below with reference to Figure 16.
According to some embodiments of the invention, the TBD parameter is added to TF only if it is relevant, i.e. if its use improves efficiency of the network. Implementations of this approach are described below with reference to Figure 17.
Upon receiving an uplink OFDMA frame (1501), the AP is in charge of sending an acknowledgment frame to acknowledge safe reception of transmitted data by all or part of the nodes over the OFDMA RUs (1502).
At step 1503, the AP analyses the number of collided and empty (i.e. unused) OFDMA random RUs. It may perform this step by sensing each RU forming the composite channel. These values are used to update OFDMA use statistics. In particular, the AP determines statistics on random resource units not used by the nodes during the transmission opportunity and/or random resource units on which nodes collide during the transmission opportunity.
The OFDMA use statistics are used by the AP at steps 1504-1505 to determine various parameters to dynamically adapt (from one TXOP to the other) the contention scheme of the nodes for accessing the Random RUs.
It includes determining the TBD parameter for the next OFDMA transmission (1504).
It may also include determining and thus modifying the number of random resource units within the communication channel for the next transmission opportunity (1505).
Steps 1504-1508 thus dynamically adapt (from one TXOP to the other) the contention scheme of the nodes for accessing the Random RUs, by both adjusting the TBD parameter and the number of Random RUs available for the nodes.
Consider the case where all (or more than 80% of the) OFDMA Random RUs are used in the last OFDMA TXOP (or the N previous OFDMA TXOPs, N being an integer). It means that many nodes are requesting to transmit data. As a consequence, the number of Random RUs for the next OFDMA transmission can be increased by the AP (for instance by 1, up to a maximum number), while the TBD parameter can remain the same.
In addition, if collisions occur on several used OFDMA Random RUs (for instance more than a third), it means that the TBD parameter should be decreased to minimize the collisions between the nodes during the RU allocation. For instance, the TBD parameter may be decreased by about 30%.
A drawback of decreasing the TBD parameter (in case it is used as a divisor of the RU backoff value OBO by the nodes) is that the Random RU allocation is less optimized.
On the other hand, if several OFDMA Random RUs remain unused (for instance more than a third, or less than 50% of the RUs are used), the TBD parameter can be increased, for instance by 30%, and/or the number of Random RUs for the next OFDMA transmission can be decreased by the AP (for instance by 1) to optimize the OFDMA Random RU allocation.
A drawback of increasing the TBD parameter is that the collisions during the Random RU allocation may increase.
This illustrates that, upon termination of each uplink OFDMA TXOP, the updating of the TBD parameter is a trade-off between minimizing collisions during Random RU allocation and optimizing the filling of the OFDMA Random RUs.
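These heuristics may be sketched, for illustration only, as follows (Python; the 80%, one-third and 30% figures are the examples given above, while the function name, its arguments and the maximum number of random RUs are assumptions):

```python
def adapt_parameters(nb_ru, nb_used, nb_collided, tbd, nb_ru_max=9):
    """Adjust the number of random RUs and the TBD parameter from last-TXOP statistics."""
    if nb_used >= 0.8 * nb_ru:                 # (almost) all random RUs were used
        nb_ru = min(nb_ru + 1, nb_ru_max)      # offer one more random RU, TBD unchanged
    if nb_collided > nb_ru / 3:                # too many collided RUs
        tbd *= 0.7                             # decrease TBD by about 30%
    elif (nb_ru - nb_used) > nb_ru / 3:        # too many unused RUs
        tbd *= 1.3                             # increase TBD by about 30% ...
        nb_ru = max(nb_ru - 1, 1)              # ... and/or offer one fewer random RU
    return nb_ru, tbd
```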
To be precise, at step 1504, the AP may compute a new TBD parameter based on the determined OFDMA use statistics, optionally further based on the number of nodes transmitting on the random resource units during the previous transmission opportunity. Note that the OFDMA use statistics may be statistics on the previous TXOP only or on N (integer) previous TXOPs.
In variants, the TBD parameter may be a percentage according to the collision ratio detected by the AP among the OFDMA RUs, and/or the ratio of unused OFDMA RUs in previous MU OFDMA transmission opportunities and/or the ratio of used and non-collided OFDMA RUs. Depending on whether the TBD parameter is a percentage or an integer value, the formulae involving TBD may be slightly adapted, in particular at the nodes.
For instance, the TBD parameter includes a value used together with a random parameter local to each node, for the node to determine which one of the random resource units to access. For instance, the random parameter can be based on an RU backoff value used by the node to contend for access to the communication channel, and the TBD parameter may be used to define the contention window size CWO from which the RU backoff value is randomly selected.
In embodiments where the TBD parameter is used to define the contention window size CWO at the nodes, the TBD parameter may be a function of a ratio between the number of collided random RUs and the number of random RUs in the one or more transmission opportunities. The other ratios defined above may also be used.
The above ratio may be multiplied by a predefined factor, for instance 0.08, such that TBD is a function of CRF = a·(Nb_collided_RU / Nb_RU_total) with a = 0.08.
Using this formula advantageously makes it possible for the AP to determine an optimum CWO for the nodes without knowledge of the number of concurrent nodes. Indeed, the AP cannot know the number of nodes having tried to send data by analysing the result of transmissions in response to a previous trigger frame, because the AP cannot differentiate between the different nodes colliding on a single RU (the collision detection result is the same if 2 or more nodes are colliding).
However, statistically, the proportion of collided RUs reflects the number of concurrent nodes. So if the AP analyses the number of collided RUs from the previous TFs and creates corresponding statistics, it can use them to determine CWO.
In detail, increasing CWO is a way to adapt the frequency at which the nodes try to access the medium to the effective number of free channels (number of random RUs). So the AP just needs to determine value CRF according to the collided RU statistics, which in turn can be applied to the CWOmin value to adapt CWO.
In specific embodiments, TBD equals this value CRF.
In other specific embodiments, TBD equals 2^CRF (^ being the power function).
In yet other specific embodiments, TBD directly defines the contention window size to be used by the nodes, i.e. directly defines CWO. For instance, TBD = CWOmin * 2^CRF, where CWOmin is a (predetermined) low boundary value.
Indeed, CWO is selected from [CWOmin, CWOmax]. CWOmin is the lower boundary of a selection range from which the nodes select the contention window size to use to contend for access to the random RUs. Symmetrically, CWOmax is the upper boundary of the selection range from which the nodes select the contention window size to use to contend for access to the random RUs.
As an example, CWOmin is (or more generally may be determined as a function of) the number of random resource units defined in the trigger frame (in which TBD is to be encapsulated).
Defining TBD as CWO to be used by the nodes advantageously avoids having the nodes performing a certain number of tries before reaching an optimum CWO value. Indeed, the AP has an overall view of the traffic in the network, and thus can directly compute an optimum CWO for the nodes. Higher stability in latency is thus achieved.
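The variant in which TBD directly carries the CWO to be used by the nodes may be sketched, for illustration only, as follows (Python; compute_tbd and its arguments are illustrative names; a = 0.08 is the example factor given above):

```python
def compute_tbd(nb_collided_ru, nb_ru_total, cwo_min, alpha=0.08):
    """TBD directly carries the CWO to be used: TBD = CWOmin * 2^CRF."""
    crf = alpha * (nb_collided_ru / nb_ru_total)   # CRF = a * (Nb_collided_RU / Nb_RU_total)
    return cwo_min * (2 ** crf)
```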
In other specific embodiments, TBD is used to define the above selection range. For instance, TBD as provided by the AP defines CWOmin or defines CWOmax.
Setting CWOmax with TBD advantageously makes it possible for the AP to control the maximum latency, in particular if the nodes increase their CWO as they experience data collisions in the accessed random RUs. Thus the AP globally controls the latency in the network.
This is for example useful in the scenario where the AP wants to get timely reports from the nodes (by using aging mechanisms, i.e. cancelling outdated packets, to avoid outdated report emission), such as buffer status or MIMO efficiency reports (sounding reports).
Furthermore, it may be noted that increasing CWOmax may enhance efficiency of random RU usage (i.e. number of used random RUs without collisions). Thus, CWOmax set by the AP through the TBD parameter may be a tradeoff between the maximum latency and RU usage efficiency.
In yet other embodiments, TBD may also be used to identify an entry to select in a predefined table of contention window sizes.
Such a table may be shared between the AP and the nodes or can be predetermined at each node. Thus the AP identifies a CWO value to be used by the nodes from the table, by specifying an entry index therein.
These values can be adjusted as the OFDMA use statistics show that too many collisions occur on the Random RUs or too many Random RUs remain unused.
Any of the TBD parameters above may be adjusted or adapted to a specific group of nodes in which case the TBD parameter is preferably computed from OFDMA use statistics related to the nodes of the specific group, if such statistics can be identified. This is to assign different priorities to different groups of nodes, and to control different QoS between the node groups. Preference is given to setting different values of CWO through TBD, instead of different values of CWOmax for instance, because it provides a finer granularity/better control of the discrimination/prioritization between the node groups (by setting CWOmax for different node groups, the discrimination is obtained only when CWO in one group is above CWOmax of another group).
Different groups of nodes may be identified through different BSSIDs, thus corresponding to different virtual sub-networks managed by the AP.
In a variant to the node group approach or in combination therewith, different TBD parameters (values) may be set for different types of data (ACs). Since the trigger frame may be restricted to a specific type of data (specified in the frame), a corresponding TBD parameter may be provided to drive the nodes along a specific behaviour when accessing random RUs to transmit this type of data. In this way, the AP can manage the latency of a given type of required data.
This optional assignment of the TBD value to a group (BSSID) of nodes or to a type of data is shown through optional blocks 1505 and 1506 in the Figure. Step 1505 checks whether a specific requirement is defined at the AP, in which case the assignment is performed at step 1506.
Next, at step 1507, the AP determines the number of Random RUs to consider for the next multi-user TXOP about to be granted (because the AP can pre-empt the wireless medium over the nodes, since it must wait for the medium to be idle during a shorter duration than the waiting duration applied by the nodes).
The determination of step 1507 can be based on the BSS configuration environment, that is to say the basic operational width (namely 20MHz, 40MHz, 80MHz or 160MHz composite channels that include the primary 20MHz channel according to the 802.11ac standard).
For the sake of simplicity, one may consider that a fixed number of OFDMA RUs is allocated per 20 MHz band by the 802.11ax standard: in that case, it is sufficient that the Bandwidth signalling is added to the TF frames (i.e. the 20, 40, 80 or 160 MHz value is added). Typically, such information is signalled in the SERVICE field of the DATA section of non-HT frames according to the 802.11 standard. As a consequence, compliance with 802.11 is kept for the medium access mechanism.
Note that in embodiments where the number of random RUs is kept fixed, step 1507 may be avoided.
Next to step 1507, the OFDMA use statistics may also be used to evaluate a use efficiency of the random resource units based on the determined use statistics. The related steps 1508 to 1511 are implemented when a switch between the AP-initiated mode and the local mode to drive the computation of CWO by the nodes is sought.
Step 1508 evaluates the use efficiency of the random resource units based on the determined OFDMA use statistics. A metric or measure, function of a number of random resource units that are used by the nodes and that do not experience collision during the one or more transmission opportunities, can be used. It means that the use efficiency metric is based on statistics on the RUs that have been successfully used by the nodes (i.e. neither the collided random RUs nor the non-used random RUs).
For instance, the evaluated use efficiency measure may include a ratio between the number of random resource units that are used by the nodes and that do not experience collisions, and a total number of random resource units available during the one or more transmission opportunities. This metric thus mirrors how efficiently the available random RUs have been used.
In variants, the evaluated use efficiency measure may include a ratio between a number of collided random resource units and the total number of random resource units available during the one or more transmission opportunities.
In another variant, the evaluated use efficiency measure may include a ratio between a number of unused random resource units and the total number of random resource units available during the one or more transmission opportunities.
Of course, other formulae mixing the above numbers can be used, provided that they mirror how efficiently the available random RUs have been used.
All of these alternative metrics are based on use statistics accumulated during the one or more transmission opportunities. Any number of transmission opportunities can be considered. As a variant, all the transmission opportunities within a sliding time window can also be taken into account.
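For illustration, the alternative measures may be gathered in a single sketch (Python; the variant names are illustrative, and the counts are assumed to be accumulated over the considered transmission opportunities):

```python
def use_efficiency(nb_used_ok, nb_collided, nb_unused, nb_total, variant="success_ratio"):
    """Return one of the use-efficiency measures described above."""
    if variant == "success_ratio":      # used without collision / total available
        return nb_used_ok / nb_total
    if variant == "collision_ratio":    # collided / total available
        return nb_collided / nb_total
    return nb_unused / nb_total         # unused / total available
```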
Next to step 1508, step 1509 consists in determining whether the evaluated use efficiency measure (e.g. any ratio defined above) indicates that the random RUs are efficiently used or not.
Indeed, Figure 8 shows that sometimes the local mode is more efficient than the AP-initiated mode, and sometimes the reverse happens. Taking into account this information, it is worth trying to switch to the other mode in case the current use efficiency measure is too low.
Thus depending on the evaluated RU efficiency measure, the TBD parameter sent to the nodes to drive them in computing their own RU contention window size CWO should be set to a dedicated value or an UNDEFINED value.
Thus the access point decides, based on the evaluated use efficiency measure, between the two modes, from which results the decision to transmit or not, to the nodes, the determined TBD parameter within the next trigger frame to drive the nodes in determining their own contention window size.
A simple approach may be used, for instance by comparing the evaluated use efficiency measure to an efficiency threshold, e.g. 30%, to determine whether the current use of the random RUs is efficient or not.
For instance, if the evaluated use efficiency measure is below the efficiency threshold, a TBD Information Element (to be included in the next trigger frame as described below with reference to Figure 16) is set to the TBD parameter as determined at step 1504. This is step 1511. This aims at transmitting, to the nodes, the determined TBD parameter TBD within a next trigger frame for reserving a next transmission opportunity, in case of low use efficiency.
On the contrary, in case of an evaluated use efficiency measure above the efficiency threshold, a local approach for computing CWO is sufficient. In this case, the TBD Information Element is set to an UNDEFINED (or UNUSED) value. This is step 1510 for the AP to have a next trigger frame to transmit that does not define a TBD parameter to drive nodes in defining their own contention window size. In this particular case, the transmitted next trigger frame includes a TBD parameter field set to undefined.
Of course, more complex use efficiency metrics (more complex than the ratios mentioned above) and more complex tests for step 1509 can be used to evaluate whether it is opportune to switch to one or the other mode between the local and AP-initiated modes.
A variant is shown in Figure 15a based on a hysteresis cycle.
To switch from one of the local or AP-initiated modes to the other, two predefined efficiency thresholds (THR1 and THR2) may be defined in order to avoid noisy switching. The two thresholds are used in a hysteresis cycle, to lock a current mode as long as an unlocking criterion (e.g. a comparison with THR2) is not reached. With this hysteresis cycle, the access point decides to switch from a current mode, among a first mode in which the determined TBD parameter is transmitted within a trigger frame and a second mode in which the determined TBD parameter is not transmitted, to the other mode when the evaluated use efficiency measure falls below a first predefined efficiency threshold.
The evaluated use efficiency measure is first compared to THR1, for instance 30% in case the measure used includes a ratio between the number of random resource units that are used by the nodes and that do not experience collisions, and a total number of random resource units available during the one or more transmission opportunities. This is step 1550.
In case the evaluated use efficiency measure is less than THR1 (output “yes”), it is determined at step 1551 whether the current mode (either local or AP-initiated) is locked or not. The lock may be implemented using one bit in a memory or register.
If it is locked (output “yes” at test 1551), no switch can be performed and the current mode is kept. The next step is step 1555.
Otherwise (output “no” at test 1551), the current mode can be switched to the other mode, i.e. either local mode to AP-initiated mode, or the reverse. This is step 1552, at the end of which the new mode is locked (a locking bit in the register is set to “on”). The next step is step 1555.
Back to step 1550, if the evaluated use efficiency measure is above THR1 (output “no” at test 1550), the next steps are used to determine whether or not the current mode can be unlocked.
To do so, an unlocking criterion is evaluated at step 1553 using THR2; for instance the evaluated use efficiency measure is compared to THR2, e.g. 32% for the above-mentioned ratio. The idea behind step 1553 is to allow the current mode to be unlocked only if it has provided some benefits in the use of the random RUs.
If the unlocking criterion is not met (e.g. the evaluated use efficiency measure remains below THR2), the current mode is kept locked by going to next step 1555.
Otherwise (the unlocking criterion is met), the current mode is unlocked (the locking bit in the register is set to “off”). It means that the current mode is locked until an evaluated use efficiency measure reaches a second predefined efficiency threshold. The next step is step 1555.
In a variant shown through the optional step 1556, it may be decided to unlock the current mode in case the last mode switch occurred a long time ago (a time threshold may be used). This is to avoid blocking the network in a specific mode with low efficiency, in case the other mode could provide better results. Indeed, after the unlocking due to expiry of the time threshold, the AP can switch to the other mode in case the evaluated use efficiency measure remains low.
Following the hysteresis steps, step 1555 consists in setting the TBD Information Element to the appropriate value depending on the current mode. If the current mode is the local mode, then the TBD Information Element is set to UNDEFINED (similarly to step 1510). On the other hand, if the current mode is the AP-initiated mode, the TBD Information Element is set to the TBD parameter (value) obtained at step 1504 (similarly to step 1511).
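As an illustrative sketch of this hysteresis (Python; THR1 = 30% and THR2 = 32% are the example values given above, the mode labels are assumptions, and the optional time-based unlocking of step 1556 is omitted):

```python
def update_mode(mode, locked, efficiency, thr1=0.30, thr2=0.32):
    """One iteration of the Figure 15a hysteresis: possibly switch, lock or unlock the mode."""
    if efficiency < thr1:
        if not locked:
            mode = "local" if mode == "ap_initiated" else "ap_initiated"  # step 1552: switch
            locked = True                                                 # and lock the new mode
    elif efficiency >= thr2:
        locked = False                                                    # unlocking criterion met
    return mode, locked
```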
Once the TBD Information Element has been set at step 1510 or 1511 (or 1555 in Figure 15a), it is inserted in the next trigger frame to be sent. Thus, next steps 1512/1513 consist for the AP to build and send the next trigger frame with the above determined information: Random RUs information and TBD value (in the TBD Information Element).
It is expected that every nearby node (legacy or 802.11ac, i.e. which is neither STA1 nor STA2) can receive the TF on its primary channel. Each of these nodes then sets its NAV to the value specified in the TF frame: the medium is thus reserved by the AP.
The process of Figure 15 can be performed at each new trigger frame (step 1501 occurs following transmissions triggered by a trigger frame). However, it may be contemplated performing the process at each N (integer) trigger frames, in order to reduce the maximum frequency of switching. This is to have time to accurately evaluate the RU use efficiency of the mode (local or AP-initiated) to which the network has switched.
Figure 16 illustrates an exemplary format for an information Element dedicated to the transmission of the TBD parameter within the TF.
The ‘TBD Information Element’ (1610) is used by the AP to embed additional information within the trigger frame TF related to the OFDMA TXOP.
The proposed format follows the ‘Vendor Specific information element’ format as defined in the IEEE 802.11-2007 standard.
The ‘TBD Information Element’ (1610) is a container of the TBD parameter attributes (1620), each having a dedicated attribute ID for identification. The header of the TBD Information Element can be standardized (and thus easily identified by stations 600) through the Element ID.
The TBD attributes 1620 are defined to have a common general format consisting of a 1-byte TBD Attribute ID field, a two-byte Length field and a TBD attribute body (1630) including the TBD parameter (value) computed by the AP.
The usage of the Information Element inside the MAC frame payload is given for illustration only; any other format may be supported. The choice of embedding additional information in the MAC payload is advantageous in that it keeps legacy compliancy for the medium access mechanism. This is because any modification performed inside the PHY header of the 802.11 frame would have inhibited any successful decoding of the MAC header (the Duration field would not have been decoded, so the NAV would not have been set by legacy devices).
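Purely as an illustrative sketch of this container (Python; the 1-byte attribute ID and 2-byte length follow the description above, whereas the little-endian packing, the 1-byte IE header length field and the 1-byte encoding of the TBD value are assumptions):

```python
import struct

def build_tbd_ie(element_id, attribute_id, tbd_value):
    """Pack a TBD attribute (ID, length, body) inside a vendor-specific-style information element."""
    body = struct.pack("<B", tbd_value)                                 # TBD attribute body 1630
    attribute = struct.pack("<BH", attribute_id, len(body)) + body      # TBD attribute 1620
    return struct.pack("<BB", element_id, len(attribute)) + attribute   # TBD Information Element 1610
```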
Figure 10 illustrates, using a flowchart, main steps for setting (including initializing) CWO at node 600. In other words, it describes a first sub-step within step 903 to prepare a random access (contention) for (UL) MU OFDMA transmission in the context of 802.11. It may include computing RU backoff parameters.
It starts initially when node 600 receives (e.g. locally from upper layer 701) new data in any of its AC queues 310 or its general transmission buffer, to be addressed to the AP.
At step 1000, node 600 determines the number NbRU of random RUs, i.e. of the RUs available for contention, to be considered for the multi-user TxOP upon next grant. This information may be provided by the AP through beacon frames or trigger frames themselves, or both. For instance, the information may be retrieved from the last TF detected. An initial value may be used as long as no TF (or beacon frame) is detected.
When the information is conveyed inside a Trigger Frame TF, it may be deduced by counting the number of random RUs, that is to say each RU having an association identifier (AID) equal to 0 (contrary to Scheduled RUs which have non-zero AIDs).
Step 1000 may be optional in embodiments where the RU backoff parameters (and thus the computation of CWO) are not function of the number NbRU of random RUs.
Next, at step 1001, node 600 obtains queue backoff parameters for the active ACs. Indeed, they may be used to compute the RU backoff parameters for OFDMA access as described below. These queue backoff parameters may be retrieved from the active queue backoff engines 311. At step 903, we know that at least one AC is active, and that the data it stores are intended for the AP.
Each active AC maintains the contention window size CW of its contention window range [0, CW] within the interval [CWmin, CWmax], and uses it to select the random queue backoff value.
Thus, examples of queue backoff (AC) parameters are the following:
- boundaries (CWmin, CWmax);
- arbitration interframe spaces (AIFS);
- contention window size CW.
Next to step 1001, step 1002 consists for node 600 in computing the selection interval [CWOmin, CWOmax] and then CWO. It may be based on the retrieved queue backoff parameters.
In the first embodiments of the invention, step 1002 is based on the TBD parameter received from the AP in a Trigger Frame, in particular the last received Trigger Frame. In particular, the contention window size, i.e. CWO, is determined (directly or indirectly through CWOmin and/or CWOmax) based on the TBD parameter received from the access point.
In the second embodiments of the invention, step 1002 is based on a success or failure in transmitting data during a previous RU access. A local value of CWO may be doubled or the like in case of failure, in order to restrict transmissions in case of collisions, which in turn reduces the probability of collisions and thus improves use of the communication network.
Step 1002 may include two sub-steps:
- a first sub-step to determine CWOmin and CWOmax, wherein at least one of CWOmin and CWOmax, preferably both, is an RU backoff parameter determined based on one or more queue backoff parameters;
- a second sub-step to compute or select CWO from range [CWOmin, CWOmax].
This ensures that CWO depends on the current EDCA parameters, such as the CWs. As a consequence, this advantageously takes into account the priorities raised by EDCA ACs in the process of computing the RU backoff parameters for OBO.
However, in a more general approach that does not take into account the data traffic ACs, CWOmin and CWOmax can be computed using other parameters or using the backoff parameters defining the sole general transmission buffer.
According to some of the second embodiments of the invention, CWOmin and CWOmax and CWO are computed only from information computed locally by node 600. This is for instance the case in the process of Figure 13 described below.
Regarding the first sub-step, as the targeted transmission is of UL OFDMA type, RU backoff parameters CWOmin and CWOmax should be computed differently than the corresponding CWmin/CWmax values of the EDCA scheme.
As an example, CWOmin may be set to the number of random resource units defined in a received trigger frame: CWOmin = NbRU. This improves usage of the OFDMA RUs. This advantageously does not take into account the ACs.
As another example, CWOmin may be the lowest lower boundary (CWmin) among the selection intervals [CWmin, CWmax] of the active queue backoff engines at node 600, i.e. those having non-zero queue backoff values:
CWOmin = Min({CWmin}active AC). This option is preferably performed when the CWmin values are greater than the number of random RUs. Indeed, there is no interest in having CWmin lower than NbRU since the risk of collisions would be very high.
As another example, CWOmin may be set both according to the lowest lower boundary (CWmin) among the selection intervals [CWmin, CWmax] of the active queue backoff engines at node 600 (i.e. those having non-zero queue backoff values), and according to the number of random RUs:
CWOmin = Min({CWmin}active AC) × NbRU.
In case of a single general transmission buffer, CWmin for this buffer may be used to compute CWOmin according to any of the above formulae.
Similarly regarding CWOmax, it may be set to the upper boundary (CWmax) of the selection interval [CWmin, CWmax] of the active queue backoff engine 311 having the lowest non-zero queue backoff value, i.e. the next AC to transmit, reflecting the highest priority AC: CWOmax = (CWmax)lowest non-zero AC. This exemplary configuration advantageously takes the same priority as that AC.
In another example, CWOmax may be a mean of upper boundaries (CWmax) of selection intervals [CWmin,CWmax] of the active queue backoff engines 311, i.e. having non-zero queue backoff values:
CWOmax = average({CWmax}active AC). This exemplary configuration advantageously takes a medium priority, and is more relaxed compared to the first exemplary configuration.
In another example, CWOmax may be the highest upper boundary (CWmax) among the selection intervals [CWmin, CWmax] of the active queue backoff engines 311, i.e. those having non-zero queue backoff values:
CWOmax = max({CWmax}active AC). Thus node 600 is even more relaxed. This exemplary configuration advantageously ensures that OFDMA will not take a medium priority lower than EDCA.
In case of a single general transmission buffer, CWmax for this buffer may be used to compute CWOmax according to any of the above formulae.
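For illustration, the above configurations may be gathered in one sketch (Python; active_acs is assumed to list the (CWmin, CWmax) pairs of the active ACs ordered from the next AC to transmit, and the variant names are illustrative):

```python
def compute_cwo_range(active_acs, nb_ru, cwomin_variant="nb_ru", cwomax_variant="mean"):
    """Derive [CWOmin, CWOmax] from the active ACs' EDCA parameters and the number of random RUs."""
    cwmins = [cw_min for cw_min, _ in active_acs]
    cwmaxs = [cw_max for _, cw_max in active_acs]
    if cwomin_variant == "nb_ru":
        cwo_min = nb_ru                         # CWOmin = NbRU
    elif cwomin_variant == "min_cwmin":
        cwo_min = min(cwmins)                   # CWOmin = Min({CWmin}active AC)
    else:
        cwo_min = min(cwmins) * nb_ru           # CWOmin = Min({CWmin}active AC) x NbRU
    if cwomax_variant == "highest_priority":
        cwo_max = cwmaxs[0]                     # CWmax of the next AC to transmit
    elif cwomax_variant == "mean":
        cwo_max = sum(cwmaxs) / len(cwmaxs)     # average({CWmax}active AC)
    else:
        cwo_max = max(cwmaxs)                   # max({CWmax}active AC)
    return cwo_min, cwo_max
```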
According to a particular option, the various configurations may be used in turns, instead of selecting only one of them. Either the configuration to use is randomly selected, or it may be based on use statistics: for instance if feedback information of large number of collisions is received, the third configuration may be used. Another configuration will be used as soon as the feedback information informs of a number of collisions under a predefined threshold.
Regarding the second sub-step, CWO may be initially assigned the CWOmin value. Exemplary embodiments for updating of CWO are described below with reference to Figure 12. CWO may be allowed to increase up to the upper bound CWOmax value as the node attempts to access and use random RUs.
For instance, CWO is updated depending on a success or failure in transmitting the data when accessing one or more random RUs.
In embodiments, CWO is doubled in case of transmission failure, and may start with an initial value equal to CWOmin. As successive attempts fail, CWO = CWOmin * 2^n, where n is the number of successive transmission failures for the node computing CWO.
In other embodiments that take into account variations over time, CWO(t) = CWOmin(t) * 2^n, where n is the number of successive transmission failures. Specifically, CWOmin(t) may be the number of random resource units defined in a current trigger frame received at time t, i.e. the last received trigger frame.
According to some of the first embodiments of the invention, at least one of CWOmin, CWOmax and CWO depends on the RU collision and unuse factor TBD received from another node (preferably from the Access Point). This is for instance the case in the process of Figure 14 described below. As CWO usually depends on CWOmin and CWOmax (it is selected from the selection range defined by these two values), the contention window size CWO is also determined based on the TBD parameter received from the access point when it is CWOmin or CWOmax that directly depends on TBD.
For instance, CWOmin may be computed as described above for the second embodiments of the invention (e.g. CWOmin is the number of random resource units defined in the trigger frame), and CWO may be a function of CWOmin and of the received TBD parameter. As an example, CWO is set to 2^TBD * CWOmin. Note that this value may be upper bounded by a CWOmax value as determined above.
As a variant, CWO is set to TBD * CWOmin.
Of course, these variants mirror the variants implemented at the AP to compute TBD. The transmitted TBD parameter is such that the final calculation of CWO is preferably according to the following formula: 2^CRF * CWOmin, wherein CRF = a*(Nb_collided_RU / Nb_RU_total).
In other embodiments, CWO is directly TBD as received.
In yet other embodiments, CWOmin or CWOmax is directly TBD as received. Then CWO may be randomly selected from [CWOmin, CWOmax] and indirectly depends on the TBD parameter received from the AP.
In yet other embodiments, CWO is selected as an entry of a predefined table of contention window sizes (defined above with reference to Figure 15), wherein TBD received from the access point identifies the entry to select in the predefined table.
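A hedged sketch of these node-side variants (Python; which variant applies mirrors the convention adopted by the AP, and all names are illustrative):

```python
def cwo_from_tbd(tbd, cwo_min, cwo_table=None, variant="power"):
    """Derive CWO from the TBD parameter received in the trigger frame."""
    if variant == "power":
        return cwo_min * (2 ** tbd)     # CWO = 2^TBD * CWOmin
    if variant == "factor":
        return cwo_min * tbd            # CWO = TBD * CWOmin
    if variant == "direct":
        return tbd                      # CWO = TBD
    return cwo_table[tbd]               # TBD indexes a predefined table of CWO values
```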
However, as long as the TBD parameter is not received, optional variants may be implemented. In a first variant, the second embodiments of the invention as defined above may be applied, meaning that an initial value (CWOmin) is assigned to CWO. In another variant, a local RU collision factor CF, built locally for instance from past history, may be used. This ability for the node to switch between a global (AP-initiated) approach and a node-local approach is further explained below, in particular with reference to Figure 17.
Next to step 1002, step 1003 checks whether a triggering event for updating the RU backoff parameters is detected, before a new OFDMA access is performed.
Some triggering events may come from the AP.
For instance, similarly to the EDCF parameters (AIFS[AC], CWmin[AC] and CWmax[AC]), the AP may announce the number NbRU of random RUs through beacon frames, or alternatively (or in combination) through the trigger frames. Indeed, the AP can dynamically adapt the number NbRU of RUs depending on network conditions. An example of such adaptation is given above in connection with the building of the TBD parameter at the AP side. Thus a triggering event for node 600 may be receiving a new trigger/beacon frame defining a number of random resource units that is different from the currently known number of random resource units.
Other triggering events may be produced locally by node 600.
For instance, as mentioned above, data newly stored in a previously empty AC traffic queue 310 activate the corresponding queue backoff engine 311. A corresponding triggering event may thus be detecting that an empty traffic queue from the plurality of traffic queues has now received data to transmit, in which case the CW parameters of this newly activated queue backoff engine may be taken into account to compute the CWO range anew.
More generally, a triggering event may consist in detecting a change in at least one queue backoff parameter used to determine the one or more RU backoff parameters, i.e. when one of the reference queue backoff parameters has changed. Note that it is not the case for beacon frames indicating the same parameters.
In specific embodiments, illustrated for instance in the process of Figures 12 and 13, a triggering event may be the end of OFDMA transmission and thus the reception of a positive or negative acknowledgment of a previous transmission of data in an RU.
In other specific embodiments, illustrated for instance in the process of Figure 14, a triggering event may be the reception of a new trigger frame.
Upon receiving any triggering event, the process of Figure 10 loops back to step 1000 to obtain NbRU and queue backoff parameters again if appropriate and then to compute new RU backoff parameters.
This ends the process of Figure 10.
Figure 11 illustrates, using a flowchart, steps of accessing the medium based on the conventional EDCA medium access scheme.
Steps 1100 to 1120 describe a conventional waiting introduced in the EDCA mechanism to reduce the collision on a shared wireless medium. In step 1100, node 600 senses the medium waiting for it to become available (i.e. detected energy is below a given threshold on the primary channel).
When the medium becomes free, step 1110 is executed in which node 600 decrements all the active (non-zero) AC[] queue backoff counters 311 by one.
Next, at step 1120, node 600 determines if at least one of the AC backoff counters reached zero.
If no AC queue backoff reaches zero, node 600 waits for a given time corresponding to a backoff slot (typically 9 µs), and then loops back to step 1100 in order to sense the medium again.
If at least one AC queue backoff reaches zero, step 1130 is executed in which node 600 (more precisely virtual collision handler 312) selects the active AC queue having a zero queue backoff counter and having the highest priority.
At step 1140, the data from this selected AC are selected for transmission.
Next, at step 1150, node 600 initiates an EDCA transmission, in case for instance an RTS/CTS exchange has been successfully performed to have a TxOP granted. Node 600 thus sends the selected data on the medium, during the granted TxOP.
Next, at step 1160, node 600 determines if the transmission has ended, in which case step 1170 is executed.
At step 1170, node 600 updates CW of the selected traffic queue, based on the status of transmission (positive or negative ack, or no ack received). Typically, node 600 doubles the value of CW if the transmission failed until CW reaches a maximum value defined by the standard 802.11 and which depends on the AC type of the data. If the transmission is successful, CW is set to a minimum value also defined by the 802.11 standard and which is also dependent on the AC type of the data.
Then, if the selected traffic queue is not empty after the EDCA data transmission, a new associated queue backoff counter is randomly selected from [0,CW], like in step 902.
This ends the process of Figure 11. Note that this process can be applied in a similar manner (but with only one AC queue) in case a single general transmission buffer is considered.
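By way of illustration only, the post-transmission handling of step 1170 and the following step may be sketched as follows (Python; the names are illustrative, and the simple doubling bounded by CWmax only approximates the 802.11 update rules):

```python
import random

def after_edca_transmission(cw, cw_min, cw_max, success, queue_empty):
    """Update CW with the transmission status and draw a new queue backoff if data remain."""
    cw = cw_min if success else min(cw * 2, cw_max)      # double CW on failure, reset on success
    backoff = None if queue_empty else random.randint(0, cw)
    return cw, backoff
```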
Figure 12 illustrates, using a flowchart, exemplary steps for updating the RU backoff parameters and value upon receiving a positive or negative acknowledgment of a multiuser OFDMA transmission.
It is recalled that in a simple implementation, the RU backoff value OBO is used to determine if node 600 is eligible to contend for access to an OFDMA resource unit: OBO should be not greater than the number of available random RUs in order to allow for an UL OFDMA transmission for node 600. Scheduled RUs are accessible to node 600 if indicated as such by the AP, independently of RU backoff value OBO.
Thus step 1200 happens during such an UL OFDMA transmission in a random RU (when decremented OBO reaches zero).
Step 1201 is executed when the UL OFDMA transmission finishes on an accessed random RU, upon obtaining the status of the transmission, either by receiving a positive or negative acknowledgment from the AP, or by inferring loss of data (in case no ack is received).
At step 1201, the contention window size CWO is updated depending on a success or failure in transmitting the data. This step and the following step 1202 are performed only if needed. In particular, if the ending transmission has sent all the data intended for the AP (i.e. no more of such data remain in any of the traffic queues), there is no need to keep the RU backoff engine active. It is thus deactivated, by clearing the OBO value.
If the update is needed, when an OFDMA transmission fails (e.g. the transmitted data frame has not been acknowledged), a new CWO value may be computed for instance.
In particular, CWO may be doubled, for instance CWO = 2 x (CWO + 1) - 1 or CWO = 2 * CWO. This illustrates some of the second embodiments of the invention.
As CWO may initially be assigned the value CWO0 = CWOmin and may increase up to CWOmax, this approach results in CWOn = CWO0 * 2^n, where n is the number of successive failures when trying to access the network and send data. For instance CWO0 = CWOmin as defined above. More precisely, CWOn = min(CWO0 * 2^n; CWOmax).
To illustrate this, three successive attempts may be considered as follows:
For the first access attempt: CWO = CWO0
For the second access attempt: CWO = CWO1 = CWO0 * 2^1
For the third access attempt: CWO = CWO2 = CWO0 * 2^2 = CWO1 * 2.
In other embodiments of the second embodiments of the invention,
CWOn = min(CWOmin(t) * 2^n; CWOmax).
Again, n is the number of successive failing attempts. CWOmin(t) is a value that evolves over time. Indeed, the range [CWOmin, CWOmax] from which CWO is selected may evolve over time.
For instance, as the number of random RUs in the trigger frames usually evolves over time, it may be worth updating CWOmin based on this evolving number of random RUs. This is why CWOmin evolves over time, as noted by CWOmin(t).
Thus the above embodiments take into account the changes in TF characteristics (number of random RUs) as well as the collision history (failing attempts). This may be important in a network that substantially evolves over time. Indeed, the probability of having two successive trigger frames with the same characteristics (same number of random RUs, same type of data required, same width of the RUs, etc.) may be low. So an approach able to dynamically adapt the CWO value to the current TF characteristics provides benefits.
To illustrate this dynamic approach, let us consider three successive attempts, as follows (a short code sketch follows this list):
For the first access attempt: CWO = CWOmin(t=0) = Nbr_rRU (number of random RUs) of TF0, where TF0 is the trigger frame corresponding to the first transmission attempt by the node;
For the second access attempt (in case the first attempt fails): CWO = CWO1 = CWOmin(t=1) x 2^1, where CWOmin(t=1) = Nbr_rRU of TF1, TF1 being the trigger frame corresponding to the second transmission attempt by the node. Nbr_rRU of TF1 can be different from Nbr_rRU of TF0;
For the third access attempt: CWO = CWO2 = CWOmin(t=2) x 2^2, where CWOmin(t=2) = Nbr_rRU of TF2, TF2 being the trigger frame corresponding to the third transmission attempt of the current data by the station.
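To make the two update rules concrete, the following is a minimal Python sketch of the static and dynamic variants; the values CWO0, CWOmax and the per-trigger-frame numbers of random RUs are illustrative inputs, not values fixed by this description.

```python
def next_cwo_static(cwo0, cwo_max, n_failures):
    """Static variant: CWOn = min(CWO0 * 2^n, CWOmax)."""
    return min(cwo0 * (2 ** n_failures), cwo_max)

def next_cwo_dynamic(nbr_rru_current_tf, cwo_max, n_failures):
    """Dynamic variant: CWOmin(t) tracks the number of random RUs
    announced in the trigger frame of the current attempt."""
    return min(nbr_rru_current_tf * (2 ** n_failures), cwo_max)

# Example: three attempts, the trigger frames announcing 8, 4 and 6 random RUs.
for n, nbr_rru in enumerate([8, 4, 6]):
    print(next_cwo_dynamic(nbr_rru, cwo_max=1023, n_failures=n))  # 8, 8, 24
```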
In variants illustrating some of the first embodiments of the invention, a new CWO value may be obtained using the TBD parameter as received from the AP as described above, e.g. CWO = 2^TBD * CWOmin, or CWO = TBD * CWOmin, or CWO = TBD, or CWO is defined by the table entry having TBD as entry index, or CWO is randomly selected from [CWOmin, CWOmax] where CWOmin or CWOmax equals TBD, depending on which approach the AP adopts.
This reduces the collision probability in case there are too many nodes attempting to access the RUs.
In case the OFDMA transmission succeeds, CWO may be reset to a (predetermined) low boundary, such as CWOmin. This description of step 1201 reflects a local point of view at node 600.
Next to step 1201, step 1202 consists in computing a new RU backoff value OBO based on the updated contention window size CWO. The same approaches as described above with reference to step 903 can be used: for instance OBO = random[0, CWO], or OBO = random[0, CWO] + AIFS[AC], etc.
This ends the process of Figure 12.
Figure 13 illustrates, using a flowchart, first exemplary embodiments of accessing the medium based on the OFDMA medium access scheme, and of locally updating the RU backoff parameters, such as the contention window size CWO, when a new trigger frame is received at transmitting node 600. This Figure illustrates some of the second embodiments according to the invention.
It means that node 600 has data to transmit, and thus has at least one active EDCA queue backoff engine 311. Furthermore, node 600 has a non-zero RU backoff value OBO, meaning that it has data to send to the AP upon receiving the trigger frame. The same process can be applied in case the node has a single general transmission buffer, in which case a single AC queue is considered.
At step 1300, node 600 checks whether or not it has received an 802.11a frame in a non-HT format. Preferably, the type of the frame indicates a trigger frame (TF), and the Receiver Address (RA) of the TF is a broadcast or group address (i.e. not a unicast address corresponding specifically to node 600’s MAC address).
Upon receiving the trigger frame, the channel width occupied by the TF control frame is signaled in the SERVICE field of the 802.11 data frame (the DATA field is composed of SERVICE, PSDU, tail, and pad parts). An indication that the control frame is a Trigger Frame may be provided in frame control field 301, which indicates the type of the frame. In addition, frame control field 301 may include a sub-type field for identifying the type of the trigger frame, such as a TF-R.
As noted above, even without such sub-type field, the random RUs can be determined using for instance the AID associated with each RU defined in the TF (AID=0 may mean random RU). So the number of random Resource Units supporting the random OFDMA contention scheme is known at this stage. Obtaining the number of random RUs may be advantageously performed if the number of random RUs varies from one TF to the other.
Next, at step 1301, node 600 decrements the RU backoff value OBO by the number NbRU of random resource units defined in the received trigger frame: OBO = OBO - NbRU. This is because node 600 is determined to be a node eligible to transmit data in an OFDMA random RU if its pending RU backoff value OBO is not greater than the number of OFDMA random RUs.
Step 1301 thus updates OBO value upon receiving a new trigger frame.
Next to step 1301, step 1302 consists for node 600 in determining whether it is an eligible node for transmission. This means that either a scheduled RU of the TF is assigned to node 600, or its RU backoff value OBO is less than or equal to zero.
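A minimal Python sketch of this per-trigger-frame decrement and eligibility test (steps 1301-1302) is given below; the function and parameter names are illustrative, not part of the specification.

```python
def on_trigger_frame(obo, nb_random_rus, scheduled_ru=None):
    """Sketch of steps 1301-1302: decrement the RU backoff value by the
    number of random RUs announced in the trigger frame, then decide
    whether the node is eligible to transmit."""
    obo -= nb_random_rus                                 # step 1301
    eligible = scheduled_ru is not None or obo <= 0      # step 1302
    return obo, eligible

obo, eligible = on_trigger_frame(obo=5, nb_random_rus=8)
print(obo, eligible)  # -3 True: the node may select a random RU (step 1303)
```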
As an alternative, if node 600 supports concurrent OFDMA transmission capabilities, both cases (scheduled RU and OBO less than or equal to zero) are handled, and steps 1303 to 1310 are conducted in parallel for the two accesses.
In case of no eligibility, the process ends.
In case of eligibility, node 600 selects one RU for sending the data. It is either the assigned scheduled RU, or a random RU selected from the NbRU random RUs of the TF (either randomly or using the RU backoff value OBO before step 1301 as an index to select the random RU having the same index). This is step 1303.
Once the RU for OFDMA transmission has been determined, step 1304 selects data to transmit to the AP, usually from one or more of the active AC traffic queues 310. OFDMA muxer 801 is in charge of selecting such data to be transmitted, from among at least one AC traffic queue 310.
Note that during an MU OFDMA TXOP (i.e. transmission in an RU), node 600 is allowed to transmit multiple data frames (MPDUs) from the same AC traffic queue, with the condition that the whole OFDMA transmission lasts the duration originally specified by the received trigger frame (i.e. the TxOP length).
Of course, if not enough data is stored in the selected AC traffic queue, one or more other active AC traffic queues may be considered.
Generally speaking, the data frames from the active ACs having the highest priority are selected. “Highest priority” may mean having the lowest queue backoff value, or having the highest priority according to the EDCA traffic class prioritization (see Figure 3b).
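As an illustration of step 1304, the following Python sketch fills the granted resource unit from the highest-priority active AC queues without exceeding the TxOP length declared in the trigger frame; the fields priority, data and size are hypothetical, and other prioritization rules (e.g. lowest queue backoff) could be used instead.

```python
def select_data_for_ru(ac_queues, txop_budget):
    """Sketch of step 1304: pick MPDUs for the OFDMA transmission,
    highest-priority AC first, within the TxOP length of the trigger
    frame. Each queue is assumed to expose `priority` and a `data`
    list of MPDUs carrying a `size` attribute (illustrative model)."""
    selected, used = [], 0
    for ac in sorted(ac_queues, key=lambda q: q.priority, reverse=True):
        while ac.data and used + ac.data[0].size <= txop_budget:
            mpdu = ac.data.pop(0)
            selected.append(mpdu)
            used += mpdu.size
    return selected
```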
Next to step 1304, step 1305 consists for node 600 in initiating and performing a MU UL OFDMA transmission of the selected data (at step 1304) in the selected RU (at step 1303).
As commonly known, the destination node (i.e. the AP) will send an acknowledgment related to each received MPDU from multiple users inside the OFDMA TXOP (see step 1502).
Preferably, the ACK frame is transmitted in a non-HT duplicate format in each 20 MHz channel covered by the initial TF’s reservation. This acknowledgment can be used by the multiple source nodes 600 to determine whether the destination (AP) has correctly received the OFDMA MPDUs. This is because source nodes 600 are not able to detect collisions inside their selected RUs.
Thus at step 1306, node 600 obtains a status of transmission, for instance receives an acknowledgment frame.
In case a scheduled RU of the TF is assigned to node 600, as the OFDMA access is not granted through OBO, then the algorithm goes directly to step 1309 (arrow not shown in the figure).
Otherwise, the algorithm continues either at step 1307 or at step 1308. In case of positive acknowledgment, the MU UL OFDMA transmission is considered as a success and step 1307 is executed. Otherwise, step 1308 is executed.
In case of successful OFDMA transmission on the selected random RU, CWO is set to a (predetermined) low boundary value, for instance CWOmin, at step 1307.
In case of failing OFDMA transmission, CWO is doubled, for instance CWO = 2 x (CWO + 1) -1, at step 1308. Note that CWO cannot be above CWOmax.
As mentioned above, other variants exist, for instance: CWO = CWO * 2; CWO = CWOmin * 2^n; CWO(t) = CWOmin(t) * 2^n; etc.
Next to step 1307 or 1308, step 1309 consists for node 600 in deactivating the AC queue backoff engines that have no more data to transmit. This is because, due to the UL MU transmission, some AC queues may have been emptied of the transmitted data. In such a case, the corresponding queue backoff value is cleared (the value is no longer taken into account to compute the RU backoff values, nor for EDCA access to the medium).
As long as the AC queue engines selected at step 1304 still store data to be transmitted in their respective traffic queues, their respective (non-zero) queue backoff values are kept unchanged. Note that in any case, as only an OFDMA access has been performed (and not an access over the EDCA channel), the AC contention window values CW of the queue backoff engine(s) 311 (EDCA CW) are not modified.
Next to step 1309, step 1310 consists for node 600 in determining whether or not a new RU backoff value OBO has to be computed. This is because the OBO value has expired (test 1302) and data intended for the AP have been consumed.
Thus, it is first determined whether or not data intended to the AP remain in any of the AC traffic queues. In case of positive determination, a new OBO value is computed. Otherwise, the RU backoff engine is deactivated.
The computation of OBO value may be according to any approach described above with reference to step 903, for instance OBO is determined as a random integer selected from contention window range [0, CWO] uniformly distributed.
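The post-transmission handling of steps 1306 to 1310 can be summarized by the following Python sketch; it assumes the plain doubling rule of steps 1307-1308 and a uniformly drawn OBO, which are only two of the variants described above.

```python
import random

def after_ofdma_transmission(success, cwo, cwo_min, cwo_max, data_remains):
    """Sketch of steps 1306-1310: update CWO from the transmission status,
    then either draw a new OBO or deactivate the RU backoff engine."""
    if success:                                   # step 1307
        cwo = cwo_min
    else:                                         # step 1308 (capped at CWOmax)
        cwo = min(2 * (cwo + 1) - 1, cwo_max)
    if not data_remains:                          # step 1310: nothing left for the AP
        return cwo, None                          # None models a deactivated engine
    return cwo, random.randint(0, cwo)            # new OBO drawn from [0, CWO]
```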
This ends the process of Figure 13.
Figure 14 illustrates, using a flowchart, second exemplary embodiments of accessing the medium based on the OFDMA medium access scheme, and of updating the RU backoff parameters, either locally or based on a received TBD parameter, when a new trigger frame is received at transmitting node 600.
This Figure illustrates some of the first embodiments according to the invention which are based on the TBD parameter transmitted by the AP.
It also illustrates some of the second embodiments, when no TBD parameter is received from the AP. Thus it is also a first illustration of a decision by the node to switch between these two approaches: the AP-initiated mode to compute CWO (using transmitted TBD) and the local mode (using only values local to the node).
Compared to the example of Figure 13, the embodiment of Figure 14 involves the use of an adjustment/correcting parameter issued by the AP, namely the above-mentioned TBD parameter, to compute CWO. The TBD parameter, reflecting the AP’s view of collisions in the overall 802.11ax network, may evolve over time and be provided in the TFs.
Until a first TBD parameter is received by node 600, the latter manages a corresponding local parameter, namely the local RU collision factor CF. Local factor CF allows the node to use local statistics, instead of the AP-provided parameter, when applying steps 1404 to 1405 explained further below.
In this second exemplary embodiment, the computation of RU backoff parameters (including CWO) is performed upon reception of the trigger frame, and not when new data arrive from an upper layer application 701 (as in the case of above step 903).
Thus steps 1400 to 1408 are new compared to Figure 13. Steps 13xx are similar to the corresponding steps of Figure 13.
After step 1300 of receiving a new TF, step 1400 aims at determining whether or not the RU backoff parameters should be initialized upon receiving the trigger frame. More precisely, step 1400 consists in determining whether the RU backoff engine is inactive (e.g. OBO value is less or equal to 0) and data intended to the AP are now stored in traffic queues 310 (i.e. it is the first TF received after some first data for the AP have been input in the traffic queues).
In step 1400, the node thus determines whether or not the received trigger frame includes a TBD parameter to drive the node in defining its own contention window size.
In case the RU backoff engine requires to be activated, steps 1401 to 1406 are performed to initialize the RU backoff engine, after which step 1301 is executed. In case the RU backoff engine is already active, step 1301 is directly executed.
The initialization sequence (steps 1401-1406) consists first for node 600 in checking whether or not a TBD parameter has been received from the AP (step 1401). By sending or not the TBD parameter (i.e. setting a TBD value or an UNDEFINED value in the appropriate field of the trigger frame), the AP thus controls how the nodes compute their own CWO value.
If such a TBD parameter has been received, steps 1404-1406 are performed. Otherwise, the un-initialized TBD parameter is initialized with the local CF value (step 1403): here node 600 acts alone in adapting the CWO value, that is to say only with regard to the success of its own past OFDMA transmissions.
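A minimal Python sketch of this switch between the AP-initiated mode and the local mode (steps 1401-1403) is shown below; None stands in for the UNDEFINED field value, and the names are illustrative.

```python
def resolve_tbd(trigger_frame_tbd, local_cf):
    """Sketch of steps 1401-1403: use the AP-provided TBD parameter when
    the trigger frame carries one, otherwise fall back to the node-local
    collision factor CF."""
    return trigger_frame_tbd if trigger_frame_tbd is not None else local_cf

print(resolve_tbd(trigger_frame_tbd=3, local_cf=1))     # AP-initiated mode: 3
print(resolve_tbd(trigger_frame_tbd=None, local_cf=1))  # local mode: 1
```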
The evolution of factor CF is described below with reference to steps 1407-1408.
Next to step 1403, step 1404 is performed during which new RU backoff parameters are determined. For instance a new CWOmin value is determined, using any approach as described above with reference to step 1002.
For instance, CWOmin may be set with regard to the lowest CWmin of the active AC queues: CWOmin = Min({CWmin}activeAC).
In a variant, CWOmin may be set with regard to the lowest CWmin of the active AC queues and the number of random RUs: CWOmin = Min({CWmin}activeAC) x NbRU.
In another variant, CWOmin may be set to the number of random RUs as declared in the trigger frame received.
In another variant, CWOmin may be set to the TBD parameter (which is thus as received from the AP or as set through CF).
Next, step 1405 consists for node 600 in computing CWO. This may be done from CWOmin, and possibly from the TBD parameter.
An example of computation is: CWO = 2^TBD * CWOmin, or CWO = TBD * CWOmin, or CWO = TBD, or CWO is defined by the table entry having TBD as entry index, or CWO is randomly selected from [CWOmin, CWOmax] where CWOmin or CWOmax equals TBD. Of course, the formula used may correspond to the mere nature of the TBD value sent by the access point, in order for CWO to be preferably equal to CWO = 2^CRF * CWOmin, wherein CRF = a * (Nb_collided_RU / Nb_RU_total) as defined above.
CWO value may be limited to an upper bound, for instance CWOmax defined above (step 1002).
As a result, if the TBD parameter is 0, then the minimum EDCA CWmin value may drive the medium access in case CWO = 2^TBD * CWOmin: CWmin = 3 for VOICE, so approximately a maximum of two trigger frames to back off, the third one being the one used for access in the worst case.
Thus in step 1405, the node computes a new contention window size CWO based on the received TBD parameter, in case it is positively determined that the received trigger frame includes such a TBD parameter, to contend for access to the random resource units splitting the transmission opportunity (i.e. to compute a new OBO value - see step 1406 below). Otherwise, the node uses a local contention window size (e.g. derived from local factor CF) as the new contention window size, to contend for access to the random resource units splitting the transmission opportunity.
Next, at step 1406, node 600 computes the RU backoff value OBO from CWO. See for instance step 903 above: e.g. OBO = random[0, CWO].
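The initialization of the RU backoff engine (steps 1404-1406) could for instance be sketched as follows in Python, assuming the exponential formula CWO = 2^TBD * CWOmin with CWOmin set to the number of random RUs of the received trigger frame; the other formulas listed above could be substituted.

```python
import random

def init_ru_backoff(tbd, nb_random_rus, cwo_max):
    """Sketch of steps 1404-1406 under the assumption CWO = 2^TBD * CWOmin,
    with CWOmin taken as the number of random RUs announced in the TF."""
    cwo_min = nb_random_rus                      # step 1404 (one of the variants)
    cwo = min((2 ** tbd) * cwo_min, cwo_max)     # step 1405, bounded by CWOmax
    return random.randint(0, cwo)                # step 1406: new OBO value
```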
Back to the positive output of test 1400, the algorithm of Figure 13 is reused, except for steps 1307 and 1308. They are replaced by steps 1407 and 1408 during which the local RU collision factor CF is updated depending on a success or failure in transmitting the data (instead of directly updating CWO).
Of course, using the doubling approach of steps 1307 and 1308 for setting a new CWO is also possible, in which case step 1403 may set TBD to a value corresponding to the new CWO. For instance, TBD = log2(CWO / CWOmin) for the formula CWO = 2^TBD * CWOmin.
Factor CF may evolve within the range [0, CFmax] if the formula CWO = 2^TBD * CWOmin is used, wherein CFmax is a maximum coefficient: for instance 5. As an alternative, CFmax can be drawn from the active EDCA AC queues: CFmax = [(CWmax)AC + 1] / [(CWmin)AC' + 1], wherein “AC” and “AC'” designate the active queue backoff engines having the highest priority (e.g. the highest EDCA traffic class prioritization of Figure 3b), or having the highest CWmax value and the lowest CWmin value respectively (that is to say CWmax = 1023 and CWmin = 15, for the Background or Best effort queues).
In a variant, factor CF may evolve within the range [1, CFmax] if formula CWO= TBD * CWOmin is used, with CFmax=32 for instance.
Thus at steps 1407-1408, factor CF is updated upon each success/failure of OFDMA transmission.
In case of positive acknowledgment, the MU UL OFDMA transmission is considered as a success and step 1407 is executed, during which factor CF is set to a (predetermined) low CF value, for instance 1 in case the formula CWO = TBD * CWOmin is used, or 0 in case the formula CWO = 2^TBD * CWOmin is used.
Otherwise, step 1408 (failing OFDMA transmission) is executed, in which factor CF is doubled in case the formula CWO = TBD * CWOmin is used, or is increased by one in case the formula CWO = 2^TBD * CWOmin is used. In both cases, this corresponds to doubling the corresponding CWO in case of transmission failure. Note that factor CF is kept below CFmax.
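For illustration only, the CF update of steps 1407-1408 might look like the following Python sketch; the boolean exponential_formula, which distinguishes the two formulas mentioned above, is a hypothetical parameter.

```python
def update_cf(success, cf, cf_max, exponential_formula=True):
    """Sketch of steps 1407-1408. With CWO = 2^TBD * CWOmin, CF counts
    failures (reset to 0, incremented on failure); with CWO = TBD * CWOmin,
    CF is a multiplier (reset to 1, doubled on failure). Both choices
    amount to doubling CWO on each transmission failure."""
    if exponential_formula:
        return 0 if success else min(cf + 1, cf_max)
    return 1 if success else min(cf * 2, cf_max)
```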
Note that, further to step 1309 of properly handling the EDCA queue backoff values, step 1310 is suppressed, as the OBO computation is now handled in the initialization phase of steps 1401-1406.
This ends the process of Figure 14.
The various alternative embodiments presented above with respect to Figures 9 to 16 are compatible with one another, and thus may be combined to take advantage of their respective advantages. For instance, the triggering events (new trigger frame or new data in the AC/transmission buffer) and/or the updating of local CWO (direct doubling or through factor CF) of Figures 13 and 14 can be substituted for one another.
Figure 17 illustrates, using a flowchart, third exemplary embodiments of accessing the medium based on the OFDMA medium access scheme and of updating the RU backoff parameters (e.g. CWO), either locally or based on a received TBD parameter, when a new trigger frame is received. Thus it includes computing CWO by the node either through a local approach or through an AP-initiated approach (with the transmitted TBD parameter); it also includes switching between the two approaches.
In the third exemplary embodiments, a single transmission buffer is considered. This implies that steps 1304 and 1309, which are specific to the management of a plurality of AC queues, are avoided.
Step 1700 is new compared to Figures 13 and 14. Steps 13xx and 14xx are similar to corresponding steps 13xx and 14xx described above.
Upon receiving a new trigger frame (step 1300), the node determines whether or not the OBO value is less than or equal to 0 (test 1400), meaning that a new OBO value should be computed.
In case a current positive OBO value is running, meaning that the node currently contends for access to the random RUs defined by the received trigger frame, steps 1301 to 1308 are performed, similar to the process of Figure 13 in case a single transmission buffer is used.
During this process, the local contention window size, namely local CWO, is updated depending on a success or failure in transmitting the data.
In particular, at step 1307, the local contention window size is set to CWOmin, which preferably represents the number of random resource units defined in the received trigger frame. At step 1308, the local contention window size CWO is doubled in case of transmission failure. For instance, when the local contention window size is determined as a function of the number CWOmin of random resource units defined in a received trigger frame, CWO = CWOmin * 2^n or CWO = CWOmin(t) * 2^n, where n is the number of successive transmission failures by the node and CWOmin(t) is the number of random resource units defined in the trigger frame received at time t.
In case of zero or negative OBO value (test 1400), test 1401 determines whether or not the received trigger frame includes a TBD parameter to drive the node in defining its own contention window size. This test makes it possible to switch between a local approach and an AP-initiated approach to obtain CWO for computing a new OBO value (step 1406).
If the trigger frame does not include a TBD parameter (i.e. TBD field is set to UNDEFINED in the trigger frame), the current local value of CWO (as obtained through the last iteration of step 1307 or 1308) is used to compute a new OBO value (step 1406) for contending for access to the random RUs defined by the received trigger frame.
If the trigger frame sets a TBD value, step 1700 makes it possible to handle the cases where the TBD value is restricted to a specific group of nodes or a specific type of data or any other configuration parameter. Such information is included in the received trigger frame, for instance setting a BSSID as mentioned above for corresponding step 1506 at the AP side.
In such a case for instance, the node checks whether a TBD parameter included in the received trigger frame is assigned to a group of nodes to which the node belongs. In particular, the checking step may include reading a BSSID, Basic Service Set Identification, in the received trigger frame. This is step 1700.
Of course, other information can be read to determine whether or not the set TBD parameter should be applied by the node.
If the TBD parameter should not be applied, the new OBO value is computed (step 1406) using the local CWO value.
Otherwise (the TBD parameter should be applied), steps 1404 and 1405 are performed to compute CWOmin and CWO, from the received TBD parameter. These steps are described above.
CWOmin may be equal to the number of random RUs in the received trigger frame.
For instance, CWO = 2^TBD * CWOmin, or CWO = TBD * CWOmin, or CWO = TBD, or CWO is defined by the table entry having TBD as entry index, or CWO is randomly selected from [CWOmin, CWOmax] where CWOmin or CWOmax equals TBD. Of course, the formula used may correspond to the mere nature of the TBD value sent by the access point, in order for CWO to be preferably equal to CWO = 2^CRF * CWOmin, wherein CRF = a * (Nb_collided_RU / Nb_RU_total) as defined above.
Next to step 1405 or 1700 or 1401, the new OBO value can be computed based on CWO newly obtained or on local CWO, when appropriate.
Next, the process loops back to step 1301 to actually contend for access to the random RUs defined by the received trigger frame, given the new OBO value.
It is apparent from the above that in the embodiments of the invention, the management of the access to random RUs through RU backoff engines is fully distributed over the nodes. Furthermore, it remains compliant with the 802.11 standard, in particular because the EDCA prioritization scheme is kept.
Note that the probability of collisions occurring over RUs, or even the low usage of RUs, is monitored by the AP in some embodiments and fed back to the nodes through the TBD parameter. This makes it possible to take this overall network aspect into account for each individual medium access at the nodes, and thus to advantageously adapt the medium access so as to improve OFDMA RU usage.
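As a hedged illustration of how such a feedback value might be derived on the AP side, the sketch below follows the CRF expression recalled above, CRF = a * (Nb_collided_RU / Nb_RU_total); the scaling factor a and the cap cf_max are deployment choices, not values fixed by this description.

```python
def compute_tbd(nb_collided_rus, nb_rus_total, a=5, cf_max=5):
    """Sketch (AP side) of deriving a TBD value from the observed fraction
    of collided random RUs, following CRF = a * (Nb_collided_RU / Nb_RU_total).
    The result is rounded and capped so that nodes can use it directly,
    e.g. in CWO = 2^TBD * CWOmin."""
    if nb_rus_total == 0:
        return 0
    crf = a * (nb_collided_rus / nb_rus_total)
    return min(round(crf), cf_max)
```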
Although the present invention has been described hereinabove with reference to specific embodiments, the present invention is not limited to the specific embodiments, and modifications will be apparent to a skilled person in the art which lie within the scope of the present invention.
Many further modifications and variations will suggest themselves to those versed in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention, that being determined solely by the appended claims. In particular the different features from different embodiments may be interchanged, where appropriate.
In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used.

Claims (10)

1. A communication device comprising:
a reception unit configured to receive, from an access point, a trigger frame including a predetermined communication parameter and including information related to a random resource unit for which a device that transmits data using the random resource unit is not designated by the access point;
a determination unit configured to, in a case where the reception unit has received the trigger frame, determine whether to transmit, based on the predetermined communication parameter and a backoff value randomly decided by the communication device and using the random resource unit, data to the access point; and a transmission unit configured to transmit the data to the access point using the random resource unit in a case where the determination unit has determined that the transmission of the data to the access point is to be performed.
2. The communication device according to claim 1, wherein the trigger frame includes information related to a plurality of the random resource units.
3. The communication device according to claim 1 or 2, wherein the random resource unit is a resource unit to which an access right is obtained by a device belonging to a wireless network formed by the access point in accordance with a predetermined contention scheme.
4. The communication device according to any one of claims 1 to 3, wherein the backoff value is determined based on the predetermined communication parameter.
5. The communication device according to any one of claims 1 to 4, further comprising a plurality of queues each storing data, the plurality of queues being different from one another in priority of data stored therein, wherein the transmission unit transmits data stored in any one of the plurality of queues to the access point.
6. The communication device according to any one of claims 1 to 5, further comprising a first decision unit configured to re-decide the backoff value in a case where the transmission unit has transmitted, using the random resource unit, the data to the access point.
7. The communication device according to claim 6, further comprising a second decision unit configured to re-decide the backoff value in a case where the determination unit has determined that the transmission of the data to the access point is to be performed and the transmission unit has failed in transmitting the data using the random resource unit, wherein the second decision unit re-decides the backoff value using a method different from a method with which the first decision unit re-decides the backoff value.
8. The communication device according to any one of claims 1 to 7, wherein the communication device performs communication with the access point in an IEEE802.11ax-compliant wireless network.
9. A communication method comprising the following steps performed by a communication device:
receiving, from an access point, a trigger frame including a predetermined communication parameter and including information related to a random resource unit for which a device that transmits data using the random resource unit is not designated by the access point;
upon receiving the trigger frame, determining whether to transmit, based on the predetermined communication parameter and a backoff value randomly decided by the communication device and using the random resource unit, data to the access point; and transmitting the data to the access point using the random resource unit in a case where it is determined that the transmission of the data to the access point is to be performed.
10. A non-transitory computer-readable medium storing a program which, when executed by a microprocessor or computer system in a communication device, causes the device to perform the method of Claim 9.
Amendments to the Claims have been filed as follows:
CLAIMS
1. A communication device comprising:
a reception unit configured to receive, from an access point, a trigger frame including a predetermined communication parameter and including information related to a random resource unit, a random resource unit being a resource unit that a device accesses using a contention scheme;
a determination unit configured to, in a case where the reception unit has received the trigger frame, determine whether to transmit, based on the predetermined communication parameter and a backoff value randomly decided by the communication device and using the random resource unit, data to the access point; and a transmission unit configured to transmit the data to the access point using the random resource unit in a case where the determination unit has determined that the transmission of the data to the access point is to be performed.
2. The communication device according to claim 1, wherein the trigger frame includes information related to a plurality of the random resource units.
3. The communication device according to claim 1 or 2, wherein the random resource unit is a resource unit to which an access right is obtained by a device belonging to a wireless network formed by the access point in accordance with a predetermined contention scheme.
4. The communication device according to any one of claims 1 to 3, wherein the backoff value is determined based on the predetermined communication parameter.
5. The communication device according to any one of claims 1 to 4, further comprising a plurality of queues each storing data, the plurality of queues being different from one another in priority of data stored therein, wherein the transmission unit transmits data stored in any one of the plurality of queues to the access point.
6. The communication device according to any one of claims 1 to 5, further comprising a first decision unit configured to re-decide the backoff value in a case where the transmission unit has transmitted, using the random resource unit, the data to the access point.
7. The communication device according to claim 6, further comprising a second decision unit configured to re-decide the backoff value in a case where the determination unit has determined that the transmission of the data to the access point is to be performed and the transmission unit has failed in transmitting the data using the random resource unit, wherein the second decision unit re-decides the backoff value using a method different from a method with which the first decision unit re-decides the backoff value.
8. The communication device according to any one of claims 1 to 7, wherein the communication device performs communication with the access point in an IEEE802.11ax-compliant wireless network.
9. A communication method comprising the following steps performed by a communication device:
receiving, from an access point, a trigger frame including a predetermined communication parameter and including information related to a random resource unit, a random resource unit being a resource unit that a device accesses using a contention scheme;
upon receiving the trigger frame, determining whether to transmit, based on the predetermined communication parameter and a backoff value randomly decided by the communication device and using the random resource unit, data to the access point; and transmitting the data to the access point using the random resource unit in a case where it is determined that the transmission of the data to the access point is to be performed.
GB1802654.2A 2016-02-29 2016-02-29 Improved contention mechanism for access to random resource units in an 802.11 Channel Active GB2561677B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1802654.2A GB2561677B (en) 2016-02-29 2016-02-29 Improved contention mechanism for access to random resource units in an 802.11 Channel

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1802654.2A GB2561677B (en) 2016-02-29 2016-02-29 Improved contention mechanism for access to random resource units in an 802.11 Channel
GB1603515.6A GB2540450B (en) 2015-07-08 2016-02-29 Improved contention mechanism for access to random resource units in an 802.11 channel

Publications (3)

Publication Number Publication Date
GB201802654D0 GB201802654D0 (en) 2018-04-04
GB2561677A true GB2561677A (en) 2018-10-24
GB2561677B GB2561677B (en) 2020-03-18

Family

ID=61783676

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1802654.2A Active GB2561677B (en) 2016-02-29 2016-02-29 Improved contention mechanism for access to random resource units in an 802.11 Channel

Country Status (1)

Country Link
GB (1) GB2561677B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150139209A1 (en) * 2012-06-18 2015-05-21 Lg Electronics Inc. Method and apparatus for initial access distribution over wireless lan

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150139209A1 (en) * 2012-06-18 2015-05-21 Lg Electronics Inc. Method and apparatus for initial access distribution over wireless lan

Also Published As

Publication number Publication date
GB2561677B (en) 2020-03-18
GB201802654D0 (en) 2018-04-04

Similar Documents

Publication Publication Date Title
US11039476B2 (en) Contention mechanism for access to random resource units in an 802.11 channel
EP3902365B1 (en) Enhanced management of acs in multi-user edca transmission mode in wireless networks
US10492231B2 (en) Backoff based selection method of channels for data transmission
US20220030629A1 (en) Restored fairness in an 802.11 network implementing resource units
GB2562601B (en) Improved contention mechanism for access to random resource units in an 802.11 channel
GB2544824A (en) Improved contention mechanism for access to random resource units in an 802.11 channel
GB2555143B (en) QoS management for multi-user EDCA transmission mode in wireless networks
GB2575555A (en) Enhanced management of ACs in multi-user EDCA transmission mode in wireless networks
GB2588267A (en) Restored fairness in an 802.11 network implementing resource units
GB2561677A (en) Improved contention mechanism for access to random resource units in an 802.11 Channel
GB2588042A (en) Restored fairness in an 802.11 network implementing resource units