GB2543584A - Improved contention mechanism for access to random resource units in an 802.11 channel - Google Patents

Improved contention mechanism for access to random resource units in an 802.11 channel

Info

Publication number
GB2543584A
GB2543584A (application GB1518867.5A / GB201518867A)
Authority
GB
United Kingdom
Prior art keywords
data
queue
traffic
backoff
transmission
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1518867.5A
Other versions
GB2543584B (en)
GB201518867D0 (en)
Inventor
Viger Pascal
Guignard Romain
Baron Stéphane
Nezou Patrice
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB1518867.5A priority Critical patent/GB2543584B/en
Priority to GB1804883.5A priority patent/GB2562601B/en
Publication of GB201518867D0 publication Critical patent/GB201518867D0/en
Publication of GB2543584A publication Critical patent/GB2543584A/en
Application granted granted Critical
Publication of GB2543584B publication Critical patent/GB2543584B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W74/00 Wireless channel access, e.g. scheduled or random access
    • H04W74/08 Non-scheduled or contention based access, e.g. random access, ALOHA, CSMA [Carrier Sense Multiple Access]
    • H04W74/0833 Non-scheduled or contention based access, e.g. random access, ALOHA, CSMA [Carrier Sense Multiple Access] using a random access procedure
    • H04W74/0841 Non-scheduled or contention based access, e.g. random access, ALOHA, CSMA [Carrier Sense Multiple Access] using a random access procedure with collision treatment
    • H04W74/085 Non-scheduled or contention based access, e.g. random access, ALOHA, CSMA [Carrier Sense Multiple Access] using a random access procedure with collision treatment collision avoidance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W74/00 Wireless channel access, e.g. scheduled or random access
    • H04W74/08 Non-scheduled or contention based access, e.g. random access, ALOHA, CSMA [Carrier Sense Multiple Access]
    • H04W74/0808 Non-scheduled or contention based access, e.g. random access, ALOHA, CSMA [Carrier Sense Multiple Access] using carrier sensing, e.g. as in CSMA
    • H04W74/0816 Non-scheduled or contention based access, e.g. random access, ALOHA, CSMA [Carrier Sense Multiple Access] using carrier sensing, e.g. as in CSMA carrier sensing with collision avoidance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W72/00 Local resource management
    • H04W72/04 Wireless resource allocation
    • H04W72/044 Wireless resource allocation based on the type of the allocated resource
    • H04W72/0446 Resources in time domain, e.g. slots or frames
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W84/00 Network topologies
    • H04W84/02 Hierarchically pre-organised networks, e.g. paging networks, cellular networks, WLAN [Wireless Local Area Network] or WLL [Wireless Local Loop]
    • H04W84/10 Small scale networks; Flat hierarchical networks
    • H04W84/12 WLAN [Wireless Local Area Networks]

Abstract

In an 802.11ax network with an access point, a trigger frame offers random resource units (RUs) to nodes, which also have EDCA buffers, for uplink OFDMA data communication. To improve management of the random resource units, the backoff parameters for OFDMA access to the random RUs are determined based on EDCA backoff parameters. EDCA prioritization is thus provided within the OFDMA access, while keeping compliance with 802.11ax. Another improvement takes place after transmitting data either in EDCA access or OFDMA access. Since some EDCA-compliant data may be consumed by an OFDMA access and vice versa, the EDCA and OFDMA backoff counters may become decorrelated from the actual content of the EDCA buffers. The improvement thus consists in modifying at least one non-zero EDCA or OFDMA backoff value based on the data remaining in the traffic queues. The backoff counters are thus kept consistent with the content of the EDCA buffers.

Description

IMPROVED CONTENTION MECHANISM FOR ACCESS TO RANDOM RESOURCE UNITS IN
AN 802.11 CHANNEL
FIELD OF THE INVENTION
The present invention relates generally to communication networks and more specifically to contention-based access to channels and to the sub-channels (or Resource Units) into which they are split, which are available to a group of nodes.
The invention finds application in wireless communication networks, in particular in the access to an 802.11ax composite channel and to the OFDMA Resource Units forming, for instance, such a composite channel for uplink communication. One application of the method regards wireless data communication over a wireless communication network using Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), the network being accessible by a plurality of node devices.
BACKGROUND OF THE INVENTION
The IEEE 802.11 MAC standard defines the way wireless local area networks (WLANs) must work at the physical and medium access control (MAC) level. Typically, the 802.11 MAC (Medium Access Control) operating mode implements the well-known Distributed Coordination Function (DCF), which relies on a contention-based mechanism based on the so-called “Carrier Sense Multiple Access with Collision Avoidance” (CSMA/CA) technique.
The 802.11 medium access protocol standard or operating mode is mainly directed to the management of communication nodes waiting for the wireless medium to become idle so as to try to access the wireless medium.
The network operating mode defined by the IEEE 802.11ac standard provides very high throughput (VHT) by, among other means, moving from the 2.4GHz band, which is deemed to be highly susceptible to interference, to the 5GHz band, thereby allowing for wider contiguous frequency channels of 80MHz to be used, two of which may optionally be combined to obtain a 160MHz channel as the operating band of the wireless network.
The 802.11ac standard also tweaks control frames such as the Request-To-Send (RTS) and Clear-To-Send (CTS) frames to allow for composite channels of varying and predefined bandwidths of 20, 40 or 80MHz, the composite channels being made of one or more channels that are contiguous within the operating band. The 160MHz composite channel is made possible by the combination of two 80MHz composite channels within the 160MHz operating band. The control frames specify the channel width (bandwidth) for the targeted composite channel. A composite channel therefore consists of a primary channel, on which a given node performs the EDCA (Enhanced Distributed Channel Access) backoff procedure to access the medium, and of at least one secondary channel, each of for example 20MHz. EDCA defines traffic categories and four corresponding access categories that make it possible to handle high-priority traffic differently from low-priority traffic.
Implementation of EDCA in the nodes can be made using a plurality of traffic queues for serving data traffic at different priorities, to which a respective plurality of queue backoff engines is associated. The queue backoff engines are configured to compute respective queue backoff values when the associated traffic queue stores data to transmit.
Thanks to the EDCA backoff procedure, the node can thus access the communication network using contention type access mechanism based on the computed queue backoff values.
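By way of illustration, a minimal sketch of such per-queue contention machinery is given below. The class name, attributes and the CWmin/CWmax/AIFSN values are illustrative assumptions (the values shown are the typical EDCA defaults per access category), not the normative text of the standard.

```python
import random

# Illustrative per-access-category EDCA parameters (typical defaults, assumed here).
EDCA_PARAMS = {
    "AC_VO": {"cwmin": 3,  "cwmax": 7,    "aifsn": 2},  # voice, highest priority
    "AC_VI": {"cwmin": 7,  "cwmax": 15,   "aifsn": 2},  # video
    "AC_BE": {"cwmin": 15, "cwmax": 1023, "aifsn": 3},  # best effort
    "AC_BK": {"cwmin": 15, "cwmax": 1023, "aifsn": 7},  # background, lowest priority
}

class QueueBackoffEngine:
    """One backoff engine associated with one traffic queue (access category)."""
    def __init__(self, ac):
        self.params = EDCA_PARAMS[ac]
        self.cw = self.params["cwmin"]   # current contention window boundary
        self.backoff = None              # no value while the traffic queue is empty

    def draw_backoff(self):
        # Computed when an empty queue starts storing data to transmit, or after
        # a transmission if data still remain in the queue.
        self.backoff = random.randint(0, self.cw)

    def on_idle_slot(self):
        # Decremented for each elementary time unit the channel is sensed idle.
        if self.backoff is not None and self.backoff > 0:
            self.backoff -= 1
```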
The primary channel is used by the communication nodes to sense whether or not the channel is idle, and the primary channel can be extended using the secondary channel or channels to form a composite channel.
Given a tree breakdown of the operating band into elementary 20MHz channels, some secondary channels are named tertiary or quaternary channels.
In 802.11ac, all the transmissions, and thus the possible composite channels, include the primary channel. This is because the nodes perform full Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) and Network Allocation Vector (NAV) tracking on the primary channel only. The other channels are assigned as secondary channels, on which the nodes only have CCA (clear channel assessment) capability, i.e. detection of an idle or busy state/status of said secondary channel.
An issue with the use of composite channels as defined in 802.11n or 802.11ac (or 802.11ax) is that the 802.11n and 802.11ac-compliant nodes (i.e. HT nodes, standing for High Throughput nodes) and the other legacy nodes (i.e. non-HT nodes compliant only with, for instance, 802.11a/b/g) have to co-exist within the same wireless network and thus have to share the 20MHz channels.
To cope with this issue, the 802.11n and 802.11ac standards provide the possibility to duplicate control frames (e.g. RTS/CTS or CTS-to-Self or ACK frames to acknowledge correct or erroneous reception of the sent data) in an 802.11a legacy format (referred to as “non-HT”) to establish a protection of the requested TXOP over the whole composite channel.
This is for any legacy 802.11a node that uses any of the 20MHz channels involved in the composite channel to be aware of on-going communications on that 20MHz channel. As a result, the legacy node is prevented from initiating a new transmission until the end of the current composite channel TXOP granted to an 802.11n/ac node.
As originally proposed in 802.11n, a duplication of the conventional 802.11a or “non-HT” transmission is provided to allow two identical 20MHz non-HT control frames to be sent simultaneously on both the primary and secondary channels forming the used composite channel.
This approach has been widened for 802.11ac to allow duplication over the channels forming an 80MHz or 160MHz composite channel. In the remainder of the present document, the “duplicated non-HT frame” or “duplicated non-HT control frame” or “duplicated control frame” means that the node device duplicates the conventional or “non-HT” transmission of a given control frame over the secondary 20MHz channel(s) of the (40MHz, 80MHz or 160MHz) operating band.
In practice, to request a composite channel (equal to or greater than 40MHz) for a new TXOP, an 802.11n/ac node does an EDCA backoff procedure in the primary 20MHz channel as mentioned above. In parallel, it performs a channel sensing mechanism, such as a Clear-Channel-Assessment (CCA) signal detection, on the secondary channels to detect the secondary channel or channels that are idle (channel state/status is “idle”) during a PIFS interval before the start of the new TXOP (i.e. before any queue backoff counter expires).
More recently, the Institute of Electrical and Electronics Engineers (IEEE) officially approved the 802.11ax task group as the successor of 802.11ac. The primary goal of the 802.11ax task group consists in seeking an improvement in data speed for wireless communicating devices used in dense deployment scenarios.
Recent developments in the 802.11ax standard have sought to optimize usage of the composite channel by multiple nodes in a wireless network having an access point (AP). Indeed, typical content involves large amounts of data, for instance related to high-definition audio-visual, real-time and interactive content. Furthermore, it is well-known that the performance of the CSMA/CA protocol used in the IEEE 802.11 standard deteriorates rapidly as the number of nodes and the amount of traffic increase, i.e. in dense WLAN scenarios.
In this context, multi-user transmission has been considered to allow multiple simultaneous transmissions to/from different users in both downlink and uplink directions. In the uplink, multi-user transmissions can be used to mitigate the collision probability by allowing multiple nodes to simultaneously transmit.
To actually perform such multi-user transmission, it has been proposed to split a granted 20MHz channel into sub-channels, also referred to as resource units (RUs), that are shared in the frequency domain by multiple users, based for instance on the Orthogonal Frequency Division Multiple Access (OFDMA) technique. Each RU may be defined by a number of tones, the 20MHz channel containing up to 242 usable tones. OFDMA is a multi-user variation of OFDM which has emerged as a new key technology to improve efficiency in advanced infrastructure-based wireless networks. It combines OFDM on the physical layer with Frequency Division Multiple Access (FDMA) on the MAC layer, allowing different subcarriers to be assigned to different nodes in order to increase concurrency. Adjacent sub-carriers often experience similar channel conditions and are thus grouped into sub-channels: an OFDMA sub-channel or RU is thus a set of sub-carriers.
As currently envisaged, the granularity of such OFDMA sub-channels is finer than the original 20MHz channel band. Typically, a 2MHz or 5MHz sub-channel may be contemplated as a minimal width, therefore defining for instance 9 sub-channels or resource units within a single 20MHz channel.
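As a rough numerical illustration of this granularity, assuming the smallest RU spans 26 tones (roughly the 2MHz case mentioned above) out of the 242 usable tones of a 20MHz channel, the count of nine resource units can be recovered as follows; the per-20MHz extrapolation to wider channels is a simplification, since the actual tone plans differ slightly.

```python
USABLE_TONES_20MHZ = 242   # usable tones in a 20MHz channel (from the text above)
SMALLEST_RU_TONES = 26     # assumed smallest RU size (about 2MHz)

def smallest_rus_per_channel(channel_width_mhz):
    """Illustrative count of smallest-size RUs per channel width (simplified)."""
    channels_20mhz = channel_width_mhz // 20
    return channels_20mhz * (USABLE_TONES_20MHZ // SMALLEST_RU_TONES)

# e.g. smallest_rus_per_channel(20) -> 9
```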
To support multi-user uplink, i.e. uplink transmission to the 802.11ax access point (AP) during the granted TxOP, the 802.11ax AP has to provide signalling information for the legacy nodes (non-802.11ax nodes) to set their NAV and for the 802.11ax nodes to determine the allocation of the resource units (RUs).
It has been proposed for the AP to send a trigger frame (TF) to the 802.11ax nodes to trigger uplink communications.
The document IEEE 802.11-15/0365 proposes that a ‘Trigger’ frame (TF) is sent by the AP to solicit the transmission of uplink (UL) Multi-User (OFDMA) PPDU from multiple nodes. In response, the nodes transmit UL MU (OFDMA) PPDU as immediate responses to the Trigger frame. All transmitters can send data at the same time, but using disjoint sets of RUs (i.e. of frequencies in the OFDMA scheme), resulting in transmissions with less interference.
The bandwidth or width of the targeted composite channel is signalled in the TF frame, meaning that the 20, 40, 80 or 160MHz value is added. The TF frame is sent over the primary 20MHz channel and duplicated (replicated) on each of the other 20MHz channels forming the targeted composite channel, if appropriate. As described above for the duplication of control frames, it is expected that every nearby legacy node (non-HT or 802.11ac node) receiving the TF on its primary channel then sets its NAV to the value specified in the TF frame. This prevents these legacy nodes from accessing the channels of the targeted composite channel during the TXOP. A resource unit RU can be reserved for a specific node, in which case the AP indicates, in the TF, the node to which the RU is reserved. Such an RU is called a Scheduled RU. The indicated node does not need to perform contention on accessing a scheduled RU reserved to it.
In order to further improve the efficiency of the system with regard to un-managed traffic to the AP (for example, uplink management frames from associated nodes, unassociated nodes intending to reach an AP, or simply unmanaged data traffic), the document IEEE 802.11-15/0604 proposes a new trigger frame (TF-R) on top of the previous UL MU procedure, allowing random access onto the OFDMA TXOP. In other words, a resource unit RU can be randomly accessed by more than one node (of the group of nodes registered with the AP). Such an RU is called a Random RU and is indicated as such in the TF. Random RUs may serve as a basis for contention between nodes willing to access the communication medium for sending data. A random resource selection procedure is defined in document IEEE 802.11-15/1105. According to this procedure, each 802.11ax node maintains a dedicated backoff engine, referred to below as the OFDMA or RU (resource unit) backoff engine, to contend for access to one of the random RUs. Once its OFDMA or RU backoff value reaches zero (it is decremented at each new TF-R frame by the number of random RUs defined therein), a node becomes eligible for RU access and thus randomly selects one RU from among all the random RUs defined in the received trigger frame. It then uses the selected RU to transmit data of at least one of the traffic queues.
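A minimal sketch of this RU (OFDMA) backoff engine, following the decrement-and-select rule summarized above, could look as follows; the class and method names are illustrative assumptions.

```python
import random

class RUBackoffEngine:
    """Separate backoff engine contending for random RUs announced in trigger frames."""
    def __init__(self, cwo):
        self.cwo = cwo                          # contention window boundary for RU access
        self.backoff = random.randint(0, cwo)   # RU (OFDMA) backoff value

    def on_trigger_frame(self, random_ru_indices):
        """Called when a TF-R defining random RUs is received."""
        # Decrement by the number of random RUs defined in the trigger frame.
        self.backoff -= len(random_ru_indices)
        if self.backoff <= 0:
            # Node becomes eligible: randomly select one RU among those offered,
            # then transmit data of at least one traffic queue in that RU.
            return random.choice(random_ru_indices)
        return None                             # keep waiting for the next trigger frame
```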
The management of the OFDMA or RU backoff engine is not optimal.
SUMMARY OF INVENTION
The inventors have observed that the OFDMA or RU backoff scheme for random RU contention is not optimal given its coexistence with the EDCA queue backoff schemes for CSMA/CA contention.
For instance, it is undisputed that the OFDMA or RU backoff scheme runs in parallel with the EDCA queue backoff schemes. This means that some data (e.g. destined to the AP) in an EDCA traffic queue may be transmitted through either of the two access procedures: EDCA providing a new TxOP, and UL OFDMA providing a new random (or scheduled) RU. Of course, uplink traffic is not the only traffic in a basic service set (BSS) made of the AP and its registered nodes; there may also exist peer-to-peer or direct traffic between registered nodes of the BSS.
This is why the inventors believe that the interaction between the OFDMA or RU backoff scheme and the EDCA queue backoff schemes should be exploited in a better way to manage efficient use of the random OFDMA RUs.
In addition, while QoS (Quality of Service) is provided by EDCA thanks to traffic differentiation, it is believed that UL OFDMA medium access lacks QoS.
It is a broad objective of the present invention to provide improved communication methods and devices in a communication network. The communication network includes a plurality of nodes, possibly including an Access Point with which the other nodes have registered, all of the nodes sharing the physical medium of the communication network.
The present invention has been devised to overcome one or more of the foregoing limitations, in particular to provide communication methods having improved use of random RUs. This may result in a more efficient usage of the network bandwidth (of the RUs) with limited risks of collisions.
The invention can be applied to any communication network, e.g. a wireless network, in which random resource units are available through contention-based access, within a granted transmission opportunity. The RUs are sub-channels of one or more communication channels. The communication channel is the elementary channel on which the nodes perform sensing to determine whether it is idle or busy.
The invention is especially suitable for data uplink transmission from nodes to the AP of an IEEE 802.11ax network (and future versions), in which case the random RUs are accessed using OFDMA. Embodiments of the invention may also apply between nodes (without an AP), as well as in any communication network other than 802.11ax provided that it offers random RUs or the like that can be accessed simultaneously (and thus through a contention approach) by the nodes.
In this context, first embodiments of the invention provide a communication method in a communication network comprising a plurality of nodes, at least one node comprising: a plurality of traffic queues for serving data traffic at different priorities; a plurality of queue backoff engines, each associated with a respective traffic queue for computing a respective queue backoff value to be used to contend access to at least one communication channel in order to transmit data stored in the respective traffic queue. Such queue backoff value may be computed either when an empty traffic queue starts storing new data to transmit, or when a transmission of data of a traffic queue ends if there are still data to transmit in the traffic queue; and an RU backoff engine separate from the queue backoff engines, for computing an RU backoff value to be used to contend access to at least one random resource unit splitting a transmission opportunity granted on the communication channel, in order to transmit data stored in either traffic queue, the method comprising, at said node: determining one or more RU backoff parameters based on one or more queue backoff parameters of the queue backoff engines; and computing the RU backoff value from the determined one or more RU backoff parameters.
Correspondingly, embodiments of the invention provide a communication device forming node in a communication network, comprising: a plurality of traffic queues for serving data traffic at different priorities; a plurality of queue backoff engines, each associated with a respective traffic queue for computing a respective queue backoff value to be used to contend access to at least one communication channel in order to transmit data stored in the respective traffic queue; an RU backoff engine separate from the queue backoff engines and configured to compute an RU backoff value to be used to contend access to at least one random resource unit splitting a transmission opportunity granted on the communication channel, in order to transmit data stored in either traffic queue, wherein the RU backoff engine is further configured to: determine one or more RU backoff parameters based on one or more queue backoff parameters of the queue backoff engines; and compute the RU backoff value from the determined one or more RU backoff parameters.
Note that the node may actually access the communication network using a contention type access mechanism based on the computed queue backoff values (EDCA-based CSMA/CA access) and access the one or more random resource units defined in a trigger frame, using a contention type access mechanism based on the RU backoff value (OFDMA access) or scheduled access, the accesses being in order to transmit data of at least one of the traffic queues.
By using the queue backoff parameters, the RU backoff value that applies to all traffic queues may thus include some traffic prioritization, thereby improving the efficiency of usage of random RUs and the QoS of the OFDMA access.
As a result, the node can manage a randomized prioritization for local traffic (EDCA compliance) along with a proper backoff for OFDMA medium access, without requiring new prioritization parameters to be set for OFDMA medium access. In addition, the approach according to the invention may keep compliance with 802.11ax and be implemented within conventional environments (i.e. without change in the EDCA state machine).
Second embodiments of the invention provide a communication method in a communication network comprising an access point and a plurality of nodes, at least one node comprising: a plurality of traffic queues for serving data traffic at different priorities; a plurality of queue backoff engines, each associated with a respective traffic queue for computing a respective queue backoff value to be used to contend access to at least one communication channel in order to transmit data stored in the respective traffic queue. Such queue backoff value may be computed either when an empty traffic queue starts storing new data to transmit or when a transmission of data of a traffic queue ends if there are still data to transmit in the traffic queue; and an RU backoff engine separate from the queue backoff engines, for computing an RU backoff value to be used to contend access to at least one random resource unit splitting a transmission opportunity granted on the communication channel, in order to transmit data stored in either traffic queue, the method comprising, at the node: transmitting data from at least one traffic queue in a transmission opportunity (usually granted to the node in the communication channel) or in a (random or scheduled) resource unit splitting a transmission opportunity (usually granted to another node such as the access point); and modifying at least one non-zero (in general strictly positive) backoff value based on the data remaining in the traffic queues after the transmitting step.
Correspondingly, embodiments of the invention provide a communication device forming node in a communication network comprising an access point and a plurality of nodes, comprising: a plurality of traffic queues for serving data traffic at different priorities; a plurality of queue backoff engines, each associated with a respective traffic queue for computing a respective queue backoff value to be used to contend access to at least one communication channel in order to transmit data stored in the respective traffic queue; an RU backoff engine separate from the queue backoff engines, for computing an RU backoff value to be used to contend access to at least one random resource unit splitting a transmission opportunity granted on the communication channel, in order to transmit data stored in either traffic queue; and a controller for transmitting data from at least one traffic queue in a transmission opportunity or in a (random or scheduled) resource unit splitting a transmission opportunity, wherein the RU backoff engine is further configured to modify at least one non-zero backoff value based on the data remaining in the traffic queues after the transmission.
Thanks to the modification of non-zero EDCA or OFDMA backoff value or values, the second embodiments of the invention may keep consistency of the backoff values with the effective content of the AC traffic queues, even if data in the traffic queues is consumed by another contention scheme (EDCA-based CSMA/CA or OFDMA).
As a consequence, the bandwidth, and in particular the random RUs, is better used in the communication network.
Of course, the features of the first and second embodiments can be combined.
Optional features of embodiments of the invention are defined in the appended claims. Some of these features are explained here below with reference to a method, while they can be transposed into system features dedicated to any node device according to embodiments of the invention.
In some embodiments involving the determination of RU backoff parameters as defined above:
- the one or more queue backoff parameters used to determine the one or more RU backoff parameters are parameters of queue backoff engines associated with a traffic queue storing data to transmit. In other words, only the parameters of active EDCA traffics are taken into account. The contention-based RU access thus advantageously reflects the traffic prioritization of the data that are currently available for transmission; and/or
- the one or more RU backoff parameters include a boundary of a contention window from which the RU backoff value is computed. This is usually the upper boundary of the contention window used. As a consequence, initialization of the RU backoff parameters for random RU contention depends on the EDCA traffics.
The second embodiments of the invention may also involve such a boundary of contention window to compute the RU backoff value.
Embodiments below relate to embodiments involving a contention window.
In specific embodiments, the contention window boundary for the RU backoff value is selected within an interval [CWOmin, CWOmax], wherein at least one of CWOmin and CWOmax is an RU backoff parameter determined based on one or more queue backoff parameters. As the contention window boundary is determined within an interval that directly depends on the queue backoff parameters, it indirectly also depends on the same queue backoff parameters.
According to a particular feature, both CWOmin and CWOmax are RU backoff parameters determined based on one or more queue backoff parameters. This makes it possible to strictly bind the contention window boundary depending on the current EDCA parameters for CSMA/CA contention.
According to another particular feature, CWOmax is one from:
- the upper boundary of the contention window boundary interval (i.e. the interval from which a contention window boundary is selected; it is a queue backoff parameter) of the queue backoff engine having the lowest non-zero queue backoff value (i.e. whose traffic queue stores data to transmit). That is the queue backoff engine associated with the highest priority active traffic (Access Category), in the sense that it is the first AC to transmit on the network. The node advantageously takes the same highest priority for its contention-based RU access scheme;
- a mean of the upper boundaries of the contention window boundary intervals of the queue backoff engines having non-zero queue backoff values (i.e. active Access Categories or traffic queues having data to transmit). The node advantageously takes a medium priority, and is thus more relaxed compared to the first proposed value; and
- the highest of the upper boundaries of the contention window boundary intervals of the queue backoff engines having non-zero queue backoff values. The node is even more relaxed. In addition, this proposed value prevents the contention-based RU access from having a priority lower than the EDCA-based CSMA/CA contention scheme.
According to yet another particular feature, CWOmin is one of, or a combination of: the number of random resource units defined in a received trigger frame; and the lowest of the lower boundaries of the contention window boundary intervals of the queue backoff engines having non-zero queue backoff values.
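The options above may be sketched as follows, assuming each queue backoff engine exposes its contention window interval (cwmin, cwmax) and its current backoff value; the function name, the policy labels and the way CWOmin combines its two candidate values are illustrative assumptions.

```python
def derive_cwo_bounds(active_engines, nb_random_rus, cwomax_policy="highest_priority"):
    """Derive [CWOmin, CWOmax] for the RU backoff from the queue backoff engines
    whose traffic queue currently stores data to transmit (active_engines)."""
    if cwomax_policy == "highest_priority":
        # Upper CW boundary of the engine with the lowest non-zero backoff value.
        cwomax = min(active_engines, key=lambda e: e.backoff).cwmax
    elif cwomax_policy == "mean":
        # Mean of the upper boundaries of the active engines.
        cwomax = sum(e.cwmax for e in active_engines) // len(active_engines)
    else:  # "most_relaxed": highest upper boundary among active engines
        cwomax = max(e.cwmax for e in active_engines)

    # CWOmin: number of random RUs in the trigger frame and/or lowest lower
    # boundary among active engines (combined here with max() as an assumption).
    cwomin = max(nb_random_rus, min(e.cwmin for e in active_engines))
    return cwomin, cwomax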
According to yet another particular feature, a formula used to determine at least one of CWOmin and CWOmax from one or more queue backoff parameters depends on an RU collision and unuse factor received from another node (preferably from an Access Point). The RU collision and unuse factor may reflect the other nodes’ point of view regarding how the random RUs are used, in particular with respect to the number of unused random RUs and of the number of collided random RUs, in the previous one or more trigger frames (history of trigger frames).
This approach using the factor makes it possible to dynamically adapt the RU backoff parameters (from which the RU backoff value for RU contention is determined) to the network conditions.
In some embodiments, the method further comprises: transmitting data of at least one of the traffic queues upon accessing one random resource unit based on the RU backoff value (conventionally the RU backoff value is decremented from time to time); updating the contention window boundary depending on a success or failure in transmitting the data (which can be determined based on an acknowledgment message); and computing a new RU backoff value based on the updated contention window boundary.
In this approach, the RU backoff parameters for the UL-OFDMA random backoff procedure are continuously adjusted. As a parameter for the adjustments includes success/failure of the UL-OFDMA transmissions as perceived by the addressee node (usually the AP), this approach may thus reduce the probability of RU collision.
In specific embodiments, the contention window boundary is set to a (predetermined) low boundary value in case of transmission success. This low value may for instance be the CWOmin value defined above. This approach thus favors transmissions in case no collision is detected. This improves use of the communication network.
In a variant to directly setting the CW boundary to the (predetermined) low value, it may be decided to divide the current CW boundary by two (while keeping an integer value equal to or above the predetermined low value).
In other specific embodiments, the contention window boundary is doubled in case of transmission failure, for instance CWO = 2 x (CWO + 1) - 1, where CWO is the contention window boundary. Again, this approach restricts transmissions in case of collisions, which in turn reduces the probability of collisions and thus improves use of the communication network.
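The adjustment of the contention window boundary after each RU transmission may thus be sketched as follows, assuming the doubling formula quoted above and a reset-or-halve policy on success; the function and parameter names are illustrative.

```python
def update_cwo(cwo, success, cwomin, cwomax, halve_on_success=False):
    """Adjust the RU contention window boundary after a transmission attempt."""
    if success:
        if halve_on_success:
            # Variant: divide by two, never going below the predetermined low value.
            return max(cwomin, cwo // 2)
        return cwomin                        # reset to the (predetermined) low value
    # Failure (e.g. no acknowledgment received): double, capped at CWOmax.
    return min(cwomax, 2 * (cwo + 1) - 1)
```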
In some embodiments, the contention window boundary is determined as a function of the number of random resource units defined in a received trigger frame.
In other embodiments, the contention window boundary is determined as a function of an RU collision and unuse factor either received from another node (preferably from an Access Point) or built locally in case no factor is received from another node. Again, the RU collision and unuse factor may reflect the other node or the local node’s point of view regarding how the random RUs are used.
In specific embodiments, the method further comprises: transmitting data of at least one of the traffic queues upon accessing one random resource unit based on the RU backoff value; and updating the local RU collision and unuse factor depending on a success or failure in transmitting the data, for instance by setting it to a minimum value or dividing it by two in case of transmission success, and doubling it in case of transmission failure. This is to build a local factor that efficiently reflects the local node’s point of view regarding the use of the random RUs.
In specific embodiments, the method further comprises computing a new value for the contention window boundary and a new RU backoff value upon receiving a new trigger frame following the step of transmitting data. In these embodiments, the values are computed only when a new transmission opportunity comes (through a new trigger frame). This is to stick to the current state or conditions of the nodes and the network. Indeed, the network conditions and EDCA queue filling may substantially evolve from one time to another.
In specific embodiments, the contention window boundary is equal to 2^(TBD-1) x CWOmin, wherein TBD is the RU collision and unuse factor and CWOmin is a (predetermined) low boundary value. This formula provides good results, in particular because it makes it possible to use the optimum value CWOmin while enabling it to be slightly corrected or adapted according to the factor TBD reflecting the network conditions.
In some embodiments, computing the RU backoff value includes randomly selecting a value within an interval [0, CWO], wherein CWO is the contention window boundary for the RU backoff value.
In specific embodiments, computing the RU backoff value further includes applying an RU collision and unuse factor received from another node (preferably from an Access Point) to the randomly selected value. Again, the RU collision and unuse factor may reflect the other node’s point of view regarding how the random RUs are used. More efficient usage of the communication network may thus be obtained.
In other specific embodiments, computing the RU backoff value further includes adding, to the randomly selected value, a value computed from one or more arbitration interframe spaces, AIFS, associated with respective queue backoff engines. This is to take into account the relative priority of some different queue buffers, in particular the active ones.
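Combining these two optional refinements, the RU backoff computation may be sketched as follows; the multiplicative use of the factor and the use of the smallest active AIFS as the added value are assumptions made for illustration only.

```python
import random

def compute_ru_backoff(cwo, collision_unuse_factor=1.0, active_aifs=None):
    """Randomly select an RU backoff in [0, CWO], optionally scaled by the RU
    collision and unuse factor and offset by a value derived from the AIFS of
    the active access categories."""
    value = random.randint(0, cwo)
    value = int(value * collision_unuse_factor)   # assumed multiplicative use of the factor
    if active_aifs:
        value += min(active_aifs)                 # assumed offset: smallest active AIFS
    return value
```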
In some embodiments, the method further comprises, upon receiving a trigger frame, decrementing the RU backoff value based on the number of random resource units defined in the received trigger frame. Thus, a random resource unit can be accessed as soon as the RU backoff value reaches zero or becomes less than zero.
In specific embodiments, decrementing the RU backoff value is also based on an RU collision and unuse factor received from another node (preferably from an Access Point). Again, the RU collision and unuse factor may reflect the other node’s point of view regarding how the random RUs are used.
In some embodiments, new RU backoff parameters and a new RU backoff value to be used to contend access to at least one random resource unit in order to transmit data stored in either traffic queue are determined upon detecting a triggering event, the triggering event being one from: receiving a new trigger frame defining a number of random resource units that is different from a currently known number of random resource units (e.g. different from the number of RUs defined in a previous trigger frame); detecting that an empty traffic queue from the plurality of traffic queues has now received data to transmit; receiving a positive or negative acknowledgment of a previous transmission of data in an RU; receiving a new trigger frame; and detecting a change in at least one queue backoff parameter used to determine the one or more RU backoff parameters.
This provision dynamically adapts the contention-based RU access to the network and node evolutions.
In some embodiments, the RU collision and unuse factor is a function of the number of unused random resource units and of the number of collided random resource units in one or more previous trigger frames. In other words, it represents statistics on random resource units not used by the nodes during one or more previous transmission opportunities and/or random resource units on which nodes collided during one or more previous transmission opportunities.
In other embodiments, the random resource units are accessed using OFDMA within the communication channel. It means that the random RUs are provided by splitting the communication channel on a frequency basis.
In yet other embodiments, the communication network is an 802.11ax network.
In some embodiments, the method further comprises receiving a trigger frame from an access point in the communication network, the trigger frame reserving the transmission opportunity on the communication channel (on behalf of another node, usually the access point) and defining resource units, RUs, forming the communication channel including the at least one random resource unit.
The embodiments below involve the modification of at least one non-zero backoff value.
In some embodiments, modifying at least one non-zero backoff value includes clearing (i.e. no value exists any more) the RU backoff value. This may help keep the OFDMA or RU backoff fully consistent with the content of the traffic queues. For instance, clearing the RU backoff value is triggered when no more data to be transmitted to the AP remain in the traffic queues. This is because the resource units, for which the RU backoff value is computed and then decremented, are usually used for uplink to the access point (in case the split TxOP is granted to the access point).
In specific embodiments, the method further comprises comparing a destination address of the data remaining in the traffic queues with an address of the access point. This is to determine whether or not data intended for the AP remain in the traffic queues.
In embodiments, the method further comprises selecting data to be transmitted in the transmission opportunity, from a traffic queue associated with a queue backoff value reaching zero, wherein selecting data selects data of the traffic queue that are compatible with transmission in a transmission opportunity granted to the node.
In specific embodiments, the method further comprises reading a contention scheme indication associated with each data in the traffic queue, the contention scheme indication defining whether the data is compatible only with transmission in a transmission opportunity granted to the node, or compatible only with transmission to the access point in a resource unit, or compatible with both transmissions. Thanks to such indication, the node can quickly determine which data to select according to the medium access performed (access to the whole communication channel or access to a scheduled or random resource unit only).
In some embodiments, the method further comprises receiving a trigger frame from the access point in the communication network, the trigger frame reserving the transmission opportunity on the communication channel and defining resource units, RUs, forming the communication channel including the at least one random resource unit.
In some embodiments, modifying at least one non-zero backoff value includes clearing (i.e. no value exists any more) at least one of the queue backoff values. This may also help keep the OFDMA or RU backoff fully consistent with the content of the traffic queues. For instance, clearing the non-zero queue backoff value is triggered when no more data remain in the associated traffic queue. This is because the data the traffic queue may have stored have been sent using an OFDMA (scheduled or random) resource unit. As a result, no more data remain in the traffic queue and thus CSMA/CA contention based on the queue backoff value is no longer required.
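A minimal sketch of this consistency step after a transmission could be as follows; the helper attributes (frame.destination, engine.backoff) and the ordered correspondence between engines and queues are assumptions for illustration.

```python
def reconcile_backoffs(ru_engine, queue_engines, traffic_queues, ap_address):
    """Modify non-zero backoff values so they stay consistent with the data
    remaining in the traffic queues after a transmission."""
    # Clear the RU backoff when no data destined to the access point remains
    # (the RU backoff only serves uplink transmissions to the AP).
    if not any(frame.destination == ap_address
               for queue in traffic_queues for frame in queue):
        ru_engine.backoff = None

    # Clear a queue backoff when its traffic queue was emptied, e.g. because
    # its data were consumed by an OFDMA (scheduled or random) RU access.
    for engine, queue in zip(queue_engines, traffic_queues):
        if engine.backoff and not queue:
            engine.backoff = None
```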
In specific embodiments, the transmitted data are transmitted in the random resource unit. In other specific embodiments, the transmitted data are transmitted in a scheduled resource unit splitting the granted transmission opportunity, the scheduled resource being reserved by the access point for said node.
In specific embodiments, the method further comprises selecting at least one of the traffic queues from which the data to be transmitted are selected. Such a selecting step is required because the OFDMA access to resource units is not linked to a specific traffic queue. According to a particular feature, selecting one traffic queue includes one of: selecting the traffic queue having the lowest associated queue backoff value; selecting randomly one non-empty traffic queue from the traffic queues; selecting the traffic queue storing the highest amount of data (i.e. the most loaded); and selecting the non-empty traffic queue having the highest associated traffic priority. According to another particular feature, selecting data from one selected traffic queue includes selecting data of the selected traffic queue that are compatible with transmission in a resource unit. Indeed, the node may then send appropriate data over the resource unit.
For instance, this may be done by comparing a destination address of the data in the selected traffic queue with an address of the access point.
In a variant, this may be done by reading a contention scheme indication associated with each data in the traffic queue, the contention scheme indication defining whether the data is compatible only with transmission in a transmission opportunity granted to the node, or compatible only with transmission to the access point in a resource unit, or compatible with both transmissions. Thanks to such indication, the node can quickly determine which data to select according to the medium access performed (access to the whole communication channel or access to a scheduled or random resource unit only).
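These queue-selection policies and the compatibility check (destination address or contention scheme indication) could be sketched as follows; the policy labels and the per-frame attributes are illustrative assumptions.

```python
import random

def select_source_queue(queue_engines, traffic_queues, policy="lowest_backoff"):
    """Select the traffic queue whose data will be sent in the accessed resource unit."""
    candidates = [(e, q) for e, q in zip(queue_engines, traffic_queues) if q]
    if policy == "lowest_backoff":
        return min(candidates, key=lambda eq: eq[0].backoff)[1]
    if policy == "random":
        return random.choice(candidates)[1]
    if policy == "most_loaded":
        return max(candidates, key=lambda eq: len(eq[1]))[1]
    return candidates[0][1]   # "highest_priority": queues assumed ordered by priority

def ru_compatible(frame, ap_address):
    """Check whether a queued frame may be sent in a (scheduled or random) RU."""
    # Either compare the destination address with the AP address...
    if frame.destination == ap_address:
        return True
    # ...or read a contention scheme indication attached to the frame.
    return getattr(frame, "contention_scheme", None) in ("RU_ONLY", "BOTH")
```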
Various embodiments for processing the data in the traffic queues when selecting data to transmit may be contemplated.
In particular embodiments, the method may further comprise, upon detecting data not compatible with transmission in a resource unit, stopping transmitting additional data from the traffic queues. This also applies to the other transmitting mode (in the transmission opportunity granted to the node).
In alternative embodiments, the method may further comprise, upon detecting data not compatible with transmission in a resource unit, skipping the non-compatible data and searching for other data in the selected traffic queue that are compatible with transmission in a resource unit. Thus, the maximum amount of data of the selected traffic queue can be sent in the accessed resource unit. This also applies to the other transmitting mode.
In alternative embodiments, the method may further comprise, upon detecting data not compatible with transmission in a resource unit or upon reaching the end of one selected traffic queue, selecting data from another selected traffic queue that are compatible with transmission in a resource unit. Thus, a maximum amount of data can be transmitted in the accessed resource unit.
According to a particular feature, the method further comprises transmitting padding data up to the end of the resource unit. This advantageously keeps the resource unit busy from a legacy node’s point of view, preventing the communication channel from appearing available during the granted transmission opportunity.
In some embodiments, the method further comprises marking data in the traffic queues as being either compatible only with transmission in a transmission opportunity granted to the node, or compatible only with transmission to the access point in a resource unit, or compatible with both transmissions.
As already mentioned above, the method may further comprise, upon receiving a trigger frame, decrementing the RU backoff value based on the number of random resource units defined in the received trigger frame.
Also, the method may further comprise decrementing the queue backoff values each elementary time unit the communication channel is detected as idle.
In another approach of the embodiments of the present invention, it is sought to improve the OFDMA or RU backoff scheme with respect to the network conditions.
In this context, third embodiments of the invention provide a communication method in a communication network comprising an access point and a plurality of nodes, at least one node comprising: a plurality of traffic queues for serving data traffic at different priorities; a plurality of queue backoff engines, each associated with a respective traffic queue for computing a respective queue backoff value to be used to contend access to the communication network in order to transmit data stored in the respective traffic queue. Such queue backoff value may be computed either when an empty traffic queue starts storing new data to transmit or when a transmission of data of a traffic queue ends if data to transmit remain in the traffic queue; and an RU backoff engine separate from the queue backoff engines, for computing an RU backoff value to be used to contend access to at least one random resource unit splitting a transmission opportunity granted on the communication channel, in order to transmit data stored in either traffic queue, the method comprising, at the node: computing the RU backoff value by randomly selecting a value within a contention window, wherein at least one boundary of the contention window is determined based on at least one indication received from the access point.
As a consequence, the contention window and thus the RU backoff value used to contend for RU access may be adapted to the network conditions as analyzed by the AP.
Correspondingly, embodiments of the invention provide a communication device forming node in a communication network comprising an access point and a plurality of nodes, comprising: a plurality of traffic queues for serving data traffic at different priorities; a plurality of queue backoff engines, each associated with a respective traffic queue for computing a respective queue backoff value to be used to contend access to the communication network in order to transmit data stored in the respective traffic queue; an RU backoff engine separate from the queue backoff engines, for computing an RU backoff value to be used to contend access to at least one random resource unit splitting a transmission opportunity granted on the communication channel, in order to transmit data stored in either traffic queue, computing the RU backoff value including randomly selecting a value within a contention window, wherein at least one boundary of the contention window is determined based on at least one indication received from the access point.
Of course, these third embodiments of the invention may be combined with any or both of first and second embodiments defined above (and their variations).
Optional features of embodiments of the invention are defined in the appended claims. Some of these features are explained here below with reference to a method, while they can be transposed into system features dedicated to any node device according to embodiments of the invention.
In embodiments, the indication received from the access point is an RU collision and unuse factor reflecting the access point’s point of view regarding the usage of random resource units defined in one or more previous trigger frames.
In specific embodiments, the collision and unuse factor is based on a number of unused random RUs and/or a number of collided random RUs in the one or more previous trigger frames.
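On the access point side, such a factor may be built from the random-RU statistics of previous trigger frames; the sketch below assumes a simple wasted-RU ratio, which is only one possible interpretation of the factor described above.

```python
def compute_collision_unuse_factor(nb_random_rus, nb_unused, nb_collided):
    """Illustrative RU collision and unuse factor: fraction of the random RUs of
    the previous trigger frame(s) that were wasted (left unused or collided)."""
    if nb_random_rus == 0:
        return 0.0
    return (nb_unused + nb_collided) / nb_random_rus
```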
In other embodiments, the upper boundary of the contention window is determined based on the indication received from the access point. Indeed, usually a [0, CWO] contention window is used, meaning that only the upper boundary can be determined.
In yet other embodiments, the method further comprises receiving a trigger frame from the access point in the communication network, the trigger frame reserving the transmission opportunity on the communication channel and defining resource units, RUs, forming the communication channel including the at least one random resource unit. According to a specific feature, the boundary of the contention window is determined as a function of the number of random resource units defined in the received trigger frame.
In specific embodiments, the boundary of the contention window is equal to 2^(TBD-1) x CWOmin, wherein TBD is the RU collision and unuse factor received from the access point and CWOmin is a (predetermined) low boundary value.
In another approach of the embodiments of the present invention, it is sought to improve selection of the data to transmit when a plurality of access schemes compete, for instance EDCA contention, RU contention and RU scheduling.
In this context, fourth embodiments of the invention provide a communication method in a communication network comprising an access point and a plurality of nodes, at least one node comprising: a plurality of traffic queues for serving data traffic at different priorities; a plurality of queue backoff engines, each associated with a respective traffic queue for computing a respective queue backoff value to be used to contend access to at least one communication channel in order to transmit data stored in the respective traffic queue; and an RU backoff engine separate from the queue backoff engines, for computing an RU backoff value to be used to contend access to at least one random resource unit splitting a transmission opportunity granted on the communication channel, in order to transmit data stored in either traffic queue, the method comprising, at the node: selecting data from at least one traffic queue and transmitting the selected data in a transmission opportunity granted to the node if a first access scheme is used or in a resource unit splitting a transmission opportunity granted to the access point if a second access scheme is used, wherein selecting the data includes: successively considering data from the at least one traffic queue; determining whether or not a data item currently considered is compatible with a transmission according to the access scheme used; and selecting for transmission the data item currently considered only in case of compatibility.
Correspondingly, embodiments of the invention provide a communication device forming node in a communication network comprising an access point and a plurality of nodes, comprising: a plurality of traffic queues for serving data traffic at different priorities; a plurality of queue backoff engines, each associated with a respective traffic queue for computing a respective queue backoff value to be used in a first contention scheme to contend access to at least one communication channel in order to transmit data stored in the respective traffic queue; an RU backoff engine separate from the queue backoff engines, for computing an RU backoff value to be used in a second contention scheme to contend access to at least one random resource unit splitting a transmission opportunity granted on the communication channel, in order to transmit data stored in either traffic queue; and a controller for selecting data from at least one traffic queue and transmitting the selected data in a transmission opportunity granted to the node if a first access scheme is used or in a resource unit splitting a transmission opportunity granted to the access point if a second access scheme is used, wherein the controller is configured, in order to select the data, to: successively consider data from the at least one traffic queue; determine whether or not a data item currently considered is compatible with a transmission according to the access scheme used; and select for transmission the data item currently considered only in case of compatibility.
Thanks to these embodiments, appropriate data are selected from the traffic queues depending on the access scheme used. This ensures that the network is efficiently used.
Of course, these fourth embodiments of the invention may be combined with any or more of the first to third embodiments defined above (and their variations).
Optional features of embodiments of the invention are defined in the appended claims. Some of these features are explained here below with reference to a method, while they can be transposed into system features dedicated to any node device according to embodiments of the invention.
In embodiments, selecting data from at least one traffic queue is triggered upon expiry of one of the backoff values (i.e. value reaching zero).
In specific embodiments, the expiring backoff value is the RU backoff value, thereby causing the second access scheme to be used, and the data item or items are selected if compatible with transmission to the access point in a resource unit. Thus, the RU contention scheme is used, and access to a random RU is obtained for transmission.
According to a specific feature, the method further comprises selecting at least one of the traffic queues from which the data to be transmitted are selected. This is because the data to be transmitted in RUs can be selected from any AC traffic queues.
In particular, selecting one traffic queue may include one of: selecting the traffic queue having the lowest associated queue backoff value; selecting randomly one non-empty traffic queue from the traffic queues; selecting the traffic queue storing the highest amount of data; and selecting the non-empty traffic queue having the highest associated traffic priority. According to a specific feature, the determining step comprises comparing a destination address of the data item currently considered with an address of the access point. This is to quickly identify the data items compatible with transmission over random RUs.
In other specific embodiments, the expiring backoff value is one of the queue backoff values, thereby causing the first access scheme to be used, and the data item or items are selected from the traffic queue associated with the expiring queue backoff value, if compatible with transmission in a transmission opportunity granted to the node. Thus, the EDCA contention scheme is used, and access to a TxOP granted to the node is obtained for transmission.
In specific embodiments, the determining step comprises reading a contention scheme indication associated with each data item in the traffic queue, the contention scheme indication defining whether the data is compatible only with transmission in a transmission opportunity granted to the node, or compatible only with transmission to the access point in a resource unit, or compatible with both transmissions.
In embodiments, the method further comprises receiving a trigger frame from the access point in the communication network, the trigger frame reserving the transmission opportunity on the communication channel and defining resource units, RUs, forming the communication channel including the at least one random resource unit.
In other embodiments, the method further comprises, upon determining the data item currently considered is not compatible with a transmission according to the access scheme used, stopping the step of successively considering data from the at least one traffic queue.
In alternative embodiments, the method further comprises, upon determining the data item currently considered is not compatible with a transmission according to the access scheme used, skipping the non-compatible data item and searching for other data in the traffic queue that are compatible with a transmission according to the access scheme used.
In alternative embodiments, the method may further comprise, upon determining the data item currently considered is not compatible with a transmission according to the access scheme used or upon reaching the end of the traffic queue, successively considering data from another traffic queue to select data item or items that are compatible with a transmission according to the access scheme used. This particularly takes place with transmission in scheduled or random resource units.
According to a particular feature, the method further comprises transmitting padding data up to the end of a resource unit. This particularly takes place with transmission in scheduled or random resource units.
In embodiments, the first access scheme is a contention-based scheme based on the queue backoff values, and the data item selected for transmission is determined to be compatible with a transmission in a transmission opportunity granted to the node.
In other embodiments, the second access scheme is a contention-based scheme based on the RU backoff value, and the data item selected for transmission is determined to be compatible with a transmission to the access point in a resource unit.
In yet other embodiments, the second access scheme is a scheduled access scheme to access a scheduled resource unit splitting the granted transmission opportunity, the scheduled resource unit being reserved by the access point for said node, and the data item selected for transmission is determined to be compatible with a transmission to the access point in a resource unit.
In yet other embodiments, the method further comprises marking data in the traffic queues as being either compatible only with transmission in a transmission opportunity granted to the node, or compatible only with transmission to the access point in a resource unit, or compatible with both transmissions.
In yet other embodiments, the method further comprises, upon receiving a trigger frame, decrementing the RU backoff value based on the number of random resource units defined in the received trigger frame.
In yet other embodiments, the method further comprises decrementing the queue backoff values each elementary time unit the communication channel is detected as idle.
Another aspect of the invention relates to a wireless communication system having an access point and at least one communication device forming node as defined above.
Another aspect of the invention relates to a non-transitory computer-readable medium storing a program which, when executed by a microprocessor or computer system in a device of a communication network, causes the device to perform any method as defined above.
The non-transitory computer-readable medium may have features and advantages that are analogous to those set out above and below in relation to the methods and node devices.
Another aspect of the invention relates to a communication method in a communication network comprising a plurality of nodes, substantially as herein described with reference to, and as shown in, Figure 13, or Figure 14, or Figures 9, 10, 11 and 13, or Figures 9, 10, 11 and 14, or Figure 16, or Figure 17, or Figures 16 and 17, or Figures 15, 16 and 17 of the accompanying drawings.
At least parts of the methods according to the invention may be computer implemented. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system". Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible carrier medium may comprise a storage medium such as a hard disk drive, a magnetic tape device or a solid state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
BRIEF DESCRIPTION OF THE DRAWINGS
Further advantages of the present invention will become apparent to those skilled in the art upon examination of the drawings and detailed description. Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings.
Figure 1 illustrates a typical wireless communication system in which embodiments of the invention may be implemented;
Figure 2 is a timeline schematically illustrating a conventional communication mechanism according to the IEEE 802.11 standard;
Figures 3a, 3b and 3c illustrate the IEEE 802.11e EDCA involving access categories;
Figure 4 illustrates the 802.11ac channel allocation that supports channel bandwidths of 20 MHz, 40 MHz, 80 MHz or 160 MHz, as known in the art;
Figure 5 illustrates an example of 802.11ax uplink OFDMA transmission scheme, wherein the AP issues a Trigger Frame for reserving a transmission opportunity of OFDMA subchannels (resource units) on an 80 MHz channel as known in the art;
Figure 6 shows a schematic representation of a communication device or station in accordance with embodiments of the present invention;
Figure 7 shows a schematic representation of a wireless communication device in accordance with embodiments of the present invention;
Figure 8 illustrates an exemplary transmission block of a communication node according to embodiments of the invention;
Figure 9 illustrates, using a flowchart, main steps performed by a MAC layer of a node, when receiving new data to transmit, in first embodiments of the invention;
Figure 10 illustrates, using a flowchart, main steps for setting an RU backoff parameter, namely contention window CWO for OFDMA contention, in first embodiments of the invention;
Figure 11 illustrates, using a flowchart, steps of accessing the medium based on the conventional EDCA medium access scheme, in first embodiments of the invention;
Figure 12 illustrates, using a flowchart, exemplary steps for updating RU backoff parameters and value upon receiving a positive or negative acknowledgment of a multi-user OFDMA transmission, in first embodiments of the invention;
Figure 13 illustrates, using a flowchart, a first exemplary embodiment of accessing the medium based on the OFDMA medium access scheme, and of updating the RU backoff parameters when a new trigger frame is received, in first embodiments of the invention;
Figure 14 illustrates, using a flowchart, a second exemplary embodiment of accessing the medium based on the OFDMA medium access scheme, and of updating the RU backoff parameters when a new trigger frame is received at transmitting node, in first embodiments of the invention;
Figure 15 illustrates, using a flowchart, main steps performed by a MAC layer of a node, when receiving new data to transmit, in second embodiments of the invention;
Figure 16 illustrates, using a flowchart, steps of accessing the medium based on the conventional EDCA medium access scheme, in second embodiments of the invention; and
Figure 17 illustrates, using a flowchart, an exemplary embodiment of accessing the medium based on the OFDMA medium access scheme, in second embodiments of the invention.
DETAILED DESCRIPTION
The invention will now be described by means of specific non-limiting exemplary embodiments and by reference to the figures.
Figure 1 illustrates a communication system in which several communication nodes (or stations) 101-107 exchange data frames over a radio transmission channel 100 of a wireless local area network (WLAN), under the management of a central station, or access point (AP) 110. The radio transmission channel 100 is defined by an operating frequency band constituted by a single channel or a plurality of channels forming a composite channel.
Access to the shared radio medium to send data frames is based on the CSMA/CA technique, for sensing the carrier and avoiding collision by separating concurrent transmissions in space and time.
Carrier sensing in CSMA/CA is performed by both physical and virtual mechanisms. Virtual carrier sensing is achieved by transmitting control frames to reserve the medium prior to transmission of data frames.
Next, a source or transmitting node first attempts, through the physical mechanism, to sense a medium that has been idle for at least one DIFS (standing for DCF InterFrame Spacing) time period, before transmitting data frames.
However, if it is sensed that the shared radio medium is busy during the DIFS period, the source node continues to wait until the radio medium becomes idle.
To access the medium, the node starts a countdown backoff counter designed to expire after a number of timeslots, chosen randomly in the contention window [0, CW], CW (integer) being also referred to as the Contention Window and defining the upper boundary of the backoff selection interval. This backoff mechanism or procedure is the basis of the collision avoidance mechanism that defers the transmission time for a random interval, thus reducing the probability of collisions on the shared channel. After the backoff time period, the source node may send data or control frames if the medium is idle.
One problem of wireless data communications is that it is not possible for the source node to listen while sending, thus preventing the source node from detecting data corruption due to channel fading or interference or collision phenomena. A source node remains unaware of the corruption of the data frames sent and continues to transmit the frames unnecessarily, thus wasting access time.
The Collision Avoidance mechanism of CSMA/CA thus provides positive acknowledgement (ACK) of the sent data frames by the receiving node if the frames are received with success, to notify the source node that no corruption of the sent data frames occurred.
The ACK is transmitted at the end of reception of the data frame, immediately after a period of time called Short InterFrame Space (SIFS).
If the source node does not receive the ACK within a specified ACK timeout or detects the transmission of a different frame on the channel, it may infer data frame loss. In that case, it generally reschedules the frame transmission according to the above-mentioned backoff procedure.
To improve the Collision Avoidance efficiency of CSMA/CA, a four-way handshaking mechanism is optionally implemented. One implementation is known as the RTS/CTS exchange, defined in the 802.11 standard.
The RTS/CTS exchange consists in exchanging control frames to reserve the radio medium prior to transmitting data frames during a transmission opportunity called TXOP in the 802.11 standard as described below, thus protecting data transmissions from any further collisions.
Figure 2 illustrates the behaviour of three groups of nodes during a conventional communication over a 20 MHz channel of the 802.11 medium: transmitting or source node 20, receiving or addressee or destination node 21 and other nodes 22 not involved in the current communication.
Upon starting the backoff process 270 prior to transmitting data, a station, e.g. source node 20, initializes its backoff time counter to a random value as explained above. The backoff time counter is decremented once every time slot interval 260 for as long as the radio medium is sensed idle (countdown starts from T0, 23 as shown in the Figure).
Channel sensing is for instance performed using Clear-Channel-Assessment (CCA) signal detection, which is a WLAN carrier sense mechanism defined in the IEEE 802.11-2007 standard.
The time unit in the 802.11 standard is the slot time called ‘aSlotTime’ parameter. This parameter is specified by the PHY (physical) layer (for example, aSlotTime is equal to 9 µs for the 802.11n standard). All dedicated space durations (e.g. backoff) add multiples of this time unit to the SIFS value.
The backoff time counter is ‘frozen’ or suspended when a transmission is detected on the radio medium channel (countdown is stopped at T1, 24 for other nodes 22 having their backoff time counter decremented).
The countdown of the backoff time counter is resumed or reactivated when the radio medium is sensed idle anew, after a DIFS time period. This is the case for the other nodes at T2, 25 as soon as the transmission opportunity TXOP granted to source node 20 ends and the DIFS period 28 elapses. DIFS 28 (DCF inter-frame space) thus defines the minimum waiting time for a source node before trying to transmit some data. In practice, DIFS = SIFS + 2 * aSlotTime.
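As a purely illustrative check of this relation, using the 802.11n timing values quoted elsewhere in this description (SIFS of 16 µs and aSlotTime of 9 µs), a short sketch gives the resulting DIFS; the constant names are arbitrary:

```python
# Illustrative only: DIFS derived from SIFS and aSlotTime (802.11n values).
SIFS_US = 16          # SIFS, in microseconds
A_SLOT_TIME_US = 9    # aSlotTime, in microseconds

DIFS_US = SIFS_US + 2 * A_SLOT_TIME_US   # = 16 + 2 * 9 = 34 microseconds
```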
When the backoff time counter reaches zero (26) at T1, the timer expires, the corresponding node 20 requests access onto the medium in order to be granted a TXOP, and the backoff time counter is reinitialized 29 using a new random backoff value.
In the example of the Figure implementing the RTS/CTS scheme, at T1, the source node 20 that wants to transmit data frames 230 sends a special short frame or message acting as a medium access request to reserve the radio medium, instead of the data frames themselves, just after the channel has been sensed idle for a DIFS or after the backoff period as explained above.
The medium access request is known as a Request-To-Send (RTS) message or frame. The RTS frame generally includes the addresses of the source and receiving nodes ("destination 21") and the duration for which the radio medium is to be reserved for transmitting the control frames (RTS/CTS) and the data frames 230.
Upon receiving the RTS frame and if the radio medium is sensed as being idle, the receiving node 21 responds, after a SIFS time period 27 (for example, SIFS is equal to 16 µs for the 802.11n standard), with a medium access response, known as a Clear-To-Send (CTS) frame. The CTS frame also includes the addresses of the source and receiving nodes, and indicates the remaining time required for transmitting the data frames, computed from the time point at which the CTS frame starts to be sent.
The CTS frame is considered by the source node 20 as an acknowledgment of its request to reserve the shared radio medium for a given time duration.
Thus, the source node 20 expects to receive a CTS frame 220 from the receiving node 21 before sending data 230 using unique and unicast (one source address and one addressee or destination address) frames.
The source node 20 is thus allowed to send the data frames 230 upon correctly receiving the CTS frame 220 and after a new SIFS time period 27, in a transmission opportunity that is thus granted to it thanks to the RTS/CTS exchange.
An ACK frame 240 is sent by the receiving node 21 after having correctly received the data frames sent, after a new SIFS time period 27.
If the source node 20 does not receive the ACK 240 within a specified ACK Timeout (generally within the TXOP), or if it detects the transmission of a different frame on the radio medium, it reschedules the frame transmission using the backoff procedure anew.
Since the RTS/CTS four-way handshaking mechanism 210/220 is optional in the 802.11 standard, it is possible for the source node 20 to send data frames 230 immediately upon its backoff time counter reaching zero (i.e. at T1).
The requested time duration for transmission defined in the RTS and CTS frames defines the length of the granted transmission opportunity TXOP, and can be read by any listening node ("other nodes 22" in Figure 2) in the radio network.
To do so, each node has in memory a data structure known as the network allocation vector or NAV to store the time duration for which it is known that the medium will remain busy. When listening to a control frame (RTS 210 or CTS 220) not addressed to itself, a listening node 22 updates its NAVs (NAV 255 associated with RTS and NAV 250 associated with CTS) with the requested transmission time duration specified in the control frame. The listening nodes 22 thus keep in memory the time duration for which the radio medium will remain busy.
Access to the radio medium for the other nodes 22 is consequently deferred 30 by suspending 31 their associated timer and then by later resuming 32 the timer when the NAV has expired.
This prevents the listening nodes 22 from transmitting any data or control frames during that period.
It is possible that receiving node 21 does not receive RTS frame 210 correctly due to a message/frame collision or to fading. Even if it does receive it, receiving node 21 may not always respond with a CTS 220 because, for example, its NAV is set (i.e. another node has already reserved the medium). In any case, the source node 20 enters into a new backoff procedure.
The RTS/CTS four-way handshaking mechanism is very efficient in terms of system performance, in particular with regard to large frames since it reduces the length of the messages involved in the contention process.
In detail, assuming perfect channel sensing by each communication node, a collision may only occur when two (or more) frames are transmitted within the same time slot after a DIFS 28 (DCF inter-frame space) or when their back-off counters have reached zero at nearly the same time T1. If both source nodes use the RTS/CTS mechanism, this collision can only occur for the RTS frames. Fortunately, such a collision is detected early by the source nodes since it is quickly determined that no CTS response has been received.
Figures 3a, 3b and 3c illustrate the IEEE 802.11e EDCA involving access categories, in order to improve the quality of service (QoS). In the original DCF standard, a communication node includes only one transmission queue/buffer. However, since a subsequent data frame cannot be transmitted until the transmission/retransmission of a preceding frame ends, the delay in transmitting/retransmitting the preceding frame prevents the communication from having QoS.
The IEEE 802.11e has overcome this deficiency by providing quality of service (QoS) enhancements to make more efficient use of the wireless medium.
This standard relies on a coordination function, called hybrid coordination function (HCF), which has two modes of operation: enhanced distributed channel access (EDCA) and HCF controlled channel access (HCCA). EDCA enhances or extends the functionality of the original DCF access method: EDCA has been designed for support of prioritized traffic similar to DiffServ (Differentiated Services), which is a protocol for specifying and controlling network traffic by class so that certain types of traffic get precedence. EDCA is the dominant channel access mechanism in WLANs because it features a distributed and easily deployed mechanism.
The above deficiency of failing to have satisfactory QoS due to delay in frame retransmission has been solved with a plurality of transmission queues/buffers.
QoS support in EDCA is achieved with the introduction of four Access Categories (ACs), and thereby of four corresponding transmission/traffic queues or buffers (310). Of course, another number of traffic queues may be contemplated.
Each AC has its own traffic queue/buffer to store corresponding data frames to be transmitted on the network. The data frames, namely the MSDUs, incoming from an upper layer of the protocol stack are mapped onto one of the four AC queues/buffers and thus input in the mapped AC buffer.
Each AC has also its own set of channel access parameters or “queue backoff parameters”, and is associated with a priority value, thus defining traffic of higher or lower priority of MSDUs. Thus, there is a plurality of traffic queues for serving data traffic at different priorities.
That means that each AC (and corresponding buffer) acts as an independent DCF contending entity including its respective queue backoff engine 311. Thus, each queue backoff engine 311 is associated with a respective traffic queue for computing a respective queue backoff value to be used to contend access to at least one communication channel in order to transmit data stored in the respective traffic queue.
As a result, the ACs within the same communication node compete with one another to access the wireless medium and to obtain a transmission opportunity, using the contention mechanism as explained above with reference to Figure 2 for example.
Service differentiation between the ACs is achieved by setting different queue backoff parameters between the ACs, such as different contention window parameters (CWmin, CWmax), different arbitration interframe spaces (AIFS), and different transmission opportunity duration limits (TXOP_Limit).
With EDCA, high priority traffic has a higher chance of being sent than low priority traffic: a node with high priority traffic waits a little less (low CW) before it sends its packet, on average, than a node with low priority traffic.
The four AC buffers (310) are shown in Figure 3a.
Buffers AC3 and AC2 are usually reserved for real-time applications (e.g., voice or video transmission). They have, respectively, the highest priority and the last-but-one highest priority.
Buffers AC1 and AC0 are reserved for best effort and background traffic. They have, respectively, the last-but-one lowest priority and the lowest priority.
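By way of illustration only, the service differentiation described above may be pictured with per-AC parameter sets in the spirit of the usual 802.11e defaults; this is a sketch, the dictionary layout and the exact values being assumptions rather than values mandated by the present description:

```python
# Illustrative per-AC queue backoff parameters (in the spirit of common
# 802.11e defaults); actual values are configured by the network.
EDCA_PARAMS = {
    "AC0 (background)":  {"CWmin": 15, "CWmax": 1023, "AIFSN": 7},
    "AC1 (best effort)": {"CWmin": 15, "CWmax": 1023, "AIFSN": 3},
    "AC2 (video)":       {"CWmin": 7,  "CWmax": 15,   "AIFSN": 2},
    "AC3 (voice)":       {"CWmin": 3,  "CWmax": 7,    "AIFSN": 2},
}
```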
Each data unit, MSDU, arriving at the MAC layer from an upper layer (e.g. Link layer) with a priority is mapped into an AC according to mapping rules. Figure 3b shows an example of mapping between eight priorities of traffic class (User Priorities or UP, 0-7 according to IEEE 802.1d) and the four ACs. The data frame is then stored in the buffer corresponding to the mapped AC.
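A minimal sketch of such mapping rules is given below, following the usual 802.1d-to-AC association; the exact mapping is the one shown in Figure 3b, and the function name is arbitrary:

```python
# Hedged sketch of User Priority (0-7) to Access Category mapping rules.
UP_TO_AC = {
    1: "AC0", 2: "AC0",   # background
    0: "AC1", 3: "AC1",   # best effort
    4: "AC2", 5: "AC2",   # video
    6: "AC3", 7: "AC3",   # voice
}

def map_msdu_to_queue(user_priority: int) -> str:
    """Return the AC traffic queue in which an incoming MSDU is stored."""
    return UP_TO_AC[user_priority]
```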
When the backoff procedure for a traffic queue (or an AC) ends, the MAC controller (reference 704 in Figure 7 below) of the transmitting node transmits a data frame from this traffic queue to the physical layer for transmission onto the wireless communication network.
Since the ACs operate concurrently in accessing the wireless medium, it may happen that two ACs of the same communication node have their backoff ending simultaneously. In such a situation, a virtual collision handler (312) of the MAC controller operates a selection of the AC having the highest priority (as shown in Figure 3b) between the conflicting ACs, and gives up transmission of data frames from the ACs having lower priorities.
Then, the virtual collision handler commands those ACs having lower priorities to start again a backoff operation using an increased CW value.
Figure 3c illustrates configurations of a MAC data frame and a QoS control field (300) included in the header of the IEEE 802.11e MAC frame.
The MAC data frame also includes, among other fields, a Frame Control header (301) and a frame body (302).
As represented in the Figure, the QoS control field 300 is made of two bytes, including the following information items:
- Bits B0 to B3 are used to store a traffic identifier (TID) which identifies a traffic stream. The traffic identifier takes the value of the transmission priority value (User Priority UP, value between 0 and 7 - see Figure 3b) corresponding to the data conveyed by the data frame, or takes the value of a traffic stream identifier (TSID, value between 8 and 15) for other data streams;
- Bit B4 is set to 1 and is not detailed here;
- Bits B5 and B6 define the ACK policy subfield which specifies the acknowledgment policy associated with the data frame. This subfield is used to determine how the data frame has to be acknowledged by the receiving node: normal ACK, no ACK or Block ACK. “Normal ACK” refers to the case where the transmitting node or source node requires a conventional acknowledgment to be sent (by the receiving node) for each data frame, after a short interframe space (SIFS) period following the transmission of the data frame. “No ACK” refers to the case where the source node does not require acknowledgment; the receiving node takes no action upon receipt of the data frame. “Block ACK” refers to an acknowledgment per block of MSDUs. The Block ACK scheme allows two or more data frames 230 to be transmitted before a Block ACK frame is returned to acknowledge the receipt of the data frames. The Block ACK increases communication efficiency since only one signalling ACK frame is needed to acknowledge a block of frames, while every ACK frame originally used has a significant overhead for radio synchronization. The receiving node takes no action immediately upon receiving the last data frame, except the action of recording the state of reception in its scoreboard context. With such a value, the source node is expected to send a Block ACK request (BAR) frame, to which the receiving node responds using the procedure described below;
- Bit B7 is reserved (not used by the current 802.11 standards); and
- Bits B8-B15 indicate the amount of buffered traffic for a given TID at the non-AP station sending this frame. The AP may use this information to determine the next TXOP duration it will grant to the station. A queue size of 0 indicates the absence of any buffered traffic for that TID.
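For illustration, a sketch of how the sub-fields listed above could be extracted from the 16-bit QoS control field (bit B0 being the least significant bit); the helper name and return structure are assumptions:

```python
# Hedged sketch: extracting the sub-fields of the QoS control field 300.
def parse_qos_control(qos_ctrl: int) -> dict:
    return {
        "tid":        qos_ctrl & 0x000F,          # bits B0-B3: traffic identifier
        "ack_policy": (qos_ctrl >> 5) & 0x0003,   # bits B5-B6: ACK policy subfield
        "queue_size": (qos_ctrl >> 8) & 0x00FF,   # bits B8-B15: buffered traffic
    }
```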
To meet the ever-increasing demand for faster wireless networks to support bandwidth-intensive applications, 802.11ac is targeting larger bandwidth transmission through multi-channel operations. Figure 4 illustrates the 802.11ac channel allocation that supports composite channel bandwidths of 20 MHz, 40 MHz, 80 MHz or 160 MHz. IEEE 802.11ac introduces support of a restricted number of predefined subsets of 20MHz channels to form the sole predefined composite channel configurations that are available for reservation by any 802.11ac node on the wireless network to transmit data.
The predefined subsets are shown in the Figure and correspond to 20 MHz, 40 MHz, 80 MHz, and 160 MHz channel bandwidths, compared to only 20 MHz and 40 MHz supported by 802.11n. Indeed, the 20 MHz component channels 300-1 to 300-8 are concatenated to form wider communication composite channels.
In the 802.11ac standard, the channels of each predefined 40MHz, 80MHz or 160MHz subset are contiguous within the operating frequency band, i.e. no hole (missing channel) in the composite channel as ordered in the operating frequency band is allowed.
The 160 MHz channel bandwidth is composed of two 80 MHz channels that may or may not be frequency contiguous. The 80 MHz and 40 MHz channels are composed of two frequency-adjacent or contiguous 40 MHz and 20 MHz channels, respectively. However, the present invention may have embodiments with either composition of the channel bandwidth, i.e. including only contiguous channels or formed of non-contiguous channels within the operating band. A node is granted a TxOP through the enhanced distributed channel access (EDCA) mechanism on the “primary channel” (300-3). Indeed, for each composite channel having a bandwidth, 802.11ac designates one channel as “primary” meaning that it is used for contending for access to the composite channel. The primary 20MHz channel is common to all nodes (STAs) belonging to the same basic set, i.e. managed by or registered to the same local Access Point (AP).
However, to make sure that no other legacy node (i.e. not belonging to the same set) uses the secondary channels, it is provided that the control frames (e.g. RTS frame/CTS frame) reserving the composite channel are duplicated over each 20MHz channel of such composite channel.
As addressed earlier, the IEEE 802.11ac standard enables up to four, or even eight, 20 MHz channels to be bound. Because of the limited number of channels (19 in the 5 GHz band in Europe), channel saturation becomes problematic. Indeed, in densely populated areas, the 5 GHz band will surely tend to saturate even with a 20 or 40 MHz bandwidth usage per Wireless-LAN cell.
Developments in the 802.11ax standard seek to enhance efficiency and usage of the wireless channel for dense environments.
In this perspective, one may consider multi-user transmission features, allowing multiple simultaneous transmissions to different users in both downlink and uplink directions. In the uplink, multi-user transmissions can be used to mitigate the collision probability by allowing multiple nodes to simultaneously transmit.
To actually perform such multi-user transmission, it has been proposed to split a granted 20MHz channel (300-1 to 300-4) into sub-channels 410 (elementary sub-channels), also referred to as sub-carriers or resource units (RUs), that are shared in the frequency domain by multiple users, based for instance on Orthogonal Frequency Division Multiple Access (OFDMA) technique.
This is illustrated with reference to Figure 5.
The multi-user feature of OFDMA allows the AP to assign different RUs to different nodes in order to increase competition. This may help to reduce contention and collisions inside 802.11 networks.
Contrary to downlink OFDMA wherein the AP can directly send multiple data to multiple stations (supported by specific indications inside the PLCP header), a trigger mechanism has been adopted for the AP to trigger uplink communications from various nodes.
To support an uplink multi-user transmission (during a pre-empted TxOP), the 802.11ax AP has to provide signalling information for both legacy stations (non-802.11ax nodes) to set their NAV and for 802.11ax nodes to determine the Resource Units allocation.
In the following description, the term legacy refers to non-802.11ax nodes, meaning 802.11 nodes of previous technologies that do not support OFDMA communications.
As shown in the example of Figure 5, the AP sends a trigger frame (TF) 430 to the targeted 802.11ax nodes. The bandwidth or width of the targeted composite channel is signalled in the TF frame, meaning that the 20, 40, 80 or 160 MHz value is added. The TF frame is sent over the primary 20MHz channel and duplicated (replicated) on each of the other 20MHz channels forming the targeted composite channel. As described above for the duplication of control frames, it is expected that every nearby legacy node (non-HT or 802.11ac node) receiving the TF on its primary channel then sets its NAV to the value specified in the TF frame. This prevents these legacy nodes from accessing the channels of the targeted composite channel during the TXOP.
Based on an AP’s decision, the trigger frame TF may define a plurality of resource units (RUs) 410, or “Random RUs”, which can be randomly accessed by the nodes of the network. In other words, Random RUs designated or allocated by the AP in the TF may serve as basis for contention between nodes willing to access the communication medium for sending data. A collision occurs when two or more nodes attempt to transmit at the same time over the same RU.
In that case, the trigger frame is referred to as a trigger frame for random access (TF-R). A TF-R may be emitted by the AP to allow multiple nodes to perform UL MU (UpLink Multi-User) random access to obtain an RU for their UL transmissions.
The trigger frame TF may also designate Scheduled resource units, in addition or in replacement of the Random RUs. Scheduled RUs may be reserved by the AP for certain nodes in which case no contention for accessing such RUs is needed for these nodes. Such RUs and their corresponding scheduled nodes are indicated in the trigger frame. For instance, a node identifier, such as the Association ID (AID) assigned to each node upon registration, is added in association with each Scheduled RU in order to explicitly indicate the node that is allowed to use each Scheduled RU.
An AID equal to 0 may be used to identify random RUs.
In the example of Figure 5, each 20MHz channel (400-1, 400-2, 400-3 or 400-4) is sub-divided in frequency domain into four sub-channels or RUs 410, typically of size 5 MHz.
Of course the number of RUs splitting a 20MHz channel may be different from four. For instance, from two to nine RUs may be provided (thus each having a size between 10MHz and about 2MHz).
Once the nodes have used the RUs to transmit data to the AP, the AP responds with an acknowledgment (not shown in the Figure) to acknowledge the data on each RU.
Document IEEE 802.11-15/1105 provides an exemplary random allocation procedure that may be used by the nodes to access the Random RUs indicated in the TF. This random allocation procedure is based on a new backoff counter, referred to below as the OFDMA or RU backoff value (or OBO), inside the 802.11ax nodes for allowing a dedicated contention when accessing an RU to send data.
Each node STA1 to STAn is a transmitting node with regard to the receiving AP, and as a consequence, each node has an active RU backoff engine separate from the queue backoff engines, for computing an RU backoff value (OBO) to be used to contend access to at least one random resource unit splitting a transmission opportunity granted on the communication channel, in order to transmit data stored in either traffic queue AC.
The random allocation procedure comprises, for a node of a plurality of nodes having an active RU backoff value OBO, a first step of determining from the trigger frame the sub-channels or RUs of the communication medium available for contention, a second step of verifying whether the value of the active RU backoff value OBO local to the considered node is not greater than the number of detected-as-available random RUs, and then, in case of successful verification, a third step of randomly selecting an RU among the detected-as-available RUs for sending data. If the verification of the second step fails, a fourth step (instead of the third) is performed in order to decrement the RU backoff value OBO by the number of detected-as-available RUs.
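A minimal sketch of this random allocation procedure is given below; the function and variable names are illustrative and the behaviour simply restates the steps described above:

```python
import random

# Hedged sketch of the OBO-based random allocation procedure.
def on_trigger_frame(obo: int, available_random_rus: list):
    """Return (new_obo, selected_ru); selected_ru is None when access is deferred."""
    nb_ru = len(available_random_rus)
    if nb_ru and obo <= nb_ru:
        # Steps 2-3: OBO not greater than the number of available RUs,
        # so randomly select one of the detected-as-available RUs.
        return 0, random.choice(available_random_rus)
    # Step 4: otherwise decrement OBO by the number of detected-as-available RUs.
    return obo - nb_ru, None
```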
As shown in the Figure, some Resource Units may not be used (410u) because no node with an RU backoff value OBO less than the number of available random RUs has randomly selected one of these RUs, whereas some others suffer a collision (for example 410c) because two of these nodes have randomly selected the same RU.
The conventional handling of RUs is not satisfactory, in particular because of the coexistence of OFDMA (or RU) backoff scheme and EDCA queue backoff scheme for CSMA/CA contention.
More appropriate setting and updating of parameters for managing the RU backoff engine is proposed to be used in some embodiments of the invention. An idea of these embodiments is to determine one or more RU backoff parameters based on one or more queue backoff parameters of the queue backoff engines; and then to compute the RU backoff value from the determined one or more RU backoff parameters. This approach may thus take into account the prioritization of un-managed traffic towards the AP to improve the management of the RU backoff parameters with regards to the pending traffic.
Also, there are risks that the consistency between the backoff engines and the data they are managing (for transmission) is lost. This is mainly because one type of medium access (EDCA or UL MU OFDMA) may consume all the data for which a backoff engine has been set active. An improved management of the backoff engines with respect to the data stored in the traffic queues is now proposed in some embodiments of the invention. An idea of these embodiments is, after having transmitted data from at least one traffic queue in a transmission opportunity granted to the node in the communication channel or in a (random or scheduled) resource unit splitting a transmission opportunity granted to another node (usually the access point), to modify at least one non-zero backoff value based on the data remaining in the traffic queues after the transmitting step.
Figure 6 schematically illustrates a communication device 600 of the radio network 100, configured to implement at least one embodiment of the present invention. The communication device 600 may preferably be a device such as a micro-computer, a workstation or a light portable device. The communication device 600 comprises a communication bus 613 to which there are preferably connected:
• a central processing unit 611, such as a microprocessor, denoted CPU;
• a read only memory 607, denoted ROM, for storing computer programs for implementing the invention;
• a random access memory 612, denoted RAM, for storing the executable code of methods according to embodiments of the invention as well as the registers adapted to record variables and parameters necessary for implementing methods according to embodiments of the invention; and
• at least one communication interface 602 connected to the radio communication network 100 over which digital data packets or frames or control frames are transmitted, for example a wireless communication network according to the 802.11ax protocol. The frames are written from a FIFO sending memory in RAM 612 to the network interface for transmission or are read from the network interface for reception and writing into a FIFO receiving memory in RAM 612 under the control of a software application running in the CPU 611.
Optionally, the communication device 600 may also include the following components:
• a data storage means 604 such as a hard disk, for storing computer programs for implementing methods according to one or more embodiments of the invention;
• a disk drive 605 for a disk 606, the disk drive being adapted to read data from the disk 606 or to write data onto said disk;
• a screen 609 for displaying decoded data and/or serving as a graphical interface with the user, by means of a keyboard 610 or any other pointing means.
The communication device 600 may be optionally connected to various peripherals, such as for example a digital camera 608, each being connected to an input/output card (not shown) so as to supply data to the communication device 600.
Preferably the communication bus provides communication and interoperability between the various elements included in the communication device 600 or connected to it. The representation of the bus is not limiting and in particular the central processing unit is operable to communicate instructions to any element of the communication device 600 directly or by means of another element of the communication device 600.
The disk 606 may optionally be replaced by any information medium such as for example a compact disk (CD-ROM), rewritable or not, a ZIP disk, a USB key or a memory card and, in general terms, by an information storage means that can be read by a microcomputer or by a microprocessor, integrated or not into the apparatus, possibly removable and adapted to store one or more programs whose execution enables a method according to the invention to be implemented.
The executable code may optionally be stored either in read only memory 607, on the hard disk 604 or on a removable digital medium such as for example a disk 606 as described previously. According to an optional variant, the executable code of the programs can be received by means of the communication network 603, via the interface 602, in order to be stored in one of the storage means of the communication device 600, such as the hard disk 604, before being executed.
The central processing unit 611 is preferably adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to the invention, which instructions are stored in one of the aforementioned storage means. On powering up, the program or programs that are stored in a non-volatile memory, for example on the hard disk 604 or in the read only memory 607, are transferred into the random access memory 612, which then contains the executable code of the program or programs, as well as registers for storing the variables and parameters necessary for implementing the invention.
In a preferred embodiment, the apparatus is a programmable apparatus which uses software to implement the invention. However, alternatively, the present invention may be implemented in hardware (for example, in the form of an Application Specific Integrated Circuit or ASIC).
Figure 7 is a block diagram schematically illustrating the architecture of a communication device or node 600, in particular one of nodes 100-107, adapted to carry out, at least partially, the invention. As illustrated, node 600 comprises a physical (PHY) layer block 703, a MAC layer block 702, and an application layer block 701.
The PHY layer block 703 (here an 802.11 standardized PHY layer) has the task of formatting, modulating on or demodulating from any 20MHz channel or the composite channel, and thus of sending or receiving frames over the radio medium 100, such as 802.11 frames, for instance medium access trigger frames TF 430 to reserve a transmission slot, MAC data and management frames based on a 20 MHz width to interact with legacy 802.11 stations, as well as MAC data frames of OFDMA type having a width smaller than the legacy 20 MHz (typically 2 or 5 MHz), to/from that radio medium.
The MAC layer block or controller 702 preferably comprises a MAC 802.11 layer 704 implementing conventional 802.11ax MAC operations, and an additional block 705 for carrying out, at least partially, the invention. The MAC layer block 702 may optionally be implemented in software, which software is loaded into RAM 612 and executed by CPU 611.
Preferably, the additional block, referred to as random RU procedure module 705 for controlling access to OFDMA resource units (sub-channels), implements the part of the invention that regards node 600, i.e. transmitting operations for a source node and receiving operations for a receiving node. MAC 802.11 layer 704 and random RU procedure module 705 interact with one another in order to provide management of the queue backoff engines and RU backoff engines as described below.
At the top of the Figure, application layer block 701 runs an application that generates and receives data packets, for example data packets of a video stream. Application layer block 701 represents all the stack layers above the MAC layer according to ISO standardization.
Embodiments of the present invention are now illustrated using various exemplary embodiments. Although the proposed examples use the trigger frame 430 (see Figure 5a) sent by an AP for multi-user uplink transmissions, equivalent mechanisms can be used in a centralized or in an ad hoc environment (i.e. without an AP).
Figure 8 illustrates an exemplary transmission block of a communication node 600 according to embodiments of the invention.
As mentioned above, the node includes:
- a plurality of traffic queues 310 for serving data traffic at different priorities;
- a plurality of queue backoff engines 311, each associated with a respective traffic queue for computing a respective queue backoff value to be used to contend access to at least one communication channel in order to transmit data stored in the respective traffic queue (this is the EDCA); and
- an RU backoff engine 800 separate from the queue backoff engines, for computing an RU backoff value to be used to contend access to the OFDMA resources defined in a received TF (sent by the AP for instance), in order to transmit data stored in either traffic queue in an OFDMA RU.
The RU backoff engine 800 belongs to a more general module, namely Random RU procedure module 705, which also includes a transmission module, referred to as OFDMA muxer 801.
The conventional AC queue back-off registers 311 drive the medium access request according to the EDCA protocol, while in parallel, the RU backoff engine 800 drives the medium access request according to the OFDMA multi-user protocol.
As these two contention schemes coexist, the source node implements a medium access mechanism with collision avoidance based on a computation of backoff values:
- a queue backoff counter value corresponding to a number of time-slots the node waits, after the communication medium has been detected to be idle, before accessing the medium. This is EDCA;
- an RU backoff counter value (OBO) corresponding to a number of idle RUs the node detects, after a TxOP has been granted to the AP over a composite channel formed of RUs, before accessing the medium. This is OFDMA.
In embodiments of the invention, RU backoff engine 800 is in charge of determining appropriate RU backoff parameters based on one or more queue backoff parameters of the queue backoff engines, in particular during initialization and management of the RU backoff value OBO and of its associated contention window noted CWO. OFDMA muxer 801 is in charge, when the RU backoff value OBO reaches zero, of selecting data to be sent from at least one AC queue 310. In addition, in embodiments of the invention, OFDMA muxer 801 is also in charge of driving the RU backoff engine 800 and the EDCA queue backoff engines 311 to keep consistency with the content of the traffic queues 310. To achieve this, specific embodiments provide that OFDMA muxer 801 is further in charge of implementing an optimization mechanism able to mark the data in the traffic queues based on their compatibility with EDCA and/or OFDMA and to use this information to efficiently select the data to send and to maintain the above-mentioned consistency. In other words, OFDMA muxer 801 is able to mark data in the traffic queues as being either compatible only with transmission in a transmission opportunity granted to the node (EDCA), or compatible only with transmission to the access point in a resource unit (OFDMA), or compatible with both transmissions. This marking information, noted “Scheme Mark” below, can be added inside the traffic queues 310, or be implemented using an additional table.
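A minimal sketch of such a “Scheme Mark”, assuming a simple per-item tag and a purely illustrative marking rule based on the destination address, is given below; none of the names are mandated by the present description:

```python
from enum import Enum

# Hedged sketch of the "Scheme Mark" information attached to queued data items.
class SchemeMark(Enum):
    EDCA_ONLY = 1   # compatible only with a TxOP granted to the node
    RU_ONLY = 2     # compatible only with transmission to the AP in an RU
    BOTH = 3        # compatible with both transmission schemes

def mark_data_item(destination, ap_address) -> SchemeMark:
    # Illustrative rule: data addressed to the AP may go either way, whereas
    # data addressed to another node is only sent in an EDCA TxOP.
    return SchemeMark.BOTH if destination == ap_address else SchemeMark.EDCA_ONLY
```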
One main advantage of embodiments of the present invention is that the OBO backoff engine can still reuse a classical hardware state-machine implementing the standard back-off mechanism, in particular the basic mechanism that requests medium access when a back-off value reaches zero. Adjusting the back-off parameters (backoff value, contention window min and max) is implemented simply by overwriting registers.
Upon receiving a Trigger Frame 430, the contention procedure for counting down the OBO backoff may consist in decreasing the OBO backoff count value by the number of detected-as-available RUs in the received trigger frame.
The medium access requested when OBO is down to zero (or below) may consist in randomly selecting an RU among the detected-as-available RUs for sending data (according to the example of Figure 5). In a variant, the random RUs may be indexed from 1 to NbRU, and the selected random RU is the one having the RU backoff value OBO as index.
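The indexed variant of the preceding paragraph may be sketched as follows; this is an assumption-laden illustration, not a normative rule:

```python
# Hedged sketch of the variant where random RUs are indexed from 1 to NbRU and
# the selected RU is the one whose index equals the expiring OBO value.
def select_ru_by_index(obo: int, random_rus: list):
    return random_rus[obo - 1] if 1 <= obo <= len(random_rus) else None
```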
First aspects of embodiments of the invention are now described with reference to Figures 9 to 14. They are based on determining one or more RU backoff parameters based on one or more queue backoff parameters of the queue backoff engines, RU backoff parameters from which the RU backoff value for OFDMA contention is computed.
Second aspects of the embodiments of invention are then described with reference to Figures 15 to 17. They are based on modifying at least one non-zero backoff value based on the data remaining in the traffic queues after a transmitting step.
While these various aspects may be implemented separately, best embodiments include first and second aspects in any variation described below.
Figure 9 illustrates, using a flowchart, main steps performed by MAC layer 702 of node 600, when receiving new data to transmit.
At the very beginning, no traffic queue stores data to transmit. As a consequence, no queue backoff value has been computed. It is said that the corresponding queue backoff engine or corresponding AC (Access Category) is inactive. As soon as data are stored in a traffic queue, a queue backoff value is computed (from corresponding queue backoff parameters), and the associated queue backoff engine or AC is said to be active.
At step 901, new data is received from an application running locally on the device (from application layer 701 for instance), from another network interface, or from any other data source. The new data are ready to be sent by the node.
At step 902, conventional 802.11 AC backoff computation is performed by the queue backoff engine corresponding to the type of the received data.
If the AC queue corresponding to the type (Access Category) of the received data is empty (i.e. the AC is originally inactive), then there is a need to compute a queue backoff value for the corresponding backoff counter.
The node then computes the queue backoff value as being equal to a random value selected in range [0, CW] + AIFS, where CW is the current value of the CW for the Access Category considered (as defined in 802.11 standard and updated for instance in step 1170 below), and AIFS is an offset value which depends on the AC of the data (all the AIFS values being defined in the 802.11 standard) and which is designed to implement the relative priority of the different access categories.
As a result the AC is made active.
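A minimal sketch of this computation, assuming CW and AIFS are the current parameters of the AC that has just become active, may read as follows (names are illustrative):

```python
import random

# Hedged sketch of step 902: arming the EDCA queue backoff of a newly active AC.
def compute_queue_backoff(cw: int, aifs: int) -> int:
    return random.randint(0, cw) + aifs   # random value in [0, CW] plus AIFS offset
```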
Next to step 902, step 903 computes the RU backoff value OBO if needed.
An RU backoff value OBO needs to be computed if the RU backoff engine 800 was inactive (for instance because there were no data in the traffic queues until previous step 901) and if new data to be addressed to the AP have been received. This step 903 is thus a step of initializing OBO.
It first includes initializing the Contention Window value CWO as explained below with reference to Figure 10, and then computing the RU backoff value OBO from CWO.
In particular, RU backoff value OBO may be determined as a random integer selected from an interval [0, CWO] uniformly distributed: OBO = random[0, CWO].
In variants, RU backoff value OBO may be determined by adding, to a value randomly selected from the interval [0, CWO] uniformly distributed, a value computed from one or more arbitration interframe spaces, AIFS: OBO = random[0, CWO] + AIFS[AC].
For instance, AIFS[AC] is either the lowest AIFS value from the EDCA AIFS value or values of the active AC or ACs in the considered node 600, or an average value of the same EDCA AIFS value or values.
In other variants that may be combined, RU backoff value OBO is determined by applying an RU collision and unuse factor (noted TBD below), received from the Access Point (i.e. remote information), to a value randomly selected from the interval [0, CWO] uniformly distributed: OBO = random[0, CWO] / TBD. The RU collision and unuse factor is further explained below. It is an adjustment parameter transmitted by the AP (for instance within a trigger frame) to drive node 600 to adjust its RU backoff value OBO. This adjustment parameter preferably reflects the AP's view of collisions on RUs and/or of unused RUs in the overall 802.11ax network.
Thus, the RU collision and unuse factor TBD is preferably a function of the number of unused random resource units and of the number of collided random resource units in one or more previous trigger frames, as detected by the AP.
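The OBO initialization variants above may be summarized with the following sketch, in which the optional AIFS term and the optional correcting factor TBD are disabled by default; this is an illustration only:

```python
import random

# Hedged sketch of step 903: computing the initial RU backoff value OBO.
def init_obo(cwo: int, aifs: int = 0, tbd: float = 1.0) -> int:
    obo = random.randint(0, cwo)   # OBO = random[0, CWO]
    obo += aifs                    # variant: add AIFS[AC]
    return int(obo / tbd)          # variant: apply the correcting factor TBD
```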
For completeness of description, an exemplary determination of TBD is provided. It takes place at the AP upon providing random RUs in trigger frames. The number of RUs in the trigger frame may also evolve simultaneously.
Starting from an initial value, for instance TBD=2, its value may evolve as a function of collisions on RUs or of unused RUs.
Consider the case where all (or more than 80% of) the OFDMA Random RUs are used in the last OFDMA TXOP (or in the N previous OFDMA TXOPs, N being an integer). It means that many nodes are requesting to transmit data. As a consequence, the number of Random RUs for the next OFDMA transmission can be increased by the AP (for instance by 1, up to a maximum number), while the correcting parameter TBD value can remain the same.
In addition, if collisions occur on several of the used OFDMA Random RUs (for instance on more than a third of them), it means that the correcting parameter TBD value should be decreased to minimize the collisions between the nodes during the RU allocation. For instance, the TBD value may be decreased by about 30%. A drawback of decreasing the TBD value (used as a divisor of the RU backoff value OBO by the nodes) is that the Random RU allocation is less optimized.
On the other hand, if several OFDMA Random RUs remain unused (for instance more than a third of them, or when less than 50% of the RUs are used), the correcting parameter TBD value can be increased, for instance by 30%, and/or the number of Random RUs for the next OFDMA transmission can be decreased by the AP (for instance by 1) to optimize the OFDMA Random RU allocation. A drawback of increasing the TBD value is that the collisions during the Random RU allocation may increase.
This illustrates that, upon termination of each uplink OFDMA TXOP, the updating of the correcting parameter TBD value is a trade-off between minimizing collisions during Random RU allocation and optimizing the filling of the OFDMA Random RUs.
To be precise, the AP may compute a new correcting parameter TBD value based on determined OFDMA statistics, optionally further based on the number of nodes transmitting on the random resource units during the previous transmission opportunity. Note that the OFDMA statistics may be statistics on the previous TXOP only or on N (integer) previous TXOPs.
In variants, TBD may be a percentage according to the collision ratio detected by the AP among the OFDMA RUs, and/or the ratio of unused OFDMA RUs in previous MU OFDMA transmission opportunities. Depending on whether TBD is a percentage or an integer value, the formulae involving TBD may be slightly adapted.
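A hedged sketch of this AP-side adaptation, using the example thresholds quoted above (80% usage, one third, about 30%), is given below; the function name, the tuple interface and the exact update rules are assumptions made for illustration:

```python
# Hedged sketch: AP-side update of the correcting factor TBD and of the number
# of random RUs after an uplink OFDMA TXOP, following the heuristics above.
def update_tbd(tbd: float, nb_ru: int, used: int, collided: int):
    usage = used / nb_ru if nb_ru else 0.0
    if usage > 0.8:
        nb_ru += 1                                 # most RUs used: offer more random RUs
    if used and collided / used > 1 / 3:
        tbd *= 0.7                                 # many collisions: decrease TBD by ~30%
    elif nb_ru and (nb_ru - used) / nb_ru > 1 / 3:
        tbd *= 1.3                                 # many unused RUs: increase TBD by ~30%
        nb_ru = max(1, nb_ru - 1)                  # and/or offer fewer random RUs
    return tbd, nb_ru
```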
Next to step 903, the process of Figure 9 ends.
Figure 10 illustrates, using a flowchart, main steps for setting (including initializing) CWO at node 600. In other words, it describes a first sub-step within step 903 to prepare a random access (contention) for (UL) MU OFDMA transmission in the context of 802.11. It includes computing RU backoff parameters.
It thus starts initially when node 600 receives (e.g. locally from upper layer 701) new data in any of its AC queue 310, to be addressed to the AP.
At step 1000, node 600 determines the number NbRU of random RUs, i.e. of the RUs available for contention, to be considered for the multi-user TxOP upon next grant. This information may be provided by the AP through beacon frames or trigger frames themselves, or both. For instance, the information may be retrieved from the last TF detected. An initial value may be used as long as no TF (or beacon frame) is detected.
When the information is conveyed inside a Trigger Frame TF, it may be deduced by counting the number of random RUs, that is to say the RUs having an Association ID (AID) equal to 0 (contrary to Scheduled RUs which have non-zero AIDs).
Step 1000 may be optional in embodiments where the RU backoff parameters are not function of the number NbRU of random RUs.
Next, at step 1001, node 600 obtains queue backoff parameters for the active ACs. Indeed, they are used to compute the RU backoff parameters for OFDMA access as described below. These queue backoff parameters may be retrieved from the active queue backoff engines 311. At step 903, we know that at least one AC is active, but also that data it stores are intended to the AP.
Each active AC maintains boundary CW of its contention window [0,CW] within the contention window boundary interval [CWmin, CWmax], and uses it to select the random queue backoff value.
Thus, examples of queue backoff (AC) parameters are the following:
- boundaries (CWmin, CWmax);
- arbitration interframe spaces (AIFS);
- contention window boundary CW.
Next to step 1001, step 1002 consists for node 600 in computing CWO from the retrieved queue backoff parameters.
Step 1002 may include two sub-steps:
- a first sub-step to determine CWOmin and CWOmax, wherein at least one of CWOmin and CWOmax, preferably both, is an RU backoff parameter determined based on one or more queue backoff parameters;
- a second sub-step to compute or select CWO from range [CWOmin, CWOmax].
This ensures that CWO depends on the current EDCA parameters, such as the CWs. As a consequence, this advantageously takes into account the priorities raised by the EDCA ACs in the process of computing the RU backoff parameters for OBO.
In a first approach, CWOmin and CWOmax and CWO are computed only from information computed locally by node 600. This is for instance the case in the process of Figure 13 described below.
Regarding the first sub-step, as the targeted transmission is of UL OFDMA type, RU backoff parameters CWOmin and CWOmax should be computed differently than the corresponding CWmin/CWmax values of the EDCA scheme.
As an example, CWOmin may be set to the number of random resource units defined in a received trigger frame: CWOmin = NbRU. This improves the usage of OFDMA RUs.
As another example, CWOmin may be the lowest of the lower boundaries (CWmin) of the contention window boundary intervals [CWmin,CWmax] of the active queue backoff engines at node 600, i.e. those having non-zero queue backoff values: CWOmin = Min({CWmin}active AC). This option is preferably performed when the CWmin values are greater than the number of random RUs. Indeed, there is no interest in having CWmin lower than NbRU since the risk of collisions would be very high.
As another example, CWOmin may be set both according to the lowest lower boundary (CWmin) among the contention window boundary intervals [CWmin,CWmax] of the active queue backoff engines at node 600 (i.e. having non-zero queue backoff values), and according to the number of random RUs: CWOmin = Min({CWmin}active AC) x NbRU.
Similarly regarding CWOmax, it may be the upper boundary (CWmax) of the contention window boundary interval [CWmin,CWmax] of the active queue backoff engine 311 having the lowest non-zero queue backoff value, i.e. the next AC to transmit, reflecting the highest priority AC: CWOmax = (CWmax)lowest non-zero AC. This exemplary configuration advantageously adopts the same priority as that AC.
In another example, CWOmax may be the mean of the upper boundaries (CWmax) of the contention window boundary intervals [CWmin,CWmax] of the active queue backoff engines 311, i.e. having non-zero queue backoff values: CWOmax = average({CWmax}active AC). This exemplary configuration advantageously adopts a medium priority, and is more relaxed than the first exemplary configuration.
In another example, CWOmax may be the highest upper boundary (CWmax) among the contention window boundary intervals [CWmin,CWmax] of the active queue backoff engines 311, i.e. having non-zero queue backoff values: CWOmax = max({CWmax}active AC). Node 600 is thus even more relaxed. This exemplary configuration advantageously ensures that the OFDMA access will not take a priority lower than EDCA.
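As a purely illustrative aid (not part of the described embodiments' wording), the following sketch shows how two of the exemplary derivations above could be combined; the AccessCategory container and its field names are assumptions introduced here.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class AccessCategory:
    cw_min: int       # EDCA CWmin of this AC
    cw_max: int       # EDCA CWmax of this AC
    backoff: int      # current queue backoff value (non-zero means active)

def compute_cwo_range(acs, nb_ru):
    """Derive (CWOmin, CWOmax) from the active AC queue backoff engines."""
    active = [ac for ac in acs if ac.backoff > 0]
    # Third CWOmin example: lowest CWmin among active ACs, scaled by NbRU.
    cwo_min = min(ac.cw_min for ac in active) * nb_ru
    # Second CWOmax example: mean of the CWmax values of the active ACs.
    cwo_max = int(mean(ac.cw_max for ac in active))
    return cwo_min, max(cwo_max, cwo_min)

# Example: voice (3, 7) and best-effort (15, 1023) queues active, 4 random RUs.
print(compute_cwo_range([AccessCategory(3, 7, 2), AccessCategory(15, 1023, 9)], 4))
```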
According to a particular option, the various configurations may be used in turn, instead of selecting only one of them. The configuration to use may be selected randomly, or based on factor TBD mentioned above: for instance, if feedback information indicating a large number of collisions is received, the third configuration may be used. Another configuration is used as soon as the feedback information indicates a number of collisions below a predefined threshold.
Regarding the second sub-step, CWO may be initially assigned the CWOmin value. Exemplary embodiments for updating of CWO are described below with reference to Figure 12. CWO may be allowed to increase up to the upper bound CWOmax value.
In a second approach, CWOmin, CWOmax and CWO are still computed from information available locally at node 600, but at least one of them additionally depends on the RU collision and unuse factor TBD received from another node (preferably from the Access Point). This is for instance the case in the process of Figure 14 described below.
For instance, CWOmin may be computed as described above for the first approach, and CWO may be a function of CWOmin and of factor TBD. As an example, CWO is set to 2^(TBD-1) x CWOmin. Note that this value may be upper-bounded by a CWOmax value as determined above.
However, as long as factor TBD has not been received, optional variants may be implemented. In a first variant, the first approach above may be applied, meaning that an initial value (CWOmin) is assigned to CWO. In a second variant, a local RU collision and unuse factor CF, built locally for instance from past history, may be used.
Following step 1002, step 1003 checks whether a triggering event for updating the RU backoff parameters is detected before a new OFDMA access is performed.
Some triggering events may come from the AP.
For instance, similarly to the EDCF parameters (AIFS[AC], CWmin[AC] and CWmax[AC]), the AP may announce the number NbRU of random RUs through beacon frames, or alternatively (or in combination) through the trigger frames. Indeed, the AP can dynamically adapt the number NbRU of RUs depending on network conditions. An example of such adaptation is given above in connection with the building of factor TBD at the AP side. Thus a triggering event for node 600 may be receiving a new trigger/beacon frame defining a number of random resource units that is different from the currently known number of random resource units.
Other triggering events may be produced locally by node 600.
For instance, as mentioned above, data newly stored in a previously empty AC traffic queue 310 activate the corresponding queue backoff engine 311. A corresponding triggering event may thus be detecting that an empty traffic queue from the plurality of traffic queues has now received data to transmit, in which case the CW parameters of this newly activated queue backoff engine may be taken into account to compute the CWO range anew.
More generally, a triggering event may consist in detecting a change in at least one queue backoff parameter used to determine the one or more RU backoff parameters, i.e. when one of the reference queue backoff parameters has changed. Note that this is not the case for beacon frames indicating the same parameters.
In specific embodiments, illustrated for instance in the process of Figures 12 and 13, a triggering event may be the end of OFDMA transmission and thus the reception of a positive or negative acknowledgment of a previous transmission of data in an RU.
In other specific embodiments, illustrated for instance in the process of Figure 14, a triggering event may be the reception of a new trigger frame.
Upon receiving any triggering event, the process of Figure 10 loops back to step 1000 to obtain NbRU and the queue backoff parameters anew, if appropriate, and then to compute new RU backoff parameters.
This ends the process of Figure 10.
Figure 11 illustrates, using a flowchart, steps of accessing the medium based on the conventional EDCA medium access scheme.
Steps 1100 to 1120 describe the conventional waiting process introduced in the EDCA mechanism to reduce collisions on a shared wireless medium. In step 1100, node 600 senses the medium waiting for it to become available (i.e. detected energy is below a given threshold on the primary channel).
When the medium becomes free, step 1110 is executed in which node 600 decrements all the active (non-zero) AC[] queue backoff counters 311 by one.
Next, at step 1120, node 600 determines if at least one of the AC backoff counters reached zero.
If no AC queue backoff reaches zero, node 600 waits for a given time corresponding to a backoff slot (typically 9 µs), and then loops back to step 1100 in order to sense the medium again.
If at least one AC queue backoff reaches zero, step 1130 is executed in which node 600 (more precisely virtual collision handler 312) selects the active AC queue having a zero queue backoff counter and having the highest priority.
At step 1140, the data from this selected AC are selected for transmission.
Next, at step 1150, node 600 initiates an EDCA transmission, in case for instance an RTS/CTS exchange has been successfully performed to have a TxOP granted. Node 600 thus sends the selected data on the medium, during the granted TxOP.
Next, at step 1160, node 600 determines if the transmission has ended, in which case step 1170 is executed.
At step 1170, node 600 updates CW of the selected traffic queue, based on the status of the transmission (positive or negative ack, or no ack received). Typically, node 600 doubles the value of CW if the transmission failed, until CW reaches a maximum value defined by the 802.11 standard, which depends on the AC of the data. If the transmission is successful, CW is set to a minimum value, also defined by the 802.11 standard, which also depends on the AC of the data.
Then, if the selected traffic queue is not empty after the EDCA data transmission, a new associated queue backoff counter is randomly selected from [0,CW], as in step 902.
This ends the process of Figure 11.
Figure 12 illustrates, using a flowchart, exemplary steps for updating the RU backoff parameters and value upon receiving a positive or negative acknowledgment of a multiuser OFDMA transmission.
It is recalled that in a simple implementation, the RU backoff value OBO is used to determine if node 600 is eligible to contend access to an OFDMA resource unit: OBO should not be greater than the number of available random RUs in order to allow an UL OFDMA transmission for node 600. Scheduled RUs are accessible to node 600 if indicated as such by the AP, independently of the RU backoff value OBO.
Thus step 1200 happens during such an UL OFDMA transmission in a random RU (when decremented OBO reaches zero).
Step 1201 is executed when the UL OFDMA transmission finishes, upon obtaining the status of the transmission, either by receiving a positive or negative acknowledgment from the AP, or by inferring a loss of data (in case no ack is received).
As noted above, CWO is initially assigned CWOmin value and may increase up to CWOmax value, for instance.
At step 1201, the contention window boundary CWO is updated depending on a success or failure in transmitting the data. This step and the following step 1202 are performed only if needed. In particular, if the ending transmission has sent all the data intended for the AP (i.e. no more of such data remain in any of the traffic queues), there is no need to keep the RU backoff engine active. It is thus deactivated, by clearing the OBO value.
If the update is needed and the OFDMA transmission has failed (e.g. the transmitted data frame has not been acknowledged), a new CWO value may be computed. In particular, CWO may be doubled, for instance CWO = 2 x (CWO + 1) - 1. Alternatively, a new CWO value may be obtained using the formula 2^(TBD-1) x CWOmin, i.e. using the remote information TBD obtained from the AP.
This reduces the collision probability in case there are too many nodes attempting to access the RUs.
In case the OFDMA transmission succeeds, CWO may be reset to a (predetermined) low boundary, such as CWOmin.
This description of step 1201 reflects a local point of view at node 600.
Following step 1201, step 1202 consists in computing a new RU backoff value OBO based on the updated contention window boundary. The same approaches as described above with reference to step 903 can be used: OBO = random[0, CWO], OBO = random[0, CWO] + AIFS[AC], or OBO = random[0, CWO] / TBD.
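A minimal sketch of the update of Figure 12 (steps 1201-1202), assuming the doubling law and the random[0, CWO] draw cited above; the function name and arguments are illustrative assumptions.

```python
import random

def on_ofdma_transmission_status(success, cwo, cwo_min, cwo_max, has_pending_ap_data):
    """Update CWO (step 1201) then redraw OBO (step 1202), or deactivate the engine."""
    if not has_pending_ap_data:
        return cwo, None                         # OBO cleared: RU backoff engine deactivated
    if success:
        cwo = cwo_min                            # reset to the (predetermined) low boundary
    else:
        cwo = min(2 * (cwo + 1) - 1, cwo_max)    # doubling law, capped at CWOmax
    obo = random.randint(0, cwo)                 # OBO = random[0, CWO]
    return cwo, obo
```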
This ends the process of Figure 12.
Figure 13 illustrates, using a flowchart, a first exemplary embodiment of accessing the medium based on the OFDMA medium access scheme, and of updating the RU backoff parameters when a new trigger frame is received at transmitting node 600.
It means that node 600 has data to transmit, and thus has at least one active EDCA queue backoff engine 311. Furthermore, node 600 has a non-zero RU backoff value OBO, meaning that it has data to send to the AP upon receiving the trigger frame.
At step 1300, node 600 checks whether or not it has received an 802.11a frame in a non-HT format. Preferably, the type of the frame indicates a trigger frame (TF), and the Receiver Address (RA) of the TF is a broadcast or group address (i.e. not a unicast address corresponding specifically to node 600’s MAC address).
Upon receiving the trigger frame, the channel width occupied by the TF control frame is signaled in the SERVICE field of the 802.11 data frame (the DATA field is composed of SERVICE, PSDU, tail, and pad parts). An indication that the control frame is a Trigger Frame may be provided in frame control field 301, which indicates the type of the frame. In addition, frame control field 301 may include a sub-type field for identifying the type of the trigger frame, such as a TF-R.
As noted above, even without such sub-type field, the random RUs can be determined using for instance the AID associated with each RU defined in the TF (AID=0 may mean random RU). So the number of random Resource Units supporting the random OFDMA contention scheme is known at this stage. Obtaining the number of random RUs may be advantageously performed if the number of random RUs varies from one TF to the other.
Next, at step 1301, node 600 decrements the RU backoff value OBO based on the number NbRU of random resource units defined in the received trigger frame: OBO = OBO - NbRU. This is because node 600 is determined to be eligible to transmit data in an OFDMA random RU if its pending RU backoff value OBO is not greater than the number of OFDMA random RUs.
Step 1301 thus updates the OBO value upon receiving a new trigger frame.
In a slight alternative, decrementing the RU backoff value is also based on the RU collision and unuse factor TBD received from another node.
For instance, OBO = (OBO - NbRU) * TBD. As a result, this alternative embodiment updates RU backoff value OBO with an AP’s parameter upon each OFDMA transmission.
In another example, OBO = OBO - (NbRU * TBD). This formula thus adapts the speed of decrementing OBO to the network conditions, through factor TBD.
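As a sketch only, the three decrement variants of step 1301 can be written as follows; the "variant" argument is an assumption introduced for illustration.

```python
def decrement_obo(obo, nb_ru, tbd=None, variant="basic"):
    """Decrement the RU backoff value OBO upon reception of a trigger frame."""
    if variant == "basic" or tbd is None:
        return obo - nb_ru                 # OBO = OBO - NbRU
    if variant == "scale_result":
        return (obo - nb_ru) * tbd         # OBO = (OBO - NbRU) * TBD
    return obo - nb_ru * tbd               # OBO = OBO - (NbRU * TBD)
```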
Following step 1301, step 1302 consists, for node 600, in determining whether it is eligible for transmission. This means that either a scheduled RU of the TF is assigned to node 600, or its RU backoff value OBO is less than or equal to zero.
As an alternative, if node 600 supports concurrent OFDMA transmission capabilities, both cases (scheduled RU and OBO less than or equal to zero) are handled, and steps 1303 to 1310 are conducted in parallel for the two accesses.
In case of no eligibility, the process ends.
In case of eligibility, node 600 selects the RU for sending the data. It is either the assigned scheduled RU, or a random RU selected from the NbRU random RUs of the TF (either randomly or using the RU backoff value OBO before step 1301 as an index to select the random RU having the same index). This is step 1303.
Once the RU for OFDMA transmission has been determined, step 1304 selects, from the active AC traffic queues 310, data to transmit (i.e. data intended to the AP). OFDMA muxer 801 is in charge of selecting such data to be transmitted, from among at least one AC traffic queue 310.
Note that during an MU OFDMA TXOP (i.e. transmission in an RU), node 600 is allowed to transmit multiple data frames (MPDUs) from the same AC traffic queue, with the condition that the whole OFDMA transmission lasts the duration originally specified by the received trigger frame (i.e. the TxOP length).
Of course, if not enough data is stored in the selected AC traffic queue, one or more other active AC traffic queues may be considered.
Generally speaking, the data frames from the active ACs having the highest priority are selected. "Highest priority" may mean having the lowest queue backoff value, or having the highest priority according to the EDCA traffic class prioritization (see Figure 3b).
Following step 1304, step 1305 consists, for node 600, in initiating and performing an MU UL OFDMA transmission of the data selected at step 1304 in the RU selected at step 1303.
As commonly known, the destination node (i.e. the AP) will send an acknowledgment related to each received MPDU from multiple users inside the OFDMA TXOP.
Preferably, the ACK frame is transmitted in a non-HT duplicate format in each 20 MHz channel covered by the initial TF's reservation. This acknowledgment can be used by the multiple source nodes 600 to determine whether the destination (AP) has correctly received the OFDMA MPDUs. This is because source nodes 600 are not able to detect collisions inside their selected RUs.
Thus at step 1306, node 600 obtains a status of transmission, for instance receives an acknowledgment frame.
In case a scheduled RU of the TF is assigned to node 600, since the OFDMA access is not granted through OBO, the algorithm goes directly to step 1309 (arrow not shown in the figure).
Otherwise, the algorithm continues either in step 1307 or 1308. In case of positive acknowledgment, the MU UL OFDMA transmission is considered as a success and step 1307 is executed. Otherwise, step 1308 is executed.
In case of successful OFDMA transmission, CWO is set to a (predetermined) low boundary value, for instance CWOmin, at step 1307.
In case of a failed OFDMA transmission, CWO is doubled at step 1308, for instance CWO = 2 x (CWO + 1) - 1. Note that CWO cannot exceed CWOmax.
Following step 1307 or 1308, step 1309 consists, for node 600, in deactivating the AC queue backoff engines that have no more data to transmit. This is because, due to the UL MU transmission, some AC queues may have been emptied of the transmitted data. In such a case, the corresponding queue backoff value is cleared (the value is no longer taken into account to compute the RU backoff values or to access the medium via EDCA).
As long as the AC queue engines selected at step 1304 still store data to be transmitted in their respective traffic queues, their respective (non-zero) queue backoff values are kept unchanged. Note that in any case, as only an OFDMA access has been performed (and not an access over the EDCA channel), the AC contention window values CW of the queue backoff engine(s) 311 (EDCA CW) are not modified.
Following step 1309, step 1310 consists, for node 600, in determining whether or not a new RU backoff value OBO has to be computed. This is because value OBO has expired (test 1302) and data intended for the AP have been consumed.
Thus, it is first determined whether or not data intended for the AP remain in any of the AC traffic queues. In case of positive determination, a new OBO value is computed. Otherwise, the RU backoff engine is deactivated. The OBO value may be computed according to any approach described above with reference to step 903.
This ends the process of Figure 13.
Figure 14 illustrates, using a flowchart, a second exemplary embodiment of accessing the medium based on the OFDMA medium access scheme, and of updating the RU backoff parameters when a new trigger frame is received at transmitting node 600.
Compared to the first exemplary embodiment, the second exemplary embodiment involves the use of an adjustment parameter issued by the AP, namely the above-mentioned TBD factor, to compute CWO. The TBD factor, reflecting the AP's point of view on collisions in the overall 802.11ax network, may evolve over time and be provided in the TFs.
Until a first TBD value is received by node 600, the latter manages a corresponding local parameter, namely the local RU collision and unuse factor CF. Local factor CF allows local statistics to be used, instead of the AP parameter, when applying steps 1404 and 1405 explained further below.
In this second exemplary embodiment, the computation of RU backoff parameters (including CWO) is performed upon reception of the trigger frame, and not when new data arrive from an upper layer application 701 (as in the case of above step 903).
Thus steps 1400 to 1408 are new compared to Figure 13. Steps 13xx are similar to the corresponding steps 13xx of Figure 13.
After step 1300 of receiving a new TF, step 1400 aims at determining whether or not the RU backoff parameters should be initialized upon receiving the trigger frame. More precisely, step 1400 consists in determining whether the RU backoff engine is inactive and data intended to the AP are now stored in traffic queues 310 (i.e. it is the first TF received after some first data for the AP have been input in the traffic queues).
In case the RU backoff engine needs to be activated, steps 1401 to 1406 are performed to initialize the RU backoff engine, after which step 1301 is executed. In case the RU backoff engine is already active, step 1301 is directly executed.
The initialization sequence (steps 1401-1406) consists first for node 600 in checking whether or not factor TBD has been received from the AP (step 1401).
If such factor TBD has been received, steps 1404-1406 are performed. Otherwise, the uninitialized TBD parameter is initialized with the local CF value (step 1403): here node 600 acts alone to adapt the CW value, that is to say only with regard to the success of its own past OFDMA transmissions.
The evolution of factor CF is described below with reference to steps 1407-1408.
Following step 1403, step 1404 is performed, during which new RU backoff parameters are determined. For instance, a new CWOmin value is determined, using any approach described above with reference to step 1002.
For instance, CWOmin may be set with regard to the lowest CWmin of the active AC queues: CWOmin = Min({CWmin}active AC).
In a variant, CWOmin may be set with regard to both the lowest CWmin of the active AC queues and the number of random RUs: CWOmin = Min({CWmin}active AC) x NbRU.
Next, step 1405 consists, for node 600, in computing CWO from CWOmin and factor TBD.
An example of computation is: CWO = 2^(TBD-1) x CWOmin. The CWO value may be limited to an upper bound, for instance the CWOmax value defined above (step 1002).
As a result, if the TBD factor is 1, then the minimum EDCA CWmin value drives the medium access: CWmin = 3 for the VOICE category, so approximately a maximum of two trigger frames to back off, the third one being the one used to access the medium in the worst case.
Next, at step 1406, node 600 computes the RU backoff value OBO from CWO, for instance as in step 903 above: e.g. OBO = random[0, CWO].
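A minimal sketch of this initialization sequence (steps 1404 to 1406), assuming the CWOmin-times-NbRU variant and the 2^(TBD-1) formula above; the function and argument names are illustrative.

```python
import random

def init_ru_backoff(active_cw_mins, nb_ru, tbd, cwo_max=None):
    """Derive CWOmin, CWO and OBO when the RU backoff engine is (re)activated."""
    cwo_min = min(active_cw_mins) * nb_ru        # variant: Min({CWmin}) x NbRU
    cwo = 2 ** (tbd - 1) * cwo_min               # CWO = 2^(TBD-1) x CWOmin
    if cwo_max is not None:
        cwo = min(cwo, cwo_max)                  # optional upper bound (step 1002)
    obo = random.randint(0, cwo)                 # OBO = random[0, CWO]
    return cwo_min, cwo, obo

# With TBD = 1, the lowest EDCA CWmin drives the access (VOICE example above).
print(init_ru_backoff([3, 15], nb_ru=1, tbd=1))
```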
Back to the positive output of test 1400, the algorithm of Figure 13 is reused, except for steps 1307 and 1308. They are replaced by steps 1407 and 1408 during which the local RU collision and unuse factor CF is updated depending on a success or failure in transmitting the data (instead of directly updating CWO).
Factor CF may evolve within the range [1, CFmax], wherein CFmax is a maximum coefficient, for instance 32. As an alternative, CFmax can be drawn according to the active EDCA AC queues: CFmax = [(CWmax)AC + 1] / [(CWmin)AC' + 1], wherein "AC" and "AC'" designate the active queue backoff engines having the highest priority (e.g. the highest EDCA traffic class prioritization of Figure 3b), or having the highest CWmax value and the lowest CWmin value respectively (that is to say CWmax = 1023 and CWmin = 15, for the Background or Best Effort queues).
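A sketch of the alternative CFmax derivation, assuming the active queue backoff engines are given as (CWmin, CWmax) pairs; the function name is an assumption.

```python
def compute_cf_max(active_acs, default=32):
    """CFmax = [(CWmax)AC + 1] / [(CWmin)AC' + 1] over the active queue backoff engines."""
    if not active_acs:
        return default                       # fixed maximum coefficient, e.g. 32
    cw_mins, cw_maxs = zip(*active_acs)      # active_acs: list of (CWmin, CWmax) pairs
    return (max(cw_maxs) + 1) // (min(cw_mins) + 1)

# Background/Best-effort queue active: (CWmin, CWmax) = (15, 1023) gives CFmax = 64.
print(compute_cf_max([(15, 1023)]))
```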
Thus at steps 1407-1408, factor CF is updated upon each success/failure of OFDMA transmission.
In case of positive acknowledgment, the MU UL OFDMA transmission is considered as a success and step 1407 is executed during which factor CF is set to a (predetermined) low CF value, for instance 1.
Otherwise (failed OFDMA transmission), step 1408 is executed, in which factor CF is doubled. Note that factor CF cannot exceed CFmax.
Note that, further to step 1309 of properly handling the EDCA queue backoff values, step 1310 is removed since the OBO computation is now handled in the initialization phase of steps 1401-1406.
This ends the process of Figure 14.
The various alternative embodiments presented above with respect to Figures 9 to 14 are compatible with each other, and thus may be combined to benefit from their respective advantages.
It is apparent from the above that the first embodiments of the invention (based on determining one or more RU backoff parameters as a function of one or more queue backoff parameters of the queue backoff engines) are fully distributed over the nodes. Furthermore, they keep compliance with the 802.11 standard, in particular because the EDCA prioritization scheme is kept.
Note that the probability of collisions occurring over the RUs, or even the low usage of RUs, is monitored by the AP in some embodiments. This makes it possible to consider this overall network aspect for each individual medium access at the nodes, and thus to advantageously adapt the medium access to improve OFDMA RU usage.
Turning now to the second aspects of embodiments of the invention, described with reference to Figures 15 to 17, it is recalled that they are based on modifying at least one non-zero backoff value based on the data remaining in the traffic queues after a transmitting step.
First of all, all or part of the backoff engines may be activated upon the MAC layer receiving new data, and their backoff values computed.
Such activation and computation may be as described above with reference to Figure 9.
In a slight variant of interest to the second aspects, such new data received in the traffic queues may be marked as being either compatible only with transmission in an EDCA transmission opportunity granted to the node, or compatible only with OFDMA transmission to the access point in a resource unit, or compatible with both transmissions. Such marking information, denoted Scheme Mark, can be added directly in the traffic queues (requiring a slight structural adaptation thereof). Of course, an additional table merely supplementing the current queue structure (without modifying it) may be used to store the compatibility of the data with the two medium access schemes.
Scheme Mark is used below (Figures 16 and 17) to efficiently select the data to transmit either through EDCA transmission or through OFDMA transmission. This is because the marking information helps to speed up the data selection process necessary when the medium access is granted (by the EDCA or the MU UL method). A goal of such efficient selection is to keep the RU backoff engine and the AC queue backoff engines consistent with the data actually stored in the traffic queues.
The optional marking operation is described with reference to Figure 15 which illustrates, using a flowchart, main steps performed by MAC layer 702 of node 600, when receiving new data to transmit. Steps 9xx are similar to same steps of Figure 9.
As mentioned above there are mainly three possibilities for the marking given the two contention-based accesses (EDCA and OFDMA).
First, the data can be compatible only with the EDCA medium access scheme. This is for instance the case of direct link data that are sent directly from one node to another node without being relayed by the AP.
Second, the data are compatible with MU UL OFDMA medium access scheme only. This is for instance the case of data belonging to a node’s data stream already registered with the AP as a stream to pull only via a trigger frame (i.e. a TF-pull registered data stream).
Third and finally, most data are compatible with both the EDCA and MU UL OFDMA medium access schemes. These are data intended for the AP (for instance for relaying, if the addressee is another node), which are not specifically pulled via a trigger frame.
At step 901, new data to transmit is obtained by node 600, for instance from an application locally running on the device.
Following step 901, step 1500 consists in determining whether or not the newly available data are candidates for UL MU OFDMA transmission.
In a first embodiment, this may be done by considering the next receiver address of the data (this information is in the header of the data frames). In the present example, only the data to be sent to (or via) the AP are considered as candidates for the UL MU OFDMA transmission. A consequence of such an approach is that all broadcast data, or data of a direct link session not involving the AP (direct communication between nodes without being relayed by the AP), are excluded from the UL MU OFDMA candidates.
In another embodiment, the upper layer application creating and providing the data is able to indicate whether or not the provided data can be handled by the UL MU OFDMA medium access scheme.
If step 1500 determines that the obtained data are compatible with an UL MU OFDMA medium access scheme, step 903 determines if there is a need to compute a new RU backoff value OBO, and computes a new value if needed. Step 903 has been described above.
Following step 903, step 1501 consists, for node 600, in marking the newly available data as being OFDMA-compatible, using the Scheme Mark information. This information will be used to speed up the selection processes at steps 1600 and 1701.
Of course, as mentioned above, the Scheme Mark information may indicate more than the sole compatibility with UL MU OFDMA. It may take three values to indicate either such compatibility, or a compatibility with EDCA only, or a compatibility with both schemes.
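A sketch of such marking (step 1501), assuming a simple frame representation with a "receiver" address and an optional "stream_id"; the SchemeMark values and helper name are introduced here purely for illustration.

```python
from enum import Enum

class SchemeMark(Enum):
    EDCA_ONLY = 1      # e.g. broadcast or direct-link data not relayed by the AP
    OFDMA_ONLY = 2     # e.g. data of a TF-pull registered stream
    BOTH = 3           # ordinary data intended for the AP

def mark_new_data(frame, ap_address, tf_pull_streams):
    """Tag newly queued data with its medium-access compatibility."""
    if frame["receiver"] != ap_address:
        return SchemeMark.EDCA_ONLY
    if frame.get("stream_id") in tf_pull_streams:
        return SchemeMark.OFDMA_ONLY
    return SchemeMark.BOTH
```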
Following step 1501, step 902 is executed as described above.
In addition, a corresponding Scheme Mark field (compatibility with both schemes) may be specified in association with the data.
In case of negative determination, the process directly ends.
If step 1500 determines that the newly available data are not compatible with the UL MU OFDMA medium access scheme, step 902 is also executed. However, if a Scheme Mark field is specified for the data, it indicates compatibility with the EDCA scheme only.
This ends the process of Figure 15.
Figure 16 illustrates, using a flowchart, steps of accessing the medium based on the conventional EDCA medium access scheme, using the above-defined second aspects, i.e. modifying at least one non-zero backoff value based on the data remaining in the traffic queues after a transmitting step. Figure 16 is based on Figure 11 above, wherein steps 11xx having the same reference are similar.
New steps are steps 1600 and 1601. The first one aims at selecting only data dedicated to be sent via the EDCA method, and the second one aims at ensuring consistency of the RU backoff value OBO with the data remaining in the AC traffic queues 310 after the EDCA-based transmission.
Thus steps 1100-1120 describe the conventional waiting process to detect an available primary channel for transmission (by decrementing the queue backoff values each elementary time unit the communication channel is detected as idle). In case of availability, the active AC traffic queue having a zero queue backoff counter and having the highest priority is selected (step 1130).
Next, new step 1600 is executed to select, from among the data stored in the selected AC traffic queue, which data will be sent on the medium. This is mainly to avoid selecting data that must be sent using an UL MU OFDMA session only.
This step thus aims at selecting data to be transmitted in the EDCA transmission opportunity granted to the node, from a traffic queue associated with a queue backoff value reaching zero, wherein selecting data selects data of the traffic queue that are compatible with transmission in an EDCA transmission opportunity granted to the node.
In first embodiments, new step 1600 may consist in reading a contention scheme indication associated with each data in the traffic queue, i.e. in reading the Scheme Mark field (if implemented using the process of Figure 15). Such reading makes it possible to directly know whether the parsed data are compatible with the EDCA scheme. If they are compatible, they are selected, meaning they are added to the transmission buffer.
In second embodiments (for instance when the process of Figure 15 is not implemented), new step 1600 may consist in reading the destination addresses of the data parsed in the selected traffic queue. Data other than those intended for the AP through a TF-pull registered data stream are selected from the selected traffic queue.
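A sketch of step 1600 under this second embodiment; the "receiver" field and the "tf_pull" flag marking TF-pull registered streams are assumptions introduced for illustration.

```python
def select_edca_data(queue, ap_address):
    """Keep, in FIFO order, only the data that may be sent in the EDCA TxOP."""
    selected = []
    for frame in queue:
        # Exclude only data intended for the AP through a TF-pull registered stream.
        if frame["receiver"] == ap_address and frame.get("tf_pull", False):
            continue
        selected.append(frame)
    return selected
```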
Once data have been selected, they are sent at step 1150, which leads to updating the selected queue backoff engine (its CW and backoff value) at step 1170.
It should be noted that if no data has been selected in step 1600 (when the queue selected in step 1130 contains no data compatible with the EDCA scheme), no real transmission is initiated, but transmission is considered as successful, and step 1170 is executed.
It should also be noted that some of the data thus sent may be compatible with the UL MU OFDMA scheme, meaning that, as a result of transmission 1150, no more OFDMA-compatible data may remain in the traffic queues while the RU backoff value is still strictly positive.
This is why new step 1601 is provided in order to keep the RU backoff engine consistent with the content of the traffic queues.
New step 1601 provides for modifying the non-zero (i.e. positive) RU backoff value by clearing it (i.e. no value exists anymore, or it is set to 0). This is performed when no more data to be transmitted to the AP remain in the traffic queues.
For instance, new step 1601 determines whether or not the traffic queues still store data compatible with OFDMA scheme. This may be done by directly reading the Scheme Mark field associated with the data, or by comparing a destination address of the data remaining in the traffic queues with an address of the access point.
If data compatible with OFDMA scheme (i.e. intended to the AP) remain, the RU backoff value OBO is kept unchanged.
Otherwise it is cleared by deactivating RU backoff engine 800.
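A sketch of the consistency check of step 1601, assuming the same illustrative frame representation as above; returning None models a deactivated RU backoff engine.

```python
def update_obo_after_edca(queues, ap_address, obo):
    """Clear OBO when no AP-bound data remain in any traffic queue after step 1150."""
    remaining = any(frame["receiver"] == ap_address
                    for queue in queues for frame in queue)
    return obo if remaining else None
```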
This ends the process of Figure 16.
Figure 17 illustrates, using a flowchart, an exemplary embodiment of accessing the medium based on the OFDMA medium access scheme, still using the above-defined second aspects, i.e. modifying at least one non-zero backoff value based on the data remaining in the traffic queues after a transmitting step. Figure 17 is based on Figure 13 above, wherein steps 13xx having the same reference are similar.
New steps are steps 1700, 1701, 1702 and 1703. The first three steps 1700-1702 aim at selecting only data dedicated to be sent via the OFDMA method, and the last step 1703 aims at ensuring consistency of the AC queue backoff parameters (in particular the AC backoff values) with the data remaining in their corresponding AC traffic queues 310 after the OFDMA-based transmission in an RU.
Again, the process starts by receiving a TF, either a TF-R or a normal TF, at step 1300. Step 1301 still consists, upon receiving a trigger frame, in decrementing the RU backoff value based on the number of random resource units defined in the received trigger frame. Step 1302 makes it possible to determine whether or not node 600 is eligible to actually access an RU, either a random RU (because the RU backoff value OBO reaches 0) or a scheduled RU assigned to it by the AP.
In case of no eligibility, the process ends.
In case of eligibility, an RU is thus selected at step 1303.
New step 1700 includes selecting at least one of the AC traffic queues from which the data to be transmitted will be selected. This selection can advantageously take into account the EDCA priority of the AC traffic queues.
Indeed, compared to conventional EDCA (see Figure 11), no AC queue backoff value reaches zero, so node 600 has to decide which AC traffic queue will provide the data to transmit.
Various selecting approaches may be used, such as:
- selecting the AC traffic queue having the lowest associated queue backoff value (this approach sticks to the data priority);
- selecting randomly one non-empty AC traffic queue from the traffic queues;
- selecting the AC traffic queue storing the highest amount of data (i.e. the most loaded); or
- selecting the non-empty AC traffic queue having the highest associated traffic priority (see Figure 3b; the voice queue has a higher priority than the video queue, for instance).
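A sketch of step 1700 covering the selection policies listed above; the queue representation (a dict with "data", "backoff" and "priority" entries) and the policy names are assumptions introduced here.

```python
import random

def select_traffic_queue(queues, policy="lowest_backoff"):
    """Choose the AC traffic queue that will feed the OFDMA resource unit."""
    non_empty = [q for q in queues if q["data"]]
    if policy == "lowest_backoff":
        return min(non_empty, key=lambda q: q["backoff"])
    if policy == "most_loaded":
        return max(non_empty, key=lambda q: len(q["data"]))
    if policy == "highest_priority":
        return max(non_empty, key=lambda q: q["priority"])
    return random.choice(non_empty)          # default: random non-empty queue
```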
When the at least one AC traffic queue has been selected, next step 1701 is executed. A first AC traffic queue is thus currently selected (e.g. the one having the lowest backoff value).
In step 1701, node 600 determines from among the data stored in the traffic queue currently selected which data to send in the selected RU.
Node 600 thus parses the data stored in the traffic queue to avoid sending data that are not compatible with the UL MU OFDMA medium access scheme.
To do so, node 600 selects data of the currently selected traffic queue that are compatible with OFDMA transmission in a resource unit. This may be done by reading the Scheme Mark field as mentioned above, or by comparing a destination address of the data parsed in the selected traffic queue with an address of the access point. The OFDMA-compatible data are added to the MAC transmission buffer until the amount of data required by the trigger frame is reached (i.e. transmission of the data requires the specified TxOP duration).
As the retrieval of data from the AC traffic queue must be performed in a FIFO mode, specific processing must be performed when encountering non OFDMA-compatible data.
In one embodiment, upon detecting data not compatible with OFDMA transmission in a resource unit, the algorithm stops feeding the transmission buffer and transmitting step 1305 is performed, meaning that node 600 stops transmitting additional data from the traffic queues.
In another embodiment, upon detecting data not compatible with OFDMA transmission in a resource unit, or upon reaching the end of the currently selected traffic queue, another AC traffic queue may be considered (the search in the currently selected traffic queue is thus stopped). That is why step 1702 is executed, to consider whether or not additional data are needed to entirely fill the RU. In the affirmative, a new AC traffic queue is selected at a new occurrence of step 1700. As a result, new data that are compatible with transmission in a resource unit may be selected from another selected AC traffic queue.
In yet another embodiment, upon detecting data not compatible with OFDMA transmission in a resource unit, the non-compatible data may be skipped and other data may be searched in the same currently selected traffic queue, that are compatible with transmission in a resource unit. Of course, if the end of the currently selected traffic queue is reached, another traffic queue may be selected.
As explained above, new step 1702 determines whether or not the amount of selected data matches the quantity of data required by the selected RU. If more data is needed, the process loops back to step 1700 to consider a next traffic queue.
Adding data up to the quantity of data required by the selected RU is advantageously performed in order to prevent legacy nodes from detecting the communication channel bearing the RU as being available. Indeed, the transmitted data ensure that the energy level is maintained in the RU for the whole granted TxOP.
Thus, if the selected data do not reach the quantity of data required by the selected RU, padding data may be added to the transmission buffer in order to transmit padding up to the end of the resource unit. This is why it is worth sending data from different traffic queues rather than sending padding data.
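A sketch combining steps 1701-1702 and the padding fallback, under the "skip non-compatible data" variant described above; the RU capacity, the frame "length" field and the compatibility predicate are assumptions introduced for illustration.

```python
def fill_ru_buffer(queues, ru_capacity, is_ofdma_compatible):
    """Fill the RU transmission buffer up to its capacity, padding if data run short."""
    buffer, size = [], 0
    for queue in queues:                         # queues already ordered by step 1700
        for frame in queue:
            if not is_ofdma_compatible(frame):   # skip variant of the embodiments above
                continue
            if size + frame["length"] > ru_capacity:
                break
            buffer.append(frame)
            size += frame["length"]
    if size < ru_capacity:                       # keep energy on the RU for the whole TxOP
        buffer.append({"padding": True, "length": ru_capacity - size})
    return buffer
```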
Following step 1702, the selected data are sent in the selected RU (random or scheduled). Based on the acknowledgment from the AP, steps 1306-1308 update the RU backoff parameters, such as CWO.
It should be noted that some of the data thus sent may be compatible with the EDCA scheme, meaning that, as a result of transmission 1305, no more data may remain in one or more traffic queues that were previously active (i.e. with respective queue backoff values strictly positive).
This is why new step 1703 is provided in order to keep the AC queue backoff engines consistent with the content of their respective traffic queues.
New step 1703 provides for modifying at least one non-zero AC queue backoff value by clearing it (i.e. no value exists anymore, or it is set to 0). This is performed when no more data remain in the associated AC traffic queue.
For instance, new step 1703 determines whether or not one or more previously active AC backoff engines now have a respective empty AC traffic queue.
In the affirmative, the one or more AC backoff engines 311 are deactivated.
In the negative, and for each active AC backoff engine having a respective non-empty AC traffic queue, the AC queue backoff value is kept unchanged.
Following step 1703, step 1310 is performed to compute a new RU backoff value if needed (i.e. if some OFDMA-compatible data remain in the AC traffic queues).
This ends the process of Figure 17.
Although the present invention has been described hereinabove with reference to specific embodiments, the present invention is not limited to the specific embodiments, and modifications which lie within the scope of the present invention will be apparent to a person skilled in the art.
Many further modifications and variations will suggest themselves to those versed in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention, that being determined solely by the appended claims. In particular the different features from different embodiments may be interchanged, where appropriate.
In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used.

Claims (47)

1. A communication method in a communication network comprising an access point and a plurality of nodes, at least one node comprising: a plurality of traffic queues for serving data traffic at different priorities; a plurality of queue backoff engines, each associated with a respective traffic queue for computing a respective queue backoff value to be used to contend access to at least one communication channel in order to transmit data stored in the respective traffic queue; and an RU backoff engine separate from the queue backoff engines, for computing an RU backoff value to be used to contend access to at least one random resource unit splitting a transmission opportunity granted on the communication channel, in order to transmit data stored in either traffic queue, the method comprising, at the node: transmitting data from at least one traffic queue in a transmission opportunity or in a resource unit splitting a transmission opportunity; and modifying at least one non-zero backoff value based on the data remaining in the traffic queues after the transmitting step.
2. The method of Claim 1, wherein modifying at least one non-zero backoff value includes clearing the RU backoff value.
3. The method of Claim 2, wherein clearing the RU backoff value is triggered when no more data to be transmitted to the AP remain in the traffic queues.
4. The method of Claim 3, further comprising comparing a destination address of the data remaining in the traffic queues with an address of the access point.
5. The method of Claim 1, further comprising selecting data to be transmitted in the transmission opportunity, from a traffic queue associated with a queue backoff value reaching zero, wherein selecting data selects data of the traffic queue that are compatible with transmission in a transmission opportunity granted to the node.
6. The method of Claim 3 or 5, further comprising reading a contention scheme indication associated with each data in the traffic queue, the contention scheme indication defining whether the data is compatible only with transmission in a transmission opportunity granted to the node, or compatible only with transmission to the access point in a resource unit, or compatible with both transmissions.
7. The method of Claim 1, further comprising receiving a trigger frame from the access point in the communication network, the trigger frame reserving the transmission opportunity on the communication channel and defining resource units, RUs, forming the communication channel including the at least one random resource unit.
8. The method of Claim 1, wherein modifying at least one non-zero backoff value includes clearing at least one of the queue backoff values.
9. The method of Claim 8, wherein clearing the non-zero queue backoff value is triggered when no more data remain in the associated traffic queue.
10. The method of Claim 1, wherein the transmitted data are transmitted in the random resource unit.
11. The method of Claim 1, wherein the transmitted data are transmitted in a scheduled resource unit splitting the granted transmission opportunity, the scheduled resource being reserved by the access point for said node.
12. The method of Claim 1, further comprising selecting at least one of the traffic queues from which the data to be transmitted are selected.
13. The method of Claim 12, wherein selecting one traffic queue includes one of: selecting the traffic queue having the lowest associated queue backoff value; selecting randomly one non-empty traffic queue from the traffic queues; selecting the traffic queue storing the highest amount of data; selecting the non-empty traffic queue having the highest associated traffic priority.
14. The method of Claim 12, wherein selecting data from one selected traffic queue includes selecting data of the selected traffic queue that are compatible with transmission in a resource unit.
15. The method of Claim 14, further comprising comparing a destination address of the data in the selected traffic queue with an address of the access point.
16. The method of Claim 14, further comprising reading a contention scheme indication associated with each data in the traffic queue, the contention scheme indication defining whether the data is compatible only with transmission in a transmission opportunity granted to the node, or compatible only with transmission to the access point in a resource unit, or compatible with both transmissions.
17. The method of Claim 14, further comprising, upon detecting data not compatible with transmission in a resource unit, stopping transmitting additional data from the traffic queues.
18. The method of Claim 14, further comprising, upon detecting data not compatible with transmission in a resource unit, skipping the non-compatible data and searching for other data in the selected traffic queue that are compatible with transmission in a resource unit.
19. The method of Claim 14, further comprising, upon detecting data not compatible with transmission in a resource unit or upon reaching the end of one selected traffic queue, selecting data from another selected traffic queue that are compatible with transmission in a resource unit.
20. The method of Claim 14, further comprising transmitting padding data up to the end of the resource unit.
21. The method of Claim 14, further comprising marking data in the traffic queues as being either compatible only with transmission in a transmission opportunity granted to the node, or compatible only with transmission to the access point in a resource unit, or compatible with both transmissions.
22. The method of Claim 1, further comprising, upon receiving a trigger frame, decrementing the RU backoff value based on the number of random resource units defined in the received trigger frame.
23. The method of Claim 1, further comprising, decrementing the queue backoff values each elementary time unit the communication channel is detected as idle.
24. A communication method in a communication network comprising an access point and a plurality of nodes, at least one node comprising: a plurality of traffic queues for serving data traffic at different priorities; a plurality of queue backoff engines, each associated with a respective traffic queue for computing a respective queue backoff value to be used to contend access to at least one communication channel in order to transmit data stored in the respective traffic queue; and an RU backoff engine separate from the queue backoff engines, for computing an RU backoff value to be used to contend access to at least one random resource unit splitting a transmission opportunity granted on the communication channel, in order to transmit data stored in either traffic queue, the method comprising, at the node: selecting data from at least one traffic queue and transmitting the selected data in a transmission opportunity granted to the node if a first access scheme is used or in a resource unit splitting a transmission opportunity granted to the access point if a second access scheme is used, wherein selecting the data includes: successively considering data from the at least one traffic queue; determining whether or not a data item currently considered is compatible with a transmission according to the access scheme used; and selecting for transmission the data item currently considered only in case of compatibility.
25. The method of Claim 24, wherein selecting data from at least one traffic queue is triggered upon expiry of one of the backoff values.
26. The method of Claim 25, wherein the expiring backoff value is the RU backoff value thereby making the second access scheme to be used, and wherein the data item or items are selected if compatible with transmission to the access point in a resource unit.
27. The method of Claim 26, further comprising selecting at least one of the traffic queues from which the data to be transmitted are selected.
28. The method of Claim 27, wherein selecting one traffic queue includes one of: selecting the traffic queue having the lowest associated queue backoff value; selecting randomly one non-empty traffic queue from the traffic queues; selecting the traffic queue storing the highest amount of data; selecting the non-empty traffic queue having the highest associated traffic priority.
29. The method of Claim 26, wherein the determining step comprises comparing a destination address of the data item currently considered with an address of the access point.
30. The method of Claim 25, wherein the expiring backoff value is one of the queue backoff values thereby making the first access scheme to be used, and wherein the data item or items are selected from the traffic queue associated with the expiring queue backoff value and are selected if compatible with transmission in a transmission opportunity granted to the node.
31. The method of Claim 26 or 30, wherein the determining step comprises reading a contention scheme indication associated with each data item in the traffic queue, the contention scheme indication defining whether the data is compatible only with transmission in a transmission opportunity granted to the node, or compatible only with transmission to the access point in a resource unit, or compatible with both transmissions.
32. The method of Claim 24, further comprising receiving a trigger frame from the access point in the communication network, the trigger frame reserving the transmission opportunity on the communication channel and defining resource units, RUs, forming the communication channel including the at least one random resource unit.
33. The method of Claim 24, further comprising, upon determining the data item currently considered is not compatible with a transmission according to the access scheme used, stopping the step of successively considering data from the at least one traffic queue.
34. The method of Claim 24, further comprising, upon determining the data item currently considered is not compatible with a transmission according to the access scheme used, skipping the non-compatible data item and searching for other data in the traffic queue that are compatible with a transmission according to the access scheme used.
35. The method of Claim 24, further comprising, upon determining the data item currently considered is not compatible with a transmission according to the access scheme used or upon reaching the end of the traffic queue, successively considering data from another traffic queue to select data item or items that are compatible with a transmission according to the access scheme used.
36. The method of Claim 24, further comprising transmitting padding data up to the end of a resource unit.
37. The method of Claim 24, wherein the first access scheme is a contention-based scheme based on the queue backoff values, and the data item selected for transmission is determined to be compatible with a transmission in a transmission opportunity granted to the node.
38. The method of Claim 24, wherein the second access scheme is a contention-based scheme based on the RU backoff value, and the data item selected for transmission is determined to be compatible with a transmission to the access point in a resource unit.
39. The method of Claim 24, wherein the second access scheme is a scheduled access scheme to access scheduled resource unit splitting the granted transmission opportunity, the scheduled resource being reserved by the access point for said node, and the data item selected for transmission is determined to be compatible with a transmission to the access point in a resource unit.
40. The method of Claim 24, further comprising marking data in the traffic queues as being either compatible only with transmission in a transmission opportunity granted to the node, or compatible only with transmission to the access point in a resource unit, or compatible with both transmissions.
41. The method of Claim 24, further comprising, upon receiving a trigger frame, decrementing the RU backoff value based on the number of random resource units defined in the received trigger frame.
42. The method of Claim 24, further comprising, decrementing the queue backoff values each elementary time unit the communication channel is detected as idle.
43. A non-transitory computer-readable medium storing a program which, when executed by a microprocessor or computer system in a device of a communication network, causes the device to perform the method of Claim 1 or 24.
44. A communication device forming node in a communication network comprising an access point and a plurality of nodes, comprising: a plurality of traffic queues for serving data traffic at different priorities; a plurality of queue backoff engines, each associated with a respective traffic queue for computing a respective queue backoff value to be used to contend access to at least one communication channel in order to transmit data stored in the respective traffic queue; an RU backoff engine separate from the queue backoff engines, for computing an RU backoff value to be used to contend access to at least one random resource unit splitting a transmission opportunity granted on the communication channel, in order to transmit data stored in either traffic queue; and a controller for transmitting data from at least one traffic queue in a transmission opportunity or in a resource unit splitting a transmission opportunity, wherein the RU backoff engine is further configured to modify at least one non-zero backoff value based on the data remaining in the traffic queues after the transmission.
45. A communication device forming node in a communication network comprising an access point and a plurality of nodes, comprising: a plurality of traffic queues for serving data traffic at different priorities; a plurality of queue backoff engines, each associated with a respective traffic queue for computing a respective queue backoff value to be used to contend access to at least one communication channel in order to transmit data stored in the respective traffic queue; an RU backoff engine separate from the queue backoff engines, for computing an RU backoff value to be used to contend access to at least one random resource unit splitting a transmission opportunity granted on the communication channel, in order to transmit data stored in either traffic queue; and a controller for selecting data from at least one traffic queue and transmitting the selected data in a transmission opportunity granted to the node if a first access scheme is used or in a resource unit splitting a transmission opportunity granted to the access point if a second access scheme is used, wherein the controller is configured, in order to select the data, to: successively consider data from the at least one traffic queue; determine whether or not a data item currently considered is compatible with a transmission according to the access scheme used; and select for transmission the data item currently considered only in case of compatibility.
46. A wireless communication system having an access point and at least one communication device forming node according to Claim 44 or 45.
47. A communication method in a communication network comprising a plurality of nodes, substantially as herein described with reference to, and as shown in, Figure 16, or Figure 17, or Figures 16 and 17, or Figures 15, 16 and 17 of the accompanying drawings.
GB1518867.5A 2015-10-23 2015-10-23 Improved contention mechanism for access to random resource units in an 802.11 channel Active GB2543584B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1518867.5A GB2543584B (en) 2015-10-23 2015-10-23 Improved contention mechanism for access to random resource units in an 802.11 channel
GB1804883.5A GB2562601B (en) 2015-10-23 2015-10-23 Improved contention mechanism for access to random resource units in an 802.11 channel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1518867.5A GB2543584B (en) 2015-10-23 2015-10-23 Improved contention mechanism for access to random resource units in an 802.11 channel

Publications (3)

Publication Number Publication Date
GB201518867D0 GB201518867D0 (en) 2015-12-09
GB2543584A true GB2543584A (en) 2017-04-26
GB2543584B GB2543584B (en) 2018-05-09

Family

ID=55130189

Family Applications (2)

Application Number Title Priority Date Filing Date
GB1804883.5A Active GB2562601B (en) 2015-10-23 2015-10-23 Improved contention mechanism for access to random resource units in an 802.11 channel
GB1518867.5A Active GB2543584B (en) 2015-10-23 2015-10-23 Improved contention mechanism for access to random resource units in an 802.11 channel

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB1804883.5A Active GB2562601B (en) 2015-10-23 2015-10-23 Improved contention mechanism for access to random resource units in an 802.11 channel

Country Status (1)

Country Link
GB (2) GB2562601B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170257887A1 (en) * 2016-03-01 2017-09-07 Chittabrata Ghosh Random access with carrier sensing
GB2560540A (en) * 2017-03-14 2018-09-19 Canon Kk Queues management for multi-user and single user edca transmission mode in wireless networks
GB2560562A (en) * 2017-03-15 2018-09-19 Canon Kk Improved access management to multi-user uplink random resource units by a plurality of BSSs
WO2018217141A1 (en) * 2017-05-22 2018-11-29 Telefonaktiebolaget Lm Ericsson (Publ) Controlling and/or enabling control of a backoff procedure in a wireless communication system

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3664566B1 (en) * 2016-03-04 2022-05-25 Panasonic Intellectual Property Management Co., Ltd. Access point for generating a trigger frame for allocating resource units for UORA
GB2552189B (en) * 2016-07-13 2020-08-05 Canon Kk Restored fairness in an 802.11 network implementing resource units
CN113595950B (en) * 2021-06-29 2023-06-13 中国船舶重工集团公司第七一五研究所 Signal compatibility method for multi-body underwater acoustic communication network

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7801104B2 (en) * 2006-10-26 2010-09-21 Hitachi, Ltd. System and method for reducing packet collisions in wireless local area networks
US9086488B2 (en) * 2010-04-20 2015-07-21 Michigan Aerospace Corporation Atmospheric measurement system and method
KR101760074B1 (en) * 2010-08-26 2017-07-20 Marvell World Trade Ltd. Wireless communications with primary and secondary access categories
US20120207074A1 (en) * 2011-02-10 2012-08-16 Nokia Corporation Transmitting multiple group-addressed frames in a wireless network
TWI478550B (en) * 2011-06-07 2015-03-21 Htc Corp Method of back-off procedure setup in a wireless communication system
US9913296B2 (en) * 2013-01-16 2018-03-06 Lg Electronics Inc. Method for performing backoff in wireless LAN system and apparatus therefor
US20150016352A1 (en) * 2013-07-10 2015-01-15 Qualcomm Incorporated Methods and apparatus for performing random access channel procedures
US9699747B2 (en) * 2014-09-12 2017-07-04 Electronics And Telecommunications Research Institute Synchronization method in distributed wireless communication system and terminal supporting the same
US9942925B2 (en) * 2015-01-07 2018-04-10 Qualcomm, Incorporated Station contention behavior in uplink multiple user protocols
GB2536453B (en) * 2015-03-17 2018-01-24 Canon Kk Enhanced channel allocation over multi-channel wireless networks
GB2542818A (en) * 2015-09-30 2017-04-05 Canon Kk Methods and systems for reserving a transmission opportunity for a plurality of wireless communication devices belonging to a collaborative group

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US16352A (en) * 1857-01-06 Improvement in rudders
US81047A (en) * 1868-08-11 Caleb Whitmore
US101308A (en) * 1870-03-29 Michael Powell
US207074A (en) * 1878-08-13 Improvement in hay-tedders
US314694A (en) * 1885-03-31 Shaft-shackle
WO2011140302A1 (en) * 2010-05-05 2011-11-10 Qualcomm Incorporated Collision detection and backoff window adaptation for multiuser mimo transmission

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170257887A1 (en) * 2016-03-01 2017-09-07 Chittabrata Ghosh Random access with carrier sensing
US10178694B2 (en) * 2016-03-01 2019-01-08 Intel IP Corporation Random access with carrier sensing
GB2560540A (en) * 2017-03-14 2018-09-19 Canon Kk Queues management for multi-user and single user edca transmission mode in wireless networks
GB2560540B (en) * 2017-03-14 2019-05-01 Canon Kk Queues management for multi-user and single user edca transmission mode in wireless networks
US10966247B2 (en) 2017-03-14 2021-03-30 Canon Kabushiki Kaisha Queues management for multi-user and single user EDCA transmission mode in wireless networks
GB2560562A (en) * 2017-03-15 2018-09-19 Canon Kk Improved access management to multi-user uplink random resource units by a plurality of BSSs
GB2560562B (en) * 2017-03-15 2019-11-06 Canon Kk Improved access management to multi-user uplink random resource units by a plurality of BSSs
WO2018217141A1 (en) * 2017-05-22 2018-11-29 Telefonaktiebolaget Lm Ericsson (Publ) Controlling and/or enabling control of a backoff procedure in a wireless communication system

Also Published As

Publication number Publication date
GB2562601A (en) 2018-11-21
GB2562601B (en) 2019-06-12
GB2543584B (en) 2018-05-09
GB201804883D0 (en) 2018-05-09
GB201518867D0 (en) 2015-12-09

Similar Documents

Publication Publication Date Title
US11039476B2 (en) Contention mechanism for access to random resource units in an 802.11 channel
US11595996B2 (en) Enhanced management of ACs in multi-user EDCA transmission mode in wireless networks
US11019660B2 (en) Trigger frames adapted to packet-based policies in an 802.11 network
US10492231B2 (en) Backoff based selection method of channels for data transmission
US20220030629A1 (en) Restored fairness in an 802.11 network implementing resource units
GB2562601B (en) Improved contention mechanism for access to random resource units in an 802.11 channel
GB2543583A (en) Improved contention mechanism for access to random resource units in an 802.11 channel
US10966247B2 (en) Queues management for multi-user and single user EDCA transmission mode in wireless networks
GB2555143B (en) QoS management for multi-user EDCA transmission mode in wireless networks
GB2575555A (en) Enhanced management of ACs in multi-user EDCA transmission mode in wireless networks
GB2588267A (en) Restored fairness in an 802.11 network implementing resource units
GB2561677A (en) Improved contention mechanism for access to random resource units in an 802.11 Channel
GB2588042A (en) Restored fairness in an 802.11 network implementing resource units