WO2020063722A1 - Congestion management in a wireless communications network - Google Patents

Congestion management in a wireless communications network

Info

Publication number
WO2020063722A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2019/108060
Other languages
French (fr)
Other versions
WO2020063722A9 (en)
Inventor
Olivier Marco
Original Assignee
JRD Communication (Shenzhen) Ltd.
Application filed by JRD Communication (Shenzhen) Ltd.
Priority to CN201980039629.8A (CN112352449B)
Publication of WO2020063722A1
Publication of WO2020063722A9

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/24: Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2483: Traffic characterised by specific attributes, e.g. priority or QoS, involving identification of individual flows
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00: Network traffic management; Network resource management
    • H04W 28/02: Traffic management, e.g. flow control or congestion control
    • H04W 28/0289: Congestion control

Definitions

  • This method has the benefit of not adding reordering delay, as there is no PDCP PDU dropping, and of not impacting the robust header compression (RoHC) protocol (which is mostly robust to packet loss, but might still be impacted if some specific RoHC packets are discarded).
  • In order to perform marking within the PDCP header, IAB nodes would need to be configured with the PDCP format (e.g., PDCP SN length, ...).
  • Figure 10 shows the method of managing traffic flow congestion by marking at least one of a plurality of protocol data units in a traffic flow with a discard instruction in the DL.
  • Figure 11 shows the method of managing traffic flow congestion by marking at least one of a plurality of protocol data units in a traffic flow with a discard instruction in the UL.
  • Figure 12 shows the format of the PDCP header in a PDU with 12 bits PDCP SN. This format is applicable for UM DRBs and AM DRBs. This shows how the marking can comprise setting a reserve bit R in the PDCP header to indicate a discard instruction.
  • An alternative to marking the PDCP header is to modify the PDCP data PDU so that the data/MAC-I part is removed.
  • “empty” the SDU part (and MAC-I if included) of the PDCP data PDU and keep only the PDCP header part. This would result in a “PDCP header only” PDCP PDU.
  • At the PDCP receiver, as there is no SN gap, there is no reordering delay; and as there is no longer an SDU part, the goal of dropping the end-user SDU is achieved as well.
  • A possible benefit of this approach is that the SDU part which would be discarded is not transmitted, hence not wasting resources, while a drawback of this approach is a possible impact on the RoHC protocol, if RoHC is configured.
  • In order to remove the PDCP SDU part, IAB nodes would likewise need to be configured with the PDCP format (e.g., PDCP SN length, ...).
  • Since the IAB node does not terminate the PDCP protocol, in some circumstances it may not be desirable to have it modify PDCP headers of PDUs on the fly, especially at intermediate IAB nodes.
  • Marking the at least one of the plurality of protocol data units with a discard instruction may therefore comprise marking an adaptation layer header of the at least one of the plurality of protocol data units with a discard instruction, or marking a GTP-U extension of the at least one of the plurality of protocol data units with a discard instruction. This may comprise setting a bit in the header or extension to indicate a discard instruction.
  • the discard instruction in the adaptation layer would be relayed to a discard instruction in the PDCP header.
  • the discard instruction in the adaptation layer would be relayed to a discard instruction in the GTP-U extension.
  • the discard instruction may be indicated in the GTP-U extension.
  • the discard instruction in the GTP-U header may also be relayed into a discard instruction in the PDCP header.
  • The discard instruction at the adaptation layer or GTP-U extension will instruct the node (e.g. the access node for DL, or the donor node for UL) to remove the PDCP SDU part before forwarding the PDCP PDU to the PDCP receiver (in the UE in the DL case, in the CU at the donor in the UL case).
  • a UE capability indicating that PDCP is enhanced to support those features may be required.
  • the Access Node performs reordering and does not introduce any additional delay (at least for AM as there is no gap in PDCP PDU sequence) .
  • the Access Node can discard it.
  • the Access Node would transmit the PDCP PDU stream to the UE in sequence, except for the dropped PDCP PDU.
  • The UE will undergo reordering corresponding to one link only, i.e. 1 x LinkReorderingDelay instead of N x LinkReorderingDelay. This enables keeping the LinkReorderingDelay to a reasonable value and limits the congestion notification delay.
  • LTE RLC provides in-order-delivery
  • An alternative to marking or removing the SDU part of the PDCP data PDU is to corrupt the SDU part of the PDCP data PDU, ensuring that the underlying packet header (TCP/IP header) will not be understood by the receiving IP protocol stack, so that the packet is discarded at that point.
  • A method is also provided of managing traffic flow congestion in a wireless communications network, the wireless communications network comprising a plurality of nodes which support integrated wireless access and backhaul between a core network and user equipment, the method comprising: at a first node of the network, receiving at least one traffic flow comprising a plurality of service data units, corrupting a packet data convergence protocol layer SDU part of at least one of the plurality of service data units, and sending the plurality of service data units including the at least one corrupted service data unit to a second node of the network; and, at the second node of the network, receiving the plurality of service data units including the at least one corrupted service data unit, discarding the corrupted service data unit to create a traffic congestion indicator, and sending the traffic congestion indicator to a transmitter of the traffic flow to manage congestion of traffic flow in the wireless communications network.
  • the UE itself is a node of the network and accordingly may perform relevant steps of the method including discarding the SDU.
  • IAB may also support Explicit Congestion Notification (ECN) .
  • The ECN marking normally resides in the end-user TCP/IP packet header, which is not visible to the IAB node (as it is ciphered).
  • ECN marking can be introduced in the adaptation layer and/or GTP-U and/or PDCP. The ECN marking would be relayed from the end-user TCP/IP header to the GTP-U or PDCP header, and further relayed within the adaptation layer if needed.
  • An IAB node performing AQM operation would then have visibility of the ECN marking. For instance, when AQM considers that a congestion signal needs to be sent for a particular queue, if the packet is marked as “ECN capable transport”, the IAB node would not drop the packet or mark it with a discard instruction, but instead mark it with “congestion encountered” at the adaptation layer, GTP-U or PDCP header (see the sketch after this list). The ECN indication will be relayed as needed, and finally mapped to the ECN field of the end-user SDU.
  • A method is provided of managing traffic flow congestion in a wireless communications network, the wireless communications network comprising a plurality of nodes which support integrated wireless access and backhaul between a core network and user equipment, the method comprising: at a first node of the network, receiving at least one traffic flow comprising a plurality of protocol data units, marking at least one of the plurality of protocol data units with an explicit congestion notification, and sending the plurality of protocol data units including the at least one marked protocol data unit to a second node of the network; and, at the second node of the network, receiving the plurality of protocol data units including the at least one marked protocol data unit, reading the explicit congestion notification of the marked protocol data unit and relaying the explicit congestion notification up to a central unit of the network to manage congestion of traffic flow in the wireless communications network.
  • the UE itself is a node of the network and accordingly may perform relevant steps of the method including discarding the SDU.
  • ECN marking might also be introduced in RLC layer.
  • However, IAB nodes have even less visibility of the RLC layer.
  • The CU may get other useful information before PDCP processing (after which packets are ciphered). For instance, the CU can know whether a given packet should not be discarded (e.g., TCP SYN packets, etc.).
  • A discard prohibit flag could be relayed in a GTP-U extension and/or an adaptation layer header to prevent IAB nodes from discarding such packets.
  • Such a mechanism could be used in UL at the UE side, at the SDAP or PDCP layer.
  • Where the IAB node is configured to have visibility of the PDCP and/or SDAP header, as discussed above (for instance by configuring the IAB node with the PDCP and/or SDAP header format/presence), packets such as PDCP or SDAP control packets can be treated as protected, so that they will not be discarded or considered by the AQM mechanism.
  • The source (e.g. a TCP sender or RTP source) needs to reduce its traffic flow rate and should be notified of the congestion as soon as possible.
  • In addition to AQM actions (dropping/marking), feedback could be sent upstream.
  • the feedback could include parameters such as:
  • queue ID (at least the UE-bearer identity, but this could also include the QFI or another flow indicator, which can enable enhanced queue granularity)
  • The queue delay has a benefit over the queue size as it relates directly to the QoS, does not relate directly to the throughput, and is one of the main parameters used by modern AQM techniques.
  • the queue size alone is not really meaningful for congestion as generally, independently of congestion, the queue size is expected to scale proportionally with the throughput of a flow. Hence for different flows having different throughput, the queue sizes are expected to be in proportion to their respective throughput. This is mainly because queuing / buffering in a scheduler has to accommodate enough data for transmission over a given time period (scheduling period) .
  • the queue delay on the other hand is more meaningful as it shows to which extent the scheduler is not able to send the packets in time, i.e. to which extent it is congested.
  • the feedback could be sent from the bottleneck node to the parent node.
  • this feedback should be relayed to the parent node as soon as received, up to the donor. This could mitigate the issue described earlier in figures 6 and 7.
  • the intermediate IAB nodes may aggregate (add) the corresponding queue length and/or delays, so that the parent node or donor would have an aggregated view of the queue length/queue delay downstream.
  • Such feedback might be configured as trigger-based (with a hysteresis and/or prohibit timer to avoid sending the feedback too often), preferably over the MAC layer for quick feedback up to the donor.
  • any of the devices or apparatus that form part of the network may include at least a processor, a storage unit and a communications interface, wherein the processor unit, storage unit, and communications interface are configured to perform the method of any aspect of the present invention. Further options and choices are described below.
  • the signal processing functionality of the embodiments of the invention especially the gNB and the UE may be achieved using computing systems or architectures known to those who are skilled in the relevant art.
  • Computing systems such as, a desktop, laptop or notebook computer, hand-held computing device (PDA, cell phone, palmtop, etc. ) , mainframe, server, client, or any other type of special or general purpose computing device as may be desirable or appropriate for a given application or environment can be used.
  • the computing system can include one or more processors which can be implemented using a general or special-purpose processing engine such as, for example, a microprocessor, microcontroller or other control module.
  • the computing system can also include a main memory, such as random access memory (RAM) or other dynamic memory, for storing information and instructions to be executed by a processor. Such a main memory also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by the processor.
  • the computing system may likewise include a read only memory (ROM) or other static storage device for storing static information and instructions for a processor.
  • ROM read only memory
  • the computing system may also include an information storage system which may include, for example, a media drive and a removable storage interface.
  • the media drive may include a drive or other mechanism to support fixed or removable storage media, such as a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a compact disc (CD) or digital video drive (DVD) read or write drive (R or RW) , or other removable or fixed media drive.
  • Storage media may include, for example, a hard disk, floppy disk, magnetic tape, optical disk, CD or DVD, or other fixed or removable medium that is read by and written to by media drive.
  • the storage media may include a computer-readable storage medium having particular computer software or data stored therein.
  • an information storage system may include other similar components for allowing computer programs or other instructions or data to be loaded into the computing system.
  • Such components may include, for example, a removable storage unit and an interface , such as a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory module) and memory slot, and other removable storage units and interfaces that allow software and data to be transferred from the removable storage unit to computing system.
  • the computing system can also include a communications interface.
  • a communications interface can be used to allow software and data to be transferred between a computing system and external devices.
  • Examples of communications interfaces can include a modem, a network interface (such as an Ethernet or other NIC card) , a communications port (such as for example, a universal serial bus (USB) port) , a PCMCIA slot and card, etc.
  • Software and data transferred via a communications interface are in the form of signals which can be electronic, electromagnetic, and optical or other signals capable of being received by a communications interface medium.
  • computer program product may be used generally to refer to tangible media such as, for example, a memory, storage device, or storage unit.
  • These and other forms of computer-readable media may store one or more instructions for use by the processor comprising the computer system to cause the processor to perform specified operations.
  • Such instructions, generally referred to as ‘computer program code’ (which may be grouped in the form of computer programs or other groupings), when executed, enable the computing system to perform functions of embodiments of the present invention.
  • the code may directly cause a processor to perform specified operations, be compiled to do so, and/or be combined with other software, hardware, and/or firmware elements (e.g., libraries for performing standard functions) to do so.
  • the non-transitory computer readable medium may comprise at least one from a group consisting of: a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a Read Only Memory, a Programmable Read Only Memory, an Erasable Programmable Read Only Memory, EPROM, an Electrically Erasable Programmable Read Only Memory and a Flash memory.
  • the software may be stored in a computer-readable medium and loaded into computing system using, for example, removable storage drive.
  • a control module (in this example, software instructions or executable computer program code) , when executed by the processor in the computer system, causes a processor to perform the functions of the invention as described herein.
  • The inventive concept can be applied to any circuit for performing signal processing functionality within a network element. It is further envisaged that, for example, a semiconductor manufacturer may employ the inventive concept in the design of a stand-alone device, such as a microcontroller, a digital signal processor (DSP) or an application-specific integrated circuit (ASIC), and/or any other sub-system element.
  • aspects of the invention may be implemented in any suitable form including hardware, software, firmware or any combination of these.
  • the invention may optionally be implemented, at least partly, as computer software running on one or more data processors and/or digital signal processors or configurable module components such as FPGA devices.
  • an embodiment of the invention may be physically, functionally and logically implemented in any suitable way. Indeed, the functionality may be implemented in a single unit, in a plurality of units or as part of other functional units.
  • Although the present invention has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present invention is limited only by the accompanying claims. Additionally, although a feature may appear to be described in connection with particular embodiments, one skilled in the art would recognise that various features of the described embodiments may be combined in accordance with the invention. In the claims, the term ‘comprising’ does not exclude the presence of other elements or steps.
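Tying together the ECN handling and the discard-prohibit flag described in the bullets above, the following sketch shows one possible per-packet decision at a congested queue. The enumeration values and the returned action labels are illustrative assumptions for this example, not fields defined by the disclosure.

```python
from enum import Enum

class Ecn(Enum):
    NOT_ECT = 0   # transport is not ECN capable
    ECT = 1       # relayed "ECN capable transport" marking
    CE = 3        # "congestion encountered"

def aqm_congestion_action(relayed_ecn: Ecn, discard_prohibited: bool) -> str:
    """Choose how to signal congestion for one packet of a congested queue.

    A minimal sketch of the behaviour described above: if the relayed marking
    says the transport is ECN capable, mark "congestion encountered" in the
    adaptation layer / GTP-U / PDCP field instead of dropping or discard-marking;
    packets carrying a discard-prohibit indication (e.g. TCP SYN, PDCP/SDAP
    control) are left untouched. The returned strings are labels for this
    illustration only.
    """
    if discard_prohibited:
        return "forward unchanged"
    if relayed_ecn is Ecn.ECT:
        return "mark congestion encountered"
    return "mark with discard instruction"   # or drop, per the earlier alternatives

for ecn, prohibit in [(Ecn.ECT, False), (Ecn.NOT_ECT, False), (Ecn.NOT_ECT, True)]:
    print(ecn.name, prohibit, "->", aqm_congestion_action(ecn, prohibit))
```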

Abstract

Methods for managing flow in an IAB network, in particular including steps to discard corrupted SDUs and indicate network congestion.

Description

Congestion Management in a Wireless Communications Network
Technical Field
The following disclosure relates to congestion management in a wireless communications network.
Background
Wireless communication systems, such as the third generation (3G) of mobile telephone standards and technology, are well known. Such 3G standards and technology have been developed by the Third Generation Partnership Project (3GPP). The 3rd generation of wireless communications has generally been developed to support macro-cell mobile phone communications. Communication systems and networks have since developed towards broadband and mobile systems.
In cellular wireless communication systems User Equipment (UE) is connected by a wireless link to a Radio Access Network (RAN) . The RAN comprises a set of base stations which provide wireless links to the UEs located in cells covered by the base station, and an interface to a Core Network (CN) which provides overall network control. As will be appreciated the RAN and CN each conduct respective functions in relation to the overall network. For convenience the term cellular network will be used to refer to the combined RAN & CN, and it will be understood that the term is used to refer to the respective system for performing the disclosed function.
The 3rd Generation Partnership Project has developed the so-called Long Term Evolution (LTE) system, namely, an Evolved Universal Mobile Telecommunication System Territorial Radio Access Network, (E-UTRAN) , for a mobile access network where one or more macro-cells are supported by a base station known as an eNodeB or eNB (evolved NodeB) . More recently, LTE is evolving further towards the so-called 5G or NR (new radio) systems where one or more cells are supported by a base station known as a gNB. NR is proposed to utilise an Orthogonal Frequency Division Multiplexed (OFDM) physical transmission format.
In wireless communications networks base stations provide wireless coverage to the UE. This is called access. In addition, traffic is carried between base stations and the CN, or between base stations in a network using relays. This is called backhaul. The backhaul can use wireless resources. One area of development in wireless communications networks is Integrated Access and Backhaul (IAB) . In IAB wireless channel resources are shared between wireless access and wireless backhaul. NR creates an opportunity to deploy IAB links for providing access to UEs.
Figure 1 shows an IAB architecture. The system comprises a plurality of wireless or Radio Access Network (RAN) nodes. An IAB donor node interfaces to the core network (CN) and interfaces to IAB nodes 1a and 1b by a respective wireless backhaul link. The nodes may support access and backhaul links. Each of IAB-nodes 1a and 1b serves as a relay node. An IAB-node may support backhaul to another IAB-node as well as access to one or more UEs, see nodes 2a and 2b. A UE may be served directly by an access link to the IAB donor node, see UE A, or by an access link to one of the IAB-nodes, see the UEs.
A plurality of RAN nodes can be involved in a route between a UE and the CN. In Figure 1, UE B is connected to the core network CN by a route comprising an access link (UE B to IAB node 1b) and a backhaul link (IAB node 1b to the IAB donor node). UE D is connected to the CN by a route comprising an access link (UE D to IAB node 2a), a backhaul link (IAB node 2a to IAB node 1b) and a backhaul link (IAB node 1b to the IAB donor node).
Each of the IAB nodes may multiplex access and backhaul links in one or more of time, frequency, and space (e.g. beam-based operation) .
The IAB donor node may be treated as a single logical node that comprises a set of functions such as gNB-DU, gNB-CU-CP, gNB-CU-UP and potentially other functions. In a deployment, the IAB donor node can be split according to these functions, which can be either be collocated or non-collocated as allowed by the NG-RAN architecture.
There is a need for control of traffic flow congestion in IAB enabled wireless communications networks.
Summary
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
The invention is defined by the appended claims.
The non-transitory computer readable medium may comprise at least one from a group consisting of: a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a Read Only Memory, a Programmable Read Only Memory, an Erasable Programmable Read Only Memory, EPROM, an Electrically Erasable Programmable Read Only Memory and a Flash memory.
Brief description of the drawings
Further details, aspects and embodiments of the invention will be described, by way of example only, with reference to the drawings. Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. Like reference numerals have been included in the respective drawings to ease understanding.
Figure 1 shows an example of an IAB network,
Figure 2 shows an example of an architecture 1a of an IAB network,
Figure 3 shows an example of an architecture 1b of an IAB network,
Figure 4 shows examples of IAB network architectures with access and intermediate node requirements,
Figure 5 shows an example of UE bearer queues within an IAB node in DL,
Figure 6 shows an example of a DL in an IAB network using congestion management based on flow control,
Figure 7 shows an example of a UL in an IAB network using congestion management based on UL grant (scheduling) limitation,
Figure 8 shows packet dropping on DL,
Figure 9 shows packet dropping on UL,
Figure 10 shows an example of a DL in an IAB network using congestion management by marking,
Figure 11 shows an example of a UL in an IAB network using congestion management by marking,
Figure 12 shows a PDCP header format for a PDU,
Figure 13 shows examples of IAB user plane protocol stacks for architecture 1a with access and intermediate node requirements, and
Figure 14 shows an example of IAB user plane protocol stacks for architecture 1b with access and intermediate node requirements.
Detailed description of the preferred embodiments
Those skilled in the art will recognise and appreciate that the specifics of the examples described are merely illustrative of some embodiments and that the teachings set forth herein are applicable in a variety of alternative settings.
Background on congestion.
Consider a traffic flow between a transmitter and a receiver, over a data network such as the internet. Such traffic flow can be a TCP flow or RTP flow for instance. TCP constantly tries to increase the traffic flow throughput by increasing its transmission window, sending more outstanding (non-acknowledged) traffic in the network. The achieved throughput is such that OutstandingBytes = Round Trip Time (RTT) x Throughput. Increasing the transmission window (OutstandingBytes) increases the throughput until Throughput = MaxBandwidth (maximum possible throughput of the connection) . In a network such as the internet, the MaxBandwidth is determined by the slowest link (the bottleneck) .
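By way of illustration, the short Python sketch below (with hypothetical link rates, window size and RTT) works through the relation OutstandingBytes = RTT x Throughput and shows the achieved throughput being capped by the slowest link:

```python
# Illustration of OutstandingBytes = RTT x Throughput and of the bottleneck
# determining MaxBandwidth. All numeric values are hypothetical examples.

link_rates_mbps = [400.0, 120.0, 35.0]     # e.g. donor->1b, 1b->2b, 2b->UE (assumed)
max_bandwidth_mbps = min(link_rates_mbps)  # the slowest link is the bottleneck

rtt_s = 0.040                              # 40 ms round trip time (assumed)
outstanding_bytes = 64 * 1024              # 64 KiB of non-acknowledged data in flight

# Achieved throughput is limited both by window/RTT and by the bottleneck rate.
window_limited_mbps = outstanding_bytes * 8 / rtt_s / 1e6
achieved_mbps = min(window_limited_mbps, max_bandwidth_mbps)

print(f"bottleneck rate : {max_bandwidth_mbps:.1f} Mbit/s")
print(f"window-limited  : {window_limited_mbps:.1f} Mbit/s")
print(f"achieved        : {achieved_mbps:.1f} Mbit/s")
```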
When Throughput equal to MaxBandwidth is reached, data starts buffering at the bottleneck and the RTT increases. Such buffering is undesirable as it adds latency to the traffic flow (which delays congestion notifications and retransmissions) and consumes buffering resources (which can lead to buffer-full issues and massive tail drop) without increasing the throughput. Moreover, this excessive buffering can persist: this is known as “bufferbloat”. Instead, the bottleneck node is supposed to indicate to the TCP traffic sender that it has reached its maximum throughput. This is done by an implicit congestion notification (packet dropping) or explicit congestion notification (ECN). This is usually done by using various active queue management (AQM) techniques within the bottleneck node. These techniques enable the node to identify queues/flows for which there is excessive queuing (buffering), i.e. to identify queues/flows for which the node is the bottleneck node, so that a congestion notification can be sent for such a queue/flow.
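The sketch below illustrates the kind of delay-based decision such AQM techniques make: a per-flow queue raises a congestion signal when the queueing delay of dequeued packets stays above a target for a sustained interval (in the spirit of CoDel-like schemes). The target and interval values, and the class structure, are assumptions made for this illustration and are not defined by the disclosure.

```python
import time
from collections import deque

TARGET_DELAY_S = 0.005    # illustrative 5 ms queueing-delay target
INTERVAL_S = 0.100        # delay must stay above target this long before signalling

class DelayBasedQueue:
    """Per-flow queue that raises a congestion signal on sustained queueing delay.

    A sketch of a CoDel-like, delay-based AQM decision; it is not the algorithm
    defined in the disclosure, and the constants above are arbitrary examples.
    """

    def __init__(self):
        self.items = deque()          # entries are (enqueue_time, packet)
        self.above_since = None       # when the sojourn time first exceeded the target

    def enqueue(self, packet):
        self.items.append((time.monotonic(), packet))

    def dequeue(self):
        """Return (packet, congested); packet is None if the queue is empty."""
        if not self.items:
            self.above_since = None
            return None, False
        enqueued_at, packet = self.items.popleft()
        sojourn = time.monotonic() - enqueued_at
        if sojourn <= TARGET_DELAY_S:
            self.above_since = None
            return packet, False
        if self.above_since is None:
            self.above_since = time.monotonic()
            return packet, False
        # Sojourn time has stayed above the target for a full interval:
        # signal congestion for this flow (drop, discard-mark or ECN-mark).
        return packet, (time.monotonic() - self.above_since) >= INTERVAL_S
```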
Background on IAB architecture.
Figure 1 shows a reference diagram for IAB architectures. There are 2 main architecture groups.
Architecture group 1:
This consists of architectures 1a and 1b, both of which leverage the Central Unit (CU) / Distributed Unit (DU) split architecture.
Architecture 1a is shown in Figure 2. Backhauling of F1-U uses an adaptation layer or GTP-U combined with an adaptation layer. Hop-by-hop forwarding across intermediate nodes uses the adaptation layer.
Architecture 1b is shown in Figure 3. Backhauling of F1-U on the access node uses GTP-U/UDP/IP. Hop-by-hop forwarding across intermediate nodes uses the adaptation layer.
Architecture group 2:
This consists of architectures 2a, 2b and 2c.
Architecture 2a: Backhauling of F1-U or NG-U on the access node uses GTP-U/UDP/IP. Hop-by-hop forwarding across intermediate nodes uses PDU-session-layer routing.
Architecture 2b: Backhauling of F1-U or NG-U on the access node uses GTP-U/UDP/IP. Hop-by-hop forwarding across intermediate nodes uses GTP-U/UDP/IP nested tunnelling.
Architecture 2c: Backhauling of F1-U or NG-U on access node uses GTP-U/UDP/IP. Hop-by-hop forwarding across intermediate node uses GTP-U/UDP/IP/PDCP nested tunnelling.
Architecture group 2 yields significant additional overhead.
Background on IAB UP Protocol Stack architecture
For group 1, UP protocol stack proposals are shown in Figure 4. It can be seen that in these architectures, IAB-nodes need to relay Protocol Data Units (PDUs) belonging to a given UE-bearer. The type of relayed PDU depends on the node and on the IAB architecture (see Figure 4). However, in all cases, this relaying will be done at least with a UE-bearer granularity. In DL, an IAB-node is expected to have transmission queues (in the DU, or between DU and MT) corresponding to UE-bearers. In UL, an IAB-node is expected to have transmission queues (in the MT, or between DU and MT) corresponding to UE-bearers. This is described in Figure 5.
Figure 13 shows proposed UP protocol stacks for the group 1a architecture, and Figure 14 shows the UP protocol stacks for the group 1b architecture. Figure 6 shows the PDUs which are relayed at the IAB nodes.
Congestion within an IAB network.
Traditionally, the air interface (Uu) is the bottleneck and congestion in DL would happen at the base station (gNB or eNB). This can be handled by appropriate techniques such as AQM, ECN, etc. before sending traffic over the wireless protocol stack. This is transparent for the air interface protocol stack, which does not need to accommodate specific congestion-related features.
In an IAB network, the bottleneck might be the access link (assuming bad UE radio conditions) but also any other backhaul link, since backhaul links aggregate more and more traffic as they become closer to the IAB donor node.
Referring to Figure 1, consider a TCP traffic flow established with UE L, on the default bearer for that UE. The bottleneck node in DL might be IAB-node 3, 2b, 1b or the donor DU. Similarly, the bottleneck node in UL might be 3, 2b or 1b (it is not expected that the wireline is the bottleneck).
In DL, assume the access link is the bottleneck (e.g. UE L in bad radio conditions) . The associated queue within the IAB-node 3 DU will start building up. Assuming a conventional hop-by-hop flow control, for instance limiting the queue size, at some point IAB-node 3 will indicate the congestion to IAB-node 2b which will restrain from sending traffic to IAB-node 3. Hence the corresponding queue in IAB-node 2b will build up and so on until the donor DU. At this point, thanks to F1-U flow control between DU and CU, the corresponding queue in CU will build up as well. This will trigger AQM mechanisms such as packet drop, whose primary goal is to notify the TCP traffic sender. This congestion notification will have to be conveyed back from the CU to the TCP traffic receiver in the UE across the congested IAB network before TCP traffic congestion avoidance can kick in. This is described in more detail in Figure 6.
In UL, assume the backhaul to the IAB donor node is the bottleneck (e.g. overloaded link due to aggregated traffic) . The associated queue within the IAB-node 1b MT will start building up.  At some point IAB-node 1b DU will stop granting IAB-node 2b MT. The associated queue within the IAB-node 2b MT will start building up. At some point IAB-node 2b DU will stop granting IAB-node 3 MT. The associated queue within the IAB-node 3 MT will start building up. At some point IAB-node 3 DU will stop granting UE L. The associated queue within UE L will start building up (e.g., PDCP SDUs queue for that RB) . This will trigger a SDU discard mechanism at the UE, e.g. at UE PDCP sending entity, or above layers. This congestion notification will have to be conveyed back from the UE across the congested IAB network and back to the TCP sender in the UE before TCP congestion avoidance can kick in. This is described in more detail in Figure 7.
Modern Active Queue Management (AQM) algorithms work best with finer queue granularity. Ideally one queue per flow, so that for instance when a Transport Control Protocol (TCP) flow starts building up, a packet drop or packet marking is applied on that flow. Without detailed granularity, there is less probability that the packet drop or marking is applied to the correct flow. An innocent flow may be penalized while an aggressive flow is not required to reduce its transmission rate.
The current granularity is per UE-bearer. It is seen as beneficial to increase this granularity. However, contrary to an internet node, the IAB node does not have visibility of the IP header (as ciphering is used). Usually, a UE may be configured with a default bearer for non-Guaranteed Bit Rate (GBR) traffic. Dedicated bearers are commonly used for specific QoS purposes and are limited in number. It is expected, for instance, that best-effort TCP flows will end up in the default bearer, as they have the same QoS characteristics, although it is possible for the CN to associate them with different QoS flows.
In the 5G system, the QoS flow is the finest granularity of QoS differentiation in the Protocol Data Unit (PDU) Session. A QoS Flow Identification (QFI) is used to identify a QoS flow in the 5G system. User Plane traffic with the same QFI within a PDU Session receives the same traffic forwarding treatment (e.g. scheduling, admission threshold) .
At Access Stratum level, the data radio bearer (DRB) defines the packet treatment on the radio interface (Uu) . A DRB serves packets with the same packet forwarding treatment. The QoS flow to DRB mapping by NG-RAN is based on QFI and the associated QoS profiles (i.e. QoS parameters and QoS characteristics) . Separate DRBs may be established for QoS flows requiring different packet forwarding treatment, or several QoS flows belonging to the same PDU session can be multiplexed in the same DRB.
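As a worked illustration of this mapping, the sketch below assigns the QoS flows of a PDU session to DRBs using an assumed QFI-to-QoS-profile table; the QFI values, 5QI values and mapping policy are hypothetical examples rather than values taken from the disclosure or the specifications.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QosProfile:
    five_qi: int          # standardized 5QI value (example values below)
    gbr: bool             # whether the flow is guaranteed bit rate

# Hypothetical configuration: QFIs of one PDU session and their QoS profiles.
QFI_TO_PROFILE = {
    1: QosProfile(five_qi=9, gbr=False),   # best-effort internet traffic
    2: QosProfile(five_qi=8, gbr=False),   # another non-GBR flow
    5: QosProfile(five_qi=1, gbr=True),    # conversational voice
}

def map_qfi_to_drb(qfi: int) -> int:
    """Return a DRB identity for a QFI (illustrative policy, not the spec rule)."""
    profile = QFI_TO_PROFILE.get(qfi)
    if profile is None or not profile.gbr:
        return 1            # default DRB multiplexes non-GBR / unknown flows
    return 2                # dedicated DRB for GBR flows needing different treatment

print({qfi: map_qfi_to_drb(qfi) for qfi in (1, 2, 5, 7)})
```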
In the case of IAB, for the group 1 architecture, a UE DRB is established between a UE and a CU, and realized over several Uu interfaces of an access link and backhaul links. On each of these links, the DRB is supported on a backhaul RLC bearer. It is proposed that a restriction resulting in the same treatment over each Uu interface should be applied to the backhaul RLC bearer, not necessarily to the DRB.
At the IAB nodes, some additional granularity would be beneficial to enhance AQM decisions. At the minimum, AQM could be run with QoS flow granularity.
A method is proposed of controlling traffic flow congestion in a wireless communications network, the wireless communications network comprising a plurality of nodes which support integrated wireless access and backhaul between a core network and user equipment, the method comprising at at least one node of the network: receiving at least one traffic flow comprising a plurality of protocol data units, accessing a quality of service flow identification of each of the plurality of protocol data units, using the quality of service flow identifications to sort the plurality of protocol data units into at least one quality of service flow, and applying active queue management to the at least one quality of service flow to manage congestion of traffic flow in the wireless communications network.
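A minimal sketch of this per-QoS-flow sorting and AQM step is given below. The (bearer_id, raw_pdu) input shape, the read_qfi callable and the queue-delay input are assumptions made for the illustration; one possible read_qfi implementation is sketched after the next paragraph.

```python
from collections import defaultdict

def sort_into_qos_flows(pdus, read_qfi):
    """Sort relayed PDUs into per-QoS-flow queues keyed by (UE-bearer, QFI).

    `pdus` is an iterable of (bearer_id, raw_pdu) tuples and `read_qfi` is a
    callable extracting the QFI from a raw PDU; both are assumptions made for
    this illustration rather than interfaces defined by the disclosure.
    """
    flows = defaultdict(list)
    for bearer_id, raw_pdu in pdus:
        flows[(bearer_id, read_qfi(raw_pdu))].append(raw_pdu)
    return flows

def flows_to_signal(flows, queue_delay_s, target_delay_s=0.005):
    """Return the flow keys for which active queue management should raise a
    congestion signal, here using a simple queue-delay threshold."""
    return {key for key in flows if queue_delay_s.get(key, 0.0) > target_delay_s}

# Hypothetical usage: two bearers, PDUs whose first byte is taken as the QFI.
pdus = [(1, b"\x01payload"), (1, b"\x02payload"), (2, b"\x01payload")]
flows = sort_into_qos_flows(pdus, read_qfi=lambda pdu: pdu[0])
print(flows_to_signal(flows, queue_delay_s={(1, 2): 0.012}))
```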
Accessing the quality of service flow identification of each of the plurality of protocol data units comprises reading the quality of service flow identification (QFI) from a service data adaptation protocol (SDAP) layer header of each of the plurality of protocol data units. For this purpose, the IAB node should be configured with information such as PDU format (PDCP header size/format, SDAP header presence/size/format, .. ) in order to be able to extract the QFI from the underlying SDAP header.
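Assuming the node is configured with the PDCP and SDAP formats as just described, and assuming the SDAP octet is readable at that node, a QFI extraction routine could look like the following sketch. The header layouts follow the NR PDCP and SDAP data PDU formats, and the example bytes are hypothetical.

```python
def read_qfi_from_relayed_pdu(pdu: bytes, pdcp_sn_len: int = 12) -> int:
    """Extract the QFI from the SDAP header that follows the PDCP header.

    Sketch assumptions: the node is configured with the PDCP SN length and with
    the knowledge that an SDAP header is present, and the SDAP octet is readable
    at this node. Layouts follow the NR data PDU formats: the PDCP header is
    2 octets for a 12-bit SN and 3 octets for an 18-bit SN, and the SDAP data
    PDU header is a single octet whose 6 least significant bits carry the QFI.
    """
    if pdcp_sn_len not in (12, 18):
        raise ValueError("unsupported PDCP SN length")
    pdcp_header_len = 2 if pdcp_sn_len == 12 else 3
    return pdu[pdcp_header_len] & 0x3F   # QFI = low 6 bits of the SDAP octet

# Hypothetical 12-bit-SN PDCP data PDU whose SDAP octet carries QFI = 9.
example = bytes([0x81, 0x2A, 0x09, 0xDE, 0xAD])
print(read_qfi_from_relayed_pdu(example))   # -> 9
```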
Alternatively, accessing the quality of service flow identification of each of the plurality of protocol data units comprises reading the quality of service flow identification from an adaptation layer of each of the plurality of protocol data units.
The at least one node is configured with protocol data unit format information to access the quality of service flow identification of each of the plurality of protocol data units.
A method is proposed of controlling traffic flow congestion in a wireless communications network, the wireless communications network comprising a plurality of nodes which support integrated wireless access and backhaul between a core network and user equipment, the method comprising at at least one node of the network: receiving at least one traffic flow comprising a plurality of protocol data units, accessing at least one flow indicator of each of the plurality of protocol data units, using the flow indicators to sort the plurality of protocol data units into at least one flow, and applying active queue management to the at least one flow to manage congestion of traffic flow in the wireless communications network.
Accessing the flow indicator of each of the plurality of protocol data units comprises reading the flow indicator from an adaptation layer of each of the plurality of protocol data units. This may comprise reading the flow indicator in a F1-U interface, e.g. in a GTP-U extension header, relayed in the adaptation layer.
The flow indicator may be the QFI or another flow indicator, such as a hash of the 5-tuple from the end-user packet.
The flow indicator may be added in the F1-U interface, e.g. in GTP-U extension header, and relayed in the adaptation layer.
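One possible realisation of such a flow indicator is sketched below: a short hash of the end-user 5-tuple, computed where the 5-tuple is visible (e.g. at the CU) and then carried in a GTP-U extension header and relayed in the adaptation layer. The hash function and indicator width are illustrative choices, not requirements of the disclosure.

```python
import hashlib
import struct
from ipaddress import ip_address

def flow_indicator(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                   protocol: int, bits: int = 16) -> int:
    """Derive a compact flow indicator from the end-user packet's 5-tuple.

    One possible realisation of the "hash of the 5-tuple" mentioned in the text;
    the hash function and field width are assumptions of this example.
    """
    material = (ip_address(src_ip).packed + ip_address(dst_ip).packed +
                struct.pack("!HHB", src_port, dst_port, protocol))
    digest = hashlib.sha256(material).digest()
    return int.from_bytes(digest[:4], "big") % (1 << bits)

print(flow_indicator("10.0.0.1", "203.0.113.7", 53412, 443, 6))
```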
AQM can be performed at an IAB node with the intent of dropping the end-user Service Data Unit (SDU), for instance the TCP packet of a particular flow. There are different impacts depending on the UP architecture of the IAB node and on the direction of the traffic (UL/DL).
In a first alternative, AQM could be realized by directly dropping the relayed PDU at the IAB node, instead of relaying it. This would naturally realize dropping of the corresponding (encapsulated) end user SDU.
Referring to Figure 4, in UP Option 1a a) , PDU dropping is not possible as the relayed PDU is an RLC PDU: it would be retransmitted by RLC ARQ. In other UP Options, PDU dropping is possible but creates a hole in the PDCP PDU sequence at the receiver. This leads to issues as detailed below.
For NR, the NR PDCP entity performs reordering (linked to HARQ and/or ARQ operation). In case of a hole in the PDCP PDU sequence, the PDCP receiver will have to wait for missing PDCP PDUs up to the maximum reordering delay before it can deliver the corresponding SDUs to upper layers. Assuming a maximum reordering delay of x ms over one link, in total the reordering delay scales with the number of links N, i.e. reordering delay = N x LinkReorderingDelay. In practice, it is unlikely that the same packet is delayed over each link; however a reordering delay timer may need to be set to a conservative value. For AM operation, packet loss is in general not expected, i.e. reordering timer expiry is not expected and in general it will be set to such a conservative value. Trying to have a small value for the reordering timer comes with a risk that a late packet will not be waited for and will be interpreted as a congestion indication, indicating to the sending entity to reduce throughput. Hence there is a strong incentive to keep the value of the reordering timer high, i.e. around reordering delay = N x LinkReorderingDelay.
Packet dropping on DL is shown in more detail in Figure 8, and for UL in Figure 9.
The IAB node can trigger the congestion indication using modern AQM techniques before congestion spreads to other nodes and before it impacts other flows; however, the congestion indication may be delayed due to the reordering delay.
For LTE, the legacy LTE PDCP entity does not perform reordering. Over the NR backhaul, LTE PDCP PDUs would be transmitted out of order (as NR RLC does not provide in-order delivery), and hence a reordering function is needed, either placed before the LTE PDCP receiver or activated within the PDCP receiver. It is expected that for DL a reordering function would be introduced at the Access Node, with a similar reordering timer value to the one used above. For UL, a reordering function in PDCP could be used (similar to Dual Connectivity).
Hence in both the NR and LTE cases, at the receiver, a dropped packet will be waited for during the reordering timer delay, which is expected to be very long (at least around reordering delay = N x LinkReorderingDelay) in order not to misinterpret a delayed (retransmitted) packet as a lost packet.
This delays the congestion notification, which should ideally be sent to the sending entity as fast as possible. It also introduces a delay spike, which might lead to unexpected TCP behaviour (retransmission timeout and a fall back into slow start). Further, because the congestion indication is delayed, buffer overflow could occur in the bottleneck node, leading to massive and uncontrolled packet discard. Without delay in the congestion notification there are no such negative consequences: timely congestion notification is needed to prevent congestion and further massive packet drops, and the stability of the Internet as a whole relies on it.
In a second alternative, AQM could be realized at the IAB node not by dropping the relayed PDU, but by marking it with a discard instruction. The receiver, upon noticing that a PDU is marked with a discard instruction, drops the corresponding (encapsulated) end user SDU instead of forwarding it to upper layers.
A method is proposed of managing traffic flow congestion in a wireless communications network, the wireless communications network comprising a plurality of nodes which support integrated wireless access and backhaul between a core network and user equipment, the method comprising: at a first node of the network, receiving at least one traffic flow comprising a plurality of protocol data units, marking at least one of the plurality of protocol data units with a discard instruction, and sending the plurality of protocol data units including the at least one marked protocol data unit to a second node of the network; and at the second node of the network, receiving the plurality of protocol data units including the at least one marked protocol data unit, reading the discard instruction of the marked protocol data unit and discarding a service data unit corresponding to the marked protocol data unit to create a traffic congestion indicator, and sending the traffic congestion indicator to a transmitter of the traffic flow to manage congestion of traffic flow in the wireless communications network.
As will be appreciated the UE is itself a node of the network. Accordingly, the UE may read the discard instruction and discard the SDU.
Marking the at least one of the plurality of protocol data units with a discard instruction comprises marking a packet data convergence protocol layer header of the at least one of the plurality of protocol data units with a discard instruction. Marking the packet data convergence protocol layer header of the at least one of the plurality of protocol data units with a discard instruction comprises setting a reserve bit in the header to indicate a discard instruction. The at least one node is configured with protocol data unit format information to access the packet data convergence protocol layer header of the at least one of the plurality of protocol data units.
This method has the benefit of not adding reordering delay, as there is no PDCP PDU dropping, and of not impacting the robust header compression (RoHC) protocol (which is mostly robust to packet loss, but might still be impacted if some specific RoHC packets are discarded).
In order to perform marking within the PDCP header, IAB nodes would need to be configured with the PDCP format (e.g. PDCP SN length, …).
Figure 10 shows the method of managing traffic flow congestion by marking at least one of a plurality of protocol data units in a traffic flow with a discard instruction in the DL. Figure 11 shows the method of managing traffic flow congestion by marking at least one of a plurality of protocol data units in a traffic flow with a discard instruction in the UL. Figure 12 shows the format of the PDCP header in a PDU with a 12-bit PDCP SN. This format is applicable to UM DRBs and AM DRBs, and shows how the marking can comprise setting a reserve bit R in the PDCP header to indicate a discard instruction.
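A minimal sketch of this marking step, assuming the 12-bit SN PDCP data PDU format shown in Figure 12 and assuming (for illustration only) that the first reserved bit after the D/C bit carries the discard instruction:

DISCARD_BIT_MASK = 0x40  # assumed: first R bit of octet 0, after the D/C bit

def mark_pdcp_discard(pdu: bytearray) -> bytearray:
    # Set a reserved bit in the PDCP data PDU header; the SDU part is untouched,
    # so no SN gap and no additional reordering delay is created.
    pdu[0] |= DISCARD_BIT_MASK
    return pdu

def is_marked_for_discard(pdu: bytes) -> bool:
    # Receiver side: if the bit is set, discard the corresponding SDU
    # instead of delivering it to upper layers.
    return bool(pdu[0] & DISCARD_BIT_MASK)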
An alternative to marking the PDCP header is to modify the PDCP data PDU so that the data/MAC-I part is removed. In other words, “empty” the SDU part (and the MAC-I, if included) of the PDCP data PDU and keep only the PDCP header part. This would result in a “PDCP header only” PDCP PDU. At the PDCP receiver, as there is no SN gap, there is no reordering delay; as there is no longer an SDU part, the goal of dropping the end user SDU is achieved as well. A possible benefit of this approach is that the SDU part which would be discarded is not transmitted, hence not wasting resources, while a drawback of this approach is a possible impact on the RoHC protocol, if RoHC is configured.
In order to perform removal of the PDCP SDU part, IAB nodes would need to be configured with the PDCP format (e.g. PDCP SN length, …) as well.
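A sketch of the “PDCP header only” alternative, assuming a 2-byte PDCP data PDU header (12-bit SN) and no MAC-I; both assumptions are illustrative and require the node to know the configured PDCP format:

PDCP_HEADER_LEN = 2  # assumed: 12-bit SN gives a 2-byte PDCP data PDU header

def strip_pdcp_sdu(pdu: bytes) -> bytes:
    # Keep only the PDCP header so the SN sequence stays continuous at the
    # receiver while the encapsulated end-user SDU is effectively dropped.
    return pdu[:PDCP_HEADER_LEN]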
As the IAB node does not terminate the PDCP protocol, in some circumstances it may not be desirable to have it modify PDCP headers of PDUs on the fly, especially at intermediate IAB nodes.
Marking the at least one of the plurality of protocol data units with a discard instruction may therefore comprise marking an adaptation layer header of the at least one of the plurality of protocol data units with a discard instruction, or marking a GTP-U extension of the at least one of the plurality of protocol data units with a discard instruction. This may comprise setting a bit in the header or extension to indicate a discard instruction. For DL, the discard instruction in the adaptation layer would be relayed to a discard instruction in the PDCP header. For UL, the discard instruction in the adaptation layer would be relayed to a discard instruction in the GTP-U extension. For UP Options d), e) and 2b, the discard instruction may be indicated in the GTP-U extension. The discard instruction in the GTP-U header may also be relayed into a discard instruction in the PDCP header.
In the alternative where removal of the PDCP SDU part is used, the discard instruction at the adaptation layer or GTP-U extension will instruct the node (e.g. the access node for DL, or the donor node for UL) to remove the PDCP SDU part before forwarding the PDCP PDU to the PDCP receiver (in the UE in the DL case, in the CU at the donor in the UL case).
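The relaying described above might be sketched as follows; the dictionary-based headers and the field and direction names are illustrative stand-ins for the adaptation-layer header, the GTP-U extension and the PDCP header:

def relay_discard_instruction(adapt_header: dict, direction: str,
                              pdcp_header: dict, gtpu_ext: dict) -> None:
    # At the node terminating the adaptation layer (access node for DL,
    # donor for UL), map a discard instruction carried in the adaptation
    # layer into the header visible to the PDCP or GTP-U receiver.
    if not adapt_header.get("discard"):
        return
    if direction == "DL":
        pdcp_header["discard"] = True  # towards the UE PDCP receiver
    else:
        gtpu_ext["discard"] = True     # towards the CU over F1-U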
For NR, in DL, due to possible early implementations of UEs whose PDCP layer does not support the discard instruction or PDCP data PDUs without an SDU part, a UE capability indicating that PDCP is enhanced to support those features may be required.
For NR, in DL, for UEs not supporting the discussed PDCP improvements, it is possible to mitigate this high delay by introducing a reordering function in the Access Node (and optionally in other IAB nodes). The Access Node performs reordering and does not introduce any additional delay (at least for AM, as there is no gap in the PDCP PDU sequence). However, upon noticing that a PDU has been marked with a discard instruction, the Access Node can discard it. The Access Node would transmit the PDCP PDU stream to the UE in sequence, except for the dropped PDCP PDU. Hence, the UE will undergo reordering corresponding to one link only, i.e. 1 x LinkReorderingDelay instead of N x LinkReorderingDelay. This enables keeping the LinkReorderingDelay at a reasonable value and limits the congestion notification delay.
For LTE, it is expected that for DL a reordering function would be introduced at the Access Node. The Access Node would transmit the PDCP PDU stream to the UE in sequence, except for the dropped PDCP PDU. LTE PDCP does not have a reordering function (as LTE RLC provides in-order delivery), so the UE PDCP receiver would receive a continuous stream of PDCP PDUs with one missing PDU, which does not cause any additional reordering delay in the LTE case.
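The Access Node behaviour described in the two preceding paragraphs might be sketched as follows; the buffering by SN and the is_marked/deliver callbacks are illustrative simplifications:

def reorder_and_filter(received: dict, next_sn: int, deliver, is_marked) -> int:
    # received maps PDCP SN -> PDU buffered out of order at the Access Node.
    # Deliver PDUs strictly in SN order; a PDU marked with a discard
    # instruction is dropped here instead of being forwarded to the UE.
    while next_sn in received:
        pdu = received.pop(next_sn)
        if not is_marked(pdu):
            deliver(pdu)
        next_sn += 1
    return next_sn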
An alternative to marking or removing the SDU part of the PDCP data PDU is to corrupt the SDU part of the PDCP data PDU, ensuring that the underlying packet header (TCP/IP header) cannot be parsed by the receiver's IP protocol stack, so that the packet is discarded at that point.
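A sketch of this corruption alternative, assuming a 2-byte PDCP header and assuming that inverting the leading bytes of the SDU part is sufficient for the receiver IP stack to reject the packet (both assumptions are illustrative):

PDCP_HEADER_LEN = 2  # assumed 12-bit SN format

def corrupt_pdcp_sdu(pdu: bytearray) -> bytearray:
    # Invert the first bytes of the encapsulated SDU so that the inner
    # TCP/IP header fails parsing at the receiver and the packet is
    # discarded there, without creating a PDCP SN gap.
    for i in range(PDCP_HEADER_LEN, min(len(pdu), PDCP_HEADER_LEN + 8)):
        pdu[i] ^= 0xFF
    return pdu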
A method is also provided of managing traffic flow congestion in a wireless communications network, the wireless communications network comprising a plurality of nodes which support integrated wireless access and backhaul between a core network and user equipment, the method comprising at a first node of the network, receiving at least one traffic flow comprising a plurality of service data units, corrupting a packet data convergence protocol layer SDU part of at least one of the plurality of service data units, sending the plurality of service data units including the at least one corrupted service data unit to a second node of the network, and
at the second node of the network, receiving the plurality of service data units including the at least one corrupted service data unit, discarding the corrupted service data unit to create a traffic congestion indicator, and sending the traffic congestion indicator to a transmitter of the traffic flow to manage congestion of traffic flow in the wireless communications network.
As will be appreciated the UE itself is a node of the network and accordingly may perform relevant steps of the method including discarding the SDU.
IAB may also support Explicit Congestion Notification (ECN). With ECN, packet dropping as an implicit congestion notification is replaced by an explicit indication in the TCP/IP header, with the benefit of not dropping the packet. However, this feature is not widely supported.
It may be beneficial for IAB to support ECN. However, the ECN marking (the ECN capable transport and congestion encountered fields) is carried in the end user TCP/IP packet header, which is not visible to the IAB node (as it is ciphered). To solve this issue, ECN marking can be introduced in the adaptation layer and/or GTP-U and/or PDCP. The ECN marking would be relayed from the end-user TCP/IP header to the GTP-U or PDCP header, and further relayed within the adaptation layer if needed.
In this alternative, an IAB node performing AQM operation would then have visibility of the ECN marking. For instance, in case AQM considers that a congestion signal needs to be sent for a particular queue, if the packet is marked as “ECN capable transport”, the IAB node would not drop or mark the packet with a discard instruction, but instead mark it with “congestion encountered” at the adaptation layer, GTP-U or PDCP header. The ECN indication will be relayed as needed, and finally mapped to the ECN field of the end-user SDU.
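The ECN-aware AQM decision might be sketched as follows; the flags ect, ce and discard are illustrative names for the ECN-capable-transport indication, the congestion-encountered mark and the discard instruction relayed in the adaptation layer, GTP-U extension or PDCP header:

def aqm_congestion_action(pdu_meta: dict) -> None:
    # When AQM decides a congestion signal is needed for a queue, prefer an
    # explicit "congestion encountered" mark if the end-user transport is
    # ECN capable; otherwise fall back to a discard instruction. The mark is
    # later relayed and finally mapped onto the ECN field of the end-user SDU.
    if pdu_meta.get("ect"):
        pdu_meta["ce"] = True
    else:
        pdu_meta["discard"] = True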
A method is provided of managing traffic flow congestion in a wireless communications network, the wireless communications network comprising a plurality of nodes which support integrated wireless access and backhaul between a core network and user equipment, the method comprising at a first node of the network, receiving at least one traffic flow comprising a plurality of protocol data units, marking at least one of the plurality of protocol data units with an explicit congestion notification, sending the plurality of protocol data units including the at least one marked protocol data unit to a second node of the network, and
at the second node of the network, receiving the plurality of protocol data units including the at least one marked protocol data unit, reading the explicit congestion notification of the marked protocol data unit and relaying the explicit congestion notification up to a central unit of the network to manage congestion of traffic flow in the wireless communications network.
As will be appreciated the UE itself is a node of the network and accordingly may perform relevant steps of the method including discarding the SDU.
As an alternative to PDCP, ECN marking might also be introduced in the RLC layer. However, it is expected that IAB nodes have even less visibility of the RLC layer.
Other assistance marking:
In DL, the CU may get other useful information before PDCP processing (after which packets are ciphered). For instance, the CU can know whether a given packet should not be discarded (e.g. TCP SYN packets). A discard prohibit flag could be relayed in a GTP-U extension and/or an AL header to prevent IAB nodes from discarding such packets. Similarly, such a mechanism could be used in UL at the UE side, at the SDAP or PDCP layer.
Similarly, if the IAB node is configured to have visibility of the PDCP and/or SDAP header, as discussed above (for instance by configuring the IAB node with the PDCP and/or SDAP header format/presence), packets such as PDCP or SDAP control packets can be considered as protected so that they will not be discarded or considered by the AQM mechanism.
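How such assistance marking could gate the AQM action is sketched below; discard_prohibit and is_control are illustrative names for the flag relayed in the GTP-U extension or adaptation-layer header and for PDCP/SDAP control PDUs:

def aqm_may_act_on(pdu_meta: dict) -> bool:
    # Packets flagged as discard-prohibited by the CU or UE (e.g. TCP SYN)
    # and PDCP/SDAP control PDUs are protected from AQM dropping/marking.
    if pdu_meta.get("discard_prohibit") or pdu_meta.get("is_control"):
        return False
    return True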
Flow control /congestion feedback:
Eventually, the source (e.g. TCP sender or RTP source) needs to reduce its traffic flow rate and should be notified of the congestion as soon as possible. As an alternative to the above mechanisms, particularly if AQM actions (dropping/marking) are limited within IAB nodes, feedback can be sent upstream. In DL, the feedback could include parameters such as:
- queue ID (at least the UE-bearer identity, but which could also include the QFI or another flow indicator to enable enhanced queue granularity) ,
- queue size (length) (in bytes) ,
- queue delay (average time a packet stays in queue before being transmitted) .
The queue delay has a benefit over the queue size in that it relates directly to the QoS rather than to the throughput, and it is one of the main parameters used by modern AQM techniques. The queue size alone is not really meaningful for congestion because, independently of congestion, the queue size is expected to scale proportionally with the throughput of a flow. Hence, for different flows having different throughputs, the queue sizes are expected to be in proportion to their respective throughputs. This is mainly because queuing/buffering in a scheduler has to accommodate enough data for transmission over a given time period (scheduling period). The queue delay, on the other hand, is more meaningful as it shows to what extent the scheduler is not able to send the packets in time, i.e. to what extent it is congested.
The feedback could be sent from the bottleneck node to the parent node. In one option, in order to avoid propagating the bottleneck, this feedback should be relayed to the parent node as soon as it is received, up to the donor. This could mitigate the issue described earlier in Figures 6 and 7. When relaying this information, the intermediate IAB nodes may aggregate (add) the corresponding queue lengths and/or delays, so that the parent node or donor has an aggregated view of the queue length/queue delay downstream. Such feedback might be configured as trigger-based (with a hysteresis and/or prohibit timer to avoid sending the feedback too often), preferably over the MAC layer for quick feedback up to the donor.
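The feedback and its hop-by-hop aggregation might look like the following sketch; the fields follow the parameters listed above, while the class and function names are illustrative:

from dataclasses import dataclass

@dataclass
class CongestionFeedback:
    queue_id: str          # at least the UE-bearer identity, optionally plus the QFI
    queue_size_bytes: int  # queue length in bytes
    queue_delay_ms: float  # average time a packet stays in the queue

def aggregate_on_relay(local: CongestionFeedback,
                       downstream: CongestionFeedback) -> CongestionFeedback:
    # An intermediate IAB node relays the feedback towards the donor, adding
    # its own queue length and delay so that the parent or donor obtains an
    # aggregated view of the downstream queues.
    return CongestionFeedback(
        queue_id=downstream.queue_id,
        queue_size_bytes=local.queue_size_bytes + downstream.queue_size_bytes,
        queue_delay_ms=local.queue_delay_ms + downstream.queue_delay_ms,
    )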
Although not shown in detail, any of the devices or apparatus that form part of the network may include at least a processor, a storage unit and a communications interface, wherein the processor, storage unit and communications interface are configured to perform the method of any aspect of the present invention. Further options and choices are described below.
The signal processing functionality of the embodiments of the invention especially the gNB and the UE may be achieved using computing systems or architectures known to those who are skilled in the relevant art. Computing systems such as, a desktop, laptop or notebook computer, hand-held computing device (PDA, cell phone, palmtop, etc. ) , mainframe, server, client, or any other type of special or general purpose computing device as may be desirable or appropriate for a given application or environment can be used. The computing system can include one or more processors which can be implemented using a general or special-purpose processing engine such as, for example, a microprocessor, microcontroller or other control module.
The computing system can also include a main memory, such as random access memory (RAM) or other dynamic memory, for storing information and instructions to be executed by a processor. Such a main memory also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by the processor. The computing system may likewise include a read only memory (ROM) or other static storage device for storing static information and instructions for a processor.
The computing system may also include an information storage system which may include, for example, a media drive and a removable storage interface. The media drive may include a drive or other mechanism to support fixed or removable storage media, such as a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a compact disc (CD) or digital video drive (DVD) read or write drive (R or RW) , or other removable or fixed media drive. Storage media may include, for example, a hard disk, floppy disk, magnetic tape, optical disk, CD or DVD, or other fixed or removable medium that is read by and written to by media drive. The storage media may include a computer-readable storage medium having particular computer software or data stored therein.
In alternative embodiments, an information storage system may include other similar components for allowing computer programs or other instructions or data to be loaded into the computing system. Such components may include, for example, a removable storage unit and an interface, such as a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory module) and memory slot, and other removable storage units and interfaces that allow software and data to be transferred from the removable storage unit to the computing system.
The computing system can also include a communications interface. Such a communications interface can be used to allow software and data to be transferred between a computing system and external devices. Examples of communications interfaces can include a modem, a network interface (such as an Ethernet or other NIC card) , a communications port (such as for example, a universal serial bus (USB) port) , a PCMCIA slot and card, etc. Software and data transferred via a communications interface are in the form of signals which can be electronic, electromagnetic, and optical or other signals capable of being received by a communications interface medium.
In this document, the terms ‘computer program product’, ‘computer-readable medium’ and the like may be used generally to refer to tangible media such as, for example, a memory, storage device, or storage unit. These and other forms of computer-readable media may store one or more instructions for use by the processor comprising the computer system to cause the processor to perform specified operations. Such instructions, generally referred to as ‘computer program code’ (which may be grouped in the form of computer programs or other groupings), when executed, enable the computing system to perform functions of embodiments of the present invention. Note that the code may directly cause a processor to perform specified operations, be compiled to do so, and/or be combined with other software, hardware, and/or firmware elements (e.g., libraries for performing standard functions) to do so.
The non-transitory computer readable medium may comprise at least one from a group consisting of: a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a Read Only Memory, a Programmable Read Only Memory, an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory and a Flash memory. In an embodiment where the elements are implemented using software, the software may be stored in a computer-readable medium and loaded into the computing system using, for example, a removable storage drive. A control module (in this example, software instructions or executable computer program code), when executed by the processor in the computer system, causes the processor to perform the functions of the invention as described herein.
Furthermore, the inventive concept can be applied to any circuit for performing signal processing functionality within a network element. It is further envisaged that, for example, a semiconductor manufacturer may employ the inventive concept in a design of a stand-alone device, such as a microcontroller or a digital signal processor (DSP), or an application-specific integrated circuit (ASIC) and/or any other sub-system element.
It will be appreciated that, for clarity purposes, the above description has described embodiments of the invention with reference to a single processing logic. However, the inventive concept may equally be implemented by way of a plurality of different functional units and processors to provide the signal processing functionality. Thus, references to specific functional units are only to be seen as references to suitable means for providing the described functionality, rather than indicative of a strict logical or physical structure or organisation.
Aspects of the invention may be implemented in any suitable form including hardware, software, firmware or any combination of these. The invention may optionally be implemented, at least partly, as computer software running on one or more data processors and/or digital signal processors or configurable module components such as FPGA devices.
Thus, the elements and components of an embodiment of the invention may be physically, functionally and logically implemented in any suitable way. Indeed, the functionality may be implemented in a single unit, in a plurality of units or as part of other functional units. Although the present invention has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present  invention is limited only by the accompanying claims. Additionally, although a feature may appear to be described in connection with particular embodiments, one skilled in the art would recognise that various features of the described embodiments may be combined in accordance with the invention. In the claims, the term ‘comprising’ does not exclude the presence of other elements or steps.
Furthermore, although individually listed, a plurality of means, elements or method steps may be implemented by, for example, a single unit or processor. Additionally, although individual features may be included in different claims, these may possibly be advantageously combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. Also, the inclusion of a feature in one category of claims does not imply a limitation to this category, but rather indicates that the feature is equally applicable to other claim categories, as appropriate.
Furthermore, the order of features in the claims does not imply any specific order in which the features must be performed and in particular the order of individual steps in a method claim does not imply that the steps must be performed in this order. Rather, the steps may be performed in any suitable order. In addition, singular references do not exclude a plurality. Thus, references to ‘a’ , ‘an’ , ‘first’ , ‘second’ , etc. do not preclude a plurality.

Claims (23)

  1. A method of managing traffic flow congestion in a wireless communications network, the wireless communications network comprising a plurality of nodes which support integrated wireless access and backhaul between a core network and user equipment, the method comprising
    at at least one node of the network
    receiving at least one traffic flow comprising a plurality of protocol data units,
    accessing a quality of service flow identification of each of the plurality of protocol data units,
    using the quality of service flow identifications to sort the plurality of protocol data units into at least one quality of service flow, and
    applying active queue management to the at least one quality of service flow to manage congestion of traffic flow in the wireless communications network.
  2. A method according to claim 1 wherein accessing the quality of service flow identification of each of the plurality of protocol data units comprises reading the quality of service flow identification from a service data adaptation protocol layer header of each of the plurality of protocol data units.
  3. A method according to claim 1 wherein accessing the quality of service flow identification of each of the plurality of protocol data units comprises reading the quality of service flow identification from an adaptation layer of each of the plurality of protocol data units.
  4. A method according to any one of the preceding claims wherein the at least one node is configured with protocol data unit format information to access the quality of service flow identification of each of the plurality of protocol data units.
  5. A method of controlling traffic flow congestion in a wireless communications network, the wireless communications network comprising a plurality of nodes which support integrated wireless access and backhaul between a core network and user equipment, the method comprising
    at at least one node of the network
    receiving at least one traffic flow comprising a plurality of protocol data units,
    accessing at least one flow indicator of each of the plurality of protocol data units,
    using the flow indicators to sort the plurality of protocol data units into at least one flow, and
    applying active queue management to the at least one flow to manage congestion of traffic flow in the wireless communications network.
  6. A method according to claim 5 wherein accessing the flow indicator of each of the plurality of protocol data units comprises reading the flow indicator from an adaptation layer of each of the plurality of protocol data units.
  7. A method according to claim 6 wherein reading the flow indicator from an adaptation layer comprises reading the flow indicator in an F1-U interface relayed in the adaptation layer.
  8. A method according to claim 7 wherein the F1-U interface comprises a GTP-U extension header.
  9. A method of managing traffic flow congestion in a wireless communications network, the wireless communications network comprising a plurality of nodes which support integrated wireless access and backhaul between a core network and user equipment, the method comprising
    at a first node of the network
    receiving at least one traffic flow comprising a plurality of protocol data units,
    marking at least one of the plurality of protocol data units with a discard instruction,
    sending the plurality of protocol data units including the at least one marked protocol data unit to a second node of the network, and
    at the second node of the network
    receiving the plurality of protocol data units including the at least one marked protocol data unit to a second node of the network,
    reading the discard instruction of the marked protocol data unit and discarding a service data unit corresponding to the marked protocol data unit to create a traffic congestion indicator, and
    sending the traffic congestion indicator to a transmitter of the traffic flow to manage congestion of traffic flow in the wireless communications network.
  10. A method according to claim 9 wherein marking the at least one of the plurality of protocol data units with a discard instruction comprises marking a packet data convergence protocol layer header of the at least one of the plurality of protocol data units with a discard instruction.
  11. A method according to claim 10 wherein marking the packet data convergence protocol layer header of the at least one of the plurality of protocol data units with a discard instruction comprises setting a reserve bit in the header to indicate a discard instruction.
  12. A method according to claim 10 or claim 11 wherein the at least one node is configured with protocol data unit format information to access the packet data convergence protocol layer header of the at least one of the plurality of protocol data units.
  13. A method according to claim 9 wherein marking the at least one of the plurality of protocol data units with a discard instruction comprises marking an adaptation layer header of the at least one of the plurality of protocol data units with a discard instruction.
  14. A method according to claim 9 or claim 13 wherein marking the at least one of the plurality of protocol data units with a discard instruction comprises marking a GTP-U extension of the at least one of the plurality of protocol data units with a discard instruction.
  15. A method according to claim 13 or claim 14 wherein marking the adaptation layer header and the GTP-U extension of the at least one of the plurality of protocol data units with a discard instruction comprises setting a bit in the header and extension to indicate a discard instruction.
  16. A method according to any one of the previous claims wherein, when a packet is relayed at an IAB network node, the discard instruction is relayed.
  17. A method according to any one of the previous claims wherein marking a packet data convergence protocol data unit is realized by removing the service data unit part of the packet data convergence protocol data unit.
  18. A method of managing traffic flow congestion in a wireless communications network, the wireless communications network comprising a plurality of nodes which support integrated wireless access and backhaul between a core network and user equipment, the method comprising
    at a first node of the network
    receiving at least one traffic flow comprising a plurality of service data units,
    corrupting a packet data convergence protocol layer SDU part of at least one of the plurality of service data units,
    sending the plurality of service data units including the at least one corrupted service data unit to a second node of the network, and
    at the second node of the network
    receiving the plurality of service data units including the at least one corrupted service data unit to a second node of the network,
    discarding the corrupted service data unit to create a traffic congestion indicator, and
    sending the traffic congestion indicator to a transmitter of the traffic flow to manage congestion of traffic flow in the wireless communications network.
  19. A method of managing traffic flow congestion in a wireless communications network, the wireless communications network comprising a plurality of nodes which support integrated wireless access and backhaul between a core network and user equipment, the method comprising
    at a first node of the network
    receiving at least one traffic flow comprising a plurality of protocol data units,
    marking at least one of the plurality of protocol data units with an explicit congestion notification,
    sending the plurality of protocol data units including the at least one marked protocol data unit to a second node of the network, and
    at the second node of the network
    receiving the plurality of protocol data units including the at least one marked protocol data unit to a second node of the network,
    reading the explicit congestion notification of the marked protocol data unit and relaying the explicit congestion notification up to a central unit of the network, and
    mapping the explicit congestion notification of the marked protocol data unit to the ECN field of the end-user SDU to manage congestion of traffic flow in the wireless communications network.
  20. A method according to claim 19 wherein for UL marking the at least one of the plurality of protocol data units with an explicit congestion notification comprises adding the explicit congestion notification to an AL header of the at least one of the plurality of protocol data units.
  21. A method according to claim 19 or claim 20 wherein for UL marking the at least one of the plurality of protocol data units with an explicit congestion notification comprises adding the explicit congestion notification to a GTP-U extension of the at least one of the plurality of protocol data units.
  22. A base station configured to perform the method of any of claims 1 to 21.
  23. A method according to any of claims 9 to 21 wherein the second node is the user equipment.
PCT/CN2019/108060 2018-09-27 2019-09-26 Congestion management in a wireless communications network WO2020063722A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201980039629.8A CN112352449B (en) 2018-09-27 2019-09-26 Congestion management in a wireless communication network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1815803.0A GB2577531A (en) 2018-09-27 2018-09-27 Congestion management in a wireless communications network
GB1815803.0 2018-09-27

Publications (2)

Publication Number Publication Date
WO2020063722A1 true WO2020063722A1 (en) 2020-04-02
WO2020063722A9 WO2020063722A9 (en) 2020-06-11

Family

ID=64108966

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/108060 WO2020063722A1 (en) 2018-09-27 2019-09-26 Congestion management in a wireless communications network

Country Status (3)

Country Link
CN (1) CN112352449B (en)
GB (1) GB2577531A (en)
WO (1) WO2020063722A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024016277A1 (en) * 2022-07-21 2024-01-25 Zte Corporation Method, device, and system for congestion control in wireless networks

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110536350A (en) * 2019-02-14 2019-12-03 中兴通讯股份有限公司 IAB chainlink control method, communication unit, computer readable storage medium
CN114979002A (en) * 2021-02-23 2022-08-30 华为技术有限公司 Flow control method and flow control device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10136359B2 (en) * 2015-06-30 2018-11-20 Qualcomm Incorporated Traffic flow migration in backhaul networks

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103081529A (en) * 2010-06-22 2013-05-01 捷讯研究有限公司 Information dissemination in a wireless communication system
US20170332282A1 (en) * 2016-05-13 2017-11-16 Huawei Technologies Co., Ltd. Method and system for providing guaranteed quality of service and quality of experience channel

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUAWEI: "Some considerations about congestion handling and flow control for IAB networks", 3GPP TSG-RAN WG2#103 R2-1812711, vol. RAN WG2, 10 August 2018 (2018-08-10), XP051522305 *
SAMSUNG: "Overview of flow control solutions for architecture 1 and 2", 3GPP TSG-RAN WG2 MEETING #103 R2-1811056, vol. RAN WG2, 10 August 2018 (2018-08-10), XP051520757 *

Also Published As

Publication number Publication date
CN112352449B (en) 2024-03-19
GB201815803D0 (en) 2018-11-14
CN112352449A (en) 2021-02-09
WO2020063722A9 (en) 2020-06-11
GB2577531A (en) 2020-04-01

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19867098

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19867098

Country of ref document: EP

Kind code of ref document: A1