WO2020085969A1 - Methods for handling link failures in integrated access backhaul (iab) networks - Google Patents


Info

Publication number
WO2020085969A1
WO2020085969A1 (PCT/SE2019/050935)
Authority
WO
WIPO (PCT)
Prior art date
Application number
PCT/SE2019/050935
Other languages
French (fr)
Inventor
Jose Luis Pradas
Ajmal MUHAMMAD
Gunnar Mildh
Oumer Teyeb
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson (Publ)
Publication of WO2020085969A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/22 Alternate routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/28 Routing or path finding of packets in data switching networks using route fault recovery
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/25 Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 40/00 Communication routing or communication path finding
    • H04W 40/02 Communication route or path selection, e.g. power-based or shortest path routing
    • H04W 40/22 Communication route or path selection, e.g. power-based or shortest path routing using selective relaying for reaching a BTS [Base Transceiver Station] or an access point
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 40/00 Communication routing or communication path finding
    • H04W 40/34 Modification of an existing route

Definitions

  • the present application relates generally to the field of wireless communication networks, and more specifically to integrated access backhaul (IAB) networks in which the available wireless communication resources are shared between user access to the network and backhaul of user traffic within the network (e.g., to/from a core network).
  • FIG. 1 illustrates a high-level view of a fifth-generation (5G) wireless network architecture, consisting of a Next Generation RAN (NG-RAN) 199 and a 5G Core (5GC) 198.
  • NG-RAN 199 can include one or more gNodeB’s (gNBs) connected to the 5GC via one or more NG interfaces, such as gNBs 100, 150 connected via interfaces 102, 152, respectively. More specifically, gNBs 100, 150 can be connected to one or more Access and Mobility Management Functions (AMF) in the 5GC 198 via respective NG-C interfaces. Similarly, gNBs 100, 150 can be connected to one or more User Plane Functions (UPFs) in 5GC 198 via respective NG-U interfaces.
  • 5GC 198 can be replaced by an Evolved Packet Core (EPC), which conventionally has been used together with a Long-Term Evolution (LTE) Evolved UMTS RAN (E-UTRAN).
  • gNBs 100, 150 can connect to one or more Mobility Management Entities (MMEs) in EPC 198 via respective S1-C interfaces. Similarly, gNBs 100, 150 can connect to one or more Serving Gateways (SGWs) in EPC 198 via respective S1-U interfaces.
  • each of the gNBs can be connected to each other via one or more Xn interfaces, such as Xn interface 140 between gNBs 100 and 150.
  • the radio technology for the NG-RAN is often referred to as “New Radio” (NR).
  • each of the gNBs can support frequency division duplexing (FDD), time division duplexing (TDD), or a combination thereof.
  • NG-RAN 199 is layered into a Radio Network Layer (RNL) and a Transport Network Layer (TNL).
  • the NG-RAN architecture, i.e., the NG-RAN logical nodes and the interfaces between them (e.g., NG, Xn, F1), is defined as part of the RNL.
  • the TNL provides services for user plane transport and signaling transport.
  • each gNB is connected to all 5GC nodes within an “AMF Region,” which is defined in 3GPP TS 23.501 (v15.6.0). If security protection for CP and UP data on TNL of NG-RAN interfaces is supported, NDS/IP (e.g., as defined in 3GPP TS 33.401 v15.4.0) shall be applied.
  • the NG-RAN logical nodes shown in Figure 1 include a Central Unit (CU or gNB-CU) and one or more Distributed Units (DU or gNB-DU).
  • gNB 100 includes gNB-CU 110 and gNB-DUs 120 and 130.
  • CUs (e.g., gNB-CU 110) are logical nodes that host higher-layer protocols and perform various gNB functions such as controlling the operation of DUs.
  • a DU (e.g., gNB-DUs 120, 130) is a decentralized logical node that hosts lower layer protocols and can include, depending on the functional split option, various subsets of the gNB functions.
  • each of the CUs and DUs can include various circuitry needed to perform their respective functions, including processing circuitry, transceiver circuitry (e.g., for communication), and power supply circuitry.
  • The terms “central unit” and “centralized unit” are used interchangeably herein, as are the terms “distributed unit” and “decentralized unit.”
  • a gNB-CU connects to one or more gNB-DUs over respective F1 logical interfaces, such as interfaces 122 and 132 shown in Figure 1.
  • a gNB-DU can be connected to only a single gNB-CU.
  • the gNB-CU and connected gNB-DU(s) are only visible to other gNBs and the 5GC as a gNB.
  • the F1 interface is not visible beyond the gNB-CU.
  • the F1 interface between the gNB-CU and gNB-DU is specified and/or based on the following general principles: • F1 is an open interface;
  • F1 is a point-to-point interface between the endpoints (even in the absence of a physical direct connection between the endpoints);
  • F1 supports control plane and user plane separation into respective F1-AP protocol and F1-U protocol (also referred to as NR User Plane Protocol), such that a gNB-CU may also be separated in CP and UP;
  • a gNB terminates X2, Xn, NG and S1-U interfaces and, for the F1 interface between DU and CU, utilizes the F1-AP protocol that is defined in 3GPP TS 38.473.
  • the F1-U protocol is used to convey control information related to the user data flow management of data radio bearers, as defined in 3GPP TS 38.425.
  • the F1-U protocol data is conveyed by the GTP-U protocol, specifically, by the “RAN Container” GTP-U extension header as defined in 3GPP TS 29.281 (v15.2.0).
  • the GTP-U protocol over user datagram protocol (UDP) over IP carries data streams on the F1 interface.
  • a GTP-U “tunnel” between two nodes is identified in each node by a tunnel endpoint identifier (TEID), an IP address, and a UDP port number.
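As a rough illustrative sketch (the class, field values, and bearer labels below are ours, not from the disclosure), the three identifiers naming a GTP-U tunnel endpoint can serve together as a lookup key for demultiplexing incoming traffic:

```python
from dataclasses import dataclass

# Hypothetical sketch: a GTP-U tunnel endpoint, as described above, is
# identified in each node by a TEID, an IP address, and a UDP port number.
@dataclass(frozen=True)
class TunnelEndpoint:
    teid: int        # tunnel endpoint identifier
    ip_address: str  # transport-layer IP address of the endpoint
    udp_port: int    # UDP port (2152 is the conventional GTP-U port)

# Each node can then demultiplex incoming GTP-U traffic by endpoint:
tunnels = {
    TunnelEndpoint(0x1001, "10.0.0.1", 2152): "F1-U bearer #1",
    TunnelEndpoint(0x1002, "10.0.0.1", 2152): "F1-U bearer #2",
}

key = TunnelEndpoint(0x1002, "10.0.0.1", 2152)
print(tunnels[key])  # the bearer context bound to this tunnel
```

Using a frozen dataclass makes the triple hashable, so it can act directly as a dictionary key.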
  • a CU can host protocols such as RRC and PDCP, while a DU can host protocols such as RLC, MAC and PHY.
  • Other variants of protocol distributions between CU and DU can exist, however, such as hosting the RRC, PDCP and part of the RLC protocol in the CU (e.g., the Automatic Retransmission Request (ARQ) function), while hosting the remaining parts of the RLC protocol in the DU, together with MAC and PHY.
  • the CU can host RRC and PDCP, where PDCP is assumed to handle both UP traffic and CP traffic.
  • other exemplary embodiments may utilize other protocol splits, hosting certain protocols in the CU and certain others in the DU.
  • Exemplary embodiments can also locate centralized control plane protocols (e.g., PDCP-C and RRC) in a different CU with respect to the centralized user plane protocols (e.g., PDCP-U).
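The split options discussed above can be summarized, purely for illustration (the option names and exact layer groupings below are our assumptions), as a table of which unit hosts which protocol layer:

```python
# Illustrative sketch (option names and layer groupings are ours): possible
# CU/DU protocol splits, expressed as which layers each unit hosts.
SPLITS = {
    # baseline split: CU hosts RRC/PDCP, DU hosts RLC/MAC/PHY
    "baseline":  {"CU": ["RRC", "PDCP"],            "DU": ["RLC", "MAC", "PHY"]},
    # variant: the ARQ part of RLC is moved up into the CU
    "rlc-split": {"CU": ["RRC", "PDCP", "RLC-ARQ"], "DU": ["RLC-TM", "MAC", "PHY"]},
}

def hosts(split: str, layer: str) -> str:
    """Return which unit ('CU' or 'DU') hosts a layer under a given split."""
    for unit, layers in SPLITS[split].items():
        if layer in layers:
            return unit
    raise KeyError(layer)

print(hosts("baseline", "PDCP"))  # CU
print(hosts("baseline", "RLC"))   # DU
```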
  • Densification via the deployment of more and more base stations is one of the mechanisms that can be employed to satisfy the increasing demand for bandwidth and/or capacity in mobile networks, which is mainly driven by the increasing use of video streaming services. Due to the availability of more spectrum in the millimeter wave (mmw) band, deploying small cells that operate in this band is an attractive deployment option for these purposes. However, the normal approach of connecting the small cells to the operator's backhaul network with optical fiber can end up being very expensive and impractical. Employing wireless links for connecting the small cells to the operator's network is a cheaper and more practical alternative. One such approach is an integrated access backhaul (IAB) network where the operator can utilize part of the radio resources for the backhaul link.
  • the Relay Node (RN) is connected to a donor eNB, which has an S1/X2 proxy functionality hiding the RN from the rest of the network.
  • That architecture enabled the Donor eNB to be aware of the UEs behind the RN and to hide from the CN any UE mobility between the Donor eNB and Relay Node(s) on the same Donor eNB.
  • other architectures were also considered including, e.g., where the RNs are more transparent to the Donor gNB and allocated a separate stand-alone P/S-GW node.
  • One difference in NR compared to LTE is the gNB-CU/DU split described above, which separates time-critical RLC/MAC/PHY protocols from less time-critical RRC/PDCP protocols. It is anticipated that a similar split could also be applied for the IAB case.
  • Other IAB-related differences anticipated in NR as compared to LTE are the support of multiple hops and the support of redundant paths.
  • FIG. 3 shows a reference diagram for an IAB network in standalone mode, as further explained in 3GPP TR 38.874 (v0.2.1).
  • the IAB network shown in Figure 3 includes one IAB-donor 340 and multiple IAB-nodes 311-315, all of which can be part of a radio access network (RAN) such as an NG-RAN.
  • IAB donor 340 includes DUs 321, 322 connected to a CU, which is represented by functions CU-CP 331 and CU-UP 332.
  • IAB donor 340 can communicate with core network (CN) 350 via the CU functionality shown.
  • Each of the IAB nodes 311-315 connects to the IAB-donor via one or more wireless backhaul links (also referred to herein as “hops”). More specifically, the Mobile-Termination (MT) function of each IAB-node 311-315 terminates the radio interface layers of the wireless backhaul towards a corresponding “upstream” (or “northbound”) DU function.
  • This MT functionality is similar to functionality that enables UEs to access the IAB network and, in fact, has been specified by 3GPP as part of the Mobile Equipment (ME).
  • upstream DUs can include either DU 321 or 322 of IAB donor 340 and, in some cases, a DU function of an intermediate IAB node that is “downstream” (or “southbound”) from IAB donor 340.
  • IAB-node 314 is downstream from IAB-node 312 and DU 321
  • IAB-node 312 is upstream from IAB-node 314 but downstream from DU 321
  • DU 321 is upstream from IAB-nodes 312 and 314.
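The upstream/downstream relationships described above form a tree rooted at the donor. A minimal sketch, with node labels loosely following the Figure 3 discussion but the data structure entirely ours:

```python
# Hypothetical sketch of the topology fragment discussed above:
# parent[x] gives the next upstream node of x (labels are illustrative).
parent = {
    "IAB-314": "IAB-312",  # IAB-node 314 is downstream from IAB-node 312
    "IAB-312": "DU-321",   # IAB-node 312 is downstream from donor DU 321
}

def upstream_path(node: str) -> list:
    """Return all nodes upstream of `node`, nearest first."""
    path = []
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

print(upstream_path("IAB-314"))  # ['IAB-312', 'DU-321']
```

A failure of the link between "IAB-312" and "DU-321" thus affects every node whose upstream path contains that hop.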
  • the DU functionality of IAB nodes 311-315 also terminates the radio interface layers toward UEs (e.g., for network access via the DU) and other downstream IAB nodes.
  • IAB-donor 340 can be treated as a single logical node that comprises a set of functions such as gNB-DUs 321-322, gNB-CU-CP 331, gNB-CU-UP 332, and possibly other functions.
  • the IAB-donor can be split according to these functions, which can all be either co-located or non-co-located as allowed by the 3GPP NG-RAN architecture.
  • some of the functions presently associated with the IAB-donor can be moved outside of the IAB-donor if such functions do not perform IAB-specific tasks.
  • Each IAB-node DU connects to the IAB-donor CU using a modified form of F1, which is referred to as F1*.
  • the user-plane portion of F1* (referred to as “F1*-U”) runs over RLC channels on the wireless backhaul between the MT on the serving IAB-node and the DU on the IAB donor.
  • an adaptation layer is included to hold routing information, thereby enabling hop-by-hop forwarding by IAB nodes. In some sense, the adaptation layer replaces the IP functionality of the standard F1 stack.
  • F1*-U may carry a GTP-U header for the end-to-end association between CU and DU (e.g., IAB-node DU).
  • information carried inside the GTP-U header can be included into the adaptation layer.
  • the adaptation layer for IAB can be inserted either below or above the RLC layer. Optimizations to RLC layer itself are also possible, such as applying ARQ only on the end-to-end connection (i.e., between the donor DU and the IAB node MT) rather than hop-by-hop along access and backhaul links (e.g., between downstream IAB node MT and upstream IAB node DU).
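Hop-by-hop forwarding via the adaptation layer can be sketched as follows; the header field, table contents, and node names are all hypothetical illustrations, not the disclosed design:

```python
# Illustrative sketch: the adaptation layer carries routing information
# (here reduced to a destination identifier), and each IAB node keeps a
# table mapping destinations to the next hop. All identifiers are ours.
ROUTING = {
    "IAB-312": {"donor-DU": "DU-321", "IAB-314": "IAB-314"},
    "IAB-314": {"donor-DU": "IAB-312"},
}

def forward(node: str, packet: dict) -> str:
    """Return the next hop chosen by `node` for `packet`."""
    dest = packet["adapt_hdr"]["dest"]  # read from the adaptation header
    return ROUTING[node][dest]

pkt = {"adapt_hdr": {"dest": "donor-DU"}, "payload": b"..."}
print(forward("IAB-314", pkt))  # first hop toward the donor
```

The point of the sketch is that intermediate nodes never need IP routing: the adaptation-layer identifier alone selects the next hop.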
  • Failure of a wireless backhaul link between an intermediate node (e.g., IAB node 312 in Figure 3) and its parent node (e.g., DU 321) in an IAB network can create various problems for other nodes (e.g., IAB nodes 314-315) that utilize that failed backhaul link.
  • Such problems can include packet losses, retransmissions, or other undesired effects that can exacerbate congestion of an IAB network that already includes one failed wireless backhaul link. Such congestion can result in failure of additional wireless backhaul links in the IAB network and, consequently, loss of service to network users.
  • exemplary embodiments of the present disclosure address these and other difficulties in scheduling uplink (UL) transmissions in a 5G network comprising IAB nodes, thereby enabling the otherwise-advantageous deployment of IAB solutions.
  • Exemplary embodiments of the present disclosure include methods and/or procedures for managing a link failure in an integrated access backhaul (IAB) network. These exemplary methods and/or procedures can be performed by a network node (e.g., an intermediate IAB node) in a radio access network (RAN, e.g., NG-RAN).
  • the exemplary methods and/or procedures can include receiving, from a first upstream node in the IAB network, a first indication of failure of a first backhaul link in a first network path that includes the intermediate node, the first upstream node, and a destination node for uplink (UL) data in the IAB network.
  • the destination node can be a donor DU and/or a donor CU.
  • the exemplary methods and/or procedures can also include, in response to the first indication, performing one or more first actions with respect to transmission of UL data towards the first upstream node.
  • the one or more first actions can include various actions with respect to different protocol layers (e.g., PDCP, RLC, MAC) of the network node.
  • the exemplary methods and/or procedures can also include, based on information associated with the first indication, selectively forwarding the first indication to one or more downstream nodes in the IAB network.
  • the one or more downstream nodes can comprise one or more intermediate nodes and one or more user equipments (UEs).
  • the exemplary methods and/or procedures can also include receiving a second indication concerning a path in the IAB network.
  • the second indication can be received from the first upstream node and can indicate that the first backhaul link in the first network path has been restored.
  • the second indication can be received from a second upstream node and can indicate the establishment of a second network path that includes the intermediate node, a second upstream node in the IAB network, and the destination node.
  • the exemplary methods and/or procedures can also include, in response to the second indication, performing one or more second actions with respect to transmission of UL data towards the upstream node.
  • the one or more second actions can include various actions with respect to different protocol layers (e.g., PDCP, RLC, MAC) of the network node.
  • the exemplary methods and/or procedures can also include, based on information associated with the second indication, selectively forwarding the second indication to the one or more downstream nodes.
  • the selective forwarding of the second indication can include substantially similar operations, or be based on substantially similar information, as the selective forwarding of the first indication.
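The procedure summarized in the preceding bullets might be sketched as follows. This is a hedged illustration only: the class, method names, and the meaning of the indication's contents are our assumptions, not the claimed method.

```python
# Hypothetical sketch: an intermediate IAB node reacts to a backhaul-failure
# indication by suspending UL transmission toward the upstream node, and
# selectively forwards the indication downstream based on the information
# it carries. All names and the indication format are illustrative.
class IntermediateNode:
    def __init__(self, downstream):
        self.downstream = downstream
        self.ul_suspended = False
        self.forwarded = []

    def on_failure_indication(self, info):
        self.ul_suspended = True            # first action: hold UL data
        if info.get("forward_downstream"):  # selective forwarding
            self.forwarded = list(self.downstream)

    def on_recovery_indication(self, info):
        self.ul_suspended = False           # second action: resume UL
        if info.get("forward_downstream"):
            self.forwarded = list(self.downstream)

node = IntermediateNode(downstream=["IAB-315", "UE-1"])
node.on_failure_indication({"forward_downstream": True})
print(node.ul_suspended, node.forwarded)
node.on_recovery_indication({"forward_downstream": False})
print(node.ul_suspended)
```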
  • exemplary embodiments of the present disclosure include additional methods and/or procedures for managing a link failure in an IAB network. These exemplary methods and/or procedures can be performed by a network node (e.g., an intermediate node immediately downstream of the failed link) in a RAN (e.g., NG-RAN).
  • the exemplary methods and/or procedures can include detecting a failure of a first backhaul link between the intermediate node and a first upstream node in the IAB network.
  • the first backhaul link can be part of a first network path that includes the intermediate node, the first upstream node, a plurality of downstream nodes, and a destination node for UL data.
  • the destination node can be a donor DU and/or a donor CU.
  • the exemplary methods and/or procedures can also include sending, to a first downstream node, a first indication of the failure of the first backhaul link, and performing one or more first actions with respect to transmission of UL data towards the first upstream node.
  • the one or more first actions can include various actions with respect to different protocol layers of the network node, e.g., PDCP, RLC, and MAC layers.
  • the exemplary methods and/or procedures can also include determining that a second network path has been established.
  • the second network path can include the intermediate node, the plurality of downstream nodes, and the destination node.
  • the exemplary methods and/or procedures can also include sending, to the first downstream node, a second indication concerning the second path.
  • the second network path can include the first network path, and the second indication can indicate that the first backhaul link in the first network path has been restored.
  • the second network path can include a second upstream node but not the first upstream node, and the second indication can indicate that the second network path has been established to replace the first network path.
  • the exemplary methods and/or procedures can also include performing one or more second actions with respect to transmission of UL data towards the first upstream node.
  • the one or more second actions can include various actions with respect to different protocol layers of the network node, e.g., PDCP, RLC, and MAC layers.
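A corresponding hedged sketch for the node immediately below the failed link (all names, the indication format, and the state handling are our assumptions):

```python
# Hypothetical sketch: the node just below the failed link detects the
# failure, notifies its downstream node, holds UL data, and later signals
# either restoration of the old path or establishment of a new one.
class FailureDetectingNode:
    def __init__(self):
        self.sent = []         # indications sent to the downstream node
        self.buffering = False

    def detect_link_failure(self):
        self.buffering = True                # stop sending UL upstream
        self.sent.append(("failure", None))  # first indication, downstream

    def on_path_established(self, same_path: bool):
        self.buffering = False               # resume UL transmission
        kind = "restored" if same_path else "new-path"
        self.sent.append(("recovery", kind)) # second indication, downstream

n = FailureDetectingNode()
n.detect_link_failure()
n.on_path_established(same_path=False)
print(n.sent)  # [('failure', None), ('recovery', 'new-path')]
```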
  • Exemplary embodiments also include network nodes (e.g., intermediate IAB nodes and/or components thereof) configured to perform operations corresponding to any of the exemplary methods and/or procedures described herein.
  • Exemplary embodiments also include non-transitory, computer-readable media storing computer-executable instructions that, when executed by processing circuitry of a network node, configure the network node to perform operations corresponding to any of the exemplary methods and/or procedures described herein.
  • Figure 1 illustrates a high-level view of the 5G network architecture, including the split central unit (CU)/distributed unit (DU) architecture of gNBs.
  • Figure 2 illustrates the control-plane (CP) and user-plane (UP) interfaces within the split CU-DU architecture shown in Figure 1.
  • Figure 3 shows a reference diagram for an integrated access backhaul (IAB) network in standalone mode, as further explained in 3GPP TR 38.874.
  • Figures 4-8 show block diagrams of IAB reference architectures 1a, 1b, 2a, 2b, and 2c, respectively.
  • Figure 9, which includes Figures 9A-E, shows five (5) different exemplary user plane (UP) protocol stack options for architecture 1a.
  • Figure 10 shows an exemplary UP protocol stack arrangement for architecture lb.
  • Figures 11-12 are block diagrams of an exemplary IAB network that includes a donor DU, a donor CU, and various IAB nodes that are capable of providing access to various UEs, according to various exemplary embodiments of the present disclosure.
  • Figure 13 shows an exemplary data flow diagram corresponding to the IAB network illustrated in Figures 11-12, according to various exemplary embodiments of the present disclosure.
  • Figures 14-15 illustrate exemplary methods and/or procedures for managing a link failure in an integrated access backhaul (IAB) network, according to various exemplary embodiments of the present disclosure.
  • Figure 16 illustrates an exemplary wireless network, according to various exemplary embodiments of the present disclosure.
  • Figure 17 illustrates an exemplary UE, according to various exemplary embodiments of the present disclosure.
  • Figure 18 is a block diagram illustrating an exemplary virtualization environment usable for implementation of various embodiments described herein.
  • Figures 19-20 are block diagrams of various exemplary communication systems and/or networks, according to various exemplary embodiments of the present disclosure.
  • Figures 21-24 are flow diagrams of exemplary methods and/or procedures for transmission and/or reception of user data, according to various exemplary embodiments of the present disclosure.
  • Radio Node: As used herein, a “radio node” can be either a “radio access node” or a “wireless device.”
  • a “radio access node” can be any node in a radio access network (RAN) of a cellular communications network that operates to wirelessly transmit and/or receive signals.
  • Examples of a radio access node include, but are not limited to, a base station (e.g., a New Radio (NR) base station (gNB) in a 3GPP Fifth Generation (5G) NR network or an enhanced or evolved Node B (eNB) in a 3GPP LTE network), a high-power or macro base station, a low-power base station (e.g., a micro base station, a pico base station, a home eNB, or the like), an integrated access backhaul (IAB) node, and a relay node.
  • a “core network node” is any type of node in a core network.
  • Some examples of a core network node include, e.g., a Mobility Management Entity (MME), a Packet Data Network Gateway (P-GW), and a Service Capability Exposure Function (SCEF).
  • a “wireless device” (or “WD” for short) is any type of device that has access to (i.e., is served by) a cellular communications network by communicating wirelessly with network nodes and/or other wireless devices.
  • the term “wireless device” is used interchangeably herein with “user equipment” (or “UE” for short).
  • Some examples of a wireless device include, but are not limited to, a UE in a 3GPP network and a Machine Type Communication (MTC) device. Communicating wirelessly can involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air.
  • a “network node” is any node that is either part of the radio access network or the core network of a cellular communications network.
  • a network node is equipment capable, configured, arranged, and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the cellular communications network, to enable and/or provide wireless access to the wireless device, and/or to perform other functions (e.g., administration) in the cellular communications network.
  • Concepts described herein are also applicable to other radio access technologies, such as Wide Band Code Division Multiple Access (WCDMA), Worldwide Interoperability for Microwave Access (WiMax), Ultra Mobile Broadband (UMB), and Global System for Mobile Communications (GSM).
  • functions and/or operations described herein as being performed by a wireless device or a network node may be distributed over a plurality of wireless devices and/or network nodes.
  • Although the term “cell” is used herein, it should be understood that (particularly with respect to 5G NR) beams may be used instead of cells and, as such, concepts described herein apply equally to both cells and beams.
  • a backhaul link failure between an intermediate node and its parent node in an IAB network can create various problems for other nodes that utilize that failed backhaul link.
  • Such problems can include packet losses, retransmissions, or other undesired effects that can exacerbate congestion of an IAB network that already includes one failed backhaul link.
  • Such congestion can result in failure of additional wireless backhaul links in the IAB network and, consequently, loss of service to network users.
  • 3GPP TR 38.874 (v0.2.1) specifies several reference architectures for supporting user plane (UP) traffic over IAB nodes, including IAB Donor nodes.
  • Figure 4 shows a block diagram of reference architecture “1a”, which leverages the CU/DU split architecture in a two-hop chain of IAB nodes underneath an IAB-donor.
  • each IAB node holds a DU and a mobile terminal (MT).
  • the IAB-node connects to an upstream IAB-node or the IAB-donor.
  • the IAB-node establishes RLC-channels to UEs and to MTs of downstream IAB-nodes.
  • this RLC-channel may refer to a modified RLC*. Whether an IAB node can connect to more than one upstream IAB-node or IAB-donor is for further study.
  • the IAB Donor also includes a DU to support UEs and MTs of downstream IAB nodes.
  • the IAB-donor holds a CU for the DUs of all IAB-nodes and for its own DU. It is FFS if different CUs can serve the DUs of the IAB-nodes.
  • Each DU on an IAB-node connects to the CU in the IAB-donor using a modified form of F1, which is referred to as F1*.
  • F1*-U runs over RLC channels on the wireless backhaul between the MT on the serving IAB-node and the DU on the donor.
  • F1*-U transport between MT and DU on the serving IAB-node as well as between DU and CU on the donor is for further study.
  • An adaptation layer is added, which holds routing information, enabling hop-by-hop forwarding. It replaces the IP functionality of the standard F1 stack.
  • F1*-U may carry a GTP-U header for the end-to-end association between CU and DU.
  • information carried inside the GTP-U header may be included into the adaptation layer.
  • optimizations to RLC are possible, such as applying ARQ only on the end-to-end connection rather than hop-by-hop.
  • The right side of Figure 4 shows two examples of such F1*-U protocol stacks.
  • enhancements of RLC are referred to as RLC*.
  • the MT of each IAB-node further sustains NAS connectivity to the NGC, e.g., for authentication of the IAB-node. It further sustains a PDU-session via the NGC, e.g., to provide the IAB-node with connectivity to the OAM.
  • Details of F1*, the adaptation layer, RLC*, hop-by-hop forwarding, and transport of F1-AP are for further study. Protocol translation between F1* and F1 in case the IAB-donor is split is also for further study.
  • FIG. 5 shows a block diagram of a reference architecture “1b”, which also leverages the CU/DU split architecture in a two-hop chain of IAB nodes underneath an IAB-donor.
  • the IAB-donor holds only one logical CU.
  • each IAB-node and the IAB-donor hold the same functions as in architecture 1a.
  • every backhaul link establishes an RLC-channel, and an adaptation layer is inserted to enable hop-by-hop forwarding of F1*.
  • the MT on each IAB-node establishes a PDU-session with a UPF residing on the donor.
  • the MT’s PDU-session carries Fl* for the collocated DU.
  • the PDU-session provides a point-to-point link between CU and DU.
  • the PDCP-PDUs of F1* are forwarded via an adaptation layer in the same manner as described for architecture 1a.
  • the right side of Figure 5 shows an example of the F1*-U protocol stack.
  • the UE establishes RLC channels over the wireless backhaul to the DU on the UE's access IAB node (i.e., the IAB donor) via the F1*-U interface.
  • Transport of Fl*-U over the wireless backhaul is enabled by an adaptation layer, which is integrated with the RLC channel.
  • information carried on the adaptation layer supports the following functions:
  • FIG. 6 shows a block diagram of a reference architecture “2a”, which employs hop-by-hop forwarding across intermediate nodes using PDU-session-layer routing.
  • each IAB-node holds an MT to establish an NR Uu link with a gNB on the parent IAB-node or IAB-donor. Via this NR-Uu link, the MT sustains a PDU-session with a UPF that is collocated with the gNB. In this manner, an independent PDU-session can be created on every backhaul link.
  • Each IAB-node can also support a routing function to forward data between PDU sessions of adjacent links. This can create a forwarding plane across the wireless backhaul. Based on PDU-session type, this forwarding plane can support IP or Ethernet (e.g., 802.1).
  • If the PDU-session type is Ethernet, an IP layer can be established on top. In this manner, each IAB-node obtains IP-connectivity to the wireline backhaul network.
  • IP-based interfaces such as NG, Xn, F1, N4, etc. are carried over this forwarding plane.
  • an IAB-Node serving a UE can contain a DU for access links in addition to the gNB and UPF for the backhaul links.
  • the CU for access links would reside in or beyond the IAB Donor.
  • the right side of Figure 6 shows an example of the NG-U protocol stack for IP-based and for Ethernet-based PDU-session type.
  • FIG. 7 shows a block diagram of a reference architecture “2b”, which employs hop-by-hop forwarding across intermediate nodes using GTP-U/UDP/IP nested tunneling.
  • the IAB-node holds an MT to establish an NR Uu link with a gNB on the parent IAB-node or IAB-donor. Via this NR-Uu link, the MT sustains a PDU-session with a UPF. In contrast to architecture 2a, however, this UPF is located at the IAB-donor. Also, forwarding of PDUs across upstream IAB-nodes is accomplished via tunneling. The forwarding across multiple hops therefore creates a stack of nested tunnels.
  • each IAB-node obtains IP-connectivity to the wireline backhaul network. All IP-based interfaces such as NG, Xn, F1, N4, etc. are carried over this forwarding IP plane.
  • the right side of Figure 7 shows a protocol stack example for NG-U.
  • FIG. 8 shows a block diagram of a reference architecture “2c”, which employs hop-by-hop forwarding across intermediate nodes using GTP-U/UDP/IP/PDCP nested tunneling.
  • the IAB-node holds an MT which sustains an RLC-channel with a DU on the parent IAB-node or IAB-donor.
  • the IAB donor holds a CU and a UPF for each IAB-node's DU.
  • the MT on each IAB-node sustains an NR-Uu link with a CU and a PDU session with a UPF on the donor.
  • Forwarding on intermediate nodes is accomplished via tunneling. The forwarding across multiple hops creates a stack of nested tunnels.
  • each IAB-node obtains IP-connectivity to the wireline backhaul network.
  • each tunnel includes an SDAP/PDCP layer. All IP -based interfaces such as NG, Xn, Fl, N4, etc. are carried over this forwarding plane.
  • the right side of Figure 8 shows a protocol stack example for NG-U.
• FIGs 9 and 10 show exemplary user plane (UP) protocol stacks for architecture group 1, i.e., architectures 1a and 1b.
• Figures 9A-E show five (5) different UP protocol stack options for architecture 1a.
• Figure 10 shows an exemplary UP protocol stack arrangement for architecture 1b.
  • both the IAB-donor and the UE will always have PDCP, RLC, and MAC layers
  • the intermediate IAB-nodes will only have RLC and MAC layers.
• the adaptation layer can be included in the intermediate IAB-nodes and the IAB-donor. These IAB nodes can use identifiers carried via the adaptation layer to ensure required QoS treatment and to decide which hop any given packet should be sent to.
  • Each PDCP transmitter entity in Figures 9-10 receives PDCP service data units (SDUs) from higher layers and assigns each SDU a Sequence Number before delivery to the RLC layer.
  • a discardTimer is also started when a PDCP SDU is received. When the discardTimer expires, the PDCP SDU is discarded and a discard indication is sent to lower layers. In response, RLC will discard the RLC SDU if possible.
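The discardTimer behaviour above can be sketched as follows. This is a minimal, hypothetical illustration in Python; the class and method names are invented for this sketch and do not come from any real protocol stack.

```python
# Minimal sketch of the PDCP discardTimer behaviour described above.
# All class and method names are illustrative, not from a real stack.

class PdcpTransmitter:
    def __init__(self, discard_timer_ms):
        self.discard_timer_ms = discard_timer_ms
        self.sdus = {}          # SN -> remaining discardTimer (ms)
        self.next_sn = 0
        self.rlc_discards = []  # discard indications sent to RLC

    def submit_sdu(self, sdu):
        # Assign an SN and start the discardTimer for this SDU.
        sn = self.next_sn
        self.next_sn += 1
        self.sdus[sn] = self.discard_timer_ms
        return sn

    def tick(self, elapsed_ms):
        # Advance all running discard timers; on expiry, discard the
        # SDU and send a discard indication to the RLC layer.
        for sn in list(self.sdus):
            self.sdus[sn] -= elapsed_ms
            if self.sdus[sn] <= 0:
                del self.sdus[sn]
                self.rlc_discards.append(sn)

tx = PdcpTransmitter(discard_timer_ms=100)
sn = tx.submit_sdu(b"payload")
tx.tick(50)            # timer still running
assert sn in tx.sdus
tx.tick(60)            # timer expires: SDU discarded, RLC notified
assert sn not in tx.sdus and sn in tx.rlc_discards
```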
• Each PDCP receiver entity in Figures 9-10 starts a reordering timer (e.g., t-reordering) when it receives packets out of order.
• when t-reordering expires, the PDCP entity updates the variable RX_DELIV, which indicates the value of the first PDCP SDU not delivered to the upper layers (e.g., the lower side of a receiving window).
• Each RLC transmitter entity in Figures 9-10 associates a sequence number with each SDU received from higher layers (e.g., PDCP).
  • the RLC transmitter can set a poll bit to request the RLC receiver to transmit a status report on RLC PDUs sent by the transmitter.
• after setting the poll bit, the RLC transmitter starts a timer (e.g., t-pollRetransmit).
• if this timer expires, the RLC transmitter can again set the poll bit and can retransmit those PDUs that were awaiting acknowledgement.
• an RLC receiver will start a timer (e.g., t-reassembly) when RLC PDUs are received out of sequence.
  • a missing PDU can be determined based on a gap in RLC sequence numbers. This function is similar to the t-reordering timer in PDCP.
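The gap-based detection of a missing PDU can be illustrated with a small sketch; the helper `missing_sns` is hypothetical, not part of any standardized RLC implementation.

```python
# Illustrative sketch (not real RLC code): detect missing PDUs from a
# gap in received RLC sequence numbers.

def missing_sns(received_sns):
    """Return SNs missing between the lowest and highest received SN."""
    lo, hi = min(received_sns), max(received_sns)
    return sorted(set(range(lo, hi + 1)) - set(received_sns))

received = [0, 1, 3, 4, 6]
gaps = missing_sns(received)
assert gaps == [2, 5]
# An AM receiver would start t-reassembly here and, on expiry,
# send a status report to trigger retransmission of SNs 2 and 5.
```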
• when t-reassembly expires during AM operation, the RLC receiver will transmit a status report to trigger a retransmission by the RLC transmitter.
• when a MAC transmitter entity in Figures 9-10 receives SDUs from higher layers (e.g., RLC) for transmission, it can request a resource grant for transmitting the corresponding MAC PDUs.
  • the MAC transmitter can request a resource grant by sending either a scheduling request (SR) or a buffer status report (BSR).
  • FIG 11 is a block diagram of an exemplary IAB network that includes a donor DU, a donor CU, and various IAB nodes that are capable of providing access to various UEs.
• node IAB1 provides access to various UEs (labelled UE_q ... UE_z) and also provides backhaul services to “child” node IAB2.
• IAB1 also provides backhaul services to all nodes that rely on IAB2 for backhaul services, e.g., nodes IAB3-7 in Figure 11.
• the wireless link between IAB1 and IAB2 is expected to provide backhaul for traffic originating from UEs served by nodes IAB2-7.
• nodes IAB3-7 can be considered “descendants” of (or downstream to) IAB2, while nodes IAB2-7 can be considered descendants of IAB1.
• nodes IAB1, IAB2, and IAB4 can be referred to as “intermediate” (or upstream) nodes with respect to the nodes IAB6-7 that provide access to various UEs (e.g., UE_1 ... UE_m and UE_n ... UE_p).
• nodes IAB6 (1120), IAB4 (1130), IAB2 (1140), and IAB1 (1150) are part of a first path between UE_1 (1110) and the donor DU (1160).
  • Figure 11 also illustrates a failure in the wireless backhaul link between nodes IAB1 and IAB2.
• although IAB2 may be aware of this failure, none of the nodes IAB3-7 - nor the UEs that they serve - are aware of it. As such, these nodes and UEs will continue requesting resources from other intermediate nodes (e.g., IAB4) to send UL data towards the donor CU, and the intermediate nodes may continue to grant such requests since they are unaware of the IAB1-2 link failure. This can cause packet buildup, buffer overflow, and/or retransmissions in intermediate nodes closer to the link failure (e.g., IAB2).
• on the other hand, if intermediate nodes did not grant such requests, this could lead to buffer overflow, packet drops, and/or retransmissions in the intermediate nodes closer to the traffic sources (e.g., UEs). These effects are undesirable and are likely to increase congestion and reduce performance in a network that already includes one wireless backhaul link failure.
• Exemplary embodiments of the present disclosure address these and other problems, challenges, and/or issues by providing specific enhancements and/or improvements to handling wireless backhaul link failures in multi-hop IAB networks.
  • embodiments involve techniques and/or mechanisms for communicating the link failure condition to some or all of the affected IAB nodes and/or UEs whose data traverses the failed link.
  • the IAB nodes and/or UEs receiving this information can then pause and/or reduce the transmission rate of UL data towards the donor CU.
  • embodiments can reduce buffer buildup at intermediate IAB nodes, thereby reducing the probability of packet drops and retransmissions and maintaining acceptable service performance in the IAB network.
  • the affected IAB nodes and/or UEs can forego sending Scheduling Requests (SR) and/or Buffer Status Reports (BSR) to parent IAB nodes.
  • the affected IAB nodes and/or UEs can adjust the value of various timers (e.g., PDCP SDU discard timer set to an infinite value), or halt the timers altogether, to ensure that UL data packets will not be discarded from the transmission buffers and/or that data retransmission will not occur.
• the affected IAB nodes and/or UEs can continue sending lower-layer ACK/NACK for DL data to ensure that transmission of DL data packets continues downstream of the failure (e.g., from IAB2 towards IAB3-7 in Figure 11).
• the affected IAB nodes and/or UEs can also deactivate and/or reduce usage of UL resource grants that were previously configured (e.g., semi-static, periodic, and/or longer-duration grants). This deactivation and/or usage reduction can be done in a particular manner that can be pre-configured. Additionally, the affected IAB nodes can avoid or delay scheduling child IAB nodes and/or UEs for UL and/or DL data transmission.
• the first indication (i.e., of the failure) and the second indication (i.e., of the failure mitigation) can be explicit or implicit. Furthermore, after receiving the first indication or the second indication from a particular node (e.g., parent IAB node), the receiving node can forward the received indication to one or more other nodes (e.g., child/descendant IAB nodes).
• the first and second indications can be provided via dedicated signaling (e.g., RRC message, MAC Control Element (CE), a field/value in a resource grant, etc.), broadcast signaling (e.g., SIB1), or any other higher-layer signaling at RLC or PDCP.
• the first indication can include any of the following information: type of problem and/or failure detected (e.g., radio link failure, slow link performance, etc.).
  • a node receiving a first indication with a non-zero depth value can forward the first indication after decrementing the depth value.
  • a particular value of the depth flag can be reserved to indicate propagation to the leaf nodes/UEs.
  • a received first indication with no depth flag can indicate one of the following: no propagation is needed, propagation should be done on a predetermined number of hops, propagation should be done all the way until leaf nodes/UEs are reached, or propagation of the first indication is left to the discretion of the receiving IAB node.
• the IAB node can base its propagation decision on the buffer occupancy (BO) status of its MT module. For example, the IAB node can propagate the first indication to the descendant nodes and/or UEs only when BO is greater than or equal to a predetermined threshold (e.g., X% of buffer size).
  • the second indication can also include one or more of the above-listed information, but with respect to correction of the problem and/or failure, and the resumption of protocol layers and/or functions.
• the intermediate node can propagate this second indication in a manner (e.g., sequentially, in groups, etc.) to avoid and/or mitigate congestion on the backhaul network due to all descendant nodes and/or UEs resuming UL data transmission simultaneously.
  • the two nodes directly connected to the failed backhaul link can establish a new path or link to reroute the UL data from the descendant nodes and UEs.
  • Figure 12 shows the IAB network of Figure 11, but where a new wireless backhaul link between IAB2 (1140) and IAB1 (1150) has been established via IAB8 (1145) to replace the failed wireless backhaul link directly between IAB2 and IAB1.
  • an intermediate node can establish a new backhaul link with other descendant nodes that bypasses a failed child node.
• IAB1 can establish a new wireless backhaul link with IAB4 (1130) that completely bypasses IAB2. This is illustrated by the dashed line between IAB1 and IAB4 in Figure 12.
• the second indication to resume normal operation can be an implicit indication. For example, when a descendant IAB node and/or UE that previously received a first indication notices that its parent/serving IAB node has been changed, it can interpret this information as the second indication. The IAB node that implicitly receives this second indication can then send an explicit second indication to its descendant IAB nodes and/or UEs, in the same manner as described above.
• similarly, when a descendant IAB node and/or UE that previously received a first indication from a parent node receives a resource grant from the parent node, it can interpret this information as the second indication.
  • the first indication can include information identifying the affected path, so that traffic using the path(s) comprising the failed link can be halted and/or reduced without affecting traffic using the other path(s).
  • the first IAB node can identify the bearers associated with the failed path and the descendant nodes associated with those bearers, and then send the first and/or second indications only to those nodes.
  • the IAB node can include information about the bearers such that descendant IAB nodes can perform similar operations.
• an IAB node can include the adaptation layer address associated with the affected path in the first and/or second indications that it transmits. Using this address, any receiving descendant node can identify associated adaptation layer address(es) from its own set of adaptation layer addresses, and then propagate the indication only to its child node(s) that use path(s) associated with the identified adaptation layer address(es).
  • the first IAB node could have information (e.g., adaptation layer addresses) for all of its descendant nodes that are associated with the failed path. In such case, the first IAB node can include such information in the first and/or second indications.
  • the nodes receiving these first and/or second indications can remove their own adaptation layer addresses from the first indication, and then forward the modified indication to their child nodes that are associated with the other adaptation layer addresses remaining in the modified indication.
• the first or second indication can be modified and forwarded in this manner until no other adaptation layer addresses remain.
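The address-pruning propagation described above might be sketched as follows. All node identifiers and the `forward_indication` helper are hypothetical, introduced only for illustration.

```python
# Hypothetical sketch of address-pruning forwarding: a node removes
# its own adaptation-layer address from the indication and forwards
# the modified indication only to children serving remaining addresses.

def forward_indication(node_addr, indication_addrs, children):
    """children maps child node id -> set of adaptation-layer addresses
    reachable via that child. Returns (remaining_addrs, target_children)."""
    remaining = set(indication_addrs) - {node_addr}
    if not remaining:
        return remaining, []      # nothing left: stop propagating
    targets = [c for c, addrs in children.items() if addrs & remaining]
    return remaining, targets

# Example: a node receives an indication listing its own address plus
# two descendant addresses, reachable via two different children.
remaining, targets = forward_indication(
    node_addr="IAB4",
    indication_addrs={"IAB4", "IAB6", "IAB7"},
    children={"IAB6": {"IAB6"}, "IAB5": {"IAB7"}},
)
assert remaining == {"IAB6", "IAB7"}
assert sorted(targets) == ["IAB5", "IAB6"]
```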
• upon receiving the first indication, the affected IAB nodes and/or UEs can halt, modify, adjust, and/or limit one or more processes in the respective PDCP, RLC, and MAC layers. Likewise, upon receiving the second indication, the affected IAB nodes and/or UEs can restore the one or more processes to their respective operational settings prior to receiving the first indication. Various examples are discussed below.
• some transmitter and/or receiver functions can be halted or limited upon receiving the first indication.
  • the PDCP transmitter can stop assigning SNs to PDCP SDUs, stop creating new PDCP PDUs, and/or stop delivering PDCP PDUs to lower layers.
  • the PDCP transmitter can also reduce or limit the rate at which it performs these procedures.
  • the PDCP transmitter timers can be halted, or the current configured values can be modified.
• the discardTimer associated with each PDCP SDU can be halted. Its value may be stored, reset to its initial value, or set to a new value.
• one or more PDCP receiver timers can be halted, or the current configured values can be modified. For instance, if t-reordering was running, the timer may be stopped and its value stored, or reset to its initial value or to a new value. In addition, t-reordering (or a new timer) can be started with a value to protect against long periods; once the timer expires, the stored PDCP PDUs can be delivered to higher layers.
  • the PDCP transmitter may resume assigning SNs to PDCP SDUs, creating new PDCP PDUs, and/or delivering further PDCP PDUs to lower layers.
  • the PDCP transmitter can also lift any restriction in the rate at which it performs these procedures.
• the PDCP transmitter can also resume any halted timers (e.g., discardTimer), or re-start halted timers with initial configured values or other values.
• the PDCP receiver can resume any halted timers (e.g., t-reordering) or restart them with initial configured values or other values.
  • any timer that was started due to the reception of the first indication can be stopped when the second indication is received.
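The halt/store/resume timer handling described in the preceding bullets could look roughly like this. This is an illustrative sketch; `PausableTimer` is an invented name, not a standardized entity.

```python
# Sketch of halt/resume timer handling: on the first indication a
# running timer is paused and its remaining value stored; on the
# second indication it is resumed from the stored value (or restarted
# with a configured value). Names are illustrative.

class PausableTimer:
    def __init__(self, initial_ms):
        self.initial_ms = initial_ms
        self.remaining_ms = None    # None = not running

    def start(self, value_ms=None):
        # Start with the configured initial value, or a supplied one.
        self.remaining_ms = self.initial_ms if value_ms is None else value_ms

    def halt(self):
        # Stop the timer and return the stored remaining value.
        stored = self.remaining_ms
        self.remaining_ms = None
        return stored

    def tick(self, elapsed_ms):
        if self.remaining_ms is not None:
            self.remaining_ms = max(0, self.remaining_ms - elapsed_ms)

t = PausableTimer(initial_ms=500)   # e.g., a discardTimer
t.start()
t.tick(200)
stored = t.halt()                   # first indication received
assert stored == 300 and t.remaining_ms is None
t.start(stored)                     # second indication: resume
assert t.remaining_ms == 300
```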
• the RLC transmitter can stop assigning SNs to RLC SDUs, stop creating new RLC PDUs, and/or stop delivering RLC PDUs to lower layers (e.g., MAC).
  • the RLC transmitter can also reduce or limit the rate at which it performs these procedures.
• the RLC transmitter timers can be halted, or the current configured values can be modified.
• the t-pollRetransmit timer can be halted. Its value may be stored, reset to its initial value, or set to a new value.
• one or more RLC receiver timers can be halted, or the current configured values can be modified. For instance, if t-reassembly and/or t-StatusProhibit was running, the timer(s) can be stopped and its value stored, or reset to its initial value or to a new value. In addition, the timer(s) (or a new timer) can be started with a value to protect against long periods; once the timer(s) expire, the stored complete RLC SDUs can be delivered to higher layers.
  • the RLC transmitter can resume assigning SNs to RLC SDUs, creating new RLC PDUs, and/or delivering further RLC PDUs to lower layers.
  • the RLC transmitter can also lift any restriction in the rate at which it performs these procedures.
• the RLC transmitter can also resume any halted timers (e.g., t-pollRetransmit), or re-start halted timers with initial configured values or other values.
• the RLC receiver can resume any halted timers (e.g., t-reassembly and/or t-StatusProhibit) or restart them with initial configured values or other values.
  • any timer that was started due to the reception of the first indication can be stopped when the second indication is received.
• upon receiving the first indication, the MAC transmitter can halt transmission of scheduling requests (SRs), or reduce/limit the rate at which SRs are transmitted. Likewise, if the MAC transmitter was previously configured with resource grants, the MAC transmitter can halt and/or restrict usage of such resource grants after receiving the first indication. For example, the MAC transmitter can use such resource grants for retransmission of MAC or RLC-layer data, but not use such resource grants for initial transmission of data.
  • the MAC transmitter can resume transmission of scheduling requests (SRs), or increase the rate at which SRs are transmitted to the rate used prior to receiving the first indication.
• the MAC transmitter can resume full usage of such resource grants after receiving the second indication, e.g., for initial transmission and re-transmission.
  • the MAC transmitter can also transmit a buffer status report (BSR) after receiving the second indication, thereby providing upstream nodes with as much information as possible about buffer status of all logical channels with buffered data. For example, the MAC transmitter can send such information in a long BSR.
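The MAC-layer first and second actions described above can be summarized in a toy sketch. The `MacTransmitter` class is invented for illustration; real MAC procedures are considerably more involved.

```python
# Illustrative MAC-layer sketch: on the first indication, halt SRs and
# restrict configured grants to retransmissions; on the second
# indication, resume normal operation and send a long BSR.

class MacTransmitter:
    def __init__(self):
        self.link_failed = False
        self.sent = []

    def on_first_indication(self):
        self.link_failed = True

    def on_second_indication(self, buffered_bytes_per_lcg):
        self.link_failed = False
        # Long BSR: report buffer status for all logical channel groups.
        self.sent.append(("LONG_BSR", dict(buffered_bytes_per_lcg)))

    def request_resources(self):
        if self.link_failed:
            return False            # halt scheduling requests
        self.sent.append(("SR",))
        return True

    def may_use_grant(self, is_retransmission):
        # During the failure, configured grants may still be used for
        # retransmissions but not for initial transmissions.
        return is_retransmission or not self.link_failed

mac = MacTransmitter()
mac.on_first_indication()
assert not mac.request_resources()
assert mac.may_use_grant(is_retransmission=True)
assert not mac.may_use_grant(is_retransmission=False)
mac.on_second_indication({0: 1200, 1: 300})
assert mac.request_resources()
assert mac.sent[0][0] == "LONG_BSR"
```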
  • Figure 13 shows an exemplary data flow diagram corresponding to the IAB network illustrated in Figures 11-12.
• Figure 13 shows a UE_1 (1110) sending user data via a first network path comprising intermediate nodes IAB6 (1120, i.e., the access node for UE_1), IAB4 (1130), IAB2 (1140), and IAB1 (1150).
• IAB2 detects a failure of a link with upstream (e.g., parent) node IAB1. Subsequently, IAB2 can send a first indication of this link failure to downstream (e.g., child) node IAB4.
  • the first indication can include, or be associated with, various information as described above.
• IAB2 can perform various first actions in response to the failure detection, e.g., any of the above-described operations at one or more protocol layers in IAB2.
• IAB4 can selectively forward the first indication - modified as needed - to downstream (e.g., child) node IAB6, which can selectively forward the first indication to its served UEs, including UE_1.
  • Each of these intermediate nodes can also perform various first actions in response to receiving the first indication.
• IAB2 can detect that the backhaul link with IAB1 has been restored. Alternatively, IAB2 can detect that a second network path has been established that includes IAB2 and its downstream nodes, but not IAB1. In either event, IAB2 can send a second indication, related to the restoration of the first network path or the establishment of the second network path, to downstream node IAB4. The second indication can include, or be associated with, various information as described above. In addition, IAB2 can perform various second actions in response to this detection, e.g., any of the above-described operations at one or more protocol layers in IAB2.
• IAB4 can selectively forward the second indication - modified as needed - to downstream node IAB6, which can selectively forward the second indication to its served UEs, including UE_1.
  • Each of these intermediate nodes can also perform various second actions in response to receiving the second indication.
  • Figure 14 illustrates an exemplary method and/or procedure for managing a link failure in an integrated access backhaul (IAB) network, according to various exemplary embodiments of the present disclosure.
• the exemplary method and/or procedure shown in Figure 14 can be performed by a network node (e.g., an intermediate IAB node) in a radio access network (RAN), such as shown in and/or described in relation to other figures herein.
  • the exemplary method and/or procedure shown in Figure 14 can be complementary to other exemplary methods and/or procedures disclosed herein (e.g., Figure 15) such that they are capable of being used cooperatively to provide benefits, advantages, and/or solutions to problems described herein.
  • the exemplary method and/or procedure can include the operations of block 1410, where the network node can receive, from a first upstream node in the IAB network, a first indication of failure of a first backhaul link in a first network path that includes the intermediate node, the first upstream node, and a destination node for uplink (UL) data in the IAB network.
  • the destination node can be a donor DU and/or a donor CU.
  • the first indication can include a depth value that identifies a number of downstream hops in the IAB network for forwarding the first indication.
  • the first indication can include one or more of the following: type of failure associated with the first backhaul link; identifiers of one or more nodes comprising the first network path; expected time of resolution of the failure of the first backhaul link; protocol layers affected by the failure; and node functions affected by the failure.
  • the identifiers of the one or more nodes can include identifiers of bearers associated with the one or more nodes, or adaptation layer addresses associated with the one or more nodes.
  • the exemplary method and/or procedure can also include the operations of block 1420, where the network node can, in response to the first indication, perform one or more first actions with respect to transmission of UL data towards the first upstream node.
  • the one or more first actions can include various actions with respect to different protocol layers comprising the network node.
  • the one or more first actions can include any of the following operations with respect to a packet data convergence protocol (PDCP) layer of the network node: stopping, or decreasing the rate of, assignment of sequence numbers (SNs) to PDCP service data units (SDUs) received from higher layers; stopping, or decreasing the rate of, creation of PDCP protocol data units (PDUs) for delivery to lower layers; stopping, or decreasing the rate of, delivery of PDCP PDUs to lower layers; stopping a discard timer associated with one or more PDCP SDUs that are ready for transmission; and stopping a reordering timer associated with one or more received PDCP PDUs.
  • the one or more first actions can include any of the following operations with respect to a radio link control (RLC) layer of the network node: stopping, or decreasing the rate of, assignment of SNs to RLC SDUs received from higher layers; stopping, or decreasing the rate of, creation of RLC PDUs for delivery to lower layers; stopping, or decreasing the rate of, delivery of RLC PDUs to lower layers; stopping a poll retransmission timer associated with one or more RLC SDUs that are ready for transmission; and stopping a reassembly timer associated with one or more received RLC PDUs.
  • the one or more first actions can include any of the following operations with respect to a medium access control (MAC) layer of the network node: stopping, or decreasing the rate of, transmission of scheduling requests (SRs); stopping, or decreasing the usage of, previously configured resource grants; and using previously configured resource grants for retransmission of data but not for initial transmission of data.
• the exemplary method and/or procedure can also include the operations of block 1430, where the network node can, based on information associated with the first indication, selectively forward the first indication to one or more downstream nodes in the IAB network.
• the one or more downstream nodes can comprise one or more intermediate nodes and one or more user equipments (UEs).
  • selectively forwarding the first indication can be based on the depth value.
  • the operations of block 1430 can include the operations of sub-block 1431, where if the depth value is non-zero, the network node can decrement the depth value and forward the first indication, including the decremented depth value, to the one or more downstream nodes.
  • the operations of block 1430 can include the operations of sub-block 1432, where if the depth value is zero, the network node can refrain from forwarding the first indication.
  • the operations of block 1430 can also include the operations of sub-block 1433, where the network node can perform one of the following operations if the depth value is not included with the first indication: refraining from forwarding the first indication; forwarding the first indication; and selectively forwarding the first indication further based on a buffer occupancy (BO) value associated with UL data buffers of the intermediate node.
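The depth-value handling of sub-blocks 1431-1433 can be sketched as a single decision function. This is a hypothetical illustration; the BO threshold is an assumed configuration value, not taken from the disclosure.

```python
# Sketch of depth-based selective forwarding (sub-blocks 1431-1433).
# depth=None models an indication carried without a depth value.

BO_THRESHOLD = 0.8  # assumed buffer-occupancy threshold (fraction)

def should_forward(depth, buffer_occupancy):
    """Return (forward?, depth value to carry in the forwarded copy)."""
    if depth is None:
        # No depth value: here, fall back to a buffer-occupancy rule
        # (one of the options listed in sub-block 1433).
        return buffer_occupancy >= BO_THRESHOLD, None
    if depth == 0:
        return False, None          # sub-block 1432: refrain
    return True, depth - 1          # sub-block 1431: decrement, forward

assert should_forward(2, 0.1) == (True, 1)
assert should_forward(0, 0.9) == (False, None)
assert should_forward(None, 0.9) == (True, None)
assert should_forward(None, 0.1) == (False, None)
```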
  • the adaptation layer addresses included with the first indication can include a first address associated with the intermediate node.
  • the operations of block 1430 can also include the operations of sub-blocks 1434-1436, where the network node can modify the first indication by removing the first address; identify one or more downstream nodes associated with the other adaptation layer addresses comprising the first indication; and forward the modified first indication only to the identified downstream nodes.
  • the exemplary method and/or procedure can also include the operations of block 1440, where the network node can receive a second indication concerning a path in the IAB network.
• the second indication can be received from the first upstream node and can indicate that the first backhaul link in the first network path has been restored.
  • the second indication can include a resource grant from the first upstream node.
  • the second indication can be received from a second upstream node and can indicate the establishment of a second network path that includes the intermediate node, a second upstream node in the IAB network, and the destination node.
  • the exemplary method and/or procedure can also include the operations of block 1450, where the network node can, in response to the second indication, perform one or more second actions with respect to transmission of UL data towards the upstream node.
  • the one or more second actions can include various actions with respect to different protocol layers comprising the network node.
• the one or more second actions can include any of the following with respect to a PDCP layer of the network node: resuming, or increasing the rate of, assignment of SNs to PDCP SDUs received from higher layers; resuming, or increasing the rate of, creation of PDCP PDUs for delivery to lower layers; resuming, or increasing the rate of, delivery of PDCP PDUs to lower layers; restarting a discard timer associated with the one or more PDCP SDUs that are ready for transmission; and restarting a reordering timer associated with the one or more received PDCP PDUs.
  • the one or more second actions can include any of the following with respect to an RLC layer of the network node: resuming, or increasing the rate of, assignment of SNs to RLC SDUs received from higher layers; resuming, or increasing the rate of, creation of RLC PDUs for delivery to lower layers; resuming, or increasing the rate of, delivery of RLC PDUs to lower layers; restarting a poll retransmission timer associated with one or more RLC SDUs that are ready for transmission; and restarting a reassembly timer associated with one or more received RLC PDUs.
  • the one or more second actions can include any of the following with respect to a MAC layer of the network node: resuming, or increasing the rate of, transmission of SRs; resuming, or increasing the usage of, previously configured resource grants; and resuming use of previously configured resource grants for both initial transmission of data and retransmission of data.
  • the exemplary method and/or procedure can also include the operations of block 1460, where the network node can, based on information associated with the second indication, selectively forward the second indication to the one or more downstream nodes.
• the selective forwarding of the second indication can include substantially similar operations, or be based on substantially similar information, as the selective forwarding of the first indication described above (block 1430, including sub-blocks 1431-1436).
• Figure 15 illustrates another exemplary method and/or procedure for managing a link failure in an integrated access backhaul (IAB) network, according to various exemplary embodiments of the present disclosure.
• the exemplary method and/or procedure shown in Figure 15 can be performed by a network node (e.g., an intermediate node immediately downstream of the failed link) in a radio access network (RAN), such as shown in and/or described in relation to other figures herein.
  • the exemplary method and/or procedure shown in Figure 15 can be complementary to other exemplary methods and/or procedures disclosed herein (e.g., Figure 14) such that they are capable of being used cooperatively to provide benefits, advantages, and/or solutions to problems described herein.
• although the exemplary method and/or procedure in Figure 15 is illustrated by blocks in a particular order, this order is exemplary, and the operations corresponding to the blocks can be performed in different orders than shown, and can be combined and/or divided into blocks and/or operations having different functionality than shown. Optional blocks and/or operations are indicated by dashed lines.
  • the exemplary method and/or procedure can include the operations of block 1510, where the network node can detect a failure of a first backhaul link between the intermediate node and a first upstream node in the IAB network.
  • the first backhaul link can be part of a first network path that includes the intermediate node, the first upstream node, a plurality of downstream nodes, and a destination node for UL data.
  • the destination node can be a donor DU and/or a donor CU.
  • the exemplary method and/or procedure can also include the operations of block 1520, where the network node can send, to the first downstream node, a first indication of the failure of the first backhaul link.
  • This operation can correspond to the (downstream) intermediate node receiving the first indication, such as in operation 1410 described above.
  • the first indication can include a depth value that identifies a number of downstream hops in the IAB network for forwarding the first indication.
• the exclusion of the depth value can indicate that the first downstream node should perform one of the following operations: refraining from forwarding the first indication; forwarding the first indication; or selectively forwarding the first indication further based on a buffer occupancy (BO) value associated with a UL data buffer of the first downstream node.
  • the first indication can include one or more of the following: type of failure associated with the first backhaul link; identifiers of one or more nodes included in the first network path; expected time of resolution of the failure of the first backhaul link; protocol layers affected by the failure; and node functions affected by the failure.
  • the identifiers of the one or more nodes can include identifiers of bearers associated with the one or more nodes, or adaptation layer addresses associated with the one or more nodes.
  • the exemplary method and/or procedure can include the operations of block 1530, where the network node can perform one or more first actions with respect to transmission of UL data towards the first upstream node.
• the one or more first actions can include various actions with respect to different protocol layers comprising the network node, e.g., PDCP, RLC, and MAC layers.
  • the one or more first actions performed in block 1530 can include any of the exemplary protocol-layer operations described herein, including any of the first actions described above in relation to block 1420 of Figure 14.
  • the exemplary method and/or procedure can also include the operations of block 1540, where the network node can determine that a second network path has been established.
  • the second network path can include the intermediate node, the plurality of downstream nodes, and the destination node.
  • the exemplary method and/or procedure can also include the operations of block 1550, where the network node can send, to the first downstream node, a second indication concerning the second path. This operation can correspond to the (downstream) intermediate node receiving the second indication, such as in operation 1440 described above.
  • the second network path can include the first network path, and the second indication can indicate that the first backhaul link (i.e., of the first network path) has been restored.
  • the second indication can include a resource grant to the first downstream node.
  • the second network path includes a second upstream node but not the first upstream node, and the second indication can indicate that the second network path has been established to replace the first network path.
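The two second-indication cases above (first backhaul link restored vs. first network path replaced by a second path) can be sketched as a simple dispatch. The dictionary encoding and all names are hypothetical:

```python
def handle_second_indication(ind: dict) -> str:
    """Return the action a downstream node takes on a second indication."""
    if ind.get("kind") == "link_restored":
        # First backhaul link restored: resume UL transmission toward the
        # first upstream node, optionally using an included resource grant.
        return "resume_ul_via_first_upstream"
    if ind.get("kind") == "path_replaced":
        # Second path established via a second upstream node: redirect
        # buffered UL data toward the second upstream node.
        return "redirect_ul_via_second_upstream"
    raise ValueError("unknown second-indication kind")
```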
  • the exemplary method and/or procedure can include the operations of block 1560, where the network node can perform one or more second actions with respect to transmission of UL data towards the first upstream node.
  • the one or more second actions can include various actions with respect to different protocol layers comprising the network node, e.g., PDCP, RLC, and MAC layers.
  • the one or more second actions performed in block 1560 can be different than the first actions performed in block 1530, and can include any of the second actions described above in relation to block 1450 of Figure 14.
  • a wireless network such as the example wireless network illustrated in Figure 16.
  • the wireless network of Figure 16 only depicts network 1606, network nodes 1660 and 1660b, and WDs 1610, 1610b, and 1610c.
  • a wireless network can further include any additional elements suitable to support communication between wireless devices or between a wireless device and another communication device, such as a landline telephone, a service provider, or any other network node or end device.
  • network node 1660 and wireless device (WD) 1610 are depicted with additional detail.
  • the wireless network can provide communication and other types of services to one or more wireless devices to facilitate the wireless devices’ access to and/or use of the services provided by, or via, the wireless network.
  • the wireless network can comprise and/or interface with any type of communication, telecommunication, data, cellular, and/or radio network or other similar type of system.
  • the wireless network can be configured to operate according to specific standards or other types of predefined rules or procedures.
  • particular embodiments of the wireless network can implement communication standards, such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, or 5G standards; wireless local area network (WLAN) standards, such as the IEEE 802.11 standards; and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, and/or ZigBee standards.
  • Network 1606 can comprise one or more backhaul networks, core networks, IP networks, public switched telephone networks (PSTNs), packet data networks, optical networks, wide-area networks (WANs), local area networks (LANs), wireless local area networks (WLANs), wired networks, wireless networks, metropolitan area networks, and other networks to enable communication between devices.
  • Network node 1660 and WD 1610 comprise various components described in more detail below. These components work together in order to provide network node and/or wireless device functionality, such as providing wireless connections in a wireless network.
  • the wireless network can comprise any number of wired or wireless networks, network nodes, base stations, controllers, wireless devices, relay stations, and/or any other components or systems that can facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
  • network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs), and NR NodeBs (gNBs)).
  • Base stations can be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and can then also be referred to as femto base stations, pico base stations, micro base stations, or macro base stations.
  • a base station can be a relay node or a relay donor node controlling a relay.
  • a network node can also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station can also be referred to as nodes in a distributed antenna system (DAS).
  • network nodes include multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), core network nodes (e.g., MSCs, MMEs), O&M nodes, OSS nodes, SON nodes, positioning nodes (e.g., E-SMLCs), and/or MDTs.
  • network node 1660 includes processing circuitry 1670, device readable medium 1680, interface 1690, auxiliary equipment 1684, power source 1686, power circuitry 1687, and antenna 1662.
  • network node 1660 illustrated in the example wireless network of Figure 16 can represent a device that includes the illustrated combination of hardware components, other embodiments can comprise network nodes with different combinations of components. It is to be understood that a network node comprises any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods and/or procedures disclosed herein.
  • network node 1660 can comprise multiple different physical components that make up a single illustrated component (e.g., device readable medium 1680 can comprise multiple separate hard drives as well as multiple RAM modules).
  • network node 1660 can be composed of multiple physically separate components (e.g., a NodeB component and an RNC component, or a BTS component and a BSC component, etc.), which can each have their own respective components.
  • network node 1660 comprises multiple separate components (e.g., BTS and BSC components)
  • one or more of the separate components can be shared among several network nodes.
  • a single RNC can control multiple NodeBs.
  • each unique NodeB and RNC pair can in some instances be considered a single separate network node.
  • network node 1660 can be configured to support multiple radio access technologies (RATs).
  • Network node 1660 can also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 1660, such as, for example, GSM, WCDMA, LTE, NR, WiFi, or Bluetooth wireless technologies. These wireless technologies can be integrated into the same or different chip or set of chips and other components within network node 1660.
  • Processing circuitry 1670 can be configured to perform any determining, calculating, or similar operations (e.g, certain obtaining operations) described herein as being provided by a network node. These operations performed by processing circuitry 1670 can include processing information obtained by processing circuitry 1670 by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
  • Processing circuitry 1670 can comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node 1660 components, such as device readable medium 1680, network node 1660 functionality.
  • processing circuitry 1670 can execute instructions stored in device readable medium 1680 or in memory within processing circuitry 1670. Such functionality can include providing any of the various wireless features, functions, or benefits discussed herein.
  • processing circuitry 1670 can include a system on a chip (SOC).
  • processing circuitry 1670 can include one or more of radio frequency (RF) transceiver circuitry 1672 and baseband processing circuitry 1674.
  • radio frequency (RF) transceiver circuitry 1672 and baseband processing circuitry 1674 can be on separate chips (or sets of chips), boards, or units, such as radio units and digital units.
  • part or all of RF transceiver circuitry 1672 and baseband processing circuitry 1674 can be on the same chip or set of chips, boards, or units
  • some or all of the functionality described herein as being provided by a network node, base station, eNB or other such network device can be performed by processing circuitry 1670 executing instructions stored on device readable medium 1680 or memory within processing circuitry 1670.
  • some or all of the functionality can be provided by processing circuitry 1670 without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner.
  • processing circuitry 1670 can be configured to perform the described functionality.
  • the benefits provided by such functionality are not limited to processing circuitry 1670 alone or to other components of network node 1660 but are enjoyed by network node 1660 as a whole, and/or by end users and the wireless network generally.
  • Device readable medium 1680 can comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that can be used by processing circuitry 1670.
  • Device readable medium 1680 can store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, etc., and/or other instructions capable of being executed by processing circuitry 1670 and utilized by network node 1660.
  • Device readable medium 1680 can be used to store any calculations made by processing circuitry 1670 and/or any data received via interface 1690.
  • processing circuitry 1670 and device readable medium 1680 can be considered to be integrated.
  • Interface 1690 is used in the wired or wireless communication of signalling and/or data between network node 1660, network 1606, and/or WDs 1610. As illustrated, interface 1690 comprises port(s)/terminal(s) 1694 to send and receive data, for example to and from network 1606 over a wired connection. Interface 1690 also includes radio front end circuitry 1692 that can be coupled to, or in certain embodiments a part of, antenna 1662. Radio front end circuitry 1692 comprises filters 1698 and amplifiers 1696. Radio front end circuitry 1692 can be connected to antenna 1662 and processing circuitry 1670. Radio front end circuitry can be configured to condition signals communicated between antenna 1662 and processing circuitry 1670.
  • Radio front end circuitry 1692 can receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry 1692 can convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 1698 and/or amplifiers 1696. The radio signal can then be transmitted via antenna 1662. Similarly, when receiving data, antenna 1662 can collect radio signals which are then converted into digital data by radio front end circuitry 1692. The digital data can be passed to processing circuitry 1670. In other embodiments, the interface can comprise different components and/or different combinations of components.
  • network node 1660 may not include separate radio front end circuitry 1692, instead, processing circuitry 1670 can comprise radio front end circuitry and can be connected to antenna 1662 without separate radio front end circuitry 1692.
  • all or some of RF transceiver circuitry 1672 can be considered a part of interface 1690.
  • interface 1690 can include one or more ports or terminals 1694, radio front end circuitry 1692, and RF transceiver circuitry 1672, as part of a radio unit (not shown), and interface 1690 can communicate with baseband processing circuitry 1674, which is part of a digital unit (not shown).
  • Antenna 1662 can include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals.
  • Antenna 1662 can be coupled to radio front end circuitry 1692 and can be any type of antenna capable of transmitting and receiving data and/or signals wirelessly.
  • antenna 1662 can comprise one or more omni-directional, sector or panel antennas operable to transmit/receive radio signals between, for example, 2 GHz and 66 GHz.
  • An omni-directional antenna can be used to transmit/receive radio signals in any direction
  • a sector antenna can be used to transmit/receive radio signals from devices within a particular area
  • a panel antenna can be a line of sight antenna used to transmit/receive radio signals in a relatively straight line.
  • the use of more than one antenna can be referred to as MIMO.
  • antenna 1662 can be separate from network node 1660 and can be connectable to network node 1660 through an interface or port.
  • Antenna 1662, interface 1690, and/or processing circuitry 1670 can be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by a network node. Any information, data and/or signals can be received from a wireless device, another network node and/or any other network equipment. Similarly, antenna 1662, interface 1690, and/or processing circuitry 1670 can be configured to perform any transmitting operations described herein as being performed by a network node. Any information, data and/or signals can be transmitted to a wireless device, another network node and/or any other network equipment.
  • Power circuitry 1687 can comprise, or be coupled to, power management circuitry and can be configured to supply the components of network node 1660 with power for performing the functionality described herein. Power circuitry 1687 can receive power from power source 1686. Power source 1686 and/or power circuitry 1687 can be configured to provide power to the various components of network node 1660 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). Power source 1686 can either be included in, or external to, power circuitry 1687 and/or network node 1660.
  • network node 1660 can be connectable to an external power source (e.g., an electricity outlet) via input circuitry or an interface such as an electrical cable, whereby the external power source supplies power to power circuitry 1687.
  • power source 1686 can comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry 1687. The battery can provide backup power should the external power source fail.
  • Other types of power sources, such as photovoltaic devices, can also be used.
  • network node 1660 can include additional components beyond those shown in Figure 16 that can be responsible for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein.
  • network node 1660 can include user interface equipment to allow and/or facilitate input of information into network node 1660 and to allow and/or facilitate output of information from network node 1660. This can allow and/or facilitate a user to perform diagnostic, maintenance, repair, and other administrative functions for network node 1660.
  • a wireless device (e.g., WD 1610)
  • a wireless device can be configured to transmit and/or receive information without direct human interaction.
  • a WD can be designed to transmit information to a network on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the network.
  • Examples of a WD include, but are not limited to, smart phones, mobile phones, cell phones, voice over IP (VoIP) phones, wireless local loop phones, desktop computers, personal digital assistants (PDAs), wireless cameras, gaming consoles or devices, music storage devices, playback appliances, wearable devices, wireless endpoints, mobile stations, tablets, laptops, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart devices, wireless customer-premise equipment (CPE), machine-type communication (MTC) devices, Internet-of-Things (IoT) devices, vehicle-mounted wireless terminal devices, etc.
  • a WD can support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X), and can in this case be referred to as a D2D communication device.
  • a WD can represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another WD and/or a network node.
  • the WD can in this case be a machine-to-machine (M2M) device, which can in a 3GPP context be referred to as an MTC device.
  • the WD can be a UE implementing the 3GPP Narrowband Internet of Things (NB-IoT) standard.
  • examples of such machines or devices are sensors, metering devices such as power meters, industrial machinery, home or personal appliances (e.g., refrigerators, televisions, etc.), and personal wearables (e.g., watches, fitness trackers, etc.).
  • a WD can represent a vehicle or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
  • a WD as described above can represent the endpoint of a wireless connection, in which case the device can be referred to as a wireless terminal. Furthermore, a WD as described above can be mobile, in which case it can also be referred to as a mobile device or a mobile terminal.
  • wireless device 1610 includes antenna 1611, interface 1614, processing circuitry 1620, device readable medium 1630, user interface equipment 1632, auxiliary equipment 1634, power source 1636 and power circuitry 1637.
  • WD 1610 can include multiple sets of one or more of the illustrated components for different wireless technologies supported by WD 1610, such as, for example, GSM, WCDMA, LTE, NR, WiFi, WiMAX, or Bluetooth wireless technologies, just to mention a few. These wireless technologies can be integrated into the same or different chips or set of chips as other components within WD 1610.
  • Antenna 1611 can include one or more antennas or antenna arrays, configured to send and/or receive wireless signals, and is connected to interface 1614.
  • antenna 1611 can be separate from WD 1610 and be connectable to WD 1610 through an interface or port.
  • Antenna 1611, interface 1614, and/or processing circuitry 1620 can be configured to perform any receiving or transmitting operations described herein as being performed by a WD. Any information, data and/or signals can be received from a network node and/or another WD.
  • radio front end circuitry and/or antenna 1611 can be considered an interface.
  • interface 1614 comprises radio front end circuitry 1612 and antenna 1611.
  • Radio front end circuitry 1612 comprises one or more filters 1618 and amplifiers 1616.
  • Radio front end circuitry 1612 is connected to antenna 1611 and processing circuitry 1620, and can be configured to condition signals communicated between antenna 1611 and processing circuitry 1620.
  • Radio front end circuitry 1612 can be coupled to or a part of antenna 1611.
  • WD 1610 may not include separate radio front end circuitry 1612; rather, processing circuitry 1620 can comprise radio front end circuitry and can be connected to antenna 1611.
  • some or all of RF transceiver circuitry 1622 can be considered a part of interface 1614.
  • Radio front end circuitry 1612 can receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry 1612 can convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 1618 and/or amplifiers 1616. The radio signal can then be transmitted via antenna 1611. Similarly, when receiving data, antenna 1611 can collect radio signals which are then converted into digital data by radio front end circuitry 1612. The digital data can be passed to processing circuitry 1620. In other embodiments, the interface can comprise different components and/or different combinations of components.
  • Processing circuitry 1620 can comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software, and/or encoded logic operable to provide, either alone or in conjunction with other WD 1610 components, such as device readable medium 1630, WD 1610 functionality.
  • processing circuitry 1620 can execute instructions stored in device readable medium 1630 or in memory within processing circuitry 1620 to provide the functionality disclosed herein.
  • processing circuitry 1620 includes one or more of RF transceiver circuitry 1622, baseband processing circuitry 1624, and application processing circuitry 1626.
  • the processing circuitry can comprise different components and/or different combinations of components.
  • processing circuitry 1620 of WD 1610 can comprise a SOC.
  • RF transceiver circuitry 1622, baseband processing circuitry 1624, and application processing circuitry 1626 can be on separate chips or sets of chips.
  • part or all of baseband processing circuitry 1624 and application processing circuitry 1626 can be combined into one chip or set of chips, and RF transceiver circuitry 1622 can be on a separate chip or set of chips.
  • part or all of RF transceiver circuitry 1622 and baseband processing circuitry 1624 can be on the same chip or set of chips, and application processing circuitry 1626 can be on a separate chip or set of chips.
  • part or all of RF transceiver circuitry 1622, baseband processing circuitry 1624, and application processing circuitry 1626 can be combined in the same chip or set of chips.
  • RF transceiver circuitry 1622 can be a part of interface 1614.
  • RF transceiver circuitry 1622 can condition RF signals for processing circuitry 1620.
  • processing circuitry 1620 executing instructions stored on device readable medium 1630, which in certain embodiments can be a computer- readable storage medium.
  • some or all of the functionality can be provided by processing circuitry 1620 without executing instructions stored on a separate or discrete device readable storage medium, such as in a hard-wired manner.
  • processing circuitry 1620 can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry 1620 alone or to other components of WD 1610, but are enjoyed by WD 1610 as a whole, and/or by end users and the wireless network generally.
  • Processing circuitry 1620 can be configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being performed by a WD. These operations, as performed by processing circuitry 1620, can include processing information obtained by processing circuitry 1620 by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored by WD 1610, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
  • Device readable medium 1630 can be operable to store a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry 1620.
  • Device readable medium 1630 can include computer memory (e.g., Random Access Memory (RAM) or Read Only Memory (ROM)), mass storage media (e.g., a hard disk), removable storage media (e.g., a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer executable memory devices that store information, data, and/or instructions that can be used by processing circuitry 1620.
  • processing circuitry 1620 and device readable medium 1630 can be considered to be integrated.
  • User interface equipment 1632 can include components that allow and/or facilitate a human user to interact with WD 1610. Such interaction can be of many forms, such as visual, audial, tactile, etc. User interface equipment 1632 can be operable to produce output to the user and to allow and/or facilitate the user to provide input to WD 1610. The type of interaction can vary depending on the type of user interface equipment 1632 installed in WD 1610. For example, if WD 1610 is a smart phone, the interaction can be via a touch screen; if WD 1610 is a smart meter, the interaction can be through a screen that provides usage (e.g., the number of gallons used) or a speaker that provides an audible alert (e.g., if smoke is detected).
  • User interface equipment 1632 can include input interfaces, devices and circuits, and output interfaces, devices and circuits. User interface equipment 1632 can be configured to allow and/or facilitate input of information into WD 1610, and is connected to processing circuitry 1620 to allow and/or facilitate processing circuitry 1620 to process the input information. User interface equipment 1632 can include, for example, a microphone, a proximity or other sensor, keys/buttons, a touch display, one or more cameras, a USB port, or other input circuitry. User interface equipment 1632 is also configured to allow and/or facilitate output of information from WD 1610, and to allow and/or facilitate processing circuitry 1620 to output information from WD 1610.
  • User interface equipment 1632 can include, for example, a speaker, a display, vibrating circuitry, a USB port, a headphone interface, or other output circuitry. Using one or more input and output interfaces, devices, and circuits, of user interface equipment 1632, WD 1610 can communicate with end users and/or the wireless network, and allow and/or facilitate them to benefit from the functionality described herein.
  • Auxiliary equipment 1634 is operable to provide more specific functionality which may not be generally performed by WDs. This can comprise specialized sensors for doing measurements for various purposes, interfaces for additional types of communication such as wired communications, etc. The inclusion and type of components of auxiliary equipment 1634 can vary depending on the embodiment and/or scenario.
  • Power source 1636 can, in some embodiments, be in the form of a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic devices, or power cells, can also be used.
  • WD 1610 can further comprise power circuitry 1637 for delivering power from power source 1636 to the various parts of WD 1610 which need power from power source 1636 to carry out any functionality described or indicated herein.
  • Power circuitry 1637 can in certain embodiments comprise power management circuitry.
  • Power circuitry 1637 can additionally or alternatively be operable to receive power from an external power source; in which case WD 1610 can be connectable to the external power source (such as an electricity outlet) via input circuitry or an interface such as an electrical power cable.
  • Power circuitry 1637 can also in certain embodiments be operable to deliver power from an external power source to power source 1636. This can be, for example, for the charging of power source 1636. Power circuitry 1637 can perform any converting or other modification to the power from power source 1636 to make it suitable for supply to the respective components of WD 1610.
  • Figure 17 illustrates one embodiment of a UE in accordance with various aspects described herein.
  • a user equipment or UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device.
• a UE can represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller).
  • a UE can represent a device that is not intended for sale to, or operation by, an end user but which can be associated with or operated for the benefit of a user (e.g., a smart power meter).
• UE 1700 can be any UE identified by the 3rd Generation Partnership Project (3GPP), including a NB-IoT UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.
• UE 1700, as illustrated in Figure 17, is one example of a WD configured for communication in accordance with one or more communication standards promulgated by the 3rd Generation Partnership Project (3GPP), such as 3GPP's GSM, UMTS, LTE, and/or 5G standards.
• the terms WD and UE can be used interchangeably. Accordingly, although Figure 17 illustrates a UE, the components discussed herein are equally applicable to a WD, and vice-versa.
• UE 1700 includes processing circuitry 1701 that is operatively coupled to input/output interface 1705, radio frequency (RF) interface 1709, network connection interface 1711, memory 1715 including random access memory (RAM) 1717, read-only memory (ROM) 1719, and storage medium 1721 or the like, communication subsystem 1731, power source 1713, and/or any other component, or any combination thereof.
  • Storage medium 1721 includes operating system 1723, application program 1725, and data 1727. In other embodiments, storage medium 1721 can include other similar types of information.
  • Certain UEs can utilize all of the components shown in Figure 17, or only a subset of the components. The level of integration between the components can vary from one UE to another UE. Further, certain UEs can contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
  • processing circuitry 1701 can be configured to process computer instructions and data.
• Processing circuitry 1701 can be configured to implement any sequential state machine operative to execute machine instructions stored as machine-readable computer programs in the memory, such as one or more hardware-implemented state machines (e.g., in discrete logic, FPGA, ASIC, etc.); programmable logic together with appropriate firmware; one or more stored-program, general-purpose processors, such as a microprocessor or Digital Signal Processor (DSP), together with appropriate software; or any combination of the above.
  • the processing circuitry 1701 can include two central processing units (CPUs). Data can be information in a form suitable for use by a computer.
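As an informal illustration of the sequential state machine concept attributed to processing circuitry 1701 above, the following Python sketch steps through a (state, event) transition table. The states, events, and table are invented purely for illustration and do not correspond to any 3GPP-defined behavior.

```python
# Toy sequential state machine: a stored transition table drives state
# changes in response to an ordered sequence of events, which is the kind
# of stored-program control the passage describes. All names are invented.

def run_state_machine(events, transitions, start="idle"):
    """Step through states per a (state, event) -> next_state table."""
    state = start
    trace = [state]
    for event in events:
        # Unknown (state, event) pairs leave the machine in its current state.
        state = transitions.get((state, event), state)
        trace.append(state)
    return trace

# Illustrative table: an idle machine starts decoding on "rx" and returns
# to idle when decoding is "done".
transitions = {
    ("idle", "rx"): "decoding",
    ("decoding", "done"): "idle",
}
trace = run_state_machine(["rx", "done"], transitions)
```

The trace records every visited state, which makes the sequential nature of the machine explicit: each output state depends only on the previous state and the current event.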
  • input/output interface 1705 can be configured to provide a communication interface to an input device, output device, or input and output device.
  • UE 1700 can be configured to use an output device via input/output interface 1705.
  • An output device can use the same type of interface port as an input device.
  • a USB port can be used to provide input to and output from UE 1700.
  • the output device can be a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof.
  • UE 1700 can be configured to use an input device via input/output interface 1705 to allow and/or facilitate a user to capture information into UE 1700.
• the input device can include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like.
  • the presence-sensitive display can include a capacitive or resistive touch sensor to sense input from a user.
  • a sensor can be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, another like sensor, or any combination thereof.
  • the input device can be an accelerometer, a magnetometer, a digital camera, a microphone, and an optical sensor.
  • RF interface 1709 can be configured to provide a communication interface to RF components such as a transmitter, a receiver, and an antenna.
• Network connection interface 1711 can be configured to provide a communication interface to network 1743a.
• Network 1743a can encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof.
• network 1743a can comprise a Wi-Fi network.
  • Network connection interface 1711 can be configured to include a receiver and a transmitter interface used to communicate with one or more other devices over a communication network according to one or more communication protocols, such as Ethernet, TCP/IP, SONET, ATM, or the like.
  • Network connection interface 1711 can implement receiver and transmitter functionality appropriate to the communication network links (e.g., optical, electrical, and the like). The transmitter and receiver functions can share circuit components, software or firmware, or alternatively can be implemented separately.
  • RAM 1717 can be configured to interface via bus 1702 to processing circuitry 1701 to provide storage or caching of data or computer instructions during the execution of software programs such as the operating system, application programs, and device drivers.
  • ROM 1719 can be configured to provide computer instructions or data to processing circuitry 1701.
• ROM 1719 can be configured to store invariant low-level system code or data for basic system functions such as basic input and output (I/O), startup, or reception of keystrokes from a keyboard that are stored in a non-volatile memory.
  • Storage medium 1721 can be configured to include memory such as RAM, ROM, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, floppy disks, hard disks, removable cartridges, or flash drives.
  • storage medium 1721 can be configured to include operating system 1723, application program 1725 such as a web browser application, a widget or gadget engine or another application, and data file 1727.
  • Storage medium 1721 can store, for use by UE 1700, any of a variety of various operating systems or combinations of operating systems.
• Storage medium 1721 can be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), floppy disk drive, flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as a subscriber identity module or a removable user identity (SIM/RUIM) module, other memory, or any combination thereof.
  • Storage medium 1721 can allow and/or facilitate UE 1700 to access computer-executable instructions, application programs or the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data.
• An article of manufacture, such as one utilizing a communication system, can be tangibly embodied in storage medium 1721, which can comprise a device readable medium.
• processing circuitry 1701 can be configured to communicate with network 1743b using communication subsystem 1731.
• Network 1743a and network 1743b can be the same network(s) or different network(s).
• Communication subsystem 1731 can be configured to include one or more transceivers used to communicate with network 1743b.
• communication subsystem 1731 can be configured to include one or more transceivers used to communicate with one or more remote transceivers of another device capable of wireless communication such as another WD, UE, or base station of a radio access network (RAN) according to one or more communication protocols, such as IEEE 802.11, CDMA, WCDMA, GSM, LTE, UTRAN, WiMax, or the like.
• Each transceiver can include transmitter 1733 and/or receiver 1735 to implement transmitter or receiver functionality, respectively, appropriate to the RAN links (e.g., frequency allocations and the like). Further, transmitter 1733 and receiver 1735 of each transceiver can share circuit components, software or firmware, or alternatively can be implemented separately.
  • the communication functions of communication subsystem 1731 can include data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof.
  • communication subsystem 1731 can include cellular communication, Wi-Fi communication, Bluetooth communication, and GPS communication.
• Network 1743b can encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof.
• network 1743b can be a cellular network, a Wi-Fi network, and/or a near-field network.
  • Power source 1713 can be configured to provide alternating current (AC) or direct current (DC) power to components of UE 1700.
  • communication subsystem 1731 can be configured to include any of the components described herein.
  • processing circuitry 1701 can be configured to communicate with any of such components over bus 1702.
  • any of such components can be represented by program instructions stored in memory that when executed by processing circuitry 1701 perform the corresponding functions described herein.
  • the functionality of any of such components can be partitioned between processing circuitry 1701 and communication subsystem 1731.
  • the non-computationally intensive functions of any of such components can be implemented in software or firmware and the computationally intensive functions can be implemented in hardware.
• Figure 18 is a schematic block diagram illustrating a virtualization environment 1800 in which functions implemented by some embodiments can be virtualized.
  • virtualizing means creating virtual versions of apparatuses or devices which can include virtualizing hardware platforms, storage devices and networking resources.
• virtualization can be applied to a node (e.g., a virtualized base station or a virtualized radio access node) or to a device (e.g., a UE, a wireless device or any other type of communication device) or components thereof and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components (e.g., via one or more applications, components, functions, virtual machines or containers executing on one or more physical processing nodes in one or more networks).
• some or all of the functions described herein can be implemented as virtual components executed by one or more virtual machines implemented in one or more virtual environments 1800 hosted by one or more of hardware nodes 1830. Further, in embodiments in which the virtual node is not a radio access node or does not require radio connectivity (e.g., a core network node), then the network node can be entirely virtualized.
  • the functions can be implemented by one or more applications 1820 (which can alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) operative to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
  • Applications 1820 are run in virtualization environment 1800 which provides hardware 1830 comprising processing circuitry 1860 and memory 1890.
  • Memory 1890 contains instructions 1895 executable by processing circuitry 1860 whereby application 1820 is operative to provide one or more of the features, benefits, and/or functions disclosed herein.
  • Virtualization environment 1800 comprises general-purpose or special-purpose network hardware devices 1830 comprising a set of one or more processors or processing circuitry 1860, which can be commercial off-the-shelf (COTS) processors, dedicated Application Specific Integrated Circuits (ASICs), or any other type of processing circuitry including digital or analog hardware components or special purpose processors.
  • Each hardware device can comprise memory 1890-1 which can be non-persistent memory for temporarily storing instructions 1895 or software executed by processing circuitry 1860.
  • Each hardware device can comprise one or more network interface controllers (NICs) 1870, also known as network interface cards, which include physical network interface 1880.
  • Each hardware device can also include non-transitory, persistent, machine-readable storage media 1890-2 having stored therein software 1895 and/or instructions executable by processing circuitry 1860.
  • Software 1895 can include any type of software including software for instantiating one or more virtualization layers 1850 (also referred to as hypervisors), software to execute virtual machines 1840 as well as software allowing it to execute functions, features and/or benefits described in relation with some embodiments described herein.
  • Virtual machines 1840 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and can be run by a corresponding virtualization layer 1850 or hypervisor. Different embodiments of the instance of virtual appliance 1820 can be implemented on one or more of virtual machines 1840, and the implementations can be made in different ways.
  • processing circuitry 1860 executes software 1895 to instantiate the hypervisor or virtualization layer 1850, which can sometimes be referred to as a virtual machine monitor (VMM).
  • Virtualization layer 1850 can present a virtual operating platform that appears like networking hardware to virtual machine 1840.
  • hardware 1830 can be a standalone network node with generic or specific components.
  • Hardware 1830 can comprise antenna 18225 and can implement some functions via virtualization.
• hardware 1830 can be part of a larger cluster of hardware (e.g., in a data center or customer premise equipment (CPE)) where many hardware nodes work together and are managed via management and orchestration (MANO) 18100, which, among others, oversees lifecycle management of applications 1820.
• Network function virtualization (NFV) can be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
  • virtual machine 1840 can be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine.
• Each of virtual machines 1840, and that part of hardware 1830 that executes that virtual machine (be it hardware dedicated to that virtual machine and/or hardware shared by that virtual machine with others of the virtual machines 1840), forms a separate virtual network element (VNE).
  • one or more radio units 18200 that each include one or more transmitters 18220 and one or more receivers 18210 can be coupled to one or more antennas 18225.
  • Radio units 18200 can communicate directly with hardware nodes 1830 via one or more appropriate network interfaces and can be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station.
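To make the relationship between hardware nodes 1830, virtual machines 1840, and MANO 18100 described above more concrete, the following Python sketch models a manager placing virtual appliances on hardware nodes. This is a purely illustrative model under stated assumptions; the class names, the capacity model, and the least-loaded placement policy are all invented and do not reflect any ETSI NFV or 3GPP API.

```python
# Illustrative sketch of the virtualization model of Figure 18: hardware
# nodes host virtual machines via a virtualization layer, while a MANO-like
# manager oversees lifecycle management (instantiation/termination) of
# virtual appliances. All names and policies here are assumptions.

class VirtualMachine:
    def __init__(self, name, application):
        self.name = name
        self.application = application  # e.g. a virtualized network function
        self.running = False

class HardwareNode:
    """Models a hardware node 1830 with a fixed VM capacity."""
    def __init__(self, node_id, vm_capacity):
        self.node_id = node_id
        self.vm_capacity = vm_capacity
        self.vms = []

    def instantiate(self, vm):
        if len(self.vms) >= self.vm_capacity:
            raise RuntimeError("no capacity on node " + self.node_id)
        vm.running = True
        self.vms.append(vm)

    def terminate(self, vm):
        vm.running = False
        self.vms.remove(vm)

class Mano:
    """Models MANO 18100: places applications across hardware nodes."""
    def __init__(self, nodes):
        self.nodes = nodes

    def deploy(self, application):
        # Trivial placement policy: pick the least-loaded node.
        node = min(self.nodes, key=lambda n: len(n.vms))
        vm = VirtualMachine("vm-" + application, application)
        node.instantiate(vm)
        return node, vm

nodes = [HardwareNode("hw-1", 2), HardwareNode("hw-2", 2)]
mano = Mano(nodes)
node, vm = mano.deploy("virtual-gNB")
```

A fully virtualized core network node, as mentioned above, would correspond here to an application deployed with no associated radio unit at all.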
• a communication system includes telecommunication network 1910, such as a 3GPP-type cellular network, which comprises access network 1911, such as a radio access network, and core network 1914.
• Access network 1911 comprises a plurality of base stations 1912a, 1912b, 1912c, such as NBs, eNBs, gNBs or other types of wireless access points, each defining a corresponding coverage area 1913a, 1913b, 1913c.
• Each base station 1912a, 1912b, 1912c is connectable to core network 1914 over a wired or wireless connection 1915.
• a first UE 1991 located in coverage area 1913c can be configured to wirelessly connect to, or be paged by, the corresponding base station 1912c.
• a second UE 1992 in coverage area 1913a is wirelessly connectable to the corresponding base station 1912a. While a plurality of UEs 1991, 1992 are illustrated in this example, the disclosed embodiments are equally applicable to a situation where a sole UE is in the coverage area or where a sole UE is connecting to the corresponding base station.
  • Telecommunication network 1910 is itself connected to host computer 1930, which can be embodied in the hardware and/or software of a standalone server, a cloud-implemented server, a distributed server or as processing resources in a server farm.
  • Host computer 1930 can be under the ownership or control of a service provider or can be operated by the service provider or on behalf of the service provider.
  • Connections 1921 and 1922 between telecommunication network 1910 and host computer 1930 can extend directly from core network 1914 to host computer 1930 or can go via an optional intermediate network 1920.
  • Intermediate network 1920 can be one of, or a combination of more than one of, a public, private or hosted network; intermediate network 1920, if any, can be a backbone network or the Internet; in particular, intermediate network 1920 can comprise two or more sub-networks (not shown).
  • the communication system of Figure 19 as a whole enables connectivity between the connected UEs 1991, 1992 and host computer 1930.
  • the connectivity can be described as an over-the-top (OTT) connection 1950.
  • Host computer 1930 and the connected UEs 1991, 1992 are configured to communicate data and/or signaling via OTT connection 1950, using access network 1911, core network 1914, any intermediate network 1920 and possible further infrastructure (not shown) as intermediaries.
  • OTT connection 1950 can be transparent in the sense that the participating communication devices through which OTT connection 1950 passes are unaware of routing of uplink and downlink communications.
• base station 1912 may not or need not be informed about the past routing of an incoming downlink communication with data originating from host computer 1930 to be forwarded (e.g., handed over) to a connected UE 1991.
  • base station 1912 need not be aware of the future routing of an outgoing uplink communication originating from the UE 1991 towards the host computer 1930.
  • host computer 2010 comprises hardware 2015 including communication interface 2016 configured to set up and maintain a wired or wireless connection with an interface of a different communication device of communication system 2000.
  • Host computer 2010 further comprises processing circuitry 2018, which can have storage and/or processing capabilities.
  • processing circuitry 2018 can comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
  • Host computer 2010 further comprises software 2011, which is stored in or accessible by host computer 2010 and executable by processing circuitry 2018.
  • Software 2011 includes host application 2012.
• Host application 2012 can be operable to provide a service to a remote user, such as UE 2030 connecting via OTT connection 2050 terminating at UE 2030 and host computer 2010. In providing the service to the remote user, host application 2012 can provide user data which is transmitted using OTT connection 2050.
• Communication system 2000 can also include base station 2020 provided in a telecommunication system and comprising hardware 2025 enabling it to communicate with host computer 2010 and with UE 2030.
• Hardware 2025 can include communication interface 2026, which can be configured to facilitate connection 2060 to host computer 2010. Connection 2060 can be direct, or it can pass through a core network (not shown in Figure 20) of the telecommunication system and/or through one or more intermediate networks outside the telecommunication system.
  • hardware 2025 of base station 2020 can also include processing circuitry 2028, which can comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
  • Base station 2020 further has software 2021 stored internally or accessible via an external connection.
• Communication system 2000 can also include UE 2030 already referred to. Its hardware 2035 can include radio interface 2037 configured to set up and maintain wireless connection 2070 with a base station serving a coverage area in which UE 2030 is currently located. Hardware 2035 of UE 2030 can also include processing circuitry 2038, which can comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
  • UE 2030 further comprises software 2031, which is stored in or accessible by UE 2030 and executable by processing circuitry 2038.
  • Software 2031 includes client application 2032. Client application 2032 can be operable to provide a service to a human or non- human user via UE 2030, with the support of host computer 2010.
  • an executing host application 2012 can communicate with the executing client application 2032 via OTT connection 2050 terminating at UE 2030 and host computer 2010.
  • client application 2032 can receive request data from host application 2012 and provide user data in response to the request data.
  • OTT connection 2050 can transfer both the request data and the user data.
  • Client application 2032 can interact with the user to generate the user data that it provides.
• Host computer 2010, base station 2020, and UE 2030 illustrated in Figure 20 can be similar or identical to host computer 1930, one of base stations 1912a, 1912b, 1912c, and one of UEs 1991, 1992 of Figure 19, respectively. This is to say, the inner workings of these entities can be as shown in Figure 20 and independently, the surrounding network topology can be that of Figure 19.
  • OTT connection 2050 has been drawn abstractly to illustrate the communication between host computer 2010 and UE 2030 via base station 2020, without explicit reference to any intermediary devices and the precise routing of messages via these devices.
• Network infrastructure can determine the routing, which it can be configured to hide from UE 2030 or from the service provider operating host computer 2010, or both. While OTT connection 2050 is active, the network infrastructure can further take decisions by which it dynamically changes the routing (e.g., on the basis of load balancing consideration or reconfiguration of the network).
  • Wireless connection 2070 between UE 2030 and base station 2020 is in accordance with the teachings of the embodiments described throughout this disclosure.
  • One or more of the various embodiments improve the performance of OTT services provided to UE 2030 using OTT connection 2050, in which wireless connection 2070 forms the last segment.
  • the exemplary embodiments disclosed herein can improve flexibility for the network to monitor end-to-end quality-of-service (QoS) of data flows, including their corresponding radio bearers, associated with data sessions between a user equipment (UE) and another entity, such as an OTT data application or service external to the 5G network.
  • a measurement procedure can be provided for the purpose of monitoring data rate, latency and other network operational aspects on which the one or more embodiments improve.
  • the measurement procedure and/or the network functionality for reconfiguring OTT connection 2050 can be implemented in software 2011 and hardware 2015 of host computer 2010 or in software 2031 and hardware 2035 of UE 2030, or both.
  • sensors (not shown) can be deployed in or in association with communication devices through which OTT connection 2050 passes; the sensors can participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software 2011, 2031 can compute or estimate the monitored quantities.
• the reconfiguring of OTT connection 2050 can include message format, retransmission settings, preferred routing, etc.; the reconfiguring need not affect base station 2020, and it can be unknown or imperceptible to base station 2020. Such procedures and functionalities can be known and practiced in the art.
• measurements can involve proprietary UE signaling facilitating host computer 2010's measurements of throughput, propagation times, latency and the like.
• the measurements can be implemented in that software 2011 and 2031 causes messages to be transmitted, in particular empty or 'dummy' messages, using OTT connection 2050 while it monitors propagation times, errors etc.
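The dummy-message measurement idea above can be sketched as follows. This is a hypothetical illustration only: the transport is a stand-in callable, since the actual UE signaling is described as proprietary, and the function and parameter names are invented.

```python
# Hypothetical sketch: software sends empty "dummy" messages over the OTT
# connection and records the round-trip time of each as a proxy for
# propagation delay. `send_dummy` stands in for the real transport.
import time

def measure_rtt(send_dummy, samples=5):
    """Return per-sample round-trip times (seconds) for dummy messages."""
    rtts = []
    for _ in range(samples):
        t0 = time.monotonic()
        send_dummy(b"")  # empty payload: only the timing matters
        rtts.append(time.monotonic() - t0)
    return rtts

# Example with a loopback transport simulating a ~1 ms one-way-and-back delay:
def loopback(payload):
    time.sleep(0.001)

rtts = measure_rtt(loopback)
avg_rtt = sum(rtts) / len(rtts)
```

Averaging over several samples, as done here, smooths out scheduling jitter in the individual measurements; errors could be monitored analogously by counting failed sends.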
• Figure 21 is a flowchart illustrating an exemplary method and/or procedure implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which, in some exemplary embodiments, can be those described with reference to Figures 19 and 20. For simplicity of the present disclosure, only drawing references to Figure 21 will be included in this section.
  • the host computer provides user data.
• In substep 2111 (which can be optional) of step 2110, the host computer provides the user data by executing a host application.
  • the host computer initiates a transmission carrying the user data to the UE.
• In step 2130, the base station transmits to the UE the user data which was carried in the transmission that the host computer initiated, in accordance with the teachings of the embodiments described throughout this disclosure.
• In step 2140, the UE executes a client application associated with the host application executed by the host computer.
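The downlink flow of Figure 21 can be sketched end to end with the three entities reduced to plain objects. The class and method names below are illustrative assumptions mapping loosely onto the numbered steps; they are not part of the disclosure.

```python
# Illustrative sketch of Figure 21's downlink flow: the host computer
# provides user data and initiates a transmission, the base station relays
# it, and the UE runs a client application on the delivered data.

class Host:
    def provide_user_data(self):                   # step 2110 / substep 2111
        return {"payload": "user-data"}

    def initiate_transmission(self, bs, ue, data):  # step 2120
        bs.transmit(ue, data)

class BaseStation:
    def transmit(self, ue, data):                  # step 2130
        ue.deliver(data)

class UE:
    def __init__(self):
        self.received = None

    def deliver(self, data):
        self.received = data
        self.run_client_app(data)

    def run_client_app(self, data):                # step 2140
        self.client_output = "handled:" + data["payload"]

host, bs, ue = Host(), BaseStation(), UE()
host.initiate_transmission(bs, ue, host.provide_user_data())
```

Note that, matching the OTT transparency described earlier, the `BaseStation` object simply forwards the data without inspecting where it came from.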
• Figure 22 is a flowchart illustrating an exemplary method and/or procedure implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which can be those described with reference to Figures 19 and 20. For simplicity of the present disclosure, only drawing references to Figure 22 will be included in this section.
  • the host computer provides user data.
  • the host computer provides the user data by executing a host application.
  • the host computer initiates a transmission carrying the user data to the UE.
  • the transmission can pass via the base station, in accordance with the teachings of the embodiments described throughout this disclosure.
• In step 2230 (which can be optional), the UE receives the user data carried in the transmission.
• Figure 23 is a flowchart illustrating an exemplary method and/or procedure implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which can be those described with reference to Figures 19 and 20. For simplicity of the present disclosure, only drawing references to Figure 23 will be included in this section.
• In step 2310 (which can be optional), the UE receives input data provided by the host computer. Additionally, or alternatively, in step 2320, the UE provides user data.
• In substep 2321 (which can be optional) of step 2320, the UE provides the user data by executing a client application.
• In substep 2311 (which can be optional) of step 2310, the UE executes a client application which provides the user data in reaction to the received input data provided by the host computer.
  • the executed client application can further consider user input received from the user.
  • the UE initiates, in substep 2330 (which can be optional), transmission of the user data to the host computer.
• In step 2340 of the method, the host computer receives the user data transmitted from the UE, in accordance with the teachings of the embodiments described throughout this disclosure.
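The uplink flow of Figure 23 is the reverse direction: the client application produces user data in reaction to input from the host, and the UE initiates transmission back to the host computer. The sketch below is again a hypothetical model with invented names keyed to the numbered steps.

```python
# Illustrative sketch of Figure 23's uplink flow: the UE's client
# application derives user data from host-provided input data, then the UE
# initiates transmission of that user data to the host computer.

class ClientApp:
    def react(self, input_data):            # substep 2311: derive user data
        return {"report": input_data["request"] + "-done"}

class UplinkUE:
    def __init__(self, app):
        self.app = app
        self.user_data = None

    def receive_input(self, input_data):    # step 2310
        self.user_data = self.app.react(input_data)

    def initiate_transmission(self, host):  # substep 2330
        host.receive(self.user_data)

class UplinkHost:
    def __init__(self):
        self.received = None

    def receive(self, data):                # step 2340
        self.received = data

ue = UplinkUE(ClientApp())
host = UplinkHost()
ue.receive_input({"request": "measure"})
ue.initiate_transmission(host)
```

The alternative path of step 2320, where the client application generates user data from user input rather than host input, would differ only in what feeds `react`.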
• Figure 24 is a flowchart illustrating an exemplary method and/or procedure implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which can be those described with reference to Figures 19 and 20. For simplicity of the present disclosure, only drawing references to Figure 24 will be included in this section.
  • the base station receives user data from the UE.
  • the base station initiates transmission of the received user data to the host computer.
• In step 2430 (which can be optional), the host computer receives the user data carried in the transmission initiated by the base station.
• the term unit can have conventional meaning in the field of electronics, electrical devices and/or electronic devices and can include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those that are described herein.
  • any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses.
  • Each virtual apparatus may comprise a number of these functional units.
  • These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include Digital Signal Processors (DSPs), special-purpose digital logic, and the like.
  • the processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as Read Only Memory (ROM), Random Access Memory (RAM), cache memory, flash memory devices, optical storage devices, etc.
  • Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein.
  • the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.
  • device and/or apparatus can be represented by a semiconductor chip, a chipset, or a (hardware) module comprising such chip or chipset; this, however, does not exclude the possibility that a functionality of a device or apparatus, instead of being hardware implemented, be implemented as a software module such as a computer program or a computer program product comprising executable software code portions for execution or being run on a processor.
  • functionality of a device or apparatus can be implemented by any combination of hardware and software.
  • a device or apparatus can also be regarded as an assembly of multiple devices and/or apparatuses, whether functionally in cooperation with or independently of each other.
  • devices and apparatuses can be implemented in a distributed fashion throughout a system, so long as the functionality of the device or apparatus is preserved. Such and similar principles are considered as known to a skilled person.
  • Exemplary embodiments of the present disclosure include, but are not limited to, the following enumerated examples:
  • 1. A method for managing a link failure in an integrated access backhaul (IAB) network, the method being performed by an intermediate node in the IAB network and comprising: receiving, from a first upstream node in the IAB network, a first indication of failure of a first backhaul link in a first network path that includes the intermediate node, the first upstream node, and a destination node for uplink (UL) data in the IAB network;
  • in response to the first indication, performing one or more first actions with respect to transmission of UL data towards the first upstream node; and based on information associated with the first indication, selectively forwarding the first indication to one or more downstream nodes in the IAB network.
  • selectively forwarding the first indication comprises: if the depth value is non-zero, decrementing the depth value and forwarding the first indication, including the decremented depth value, to the one or more downstream nodes; and
  • selectively forwarding the first indication further comprises performing one of the following operations if the depth value is not included with the first indication:
  • identifiers of the one or more nodes comprises identifiers of bearers associated with the one or more nodes.
  • identifiers of the one or more nodes comprises adaptation layer addresses associated with the one or more nodes.
  • the adaptation layer addresses include a first address associated with the intermediate node
  • selectively forwarding the first indication further comprises:
  • adaptation layer addresses comprising the first indication; and forwarding the modified first indication only to the one or more downstream nodes.
  • the one or more downstream nodes comprise one or more intermediate nodes and one or more user equipment (UEs).
  • a method for managing a link failure in an integrated access backhaul (IAB) network, the method being performed by an intermediate node in the IAB network and comprising: detecting a failure of a first backhaul link between the intermediate node and a first upstream node in the IAB network, wherein the first backhaul link is part of a first network path that includes the intermediate node, the first upstream node, a plurality of downstream nodes, and a destination node for uplink (UL) data;
  • the second network path comprises the intermediate node, the plurality of downstream nodes, and the destination node;
  • the second network path comprises the first network path
  • the second indication indicates that the first backhaul link in the first network path has been restored.
  • the second network path further comprises a second upstream node but not the first upstream node
  • the second indication indicates that the second network path has been established to replace the first network path.
  • 25. The method of any of embodiments 20-24, wherein the first indication further includes a depth value that identifies a number of downstream hops in the IAB network for forwarding the first indication.
  • identifiers of the one or more nodes comprises identifiers of bearers associated with the one or more nodes.
  • a node in an integrated access backhaul (IAB) network configured to manage a link failure in the IAB network, the node comprising:
  • processing circuitry operably coupled to the radio transceiver circuitry and
  • a non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry comprising a node in an integrated access backhaul (IAB) network, configure the node to perform operations corresponding to any of the methods of embodiments 1-29.
  • a communication system including a host computer, the host computer comprising: a. processing circuitry configured to provide user data; and
  • a communication interface configured to forward the user data to a cellular network for transmission to a user equipment (UE) through a core network (CN) and a radio access network (RAN);
  • the RAN comprises first and second nodes of an integrated access backhaul (IAB) network;
  • the first node comprises a communication transceiver and processing circuitry configured to perform operations corresponding to any of the methods of embodiments 1-19;
  • the second node comprises a communication transceiver and processing circuitry configured to perform operations corresponding to any of the methods of embodiments 20-29.
  • the processing circuitry of the host computer is configured to execute a host application, thereby providing the user data
  • the UE comprises processing circuitry configured to execute a client application
  • a. at the host computer providing user data
  • b. at the host computer initiating a transmission carrying the user data to the UE via a cellular network comprising an integrated access backhaul (IAB) network
  • the method further comprising, at the UE, executing a client application associated with the host application.
  • a communication system including a host computer comprising a communication interface configured to receive user data originating from a transmission from a user equipment (UE) to a base station via an integrated access backhaul (IAB) radio network, wherein: a. the IAB network comprises first and second nodes; b. the first node comprises a communication interface and processing circuitry configured to perform operations corresponding to any of the methods of embodiments 1-19; and
  • the second node comprises a communication interface and processing circuitry configured to perform operations corresponding to any of the methods of embodiments 20-29.
  • the processing circuitry of the host computer is configured to execute a host application
  • the UE is configured to execute a client application associated with the host application, thereby providing the user data to be received by the host computer.

Abstract

Embodiments include methods for managing a link failure in an integrated access backhaul (IAB) network, such as performed by an intermediate node. Such embodiments include receiving, from a first upstream node in the IAB network, a first indication of failure of a first backhaul link in a first network path that includes the intermediate node, the first upstream node, and a destination node for uplink data in the IAB network. Embodiments also include, in response to the first indication, performing one or more first actions with respect to transmission of UL data towards the first upstream node. Embodiments also include, based on information associated with the first indication, selectively forwarding the first indication (e.g., based on content of the first indication) to one or more downstream nodes in the IAB network. Other embodiments include complementary methods as well as network nodes configured to perform any of the methods.

Description

METHODS FOR HANDLING LINK FAILURES IN INTEGRATED ACCESS
BACKHAUL (IAB) NETWORKS
TECHNICAL FIELD
The present application relates generally to the field of wireless communication networks, and more specifically to integrated access backhaul (IAB) networks in which the available wireless communication resources are shared between user access to the network and backhaul of user traffic within the network (e.g., to/from a core network).
INTRODUCTION
Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods and/or procedures disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein can be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments can apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description.
Figure 1 illustrates a high-level view of a fifth-generation (5G) wireless network architecture, consisting of a Next Generation RAN (NG-RAN) 199 and a 5G Core (5GC) 198. NG-RAN 199 can include one or more gNodeB’s (gNBs) connected to the 5GC via one or more NG interfaces, such as gNBs 100, 150 connected via interfaces 102, 152, respectively. More specifically, gNBs 100, 150 can be connected to one or more Access and Mobility Management Functions (AMF) in the 5GC 198 via respective NG-C interfaces. Similarly, gNBs 100, 150 can be connected to one or more User Plane Functions (UPFs) in 5GC 198 via respective NG-U interfaces.
Although not shown, in some deployments 5GC 198 can be replaced by an Evolved Packet Core (EPC), which conventionally has been used together with a Long-Term Evolution (LTE) Evolved UMTS RAN (E-UTRAN). In such deployments, gNBs 100, 150 can connect to one or more Mobility Management Entities (MMEs) in EPC 198 via respective S1-C interfaces. Similarly, gNBs 100, 150 can connect to one or more Serving Gateways (SGWs) in the EPC via respective S1-U interfaces.
In addition, the gNBs can be connected to each other via one or more Xn interfaces, such as Xn interface 140 between gNBs 100 and 150. The radio technology for the NG-RAN is often referred to as “New Radio” (NR). With respect to the NR interface to UEs, each of the gNBs can support frequency division duplexing (FDD), time division duplexing (TDD), or a combination thereof.
NG-RAN 199 is layered into a Radio Network Layer (RNL) and a Transport Network Layer (TNL). The NG-RAN architecture, i.e., the NG-RAN logical nodes and interfaces between them, is defined as part of the RNL. For each NG-RAN interface (e.g., NG, Xn, F1), the related TNL protocol and the functionality are specified. The TNL provides services for user plane transport and signaling transport. In some exemplary configurations, each gNB is connected to all 5GC nodes within an “AMF Region,” which is defined in 3GPP TS 23.501 (v15.6.0). If security protection for CP and UP data on the TNL of NG-RAN interfaces is supported, NDS/IP (e.g., as defined in 3GPP TS 33.401 v15.4.0) shall be applied.
The NG-RAN logical nodes shown in Figure 1 (and described in 3GPP TS 38.401 v15.2.0 and 3GPP TR 38.801 v14.0.0) include a Central Unit (CU or gNB-CU) and one or more Distributed Units (DU or gNB-DU). For example, gNB 100 includes gNB-CU 110 and gNB-DUs 120 and 130. CUs (e.g., gNB-CU 110) are logical nodes that host higher-layer protocols and perform various gNB functions such as controlling the operation of DUs. A DU (e.g., gNB-DUs 120, 130) is a decentralized logical node that hosts lower-layer protocols and can include, depending on the functional split option, various subsets of the gNB functions. As such, each of the CUs and DUs can include various circuitry needed to perform their respective functions, including processing circuitry, transceiver circuitry (e.g., for communication), and power supply circuitry. Moreover, the terms “central unit” and “centralized unit” are used interchangeably herein, as are the terms “distributed unit” and “decentralized unit.”
A gNB-CU connects to one or more gNB-DUs over respective F1 logical interfaces, such as interfaces 122 and 132 shown in Figure 1. However, a gNB-DU can be connected to only a single gNB-CU. The gNB-CU and connected gNB-DU(s) are only visible to other gNBs and the 5GC as a gNB. In other words, the F1 interface is not visible beyond the gNB-CU. Furthermore, the F1 interface between the gNB-CU and gNB-DU is specified and/or based on the following general principles:
• F1 is an open interface;
• F1 supports the exchange of signalling information between respective endpoints, as well as data transmission to the respective endpoints;
• from a logical standpoint, F1 is a point-to-point interface between the endpoints (even in the absence of a physical direct connection between the endpoints);
• F1 supports control plane and user plane separation into respective F1-AP protocol and F1-U protocol (also referred to as NR User Plane Protocol), such that a gNB-CU may also be separated in CP and UP;
• F1 separates Radio Network Layer (RNL) and Transport Network Layer (TNL);
• F1 enables exchange of user-equipment (UE) associated information and non-UE associated information;
• F1 is defined to be future-proof with respect to new requirements, services, and functions;
• A gNB terminates X2, Xn, NG and S1-U interfaces and, for the F1 interface between DU and CU, utilizes the F1-AP protocol that is defined in 3GPP TS 38.473.
In addition, the F1-U protocol is used to convey control information related to the user data flow management of data radio bearers, as defined in 3GPP TS 38.425. The F1-U protocol data is conveyed by the GTP-U protocol, specifically, by the “RAN Container” GTP-U extension header as defined in 3GPP TS 29.281 (v15.2.0). In other words, the GTP-U protocol over user datagram protocol (UDP) over IP carries data streams on the F1 interface. A GTP-U “tunnel” between two nodes is identified in each node by a tunnel endpoint identifier (TEID), an IP address, and a UDP port number. A GTP-U tunnel is necessary to enable forwarding packets between GTP-U entities.
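As an illustration of this tunnel identification, a GTP-U endpoint can be modeled as a small record. This is only a sketch with hypothetical field names (the actual header layout is defined in 3GPP TS 29.281); 2152 is the registered GTP-U UDP port:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GtpUTunnelEndpoint:
    """One end of a GTP-U tunnel, as identified locally in each node."""
    teid: int        # Tunnel Endpoint Identifier assigned by this node
    ip_address: str  # IP address at which the node receives GTP-U packets
    udp_port: int    # UDP port (2152 is the registered GTP-U port)

# Hypothetical example: a donor-CU endpoint for one F1-U data stream
cu_endpoint = GtpUTunnelEndpoint(teid=0x1A2B3C, ip_address="10.0.0.1", udp_port=2152)
```

A tunnel between two nodes then corresponds to a pair of such endpoints, one per node.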
In addition, a CU can host protocols such as RRC and PDCP, while a DU can host protocols such as RLC, MAC and PHY. Other variants of protocol distributions between CU and DU can exist, however, such as hosting the RRC, PDCP and part of the RLC protocol in the CU (e.g., the Automatic Retransmission Request (ARQ) function), while hosting the remaining parts of the RLC protocol in the DU, together with MAC and PHY. In some exemplary embodiments, the CU can host RRC and PDCP, where PDCP is assumed to handle both UP traffic and CP traffic. Nevertheless, other exemplary embodiments may utilize other protocol splits, hosting certain protocols in the CU and certain others in the DU. Exemplary embodiments can also locate centralized control plane protocols (e.g., PDCP-C and RRC) in a different CU with respect to the centralized user plane protocols (e.g., PDCP-U).
It has also been agreed in the 3GPP RAN3 Working Group (WG) to support a separation of the gNB-CU into a CU-CP function (including RRC and PDCP for signaling radio bearers) and a CU-UP function (including PDCP for the user plane), with the open E1 interface between them (see 3GPP TS 38.463 v15.0.0). The CU-CP and CU-UP parts communicate with each other using the E1-AP protocol over the E1 interface. The CU-CP/UP separation is illustrated in Figure 2. Three deployment scenarios for the split gNB architecture shown in Figure 2 are defined in 3GPP TR 38.806 (v15.0.0):
• Scenario 1: CU-CP and CU-UP centralized;
• Scenario 2: CU-CP distributed and CU-UP centralized;
• Scenario 3: CU-CP centralized and CU-UP distributed.
Densification via the deployment of more and more base stations (e.g., macro or micro base stations) is one of the mechanisms that can be employed to satisfy the increasing demand for bandwidth and/or capacity in mobile networks, which is mainly driven by the increasing use of video streaming services. Due to the availability of more spectrum in the millimeter wave (mmw) band, deploying small cells that operate in this band is an attractive deployment option for these purposes. However, the normal approach of connecting the small cells to the operator’s backhaul network with optical fiber can end up being very expensive and impractical. Employing wireless links for connecting the small cells to the operator’s network is a cheaper and more practical alternative. One such approach is an integrated access backhaul (IAB) network, where the operator can utilize part of the radio resources for the backhaul link.
IAB was studied earlier in 3GPP in the scope of Long Term Evolution (LTE) Rel-10. In that work, an architecture was adopted where a Relay Node (RN) has the functionality of an LTE eNB and UE modem. The RN is connected to a donor eNB which has an S1/X2 proxy functionality hiding the RN from the rest of the network. That architecture enabled the Donor eNB to also be aware of the UEs behind the RN and hide any UE mobility between the Donor eNB and Relay Node(s) on the same Donor eNB from the CN. During the Rel-10 study, other architectures were also considered including, e.g., those where the RNs are more transparent to the Donor gNB and allocated a separate stand-alone P/S-GW node.
For 5G/NR, similar options utilizing IAB can also be considered. One difference compared to LTE is the gNB-CU/DU split described above, which separates time critical RLC/MAC/PHY protocols from less time critical RRC/PDCP protocols. It is anticipated that a similar split could also be applied for the IAB case. Other IAB-related differences anticipated in NR as compared to LTE are the support of multiple hops and the support of redundant paths.
Figure 3 shows a reference diagram for an IAB network in standalone mode, as further explained in 3GPP TR 38.874 (v0.2.1). The IAB network shown in Figure 3 includes one IAB-donor 340 and multiple IAB-nodes 311-315, all of which can be part of a radio access network (RAN) such as an NG-RAN. IAB donor 340 includes DUs 321, 322 connected to a CU, which is represented by functions CU-CP 331 and CU-UP 332. IAB donor 340 can communicate with core network (CN) 350 via the CU functionality shown.
Each of the IAB nodes 311-315 connects to the IAB-donor via one or more wireless backhaul links (also referred to herein as “hops”). More specifically, the Mobile-Termination (MT) function of each IAB-node 311-315 terminates the radio interface layers of the wireless backhaul towards a corresponding “upstream” (or “northbound”) DU function. This MT functionality is similar to functionality that enables UEs to access the IAB network and, in fact, has been specified by 3GPP as part of the Mobile Equipment (ME).
In the context of Figure 3, upstream DUs can include either DU 321 or 322 of IAB donor 340 and, in some cases, a DU function of an intermediate IAB node that is “downstream” (or “southbound”) from IAB donor 340. As a more specific example, IAB-node 314 is downstream from IAB-node 312 and DU 321, IAB-node 312 is upstream from IAB-node 314 but downstream from DU 321, and DU 321 is upstream from IAB-nodes 312 and 314. The DU functionality of IAB nodes 311-315 also terminates the radio interface layers toward UEs (e.g., for network access via the DU) and other downstream IAB nodes.
As shown in Figure 3, IAB-donor 340 can be treated as a single logical node that comprises a set of functions such as gNB-DUs 321-322, gNB-CU-CP 331, gNB-CU-UP 332, and possibly other functions. In some deployments, the IAB-donor can be split according to these functions, which can all be either co-located or non-co-located as allowed by the 3GPP NG-RAN architecture. Also, some of the functions presently associated with the IAB-donor can be moved outside of the IAB-donor if such functions do not perform IAB-specific tasks.
Each IAB-node DU connects to the IAB-donor CU using a modified form of F1, which is referred to as F1*. The user-plane portion of F1* (referred to as “F1*-U”) runs over RLC channels on the wireless backhaul between the MT on the serving IAB-node and the DU on the IAB donor. In addition, an adaptation layer is included to hold routing information, thereby enabling hop-by-hop forwarding by IAB nodes. In some sense, the adaptation layer replaces the IP functionality of the standard F1 stack. F1*-U may carry a GTP-U header for the end-to-end association between CU and DU (e.g., IAB-node DU). In a further enhancement, information carried inside the GTP-U header can be included into the adaptation layer. Furthermore, in various alternatives, the adaptation layer for IAB can be inserted either below or above the RLC layer. Optimizations to the RLC layer itself are also possible, such as applying ARQ only on the end-to-end connection (i.e., between the donor DU and the IAB node MT) rather than hop-by-hop along access and backhaul links (e.g., between a downstream IAB node MT and an upstream IAB node DU).
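The hop-by-hop forwarding enabled by the adaptation layer can be sketched as a simple lookup: each node holds a routing table from destination adaptation-layer address to next-hop node. All names here are hypothetical (the actual adaptation-layer headers and addressing are defined by 3GPP, not reproduced here):

```python
def forward(packet: dict, routing_table: dict, my_address: str):
    """Return the next-hop node for a packet, or None if it terminates here."""
    dest = packet["adapt_dest"]         # destination address from the adaptation header
    if dest == my_address:
        return None                     # deliver locally (e.g., to the DU/MT function)
    return routing_table.get(dest)      # next upstream/downstream node, if known

# Hypothetical table at IAB-node 315: UL traffic to the donor DU goes via IAB-node 314
routing_table = {"donor-DU": "IAB-314"}
next_hop = forward({"adapt_dest": "donor-DU"}, routing_table, "IAB-315")
```

Here `next_hop` is "IAB-314"; a packet addressed to the node itself would instead be delivered locally.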
Failure of a wireless backhaul link between an intermediate node (e.g., IAB node 312 in Figure 3) and its parent node (e.g., DU 321) in an IAB network can create various problems for other nodes (e.g., IAB nodes 314-315) that utilize that failed backhaul link. Such problems can include packet losses, retransmissions, or other undesired effects that can exacerbate congestion of an IAB network that already includes one failed wireless backhaul link. Such congestion can result in failure of additional wireless backhaul links in the IAB network and, consequently, loss of service to network users.
SUMMARY
Accordingly, exemplary embodiments of the present disclosure address these and other difficulties in scheduling of uplink (UL) transmissions in a 5G network comprising IAB nodes, thereby enabling the otherwise-advantageous deployment of IAB solutions.
Exemplary embodiments of the present disclosure include methods and/or procedures for managing a link failure in an integrated access backhaul (IAB) network. These exemplary methods and/or procedures can be performed by a network node (e.g., an intermediate IAB node) in a radio access network (RAN, e.g., NG-RAN). The exemplary methods and/or procedures can include receiving, from a first upstream node in the IAB network, a first indication of failure of a first backhaul link in a first network path that includes the intermediate node, the first upstream node, and a destination node for uplink (UL) data in the IAB network. For example, the destination node can be a donor DU and/or a donor CU.
The exemplary methods and/or procedures can also include, in response to the first indication, performing one or more first actions with respect to transmission of UL data towards the first upstream node. In various embodiments, the one or more first actions can include various actions with respect to different protocol layers (e.g., PDCP, RLC, MAC) comprising the network node. The exemplary methods and/or procedures can also include, based on information associated with the first indication, selectively forwarding the first indication to one or more downstream nodes in the IAB network. In some embodiments, the one or more downstream nodes can comprise one or more intermediate nodes and one or more user equipment (UEs).
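One possible realization of this selective forwarding uses the depth value of the enumerated examples as a TTL-style hop counter. The sketch below is illustrative only: all names are hypothetical, and the no-depth policy shown (forward unchanged to all downstream nodes) is just one of the options the embodiments allow:

```python
def handle_failure_indication(indication: dict, downstream_nodes: list):
    """Selectively forward a backhaul-failure indication downstream.

    A 'depth' value, when present, acts like a TTL: the indication is
    forwarded only while the depth is non-zero, decremented at each hop.
    Returns (node, indication) pairs for each forwarded copy.
    """
    depth = indication.get("depth")
    if depth is None:
        # No depth value included: here we forward unchanged to all
        # downstream nodes (one of several possible policies).
        return [(node, indication) for node in downstream_nodes]
    if depth == 0:
        return []                                  # indication stops at this node
    forwarded = dict(indication, depth=depth - 1)  # decrement, then forward
    return [(node, forwarded) for node in downstream_nodes]

result = handle_failure_indication({"failed_link": "BH-1", "depth": 2}, ["IAB-314"])
```

With a depth of 2, the copy forwarded to "IAB-314" carries depth 1, so it propagates exactly two more hops before stopping.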
In some embodiments, the exemplary methods and/or procedures can also include receiving a second indication concerning a path in the IAB network. In some embodiments, the second indication can be received from the first upstream node and can indicate that the first backhaul link in the first network path has been restored. In some embodiments, the second indication can be received from a second upstream node and can indicate the establishment of a second network path that includes the intermediate node, the second upstream node, and the destination node.
In some embodiments, the exemplary methods and/or procedures can also include, in response to the second indication, performing one or more second actions with respect to transmission of UL data towards the upstream node. In various embodiments, the one or more second actions can include various actions with respect to different protocol layers (e.g., PDCP, RLC, MAC) comprising the network node.
In some embodiments, the exemplary methods and/or procedures can also include, based on information associated with the second indication, selectively forwarding the second indication to the one or more downstream nodes. For example, the selective forwarding of the second indication can include substantially similar operations, or be based on substantially similar information, as the selective forwarding of the first indication.
Other exemplary embodiments of the present disclosure include additional methods and/or procedures for managing a link failure in an IAB network. These exemplary methods and/or procedures can be performed by a network node (e.g., an intermediate node immediately downstream of the failed link) in a RAN (e.g., NG-RAN). The exemplary methods and/or procedures can include detecting a failure of a first backhaul link between the intermediate node and a first upstream node in the IAB network. The first backhaul link can be part of a first network path that includes the intermediate node, the first upstream node, a plurality of downstream nodes, and a destination node for UL data. For example, the destination node can be a donor DU and/or a donor CU.
The exemplary methods and/or procedures can also include sending, to the first downstream node, a first indication of the failure of the first backhaul link, and performing one or more first actions with respect to transmission of UL data towards the first upstream node. In various embodiments, the one or more first actions can include various actions with respect to different protocol layers comprising the network node, e.g., PDCP, RLC, and MAC layers.
In some embodiments, the exemplary methods and/or procedures can also include determining that a second network path has been established. The second network path can include the intermediate node, the plurality of downstream nodes, and the destination node. In such embodiments, the exemplary methods and/or procedures can also include sending, to the first downstream node, a second indication concerning the second path. In some embodiments, the second network path can include the first network path, and the second indication can indicate that the first backhaul link in the first network path has been restored. In other embodiments, the second network path can include a second upstream node but not the first upstream node, and the second indication can indicate that the second network path has been established to replace the first network path.
In some embodiments, the exemplary methods and/or procedures can also include performing one or more second actions with respect to transmission of UL data towards the first upstream node. In various embodiments, the one or more second actions can include various actions with respect to different protocol layers comprising the network node, e.g., PDCP, RLC, and MAC layers.
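Under the assumption that the "first actions" suspend UL transmission toward the failed link and the "second actions" resume it once a path is restored or replaced (one plausible reading; the embodiments allow other actions), the detecting node's behavior can be sketched as a small state machine. All class and method names are hypothetical:

```python
class IntermediateNode:
    """Toy model of an IAB node reacting to backhaul failure and recovery."""

    def __init__(self):
        self.ul_suspended = False   # whether UL transmission is currently held
        self.sent_indications = []  # log of (downstream_node, indication) pairs

    def on_backhaul_failure(self, downstream_node: str):
        # First actions: e.g., buffer PDCP/RLC PDUs, stop sending scheduling requests
        self.ul_suspended = True
        self.sent_indications.append((downstream_node, "failure"))

    def on_path_available(self, downstream_node: str, restored: bool):
        # Second actions: resume UL towards the restored (or replacement) upstream node
        self.ul_suspended = False
        self.sent_indications.append(
            (downstream_node, "restored" if restored else "replaced"))

node = IntermediateNode()
node.on_backhaul_failure("IAB-314")
node.on_path_available("IAB-314", restored=True)
```

After the second call, UL transmission is no longer suspended, and the downstream node has been sent a failure indication followed by a restoration indication.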
Exemplary embodiments also include network nodes (e.g., intermediate IAB nodes and/or components thereof) configured to perform operations corresponding to any of the exemplary methods and/or procedures described herein. Exemplary embodiments also include non-transitory, computer-readable media storing computer-executable instructions that, when executed by processing circuitry comprising a network node, configure the network node to perform operations corresponding to any of the exemplary methods and/or procedures described herein.
These and other objects, features, and advantages of the present disclosure will become apparent upon reading the following Detailed Description in view of the Drawings briefly described below.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates a high-level view of the 5G network architecture, including the split central unit (CU)-distributed unit (DU) architecture of gNBs.
Figure 2 illustrates the control-plane (CP) and user-plane (UP) interfaces within the split CU-DU architecture shown in Figure 1.
Figure 3 shows a reference diagram for an integrated access backhaul (IAB) network in standalone mode, as further explained in 3GPP TR 38.874.
Figures 4-8 show block diagrams of IAB reference architectures 1a, 1b, 2a, 2b, and 2c, respectively.
Figure 9, which includes Figures 9A-E, shows five (5) different exemplary user plane (UP) protocol stack options for architecture 1a.
Figure 10 shows an exemplary UP protocol stack arrangement for architecture 1b.
Figures 11-12 are block diagrams of an exemplary IAB network that includes a donor DU, a donor CU, and various IAB nodes that are capable of providing access to various UEs, according to various exemplary embodiments of the present disclosure.
Figure 13 shows an exemplary data flow diagram corresponding to the IAB network illustrated in Figures 11-12, according to various exemplary embodiments of the present disclosure.
Figures 14-15 illustrate exemplary methods and/or procedures for managing a link failure in an integrated access backhaul (IAB) network, according to various exemplary embodiments of the present disclosure.
Figure 16 illustrates an exemplary wireless network, according to various exemplary embodiments of the present disclosure.
Figure 17 illustrates an exemplary UE, according to various exemplary embodiments of the present disclosure.
Figure 18 is a block diagram illustrating an exemplary virtualization environment usable for implementation of various embodiments described herein.
Figures 19-20 are block diagrams of various exemplary communication systems and/or networks, according to various exemplary embodiments of the present disclosure.
Figures 21-24 are flow diagrams of exemplary methods and/or procedures for transmission and/or reception of user data, according to various exemplary embodiments of the present disclosure.
DETAILED DESCRIPTION
Exemplary embodiments briefly summarized above will now be described more fully with reference to the accompanying drawings. These descriptions are provided by way of example to explain the subject matter to those skilled in the art and should not be construed as limiting the scope of the subject matter to only the embodiments described herein. More specifically, examples are provided below that illustrate the operation of various embodiments according to the advantages discussed above. Furthermore, the following terms are used throughout the description given below:
• Radio Node: As used herein, a “radio node” can be either a “radio access node” or a “wireless device.”
• Radio Access Node: As used herein, a “radio access node” (or “radio network node”) can be any node in a radio access network (RAN) of a cellular communications network that operates to wirelessly transmit and/or receive signals. Some examples of a radio access node include, but are not limited to, a base station (e.g., a New Radio (NR) base station (gNB) in a 3GPP Fifth Generation (5G) NR network or an enhanced or evolved Node B (eNB) in a 3GPP LTE network), a high-power or macro base station, a low-power base station (e.g., a micro base station, a pico base station, a home eNB, or the like), an integrated access backhaul (IAB) node, and a relay node.
• Core Network Node: As used herein, a “core network node” is any type of node in a core network. Some examples of a core network node include, e.g., a Mobility Management Entity (MME), a Packet Data Network Gateway (P-GW), a Service Capability Exposure Function (SCEF), or the like.
• Wireless Device: As used herein, a “wireless device” (or “WD” for short) is any type of device that has access to (i.e., is served by) a cellular communications network by communicating wirelessly with network nodes and/or other wireless devices. Unless otherwise noted, the term “wireless device” is used interchangeably herein with “user equipment” (or “UE” for short). Some examples of a wireless device include, but are not limited to, a UE in a 3GPP network and a Machine Type Communication (MTC) device. Communicating wirelessly can involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air.
• Network Node: As used herein, a “network node” is any node that is either part of the radio access network or the core network of a cellular communications network. Functionally, a network node is equipment capable, configured, arranged, and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the cellular communications network, to enable and/or provide wireless access to the wireless device, and/or to perform other functions (e.g., administration) in the cellular communications network. Note that the description given herein focuses on a 3GPP cellular communications system and, as such, 3GPP terminology or terminology similar to 3GPP terminology is generally used. However, the concepts disclosed herein are not limited to a 3GPP system. Other wireless systems, including without limitation Wide Band Code Division Multiple Access (WCDMA), Worldwide Interoperability for Microwave Access (WiMax), Ultra Mobile Broadband (UMB) and Global System for Mobile Communications (GSM), may also benefit from the concepts, principles, and/or embodiments described herein.
In addition, functions and/or operations described herein as being performed by a wireless device or a network node may be distributed over a plurality of wireless devices and/or network nodes. Furthermore, although the term “cell” is used herein, it should be understood that (particularly with respect to 5G NR) beams may be used instead of cells and, as such, concepts described herein apply equally to both cells and beams.
As briefly mentioned above, a backhaul link failure between an intermediate node and its parent node in an IAB network can create various problems for other nodes that utilize that failed backhaul link. Such problems can include packet losses, retransmissions, or other undesired effects that can exacerbate congestion of an IAB network that already includes one failed backhaul link. Such congestion can result in failure of additional wireless backhaul links in the IAB network and, consequently, loss of service to network users. These issues are discussed below in more detail.
3GPP TR 38.874 (v0.2.1) specifies several reference architectures for supporting user plane (UP) traffic over IAB nodes, including IAB Donor nodes. Figure 4 shows a block diagram of reference architecture “1a”, which leverages the CU/DU split architecture in a two-hop chain of IAB nodes underneath an IAB-donor.
In this architecture, each IAB node holds a DU and a mobile terminal (MT). Via the MT, the IAB-node connects to an upstream IAB-node or the IAB-donor. Via the DU, the IAB-node establishes RLC-channels to UEs and to MTs of downstream IAB-nodes. For MTs, this RLC-channel may refer to a modified RLC*. Whether an IAB node can connect to more than one upstream IAB-node or IAB-donor is for further study.
The IAB Donor also includes a DU to support UEs and MTs of downstream IAB nodes. The IAB-donor holds a CU for the DUs of all IAB-nodes and for its own DU. It is FFS if different CUs can serve the DUs of the IAB-nodes. Each DU on an IAB-node connects to the CU in the IAB-donor using a modified form of F1, which is referred to as F1*. F1*-U runs over RLC channels on the wireless backhaul between the MT on the serving IAB-node and the DU on the donor. F1*-U transport between MT and DU on the serving IAB-node as well as between DU and CU on the donor is for further study. An adaptation layer is added, which holds routing information, enabling hop-by-hop forwarding. It replaces the IP functionality of the standard F1-stack. F1*-U may carry a GTP-U header for the end-to-end association between CU and DU. In a further enhancement, information carried inside the GTP-U header may be included into the adaptation layer. Further, optimizations to RLC are possible, such as applying ARQ only on the end-to-end connection rather than hop-by-hop.
The right side of Figure 4 shows two examples of such F1*-U protocol stacks. In this figure, enhancements of RLC are referred to as RLC*. The MT of each IAB-node further sustains NAS connectivity to the NGC, e.g., for authentication of the IAB-node. It further sustains a PDU-session via the NGC, e.g., to provide the IAB-node with connectivity to the OAM. Details of F1*, the adaptation layer, RLC*, hop-by-hop forwarding, and transport of F1-AP are for further study. Protocol translation between F1* and F1 in case the IAB-donor is split is also for further study.
Figure 5 shows a block diagram of a reference architecture “1b”, which also leverages the CU/DU split architecture in a two-hop chain of IAB nodes underneath an IAB-donor. The IAB-donor holds only one logical CU. In this architecture, each IAB-node and the IAB-donor hold the same functions as in architecture 1a. Also, as in architecture 1a, every backhaul link establishes an RLC-channel, and an adaptation layer is inserted to enable hop-by-hop forwarding of F1*.
In architecture 1b, however, the MT on each IAB-node establishes a PDU-session with a UPF residing on the donor. The MT’s PDU-session carries F1* for the collocated DU. In this manner, the PDU-session provides a point-to-point link between CU and DU. On intermediate hops, the PDCP-PDUs of F1* are forwarded via an adaptation layer in the same manner as described for architecture 1a. The right side of Figure 5 shows an example of the F1*-U protocol stack.
In architectures 1a and 1b, the UE establishes RLC channels over the wireless backhaul to the DU on the UE’s access IAB node, and these channels are extended toward the IAB donor via the F1*-U interface. Transport of F1*-U over the wireless backhaul is enabled by an adaptation layer, which is integrated with the RLC channel. In both architectures 1a and 1b, information carried on the adaptation layer supports the following functions:
• Routing across the wireless backhaul topology,
• QoS-enforcement by the scheduler on DL and UL on the wireless backhaul link, and
• Mapping of UE user-plane PDUs to backhaul RLC channels. In architecture 1a, information carried on the adaptation layer also supports identification of the UE-bearer for each PDU.
Figure 6 shows a block diagram of a reference architecture “2a”, which employs hop-by-hop forwarding across intermediate nodes using PDU-session-layer routing. In architecture 2a, each IAB-node holds an MT to establish an NR Uu link with a gNB on the parent IAB-node or IAB-donor. Via this NR-Uu link, the MT sustains a PDU-session with a UPF that is collocated with the gNB. In this manner, an independent PDU-session can be created on every backhaul link. Each IAB-node can also support a routing function to forward data between PDU sessions of adjacent links. This can create a forwarding plane across the wireless backhaul. Based on PDU-session type, this forwarding plane can support IP or Ethernet (e.g., 802.1). In case the PDU-session type is Ethernet, an IP layer can be established on top. In this manner, each IAB-node obtains IP-connectivity to the wireline backhaul network.
In reference architecture 2a, all IP-based interfaces such as NG, Xn, F1, N4, etc. are carried over this forwarding plane. In the case of F1, an IAB-Node serving a UE can contain a DU for access links in addition to the gNB and UPF for the backhaul links. The CU for access links would reside in or beyond the IAB Donor. The right side of Figure 6 shows an example of the NG-U protocol stack for IP-based and for Ethernet-based PDU-session types.
Figure 7 shows a block diagram of a reference architecture “2b”, which employs hop-by-hop forwarding across intermediate nodes using GTP-U/UDP/IP nested tunneling. In Figure 7, the IAB-node holds an MT to establish an NR Uu link with a gNB on the parent IAB-node or IAB-donor. Via this NR-Uu link, the MT sustains a PDU-session with a UPF. In contrast to architecture 2a, however, this UPF is located at the IAB-donor. Also, forwarding of PDUs across upstream IAB-nodes is accomplished via tunneling. The forwarding across multiple hops therefore creates a stack of nested tunnels. As in architecture 2a, each IAB-node obtains IP-connectivity to the wireline backhaul network. All IP-based interfaces such as NG, Xn, F1, N4, etc. are carried over this forwarding IP plane. The right side of Figure 7 shows a protocol stack example for NG-U.
Figure 8 shows a block diagram of a reference architecture “2c”, which employs hop-by-hop forwarding across intermediate nodes using GTP-U/UDP/IP/PDCP nested tunneling. In Figure 8, the IAB-node holds an MT which sustains an RLC-channel with a DU on the parent IAB-node or IAB-donor. The IAB donor holds a CU and a UPF for each IAB-node’s DU. The MT on each IAB-node sustains an NR-Uu link with a CU and a PDU session with a UPF on the donor. Forwarding on intermediate nodes is accomplished via tunneling. The forwarding across multiple hops creates a stack of nested tunnels. As in architectures 2a and 2b, each IAB-node obtains IP-connectivity to the wireline backhaul network. In contrast to architecture 2b, however, each tunnel includes an SDAP/PDCP layer. All IP-based interfaces such as NG, Xn, F1, N4, etc. are carried over this forwarding plane. The right side of Figure 8 shows a protocol stack example for NG-U.
There are various user plane (UP) considerations for architecture group 1 (i.e., architectures 1a and 1b) including placement of an adaptation layer, functions supported by the adaptation layer, support of multi-hop RLC, and impacts on scheduler and QoS. These are illustrated by exemplary protocol stacks for architectures 1a and 1b shown in Figures 9 and 10, respectively. More specifically, Figures 9A-E show five (5) different UP protocol stack options for architecture 1a, while Figure 10 shows an exemplary UP protocol stack arrangement for architecture 1b. As shown in these figures, both the IAB-donor and the UE will always have PDCP, RLC, and MAC layers, while the intermediate IAB-nodes will only have RLC and MAC layers. The adaptation layer can be included in the intermediate IAB-nodes and the IAB-donor. These IAB nodes can use identifiers carried via the adaptation layer to ensure required QoS treatment and to decide which hop any given packet should be sent to.
Each PDCP transmitter entity in Figures 9-10 receives PDCP service data units (SDUs) from higher layers and assigns each SDU a Sequence Number before delivery to the RLC layer. A discardTimer is also started when a PDCP SDU is received. When the discardTimer expires, the PDCP SDU is discarded and a discard indication is sent to lower layers. In response, RLC will discard the RLC SDU if possible.
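By way of a purely illustrative sketch (not part of the specified 3GPP procedures; the class and method names are invented for this example), the SN assignment and discardTimer behavior described above can be pictured as follows:

```python
class PdcpTransmitter:
    """Simplified sketch of PDCP SN assignment and the per-SDU
    discardTimer behavior described above (illustrative only)."""

    def __init__(self, discard_timeout):
        self.discard_timeout = discard_timeout  # e.g., in milliseconds
        self.next_sn = 0
        self.buffered = {}  # SN -> (sdu, arrival_time)

    def receive_sdu(self, sdu, now):
        # Assign a sequence number and start the discardTimer for this SDU.
        sn = self.next_sn
        self.next_sn += 1
        self.buffered[sn] = (sdu, now)
        return sn  # the PDU with this SN would then be delivered to RLC

    def tick(self, now):
        # Discard SDUs whose discardTimer has expired; the returned SNs
        # model the discard indications sent to lower layers (RLC).
        expired = [sn for sn, (_, t0) in self.buffered.items()
                   if now - t0 >= self.discard_timeout]
        for sn in expired:
            del self.buffered[sn]
        return expired
```

For instance, an SDU received at time 0 with a 10 ms discardTimer would be discarded at time 10, while a later SDU remains buffered.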
Each PDCP receiver entity in Figures 9-10 starts a reordering timer (e.g., t-reordering) when it receives packets out of order. When t-reordering expires, the PDCP entity updates the variable RX_DELIV, which indicates the value of the first PDCP SDU not delivered to the upper layers (e.g., the lower edge of the receiving window).
Each RLC transmitter entity in Figures 9-10 associates a sequence number with each SDU received from higher layers (e.g., PDCP). In acknowledged-mode (AM) operation, the RLC transmitter can set a poll bit to request the RLC receiver to transmit a status report on RLC PDUs sent by the transmitter. Upon setting the poll bit, the RLC transmitter starts a timer (e.g., t-pollRetransmit). Upon expiration of this timer, the RLC transmitter can set the poll bit again and can retransmit those PDUs that were awaiting acknowledgement.
On the other hand, an RLC receiver will start a timer (e.g., t-reassembly) when RLC PDUs are received out of sequence. A missing PDU can be determined based on a gap in RLC sequence numbers. This function is similar to the t-reordering timer in PDCP. When t-reassembly expires during AM operation, the RLC receiver will transmit a status report to trigger a retransmission by the RLC transmitter.
Once a MAC transmitter entity in Figures 9-10 receives SDUs from higher layers (e.g., RLC) for transmission, it can request a resource grant for transmitting the corresponding MAC PDUs. The MAC transmitter can request a resource grant by sending either a scheduling request (SR) or a buffer status report (BSR).
Figure 11 is a block diagram of an exemplary IAB network that includes a donor DU, a donor CU, and various IAB nodes that are capable of providing access to various UEs. For example, node IAB1 provides access to various UEs (labelled UE_q ... UE_z) and also provides backhaul services to “child” node IAB2. Furthermore, IAB1 also provides backhaul services to all nodes that rely on IAB2 for backhaul services, e.g., nodes IAB3-7 in Figure 11. In other words, the wireless link between IAB1 and IAB2 is expected to provide backhaul for traffic originating from UEs served by nodes IAB2-7. As such, nodes IAB3-7 can be considered “descendants” of (or downstream to) IAB2, while nodes IAB2-7 can be considered descendants of IAB1. Furthermore, nodes IAB1, IAB2, and IAB4 can be referred to as “intermediate” (or upstream) nodes with respect to the nodes IAB6-7 that provide access to various UEs (e.g., UE_l ... UE_m and UE_n ... UE_p).
As illustrated in Figure 11, nodes IAB6 (1120), IAB4 (1130), IAB2 (1140), and IAB1 (1150) are part of a first path between UE_l (1110) and the donor DU (1160). Figure 11 also illustrates a failure in the wireless backhaul link between nodes IAB1 and IAB2. Although IAB2 may be aware of this failure, none of the nodes IAB3-7 - nor the UEs that they serve - are aware of this failure. As such, these nodes and UEs will continue requesting resources from other intermediate nodes (e.g., IAB4) to send UL data towards the donor CU, and the intermediate nodes may continue to grant such requests since they are unaware of the IAB1-2 link failure. This can cause packet buildup, buffer overflow, and/or retransmissions in PDCP, RLC, and/or MAC layers of intermediate nodes closer to the link failure (e.g., IAB2). Alternately, if intermediate nodes did not grant such requests, this could lead to buffer overflow, packet drops, and/or retransmissions in the intermediate nodes closer to the traffic sources (e.g., UEs). These effects are undesirable and are likely to increase congestion and reduce performance in a network that already includes one wireless backhaul link failure.
Exemplary embodiments of the present disclosure address these and other problems, challenges, and/or issues by providing specific enhancements and/or improvements to handling wireless backhaul link failures in multi-hop IAB networks. In general, embodiments involve techniques and/or mechanisms for communicating the link failure condition to some or all of the affected IAB nodes and/or UEs whose data traverses the failed link. The IAB nodes and/or UEs receiving this information can then pause and/or reduce the transmission rate of UL data towards the donor CU. In this manner, embodiments can reduce buffer buildup at intermediate IAB nodes, thereby reducing the probability of packet drops and retransmissions and maintaining acceptable service performance in the IAB network.
In some embodiments, during a period after receiving a first indication of a backhaul link failure and prior to receiving a second indication that the backhaul link failure has been mitigated (e.g., by establishing a second backhaul link to a different parent IAB node), the affected IAB nodes and/or UEs can forego sending Scheduling Requests (SR) and/or Buffer Status Reports (BSR) to parent IAB nodes. In some embodiments, during this period, the affected IAB nodes and/or UEs can adjust the value of various timers (e.g., PDCP SDU discard timer set to an infinite value), or halt the timers altogether, to ensure that UL data packets will not be discarded from the transmission buffers and/or that data retransmission will not occur. However, the affected IAB nodes and/or UEs can continue sending lower-layer ACK/NACK for DL data to ensure that transmission of DL data packets continues downstream of the failure (e.g., from IAB2 towards IAB3-7 in Figure 11).
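One way to picture the behavior during this period is a simple per-node state toggle (an illustrative sketch only; the class and method names are invented and do not correspond to any standardized interface):

```python
class IabNodeState:
    """Illustrative sketch of how an affected node might gate its
    uplink-related actions between the first and second indications."""

    def __init__(self):
        self.bh_failure = False  # set on first indication, cleared on second

    def on_first_indication(self):
        self.bh_failure = True

    def on_second_indication(self):
        self.bh_failure = False

    def may_send_sr_or_bsr(self):
        # SR/BSR toward the parent are withheld while the failure persists.
        return not self.bh_failure

    def may_send_dl_ack_nack(self):
        # Lower-layer feedback for DL data continues regardless, so DL
        # delivery downstream of the failure is not stalled.
        return True
```

The key point the sketch illustrates is the asymmetry: UL resource requests are gated by the failure state, while DL acknowledgements are not.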
In some embodiments, during this period, the affected IAB nodes and/or UEs can also deactivate and/or reduce usage of UL resource grants that were previously configured (e.g., semi-static, periodic, and/or longer-duration grants). This deactivation and/or usage reduction can be done in a particular manner that can be pre-configured. Additionally, the affected IAB nodes can avoid or delay scheduling child IAB nodes and/or UEs for UL and/or DL data transmission.
The first indication (i.e., of the failure) and the second indication (i.e., of the failure mitigation) can be explicit or implicit. Furthermore, after receiving the first indication or the second indication from a particular node (e.g., parent IAB node), the receiving node can forward the received indication to one or more other nodes (e.g., child/descendant IAB nodes).
For example, the first and second indications can be provided via dedicated signaling (e.g., an RRC message, a MAC Control Element (CE), a field/value in a resource grant, etc.), broadcast signaling (e.g., SIB1), or any other higher-layer signaling at RLC or PDCP. In various embodiments, the first indication can include any of the following information: • Type of problem and/or failure detected, e.g., radio link failure, slow link performance, etc.
• Nodes and/or paths that are affected. For example, this information can be used to allow other nodes to use other paths or intermediate nodes for communication.
• Expected timing of resolution.
• Need for the receiving node to take action toward resolution (e.g., re-connect through another node or path).
• Affected protocol layers and/or functions that need to be halted and/or limited.
• Depth for indication propagation. For example, for the IAB1-2 failure scenario of Figure 11, it may be unnecessary to halt data flow on all hops until the failed link is restored or the path is changed. An alternative approach is to specify the number of hops for propagating the indication. For example, the first indication can include an optional flag indicating the depth of propagation (e.g., 0 = no propagation, 1..n = number of hops for propagation). A node receiving a first indication with a non-zero depth value can forward the first indication after decrementing the depth value. In some embodiments, a particular value of the depth flag can be reserved to indicate propagation to the leaf nodes/UEs.
In various embodiments, a received first indication with no depth flag can indicate one of the following: no propagation is needed, propagation should be done over a predetermined number of hops, propagation should be done all the way until leaf nodes/UEs are reached, or propagation of the first indication is left to the discretion of the receiving IAB node. In embodiments where the propagation is at the IAB node’s discretion, the IAB node can base its propagation decision on the buffer occupancy (BO) status of its MT module. For example, the IAB node can propagate the first indication to the descendant nodes and/or UEs only when the BO is greater than or equal to a predetermined threshold (e.g., X% of the buffer size).
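The depth-flag handling described above can be sketched as follows (a hypothetical encoding; the reserved value, the buffer-occupancy threshold, and the function name are all assumptions made for this illustration):

```python
RESERVED_LEAF = 255  # assumed reserved value: propagate all the way to leaf nodes/UEs

def propagate_decision(depth, buffer_occupancy=0.0, threshold=0.5):
    """Return (forward, new_depth) for a received first indication.

    depth: None (flag absent), 0 (no propagation), 1..n (remaining hops),
    or RESERVED_LEAF. When the flag is absent, this sketch models the
    'node discretion' option: forward only if the MT buffer occupancy
    is at or above the threshold.
    """
    if depth is None:
        return (buffer_occupancy >= threshold, None)
    if depth == RESERVED_LEAF:
        return (True, RESERVED_LEAF)  # keep propagating to the leaves
    if depth == 0:
        return (False, None)
    return (True, depth - 1)  # decrement the depth before forwarding
```

For example, a node receiving depth 2 forwards with depth 1; a node receiving depth 0 stops propagation; and with no flag, a node forwards only when its buffer occupancy is high.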
The second indication can also include one or more of the above-listed items of information, but with respect to correction of the problem and/or failure, and the resumption of protocol layers and/or functions. In cases when an intermediate node has many descendant nodes and/or UEs that require the second indication to resume normal operation, the intermediate node can propagate this second indication in a manner (e.g., sequentially, in groups, etc.) that avoids and/or mitigates congestion on the backhaul network due to all descendant nodes and/or UEs resuming UL data transmission simultaneously. In some cases, the two nodes directly connected to the failed backhaul link can establish a new path or link to reroute the UL data from the descendant nodes and UEs. Figure 12 shows the IAB network of Figure 11, but where a new wireless backhaul link between IAB2 (1140) and IAB1 (1150) has been established via IAB8 (1145) to replace the failed wireless backhaul link directly between IAB2 and IAB1. Alternately, in some network topologies, an intermediate node can establish a new backhaul link with other descendant nodes that bypasses a failed child node. For example, in the topology of Figure 12, IAB1 can establish a new wireless backhaul link with IAB4 (1130) that completely bypasses IAB2. This is illustrated by the dashed line between IAB1 and IAB4 in Figure 12.
In such cases, since the descendant IAB nodes/UEs can be rerouted to a path that doesn’t include the affected IAB node, the second indication to resume normal operation can be an implicit indication. For example, when a descendant IAB node and/or UE that previously received a first indication notices that its parent/serving IAB node has been changed, it can interpret this information as the second indication. The IAB node that implicitly receives this second indication can then send an explicit second indication to its descendant IAB nodes and/or UEs, in the same manner as described above.
In other embodiments, when a descendant IAB node and/or UE that previously received a first indication from a parent node receives a resource grant from the parent node, it can interpret this information as the second indication.
In some network topologies, there can be multiple backhaul paths set up between two IAB nodes (e.g., for redundancy, load balancing, etc.), such that the failure of a backhaul link affects only one of the available paths. In such a case, the first indication can include information identifying the affected path, so that traffic using the path(s) comprising the failed link can be halted and/or reduced without affecting traffic using the other path(s). In some embodiments, the first IAB node can identify the bearers associated with the failed path and the descendant nodes associated with those bearers, and then send the first and/or second indications only to those nodes. In some embodiments, the IAB node can include information about the bearers such that descendant IAB nodes can perform similar operations.
For example, an IAB node can include the adaptation layer address associated with the affected path in the first and/or second indications that it transmits. Using this address, any receiving descendant node can identify associated adaptation layer address(es) from its own set of adaptation layer addresses, and then propagate the indication only to its child node(s) that use path(s) associated with the identified adaptation layer address(es). In other embodiments, the first IAB node could have information (e.g., adaptation layer addresses) for all of its descendant nodes that are associated with the failed path. In such case, the first IAB node can include such information in the first and/or second indications. The nodes receiving these first and/or second indications can remove their own adaptation layer addresses from the first indication, and then forward the modified indication to their child nodes that are associated with the other adaptation layer addresses remaining in the modified indication. The first or second indication can be modified and forwarded in this manner until no other adaptation layer addresses remain.
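The address-pruning variant described above can be sketched as follows (purely illustrative; the string addresses and the per-child routing table are invented for this example, and real adaptation-layer addresses would be protocol-specific identifiers):

```python
def forward_indication(my_address, affected_addresses, children):
    """Sketch: an indication carries the adaptation-layer addresses of
    all descendants on the failed path. The receiving node removes its
    own address and forwards the remainder only toward the children that
    serve the remaining addresses.

    children: dict mapping child node name -> set of adaptation-layer
    addresses reachable through that child.
    Returns a dict child -> pruned address list (empty dict = stop).
    """
    remaining = [a for a in affected_addresses if a != my_address]
    forwarded = {}
    for child, reachable in children.items():
        relevant = [a for a in remaining if a in reachable]
        if relevant:
            forwarded[child] = relevant
    return forwarded
```

Once every address has been removed along the way, the returned dict is empty and propagation stops, matching the termination condition described above.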
In various embodiments, upon receiving the first indication, the affected IAB nodes and/or UEs can halt, modify, adjust, and/or limit one or more processes in the respective PDCP, RLC, and MAC layers. Likewise, upon receiving the second indication, the affected IAB nodes and/or UEs can restore the one or more processes to their respective operational settings prior to receiving the first indication. Various examples are discussed below.
In one example, some transmitter and/or receiver functions can be halted or limited upon receiving the first indication. For example, the PDCP transmitter can stop assigning SNs to PDCP SDUs, stop creating new PDCP PDUs, and/or stop delivering PDCP PDUs to lower layers. The PDCP transmitter can also reduce or limit the rate at which it performs these procedures. Furthermore, the PDCP transmitter timers can be halted, or the current configured values can be modified. For example, the discardTimer associated with each PDCP SDU can be halted. Its value may be stored, or reset to its initial value, or to a new value.
Similarly, upon receiving the first indication, one or more PDCP receiver timers can be halted, or the current configured values can be modified. For instance, if t-reordering was running, the timer may be stopped and its value stored, or reset to its initial value or to a new value. In addition, t-reordering (or a new timer) can be started with a value that protects against long stalls; once the timer expires, the stored PDCP PDUs can be delivered to higher layers.
Furthermore, when the second indication is received, the PDCP transmitter may resume assigning SNs to PDCP SDUs, creating new PDCP PDUs, and/or delivering further PDCP PDUs to lower layers. The PDCP transmitter can also lift any restriction on the rate at which it performs these procedures. Furthermore, the PDCP transmitter can also resume any halted timers (e.g., discardTimer), or re-start halted timers with initial configured values or other values. Similarly, the PDCP receiver can resume any halted timers (e.g., t-reordering) or restart them with initial configured values or other values. Likewise, any timer that was started due to the reception of the first indication can be stopped when the second indication is received.
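The halt/store/resume handling of a timer such as discardTimer or t-reordering could be sketched as follows (an illustrative model only, not tied to any real protocol implementation):

```python
class PausableTimer:
    """Sketch of a protocol timer that can be halted on the first
    indication (storing its remaining value) and resumed on the second
    indication, either from the stored value or from a reset value."""

    def __init__(self, duration):
        self.duration = duration
        self.remaining = None  # None = not running
        self.stored = None

    def start(self):
        self.remaining = self.duration

    def tick(self, elapsed):
        # Advance time; return True if the timer expires during this tick.
        if self.remaining is None:
            return False
        self.remaining -= elapsed
        if self.remaining <= 0:
            self.remaining = None
            return True
        return False

    def halt(self):
        # First indication: stop the timer and store its remaining value.
        self.stored = self.remaining
        self.remaining = None

    def resume(self, reset_to=None):
        # Second indication: resume from the stored value, or restart
        # from an initial/new value if reset_to is given.
        self.remaining = reset_to if reset_to is not None else self.stored
```

While halted, the timer cannot expire, which models how a stopped discardTimer prevents UL packets from being discarded during the failure period.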
In a second example, upon receiving the first indication, the RLC transmitter can stop assigning SNs to RLC SDUs, stop creating new RLC PDUs, and/or stop delivering RLC PDUs to lower layers (e.g., MAC). The RLC transmitter can also reduce or limit the rate at which it performs these procedures. Furthermore, the RLC transmitter timers can be halted, or the current configured values can be modified. For example, t-pollRetransmit can be halted. Its value may be stored, or reset to its initial value, or to a new value.
Similarly, upon receiving the first indication, one or more RLC receiver timers can be halted, or the current configured values can be modified. For instance, if t-reassembly and/or t-StatusProhibit was running, the timer(s) can be stopped and their values stored, or reset to initial or new values. In addition, the timer(s) (or a new timer) can be started with a value that protects against long stalls; once the timer(s) expire, the stored complete RLC SDUs can be delivered to higher layers.
Furthermore, when the second indication is received, the RLC transmitter can resume assigning SNs to RLC SDUs, creating new RLC PDUs, and/or delivering further RLC PDUs to lower layers. The RLC transmitter can also lift any restriction on the rate at which it performs these procedures. Furthermore, the RLC transmitter can also resume any halted timers (e.g., t-pollRetransmit), or re-start halted timers with initial configured values or other values. Similarly, the RLC receiver can resume any halted timers (e.g., t-reassembly and/or t-StatusProhibit) or restart them with initial configured values or other values. Likewise, any timer that was started due to the reception of the first indication can be stopped when the second indication is received.
In a third example, upon receiving the first indication, the MAC transmitter can halt transmission of scheduling requests (SRs), or reduce/limit the rate at which SRs are transmitted. Likewise, if the MAC transmitter was previously configured with resource grants, the MAC transmitter can halt and/or restrict usage of such resource grants after receiving the first indication. For example, the MAC transmitter can use such resource grants for retransmission of MAC or RLC-layer data, but not use such resource grants for initial transmission of data.
Furthermore, when the second indication is received, the MAC transmitter can resume transmission of scheduling requests (SRs), or increase the rate at which SRs are transmitted to the rate used prior to receiving the first indication. Likewise, if the MAC transmitter was previously configured with resource grants, the MAC transmitter can resume full usage of such resource grants after receiving the second indication, e.g., for initial transmission and re-transmission. In some embodiments, the MAC transmitter can also transmit a buffer status report (BSR) after receiving the second indication, thereby providing upstream nodes with as much information as possible about buffer status of all logical channels with buffered data. For example, the MAC transmitter can send such information in a long BSR.
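The MAC-layer gating described in this third example might be sketched as follows (illustrative only; the class, method names, and the per-logical-channel-group byte counts are invented for this sketch):

```python
class MacTransmitter:
    """Sketch of restricting configured grants to retransmissions while
    a first indication is in effect, and reporting buffer status (e.g.,
    in a long BSR) once the second indication is received."""

    def __init__(self):
        self.restricted = False
        self.sent = []  # record of control messages this sketch "sends"

    def on_first_indication(self):
        self.restricted = True

    def on_second_indication(self, buffered_bytes_per_lcg):
        self.restricted = False
        # Report buffer status for all logical channel groups with data.
        self.sent.append(("long-BSR", dict(buffered_bytes_per_lcg)))

    def use_grant(self, is_retransmission):
        # While restricted, configured grants may carry retransmissions
        # of MAC/RLC data but not initial transmissions of new data.
        if self.restricted and not is_retransmission:
            return False
        return True
```

The sketch captures the two-phase behavior: retransmissions remain allowed during the restricted period, and full grant usage plus a buffer status report follow the second indication.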
Figure 13 shows an exemplary data flow diagram corresponding to the IAB network illustrated in Figures 11-12. In particular, Figure 13 shows UE_l (1110) sending user data via a first network path comprising intermediate nodes IAB6 (1120, i.e., the access node for UE_l), IAB4 (1130), IAB2 (1140), and IAB1 (1150). For the sake of brevity, the destination donor CU and DU shown in Figures 11-12 are not shown in Figure 13.
In the scenario illustrated by Figure 13, IAB2 detects a failure of a link with upstream (e.g., parent) node IAB1. Subsequently, IAB2 can send a first indication of this link failure to downstream (e.g., child) node IAB4. The first indication can include, or be associated with, various information as described above. In addition, IAB2 can perform various first actions in response to the failure detection, e.g., any of the above-described operations at one or more protocol layers in IAB2. In turn, IAB4 can selectively forward the first indication - modified as needed - to downstream (e.g., child) node IAB6, which can selectively forward the first indication to its served UEs, including UE_l. Each of these intermediate nodes can also perform various first actions in response to receiving the first indication.
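The hop-by-hop propagation of the first indication in this scenario can be pictured with a small sketch (illustrative only; node names follow Figure 13):

```python
def propagate_chain(downstream_path, origin):
    """Sketch of the Figure 13 flow: the node detecting the failure sends
    the first indication to its child, and each node forwards it toward
    the next downstream node until the served UEs are reached.

    downstream_path: node names ordered downstream from the failure point.
    Returns the list of (sender, receiver) hops of the indication.
    """
    hops = []
    sender = origin
    for receiver in downstream_path:
        hops.append((sender, receiver))
        sender = receiver
    return hops
```

Applied to Figure 13, the indication travels IAB2 to IAB4, IAB4 to IAB6, and IAB6 to UE_l.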
Subsequently, IAB2 can detect that the backhaul link with IAB1 has been restored. Alternately, IAB2 can detect that a second network path has been established that includes IAB2 and its downstream nodes, but not IAB1. In either event, IAB2 can send a second indication related to the restoration of the first network path, or establishment of the second network path, to downstream node IAB4. The second indication can include, or be associated with, various information as described above. In addition, IAB2 can perform various second actions in response to the restoration or establishment, e.g., any of the above-described operations at one or more protocol layers in IAB2. In turn, IAB4 can selectively forward the second indication, modified as needed, to downstream node IAB6, which can selectively forward the second indication to its served UEs, including UE_1. Each of these intermediate nodes can also perform various second actions in response to receiving the second indication. These embodiments can be further illustrated with reference to Figures 14-15, which depict exemplary methods and/or procedures performed by intermediate IAB nodes. Put differently, various features of the operations described below correspond to various embodiments described above.
More specifically, Figure 14 illustrates an exemplary method and/or procedure for managing a link failure in an integrated access backhaul (IAB) network, according to various exemplary embodiments of the present disclosure. The exemplary method and/or procedure shown in Figure 14 can be performed by a network node (e.g., an intermediate IAB node) in a radio access network (RAN), such as shown in and/or described in relation to other figures herein. Furthermore, the exemplary method and/or procedure shown in Figure 14 can be complementary to other exemplary methods and/or procedures disclosed herein (e.g., Figure 15) such that they are capable of being used cooperatively to provide benefits, advantages, and/or solutions to problems described herein. Although the exemplary method and/or procedure in Figure 14 is illustrated by blocks in a particular order, this order is exemplary and the operations corresponding to the blocks can be performed in different orders than shown, and can be combined and/or divided into blocks and/or operations having different functionality than shown. Optional blocks and/or operations are indicated by dashed lines.
The exemplary method and/or procedure can include the operations of block 1410, where the network node can receive, from a first upstream node in the IAB network, a first indication of failure of a first backhaul link in a first network path that includes the intermediate node, the first upstream node, and a destination node for uplink (UL) data in the IAB network. For example, the destination node can be a donor DU and/or a donor CU. In some embodiments, the first indication can include a depth value that identifies a number of downstream hops in the IAB network for forwarding the first indication.
In some embodiments, the first indication can include one or more of the following: type of failure associated with the first backhaul link; identifiers of one or more nodes comprising the first network path; expected time of resolution of the failure of the first backhaul link; protocol layers affected by the failure; and node functions affected by the failure. In various embodiments, the identifiers of the one or more nodes can include identifiers of bearers associated with the one or more nodes, or adaptation layer addresses associated with the one or more nodes.
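As a non-normative illustration, the optional contents of the first indication listed above might be carried in a structure like the following. All field names here are assumptions for illustration, not part of any standardized message definition.

```python
# Illustrative container for the first indication's optional fields; the
# field names are assumptions, not taken from any 3GPP message definition.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LinkFailureIndication:
    failure_type: Optional[str] = None             # e.g., "radio-link-failure"
    node_identifiers: List[str] = field(default_factory=list)   # bearer IDs or
                                                   # adaptation-layer addresses
    expected_resolution_time_s: Optional[float] = None
    affected_layers: List[str] = field(default_factory=list)    # "PDCP", "RLC", ...
    affected_functions: List[str] = field(default_factory=list)
    depth: Optional[int] = None                    # downstream forwarding hops
```

Every field is optional, matching the "one or more of the following" formulation above; an intermediate node can act on whichever fields are present.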
The exemplary method and/or procedure can also include the operations of block 1420, where the network node can, in response to the first indication, perform one or more first actions with respect to transmission of UL data towards the first upstream node. In various embodiments, the one or more first actions can include various actions with respect to different protocol layers comprising the network node.
In some embodiments, the one or more first actions can include any of the following operations with respect to a packet data convergence protocol (PDCP) layer of the network node: stopping, or decreasing the rate of, assignment of sequence numbers (SNs) to PDCP service data units (SDUs) received from higher layers; stopping, or decreasing the rate of, creation of PDCP protocol data units (PDUs) for delivery to lower layers; stopping, or decreasing the rate of, delivery of PDCP PDUs to lower layers; stopping a discard timer associated with one or more PDCP SDUs that are ready for transmission; and stopping a reordering timer associated with one or more received PDCP PDUs.
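One of the PDCP first actions above, stopping SN assignment to newly received SDUs, can be sketched as a small state machine. This is a hedged illustration under assumed names; a matching resume method is included so the same sketch also covers the corresponding second action.

```python
# Illustrative PDCP transmitter state that pauses SN assignment on a first
# indication and resumes it on a second indication. All names (PdcpTx,
# submit_sdu, etc.) are assumptions for illustration, not 3GPP procedures.

class PdcpTx:
    def __init__(self):
        self.next_sn = 0
        self.paused = False
        self.pending = []            # (sn, sdu) pairs awaiting delivery

    def submit_sdu(self, sdu):
        """Assign an SN and queue the SDU, unless transmission is paused."""
        if self.paused:
            return None              # stop assigning SNs while link is down
        sn = self.next_sn
        self.next_sn += 1
        self.pending.append((sn, sdu))
        return sn

    def on_first_indication(self):
        self.paused = True           # also where discard/reordering timers
                                     # would be stopped

    def on_second_indication(self):
        self.paused = False          # resume SN assignment and delivery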
In some embodiments, the one or more first actions can include any of the following operations with respect to a radio link control (RLC) layer of the network node: stopping, or decreasing the rate of, assignment of SNs to RLC SDUs received from higher layers; stopping, or decreasing the rate of, creation of RLC PDUs for delivery to lower layers; stopping, or decreasing the rate of, delivery of RLC PDUs to lower layers; stopping a poll retransmission timer associated with one or more RLC SDUs that are ready for transmission; and stopping a reassembly timer associated with one or more received RLC PDUs.
In some embodiments, the one or more first actions can include any of the following operations with respect to a medium access control (MAC) layer of the network node: stopping, or decreasing the rate of, transmission of scheduling requests (SRs); stopping, or decreasing the usage of, previously configured resource grants; and using previously configured resource grants for retransmission of data but not for initial transmission of data.
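The MAC-layer first actions above can likewise be sketched as a small state machine: stop SRs and limit previously configured grants to retransmissions while the upstream link is failed. All names are assumptions for illustration; a resume method is included for symmetry with the corresponding second actions.

```python
# Illustrative MAC-layer behavior on a first indication (link failed):
# stop scheduling requests and restrict configured grants to retransmissions.
# All names are assumptions for illustration.

class MacTx:
    def __init__(self):
        self.link_failed = False

    def on_first_indication(self):
        self.link_failed = True       # upstream backhaul link reported down

    def on_second_indication(self):
        self.link_failed = False      # path restored or replaced; resume fully

    def may_send_sr(self):
        return not self.link_failed   # stop SR transmission during failure

    def may_use_grant(self, is_retransmission):
        # During failure, configured grants serve retransmissions only.
        return is_retransmission or not self.link_failed
```

This captures the distinction drawn above: during the failure, buffered data already assigned to HARQ processes can still be retransmitted, but no new data is launched toward the broken link.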
The exemplary method and/or procedure can also include the operations of block 1430, where the network node can, based on information associated with the first indication, selectively forward the first indication to one or more downstream nodes in the IAB network. In some embodiments, the one or more downstream nodes can comprise one or more intermediate nodes and one or more user equipments (UEs). In some embodiments, selectively forwarding the first indication can be based on the depth value. In such embodiments, the operations of block 1430 can include the operations of sub-block 1431, where if the depth value is non-zero, the network node can decrement the depth value and forward the first indication, including the decremented depth value, to the one or more downstream nodes. In addition, the operations of block 1430 can include the operations of sub-block 1432, where if the depth value is zero, the network node can refrain from forwarding the first indication. In some embodiments, the operations of block 1430 can also include the operations of sub-block 1433, where the network node can perform one of the following operations if the depth value is not included with the first indication: refraining from forwarding the first indication; forwarding the first indication; and selectively forwarding the first indication further based on a buffer occupancy (BO) value associated with UL data buffers of the intermediate node.
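The depth-based forwarding decision of sub-blocks 1431-1433 can be sketched as a single function. The representation of the indication as a dictionary, the function name, and the BO threshold are assumptions for illustration.

```python
# Hypothetical sketch of the depth-based selective forwarding in block 1430.
# The dict-based indication, function name, and BO threshold are illustrative.

def selectively_forward(indication, downstream_nodes, buffer_occupancy,
                        bo_threshold=0.8):
    """Return the list of nodes the (possibly modified) indication goes to."""
    depth = indication.get("depth")
    if depth is None:
        # Sub-block 1433: no depth value present. One policy is to forward
        # only when local UL buffers are filling up past a threshold.
        return downstream_nodes if buffer_occupancy >= bo_threshold else []
    if depth == 0:
        return []                       # sub-block 1432: stop propagation
    indication["depth"] = depth - 1     # sub-block 1431: decrement and forward
    return downstream_nodes
```

Each hop thus shrinks the depth value by one, so an indication sent with depth N reaches at most N further downstream hops before propagation stops.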
In some embodiments, the adaptation layer addresses included with the first indication can include a first address associated with the intermediate node. In such embodiments, the operations of block 1430 can also include the operations of sub-blocks 1434-1436, where the network node can modify the first indication by removing the first address; identify one or more downstream nodes associated with the other adaptation layer addresses comprising the first indication; and forward the modified first indication only to the identified downstream nodes.
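The address-based filtering of sub-blocks 1434-1436 can be sketched as follows. The list/dict representations and the function name are assumptions for illustration.

```python
# Sketch of sub-blocks 1434-1436: remove this node's own adaptation-layer
# address from the indication and forward the modified indication only to
# downstream nodes whose addresses remain. All names are illustrative.

def filter_and_forward(indication_addrs, own_addr, downstream):
    """Return (modified address list, subset of downstream nodes to notify).

    indication_addrs: adaptation-layer addresses carried by the indication
    own_addr:         this intermediate node's own address (removed, 1434)
    downstream:       mapping of downstream node name -> its address
    """
    remaining = [a for a in indication_addrs if a != own_addr]   # sub-block 1434
    targets = [node for node, addr in downstream.items()         # sub-block 1435
               if addr in remaining]
    return remaining, targets                                    # sub-block 1436
```

In this way the indication prunes itself as it propagates: each node strips its own address and only the branches that are actually affected by the failed path keep receiving it.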
In some embodiments, the exemplary method and/or procedure can also include the operations of block 1440, where the network node can receive a second indication concerning a path in the IAB network. In some embodiments, the second indication can be received from the first upstream node and can indicate that the first backhaul link in the first network path has been restored. In some embodiments, the second indication can include a resource grant from the first upstream node. In some embodiments, the second indication can be received from a second upstream node and can indicate the establishment of a second network path that includes the intermediate node, the second upstream node in the IAB network, and the destination node.
In some embodiments, the exemplary method and/or procedure can also include the operations of block 1450, where the network node can, in response to the second indication, perform one or more second actions with respect to transmission of UL data towards the upstream node. In various embodiments, the one or more second actions can include various actions with respect to different protocol layers comprising the network node.
In some embodiments, the one or more second actions can include any of the following with respect to a PDCP layer of the network node: resuming, or increasing the rate of, assignment of SNs to PDCP SDUs received from higher layers; resuming, or increasing the rate of, creation of PDCP PDUs for delivery to lower layers; resuming, or increasing the rate of, delivery of PDCP PDUs to lower layers; restarting a discard timer associated with the one or more PDCP SDUs that are ready for transmission; and restarting a reordering timer associated with the one or more received PDCP PDUs. In some embodiments, the one or more second actions can include any of the following with respect to an RLC layer of the network node: resuming, or increasing the rate of, assignment of SNs to RLC SDUs received from higher layers; resuming, or increasing the rate of, creation of RLC PDUs for delivery to lower layers; resuming, or increasing the rate of, delivery of RLC PDUs to lower layers; restarting a poll retransmission timer associated with one or more RLC SDUs that are ready for transmission; and restarting a reassembly timer associated with one or more received RLC PDUs.
In some embodiments, the one or more second actions can include any of the following with respect to a MAC layer of the network node: resuming, or increasing the rate of, transmission of SRs; resuming, or increasing the usage of, previously configured resource grants; and resuming use of previously configured resource grants for both initial transmission of data and retransmission of data.
In some embodiments, the exemplary method and/or procedure can also include the operations of block 1460, where the network node can, based on information associated with the second indication, selectively forward the second indication to the one or more downstream nodes. For example, the selective forwarding of the second indication can include substantially similar operations, or be based on substantially similar information, as the selective forwarding of the first indication described above (block 1430 including sub-blocks 1431-1436).
In addition, Figure 15 illustrates another exemplary method and/or procedure for managing a link failure in an integrated access backhaul (IAB) network, according to various exemplary embodiments of the present disclosure. The exemplary method and/or procedure shown in Figure 15 can be performed by a network node (e.g., an intermediate node immediately downstream of the failed link) in a radio access network (RAN), such as shown in and/or described in relation to other figures herein. Furthermore, the exemplary method and/or procedure shown in Figure 15 can be complementary to other exemplary methods and/or procedures disclosed herein (e.g., Figure 14) such that they are capable of being used cooperatively to provide benefits, advantages, and/or solutions to problems described herein. Although the exemplary method and/or procedure in Figure 15 is illustrated by blocks in a particular order, this order is exemplary and the operations corresponding to the blocks can be performed in different orders than shown, and can be combined and/or divided into blocks and/or operations having different functionality than shown. Optional blocks and/or operations are indicated by dashed lines. The exemplary method and/or procedure can include the operations of block 1510, where the network node can detect a failure of a first backhaul link between the intermediate node and a first upstream node in the IAB network. The first backhaul link can be part of a first network path that includes the intermediate node, the first upstream node, a plurality of downstream nodes, and a destination node for UL data. For example, the destination node can be a donor DU and/or a donor CU.
The exemplary method and/or procedure can also include the operations of block 1520, where the network node can send, to the first downstream node, a first indication of the failure of the first backhaul link. This operation can correspond to the (downstream) intermediate node receiving the first indication, such as in operation 1410 described above.
In some embodiments, the first indication can include a depth value that identifies a number of downstream hops in the IAB network for forwarding the first indication. In some embodiments, if the first indication does not include the depth value, the exclusion of the depth value can indicate that the first downstream node should perform one of the following operations: refraining from forwarding the first indication; forwarding the first indication; and selectively forwarding the first indication further based on a buffer occupancy (BO) value associated with a UL data buffer of the first downstream node.
In some embodiments, the first indication can include one or more of the following: type of failure associated with the first backhaul link; identifiers of one or more nodes included in the first network path; expected time of resolution of the failure of the first backhaul link; protocol layers affected by the failure; and node functions affected by the failure. In various embodiments, the identifiers of the one or more nodes can include identifiers of bearers associated with the one or more nodes, or adaptation layer addresses associated with the one or more nodes.
The exemplary method and/or procedure can include the operations of block 1530, where the network node can perform one or more first actions with respect to transmission of UL data towards the first upstream node. In various embodiments, the one or more first actions can include various actions with respect to different protocol layers comprising the network node, e.g., PDCP, RLC, and MAC layers. The one or more first actions performed in block 1530 can include any of the exemplary protocol-layer operations described herein, including any of the first actions described above in relation to block 1420 of Figure 14.
In some embodiments, the exemplary method and/or procedure can also include the operations of block 1540, where the network node can determine that a second network path has been established. The second network path can include the intermediate node, the plurality of downstream nodes, and the destination node. In such embodiments, the exemplary method and/or procedure can also include the operations of block 1550, where the network node can send, to the first downstream node, a second indication concerning the second path. This operation can correspond to the (downstream) intermediate node receiving the second indication, such as in operation 1440 described above.
In some embodiments, the second network path can include the first network path, and the second indication can indicate that the first backhaul link (i.e., of the first network path) has been restored. In some embodiments, the second indication can include a resource grant to the first downstream node. In other embodiments, the second network path includes a second upstream node but not the first upstream node, and the second indication can indicate that the second network path has been established to replace the first network path.
In some embodiments, the exemplary method and/or procedure can include the operations of block 1560, where the network node can perform one or more second actions with respect to transmission of UL data towards the first upstream node. In various embodiments, the one or more second actions can include various actions with respect to different protocol layers comprising the network node, e.g., PDCP, RLC, and MAC layers. The one or more second actions performed in block 1560 can be different than the first actions performed in block 1530, and can include any of the second actions described above in relation to block 1450 of Figure 14.
Although the subject matter described herein can be implemented in any appropriate type of system using any suitable components, the embodiments disclosed herein are described in relation to a wireless network, such as the example wireless network illustrated in Figure 16. For simplicity, the wireless network of Figure 16 only depicts network 1606, network nodes 1660 and 1660b, and WDs 1610, 1610b, and 1610c. In practice, a wireless network can further include any additional elements suitable to support communication between wireless devices or between a wireless device and another communication device, such as a landline telephone, a service provider, or any other network node or end device. Of the illustrated components, network node 1660 and wireless device (WD) 1610 are depicted with additional detail. The wireless network can provide communication and other types of services to one or more wireless devices to facilitate the wireless devices’ access to and/or use of the services provided by, or via, the wireless network.
The wireless network can comprise and/or interface with any type of communication, telecommunication, data, cellular, and/or radio network or other similar type of system. In some embodiments, the wireless network can be configured to operate according to specific standards or other types of predefined rules or procedures. Thus, particular embodiments of the wireless network can implement communication standards, such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, or 5G standards; wireless local area network (WLAN) standards, such as the IEEE 802.11 standards; and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave and/or ZigBee standards.
Network 1606 can comprise one or more backhaul networks, core networks, IP networks, public switched telephone networks (PSTNs), packet data networks, optical networks, wide-area networks (WANs), local area networks (LANs), wireless local area networks (WLANs), wired networks, wireless networks, metropolitan area networks, and other networks to enable communication between devices.
Network node 1660 and WD 1610 comprise various components described in more detail below. These components work together in order to provide network node and/or wireless device functionality, such as providing wireless connections in a wireless network. In different embodiments, the wireless network can comprise any number of wired or wireless networks, network nodes, base stations, controllers, wireless devices, relay stations, and/or any other components or systems that can facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
Examples of network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)). Base stations can be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and can then also be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station can be a relay node or a relay donor node controlling a relay. A network node can also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station can also be referred to as nodes in a distributed antenna system (DAS).
Further examples of network nodes include multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), core network nodes (e.g., MSCs, MMEs), O&M nodes, OSS nodes, SON nodes, positioning nodes (e.g., E-SMLCs), and/or MDTs. As another example, a network node can be a virtual network node as described in more detail below.
In Figure 16, network node 1660 includes processing circuitry 1670, device readable medium 1680, interface 1690, auxiliary equipment 1684, power source 1686, power circuitry 1687, and antenna 1662. Although network node 1660 illustrated in the example wireless network of Figure 16 can represent a device that includes the illustrated combination of hardware components, other embodiments can comprise network nodes with different combinations of components. It is to be understood that a network node comprises any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods and/or procedures disclosed herein. Moreover, while the components of network node 1660 are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, a network node can comprise multiple different physical components that make up a single illustrated component (e.g., device readable medium 1680 can comprise multiple separate hard drives as well as multiple RAM modules).
Similarly, network node 1660 can be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which can each have their own respective components. In certain scenarios in which network node 1660 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components can be shared among several network nodes. For example, a single RNC can control multiple NodeBs. In such a scenario, each unique NodeB and RNC pair can, in some instances, be considered a single separate network node. In some embodiments, network node 1660 can be configured to support multiple radio access technologies (RATs). In such embodiments, some components can be duplicated (e.g., separate device readable medium 1680 for the different RATs) and some components can be reused (e.g., the same antenna 1662 can be shared by the RATs). Network node 1660 can also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 1660, such as, for example, GSM, WCDMA, LTE, NR, WiFi, or Bluetooth wireless technologies. These wireless technologies can be integrated into the same or different chip or set of chips and other components within network node 1660.
Processing circuitry 1670 can be configured to perform any determining, calculating, or similar operations (e.g, certain obtaining operations) described herein as being provided by a network node. These operations performed by processing circuitry 1670 can include processing information obtained by processing circuitry 1670 by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
Processing circuitry 1670 can comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node 1660 components, such as device readable medium 1680, network node 1660 functionality. For example, processing circuitry 1670 can execute instructions stored in device readable medium 1680 or in memory within processing circuitry 1670. Such functionality can include providing any of the various wireless features, functions, or benefits discussed herein. In some embodiments, processing circuitry 1670 can include a system on a chip (SOC).
In some embodiments, processing circuitry 1670 can include one or more of radio frequency (RF) transceiver circuitry 1672 and baseband processing circuitry 1674. In some embodiments, radio frequency (RF) transceiver circuitry 1672 and baseband processing circuitry 1674 can be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 1672 and baseband processing circuitry 1674 can be on the same chip or set of chips, boards, or units. In certain embodiments, some or all of the functionality described herein as being provided by a network node, base station, eNB or other such network device can be performed by processing circuitry 1670 executing instructions stored on device readable medium 1680 or memory within processing circuitry 1670. In alternative embodiments, some or all of the functionality can be provided by processing circuitry 1670 without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner. In any of those embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry 1670 can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry 1670 alone or to other components of network node 1660 but are enjoyed by network node 1660 as a whole, and/or by end users and the wireless network generally.
Device readable medium 1680 can comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that can be used by processing circuitry 1670. Device readable medium 1680 can store any suitable instructions, data or information, including a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry 1670 and, utilized by network node 1660. Device readable medium 1680 can be used to store any calculations made by processing circuitry 1670 and/or any data received via interface 1690. In some embodiments, processing circuitry 1670 and device readable medium 1680 can be considered to be integrated.
Interface 1690 is used in the wired or wireless communication of signalling and/or data between network node 1660, network 1606, and/or WDs 1610. As illustrated, interface 1690 comprises port(s)/terminal(s) 1694 to send and receive data, for example to and from network 1606 over a wired connection. Interface 1690 also includes radio front end circuitry 1692 that can be coupled to, or in certain embodiments a part of, antenna 1662. Radio front end circuitry 1692 comprises filters 1698 and amplifiers 1696. Radio front end circuitry 1692 can be connected to antenna 1662 and processing circuitry 1670. Radio front end circuitry can be configured to condition signals communicated between antenna 1662 and processing circuitry 1670. Radio front end circuitry 1692 can receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry 1692 can convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 1698 and/or amplifiers 1696. The radio signal can then be transmitted via antenna 1662. Similarly, when receiving data, antenna 1662 can collect radio signals which are then converted into digital data by radio front end circuitry 1692. The digital data can be passed to processing circuitry 1670. In other embodiments, the interface can comprise different components and/or different combinations of components.
In certain alternative embodiments, network node 1660 may not include separate radio front end circuitry 1692; instead, processing circuitry 1670 can comprise radio front end circuitry and can be connected to antenna 1662 without separate radio front end circuitry 1692. Similarly, in some embodiments, all or some of RF transceiver circuitry 1672 can be considered a part of interface 1690. In still other embodiments, interface 1690 can include one or more ports or terminals 1694, radio front end circuitry 1692, and RF transceiver circuitry 1672, as part of a radio unit (not shown), and interface 1690 can communicate with baseband processing circuitry 1674, which is part of a digital unit (not shown).
Antenna 1662 can include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. Antenna 1662 can be coupled to radio front end circuitry 1692 and can be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In some embodiments, antenna 1662 can comprise one or more omni-directional, sector or panel antennas operable to transmit/receive radio signals between, for example, 2 GHz and 66 GHz. An omni-directional antenna can be used to transmit/receive radio signals in any direction, a sector antenna can be used to transmit/receive radio signals from devices within a particular area, and a panel antenna can be a line of sight antenna used to transmit/receive radio signals in a relatively straight line. In some instances, the use of more than one antenna can be referred to as MIMO. In certain embodiments, antenna 1662 can be separate from network node 1660 and can be connectable to network node 1660 through an interface or port.
Antenna 1662, interface 1690, and/or processing circuitry 1670 can be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by a network node. Any information, data and/or signals can be received from a wireless device, another network node and/or any other network equipment. Similarly, antenna 1662, interface 1690, and/or processing circuitry 1670 can be configured to perform any transmitting operations described herein as being performed by a network node. Any information, data and/or signals can be transmitted to a wireless device, another network node and/or any other network equipment.
Power circuitry 1687 can comprise, or be coupled to, power management circuitry and can be configured to supply the components of network node 1660 with power for performing the functionality described herein. Power circuitry 1687 can receive power from power source 1686. Power source 1686 and/or power circuitry 1687 can be configured to provide power to the various components of network node 1660 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). Power source 1686 can either be included in, or external to, power circuitry 1687 and/or network node 1660. For example, network node 1660 can be connectable to an external power source (e.g., an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry 1687. As a further example, power source 1686 can comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry 1687. The battery can provide backup power should the external power source fail. Other types of power sources, such as photovoltaic devices, can also be used.
Alternative embodiments of network node 1660 can include additional components beyond those shown in Figure 16 that can be responsible for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, network node 1660 can include user interface equipment to allow and/or facilitate input of information into network node 1660 and to allow and/or facilitate output of information from network node 1660. This can allow and/or facilitate a user to perform diagnostic, maintenance, repair, and other administrative functions for network node 1660.
In some embodiments, a wireless device (WD, e.g., WD 1610) can be configured to transmit and/or receive information without direct human interaction. For instance, a WD can be designed to transmit information to a network on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the network. Examples of a WD include, but are not limited to, smart phones, mobile phones, cell phones, voice over IP (VoIP) phones, wireless local loop phones, desktop computers, personal digital assistants (PDAs), wireless cameras, gaming consoles or devices, music storage devices, playback appliances, wearable devices, wireless endpoints, mobile stations, tablets, laptops, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart devices, wireless customer-premise equipment (CPE), machine-type communication (MTC) devices, Internet-of-Things (IoT) devices, vehicle-mounted wireless terminal devices, etc.
A WD can support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-everything (V2X), and can in this case be referred to as a D2D communication device. As yet another specific example, in an Internet of Things (IoT) scenario, a WD can represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another WD and/or a network node. The WD can in this case be a machine-to-machine (M2M) device, which can in a 3GPP context be referred to as an MTC device. As one particular example, the WD can be a UE implementing the 3GPP narrowband Internet of Things (NB-IoT) standard. Particular examples of such machines or devices are sensors, metering devices such as power meters, industrial machinery, or home or personal appliances (e.g., refrigerators, televisions, etc.), personal wearables (e.g., watches, fitness trackers, etc.). In other scenarios, a WD can represent a vehicle or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation. A WD as described above can represent the endpoint of a wireless connection, in which case the device can be referred to as a wireless terminal. Furthermore, a WD as described above can be mobile, in which case it can also be referred to as a mobile device or a mobile terminal.
As illustrated, wireless device 1610 includes antenna 1611, interface 1614, processing circuitry 1620, device readable medium 1630, user interface equipment 1632, auxiliary equipment 1634, power source 1636 and power circuitry 1637. WD 1610 can include multiple sets of one or more of the illustrated components for different wireless technologies supported by WD 1610, such as, for example, GSM, WCDMA, LTE, NR, WiFi, WiMAX, or Bluetooth wireless technologies, just to mention a few. These wireless technologies can be integrated into the same or different chips or set of chips as other components within WD 1610.
Antenna 1611 can include one or more antennas or antenna arrays, configured to send and/or receive wireless signals, and is connected to interface 1614. In certain alternative embodiments, antenna 1611 can be separate from WD 1610 and be connectable to WD 1610 through an interface or port. Antenna 1611, interface 1614, and/or processing circuitry 1620 can be configured to perform any receiving or transmitting operations described herein as being performed by a WD. Any information, data and/or signals can be received from a network node and/or another WD. In some embodiments, radio front end circuitry and/or antenna 1611 can be considered an interface.
As illustrated, interface 1614 comprises radio front end circuitry 1612 and antenna 1611. Radio front end circuitry 1612 comprises one or more filters 1618 and amplifiers 1616. Radio front end circuitry 1612 is connected to antenna 1611 and processing circuitry 1620, and can be configured to condition signals communicated between antenna 1611 and processing circuitry 1620. Radio front end circuitry 1612 can be coupled to or a part of antenna 1611. In some embodiments, WD 1610 may not include separate radio front end circuitry 1612; rather, processing circuitry 1620 can comprise radio front end circuitry and can be connected to antenna 1611. Similarly, in some embodiments, some or all of RF transceiver circuitry 1622 can be considered a part of interface 1614. Radio front end circuitry 1612 can receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry 1612 can convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 1618 and/or amplifiers 1616. The radio signal can then be transmitted via antenna 1611. Similarly, when receiving data, antenna 1611 can collect radio signals which are then converted into digital data by radio front end circuitry 1612. The digital data can be passed to processing circuitry 1620. In other embodiments, the interface can comprise different components and/or different combinations of components.
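As an illustration of the filter-and-amplify conditioning chain described above, the following sketch (not part of the patent; all function names and parameter values are hypothetical) applies a simple moving-average filter followed by a fixed-gain stage to a stream of digital samples, mirroring only the order of operations attributed to filters 1618 and amplifiers 1616:

```python
# Illustrative sketch only: a toy filter/amplifier chain.
# A real front end operates on RF hardware with channel- and
# bandwidth-specific filter designs; nothing here is from the patent.

def moving_average_filter(samples, window=3):
    """Smooth the sample stream with a sliding window (a toy filter stage)."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        chunk = samples[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def amplify(samples, gain=2.0):
    """Apply a fixed gain, as an amplifier stage would."""
    return [s * gain for s in samples]

def condition(samples):
    """Filter, then amplify: one possible conditioning chain."""
    return amplify(moving_average_filter(samples))
```

The chain order (filter before amplifier) is one possible arrangement; the description permits any combination of the two stages.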
Processing circuitry 1620 can comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software, and/or encoded logic operable to provide, either alone or in conjunction with other WD 1610 components, such as device readable medium 1630, WD 1610 functionality. Such functionality can include providing any of the various wireless features or benefits discussed herein. For example, processing circuitry 1620 can execute instructions stored in device readable medium 1630 or in memory within processing circuitry 1620 to provide the functionality disclosed herein.
As illustrated, processing circuitry 1620 includes one or more of RF transceiver circuitry 1622, baseband processing circuitry 1624, and application processing circuitry 1626. In other embodiments, the processing circuitry can comprise different components and/or different combinations of components. In certain embodiments, processing circuitry 1620 of WD 1610 can comprise a SOC. In some embodiments, RF transceiver circuitry 1622, baseband processing circuitry 1624, and application processing circuitry 1626 can be on separate chips or sets of chips. In alternative embodiments, part or all of baseband processing circuitry 1624 and application processing circuitry 1626 can be combined into one chip or set of chips, and RF transceiver circuitry 1622 can be on a separate chip or set of chips. In still alternative embodiments, part or all of RF transceiver circuitry 1622 and baseband processing circuitry 1624 can be on the same chip or set of chips, and application processing circuitry 1626 can be on a separate chip or set of chips. In yet other alternative embodiments, part or all of RF transceiver circuitry 1622, baseband processing circuitry 1624, and application processing circuitry 1626 can be combined in the same chip or set of chips. In some embodiments, RF transceiver circuitry 1622 can be a part of interface 1614. RF transceiver circuitry 1622 can condition RF signals for processing circuitry 1620.
In certain embodiments, some or all of the functionality described herein as being performed by a WD can be provided by processing circuitry 1620 executing instructions stored on device readable medium 1630, which in certain embodiments can be a computer-readable storage medium. In alternative embodiments, some or all of the functionality can be provided by processing circuitry 1620 without executing instructions stored on a separate or discrete device readable storage medium, such as in a hard-wired manner. In any of those particular embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry 1620 can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry 1620 alone or to other components of WD 1610, but are enjoyed by WD 1610 as a whole, and/or by end users and the wireless network generally.
Processing circuitry 1620 can be configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being performed by a WD. These operations, as performed by processing circuitry 1620, can include processing information obtained by processing circuitry 1620 by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored by WD 1610, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
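The obtain/convert/compare/determine pattern described above can be sketched as follows. This is an illustrative example only; the threshold value, units, and function names are hypothetical and not taken from the patent:

```python
# Illustrative sketch of: obtain -> convert -> compare to stored
# information -> make a determination. All values are hypothetical.
import math

STORED_THRESHOLD_DBM = -90.0  # stands in for "information stored by WD 1610"

def mw_to_dbm(power_mw):
    """Convert obtained information (power in mW) into other information (dBm)."""
    return 10.0 * math.log10(power_mw)

def determine_link_ok(measured_power_mw):
    """Compare the converted measurement to the stored threshold and,
    as a result of the processing, make a determination."""
    return mw_to_dbm(measured_power_mw) >= STORED_THRESHOLD_DBM
```

For example, a measurement of 1e-8 mW converts to -80 dBm, which meets the sketched threshold, whereas 1e-10 mW (-100 dBm) does not.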
Device readable medium 1630 can be operable to store a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry 1620. Device readable medium 1630 can include computer memory (e.g., Random Access Memory (RAM) or Read Only Memory (ROM)), mass storage media (e.g., a hard disk), removable storage media (e.g., a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer executable memory devices that store information, data, and/or instructions that can be used by processing circuitry 1620. In some embodiments, processing circuitry 1620 and device readable medium 1630 can be considered to be integrated.
User interface equipment 1632 can include components that allow and/or facilitate a human user to interact with WD 1610. Such interaction can be of many forms, such as visual, audial, tactile, etc. User interface equipment 1632 can be operable to produce output to the user and to allow and/or facilitate the user to provide input to WD 1610. The type of interaction can vary depending on the type of user interface equipment 1632 installed in WD 1610. For example, if WD 1610 is a smart phone, the interaction can be via a touch screen; if WD 1610 is a smart meter, the interaction can be through a screen that provides usage (e.g., the number of gallons used) or a speaker that provides an audible alert (e.g., if smoke is detected). User interface equipment 1632 can include input interfaces, devices and circuits, and output interfaces, devices and circuits. User interface equipment 1632 can be configured to allow and/or facilitate input of information into WD 1610, and is connected to processing circuitry 1620 to allow and/or facilitate processing circuitry 1620 to process the input information. User interface equipment 1632 can include, for example, a microphone, a proximity or other sensor, keys/buttons, a touch display, one or more cameras, a USB port, or other input circuitry. User interface equipment 1632 is also configured to allow and/or facilitate output of information from WD 1610, and to allow and/or facilitate processing circuitry 1620 to output information from WD 1610. User interface equipment 1632 can include, for example, a speaker, a display, vibrating circuitry, a USB port, a headphone interface, or other output circuitry. Using one or more input and output interfaces, devices, and circuits, of user interface equipment 1632, WD 1610 can communicate with end users and/or the wireless network, and allow and/or facilitate them to benefit from the functionality described herein.
Auxiliary equipment 1634 is operable to provide more specific functionality which may not be generally performed by WDs. This can comprise specialized sensors for doing measurements for various purposes, interfaces for additional types of communication such as wired communications etc. The inclusion and type of components of auxiliary equipment 1634 can vary depending on the embodiment and/or scenario.
Power source 1636 can, in some embodiments, be in the form of a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic devices or power cells, can also be used. WD 1610 can further comprise power circuitry 1637 for delivering power from power source 1636 to the various parts of WD 1610 which need power from power source 1636 to carry out any functionality described or indicated herein. Power circuitry 1637 can in certain embodiments comprise power management circuitry. Power circuitry 1637 can additionally or alternatively be operable to receive power from an external power source; in which case WD 1610 can be connectable to the external power source (such as an electricity outlet) via input circuitry or an interface such as an electrical power cable. Power circuitry 1637 can also in certain embodiments be operable to deliver power from an external power source to power source 1636. This can be, for example, for the charging of power source 1636. Power circuitry 1637 can perform any converting or other modification to the power from power source 1636 to make it suitable for supply to the respective components of WD 1610.
Figure 17 illustrates one embodiment of a UE in accordance with various aspects described herein. As used herein, a user equipment or UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device. Instead, a UE can represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller). Alternatively, a UE can represent a device that is not intended for sale to, or operation by, an end user but which can be associated with or operated for the benefit of a user (e.g., a smart power meter). UE 1700 can be any UE identified by the 3rd Generation Partnership Project (3GPP), including a NB-IoT UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE. UE 1700, as illustrated in Figure 17, is one example of a WD configured for communication in accordance with one or more communication standards promulgated by the 3rd Generation Partnership Project (3GPP), such as 3GPP's GSM, UMTS, LTE, and/or 5G standards. As mentioned previously, the terms WD and UE can be used interchangeably. Accordingly, although Figure 17 illustrates a UE, the components discussed herein are equally applicable to a WD, and vice-versa.
In Figure 17, UE 1700 includes processing circuitry 1701 that is operatively coupled to input/output interface 1705, radio frequency (RF) interface 1709, network connection interface 1711, memory 1715 including random access memory (RAM) 1717, read-only memory (ROM) 1719, and storage medium 1721 or the like, communication subsystem 1731, power source 1713, and/or any other component, or any combination thereof. Storage medium 1721 includes operating system 1723, application program 1725, and data 1727. In other embodiments, storage medium 1721 can include other similar types of information. Certain UEs can utilize all of the components shown in Figure 17, or only a subset of the components. The level of integration between the components can vary from one UE to another UE. Further, certain UEs can contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
In Figure 17, processing circuitry 1701 can be configured to process computer instructions and data. Processing circuitry 1701 can be configured to implement any sequential state machine operative to execute machine instructions stored as machine-readable computer programs in the memory, such as one or more hardware-implemented state machines (e.g., in discrete logic, FPGA, ASIC, etc.); programmable logic together with appropriate firmware; one or more stored-program, general-purpose processors, such as a microprocessor or Digital Signal Processor (DSP), together with appropriate software; or any combination of the above. For example, the processing circuitry 1701 can include two central processing units (CPUs). Data can be information in a form suitable for use by a computer.
In the depicted embodiment, input/output interface 1705 can be configured to provide a communication interface to an input device, output device, or input and output device. UE 1700 can be configured to use an output device via input/output interface 1705. An output device can use the same type of interface port as an input device. For example, a USB port can be used to provide input to and output from UE 1700. The output device can be a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof. UE 1700 can be configured to use an input device via input/output interface 1705 to allow and/or facilitate a user to capture information into UE 1700. The input device can include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like. The presence-sensitive display can include a capacitive or resistive touch sensor to sense input from a user. A sensor can be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, another like sensor, or any combination thereof. For example, the input device can be an accelerometer, a magnetometer, a digital camera, a microphone, and an optical sensor.
In Figure 17, RF interface 1709 can be configured to provide a communication interface to RF components such as a transmitter, a receiver, and an antenna. Network connection interface 1711 can be configured to provide a communication interface to network 1743a. Network 1743a can encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof. For example, network 1743a can comprise a Wi-Fi network. Network connection interface 1711 can be configured to include a receiver and a transmitter interface used to communicate with one or more other devices over a communication network according to one or more communication protocols, such as Ethernet, TCP/IP, SONET, ATM, or the like. Network connection interface 1711 can implement receiver and transmitter functionality appropriate to the communication network links (e.g., optical, electrical, and the like). The transmitter and receiver functions can share circuit components, software or firmware, or alternatively can be implemented separately.
RAM 1717 can be configured to interface via bus 1702 to processing circuitry 1701 to provide storage or caching of data or computer instructions during the execution of software programs such as the operating system, application programs, and device drivers. ROM 1719 can be configured to provide computer instructions or data to processing circuitry 1701. For example, ROM 1719 can be configured to store invariant low-level system code or data for basic system functions such as basic input and output (I/O), startup, or reception of keystrokes from a keyboard that are stored in a non-volatile memory. Storage medium 1721 can be configured to include memory such as RAM, ROM, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, floppy disks, hard disks, removable cartridges, or flash drives. In one example, storage medium 1721 can be configured to include operating system 1723, application program 1725 such as a web browser application, a widget or gadget engine or another application, and data file 1727. Storage medium 1721 can store, for use by UE 1700, any of a variety of various operating systems or combinations of operating systems.
Storage medium 1721 can be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), floppy disk drive, flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as a subscriber identity module or a removable user identity (SIM/RUIM) module, other memory, or any combination thereof. Storage medium 1721 can allow and/or facilitate UE 1700 to access computer-executable instructions, application programs or the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data. An article of manufacture, such as one utilizing a communication system can be tangibly embodied in storage medium 1721, which can comprise a device readable medium.
In Figure 17, processing circuitry 1701 can be configured to communicate with network 1743b using communication subsystem 1731. Network 1743a and network 1743b can be the same network or networks or different networks. Communication subsystem 1731 can be configured to include one or more transceivers used to communicate with network 1743b. For example, communication subsystem 1731 can be configured to include one or more transceivers used to communicate with one or more remote transceivers of another device capable of wireless communication such as another WD, UE, or base station of a radio access network (RAN) according to one or more communication protocols, such as IEEE 802.11, CDMA, WCDMA, GSM, LTE, UTRAN, WiMax, or the like. Each transceiver can include transmitter 1733 and/or receiver 1735 to implement transmitter or receiver functionality, respectively, appropriate to the RAN links (e.g., frequency allocations and the like). Further, transmitter 1733 and receiver 1735 of each transceiver can share circuit components, software or firmware, or alternatively can be implemented separately. In the illustrated embodiment, the communication functions of communication subsystem 1731 can include data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof. For example, communication subsystem 1731 can include cellular communication, Wi-Fi communication, Bluetooth communication, and GPS communication. Network 1743b can encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof.
For example, network 1743b can be a cellular network, a Wi-Fi network, and/or a near-field network. Power source 1713 can be configured to provide alternating current (AC) or direct current (DC) power to components of UE 1700.
The features, benefits and/or functions described herein can be implemented in one of the components of UE 1700 or partitioned across multiple components of UE 1700. Further, the features, benefits, and/or functions described herein can be implemented in any combination of hardware, software or firmware. In one example, communication subsystem 1731 can be configured to include any of the components described herein. Further, processing circuitry 1701 can be configured to communicate with any of such components over bus 1702. In another example, any of such components can be represented by program instructions stored in memory that when executed by processing circuitry 1701 perform the corresponding functions described herein. In another example, the functionality of any of such components can be partitioned between processing circuitry 1701 and communication subsystem 1731. In another example, the non-computationally intensive functions of any of such components can be implemented in software or firmware and the computationally intensive functions can be implemented in hardware.
Figure 18 is a schematic block diagram illustrating a virtualization environment 1800 in which functions implemented by some embodiments can be virtualized. In the present context, virtualizing means creating virtual versions of apparatuses or devices which can include virtualizing hardware platforms, storage devices and networking resources. As used herein, virtualization can be applied to a node (e.g., a virtualized base station or a virtualized radio access node) or to a device (e.g., a UE, a wireless device or any other type of communication device) or components thereof and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components (e.g., via one or more applications, components, functions, virtual machines or containers executing on one or more physical processing nodes in one or more networks).
In some embodiments, some or all of the functions described herein can be implemented as virtual components executed by one or more virtual machines implemented in one or more virtual environments 1800 hosted by one or more of hardware nodes 1830. Further, in embodiments in which the virtual node is not a radio access node or does not require radio connectivity (e.g., a core network node), the network node can be entirely virtualized.
The functions can be implemented by one or more applications 1820 (which can alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) operative to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein. Applications 1820 are run in virtualization environment 1800 which provides hardware 1830 comprising processing circuitry 1860 and memory 1890. Memory 1890 contains instructions 1895 executable by processing circuitry 1860 whereby application 1820 is operative to provide one or more of the features, benefits, and/or functions disclosed herein.
Virtualization environment 1800 comprises general-purpose or special-purpose network hardware devices 1830 comprising a set of one or more processors or processing circuitry 1860, which can be commercial off-the-shelf (COTS) processors, dedicated Application Specific Integrated Circuits (ASICs), or any other type of processing circuitry including digital or analog hardware components or special purpose processors. Each hardware device can comprise memory 1890-1 which can be non-persistent memory for temporarily storing instructions 1895 or software executed by processing circuitry 1860. Each hardware device can comprise one or more network interface controllers (NICs) 1870, also known as network interface cards, which include physical network interface 1880. Each hardware device can also include non-transitory, persistent, machine-readable storage media 1890-2 having stored therein software 1895 and/or instructions executable by processing circuitry 1860. Software 1895 can include any type of software including software for instantiating one or more virtualization layers 1850 (also referred to as hypervisors), software to execute virtual machines 1840 as well as software allowing it to execute functions, features and/or benefits described in relation with some embodiments described herein.
Virtual machines 1840 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and can be run by a corresponding virtualization layer 1850 or hypervisor. Different embodiments of the instance of virtual appliance 1820 can be implemented on one or more of virtual machines 1840, and the implementations can be made in different ways.
During operation, processing circuitry 1860 executes software 1895 to instantiate the hypervisor or virtualization layer 1850, which can sometimes be referred to as a virtual machine monitor (VMM). Virtualization layer 1850 can present a virtual operating platform that appears like networking hardware to virtual machine 1840.
As shown in Figure 18, hardware 1830 can be a standalone network node with generic or specific components. Hardware 1830 can comprise antenna 18225 and can implement some functions via virtualization. Alternatively, hardware 1830 can be part of a larger cluster of hardware (e.g., in a data center or customer premises equipment (CPE)) where many hardware nodes work together and are managed via management and orchestration (MANO) 18100, which, among other things, oversees lifecycle management of applications 1820.
Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV can be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers and customer premises equipment.
In the context of NFV, virtual machine 1840 can be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of virtual machines 1840, and that part of hardware 1830 that executes that virtual machine, be it hardware dedicated to that virtual machine and/or hardware shared by that virtual machine with others of the virtual machines 1840, forms a separate virtual network element (VNE).
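The NFV terminology in this passage can be summarised in a small data model, offered as an illustrative sketch only (all class and field names are hypothetical and not from the patent): each virtual machine runs applications, and each virtual machine together with the hardware node executing it forms one virtual network element (VNE).

```python
# Illustrative sketch only: a toy model of the NFV terms used above.
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    vm_id: str
    apps: list = field(default_factory=list)  # stands in for applications 1820

@dataclass
class HardwareNode:
    node_id: str
    vms: list = field(default_factory=list)   # hardware 1830 hosting VMs 1840

    def virtual_network_elements(self):
        """Each VM paired with its hosting hardware is one VNE."""
        return [(self.node_id, vm.vm_id) for vm in self.vms]
```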
Still in the context of NFV, a Virtual Network Function (VNF) is responsible for handling specific network functions that run in one or more virtual machines 1840 on top of hardware networking infrastructure 1830 and corresponds to application 1820 in Figure 18.
In some embodiments, one or more radio units 18200 that each include one or more transmitters 18220 and one or more receivers 18210 can be coupled to one or more antennas 18225. Radio units 18200 can communicate directly with hardware nodes 1830 via one or more appropriate network interfaces and can be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station.
In some embodiments, some signalling can be effected with the use of control system 18230, which can alternatively be used for communication between the hardware nodes 1830 and radio units 18200.

With reference to Figure 19, in accordance with an embodiment, a communication system includes telecommunication network 1910, such as a 3GPP-type cellular network, which comprises access network 1911, such as a radio access network, and core network 1914. Access network 1911 comprises a plurality of base stations 1912a, 1912b, 1912c, such as NBs, eNBs, gNBs or other types of wireless access points, each defining a corresponding coverage area 1913a, 1913b, 1913c. Each base station 1912a, 1912b, 1912c is connectable to core network 1914 over a wired or wireless connection 1915. A first UE 1991 located in coverage area 1913c can be configured to wirelessly connect to, or be paged by, the corresponding base station 1912c. A second UE 1992 in coverage area 1913a is wirelessly connectable to the corresponding base station 1912a. While a plurality of UEs 1991, 1992 are illustrated in this example, the disclosed embodiments are equally applicable to a situation where a sole UE is in the coverage area or where a sole UE is connecting to the corresponding base station.
Telecommunication network 1910 is itself connected to host computer 1930, which can be embodied in the hardware and/or software of a standalone server, a cloud-implemented server, a distributed server or as processing resources in a server farm. Host computer 1930 can be under the ownership or control of a service provider or can be operated by the service provider or on behalf of the service provider. Connections 1921 and 1922 between telecommunication network 1910 and host computer 1930 can extend directly from core network 1914 to host computer 1930 or can go via an optional intermediate network 1920. Intermediate network 1920 can be one of, or a combination of more than one of, a public, private or hosted network; intermediate network 1920, if any, can be a backbone network or the Internet; in particular, intermediate network 1920 can comprise two or more sub-networks (not shown).
The communication system of Figure 19 as a whole enables connectivity between the connected UEs 1991, 1992 and host computer 1930. The connectivity can be described as an over-the-top (OTT) connection 1950. Host computer 1930 and the connected UEs 1991, 1992 are configured to communicate data and/or signaling via OTT connection 1950, using access network 1911, core network 1914, any intermediate network 1920 and possible further infrastructure (not shown) as intermediaries. OTT connection 1950 can be transparent in the sense that the participating communication devices through which OTT connection 1950 passes are unaware of routing of uplink and downlink communications. For example, base station 1912 need not be informed about the past routing of an incoming downlink communication with data originating from host computer 1930 to be forwarded (e.g., handed over) to a connected UE 1991. Similarly, base station 1912 need not be aware of the future routing of an outgoing uplink communication originating from the UE 1991 towards the host computer 1930.
Example implementations, in accordance with an embodiment, of the UE, base station and host computer discussed in the preceding paragraphs will now be described with reference to Figure 20. In communication system 2000, host computer 2010 comprises hardware 2015 including communication interface 2016 configured to set up and maintain a wired or wireless connection with an interface of a different communication device of communication system 2000. Host computer 2010 further comprises processing circuitry 2018, which can have storage and/or processing capabilities. In particular, processing circuitry 2018 can comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. Host computer 2010 further comprises software 2011, which is stored in or accessible by host computer 2010 and executable by processing circuitry 2018. Software 2011 includes host application 2012. Host application 2012 can be operable to provide a service to a remote user, such as UE 2030 connecting via OTT connection 2050 terminating at UE 2030 and host computer 2010. In providing the service to the remote user, host application 2012 can provide user data which is transmitted using OTT connection 2050.
Communication system 2000 can also include base station 2020 provided in a telecommunication system and comprising hardware 2025 enabling it to communicate with host computer 2010 and with UE 2030. Hardware 2025 can include communication interface
2026 for setting up and maintaining a wired or wireless connection with an interface of a different communication device of communication system 2000, as well as radio interface
2027 for setting up and maintaining at least wireless connection 2070 with UE 2030 located in a coverage area (not shown in Figure 20) served by base station 2020. Communication interface 2026 can be configured to facilitate connection 2060 to host computer 2010. Connection 2060 can be direct, or it can pass through a core network (not shown in Figure 20) of the telecommunication system and/or through one or more intermediate networks outside the telecommunication system. In the embodiment shown, hardware 2025 of base station 2020 can also include processing circuitry 2028, which can comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. Base station 2020 further has software 2021 stored internally or accessible via an external connection.
Communication system 2000 can also include UE 2030 already referred to. Its hardware 2035 can include radio interface 2037 configured to set up and maintain wireless connection 2070 with a base station serving a coverage area in which UE 2030 is currently located. Hardware 2035 of UE 2030 can also include processing circuitry 2038, which can comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. UE 2030 further comprises software 2031, which is stored in or accessible by UE 2030 and executable by processing circuitry 2038. Software 2031 includes client application 2032. Client application 2032 can be operable to provide a service to a human or non-human user via UE 2030, with the support of host computer 2010. In host computer 2010, an executing host application 2012 can communicate with the executing client application 2032 via OTT connection 2050 terminating at UE 2030 and host computer 2010. In providing the service to the user, client application 2032 can receive request data from host application 2012 and provide user data in response to the request data. OTT connection 2050 can transfer both the request data and the user data. Client application 2032 can interact with the user to generate the user data that it provides.
It is noted that host computer 2010, base station 2020 and UE 2030 illustrated in
Figure 20 can be similar or identical to host computer 1930, one of base stations 1912a, 1912b, 1912c and one of UEs 1991, 1992 of Figure 19, respectively. This is to say, the inner workings of these entities can be as shown in Figure 20 and independently, the surrounding network topology can be that of Figure 19.
In Figure 20, OTT connection 2050 has been drawn abstractly to illustrate the communication between host computer 2010 and UE 2030 via base station 2020, without explicit reference to any intermediary devices and the precise routing of messages via these devices. Network infrastructure can determine the routing, which it can be configured to hide from UE 2030 or from the service provider operating host computer 2010, or both. While OTT connection 2050 is active, the network infrastructure can further take decisions by which it dynamically changes the routing ( e.g ., on the basis of load balancing consideration or reconfiguration of the network).
Wireless connection 2070 between UE 2030 and base station 2020 is in accordance with the teachings of the embodiments described throughout this disclosure. One or more of the various embodiments improve the performance of OTT services provided to UE 2030 using OTT connection 2050, in which wireless connection 2070 forms the last segment. More precisely, the exemplary embodiments disclosed herein can improve flexibility for the network to monitor end-to-end quality-of-service (QoS) of data flows, including their corresponding radio bearers, associated with data sessions between a user equipment (UE) and another entity, such as an OTT data application or service external to the 5G network. These and other advantages can facilitate more timely design, implementation, and deployment of 5G/NR solutions. Furthermore, such embodiments can facilitate flexible and timely control of data session QoS, which can lead to improvements in capacity, throughput, latency, etc. that are envisioned by 5G/NR and important for the growth of OTT services.
A measurement procedure can be provided for the purpose of monitoring data rate, latency and other network operational aspects on which the one or more embodiments improve. There can further be an optional network functionality for reconfiguring OTT connection 2050 between host computer 2010 and UE 2030, in response to variations in the measurement results. The measurement procedure and/or the network functionality for reconfiguring OTT connection 2050 can be implemented in software 2011 and hardware 2015 of host computer 2010 or in software 2031 and hardware 2035 of UE 2030, or both. In embodiments, sensors (not shown) can be deployed in or in association with communication devices through which OTT connection 2050 passes; the sensors can participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software 2011, 2031 can compute or estimate the monitored quantities. The reconfiguring of OTT connection 2050 can include message format, retransmission settings, preferred routing, etc.; the reconfiguring need not affect base station 2020, and it can be unknown or imperceptible to base station 2020. Such procedures and functionalities can be known and practiced in the art. In certain embodiments, measurements can involve proprietary UE signaling facilitating host computer 2010's measurements of throughput, propagation times, latency and the like. The measurements can be implemented in that software 2011 and 2031 causes messages to be transmitted, in particular empty or 'dummy' messages, using OTT connection 2050 while it monitors propagation times, errors etc.
Figure 21 is a flowchart illustrating an exemplary method and/or procedure implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which, in some exemplary embodiments, can be those described with reference to Figures 19 and 20. For simplicity of the present disclosure, only drawing references to Figure 21 will be included in this section. In step 2110, the host computer provides user data. In substep 2111 (which can be optional) of step 2110, the host computer provides the user data by executing a host application. In step 2120, the host computer initiates a transmission carrying the user data to the UE. In step 2130 (which can be optional), the base station transmits to the UE the user data which was carried in the transmission that the host computer initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In step 2140 (which can also be optional), the UE executes a client application associated with the host application executed by the host computer.
Figure 22 is a flowchart illustrating an exemplary method and/or procedure implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which can be those described with reference to Figures 19 and 20. For simplicity of the present disclosure, only drawing references to Figure 22 will be included in this section. In step 2210 of the method, the host computer provides user data. In an optional substep (not shown), the host computer provides the user data by executing a host application. In step 2220, the host computer initiates a transmission carrying the user data to the UE. The transmission can pass via the base station, in accordance with the teachings of the embodiments described throughout this disclosure. In step 2230 (which can be optional), the UE receives the user data carried in the transmission.
Figure 23 is a flowchart illustrating an exemplary method and/or procedure implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which can be those described with reference to Figures 19 and 20. For simplicity of the present disclosure, only drawing references to Figure 23 will be included in this section. In step 2310 (which can be optional), the UE receives input data provided by the host computer. Additionally, or alternatively, in step 2320, the UE provides user data. In substep 2321 (which can be optional) of step 2320, the UE provides the user data by executing a client application. In substep 2311 (which can be optional) of step 2310, the UE executes a client application which provides the user data in reaction to the received input data provided by the host computer. In providing the user data, the executed client application can further consider user input received from the user. Regardless of the specific manner in which the user data was provided, the UE initiates, in substep 2330 (which can be optional), transmission of the user data to the host computer. In step 2340 of the method, the host computer receives the user data transmitted from the UE, in accordance with the teachings of the embodiments described throughout this disclosure.
Figure 24 is a flowchart illustrating an exemplary method and/or procedure implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which can be those described with reference to Figures 19 and 20. For simplicity of the present disclosure, only drawing references to Figure 24 will be included in this section. In step 2410 (which can be optional), in accordance with the teachings of the embodiments described throughout this disclosure, the base station receives user data from the UE. In step 2420 (which can be optional), the base station initiates transmission of the received user data to the host computer. In step 2430 (which can be optional), the host computer receives the user data carried in the transmission initiated by the base station.
The foregoing merely illustrates the principles of the disclosure. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements, and procedures that, although not explicitly shown or described herein, embody the principles of the disclosure and can be thus within the spirit and scope of the disclosure. Various exemplary embodiments can be used together with one another, as well as interchangeably therewith, as should be understood by those having ordinary skill in the art.
The term unit, as used herein, can have conventional meaning in the field of electronics, electrical devices and/or electronic devices and can include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid-state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those that are described herein.
Any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units. These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as Read Only Memory (ROM), Random Access Memory (RAM), cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein. In some implementations, the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure. As described herein, device and/or apparatus can be represented by a semiconductor chip, a chipset, or a (hardware) module comprising such chip or chipset; this, however, does not exclude the possibility that a functionality of a device or apparatus, instead of being hardware implemented, be implemented as a software module such as a computer program or a computer program product comprising executable software code portions for execution or being run on a processor. Furthermore, functionality of a device or apparatus can be implemented by any combination of hardware and software. A device or apparatus can also be regarded as an assembly of multiple devices and/or apparatuses, whether functionally in cooperation with or independently of each other.
Moreover, devices and apparatuses can be implemented in a distributed fashion throughout a system, so long as the functionality of the device or apparatus is preserved. Such and similar principles are considered as known to a skilled person.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In addition, certain terms used in the present disclosure, including the specification, drawings and exemplary embodiments thereof, can be used synonymously in certain instances, including, but not limited to, e.g., data and information. It should be understood that, while these and/or other words that can be synonymous to one another can be used synonymously herein, there can be instances when such words are intended not to be used synonymously. Further, to the extent that the prior art knowledge has not been explicitly incorporated by reference herein above, it is explicitly incorporated herein in its entirety. All publications referenced are incorporated herein by reference in their entireties.
Exemplary embodiments of the present disclosure include, but are not limited to, the following enumerated examples: 1. A method for managing a link failure in an integrated access backhaul (IAB) network, the method being performed by an intermediate node in the IAB network and comprising: receiving, from a first upstream node in the IAB network, a first indication of failure of a first backhaul link in a first network path that includes the intermediate node, the first upstream node, and a destination node for uplink (UL) data in the IAB network;
in response to the first indication, performing one or more first actions with respect to transmission of UL data towards the first upstream node; and based on information associated with the first indication, selectively forwarding the first indication to one or more downstream nodes in the IAB network.
2. The method of embodiment 1, further comprising:
receiving a second indication concerning a path in the IAB network;
in response to the second indication, performing one or more second actions with respect to transmission of UL data towards the upstream node; and based on information associated with the second indication, selectively forwarding the second indication to the one or more downstream nodes. 3. The method of embodiment 2, wherein the second indication is received from the first upstream node and indicates that the first backhaul link in the first network path has been restored.
4. The method of any of embodiments 2-3, wherein the second indication comprises a resource grant from the first upstream node.
5. The method of embodiment 2, wherein the second indication is received from a second upstream node and indicates the establishment of a second network path that includes the intermediate node, the second upstream node in the IAB network, and the destination node.
6. The method of any of embodiments 1-5, wherein selectively forwarding the first indication is based on a depth value, included with the first indication, that identifies a number of downstream hops in the IAB network for forwarding the first indication.
7. The method of embodiment 6, wherein selectively forwarding the first indication comprises: if the depth value is non-zero, decrementing the depth value and forwarding the first indication, including the decremented depth value, to the one or more downstream nodes; and
if the depth value is zero, refraining from forwarding the first indication.
8. The method of any of embodiments 6-7, wherein selectively forwarding the first indication further comprises performing one of the following operations if the depth value is not included with the first indication:
refraining from forwarding the first indication;
forwarding the first indication; and
selectively forwarding the first indication further based on a buffer occupancy (BO) value associated with UL data buffers of the intermediate node.
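The depth-based selective forwarding of embodiments 6-8 can be sketched as follows. This is an illustrative sketch only; all names (`handle_failure_indication`, the `depth` and dictionary-based indication representation, the `bo_threshold` default) are assumptions for illustration and not part of the embodiments.

```python
# Hypothetical sketch of embodiments 6-8: forward a backhaul-failure
# indication downstream based on a depth (hop-count) value; when no
# depth value is included, fall back to a local policy such as a
# buffer-occupancy (BO) threshold.

def handle_failure_indication(indication, downstream_nodes, buffer_occupancy,
                              bo_threshold=0.8, forward=None):
    """Decide whether to forward a failure indication downstream.

    indication: dict that may carry a 'depth' hop counter.
    buffer_occupancy: fraction (0.0-1.0) of this node's UL buffers in use.
    forward: callable(node, indication) used to send the indication on.
    Returns True if the indication was forwarded, False otherwise.
    """
    depth = indication.get("depth")
    if depth is not None:
        if depth > 0:
            # Non-zero depth: decrement and propagate one hop further.
            relayed = dict(indication, depth=depth - 1)
            for node in downstream_nodes:
                forward(node, relayed)
            return True
        # Zero depth: refrain from forwarding.
        return False
    # No depth value included: one fallback (per embodiment 8) is to
    # forward only when this node's UL buffers are near overflow.
    if buffer_occupancy >= bo_threshold:
        for node in downstream_nodes:
            forward(node, dict(indication))
        return True
    return False
```

Under this sketch, an indication sent with depth N reaches at most N further downstream hops before propagation stops.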
9. The method of any of embodiments 1-8, wherein the first indication further comprises one or more of the following:
type of failure associated with the first backhaul link;
identifiers of one or more nodes comprising the first network path;
expected time of resolution of the failure of the first backhaul link;
protocol layers affected by the failure; and
node functions affected by the failure.
10. The method of embodiment 9, where the identifiers of the one or more nodes comprise identifiers of bearers associated with the one or more nodes. 11. The method of embodiment 9, where the identifiers of the one or more nodes comprise adaptation layer addresses associated with the one or more nodes.
12. The method of embodiment 11, wherein:
the adaptation layer addresses include a first address associated with the intermediate node; and
selectively forwarding the first indication further comprises:
modifying the first indication by removing the first address;
identifying one or more downstream nodes associated with the other adaptation layer addresses included in the first indication; and forwarding the modified first indication only to the one or more downstream nodes.
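The address-stripping relay of embodiment 12 can be sketched as below. All names (`relay_with_address_stripping`, the `addresses` field, the address-to-node mapping) are illustrative assumptions, not part of the embodiments.

```python
# Hypothetical sketch of embodiment 12: an intermediate node removes its
# own adaptation-layer address from the failure indication, then forwards
# the modified indication only to the downstream nodes associated with the
# remaining addresses.

def relay_with_address_stripping(indication, own_address, address_to_node):
    """Return (modified indication, list of target downstream nodes)."""
    addresses = indication.get("addresses", [])
    # Modify the indication by removing this node's own address.
    remaining = [a for a in addresses if a != own_address]
    modified = dict(indication, addresses=remaining)
    # Identify the downstream nodes for the other addresses; forward
    # the modified indication only to those nodes.
    targets = [address_to_node[a] for a in remaining if a in address_to_node]
    return modified, targets
```

In this sketch the indication thus shrinks at each hop, naturally limiting its propagation to the nodes that were explicitly addressed.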
13. The method of any of embodiments 1-12, wherein the one or more first actions include any of the following operations with respect to a packet data convergence protocol (PDCP) layer of the intermediate node:
stopping, or decreasing the rate of, assignment of sequence numbers (SNs) to
PDCP service data units (SDUs) received from higher layers; stopping, or decreasing the rate of, creation of PDCP protocol data units (PDUs) for delivery to lower layers;
stopping, or decreasing the rate of, delivery of PDCP PDUs to lower layers;
stopping a discard timer associated with one or more PDCP SDUs that are ready for transmission; and
stopping a reordering timer associated with one or more received PDCP PDUs.
14. The method of any of embodiments 1-12, wherein the one or more first actions include any of the following operations with respect to a radio link control (RLC) layer of the intermediate node:
stopping, or decreasing the rate of, assignment of sequence numbers (SNs) to RLC service data units (SDUs) received from higher layers;
stopping, or decreasing the rate of, creation of RLC protocol data units (PDUs) for delivery to lower layers;
stopping, or decreasing the rate of, delivery of RLC PDUs to lower layers;
stopping a poll retransmission timer associated with one or more RLC SDUs that are ready for transmission; and
stopping a reassembly timer associated with one or more received RLC PDUs.
15. The method of any of embodiments 1-12, wherein the one or more first actions include any of the following operations with respect to a medium access control (MAC) layer of the intermediate node:
stopping, or decreasing the rate of, transmission of scheduling requests (SRs);
stopping, or decreasing the usage of, previously configured resource grants; and using previously configured resource grants for retransmission of data but not for initial transmission of data. 16. The method of any of embodiments 2-15, wherein the one or more second actions include any of the following with respect to a packet data convergence protocol (PDCP) layer of the intermediate node:
resuming, or increasing the rate of, assignment of sequence numbers (SNs) to
PDCP service data units (SDUs) received from higher layers; resuming, or increasing the rate of, creation of PDCP protocol data units (PDUs) for delivery to lower layers;
resuming, or increasing the rate of, delivery of PDCP PDUs to lower layers;
restarting a discard timer associated with the one or more PDCP SDUs that are ready for transmission; and
restarting a reordering timer associated with the one or more received PDCP PDUs.
17. The method of any of embodiments 2-15, wherein the one or more second actions include any of the following operations with respect to a radio link control (RLC) layer of the intermediate node:
resuming, or increasing the rate of, assignment of sequence numbers (SNs) to RLC service data units (SDUs) received from higher layers;
resuming, or increasing the rate of, creation of RLC protocol data units (PDUs) for delivery to lower layers;
resuming, or increasing the rate of, delivery of RLC PDUs to lower layers;
restarting a poll retransmission timer associated with one or more RLC SDUs that are ready for transmission; and
restarting a reassembly timer associated with one or more received RLC PDUs.
18. The method of any of embodiments 2-15, wherein the one or more second actions include any of the following operations with respect to a medium access control (MAC) layer of the intermediate node:
resuming, or increasing the rate of, transmission of scheduling requests (SRs); resuming, or increasing the usage of, previously configured resource grants; and resuming use of previously configured resource grants for both initial transmission of data and retransmission of data. 19. The method of any of embodiments 1-18, wherein the one or more downstream nodes comprise one or more intermediate nodes and one or more user equipments (UEs).
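The complementary first and second actions of embodiments 13-18 (suspend on a failure indication, resume on a recovery indication) can be sketched as a minimal flow-control state machine. The class and method names are illustrative assumptions; real PDCP/RLC/MAC entities would additionally manage sequence numbers, timers and grants as the embodiments describe.

```python
# Minimal sketch of the pause/resume behavior in embodiments 13-18: on a
# failure indication the node stops delivering PDUs toward the upstream
# node and buffers them instead; on a recovery indication it resumes and
# releases the held-back PDUs for transmission.

class UplinkFlowControl:
    def __init__(self):
        self.suspended = False
        self.pending = []  # PDUs held back while the backhaul is down

    def on_failure_indication(self):
        # First actions: stop delivery of PDUs to lower layers (timers
        # such as discard/poll-retransmission would also be stopped here).
        self.suspended = True

    def on_recovery_indication(self):
        # Second actions: resume delivery and release buffered PDUs.
        self.suspended = False
        flushed, self.pending = self.pending, []
        return flushed  # PDUs now eligible for (re)transmission

    def submit_pdu(self, pdu):
        if self.suspended:
            self.pending.append(pdu)  # buffer instead of transmitting
            return None
        return pdu  # delivered to lower layers immediately
```

A rate-decreasing (rather than fully stopping) variant, as the embodiments also allow, would throttle `submit_pdu` instead of suspending it outright.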
20. A method for managing a link failure in an integrated access backhaul (IAB) network, the method being performed by an intermediate node in the IAB network and comprising: detecting a failure of a first backhaul link between the intermediate node and a first upstream node in the IAB network, wherein the first backhaul link is part of a first network path that includes the intermediate node, the first upstream node, a plurality of downstream nodes, and a destination node for uplink (UL) data;
sending, to the first downstream node, a first indication of the failure; and performing one or more first actions with respect to transmission of UL data towards the first upstream node.
21. The method of embodiment 20, further comprising:
determining that a second network path has been established, wherein the second network path comprises the intermediate node, the plurality of downstream nodes, and the destination node;
sending, to the first downstream node, a second indication concerning the second path.
22. The method of embodiment 21, wherein:
the second network path comprises the first network path; and
the second indication indicates that the first backhaul link in the first network path has been restored.
23. The method of any of embodiments 21-22, wherein the second indication comprises a resource grant to the first downstream node.
24. The method of embodiment 21, wherein:
the second network path further comprises a second upstream node but not the first upstream node; and
the second indication indicates that the second network path has been established to replace the first network path. 25. The method of any of embodiments 20-24, wherein the first indication further includes a depth value that identifies a number of downstream hops in the IAB network for forwarding the first indication.
26. The method of embodiment 25, wherein the first indication does not include the depth value, and wherein the non-inclusion of the depth value is used to indicate that the plurality of downstream nodes should perform one of the following operations:
refraining from forwarding the first indication;
forwarding the first indication; and
selectively forwarding the first indication further based on a buffer occupancy (BO) value associated with UL data buffers of the respective nodes.
27. The method of any of embodiments 21-26, wherein the first indication further comprises one or more of the following:
type of failure associated with the first backhaul link;
identifiers of the plurality of downstream nodes;
expected time of resolution of the failure of the first backhaul link;
protocol layers affected by the failure; and
node functions affected by the failure.
28. The method of embodiment 27, where the identifiers of the one or more nodes comprise identifiers of bearers associated with the one or more nodes. 29. The method of embodiment 27, where the identifiers of the one or more nodes comprise adaptation layer addresses associated with the one or more nodes.
30. A node in an integrated access backhaul (IAB) network configured to manage a link failure in the IAB network, the node comprising:
radio transceiver circuitry; and
processing circuitry operably coupled to the radio transceiver circuitry and
configured to perform operations corresponding to any of the methods of embodiments 1-29; and
power supply circuitry configured to supply power to the node. 31. A non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry of a node in an integrated access backhaul (IAB) network, configure the node to perform operations corresponding to any of the methods of embodiments 1-29.
32. A communication system including a host computer, the host computer comprising: a. processing circuitry configured to provide user data; and
b. a communication interface configured to forward the user data to a cellular network for transmission to a user equipment (UE) through a core network (CN) and a radio access network (RAN);
wherein:
c. the RAN comprises first and second nodes of an integrated access backhaul (IAB) network;
d. the first node comprises a communication transceiver and processing
circuitry configured to perform operations corresponding to any of the methods of embodiments 1-19; and
e. the second node comprises a communication transceiver and processing circuitry configured to perform operations corresponding to any of the methods of embodiments 20-29.
33. The communication system of the previous embodiment, further comprising the UE configured to communicate with the IAB node.
34. The communication system of any of the previous two embodiments, wherein:
f. the processing circuitry of the host computer is configured to execute a host application, thereby providing the user data; and
g. the UE comprises processing circuitry configured to execute a client
application associated with the host application.
35. A method implemented in a communication system including a host computer, a cellular network, and a user equipment (UE), the method comprising:
a. at the host computer, providing user data; b. at the host computer, initiating a transmission carrying the user data to the UE via a cellular network comprising an integrated access backhaul (IAB) network; and c. operations, performed by first and second nodes of the IAB network,
corresponding to any of the methods of embodiments 1-29.
36. The method of the previous embodiment, wherein the data message comprises the user data, and further comprising transmitting the user data by the access node.
37. The method of any of the previous two embodiments, wherein the user data is
provided at the host computer by executing a host application, the method further comprising, at the UE, executing a client application associated with the host application.
38. A communication system including a host computer comprising a communication interface configured to receive user data originating from a transmission from a user equipment (UE) to a base station via an integrated access backhaul (IAB) radio network, wherein:
a. the IAB network comprises first and second nodes;
b. the first node comprises a communication interface and processing circuitry configured to perform operations corresponding to any of the methods of embodiments 1-19; and
c. the second node comprises a communication interface and processing
circuitry configured to perform operations corresponding to any of the methods of embodiments 20-29.
39. The communication system of the previous embodiment, further including the UE, wherein the UE is configured to communicate with the IAB node.
40. The communication system of any of the previous two embodiments, wherein:
a. the processing circuitry of the host computer is configured to execute a host application;
b. the UE is configured to execute a client application associated with the host application, thereby providing the user data to be received by the host computer.

Claims

1. A method for managing a link failure in an integrated access backhaul, IAB, network, the method being performed by an intermediate node in the IAB network and comprising:
receiving (1410), from a first upstream node in the IAB network, a first indication of failure of a first backhaul link in a first network path that includes the intermediate node, the first upstream node, and a destination node for uplink, UL, data in the IAB network;
in response to the first indication, performing (1420) one or more first actions with respect to transmission of UL data towards the first upstream node; and
based on information associated with the first indication, selectively forwarding (1430) the first indication to one or more downstream nodes in the IAB network.
2. The method of claim 1, further comprising:
receiving (1440) a second indication concerning a path in the IAB network;
in response to the second indication, performing (1450) one or more second actions with respect to transmission of UL data towards the first upstream node; and
based on information associated with the second indication, selectively forwarding
(1460) the second indication to the one or more downstream nodes.
3. The method of claim 2, wherein the second indication is received from the first upstream node and indicates that the first backhaul link has been restored.
4. The method of any of claims 2-3, wherein the second indication includes a resource grant from the first upstream node.
5. The method of claim 2, wherein the second indication is received from a second upstream node and indicates the establishment of a second network path that includes the intermediate node, the second upstream node, and the destination node.
6. The method of any of claims 2-5, wherein the one or more second actions include any of the following with respect to a packet data convergence protocol, PDCP, layer of the intermediate node:
resuming, or increasing the rate of, assignment of sequence numbers, SNs, to PDCP service data units, SDUs, received from higher layers;
resuming, or increasing the rate of, creation of PDCP protocol data units, PDUs, for delivery to lower layers;
resuming, or increasing the rate of, delivery of PDCP PDUs to lower layers;
restarting a discard timer associated with the one or more PDCP SDUs that are ready for transmission; and
restarting a reordering timer associated with the one or more received PDCP PDUs.
7. The method of any of claims 2-5, wherein the one or more second actions include any of the following operations with respect to a radio link control, RLC, layer of the intermediate node:
resuming, or increasing the rate of, assignment of sequence numbers, SNs, to RLC service data units, SDUs, received from higher layers;
resuming, or increasing the rate of, creation of RLC protocol data units, PDUs, for delivery to lower layers;
resuming, or increasing the rate of, delivery of RLC PDUs to lower layers;
restarting a poll retransmission timer associated with one or more RLC SDUs that are ready for transmission; and
restarting a reassembly timer associated with one or more received RLC PDUs.
8. The method of any of claims 2-5, wherein the one or more second actions include any of the following operations with respect to a medium access control, MAC, layer of the intermediate node:
resuming, or increasing the rate of, transmission of scheduling requests, SRs;
resuming, or increasing the usage of, previously configured resource grants; and
resuming use of previously configured resource grants for both initial transmission of data and retransmission of data.
9. The method of any of claims 1-8, wherein:
the first indication conditionally includes a depth value; and
when the depth value is included in the first indication, the depth value identifies a number of downstream hops in the IAB network for forwarding the first indication.
10. The method of claim 9, wherein selectively forwarding (1430) the first indication comprises:
if the first indication includes a non-zero depth value, decrementing (1431) the depth value and forwarding the first indication, including the decremented depth value, to the one or more downstream nodes; and
if the first indication includes a depth value of zero, refraining (1432) from
forwarding the first indication.
11. The method of any of claims 9-10, wherein selectively forwarding (1430) the first indication further comprises performing (1433) one of the following operations if the depth value is excluded from the first indication:
refraining from forwarding the first indication;
forwarding the first indication; or
selectively forwarding the first indication further based on a buffer occupancy, BO, value associated with UL data buffers of the intermediate node.
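For illustration only (not part of the claims), the depth-value forwarding rule of claims 9-11 behaves like a hop-count TTL: a non-zero depth is decremented and forwarded, a zero depth stops propagation, and an absent depth falls back to local policy. All names, and the buffer-occupancy threshold, are assumptions chosen for the sketch, not taken from the specification.

```python
def forward_failure_indication(depth, buffer_occupancy, bo_threshold=0.8):
    """Decide whether an intermediate node forwards a failure indication.

    depth: remaining downstream hops (int), or None if the depth value
           was excluded from the indication.
    buffer_occupancy: fraction [0, 1] of the node's UL data buffer in use.
    Returns (should_forward, depth_to_send).
    """
    if depth is not None:
        if depth > 0:
            # Claim 10: decrement the depth value and forward.
            return True, depth - 1
        # Claim 10: depth value of zero -> refrain from forwarding.
        return False, None
    # Claim 11: depth excluded -> here, one of the allowed fallbacks,
    # forwarding based on a buffer-occupancy (BO) heuristic.
    return buffer_occupancy >= bo_threshold, None
```

A node deep in the topology would thus see the indication only if the originating node set a sufficiently large depth, or if intermediate buffers are filling up.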
12. The method of any of claims 1-11, wherein the first indication further comprises one or more of the following:
type of failure associated with the first backhaul link;
identifiers of one or more nodes included in the first network path;
expected time of resolution of the failure of the first backhaul link;
protocol layers affected by the failure; and
node functions affected by the failure.
13. The method of claim 12, wherein the identifiers of the one or more nodes included in the first network path comprise one of the following:
identifiers of bearers associated with the one or more nodes; or
adaptation layer addresses associated with the one or more nodes.
14. The method of claim 13, wherein:
the adaptation layer addresses include a first address associated with the
intermediate node; and
selectively forwarding (1430) the first indication further comprises:
modifying (1434) the first indication by removing the first address;
identifying (1435) one or more downstream nodes associated with the other adaptation layer addresses included in the first indication; and
forwarding (1436) the modified first indication only to the identified downstream nodes.
15. The method of any of claims 1-14, wherein the one or more first actions include any of the following operations with respect to a packet data convergence protocol, PDCP, layer of the intermediate node:
stopping, or decreasing the rate of, assignment of sequence numbers, SNs, to PDCP service data units, SDUs, received from higher layers;
stopping, or decreasing the rate of, creation of PDCP protocol data units, PDUs, for delivery to lower layers;
stopping, or decreasing the rate of, delivery of PDCP PDUs to lower layers;
stopping a discard timer associated with one or more PDCP SDUs that are ready for transmission; and
stopping a reordering timer associated with one or more received PDCP PDUs.
16. The method of any of claims 1-14, wherein the one or more first actions include any of the following operations with respect to a radio link control, RLC, layer of the intermediate node:
stopping, or decreasing the rate of, assignment of sequence numbers, SNs, to RLC service data units, SDUs, received from higher layers;
stopping, or decreasing the rate of, creation of RLC protocol data units, PDUs, for delivery to lower layers;
stopping, or decreasing the rate of, delivery of RLC PDUs to lower layers;
stopping a poll retransmission timer associated with one or more RLC SDUs that are ready for transmission; and
stopping a reassembly timer associated with one or more received RLC PDUs.
17. The method of any of claims 1-14, wherein the one or more first actions include any of the following operations with respect to a medium access control, MAC, layer of the intermediate node:
stopping, or decreasing the rate of, transmission of scheduling requests, SRs;
stopping, or decreasing the usage of, previously configured resource grants; and
using previously configured resource grants for retransmission of data but not for initial transmission of data.
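For illustration only (names assumed, not from the specification), the MAC-layer reactions of claims 17 and 8 form a simple pause/resume pair: on a failure indication the node stops scheduling requests and restricts configured grants to retransmissions; on a recovery indication it resumes normal operation.

```python
class MacFlowControl:
    """Sketch of the MAC-layer first/second actions (claims 17 and 8)."""

    def __init__(self):
        self.sr_enabled = True            # scheduling requests allowed
        self.grants_for_initial_tx = True  # grants usable for new data

    def on_failure_indication(self):
        # Claim 17: stop SR transmission; use previously configured
        # grants for retransmission of data but not initial transmission.
        self.sr_enabled = False
        self.grants_for_initial_tx = False

    def on_recovery_indication(self):
        # Claim 8: resume SRs and resume use of configured grants for
        # both initial transmissions and retransmissions.
        self.sr_enabled = True
        self.grants_for_initial_tx = True

    def may_transmit(self, is_retransmission):
        """Whether a configured grant may carry this transmission."""
        return self.grants_for_initial_tx or is_retransmission
```

The effect is that in-flight HARQ/ARQ retransmissions can complete over whatever resources remain, while new UL data is held back until the path is restored or replaced.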
18. A method for managing a link failure in an integrated access backhaul, IAB, network, the method being performed by an intermediate node in the IAB network and comprising:
detecting (1510) a failure of a first backhaul link between the intermediate node and a first upstream node in the IAB network, wherein the first backhaul link is part of a first network path that includes the intermediate node, the first upstream node, a first downstream node, and a destination node for uplink,
UL, data;
sending (1520), to the first downstream node, a first indication of the failure; and
performing (1530) one or more first actions with respect to transmission of UL data towards the first upstream node.
19. The method of claim 18, further comprising:
determining (1540) that a second network path has been established, wherein the second network path includes the intermediate node, the first downstream node, and the destination node;
sending (1550), to the first downstream node, a second indication concerning the second path; and
performing (1560) one or more second actions with respect to transmission of UL data towards the first upstream node, the second actions being different than the first actions.
20. The method of claim 19, wherein:
the second network path includes the first network path; and
the second indication indicates that the first backhaul link has been restored.
21. The method of any of claims 19-20, wherein the second indication includes a resource grant to the first downstream node.
22. The method of claim 19, wherein:
the second network path includes a second upstream node but does not include the first upstream node; and
the second indication indicates that the second network path has been established to replace the first network path.
23. The method of any of claims 18-22, wherein:
the first indication conditionally includes a depth value; and
when the depth value is included in the first indication, the depth value identifies a number of downstream hops in the IAB network for forwarding the first indication.
24. The method of claim 23, wherein exclusion of the depth value from the first indication indicates that the first downstream node should perform one of the following operations:
refraining from forwarding the first indication;
forwarding the first indication; or
selectively forwarding the first indication further based on a buffer occupancy, BO, value associated with an UL data buffer of the first downstream node.
25. The method of any of claims 18-24, wherein the first indication further comprises one or more of the following:
type of failure associated with the first backhaul link;
identifiers of one or more nodes included in the first network path;
expected time of resolution of the failure of the first backhaul link;
protocol layers affected by the failure; and
node functions affected by the failure.
26. The method of claim 25, where the identifiers of the one or more nodes include one of the following:
identifiers of bearers associated with the one or more nodes; or
adaptation layer addresses associated with the one or more nodes.
27. The method of any of claims 18-26, wherein the one or more first actions include any of the following operations with respect to a packet data convergence protocol, PDCP, layer of the intermediate node:
stopping, or decreasing the rate of, assignment of sequence numbers, SNs, to PDCP service data units, SDUs, received from higher layers;
stopping, or decreasing the rate of, creation of PDCP protocol data units, PDUs, for delivery to lower layers;
stopping, or decreasing the rate of, delivery of PDCP PDUs to lower layers;
stopping a discard timer associated with one or more PDCP SDUs that are ready for transmission; and
stopping a reordering timer associated with one or more received PDCP PDUs.
28. The method of any of claims 18-26, wherein the one or more first actions include any of the following operations with respect to a radio link control, RLC, layer of the intermediate node:
stopping, or decreasing the rate of, assignment of sequence numbers, SNs, to RLC service data units, SDUs, received from higher layers;
stopping, or decreasing the rate of, creation of RLC protocol data units, PDUs, for delivery to lower layers;
stopping, or decreasing the rate of, delivery of RLC PDUs to lower layers;
stopping a poll retransmission timer associated with one or more RLC SDUs that are ready for transmission; and
stopping a reassembly timer associated with one or more received RLC PDUs.
29. The method of any of claims 18-26, wherein the one or more first actions include any of the following operations with respect to a medium access control, MAC, layer of the intermediate node:
stopping, or decreasing the rate of, transmission of scheduling requests, SRs;
stopping, or decreasing the usage of, previously configured resource grants; and
using previously configured resource grants for retransmission of data but not for initial transmission of data.
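For illustration only, the overall sequence of claims 18-19 (the node that itself detects the backhaul failure) can be sketched as: notify the downstream node, suspend UL transmission towards the failed link, and later signal either restoration (claim 20) or a replacement path (claim 22). All class and message names below are assumptions made for the sketch.

```python
class IabNode:
    """Minimal stand-in for an IAB node that can receive indications."""

    def __init__(self):
        self.ul_suspended = False
        self.inbox = []

    def receive(self, msg):
        self.inbox.append(msg)


def handle_backhaul_failure(node, downstream, depth=None):
    # Claim 18: send a first indication of the failure (optionally
    # carrying a depth value, per claim 23) and perform first actions,
    # here modelled as suspending UL transmission upstream.
    downstream.receive({"type": "failure", "depth": depth})
    node.ul_suspended = True


def handle_path_available(node, downstream, restored):
    # Claim 19: send a second indication, either that the first backhaul
    # link was restored (claim 20) or that a second network path replaces
    # the first (claim 22), and perform second actions (resume UL).
    downstream.receive({"type": "restored" if restored else "new_path"})
    node.ul_suspended = False
```

A downstream node receiving the failure message would then apply its own first actions and the selective-forwarding rule of claims 1 and 9-11.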
30. A network node (311-315, 1120-1150, 1660, 1820) configured to operate in an integrated access backhaul, IAB, network, the network node comprising:
interface circuitry (1690, 1870, 18200) operable to communicate with upstream and downstream nodes in the IAB network; and
processing circuitry (1670, 1860) operably coupled to the interface circuitry,
whereby the processing circuitry and the interface circuitry are configured to perform operations corresponding to any of the methods of claims 1-29.
31. A network node (311-315, 1120-1150, 1660, 1820) configured to operate in an integrated access backhaul, IAB, network, the network node being further arranged to perform operations corresponding to any of the methods of claims 1-29.
32. A non-transitory, computer-readable medium (1680, 1890) storing computer-executable instructions that, when executed by processing circuitry (1670, 1860) of a network node (311-315, 1120-1150, 1660, 1820) of an integrated access backhaul, IAB, network, configure the network node to perform operations corresponding to any of the methods of claims 1-29.
33. A computer program product comprising computer-executable instructions that, when executed by processing circuitry (1670, 1860) of a network node (311-315, 1120-1150, 1660, 1820) of an integrated access backhaul, IAB, network, configure the network node to perform operations corresponding to any of the methods of claims 1-29.
PCT/SE2019/050935 2018-10-24 2019-09-27 Methods for handling link failures in integrated access backhaul (iab) networks WO2020085969A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862749741P 2018-10-24 2018-10-24
US62/749,741 2018-10-24

Publications (1)

Publication Number Publication Date
WO2020085969A1 true WO2020085969A1 (en) 2020-04-30

Family

ID=68172249

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2019/050935 WO2020085969A1 (en) 2018-10-24 2019-09-27 Methods for handling link failures in integrated access backhaul (iab) networks

Country Status (1)

Country Link
WO (1) WO2020085969A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210160956A1 (en) * 2019-11-22 2021-05-27 Mediatek Singapore Pte. Ltd. Packet Routing for Layer-2-Based Sidelink Relay
WO2022083865A1 (en) * 2020-10-22 2022-04-28 Nokia Technologies Oy Method, apparatus and computer program
CN114650549A (en) * 2020-12-18 2022-06-21 大唐移动通信设备有限公司 Data processing method and device of IAB (inter-Access node) and IAB
WO2022148714A1 (en) * 2021-01-06 2022-07-14 Canon Kabushiki Kaisha Management of radio link failure and deficiencies in integrated access backhauled networks
GB2602802A (en) * 2021-01-06 2022-07-20 Canon Kk Management of radio link failure and deficiencies in integrated access backhauled networks
CN115412978A (en) * 2021-05-27 2022-11-29 大唐移动通信设备有限公司 Rerouting method and device for uplink data transmission
WO2023008977A1 (en) * 2021-07-30 2023-02-02 Lg Electronics Inc. Method and apparatus for routing path switching in wireless communication system
CN116349384A (en) * 2020-10-09 2023-06-27 上海诺基亚贝尔股份有限公司 Methods, apparatuses, and computer readable media for peer-to-peer communication via an integrated access and backhaul network
WO2023153660A1 (en) * 2022-02-14 2023-08-17 Lg Electronics Inc. Method and apparatus for processing rrc messages at iab node in wireless communication system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Study on Integrated Access and Backhaul; (Release 15)", 3GPP STANDARD; TECHNICAL REPORT; 3GPP TR 38.874, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. RAN WG2, no. V0.5.0, 26 September 2018 (2018-09-26), pages 1 - 78, XP051487397 *
AT&T ET AL: "Lossless Data Transfer for IAB Design with Hop-by-Hop RLC ARQ", vol. RAN WG2, no. Montreal, Canada; 20180702 - 20180706, 1 July 2018 (2018-07-01), XP051467553, Retrieved from the Internet <URL:http://www.3gpp.org/ftp/Meetings%5F3GPP%5FSYNC/RAN2/Docs> [retrieved on 20180701] *
ERICSSON: "Recovery from Link Failure in IAB Networks", vol. RAN WG3, no. Athens, Greece; 20190225 - 20190301, 15 February 2019 (2019-02-15), XP051604304, Retrieved from the Internet <URL:http://www.3gpp.org/ftp/tsg%5Fran/WG3%5FIu/TSGR3%5F103/Docs/R3%2D190363%2Ezip> [retrieved on 20190215] *
LG ELECTRONICS: "Discussions on node behavior for IAB link management", vol. RAN WG1, no. Gothenburg, Sweden; 20180820 - 20180824, 11 August 2018 (2018-08-11), XP051515893, Retrieved from the Internet <URL:http://www.3gpp.org/ftp/tsg%5Fran/WG1%5FRL1/TSGR1%5F94/Docs/R1%2D1808515%2Ezip> [retrieved on 20180811] *
MICHAEL BAHR: "Proposed Routing for IEEE 802.11s WLAN Mesh Networks", ANNUAL INTERNATIONAL WIRELESS INTERNET CONFERENCE, XX, XX, 2 August 2006 (2006-08-02), pages 1 - 10, XP002469387 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210160956A1 (en) * 2019-11-22 2021-05-27 Mediatek Singapore Pte. Ltd. Packet Routing for Layer-2-Based Sidelink Relay
CN116349384A (en) * 2020-10-09 2023-06-27 上海诺基亚贝尔股份有限公司 Methods, apparatuses, and computer readable media for peer-to-peer communication via an integrated access and backhaul network
WO2022083865A1 (en) * 2020-10-22 2022-04-28 Nokia Technologies Oy Method, apparatus and computer program
CN114650549A (en) * 2020-12-18 2022-06-21 大唐移动通信设备有限公司 Data processing method and device of IAB (inter-Access node) and IAB
WO2022148714A1 (en) * 2021-01-06 2022-07-14 Canon Kabushiki Kaisha Management of radio link failure and deficiencies in integrated access backhauled networks
GB2602802A (en) * 2021-01-06 2022-07-20 Canon Kk Management of radio link failure and deficiencies in integrated access backhauled networks
GB2602794A (en) * 2021-01-06 2022-07-20 Canon Kk Management of radio link failure and deficiencies in integrated access backhauled networks
GB2602802B (en) * 2021-01-06 2023-12-20 Canon Kk Management of radio link failure and deficiencies in integrated access backhauled networks
CN115412978A (en) * 2021-05-27 2022-11-29 大唐移动通信设备有限公司 Rerouting method and device for uplink data transmission
WO2023008977A1 (en) * 2021-07-30 2023-02-02 Lg Electronics Inc. Method and apparatus for routing path switching in wireless communication system
WO2023153660A1 (en) * 2022-02-14 2023-08-17 Lg Electronics Inc. Method and apparatus for processing rrc messages at iab node in wireless communication system

Similar Documents

Publication Publication Date Title
US11659447B2 (en) Flow control for integrated access backhaul (IAB) networks
US11425599B2 (en) Preventing/mitigating packet loss in integrated access backhaul (IAB) networks
US20220201777A1 (en) Enhanced Handover of Nodes in Integrated Access Backhaul (IAB) Networks - Control Plane (CP) Handling
EP3841828B1 (en) Transport layer handling for split radio network architecture
US11516829B2 (en) Enhanced uplink scheduling in integrated access backhaul (IAB) networks
WO2020085969A1 (en) Methods for handling link failures in integrated access backhaul (iab) networks
US12096274B2 (en) Preventing / mitigating packet loss in IAB systems
US20230247495A1 (en) Iab node handover in inter-cu migration
US20230379792A1 (en) Rerouting of ul/dl traffic in an iab network
EP4014675B1 (en) Mapping between ingress and egress backhaul rlc channels in integrated access backhaul (iab) networks
US20230239755A1 (en) Improved f1 setup during iab handover
CN116134886A (en) Handling of buffered traffic during inter-CU migration of Integrated Access Backhaul (IAB) nodes
US20230328604A1 (en) Handling of buffered traffic during inter-cu migration of an ancestor integrated access backhaul (iab) node
EP3949558A1 (en) Integrated access backhaul (iab) nodes with negative propagation delay indication
EP4327592A1 (en) Methods for revoking inter-donor topology adaptation in integrated access and backhaul networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19784155

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19784155

Country of ref document: EP

Kind code of ref document: A1