WO2022075912A1 - Group PDCP discard timer for low-latency services

Group PDCP discard timer for low-latency services

Info

Publication number
WO2022075912A1
Authority
WO
WIPO (PCT)
Prior art keywords
layer
sdus
discard timer
discard
group
Prior art date
Application number
PCT/SE2021/050981
Other languages
French (fr)
Inventor
Du Ho Kang
Jose Luis Pradas
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to EP21794662.3A priority Critical patent/EP4226596A1/en
Publication of WO2022075912A1 publication Critical patent/WO2022075912A1/en


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 - Network traffic management; Network resource management
    • H04W 28/02 - Traffic management, e.g. flow control or congestion control
    • H04W 28/0268 - Traffic management, e.g. flow control or congestion control using specific QoS parameters for wireless networks, e.g. QoS class identifier [QCI] or guaranteed bit rate [GBR]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/28 - Flow control; Congestion control in relation to timing considerations
    • H04L 47/283 - Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/32 - Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 80/00 - Wireless network protocols or protocol adaptations to wireless operation
    • H04W 80/02 - Data link layer protocols

Definitions

  • the present disclosure generally relates to wireless communication networks, and particularly relates to techniques for ensuring timely delivery by a wireless network of data packets generated by latency-sensitive applications such as extended reality (XR) and cloud gaming.
  • XR extended reality
  • NR New Radio
  • 3GPP Third-Generation Partnership Project
  • eMBB enhanced mobile broadband
  • MTC machine type communications
  • URLLC ultra-reliable low latency communications
  • D2D side-link device-to-device
  • FIG. 1 illustrates an exemplary high-level view of the 5G network architecture, consisting of a Next Generation RAN (NG-RAN) 199 and a 5G Core (5GC) 198.
  • NG-RAN 199 can include a set of gNodeBs (gNBs) connected to the 5GC via one or more NG interfaces, such as gNBs 100, 150 connected via interfaces 102, 152, respectively.
  • the gNBs can be connected to each other via one or more Xn interfaces, such as Xn interface 140 between gNBs 100 and 150.
  • each of the gNBs can support frequency division duplexing (FDD), time division duplexing (TDD), or a combination thereof.
  • FDD frequency division duplexing
  • TDD time division duplexing
  • NG-RAN 199 is layered into a Radio Network Layer (RNL) and a Transport Network Layer (TNL).
  • RNL Radio Network Layer
  • TNL Transport Network Layer
  • For the NG, Xn, and F1 interfaces, the related TNL protocol and functionality are specified.
  • the TNL provides services for user plane transport and signaling transport.
  • the NG RAN logical nodes shown in Figure 1 include a central (or centralized) unit (CU or gNB-CU) and one or more distributed (or decentralized) units (DU or gNB-DU).
  • gNB 100 includes gNB-CU 110 and gNB-DUs 120 and 130.
  • CUs e.g., gNB-CU 110
  • CUs are logical nodes that host higher-layer protocols and perform various gNB functions such as controlling the operation of DUs.
  • Each DU is a logical node that hosts lower-layer protocols and can include, depending on the functional split, various subsets of the gNB functions.
  • each of the CUs and DUs can include various circuitry needed to perform their respective functions, including processing circuitry, transceiver circuitry (e.g., for communication), and power supply circuitry.
  • The terms “central unit” and “centralized unit” are used interchangeably herein, as are the terms “distributed unit” and “decentralized unit.”
  • a gNB-CU connects to gNB-DUs over respective F1 logical interfaces, such as interfaces 122 and 132 shown in Figure 1.
  • the gNB-CU and connected gNB-DUs are only visible to other gNBs and the 5GC as a gNB. In other words, the F1 interface is not visible beyond gNB-CU.
  • FIG. 2 shows a high-level view of an exemplary 5G network architecture, including a Next Generation Radio Access Network (NG-RAN) 299 and a 5G Core (5GC) 298.
  • NG-RAN 299 can include gNBs 210 (e.g., 210a, b) and ng-eNBs 220 (e.g., 220a, b) that are interconnected with each other via respective Xn interfaces.
  • gNBs 210 e.g., 210a, b
  • ng-eNBs 220 e.g., 220a, b
  • the gNBs and ng-eNBs are also connected via the NG interfaces to 5GC 298, more specifically to the AMF (Access and Mobility Management Function) 230 (e.g., AMFs 230a, b) via respective NG-C interfaces and to the UPF (User Plane Function) 240 (e.g., UPFs 240a, b) via respective NG-U interfaces.
  • the AMFs 230a, b can communicate with one or more policy control functions (PCFs, e.g., PCFs 250a, b) and network exposure functions (NEFs, e.g., NEFs 260a, b).
  • PCFs policy control functions
  • NEFs network exposure functions
  • Each of the gNBs 210 can support the NR radio interface including frequency division duplexing (FDD), time division duplexing (TDD), or a combination thereof.
  • Each of ng-eNBs 220 can support the fourth-generation (4G) Long-Term Evolution (LTE) radio interface. Unlike conventional LTE eNBs, however, ng-eNBs 220 connect to the 5GC via the NG interface.
  • Each of the gNBs and ng-eNBs can serve a geographic coverage area including one or more cells, such as cells 211a-b and 221a-b shown in Figure 2.
  • a UE 205 can communicate with the gNB or ng-eNB serving that particular cell via the NR or LTE radio interface, respectively.
  • Although Figure 2 shows gNBs and ng-eNBs separately, it is also possible that a single NG-RAN node provides both types of functionality.
  • NR uses CP-OFDM (Cyclic Prefix Orthogonal Frequency Division Multiplexing) in the DL and both CP-OFDM and DFT-spread OFDM (DFT-S-OFDM) in the UL.
  • CP-OFDM Cyclic Prefix Orthogonal Frequency Division Multiplexing
  • DFT-S-OFDM DFT-spread OFDM
  • NR DL and UL physical resources are organized into equal-sized 1-ms subframes. A subframe is further divided into multiple slots of equal duration, with each slot including multiple OFDM-based symbols.
  • time-frequency resources can be configured much more flexibly for an NR cell than for an LTE cell.
  • SCS 15-kHz OFDM sub-carrier spacing
  • NR SCS can range from 15 to 240 kHz, with even greater SCS considered for future NR releases.
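  • For background only (not part of this disclosure), the following minimal Python sketch illustrates the standard NR numerology relationship between SCS and slot duration, which underlies the flexible time-frequency configuration mentioned above:

```python
# Background sketch: standard NR numerology, where SCS = 15 kHz * 2**mu and each
# 1-ms subframe contains 2**mu slots of 14 OFDM symbols (normal cyclic prefix).
def nr_slot_duration_ms(scs_khz: int) -> float:
    """Return the slot duration in milliseconds for a given subcarrier spacing."""
    mu = {15: 0, 30: 1, 60: 2, 120: 3, 240: 4}[scs_khz]
    return 1.0 / (2 ** mu)  # the 1-ms subframe is divided into 2**mu slots

for scs in (15, 30, 60, 120, 240):
    print(f"SCS {scs} kHz -> slot = {nr_slot_duration_ms(scs)} ms")
```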
  • In addition to providing coverage via cells as in LTE, NR networks also provide coverage via “beams.”
  • a downlink (DL, i.e., network to UE) “beam” is a coverage area of a network-transmitted reference signal (RS) that may be measured or monitored by a UE.
  • RS network-transmitted reference signal
  • Figure 3 shows an exemplary configuration of NR user plane (UP) and control plane (CP) protocol stacks between a UE (310), a gNB (320), and an AMF (320), such as those shown in Figures 1-2.
  • the Physical (PHY), Medium Access Control (MAC), Radio Link Control (RLC), and Packet Data Convergence Protocol (PDCP) layers between the UE and the gNB are common to UP and CP.
  • the PDCP layer provides ciphering/deciphering, integrity protection, sequence numbering, reordering, and duplicate detection for both CP and UP.
  • PDCP provides header compression and retransmission for UP data.
  • IP Internet protocol
  • SDU service data units
  • PDU protocol data units
  • the RLC layer transfers PDCP PDUs to the MAC through logical channels (LCH).
  • LCH logical channels
  • RLC provides error detection/correction, concatenation, segmentation/reassembly, sequence numbering, and reordering of data transferred to/from the upper layers. If RLC receives a discard indication associated with a PDCP PDU, it will discard the corresponding RLC SDU (or any segment thereof) if it has not yet been sent to lower layers.
  • the MAC layer provides mapping between LCHs and PHY transport channels, LCH prioritization, multiplexing into or demultiplexing from transport blocks (TBs), hybrid ARQ (HARQ) error correction, and dynamic scheduling (on gNB side).
  • the PHY layer provides transport channel services to the MAC layer and handles transfer over the NR radio interface, e.g., via modulation, coding, antenna mapping, and beam forming.
  • the Service Data Adaptation Protocol (SDAP) layer handles quality-of-service (QoS). This includes mapping between QoS flows and Data Radio Bearers (DRBs) and marking QoS flow identifiers (QFI) in UL and DL packets.
  • QoS quality-of-service
  • DRBs Data Radio Bearers
  • QFI QoS flow identifiers
  • the non-access stratum (NAS) layer is between UE and AMF and handles UE/gNB authentication, mobility management, and security control.
  • the RRC layer sits below NAS in the UE but terminates in the gNB rather than the AMF.
  • RRC controls communications between UE and gNB at the radio interface as well as the mobility of a UE between cells in the NG-RAN.
  • RRC also broadcasts system information (SI) and performs establishment, configuration, maintenance, and release of DRBs and Signaling Radio Bearers (SRBs) used by UEs.
  • SI system information
  • SRBs Signaling Radio Bearers
  • RRC controls addition, modification, and release of carrier aggregation (CA) and dual-connectivity (DC) configurations for UEs.
  • CA carrier aggregation
  • DC dual-connectivity
  • RRC also performs various security functions such as key management.
  • After a UE is powered ON, it will be in the RRC_IDLE state until an RRC connection is established with the network, at which time the UE will transition to RRC_CONNECTED state (e.g., where data transfer can occur). The UE returns to RRC_IDLE after the connection with the network is released.
  • In RRC_IDLE state, the UE’s radio is active on a discontinuous reception (DRX) schedule configured by upper layers.
  • DRX active periods are also referred to as “DRX On durations.”
  • an RRC IDLE UE receives SI broadcast in the cell where the UE is camping, performs measurements of neighbor cells to support cell reselection, and monitors a paging channel on PDCCH for pages from 5GC via gNB.
  • NR RRC includes an RRC_INACTIVE state in which a UE is known (e.g., via UE context) by the serving gNB.
  • RRC INACTIVE has some properties similar to a “suspended” condition used in LTE.
  • Extended Reality (XR) and Cloud Gaming are some of the most important 5G media applications under consideration in the industry.
  • XR is an umbrella term that refers to all real- and-virtual combined environments and human-machine interactions generated by computer technology and wearables. It includes exemplary forms such as Augmented Reality (AR), Mixed Reality (MR), and Virtual Reality (VR), as well as various other types that span or sit between these examples.
  • AR Augmented Reality
  • MR Mixed Reality
  • VR Virtual Reality
  • the term “XR” also refers to cloud gaming and related applications.
  • Edge Computing (EC) is generally viewed as an important network architecture enabler for XR.
  • EC facilitates deployment of cloud computing capabilities and service environments close to the cellular radio access network (RAN). It can provide benefits such as lower latency and higher bandwidth for user-plane (UP, e.g., data) traffic, as well as reduced backhaul traffic to the 5G core network (5GC).
  • UP user-plane
  • 5GC 5G core network
  • 3GPP is also studying prospects for several new services on application architecture for enabling Edge Applications, as described further in 3GPP TR 23.758. Edge Applications are expected to take advantage of the low latencies enabled by 5G and EC network architecture to reduce the end-to-end application-level latencies.
  • the 5G NR radio interface is designed to support applications demanding high throughput and low latency in line with the requirements of XR and Edge Applications in NR networks.
  • XR applications generate periodic traffic with variable size.
  • the packet may be transmitted as a single PDU or may be segmented into several PDUs before transmission.
  • One application packet could, for instance, correspond to one or several IP packets.
  • XR application PDUs may have time constraints, such that one or a set of application PDUs (referred to generically as “application PDUs”) may need to reach the receiver within a certain period of time, e.g., a maximum allowed latency. If not received by this time, the application PDUs are useless and can be discarded.
  • application PDUs referred to generically as “application PDUs”
  • Although the NR PDCP layer currently uses a discard timer, PDCP has no knowledge about how PDCP SDUs (or IP PDUs) map to the application PDUs that need to be delivered within the maximum allowed latency. This can cause various problems such as late delivery of XR application PDUs and waste of network resources delivering application PDUs that ultimately will be discarded by the receiver.
  • Embodiments of the present disclosure provide specific improvements to communication between UEs and network nodes in a wireless network, such as by providing, enabling, and/or facilitating solutions to overcome exemplary problems summarized above and described in more detail below.
  • Embodiments include methods (e.g., procedures) for communicating data using a protocol stack that includes a first layer comprising at least one group discard timer.
  • these exemplary methods can be performed by a UE (e.g., wireless device) or a network node (e.g., base station, eNB, gNB, ng-eNB, etc., or component thereof) in a wireless network (e.g., E-UTRAN, NG-RAN).
  • a UE e.g., wireless device
  • a network node e.g., base station, eNB, gNB, ng-eNB, etc., or component thereof
  • a wireless network e.g., E-UTRAN, NG-RAN.
  • These exemplary methods can include receiving, at the first layer from a higher layer of the protocol stack, a first plurality of SDUs associated with a common maximum latency requirement. These exemplary methods can also include, based on the common maximum latency requirement, initiating at least one group discard timer associated with the first plurality of SDUs. These exemplary methods can also include, upon expiration of the at least one discard timer, discarding the first plurality of SDUs associated with the common latency requirement.
  • the first plurality of SDUs is associated with one or more of the following: one or more higher-layer PDUs that have a common maximum latency requirement; a common group discard time; and a single data flow (e.g., an XR data flow).
  • the first plurality of SDUs comprises a first SDU and a second SDU received a duration after the first SDU.
  • the initiating operations can include initiating a first discard timer with a first value upon receipt of the first SDU.
  • an indication of the first value can be received from the higher layer in association with the first SDU.
  • the first discard timer is associated with the first and second SDUs.
  • the initiating operations can also include initiating a second discard timer with a second value upon receipt of the second SDU.
  • the second value can be the first value minus the duration.
  • the discarding operations can include discarding the first SDU upon expiration of the first discard timer and discarding the second SDU upon expiration of the second discard timer.
  • the initiating operations can also include refraining from initiating a second discard timer upon receipt of the second SDU.
  • the discarding operations can include discarding the first plurality of SDUs upon expiration of the first discard timer.
  • the first plurality of SDUs can be associated with a data flow comprising a plurality of higher-layer PDUs.
  • the initiating operations can also include refraining from initiating further discard timers upon receipt, after the second SDU, of further SDUs associated with the data flow.
  • these exemplary methods can also include forming the first plurality of SDUs into a second plurality of first-layer PDUs; sending the second plurality of first- layer PDUs to a lower layer of the protocol stack; and, upon expiration of the at least one group discard timer, sending to the lower layer respective discard indications associated with the second plurality of first-layer PDUs.
  • these exemplary methods can also include determining that the first plurality of SDUs are associated with a common maximum latency requirement based on one of the following:
  • the first layer can be a PDCP layer and the higher layer can be an application layer, an IP layer, or an SDAP layer.
  • the lower layer can be an RLC layer.
  • the node can be a UE.
  • these exemplary methods can also include receiving, from a network node in the wireless network, a discard timer configuration including one or more of the following:
  • initiating the at least one group discard timer and discarding the first plurality of SDUs can be based on the received discard timer configuration.
  • the node can be a network node in a wireless network.
  • these exemplary methods can also include sending, to a UE, a discard timer configuration including one or more of the above-mentioned items.
  • these exemplary methods can also include determining remaining durations of validity for the respective first plurality of SDUs based on the following for one or more higher-layer PDUs associated with the first plurality of SDUs: a maximum latency requirement, and a time of arrival in the wireless network. Additionally, initiating the at least one group discard timer can be based on the remaining durations of validity.
  • determining remaining durations of validity for the respective first plurality of SDUs can be based on one of the following:
  • UEs e.g., wireless devices
  • network nodes e.g., base stations, eNBs, gNBs, ng-eNBs, etc., or components thereof
  • Other embodiments include non-transitory, computer-readable media storing program instructions that, when executed by processing circuitry, configure such UEs or network nodes to perform operations corresponding to any of the exemplary methods described herein.
  • Figures 1-2 illustrate two high-level views of an exemplary 5G/NR network architecture.
  • Figure 3 shows an exemplary configuration of NR UP and CP protocol stacks.
  • Figure 4 illustrates a comparison of various characteristics or requirements between XR and other 5G applications.
  • Figure 5 illustrates some exemplary traffic characteristics for XR.
  • Figure 6 illustrates some problems that application-layer PDUs can encounter between a source and a destination.
  • Figures 7A-B show a flow diagram of an exemplary method for a node (e.g., UE, wireless device, base station, eNB, gNB, ng-eNB, etc.) in a wireless network (e.g., NG-RAN, E-UTRAN), according to various embodiments of the present disclosure.
  • a node e.g., UE, wireless device, base station, eNB, gNB, ng-eNB, etc.
  • a wireless network e.g., NG-RAN, E-UTRAN
  • Figure 8 shows a block diagram of an exemplary wireless device or UE, according to various embodiments of the present disclosure.
  • Figure 9 shows a block diagram of an exemplary network node, according to various embodiments of the present disclosure.
  • FIG. 10 shows a block diagram of an exemplary network configured to provide over-the-top (OTT) data services between a host computer and a UE, according to various embodiments of the present disclosure.
  • OTT over-the-top
  • Radio Node As used herein, a “radio node” can be either a radio access node or a wireless device.
  • a “node” can be a network node or a wireless device.
  • Radio Access Node As used herein, a “radio access node” (or equivalently “radio network node,” “radio access network node,” or “RAN node”) can be any node in a radio access network (RAN) of a cellular communications network that operates to wirelessly transmit and/or receive signals.
  • RAN radio access network
  • a radio access node examples include, but are not limited to, a base station (e.g, a New Radio (NR) base station (gNB) in a 3GPP Fifth Generation (5G) NR network or an enhanced or evolved Node B (eNB) in a 3GPP LTE network), base station distributed components (e.g., CU and DU), a high-power or macro base station, a low-power base station (e.g., micro, pico, femto, or home base station, or the like), an integrated access backhaul (IAB) node, a transmission point, a remote radio unit (RRU or RRH), and a relay node.
  • a base station e.g, a New Radio (NR) base station (gNB) in a 3GPP Fifth Generation (5G) NR network or an enhanced or evolved Node B (eNB) in a 3GPP LTE network
  • base station distributed components e.g., CU and DU
  • a “core network node” is any type of node in a core network.
  • Some examples of a core network node include, e.g., a Mobility Management Entity (MME), a serving gateway (SGW), a Packet Data Network Gateway (P-GW), an access and mobility management function (AMF), a session management function (SMF), a user plane function (UPF), a Service Capability Exposure Function (SCEF), or the like.
  • MME Mobility Management Entity
  • SGW serving gateway
  • P-GW Packet Data Network Gateway
  • AMF access and mobility management function
  • SMF session management function
  • UPF user plane function
  • SCEF Service Capability Exposure Function
  • Wireless Device As used herein, a “wireless device” (or “WD” for short) is any type of device that has access to (i.e., is served by) a cellular communications network by communicating wirelessly with network nodes and/or other wireless devices. Communicating wirelessly can involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air.
  • wireless device examples include, but are not limited to, smart phones, mobile phones, cell phones, voice over IP (VoIP) phones, wireless local loop phones, desktop computers, personal digital assistants (PDAs), wireless cameras, gaming consoles or devices, music storage devices, playback appliances, wearable devices, wireless endpoints, mobile stations, tablets, laptops, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart devices, wireless customer-premise equipment (CPE), machine-type communication (MTC) devices, Internet-of-Things (IoT) devices, vehicle-mounted wireless terminal devices, etc.
  • the term “wireless device” is used interchangeably herein with the term “user equipment” (or “UE” for short).
  • Network Node As used herein, a “network node” is any node that is either part of the radio access network (e.g., a radio access node or equivalent name discussed above) or of the core network (e.g., a core network node discussed above) of a cellular communications network.
  • a network node is equipment capable, configured, arranged, and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the cellular communications network, to enable and/or provide wireless access to the wireless device, and/or to perform other functions (e.g., administration) in the cellular communications network.
  • PDCP has no knowledge about how PDCP SDUs (or IP PDUs) map to the application PDUs that need to be delivered within the maximum allowed latency. This can cause various problems such as late delivery of XR application PDUs and waste of network resources delivering such PDUs that will ultimately be discarded. This is discussed in more detail below.
  • Figure 4 illustrates a comparison of various characteristics and requirements for XR and other 5G applications.
  • Figure 4 shows a comparison of latency, reliability, and bitrate requirements for URLLC, streaming, and EC-based XR.
  • URLLC services have extreme requirements of 1-ms latency and of 10^-5 reliability.
  • EC-based XR can have relaxed requirements of 5-10 ms latency and 10^-4 reliability.
  • XR services can require a much higher bit rate than either URLLC or streaming (e.g., due to codec inefficiency).
  • EC-based XR traffic can also be very dynamic, e.g., due to eye/viewport tracking. In general, the traffic can appear to be periodic but with variable file sizes, as illustrated in Figure 5.
  • When an XR application-layer packet enters the Internet, the packet may be transmitted as a single PDU or may be segmented into several PDUs before transmission.
  • One application packet could, for instance, correspond to one or several IP packets.
  • XR application PDUs may have a maximum allowed latency of 5-10 ms. If the application PDU(s) is/are not received by this time, the application PDU(s) is/are not of any use and can be discarded.
  • IP packets reach the PDCP layer with a certain jitter that comes from traversing the Internet and the 5GC.
  • the PDCP layer starts a discard timer each time a PDCP SDU is received from higher layers.
  • Although the NR PDCP layer currently uses a discard timer, PDCP has no knowledge about how PDCP SDUs (or IP PDUs) map to the application PDUs that need to be delivered within the maximum allowed latency.
  • one XR application PDU can be segmented into 5 IP packets.
  • Each IP packet arrives in- or out-of-sequence to the PDCP layer (as a PDCP SDU) at times X+deltal, X+delta2, etc.
  • Each packet will have a discard timer running with a certain time.
  • all 5 PDCP SDUs must be delivered within a defined time budget. If the delay budget for the application packet is consumed, the 5 PDCP SDUs corresponding to the application packet can be discarded regardless of whether the PDCP discard timer is still running.
  • the value of the current PDCP discard timer cannot depend on the number of PDCP SDUs that may correspond to a single application PDU. This is because the number of PDCP SDUs that correspond to a single application PDU may vary among application PDUs. Setting the PDCP discard timer to a fraction of the maximum latency of the application PDU may also impose a fictitious restriction and possibly lead to unnecessary discards. For example, if the maximum latency is 10 ms and PDCP discard timer is set to 2 ms (i.e., 10 ms/5 PDUs), any single PDCP PDU will be discarded 2 ms after it reaches PDCP.
  • For example, suppose all 5 PDCP PDUs are transmitted at the same time, 7 ms after reception of the first PDCP SDU.
  • In that case, all 5 PDCP PDUs could be delivered within the 10-ms latency budget, but the 2-ms timer would cause unnecessary discards.
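  • The arithmetic of this example can be checked with a few lines of Python (values taken from the example above; the even split across 5 SDUs is the assumption being criticized):

```python
# Numeric check of the example above (assumed values, for illustration only).
max_latency_ms = 10.0
num_sdus = 5
per_sdu_timer_ms = max_latency_ms / num_sdus   # 2 ms per PDCP SDU
transmit_at_ms = 7.0                           # all PDUs sent 7 ms after the first SDU arrives

within_budget = transmit_at_ms <= max_latency_ms                # True: 7 ms <= 10 ms
discarded_by_per_sdu_timer = transmit_at_ms > per_sdu_timer_ms  # True: 7 ms > 2 ms
print(within_budget, discarded_by_per_sdu_timer)
# A group discard timer set to the full 10-ms budget would avoid this unnecessary discard.
```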
  • Figure 6 illustrates certain problems that application-layer PDUs can encounter between a source and a destination.
  • the application (illustrated as a “cloud”) generates one or more first application PDUs at time t0, all of which need to be delivered within the same maximum latency.
  • the application may generate second application PDUs at time t1 with a common maximum latency that may be different than the first application PDUs, and/or at a later time than the first application PDUs.
  • These first and second application PDUs may traverse one or more intermediate networks or may be directly connected to a 3GPP network (e.g., 5GC).
  • 3GPP network e.g., 5GC
  • these application PDUs may be adapted (e.g. segmented) by lower-layer protocols to better fit the transmission conditions and/or constraints.
  • Challenges for the gNB PDCP layer include identifying which PDUs are associated with application PDUs (e.g., first or second) having the same maximum latency and which ones need to be delivered first. For example, the first set generated at t0 may need to be delivered before the second set generated at t1 (or vice versa).
  • the UE needs to receive UL grants to transmit these PDCP SDUs. If the UE does not receive the UL grants in time or the UL grants are not large enough to meet the maximum latency requirement, there may be PDCP SDUs related to the application PDU still pending for transmission. PDCP would attempt to transmit those pending PDCP SDUs even though they are no longer useful to the receiver. Moreover, transmitting those “late” PDCP SDUs may actually delay other PDCP SDUs related to a second application PDU coming after the first application PDU.
  • the current PDCP timer is not suitable for handling XR services.
  • An XR application may produce one or more application PDUs that must be delivered within a maximum latency (or delay budget).
  • the application and/or lower layers may segment and/or concatenate these application PDUs.
  • PDCP receives IP PDUs (or other protocol, if used) from upper layers, but has no information about how these IP PDUs (PDCP SDUs) map to the application PDUs having the maximum latency. As such, the PDCP layer cannot properly configure the single discard timer for each PDCP SDU.
  • subsequent PDCP SDUs should not be transmitted either, because doing so wastes network and UE resources.
  • Maintaining an SDU discard timer independent of other SDUs allows these subsequent SDUs to stay in the PDCP buffer even though they are unnecessary from an application perspective.
  • embodiments of the present disclosure provide flexible and efficient techniques that provide information to the PDCP layer about which PDCP SDUs are associated with a single application PDU that needs to be delivered according to a maximum latency requirement. Such techniques also provide different conditions for a transmitting PDCP entity to trigger a group discard timer for a set of PDCP SDUs associated with one or more application-layer PDUs having a common maximum latency requirement. Furthermore, when the group discard timer expires, the transmitting PDCP entity discards the set of PDCP SDUs and corresponding PDCP PDUs associated with the application PDU and provides a discard indication to lower layers with respect to all corresponding PDCP PDUs.
  • Embodiments of the present disclosure can provide various benefits and/or advantages. For example, by allowing the network (or UE) to discard a set of packets that are no longer valid for an application, network resources are used more efficiently, thereby increasing capacity. Moreover, embodiments can facilitate more efficient resource management (e.g., scheduling) and/or better planning for delivery of PDCP PDUs within a maximum latency associated with their corresponding application PDUs. This can provide better fulfillment of committed QoS and improvements to quality of experience (QoE) by avoiding unnecessary resource usage and interference caused by unwanted application PDU data.
  • QoE quality of experience
  • different conditions can be used to trigger a group discard timer for a set of PDCP SDUs associated with one or more application-layer PDUs having a common maximum latency requirement. For example, this could be triggered upon reception of a first PDCP SDU identified as being associated with: 1) a set of PDCP SDUs that share a group discard time, and/or 2) one or more application PDUs having a common maximum latency requirement.
  • a UE can be configured by the network (e.g., via RRC) to use a group PDCP discard timer and, when configured, the UE uses the triggering condition above.
  • the network configuration can be responsive to a UE indication of support of PDCP group discard timer.
  • legacy (i.e., non-supporting) UEs would not be configured as such but would continue to use the existing SDU-independent PDCP timer arrangement discussed above. If multiple group discard timer triggering conditions are available, the network can also configure the UE to use a particular one (or more) of the available conditions.
  • the transmitting PDCP entity starts one or more discard timer(s) according to various embodiments.
  • a transmitting PDCP entity can start multiple discard timers (i.e., one for each PDCP SDU) with each timer having a different initial value corresponding to the arrival time of the associated PDCP SDU.
  • a second PDCP SDU may have a shorter discard timer value than a first PDCP SDU that arrived previously, in accordance with the overall latency requirement of the associated application PDU.
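  • As an illustration of this first option, the following minimal Python sketch (class and method names are hypothetical, not the specified PDCP behavior) derives each later SDU's discard timer value as the first value minus the inter-arrival duration, so that all SDUs of the group expire at the same absolute deadline:

```python
# Illustrative sketch only; names are hypothetical, not from 3GPP specifications.
# Per-SDU discard timers whose initial values shrink with arrival time,
# so every SDU of the same group expires at the same absolute deadline.
class PerSduGroupTimers:
    def __init__(self, first_value_s: float):
        self.first_value_s = first_value_s   # discard-timer value for the first SDU
        self.first_arrival_s = None          # arrival time of the first SDU
        self.deadlines = {}                  # SDU id -> absolute expiry time

    def on_sdu(self, sdu_id, now_s: float):
        if self.first_arrival_s is None:
            self.first_arrival_s = now_s
            value = self.first_value_s
        else:
            # later SDU: first value minus the duration since the first SDU arrived
            value = self.first_value_s - (now_s - self.first_arrival_s)
        self.deadlines[sdu_id] = now_s + max(value, 0.0)

    def expired(self, now_s: float):
        # SDUs whose individual discard timers have expired
        return [i for i, t in self.deadlines.items() if now_s >= t]

# Example: first SDU at t=0 with a 10-ms budget, second SDU 3 ms later
timers = PerSduGroupTimers(first_value_s=0.010)
timers.on_sdu("sdu1", now_s=0.000)
timers.on_sdu("sdu2", now_s=0.003)   # gets a 7-ms timer, same absolute deadline
print(timers.expired(now_s=0.011))   # both expired -> ['sdu1', 'sdu2']
```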
  • a transmitting PDCP entity can start one discard timer that will be associated with all PDCP SDUs with a common group discard time or that are associated with one or more application PDUs having a common maximum latency requirement.
  • a transmitting PDCP entity can start one discard timer for the first received PDCP SDU associated with an XR flow comprising multiple application PDUs, and refrain from starting a discard timer for subsequently received PDCP SDUs associated with the same XR flow.
  • These embodiments can be beneficial when subsequently-received application PDUs are dependent on an initial application PDU.
  • a specific example is a P-frame in the context of compressed video.
  • the duration for the timer(s) can be configured by the network, e.g., via RRC.
  • multiple timer durations may be used/configured to differentiate PDCP SDUs in terms of latency requirement. This can be particularly beneficial in the case where multiple discard timers are started using different values, such as in the first option discussed above.
  • the particular duration for a discard timer may be set in various ways, such as the maximum latency requirement for the corresponding packet, an amount of remaining latency budget, etc.
  • the transmitting PDCP entity discards the set of PDCP SDUs and PDCP data PDUs corresponding to the application PDU.
  • each PDCP SDU will be discarded based on expiration of its associated discard timer without affecting or being affected by discard timers associated with other PDCP SDUs.
  • all PDCP SDUs associated with a common group will be discarded upon expiration of the discard timer started upon receipt of the first PDCP SDU.
  • all PDCP SDUs associated with the same XR flow will be discarded upon expiration of the discard timer started upon receipt of the first PDCP SDU.
  • the PDCP layer can provide discard indications to RLC for all PDCP PDUs corresponding to PDCP SDUs discarded based on timer expiration.
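  • The following minimal Python sketch (hypothetical interfaces, not the 3GPP-specified PDCP API) illustrates the single group-timer variant together with the expiry handling described above: one timer is started for the first SDU of a group, no timers are started for later SDUs of the same group, and on expiry all buffered SDUs of the group are discarded and discard indications are passed toward RLC:

```python
import threading

# Minimal sketch (hypothetical interfaces): one group discard timer per set of
# PDCP SDUs sharing a common maximum latency / group discard time.
class GroupDiscardPdcp:
    def __init__(self, rlc_discard_cb):
        self.rlc_discard_cb = rlc_discard_cb   # e.g., lambda sdu: ... (discard indication toward RLC)
        self.groups = {}                       # group_id -> {"sdus": [...], "timer": Timer}

    def on_sdu(self, group_id, sdu, group_discard_time_s):
        grp = self.groups.get(group_id)
        if grp is None:
            # first SDU of the group: start the single group discard timer
            timer = threading.Timer(group_discard_time_s, self._on_expiry, args=(group_id,))
            grp = {"sdus": [], "timer": timer}
            self.groups[group_id] = grp
            timer.start()
        # subsequent SDUs of the same group: no new timer is started
        grp["sdus"].append(sdu)

    def _on_expiry(self, group_id):
        grp = self.groups.pop(group_id, None)
        if grp is None:
            return
        for sdu in grp["sdus"]:
            # discard the SDU and its corresponding PDCP PDU, and indicate this to RLC
            self.rlc_discard_cb(sdu)
```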
  • the network can configure (e.g., via RRC) discarding behavior in the UE, preferably compatible with the discard timer configuration that may also be network-configured, as discussed above.
  • the network may need to comply with specific QoS requirements for a flow.
  • the NG-RAN may obtain certain QoS parameters indicating performance for the flow, such as maximum latency.
  • the NG-RAN may need assistance to identify which PDCP SDUs are associated with which application PDUs. This information can facilitate delivery of all PDCP SDUs within a given delay budget for the application PDU(s) associated with a service.
  • the core network e.g., 5GC
  • the core network can provide a sequence number for each IP PDU (or packet) delivered to the PDCP layer.
  • the sequence number can be the same for all packets associated with one application PDU or for all packets that have the same maximum latency requirement.
  • the PDCP layer can perform a packet inspection of the incoming packet headers to identify a relevant sequence number. For example, if PDCP SDUs are known to be IP packets, the Sequence Number (SN) and segmentation information can be extracted from each packet to identify if a packet is a segment and, when all the packets are received, the order of that reconstructed packet.
  • SN Sequence Number
  • segmentation information can be extracted from each packet to identify if a packet is a segment and, when all the packets are received, the order of that reconstructed packet.
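  • As one possible realization of such packet inspection (assuming IPv4 PDCP SDUs; this is an illustrative sketch, not a method mandated by the disclosure), the Identification and fragmentation fields can be read directly from the IP header to relate segments of one datagram:

```python
import struct

# Sketch of the packet inspection mentioned above, assuming IPv4 PDCP SDUs.
# The IPv4 Identification field is shared by all fragments of one datagram,
# and the flags/fragment-offset field tells whether more fragments follow.
def inspect_ipv4_header(sdu: bytes):
    (ver_ihl, _tos, total_len, identification,
     flags_frag, _ttl, _proto, _csum, _src, _dst) = struct.unpack("!BBHHHBBH4s4s", sdu[:20])
    more_fragments = bool(flags_frag & 0x2000)   # MF flag
    fragment_offset = (flags_frag & 0x1FFF) * 8  # offset in bytes
    return {
        "identification": identification,        # same for all fragments of a datagram
        "is_fragment": more_fragments or fragment_offset > 0,
        "fragment_offset": fragment_offset,
        "total_length": total_len,
    }
```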
  • the PDCP layer may be informed of the remaining time left for each incoming IP PDU/PDCP SDU.
  • the remaining time for PDCP SDU j, referred to as Tr,j, can be estimated as:
  • Tr,j = Tc - Te,j
  • Te,j = Ta,j - Td,j - Tpc, where Tc is a common frame-level latency requirement, Te,j is the elapsed time for each IP packet, Ta,j is the arrival time of PDCP SDU j (generally known in NG-RAN), Td,j is the departure time of the IP PDU (or transport-layer PDU, depending on XR application server configuration), and Tpc is a per-SDU processing time in the SDAP layer.
  • the information about Td,j can be provided occasionally by an XR Edge application server to the 5GC and forwarded to the NG-RAN. It is expected that Td,j will be approximately the frame interval over time, or its variance will be very small.
  • the NG-RAN can also estimate Te,j indirectly, such as by checking the arrival timing of the first SDAP SDU in every SDAP burst and comparing it with a typical frame refresh rate.
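  • A short numeric sketch of this estimate, with assumed timing values in milliseconds, is shown below:

```python
# Numeric sketch of the estimate above (all values assumed, in milliseconds).
Tc  = 10.0   # common frame-level latency requirement
Ta  = 4.5    # arrival time of PDCP SDU j at the NG-RAN
Td  = 1.0    # departure time of the IP PDU from the application server
Tpc = 0.2    # per-SDU processing time in the SDAP layer

Te = Ta - Td - Tpc   # elapsed time for the IP packet, per the formula above
Tr = Tc - Te         # remaining validity time, usable to set the group discard timer
print(f"Te = {Te:.1f} ms, Tr = {Tr:.1f} ms")
```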
  • Figures 7A-B show an exemplary method (e.g., procedure) for communicating data using a protocol stack that includes a first layer comprising at least one group discard timer, according to various embodiments of the present disclosure.
  • various features of the operations described below correspond to various embodiments described above.
  • the exemplary method shown in Figure 7 can be performed by a node in a wireless network (e.g., E-UTRAN, NG-RAN), such as a UE (e.g., wireless device) or a network node (e.g., base station, eNB, gNB, ng-eNB, etc., or component thereof).
  • a wireless network e.g., E-UTRAN, NG-RAN
  • UE e.g., wireless device
  • a network node e.g., base station, eNB, gNB, ng-eNB, etc., or component thereof.
  • Although Figure 7 shows specific blocks in a particular order, the operations of the exemplary method can be performed in a different order than shown and can be combined and/or divided into blocks having different functionality than shown. Optional blocks or operations are indicated by dashed lines.
  • the exemplary method can include the operations of block 720, where the node can receive, at the first layer from a higher layer of the protocol stack, a first plurality of SDUs associated with a common maximum latency requirement.
  • the exemplary method can also include the operations of block 750, where the node can, based on the common maximum latency requirement, initiate at least one group discard timer associated with the first plurality of SDUs.
  • the exemplary method can also include the operations of block 770, where the node can, upon expiration of the at least one discard timer, discard the first plurality of SDUs associated with the common latency requirement.
  • the first plurality of SDUs is associated with one or more of the following: one or more higher-layer PDUs that have a common maximum latency requirement; a common group discard time; and a single data flow (e.g., an XR data flow).
  • the first plurality of SDUs comprises a first SDU and a second SDU received a duration after the first SDU.
  • the initiating operations of block 740 can include the operations of sub-block 741, where the node can initiate a first discard timer with a first value upon receipt of the first SDU.
  • an indication of the first value can be received from the higher layer in association with the first SDU.
  • the first discard timer is associated with the first and second SDUs. Examples of these embodiments include the “second option” discussed above.
  • the initiating operations of block 740 can include the operations of sub-block 742, where the node can initiate a second discard timer with a second value upon receipt of the second SDU.
  • the second value can be the first value minus the duration.
  • the discarding operations of block 760 can include the operations of sub-blocks 761-762, where the node can discard the first SDU upon expiration of the first discard timer and discard the second SDU upon expiration of the second discard timer. Examples of these embodiments include the “first option” discussed above.
  • the initiating operations of block 740 can also include the operations of sub-block 743, where the node can refrain from initiating a second discard timer upon receipt of the second SDU.
  • the discarding operations of block 760 can include the operations of sub-block 763, where the node can discard the first plurality of SDUs upon expiration of the first discard timer. Examples of these embodiments include the “third option” discussed above.
  • the first plurality of SDUs can be associated with a data flow comprising a plurality of higher-layer PDUs.
  • the initiating operations of block 740 can also include the operations of sub-block 744, where the node can refrain from initiating further discard timers upon receipt, after the second SDU, of further SDUs associated with the data flow.
  • the exemplary method can also include the operations of blocks 760 and 780.
  • the node can form the first plurality of SDUs into a second plurality of first-layer PDUs and send the second plurality of first-layer PDUs to a lower layer of the protocol stack.
  • the node can, upon expiration of the at least one group discard timer, send to the lower layer respective discard indications associated with the second plurality of first-layer PDUs.
  • the exemplary method can also include the operations of block 730, where the node can determine that the first plurality of SDUs are associated with a common maximum latency requirement based on one of the following:
  • the first layer can be a PDCP layer and the higher layer can be an application layer, an IP layer, or an SDAP layer.
  • the lower layer mentioned above can be an RLC layer.
  • the node can be a UE.
  • the exemplary method can also include the operations of block 710a, where the UE can receive, from a network node in the wireless network, a discard timer configuration including one or more of the following:
  • initiating the at least one group discard timer (e.g., in block 750) and discarding the first plurality of SDUs (e.g., in block 770) can be based on the received discard timer configuration.
  • the node can be a network node in a wireless network.
  • the exemplary method can also include the operations of block 710b, where the network node can send, to a UE, a discard timer configuration including one or more of the above-mentioned items.
  • the network node can use the same or similar discard timer configuration as provided to the UE, thereby facilitating communication interoperability and/or compatibility between the two nodes.
  • the exemplary method can also include the operations of block 740, where the node can determine remaining durations of validity for the respective first plurality of SDUs based on the following for one or more higher-layer PDUs associated with the first plurality of SDUs: a maximum latency requirement, and a time of arrival in the wireless network. Additionally, initiating the at least one group discard timer (e.g., in block 750) is based on the remaining durations of validity.
  • determining remaining durations of validity for the respective first plurality of SDUs can be based on one of the following:
  • embodiments can include a UE configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer.
  • the UE can include radio transceiver circuitry configured to communicate with a network node, in a wireless network, that has a compatible protocol stack.
  • the UE can also include processing circuitry operatively coupled to the radio transceiver circuitry.
  • the processing circuitry and the radio transceiver circuitry can be configured to perform operations corresponding to any of the relevant embodiments discussed above in relation to Figure 7.
  • embodiments can include a UE configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer.
  • the UE can include various modules with functionality that corresponds to respective operations discussed above in relation to Figure 7. As a specific example, the UE can include:
  • a discarding module with functionality that corresponds to the operations of block 770.
  • the functionality of these and other modules of the UE can be implemented by any appropriate combination of hardware and software, such as by processing circuitry, radio transceiver circuitry, and/or executable program instructions stored on a computer-readable medium (e.g., a memory).
  • embodiments can include a network node, of a wireless network, that is configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer.
  • the network node can include radio network interface circuitry configured to communicate with a UE that has a compatible protocol stack.
  • the network node can also include processing circuitry operatively coupled to the radio network interface circuitry. The processing circuitry and the radio network interface circuitry can be configured to perform operations corresponding to any of the relevant embodiments discussed above in relation to Figure 7.
  • embodiments can include a network node, of a wireless network, that is configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer.
  • the network node can include various modules with functionality that corresponds to respective operations discussed above in relation to Figure 7. As a specific example, the network node can include:
  • a discarding module with functionality that corresponds to the operations of block 770.
  • the functionality of these and other modules of the network node can be implemented by any appropriate combination of hardware and software, such as by processing circuitry, radio network interface circuitry, and/or executable program instructions stored on a computer-readable medium (e.g., a memory).
  • FIG 8 shows a block diagram of an exemplary wireless device or UE 800 (hereinafter referred to as “UE 800”) according to various embodiments of the present disclosure, including those described above with reference to other figures.
  • UE 800 can be configured by execution of instructions, stored on a computer-readable medium, to perform operations corresponding to one or more of the exemplary methods described herein.
  • UE 800 can include a processor 810 (also referred to as “processing circuitry”) that can be operably connected to a program memory 820 and/or a data memory 830 via a bus 870 that can comprise parallel address and data buses, serial ports, or other methods and/or structures known to those of ordinary skill in the art.
  • Program memory 820 can store software code, programs, and/or instructions (collectively shown as computer program product (CPP) 821 in Figure 8) that, when executed by processor 810, can configure and/or facilitate UE 800 to perform various operations, including operations corresponding to various exemplary methods described herein.
  • CPP computer program product
  • execution of such instructions can configure and/or facilitate UE 800 to communicate using one or more wired or wireless communication protocols, including one or more wireless communication protocols standardized by 3GPP, 3GPP2, or IEEE, such as those commonly known as 5G/NR, LTE, LTE-A, UMTS, HSPA, GSM, GPRS, EDGE, 1xRTT, CDMA2000, 802.11 WiFi, HDMI, USB, Firewire, etc., or any other current or future protocols that can be utilized in conjunction with radio transceiver 840, user interface 850, and/or control interface 860.
  • 3GPP 3GPP2
  • IEEE such as those commonly known as 5G/NR, LTE, LTE-A, UMTS, HSPA, GSM, GPRS, EDGE, IxRTT, CDMA2000, 802.11 WiFi, HDMI, USB, Firewire, etc., or any other current or future protocols that can be utilized in conjunction with radio transceiver 840, user interface 850, and/or control interface 860.
  • processor 810 can execute program code stored in program memory 820 that corresponds to MAC, RLC, PDCP, SDAP, RRC, and NAS layer protocols standardized by 3GPP (e.g., for NR and/or LTE).
  • processor 810 can execute program code stored in program memory 820 that, together with radio transceiver 840, implements corresponding PHY layer protocols, such as Orthogonal Frequency Division Multiplexing (OFDM), Orthogonal Frequency Division Multiple Access (OFDMA), and Single-Carrier Frequency Division Multiple Access (SC-FDMA).
  • processor 810 can execute program code stored in program memory 820 that, together with radio transceiver 840, implements device-to-device (D2D) communications with other compatible devices and/or UEs.
  • D2D device-to-device
  • Program memory 820 can also include software code executed by processor 810 to control the functions of UE 800, including configuring and controlling various components such as radio transceiver 840, user interface 850, and/or control interface 860.
  • Program memory 820 can also comprise one or more application programs and/or modules comprising computer-executable instructions embodying any of the exemplary methods described herein.
  • Such software code can be specified or written using any known or future-developed programming language, such as Java, C++, C, Objective C, HTML, XHTML, machine code, or Assembler, as long as the desired functionality, e.g., as defined by the implemented method steps, is preserved.
  • program memory 820 can comprise an external storage arrangement (not shown) remote from UE 800, from which the instructions can be downloaded into program memory 820 located within or removably coupled to UE 800, so as to enable execution of such instructions.
  • Data memory 830 can include memory area for processor 810 to store variables used in protocols, configuration, control, and other functions of UE 800, including operations corresponding to, or comprising, any of the exemplary methods described herein.
  • program memory 820 and/or data memory 830 can include non-volatile memory (e.g., flash memory), volatile memory (e.g., static or dynamic RAM), or a combination thereof.
  • data memory 830 can comprise a memory slot by which removable memory cards in one or more formats (e.g., SD Card, Memory Stick, Compact Flash, etc.) can be inserted and removed.
  • processor 810 can include multiple individual processors (including, e.g., multi-core processors), each of which implements a portion of the functionality described above. In such cases, multiple individual processors can be commonly connected to program memory 820 and data memory 830 or individually connected to multiple individual program memories and/or data memories. More generally, persons of ordinary skill in the art will recognize that various protocols and other functions of UE 800 can be implemented in many different computer arrangements comprising different combinations of hardware and software including, but not limited to, application processors, signal processors, general-purpose processors, multi-core processors, ASICs, fixed and/or programmable digital circuitry, analog baseband circuitry, radio-frequency circuitry, software, firmware, and middleware.
  • Radio transceiver 840 can include radio-frequency transmitter and/or receiver functionality that facilitates the UE 800 to communicate with other equipment supporting like wireless communication standards and/or protocols.
  • the radio transceiver 840 includes one or more transmitters and one or more receivers that enable UE 800 to communicate according to various protocols and/or methods proposed for standardization by 3GPP and/or other standards bodies.
  • such functionality can operate cooperatively with processor 810 to implement a PHY layer based on OFDM, OFDMA, and/or SC-FDMA technologies, such as described herein with respect to other figures.
  • radio transceiver 840 includes one or more transmitters and one or more receivers that can facilitate the UE 800 to communicate with various LTE, LTE-Advanced (LTE-A), and/or NR networks according to standards promulgated by 3GPP.
  • the radio transceiver 840 includes circuitry, firmware, etc. necessary for the UE 800 to communicate with various NR, NR-U, LTE, LTE-A, LTE-LAA, UMTS, and/or GSM/EDGE networks, also according to 3GPP standards.
  • radio transceiver 840 can include circuitry supporting D2D communications between UE 800 and other compatible devices.
  • radio transceiver 840 includes circuitry, firmware, etc. necessary for the UE 800 to communicate with various CDMA2000 networks, according to 3GPP2 standards.
  • the radio transceiver 840 can be capable of communicating using radio technologies that operate in unlicensed frequency bands, such as IEEE 802.11 WiFi that operates using frequencies in the regions of 2.4, 5.6, and/or 60 GHz.
  • radio transceiver 840 can include a transceiver that is capable of wired communication, such as by using IEEE 802.3 Ethernet technology.
  • the functionality particular to each of these embodiments can be coupled with and/or controlled by other circuitry in the UE 800, such as the processor 810 executing program code stored in program memory 820 in conjunction with, and/or supported by, data memory 830.
  • User interface 850 can take various forms depending on the particular embodiment of UE 800, or can be absent from UE 800 entirely.
  • user interface 850 can comprise a microphone, a loudspeaker, slidable buttons, depressible buttons, a display, a touchscreen display, a mechanical or virtual keypad, a mechanical or virtual keyboard, and/or any other user-interface features commonly found on mobile phones.
  • the UE 800 can comprise a tablet computing device including a larger touchscreen display.
  • one or more of the mechanical features of the user interface 850 can be replaced by comparable or functionally equivalent virtual user interface features (e.g., virtual keypad, virtual buttons, etc.) implemented using the touchscreen display, as familiar to persons of ordinary skill in the art.
  • the UE 800 can be a digital computing device, such as a laptop computer, desktop computer, workstation, etc. that comprises a mechanical keyboard that can be integrated, detached, or detachable depending on the particular exemplary embodiment.
  • a digital computing device can also comprise a touch screen display.
  • Many exemplary embodiments of the UE 800 having a touch screen display are capable of receiving user inputs, such as inputs related to exemplary methods described herein or otherwise known to persons of ordinary skill.
  • UE 800 can include an orientation sensor, which can be used in various ways by features and functions of UE 800.
  • the UE 800 can use outputs of the orientation sensor to determine when a user has changed the physical orientation of the UE 800’s touch screen display.
  • An indication signal from the orientation sensor can be available to any application program executing on the UE 800, such that an application program can change the orientation of a screen display (e.g., from portrait to landscape) automatically when the indication signal indicates an approximate 90-degree change in physical orientation of the device.
  • the application program can maintain the screen display in a manner that is readable by the user, regardless of the physical orientation of the device.
  • the output of the orientation sensor can be used in conjunction with various exemplary embodiments of the present disclosure.
  • a control interface 860 of the UE 800 can take various forms depending on the particular exemplary embodiment of UE 800 and of the particular interface requirements of other devices that the UE 800 is intended to communicate with and/or control.
  • the control interface 860 can comprise an RS-232 interface, a USB interface, an HDMI interface, a Bluetooth interface, an IEEE 1394 (“Firewire”) interface, an I2C interface, a PCMCIA interface, or the like.
  • control interface 860 can comprise an IEEE 802.3 Ethernet interface such as described above.
  • the control interface 860 can comprise analog interface circuitry including, for example, one or more digital-to-analog converters (DACs) and/or analog-to-digital converters (ADCs).
  • the UE 800 can comprise more functionality than is shown in Figure 8 including, for example, a video and/or still-image camera, microphone, media player and/or recorder, etc.
  • radio transceiver 840 can include circuitry necessary to communicate using additional radio-frequency communication standards including Bluetooth, GPS, and/or others.
  • the processor 810 can execute software code stored in the program memory 820 to control such additional functionality. For example, directional velocity and/or position estimates output from a GPS receiver can be available to any application program executing on the UE 800, including any program code corresponding to and/or embodying any exemplary embodiments (e.g., of methods) described herein.
  • FIG. 9 shows a block diagram of an exemplary network node 900 according to various embodiments of the present disclosure, including those described above with reference to other figures.
  • exemplary network node 900 can be configured by execution of instructions, stored on a computer-readable medium, to perform operations corresponding to one or more of the exemplary methods described herein.
  • network node 900 can comprise a base station, eNB, gNB, or one or more components thereof.
  • network node 900 can be configured as a central unit (CU) and one or more distributed units (DUs) according to NR gNB architectures specified by 3GPP. More generally, the functionality of network node 900 can be distributed across various physical devices and/or functional units, modules, etc.
  • Network node 900 can include processor 910 (also referred to as “processing circuitry”) that is operably connected to program memory 920 and data memory 930 via bus 970, which can include parallel address and data buses, serial ports, or other methods and/or structures known to those of ordinary skill in the art.
  • Program memory 920 can store software code, programs, and/or instructions (collectively shown as computer program product (CPP) 921 in Figure 9) that, when executed by processor 910, can configure and/or facilitate network node 900 to perform various operations, including operations corresponding to various exemplary methods described herein.
  • program memory 920 can also include software code executed by processor 910 that can configure and/or facilitate network node 900 to communicate with one or more other UEs or network nodes using other protocols or protocol layers, such as one or more of the PHY, MAC, RLC, PDCP, SDAP, RRC, and NAS layer protocols standardized by 3GPP for LTE, LTE-A, and/or NR, or any other higher-layer protocols utilized in conjunction with radio network interface 940 and/or core network interface 950.
  • core network interface 950 can comprise the S1 or NG interface and radio network interface 940 can comprise the Uu interface, as standardized by 3GPP.
  • Program memory 920 can also comprise software code executed by processor 910 to control the functions of network node 900, including configuring and controlling various components such as radio network interface 940 and core network interface 950.
  • Data memory 930 can comprise memory area for processor 910 to store variables used in protocols, configuration, control, and other functions of network node 900.
  • program memory 920 and data memory 930 can comprise non-volatile memory (e.g., flash memory, hard disk, etc.), volatile memory (e.g., static or dynamic RAM), network-based (e.g., “cloud”) storage, or a combination thereof.
  • processor 910 can include multiple individual processors (not shown), each of which implements a portion of the functionality described above. In such case, multiple individual processors may be commonly connected to program memory 920 and data memory 930 or individually connected to multiple individual program memories and/or data memories.
  • network node 900 may be implemented in many different combinations of hardware and software including, but not limited to, application processors, signal processors, general-purpose processors, multi-core processors, ASICs, fixed digital circuitry, programmable digital circuitry, analog baseband circuitry, radiofrequency circuitry, software, firmware, and middleware.
  • Radio network interface 940 can comprise transmitters, receivers, signal processors, ASICs, antennas, beamforming units, and other circuitry that enables network node 900 to communicate with other equipment such as, in some embodiments, a plurality of compatible user equipment (UE). In some embodiments, interface 940 can also enable network node 900 to communicate with compatible satellites of a satellite communication network. In some exemplary embodiments, radio network interface 940 can comprise various protocols or protocol layers, such as the PHY, MAC, RLC, PDCP, and/or RRC layer protocols standardized by 3GPP for LTE, LTE-A, LTE-LAA, NR, NR-U, etc.
  • the radio network interface 940 can comprise a PHY layer based on OFDM, OFDMA, and/or SC-FDMA technologies.
  • the functionality of such a PHY layer can be provided cooperatively by radio network interface 940 and processor 910 (including program code in memory 920).
  • Core network interface 950 can comprise transmitters, receivers, and other circuitry that enables network node 900 to communicate with other equipment in a core network such as, in some embodiments, circuit-switched (CS) and/or packet-switched (PS) core networks.
  • core network interface 950 can comprise the S1 interface standardized by 3GPP.
  • core network interface 950 can comprise the NG interface standardized by 3GPP.
  • core network interface 950 can comprise one or more interfaces to one or more AMFs, SMFs, SGWs, MMEs, SGSNs, GGSNs, and other physical devices that comprise functionality found in GERAN, UTRAN, EPC, 5GC, and CDMA2000 core networks that are known to persons of ordinary skill in the art. In some embodiments, these one or more interfaces may be multiplexed together on a single physical interface.
  • lower layers of core network interface 950 can comprise one or more of asynchronous transfer mode (ATM), Internet Protocol (IP)-over-Ethernet, SDH over optical fiber, T1/E1/PDH over a copper wire, microwave radio, or other wired or wireless transmission technologies known to those of ordinary skill in the art.
  • network node 900 can include hardware and/or software that configures and/or facilitates network node 900 to communicate with other network nodes in a RAN, such as with other eNBs, gNBs, ng-eNBs, en-gNBs, IAB nodes, etc.
  • Such hardware and/or software can be part of radio network interface 940 and/or core network interface 950, or it can be a separate functional unit (not shown).
  • such hardware and/or software can configure and/or facilitate network node 900 to communicate with other RAN nodes via the X2 or Xn interfaces, as standardized by 3GPP.
  • OA&M interface 960 can comprise transmitters, receivers, and other circuitry that enables network node 900 to communicate with external networks, computers, databases, and the like for purposes of operations, administration, and maintenance of network node 900 or other network equipment operably connected thereto.
  • Lower layers of OA&M interface 960 can comprise one or more of asynchronous transfer mode (ATM), Internet Protocol (IP)-over-Ethernet, SDH over optical fiber, T1/E1/PDH over a copper wire, microwave radio, or other wired or wireless transmission technologies known to those of ordinary skill in the art.
  • radio network interface 940, core network interface 950, and OA&M interface 960 may be multiplexed together on a single physical interface, such as the examples listed above.
  • FIG 10 is a block diagram of an exemplary communication network configured to provide over-the-top (OTT) data services between a host computer and a user equipment (UE), according to one or more exemplary embodiments of the present disclosure.
  • UE 1010 can communicate with radio access network (RAN) 1030 over radio interface 1020, which can be based on protocols described above including, e.g., LTE, LTE-A, and 5G/NR.
  • UE 1010 can be configured and/or arranged as shown in other figures discussed above.
  • RAN 1030 can include one or more network nodes (e.g., base stations, eNBs, gNBs, controllers, etc.) operable in licensed spectrum bands, as well as one or more network nodes operable in unlicensed spectrum (using, e.g., LAA or NR-U technology), such as a 2.4-GHz band and/or a 5-GHz band.
  • network nodes comprising RAN 1030 can cooperatively operate using licensed and unlicensed spectrum.
  • RAN 1030 can include, or be capable of communication with, one or more satellites comprising a satellite access network.
  • RAN 1030 can further communicate with core network 1040 according to various protocols and interfaces described above.
  • one or more apparatus (e.g., base stations, eNBs, gNBs, etc.) comprising RAN 1030 and core network 1040 can be configured and/or arranged as shown in other figures discussed above.
  • eNBs comprising an evolved UTRAN (E-UTRAN) 1030 can communicate with an evolved packet core (EPC) network 1040 via an S1 interface.
  • gNBs and ng-eNBs comprising an NG-RAN 1030 can communicate with a 5GC network 1040 via an NG interface.
  • Core network 1040 can further communicate with an external packet data network, illustrated in Figure 10 as Internet 1050, according to various protocols and interfaces known to persons of ordinary skill in the art. Many other devices and/or networks can also connect to and communicate via Internet 1050, such as exemplary host computer 1060.
  • host computer 1060 can communicate with UE 1010 using Internet 1050, core network 1040, and RAN 1030 as intermediaries.
  • Host computer 1060 can be a server (e.g., an application server) under ownership and/or control of a service provider.
  • Host computer 1060 can be operated by the OTT service provider or by another entity on the service provider’s behalf.
  • host computer 1060 can provide an over-the-top (OTT) packet data service to UE 1010 using facilities of core network 1040 and RAN 1030, which can be unaware of the routing of an outgoing/incoming communication to/from host computer 1060.
  • host computer 1060 can be unaware of routing of a transmission from the host computer to the UE, e.g., the routing of the transmission through RAN 1030.
  • OTT services can be provided using the exemplary configuration shown in Figure 10 including, e.g., streaming (unidirectional) audio and/or video from host computer to UE, interactive (bidirectional) audio and/or video between host computer and UE, interactive messaging or social communication, interactive virtual or augmented reality, cloud gaming, etc.
  • the exemplary network shown in Figure 10 can also include measurement procedures and/or sensors that monitor network performance metrics including data rate, latency and other factors that are improved by exemplary embodiments disclosed herein.
  • the exemplary network can also include functionality for reconfiguring the link between the endpoints (e.g., host computer and UE) in response to variations in the measurement results.
  • Such procedures and functionalities are known and practiced; if the network hides or abstracts the radio interface from the OTT service provider, measurements can be facilitated by proprietary signaling between the UE and the host computer.
  • the exemplary embodiments described herein provide flexible and efficient techniques to inform a PDCP layer about which PDCP SDUs are associated with a single application PDU that needs to be delivered according to a maximum latency requirement.
  • By allowing a network or UE to discard a set of packets that are no longer valid for an application, network resources are used more efficiently, thereby increasing capacity.
  • these techniques facilitate more efficient resource scheduling for delivery of PDCP PDUs. This can better fulfill QoS requirements and improve QoE by avoiding unnecessary resource usage and interference caused by unwanted application data.
  • the term unit can have conventional meaning in the field of electronics, electrical devices and/or electronic devices and can include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those that are described herein.
  • any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses.
  • Each virtual apparatus may comprise a number of these functional units.
  • These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like.
  • the processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as Read Only Memory (ROM), Random Access Memory (RAM), cache memory, flash memory devices, optical storage devices, etc.
  • Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein.
  • the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.
  • device and/or apparatus can be represented by a semiconductor chip, a chipset, or a (hardware) module comprising such chip or chipset; this, however, does not exclude the possibility that a functionality of a device or apparatus, instead of being hardware implemented, be implemented as a software module such as a computer program or a computer program product comprising executable software code portions for execution or being run on a processor.
  • functionality of a device or apparatus can be implemented by any combination of hardware and software.
  • a device or apparatus can also be regarded as an assembly of multiple devices and/or apparatuses, whether functionally in cooperation with or independently of each other.
  • devices and apparatuses can be implemented in a distributed fashion throughout a system, so long as the functionality of the device or apparatus is preserved. Such and similar principles are considered as known to a skilled person.
  • Embodiments of the techniques and apparatus described herein also include, but are not limited to, the following enumerated examples:
  • A1 A method for a node in a wireless network to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, the method comprising: receiving, at the first layer from a higher layer of the protocol stack, a first plurality of service data units (SDUs) associated with a common maximum latency requirement; based on the common maximum latency requirement, initiating at least one group discard timer associated with the first plurality of SDUs; and, upon expiration of the at least one group discard timer, discarding the first plurality of SDUs associated with the common maximum latency requirement.
  • A2 The method of embodiment A1, wherein the first plurality of SDUs is associated with one or more of the following: one or more higher-layer PDUs that have a common maximum latency requirement; a group discard time; and an extended reality (XR) application.
  • A3 The method of any of embodiments A1-A2, wherein: the first plurality comprises a first SDU and a second SDU received a duration after the first SDU; and initiating the at least one group discard timer comprises initiating a first discard timer with a first value upon receipt of the first SDU.
  • initiating the at least one group discard timer further comprises initiating a second discard timer with a second value upon receipt of the second SDU; and the second value is the first value minus the duration.
  • discarding comprises: discarding the first SDU upon expiration of the first discard timer; and discarding the second SDU upon expiration of the second discard timer.
  • initiating the at least one group discard timer further comprises refraining from initiating a second discard timer upon receipt of the second SDU; and discarding comprises discarding the first plurality of SDUs upon expiration of the first discard timer.
  • A7 The method of embodiment A6, wherein: the first plurality of SDUs are associated with an extended reality (XR) data flow comprising a plurality of higher-layer PDUs; and initiating the at least one group discard timer further comprises refraining from initiating further discard timers upon receipt, after the second SDU, of further SDUs associated with the XR data flow.
  • A8 The method of any of embodiments A1-A6, further comprising: forming the first plurality of SDUs into a second plurality of first-layer PDUs; sending the second plurality of first-layer PDUs to a lower layer; and upon expiration of the at least one group discard timer, sending, to the lower layer, respective discard indications associated with the second plurality of first-layer PDUs.
  • A9 The method of any of embodiments A1-A8, further comprising determining that the first plurality of SDUs are associated with a common maximum latency requirement based on a common sequence number identified by one of the following: inspecting packet headers of the first plurality of SDUs; or receiving the common sequence number from the higher layer in association with each of the first plurality of SDUs.
  • the first layer is a Packet Data Convergence Protocol (PDCP) layer
  • the higher layer is an Internet Protocol (IP) layer or a Service Data Adaptation Protocol (SDAP) layer.
  • the method further comprises receiving, from a network node in the wireless network, a discard timer configuration including one or more of the following: a number of group discard timers to be used; relationship between received SDUs and group discard timers; a duration for a first group discard timer; and relationship between expiration of group discard timers and discarding of SDUs.
  • the node is a network node in the wireless network; and the method further comprises sending, to a user equipment (UE), a discard timer configuration including one or more of the following: a number of group discard timers to be used; relationship between received SDUs and group discard timers; a duration for a first group discard timer; and relationship between expiration of group discard timers and discarding of SDUs.
  • a user equipment configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, the UE comprising: radio transceiver circuitry configured to communicate with a network node, in a wireless network, that has a compatible protocol stack; and processing circuitry operatively coupled to the radio transceiver circuitry, whereby the processing circuitry and the radio transceiver circuitry are configured to perform operations corresponding to any of the methods of embodiments A1-A12.
  • a user equipment configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, the UE being further arranged to perform operations corresponding to any of the methods of embodiments A1-A12.
  • a non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry of a user equipment (UE) configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, configure the UE to perform operations corresponding to any of the methods of embodiments A1-A12.
  • a computer program product comprising computer-executable instructions that, when executed by processing circuitry of a user equipment (UE) configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, configure the UE to perform operations corresponding to any of the methods of embodiments A1-A12.
  • a network node of a wireless network, configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, the network node comprising: radio network interface circuitry configured to communicate with a user equipment (UE) that has a compatible protocol stack; and processing circuitry operatively coupled to the radio network interface circuitry, whereby the processing circuitry and the radio network interface circuitry are configured to perform operations corresponding to any of the methods of embodiments A1-A12.
  • a network node of a wireless network, configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, the network node being further arranged to perform operations corresponding to any of the methods of embodiments A1-A12.
  • a non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry of a network node configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, configure the network node to perform operations corresponding to any of the methods of embodiments A1-A12.
  • a computer program product comprising computer-executable instructions that, when executed by processing circuitry of a network node configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, configure the network node to perform operations corresponding to any of the methods of embodiments A1-A12.

Abstract

Embodiments include methods for a node in a wireless network to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer. Such methods include receiving (720), at the first layer from a higher layer of the protocol stack, a first plurality of service data units, SDUs, associated with a common maximum latency requirement. Such methods include, based on the common maximum latency requirement, initiating (750) at least one group discard timer associated with the first plurality of SDUs and, upon expiration of the at least one discard timer, discarding (770) the first plurality of SDUs associated with the common latency requirement. The node can be a user equipment, UE, or a network node. Other embodiments include UEs and network nodes configured to perform such methods.

Description

GROUP PDCP DISCARD TIMER FOR LOW-LATENCY SERVICES
TECHNICAL FIELD
The present disclosure generally relates to wireless communication networks, and particularly relates to techniques for ensuring timely delivery by a wireless network of data packets generated by latency-sensitive applications such as extended reality (XR) and cloud gaming.
BACKGROUND
Currently the fifth generation (“5G”) of cellular systems, also referred to as New Radio (NR), is being standardized within the Third-Generation Partnership Project (3GPP). NR is developed for maximum flexibility to support multiple and substantially different use cases. These include enhanced mobile broadband (eMBB), machine type communications (MTC), ultra-reliable low latency communications (URLLC), side-link device-to-device (D2D), and several other use cases.
Figure 1 illustrates an exemplary high-level view of the 5G network architecture, consisting of a Next Generation RAN (NG-RAN) 199 and a 5G Core (5GC) 198. NG-RAN 199 can include a set of gNodeB’s (gNBs) connected to the 5GC via one or more NG interfaces, such as gNBs 100, 150 connected via interfaces 102, 152, respectively. In addition, the gNBs can be connected to each other via one or more Xn interfaces, such as Xn interface 140 between gNBs 100 and 150. With respect to the NR interface to UEs, each of the gNBs can support frequency division duplexing (FDD), time division duplexing (TDD), or a combination thereof.
NG-RAN 199 is layered into a Radio Network Layer (RNL) and a Transport Network Layer (TNL). The NG-RAN architecture, i.e., the NG-RAN logical nodes and interfaces between them, is defined as part of the RNL. For each NG-RAN interface (NG, Xn, F1) the related TNL protocol and the functionality are specified. The TNL provides services for user plane transport and signaling transport.
The NG RAN logical nodes shown in Figure 1 include a central (or centralized) unit (CU or gNB-CU) and one or more distributed (or decentralized) units (DU or gNB-DU). For example, gNB 100 includes gNB-CU 110 and gNB-DUs 120 and 130. CUs (e.g., gNB-CU 110) are logical nodes that host higher-layer protocols and perform various gNB functions such as controlling the operation of DUs. Each DU is a logical node that hosts lower-layer protocols and can include, depending on the functional split, various subsets of the gNB functions. As such, each of the CUs and DUs can include various circuitry needed to perform their respective functions, including processing circuitry, transceiver circuitry (e.g., for communication), and power supply circuitry. Moreover, the terms “central unit” and “centralized unit” are used interchangeably herein, as are the terms “distributed unit” and “decentralized unit.” A gNB-CU connects to gNB-DUs over respective F1 logical interfaces, such as interfaces 122 and 132 shown in Figure 1. The gNB-CU and connected gNB-DUs are only visible to other gNBs and the 5GC as a gNB. In other words, the F1 interface is not visible beyond the gNB-CU.
Figure 2 shows a high-level view of an exemplary 5G network architecture, including a Next Generation Radio Access Network (NG-RAN) 299 and a 5G Core (5GC) 298. As shown in the figure, NG-RAN 299 can include gNBs 210 (e.g., 210a, b) and ng-eNBs 220 (e.g., 220a, b) that are interconnected with each other via respective Xn interfaces. The gNBs and ng-eNBs are also connected via the NG interfaces to 5GC 298, more specifically to the AMF (Access and Mobility Management Function) 230 (e.g., AMFs 230a, b) via respective NG-C interfaces and to the UPF (User Plane Function) 240 (e.g., UPFs 240a, b) via respective NG-U interfaces. Moreover, the AMFs 230a, b can communicate with one or more policy control functions (PCFs, e.g., PCFs 250a, b) and network exposure functions (NEFs, e.g., NEFs 260a, b).
Each of the gNBs 210 can support the NR radio interface including frequency division duplexing (FDD), time division duplexing (TDD), or a combination thereof. Each of ng-eNBs 220 can support the fourth-generation (4G) Long-Term Evolution (LTE) radio interface. Unlike conventional LTE eNBs, however, ng-eNBs 220 connect to the 5GC via the NG interface. Each of the gNBs and ng-eNBs can serve a geographic coverage area including one or more cells, such as cells 211a-b and 221a-b shown in Figure 2. Depending on the particular cell in which it is located, a UE 205 can communicate with the gNB or ng-eNB serving that particular cell via the NR or LTE radio interface, respectively. Although Figure 2 shows gNBs and ng-eNBs separately, it is also possible that a single NG-RAN node provides both types of functionality.
5G/NR technology shares many similarities with LTE. For example, NR uses CP-OFDM (Cyclic Prefix Orthogonal Frequency Division Multiplexing) in the DL and both CP-OFDM and DFT-spread OFDM (DFT-S-OFDM) in the UL. As another example, in the time domain, NR DL and UL physical resources are organized into equal-sized 1-ms subframes. A subframe is further divided into multiple slots of equal duration, with each slot including multiple OFDM-based symbols. However, time-frequency resources can be configured much more flexibly for an NR cell than for an LTE cell. For example, rather than a fixed 15-kHz OFDM sub-carrier spacing (SCS) as in LTE, NR SCS can range from 15 to 240 kHz, with even greater SCS considered for future NR releases.
In addition to providing coverage via cells as in LTE, NR networks also provide coverage via “beams.” In general, a downlink (DL, i.e., network to UE) “beam” is a coverage area of a network-transmitted reference signal (RS) that may be measured or monitored by a UE.
Figure 3 shows an exemplary configuration of NR user plane (UP) and control plane (CP) protocol stacks between a UE (310), a gNB (320), and an AMF (320), such as those shown in Figures 1-2. The Physical (PHY), Medium Access Control (MAC), Radio Link Control (RLC), and Packet Data Convergence Protocol (PDCP) layers between the UE and the gNB are common to UP and CP. The PDCP layer provides ciphering/deciphering, integrity protection, sequence numbering, reordering, and duplicate detection for both CP and UP. In addition, PDCP provides header compression and retransmission for UP data.
On the UP side, Internet protocol (IP) packets arrive to the PDCP layer as service data units (SDUs), and PDCP creates protocol data units (PDUs) to deliver to RLC. When each IP packet arrives, PDCP starts a discard timer. When this timer expires, PDCP discards the associated SDU and the corresponding PDU. If the PDU was delivered to RLC, PDCP also indicates the discard to RLC.
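The per-SDU behavior described above can be pictured with a minimal Python sketch. This is an illustrative model only, not the standardized procedure; the class name, the RLC interface (submit and discard methods), and all other identifiers are hypothetical assumptions introduced here for clarity.

```python
import time

class LegacyPdcpEntity:
    """Illustrative model of the per-SDU discard timer: one independent timer is
    started for every PDCP SDU received from higher layers (names are hypothetical)."""

    def __init__(self, discard_timer_s, rlc):
        self.discard_timer_s = discard_timer_s   # configured discard timer value, in seconds
        self.rlc = rlc                           # lower-layer entity; submit()/discard() are assumed methods
        self.buffer = {}                         # sn -> {"sdu": ..., "start": ..., "sent": ...}
        self.next_sn = 0

    def on_sdu_from_higher_layer(self, sdu):
        sn = self.next_sn
        self.next_sn += 1
        # Start an independent discard timer for this SDU upon arrival.
        self.buffer[sn] = {"sdu": sdu, "start": time.monotonic(), "sent": False}
        self.rlc.submit(sn, {"sn": sn, "payload": sdu})   # deliver the PDCP PDU to RLC
        self.buffer[sn]["sent"] = True
        return sn

    def poll_timers(self):
        # Called periodically: discard any SDU (and its PDU) whose timer has expired.
        now = time.monotonic()
        for sn, entry in list(self.buffer.items()):
            if now - entry["start"] >= self.discard_timer_s:
                del self.buffer[sn]
                if entry["sent"]:
                    self.rlc.discard(sn)         # indicate the discard to RLC
```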
The RLC layer transfers PDCP PDUs to the MAC through logical channels (LCH). RLC provides error detection/correction, concatenation, segmentation/reassembly, sequence numbering, and reordering of data transferred to/from the upper layers. If RLC receives a discard indication from PDCP associated with a PDCP PDU, it will discard the corresponding RLC SDU (or any segment thereof) if it has not been sent to lower layers.
The MAC layer provides mapping between LCHs and PHY transport channels, LCH prioritization, multiplexing into or demultiplexing from transport blocks (TBs), hybrid ARQ (HARQ) error correction, and dynamic scheduling (on gNB side). The PHY layer provides transport channel services to the MAC layer and handles transfer over the NR radio interface, e.g., via modulation, coding, antenna mapping, and beam forming.
On UP side, the Service Data Adaptation Protocol (SDAP) layer handles quality-of-service (QoS). This includes mapping between QoS flows and Data Radio Bearers (DRBs) and marking QoS flow identifiers (QFI) in UL and DL packets. On CP side, the non-access stratum (NAS) layer is between UE and AMF and handles UE/gNB authentication, mobility management, and security control.
The RRC layer sits below NAS in the UE but terminates in the gNB rather than the AMF. RRC controls communications between UE and gNB at the radio interface as well as the mobility of a UE between cells in the NG-RAN. RRC also broadcasts system information (SI) and performs establishment, configuration, maintenance, and release of DRBs and Signaling Radio Bearers (SRBs) used by UEs. Additionally, RRC controls addition, modification, and release of carrier aggregation (CA) and dual-connectivity (DC) configurations for UEs. RRC also performs various security functions such as key management.
After a UE is powered ON it will be in the RRC_IDLE state until an RRC connection is established with the network, at which time the UE will transition to RRC_CONNECTED state (e.g., where data transfer can occur). The UE returns to RRC_IDLE after the connection with the network is released. In RRC_IDLE state, the UE’s radio is active on a discontinuous reception (DRX) schedule configured by upper layers. During DRX active periods (also referred to as “DRX On durations”), an RRC_IDLE UE receives SI broadcast in the cell where the UE is camping, performs measurements of neighbor cells to support cell reselection, and monitors a paging channel on PDCCH for pages from 5GC via gNB. An NR UE in RRC_IDLE state is not known to the gNB serving the cell where the UE is camping. However, NR RRC includes an RRC_INACTIVE state in which a UE is known (e.g., via UE context) by the serving gNB. RRC_INACTIVE has some properties similar to a “suspended” condition used in LTE.
Extended Reality (XR) and Cloud Gaming are some of the most important 5G media applications under consideration in the industry. XR is an umbrella term that refers to all real-and-virtual combined environments and human-machine interactions generated by computer technology and wearables. It includes exemplary forms such as Augmented Reality (AR), Mixed Reality (MR), and Virtual Reality (VR), as well as various other types that span or sit between these examples. In the following, the term “XR” also refers to cloud gaming and related applications.
Edge Computing (EC) is generally viewed as an important network architecture enabler for XR. In general, EC facilitates deployment of cloud computing capabilities and service environments close to the cellular radio access network (RAN). It can provide benefits such as lower latency and higher bandwidth for user-plane (UP, e.g., data) traffic, as well as reduced backhaul traffic to the 5G core network (5GC). 3GPP is also studying prospects for several new services on application architecture for enabling Edge Applications, as described further in 3GPP TR 23.758. Edge Applications are expected to take advantage of the low latencies enabled by 5G and EC network architecture to reduce the end-to-end application-level latencies.
From 3GPP Rel-15, the 5G NR radio interface is designed to support applications demanding high throughput and low latency in line with the requirements of XR and Edge Applications in NR networks.
SUMMARY
Even so, there are some problems, issues, and/or difficulties. For example, XR applications generate periodic traffic with variable size. When an application-layer packet enters the Internet, the packet may be transmitted as a single PDU or may be segmented into several PDUs before transmission. One application packet could, for instance, correspond to one or several IP packets.
XR application PDUs may have time constraints, such that one or a set of application PDUs (referred to generically as “application PDUs”) may need to reach the receiver within a certain period of time, e.g., a maximum allowed latency. If not received by this time, the application PDUs are useless and can be discarded. Although the NR PDCP layer currently uses a discard timer, PDCP has no knowledge about how PDCP SDUs (or IP PDUs) map to the application PDUs that need to be delivered within the maximum allowed latency. This can cause various problems such as late delivery of XR application PDUs and waste of network resources delivering application PDUs that ultimately will be discarded by the receiver.
Embodiments of the present disclosure provide specific improvements to communication between UEs and network nodes in a wireless network, such as by providing, enabling, and/or facilitating solutions to overcome exemplary problems summarized above and described in more detail below.
Embodiments include methods (e.g., procedures) for communicating data using a protocol stack that includes a first layer comprising at least one group discard timer. Unless specifically noted otherwise below, these exemplary methods can be performed by a UE (e.g., wireless device) or a network node (e.g., base station, eNB, gNB, ng-eNB, etc., or component thereof) in a wireless network (e.g., E-UTRAN, NG-RAN).
These exemplary methods can include receiving, at the first layer from a higher layer of the protocol stack, a first plurality of SDUs associated with a common maximum latency requirement. These exemplary methods can also include, based on the common maximum latency requirement, initiating at least one group discard timer associated with the first plurality of SDUs. These exemplary methods can also include, upon expiration of the at least one discard timer, discarding the first plurality of SDUs associated with the common latency requirement.
In some embodiments, the first plurality of SDUs is associated with one or more of the following: one or more higher-layer PDUs that have a common maximum latency requirement; a common group discard time; and a single data flow (e.g., an XR data flow).
In some embodiments, the first plurality of SDUs comprises a first SDU and a second SDU received a duration after the first SDU. In such embodiments, the initiating operations can include initiating a first discard timer with a first value upon receipt of the first SDU. In some of these embodiments, an indication of the first value can be received from the higher layer in association with the first SDU.
In some of these embodiments, the first discard timer is associated with the first and second SDUs.
In other of these embodiments, the initiating operations can also include initiating a second discard timer with a second value upon receipt of the second SDU. The second value can be the first value minus the duration. In such embodiments, the discarding operations can include discarding the first SDU upon expiration of the first discard timer and discarding the second SDU upon expiration of the second discard timer. In other of these embodiments, the initiating operations can also include refraining from initiating a second discard timer upon receipt of the second SDU. In such embodiments, the discarding operations can include discarding the first plurality of SDUs upon expiration of the first discard timer. In some variants, the first plurality of SDUs can be associated with a data flow comprising a plurality of higher-layer PDUs. In such variants, the initiating operations can also include refraining from initiating further discard timers upon receipt, after the second SDU, of further SDUs associated with the data flow.
In some embodiments, these exemplary methods can also include forming the first plurality of SDUs into a second plurality of first-layer PDUs; sending the second plurality of first- layer PDUs to a lower layer of the protocol stack; and, upon expiration of the at least one group discard timer, sending to the lower layer respective discard indications associated with the second plurality of first-layer PDUs.
In some embodiments, these exemplary methods can also include determining that the first plurality of SDUs are associated with a common maximum latency requirement based on one of the following:
• identifying a common sequence number in respective packet headers of the first plurality of SDUs; or
• identifying a common sequence number received from the higher layer in association with each of the first plurality of SDUs.
In some embodiments, the first layer can be a PDCP layer and the higher layer can be an application layer, an IP layer, or an SDAP layer. In some embodiments, the lower layer can be an RLC layer.
In some embodiments, the node can be a UE. In such embodiments, these exemplary methods can also include receiving, from a network node in the wireless network, a discard timer configuration including one or more of the following:
• a number of group discard timers to be used;
• relationship between received SDUs and group discard timers;
• a duration for a first group discard timer; and
• relationship between expiration of group discard timers and discarding of SDUs.
In such embodiments, initiating the at least one group discard timer and discarding the first plurality of SDUs can be based on the received discard timer configuration.
In other embodiments, the node can be a network node in a wireless network. In such embodiments, these exemplary methods can also include sending, to a UE, a discard timer configuration including one or more of the above-mentioned items.
In some variants of these embodiments, these exemplary methods can also include determining remaining durations of validity for the respective first plurality of SDUs based on the following for one or more higher-layer PDUs associated with the first plurality of SDUs: a maximum latency requirement, and a time of arrival in the wireless network. Additionally, initiating the at least one group discard timer can be based on the remaining durations of validity.
In some further variants, determining remaining durations of validity for the respective first plurality of SDUs can be based on one of the following (a brief illustrative sketch follows this list):
• a time of departure for the one or more higher-layer PDUs from an application server, and a per-PDU processing time in the higher layer; or
• a comparison of the time of arrival in the wireless network versus an expected time of arrival.
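A hedged sketch of this remaining-validity computation in Python; the function name, the variable names, and the exact combination of terms are assumptions introduced for illustration, not a normative formula.

```python
def remaining_validity_s(max_latency_s, t_arrival_ran_s,
                         t_departure_server_s=None, per_pdu_processing_s=0.0,
                         t_expected_arrival_s=None):
    """Estimate the remaining duration of validity for SDUs of one higher-layer PDU.

    Either the departure time from the application server (plus per-PDU processing
    in the higher layer) or the expected arrival time is used to estimate how much
    of the latency budget was already consumed before the PDU reached the RAN."""
    if t_departure_server_s is not None:
        consumed = (t_arrival_ran_s - t_departure_server_s) + per_pdu_processing_s
    elif t_expected_arrival_s is not None:
        consumed = t_arrival_ran_s - t_expected_arrival_s   # late arrival eats the budget
    else:
        consumed = 0.0
    return max(0.0, max_latency_s - consumed)

# Hypothetical example: 10-ms budget, PDU left the server at t = 0, arrived in the
# RAN at t = 4 ms, 1 ms of higher-layer processing -> 5 ms of validity remain,
# which could then be used as the group discard timer value.
print(remaining_validity_s(0.010, 0.004, t_departure_server_s=0.0,
                           per_pdu_processing_s=0.001))
```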
Other embodiments include UEs (e.g., wireless devices) and network nodes (e.g., base stations, eNBs, gNBs, ng-eNBs, etc., or components thereof) configured to perform operations corresponding to any of the exemplary methods described herein. Other embodiments include non-transitory, computer-readable media storing program instructions that, when executed by processing circuitry, configure such UEs or network nodes to perform operations corresponding to any of the exemplary methods described herein.
These and other embodiments described herein provide flexible and efficient techniques to inform a PDCP layer about which PDCP SDUs are associated with a single application PDU that needs to be delivered according to a maximum latency requirement. By allowing the network (or UE) to discard a set of packets that are no longer valid for an application, network resources are used more efficiently, thereby increasing capacity. Moreover, these techniques facilitate more efficient resource scheduling for delivery of PDCP PDUs. This can better fulfill quality-of-service (QoS) requirements and improve quality of experience (QoE) by avoiding unnecessary resource usage and interference caused by unwanted application data.
These and other objects, features, and advantages of embodiments of the present disclosure will become apparent upon reading the following Detailed Description in view of the Drawings briefly described below.
BRIEF DESCRIPTION OF THE DRAWINGS
Figures 1-2 illustrate two high-level views of an exemplary 5G/NR network architecture.
Figure 3 shows an exemplary configuration of NR UP and CP protocol stacks.
Figure 4 illustrates a comparison of various characteristics or requirements between XR and other 5G applications.
Figure 5 illustrates some exemplary traffic characteristics for XR.
Figure 6 illustrates some problems that application-layer PDUs can encounter between a source and a destination.
Figures 7A-B show a flow diagram of an exemplary method for a node (e.g., UE, wireless device, base station, eNB, gNB, ng-eNB, etc.) in a wireless network (e.g., NG-RAN, E-UTRAN), according to various embodiments of the present disclosure.
Figure 8 shows a block diagram of an exemplary wireless device or UE, according to various embodiments of the present disclosure.
Figure 9 shows a block diagram of an exemplary network node, according to various embodiments of the present disclosure.
Figure 10 shows a block diagram of an exemplary network configured to provide over- the-top (OTT) data services between a host computer and a UE, according to various embodiments of the present disclosure.
DETAILED DESCRIPTION
Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth herein. Rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where a step must necessarily follow or precede another step due to some dependency. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features, and advantages of the enclosed embodiments will be apparent from the following description.
Furthermore, the following terms are used throughout the description given below:
  • Radio Node: As used herein, a “radio node” can be either a radio access node or a wireless device.
  • Node: As used herein, a “node” can be a network node or a wireless device.
  • Radio Access Node: As used herein, a “radio access node” (or equivalently “radio network node,” “radio access network node,” or “RAN node”) can be any node in a radio access network (RAN) of a cellular communications network that operates to wirelessly transmit and/or receive signals. Some examples of a radio access node include, but are not limited to, a base station (e.g., a New Radio (NR) base station (gNB) in a 3GPP Fifth Generation (5G) NR network or an enhanced or evolved Node B (eNB) in a 3GPP LTE network), base station distributed components (e.g., CU and DU), a high-power or macro base station, a low-power base station (e.g., micro, pico, femto, or home base station, or the like), an integrated access backhaul (IAB) node, a transmission point, a remote radio unit (RRU or RRH), and a relay node.
  • Core Network Node: As used herein, a “core network node” is any type of node in a core network. Some examples of a core network node include, e.g., a Mobility Management Entity (MME), a serving gateway (SGW), a Packet Data Network Gateway (P-GW), an access and mobility management function (AMF), a session management function (SMF), a user plane function (UPF), a Service Capability Exposure Function (SCEF), or the like.
  • Wireless Device: As used herein, a “wireless device” (or “WD” for short) is any type of device that has access to (i.e., is served by) a cellular communications network by communicating wirelessly with network nodes and/or other wireless devices. Communicating wirelessly can involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air. Some examples of a wireless device include, but are not limited to, smart phones, mobile phones, cell phones, voice over IP (VoIP) phones, wireless local loop phones, desktop computers, personal digital assistants (PDAs), wireless cameras, gaming consoles or devices, music storage devices, playback appliances, wearable devices, wireless endpoints, mobile stations, tablets, laptops, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart devices, wireless customer-premise equipment (CPE), machine-type communication (MTC) devices, Internet-of-Things (IoT) devices, vehicle-mounted wireless terminal devices, etc. Unless otherwise noted, the term “wireless device” is used interchangeably herein with the term “user equipment” (or “UE” for short).
  • Network Node: As used herein, a “network node” is any node that is either part of the radio access network (e.g., a radio access node or equivalent name discussed above) or of the core network (e.g., a core network node discussed above) of a cellular communications network. Functionally, a network node is equipment capable, configured, arranged, and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the cellular communications network, to enable and/or provide wireless access to the wireless device, and/or to perform other functions (e.g., administration) in the cellular communications network.
Note that the description herein focuses on a 3GPP cellular communications system and, as such, 3GPP terminology or terminology similar to 3GPP terminology is oftentimes used. However, the concepts disclosed herein are not limited to a 3GPP system. Furthermore, although the term “cell” is used herein, it should be understood that (particularly with respect to 5G NR) beams may be used instead of cells and, as such, concepts described herein apply equally to both cells and beams.
As briefly mentioned above, although the NR PDCP layer currently uses a discard timer, PDCP has no knowledge about how PDCP SDUs (or IP PDUs) map to the application PDUs that need to be delivered within the maximum allowed latency. This can cause various problems such as late delivery of XR application PDUs and waste of network resources delivering such PDUs that will ultimately be discarded. This is discussed in more detail below.
Figure 4 illustrates a comparison of various characteristics or requirements for XR and other 5G applications. In particular, Figure 4 shows a comparison of latency, reliability, and bitrate requirements for URLLC, streaming, and EC-based XR. While URLLC services have extreme requirements of 1-ms latency and 10^-5 reliability, EC-based XR can have relaxed requirements of 5-10 ms latency and 10^-4 reliability. However, XR services can require a much higher bit rate than either URLLC or streaming (e.g., due to codec inefficiency). EC-based XR traffic can also be very dynamic, e.g., due to eye/viewport tracking. In general, the traffic can appear to be periodic but with variable file sizes, as illustrated in Figure 5.
When an XR application-layer packet enters the Internet, the packet may be transmitted as a single PDU or may be segmented into several PDUs before transmission. One application packet could, for instance, correspond to one or several IP packets. As shown in Figure 4, XR application PDUs may have a maximum allowed latency of 5-10 ms. If the application PDU(s) is/are not received by this time, the application PDU(s) is/are not of any use and can be discarded.
IP packets reach the PDCP layer with a certain jitter that comes from traversing the Internet and the 5GC. As discussed above, the PDCP layer starts a discard timer each time a PDCP SDU is received from higher layers. Although the NR PDCP layer currently uses a discard timer, PDCP has no knowledge about how PDCP SDUs (or IP PDUs) map to the application PDUs that need to be delivered within the maximum allowed latency.
For example, one XR application PDU can be segmented into 5 IP packets. Each IP packet arrives in- or out-of-sequence to the PDCP layer (as a PDCP SDU) at times X+delta1, X+delta2, etc. Each packet will have a discard timer running with a certain time. At the same time, all 5 PDCP SDUs must be delivered within a defined time budget. If the delay budget for the application packet is consumed, the 5 PDCP SDUs corresponding to the application packet can be discarded regardless of whether the PDCP discard timer is still running.
The value of the current PDCP discard timer cannot depend on the number of PDCP SDUs that may correspond to a single application PDU. This is because the number of PDCP SDUs that correspond to a single application PDU may vary among application PDUs. Setting the PDCP discard timer to a fraction of the maximum latency of the application PDU may also impose a fictitious restriction and possibly lead to unnecessary discards. For example, if the maximum latency is 10 ms and PDCP discard timer is set to 2 ms (i.e., 10 ms/5 PDUs), any single PDCP PDU will be discarded 2 ms after it reaches PDCP. However, it could be that all 5 PDCP PDUs are transmitted at the same time after 7 ms from the reception of the first PDCP SDU. In such case, all 5 PDCP PDUs could be delivered within the 10-ms latency budget but the 2-ms time would cause unnecessary discard.
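The numbers above can be checked with a small illustrative calculation in Python; the arrival offsets and the 7-ms transmission time are hypothetical values consistent with the example, not measured data.

```python
# Hypothetical example: one application PDU segmented into 5 PDCP SDUs, all sharing
# a single 10-ms delivery budget measured from the first arrival at PDCP.
max_latency_ms = 10.0
arrival_offsets_ms = [0.0, 0.5, 1.0, 1.5, 2.0]   # jittered arrivals of the 5 SDUs
transmit_time_ms = 7.0                            # all 5 SDUs could be sent at t = 7 ms

# Naive per-SDU timer set to a fraction of the budget (10 ms / 5 SDUs = 2 ms):
per_sdu_timer_ms = max_latency_ms / len(arrival_offsets_ms)
discarded = [t for t in arrival_offsets_ms if transmit_time_ms - t > per_sdu_timer_ms]
print(f"Per-SDU 2-ms timers discard {len(discarded)} of 5 SDUs before t = 7 ms")

# A single group timer keyed to the common 10-ms budget would discard nothing,
# since t = 7 ms is still within the budget of the application PDU.
print(f"A group timer keeps all 5 SDUs: {transmit_time_ms <= max_latency_ms}")
```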
Figure 6 illustrates certain problems that application-layer PDUs can encounter between a source and a destination. The application (illustrated as a “cloud”) generates one or more first application PDUs at time tO, all of which need to be delivered within the same maximum latency. The application may generate second application PDUs at time tl with a common maximum latency that may be different than the first application PDUs, and/or at a later time than the first application PDUs. These first and second application PDUs may traverse one or more intermediate networks or may be directly connected to a 3GPP network (e.g., 5GC). In any case, these application PDUs may be adapted (e.g. segmented) by lower-layer protocols to better fit the transmission conditions and/or constraints. Challenges for the gNB PDCP layer include identifying which PDUs are associated with application PDUs (e.g., first or second) having the same maximum latency and which ones need to be delivered first. For example, the first set generated at tO may need to be delivered before the second set generated at tl (or vice versa).
A similar difficulty could happen in the UL if more than one PDCP SDU related to one application PDU arrives to the PDCP layer from the application layer. The UE needs to receive UL grants to transmit these PDCP SDUs. If the UE does not receive the UL grants in time or the UL grants are not large enough to meet the maximum latency requirement, there may be PDCP SDUs related to the application PDU still pending for transmission. PDCP would attempt to transmit those pending PDCP SDUs even though they are no longer useful to the receiver. Moreover, transmitting those “late” PDCP SDUs may actually delay other PDCP SDUs related to a second application PDU coming after the first application PDU.
To summarize, the current PDCP timer is not suitable for handling XR services. An XR application may produce one or more application PDUs that must be delivered within a maximum latency (or delay budget). The application and/or lower layers may segment and/or concatenate these application PDUs. PDCP receives IP PDUs (or other protocol, if used) from upper layers, but has no information about how these IP PDUs (PDCP SDUs) map to the application PDUs having the maximum latency. As such, the PDCP layer cannot properly configure the single discard timer for each PDCP SDU.
In some cases (e.g., video decoding), when an application PDU is not delivered within its maximum latency, then subsequent application PDUs (PDCP SDUs) are no longer needed since they are dependent on the early application PDU. The subsequent PDCP SDUs should not be transmitted either, because doing so wastes network and UE resources. However, the current operation of SDU discard timer independent of other SDUs allows these subsequent SDUs to stay in the PDCP buffer even though they are unnecessary from an application perspective.
Accordingly, embodiments of the present disclosure provide flexible and efficient techniques that provide information to the PDCP layer about which PDCP SDUs are associated with a single application PDU that needs to be delivered according to a maximum latency requirement. Such techniques also provide different conditions for a transmitting PDCP entity to trigger a group discard timer for a set of PDCP SDUs associated with one or more application-layer PDUs having a common maximum latency requirement. Furthermore, when the group discard timer expires, the transmitting PDCP entity discards the set of PDCP SDUs and corresponding PDCP PDUs associated with the application PDU and provides a discard indication to lower layers with respect to all corresponding PDCP PDUs.
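A brief sketch of this expiry handling, assuming a Python model of the transmitting PDCP entity; the group bookkeeping and the RLC discard() call are hypothetical names, not standardized interfaces.

```python
class GroupDiscardPdcpTx:
    """Illustrative transmitting PDCP entity using one group discard timer per set of
    SDUs that share a maximum latency requirement (all names are hypothetical)."""

    def __init__(self, rlc):
        self.rlc = rlc            # lower-layer entity; discard() is an assumed method
        self.groups = {}          # group_id -> list of PDCP sequence numbers in the group
        self.pdus = {}            # sn -> buffered PDCP SDU/PDU

    def on_group_timer_expiry(self, group_id):
        # Discard every SDU and corresponding PDU of the group, and tell lower layers.
        for sn in self.groups.pop(group_id, []):
            self.pdus.pop(sn, None)
            self.rlc.discard(sn)  # discard indication for each corresponding PDCP PDU
```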
Embodiments of the present disclosure can provide various benefits and/or advantages. For example, by allowing the network (or UE) to discard a set of packets that are no longer valid for an application, network resources are used more efficiently, thereby increasing capacity. Moreover, embodiments can facilitate more efficient resource management (e.g., scheduling) and/or better planning for delivery of PDCP PDUs within a maximum latency associated with their corresponding application PDUs. This can provide better fulfillment of committed QoS and improvements to quality of experience (QoE) by avoiding unnecessary resource usage and interference caused by unwanted application PDU data.
According to various embodiments, different conditions can be used to trigger a group discard timer for a set of PDCP SDUs associated with one or more application-layer PDUs having a common maximum latency requirement. For example, the timer could be triggered upon reception of a first PDCP SDU identified as being associated with: 1) a set of PDCP SDUs that share a group discard time, and/or 2) one or more application PDUs having a common maximum latency requirement. A UE can be configured by the network (e.g., via RRC) to use a group PDCP discard timer and, when configured, the UE uses the triggering condition above. The network configuration can be responsive to a UE indication that it supports the group PDCP discard timer. In these embodiments, legacy (i.e., non-supporting) UEs would not be configured as such but would continue to use the existing SDU-independent PDCP timer arrangement discussed above. If multiple group discard timer triggering conditions are available, the network can also configure the UE to use a particular one (or more) of the available conditions.
When any (configured) triggering condition is met, the transmitting PDCP entity starts one or more discard timer(s) according to various embodiments. In some embodiments (referred to as “first option”), a transmitting PDCP entity can start multiple discard timers (i.e., one for each PDCP SDU) with each timer having a different initial value corresponding to the arrival time of the associated PDCP SDU. For example, a second PDCP SDU may have a shorter discard timer value than a first PDCP SDU that arrived previously, in accordance with the overall latency requirement of the associated application PDU.
In other embodiments (referred to as “second option”), a transmitting PDCP entity can start one discard timer that will be associated with all PDCP SDUs with a common group discard time or that are associated with one or more application PDUs having a common maximum latency requirement.
In other embodiments (referred to as “third option”), a transmitting PDCP entity can start one discard timer for the first received PDCP SDU associated with an XR flow comprising multiple application PDUs, and refrain from starting a discard timer for subsequently received PDCP SDUs associated with the same XR flow. These embodiments can be beneficial when subsequently-received application PDUs are dependent on an initial application PDU. A specific example is a P-frame in the context of compressed video.
In any of the above embodiments, the duration for the timer(s) can be configured by the network, e.g., via RRC. In some embodiments, multiple timer durations may be used/configured to differentiate PDCP SDUs in terms of latency requirement. This can be particularly beneficial in the case where multiple discard timers are started using different values, such as in the first option discussed above. The particular duration for a discard timer may be set in various ways, such as the maximum latency requirement for the corresponding packet, an amount of remaining latency budget, etc.
When the discard timer expires, the transmitting PDCP entity discards the set of PDCP SDUs and PDCP data PDUs corresponding to the application PDU. In embodiments related to the first option, each PDCP SDU will be discarded based on expiration of its associated discard timer without affecting or being affected by discard timers associated with other PDCP SDUs. In embodiments related to the second option, all PDCP SDUs associated with a common group will be discarded upon expiration of the discard timer started upon receipt of the first PDCP SDU. In embodiments related to the third option, all PDCP SDUs associated with the same XR flow will be discarded upon expiration of the discard timer started upon receipt of the first PDCP SDU. In any event, the PDCP layer can provide discard indications to RLC for all PDCP PDUs corresponding to PDCP SDUs discarded based on timer expiration.
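For concreteness, the following is a minimal, non-normative sketch (in Python, with hypothetical class and method names, not taken from the disclosure or any 3GPP specification) of the logic described above. It models the “second option”: a single group discard timer is started when the first SDU of a group arrives, and on expiry all SDUs of the group are discarded with discard indications toward the lower layer. The “first option” would differ only in that each SDU keeps its own timer instance with a correspondingly shorter initial value, and the “third option” in that only the first SDU of the flow starts a timer.

# Illustrative sketch only (hypothetical names; not from the disclosure or any 3GPP spec).
import time

class GroupDiscardPdcpTx:
    def __init__(self, group_budget_s, lower_layer_discard):
        self.budget = group_budget_s          # common maximum latency for the group/flow
        self.lower_layer_discard = lower_layer_discard   # callback: discard indication toward RLC
        self.pending = []                     # SDUs (and their PDUs) awaiting transmission
        self.group_deadline = None            # absolute deadline set by the first SDU

    def on_sdu_from_upper_layer(self, sdu):
        if self.group_deadline is None:
            # Triggering condition met: first SDU of the group starts the group discard timer.
            self.group_deadline = time.monotonic() + self.budget
        self.pending.append(sdu)              # later SDUs share the already-running timer

    def tick(self):
        # Poll the timer; on expiry, discard all SDUs of the group and notify the lower layer.
        if self.group_deadline is not None and time.monotonic() >= self.group_deadline:
            for sdu in self.pending:
                self.lower_layer_discard(sdu) # one discard indication per corresponding PDU
            self.pending = []
            self.group_deadline = None        # ready for the next group/flow

In a real PDCP implementation the timer would of course be event-driven rather than polled; the sketch only illustrates which event starts the timer and what happens when it expires.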
In some embodiments, the network can configure (e.g., via RRC) discarding behavior in the UE, preferably compatible with the discard timer configuration that may also be network-configured, as discussed above.
The network may need to comply with specific QoS requirements for a flow. When the flow is set up, the NG-RAN may obtain certain QoS parameters indicating performance requirements for the flow, such as maximum latency. When the application PDUs are large enough that they may have been segmented before arriving at the network, the NG-RAN may need assistance to identify which PDCP SDUs are associated with which application PDUs. This information can facilitate delivery of all PDCP SDUs within a given delay budget for the application PDU(s) associated with a service.
In some embodiments, the core network (e.g., 5GC) can provide a sequence number for each IP PDU (or packet) delivered to the PDCP layer. For example, the sequence number can be the same for all packets associated with one application PDU or for all packets that have the same maximum latency requirement.
In other embodiments, the PDCP layer can inspect incoming packet headers to identify a relevant sequence number. For example, if PDCP SDUs are known to be IP packets, the sequence number (SN) and segmentation information can be extracted from each packet to identify whether a packet is a segment and, when all the packets have been received, the order of the reconstructed packet.
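As a purely illustrative example of such inspection, assuming the PDCP SDUs are plain IPv4 packets and that segmentation corresponds to IP fragmentation, the sketch below extracts the IPv4 Identification field and fragmentation information; fragments of one datagram share the same Identification value and can therefore be grouped and re-ordered by fragment offset. The function name and the grouping policy are hypothetical.

# Illustrative only; assumes PDCP SDUs are raw IPv4 packets and uses the IPv4
# Identification/fragmentation fields as the "sequence number and segmentation information".
import struct

def ipv4_fragment_key(sdu: bytes):
    """Return (identification, more_fragments, fragment_offset_bytes), or None if not IPv4."""
    version_ihl = sdu[0]
    if version_ihl >> 4 != 4:
        return None                                  # not IPv4; header inspection not applicable
    ident, flags_frag = struct.unpack_from("!HH", sdu, 4)
    more_fragments = bool(flags_frag & 0x2000)       # MF flag: more fragments follow
    fragment_offset = (flags_frag & 0x1FFF) * 8      # offset of this fragment, in bytes
    return ident, more_fragments, fragment_offset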
In some embodiments, the PDCP layer may be informed of the remaining time left for each incoming IP PDU/PDCP SDU. The remaining time for PDCP SDU j, referred to as Tr,j, can be estimated as:
Tr,j = Tc - Te,j,
Te,j = Ta,j - Td,j - Tpc, where Tc is a common frame-level latency requirement, Te,j is the elapsed time for each IP packet, Ta,j is the arrival time of PDCP SDU j (generally known in the NG-RAN), Td,j is the departure time of the IP PDU (or transport-layer PDU, depending on XR application server configuration), and Tpc is a per-SDU processing time in the SDAP layer. In this case, the information about Td,j can be provided occasionally by an XR Edge application server to the 5GC and forwarded to the NG-RAN. It is expected that Td,j will recur approximately at the frame interval over time, or that its variance will be very small. If signaling of Td,j is not possible (e.g., due to a third-party server), the NG-RAN can also estimate Te,j indirectly, such as by checking the arrival timing of the first SDAP SDU in every SDAP burst and comparing it with a typical frame refresh rate.
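A small numerical sketch of this estimate follows (Python; the helper name and the values are illustrative only, not from the disclosure):

# Sketch of the remaining-time estimate above; all values are in seconds and arbitrary.
def remaining_time(tc, ta_j, td_j, tpc):
    """Tr,j = Tc - Te,j, with Te,j = Ta,j - Td,j - Tpc."""
    te_j = ta_j - td_j - tpc      # elapsed time for PDCP SDU j
    return tc - te_j              # time left within the common latency budget Tc

# Example: 10 ms frame budget, SDU arrives 4 ms after leaving the server,
# 0.5 ms SDAP processing time -> roughly 6.5 ms remain for delivery.
print(remaining_time(tc=0.010, ta_j=1.004, td_j=1.000, tpc=0.0005))   # ~0.0065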
Various features of the embodiments described above correspond to various operations illustrated in Figures 7A-B, which show an exemplary method (e.g., procedures) for communicating data using a protocol stack that includes a first layer comprising at least one group discard timer, according to various embodiments of the present disclosure. In other words, various features of the operations described below correspond to various embodiments described above.
Unless specifically noted otherwise, the exemplary method shown in Figure 7 can be performed by a node in a wireless network (e.g., E-UTRAN, NG-RAN), such as a UE (e.g., wireless device) or a network node (e.g., base station, eNB, gNB, ng-eNB, etc., or component thereof). Although Figure 7 shows specific blocks in a particular order, the operations of the exemplary method can be performed in a different order than shown and can be combined and/or divided into blocks having different functionality than shown. Optional blocks or operations are indicated by dashed lines.
The exemplary method can include the operations of block 720, where the node can receive, at the first layer from a higher layer of the protocol stack, a first plurality of SDUs associated with a common maximum latency requirement. The exemplary method can also include the operations of block 750, where the node can, based on the common maximum latency requirement, initiate at least one group discard timer associated with the first plurality of SDUs. The exemplary method can also include the operations of block 770, where the node can, upon expiration of the at least one group discard timer, discard the first plurality of SDUs associated with the common maximum latency requirement.
In some embodiments, the first plurality of SDUs is associated with one or more of the following: one or more higher-layer PDUs that have a common maximum latency requirement; a common group discard time; and a single data flow (e.g., an XR data flow).
In some embodiments, the first plurality of SDUs comprises a first SDU and a second SDU received a duration after the first SDU. In such embodiments, the initiating operations of block 750 can include the operations of sub-block 751, where the node can initiate a first discard timer with a first value upon receipt of the first SDU. In some of these embodiments, an indication of the first value can be received from the higher layer in association with the first SDU.
In some of these embodiments, the first discard timer is associated with the first and second SDUs. Examples of these embodiments include the “second option” discussed above.
In other of these embodiments, the initiating operations of block 750 can include the operations of sub-block 752, where the node can initiate a second discard timer with a second value upon receipt of the second SDU. The second value can be the first value minus the duration. In such embodiments, the discarding operations of block 770 can include the operations of sub-blocks 771-772, where the node can discard the first SDU upon expiration of the first discard timer and discard the second SDU upon expiration of the second discard timer. Examples of these embodiments include the “first option” discussed above.
In other of these embodiments, the initiating operations of block 750 can also include the operations of sub-block 753, where the node can refrain from initiating a second discard timer upon receipt of the second SDU. In such embodiments, the discarding operations of block 770 can include the operations of sub-block 773, where the node can discard the first plurality of SDUs upon expiration of the first discard timer. Examples of these embodiments include the “third option” discussed above. In some variants, the first plurality of SDUs can be associated with a data flow comprising a plurality of higher-layer PDUs. In such variants, the initiating operations of block 750 can also include the operations of sub-block 754, where the node can refrain from initiating further discard timers upon receipt, after the second SDU, of further SDUs associated with the data flow.
In some embodiments, the exemplary method can also include the operations of blocks 760 and 780. In block 760, the node can form the first plurality of SDUs into a second plurality of first-layer PDUs and send the second plurality of first-layer PDUs to a lower layer of the protocol stack. In block 780, the node can, upon expiration of the at least one group discard timer, send to the lower layer respective discard indications associated with the second plurality of first-layer PDUs.
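One way to realize this bookkeeping is sketched below (Python; hypothetical class, method, and callback names): the first layer records which first-layer PDU sequence numbers were built from each group of SDUs, so that on group timer expiry one discard indication per corresponding PDU can be sent toward the lower layer.

# Illustrative sketch only (hypothetical names).
from collections import defaultdict

class GroupPduTracker:
    def __init__(self, lower_layer):
        self.lower = lower_layer
        self.group_to_pdu_sns = defaultdict(list)   # group id -> first-layer PDU sequence numbers

    def on_pdu_submitted(self, group_id, pdu_sn):
        # Called when an SDU of the group has been formed into a PDU and passed down.
        self.group_to_pdu_sns[group_id].append(pdu_sn)

    def on_group_timer_expired(self, group_id):
        # One discard indication per PDU of the expired group (e.g., toward an RLC entity).
        for pdu_sn in self.group_to_pdu_sns.pop(group_id, []):
            self.lower.discard_indication(pdu_sn)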
In some embodiments, the exemplary method can also include the operations of block 730, where the node can determine that the first plurality of SDUs are associated with a common maximum latency requirement based on one of the following:
• identifying a common sequence number in respective packet headers of the first plurality of SDUs (e.g., IP packet headers); or
• identifying a common sequence number received from the higher layer in association with each of the first plurality of SDUs.
In some embodiments, the first layer can be a PDCP layer and the higher layer can be an application layer, an IP layer, or an SDAP layer. In some embodiments, the lower layer mentioned above can be an RLC layer.
In some embodiments, the node can be a UE. In such embodiments, the exemplary method can also include the operations of block 710a, where the UE can receive, from a network node in the wireless network, a discard timer configuration including one or more of the following:
• a number of group discard timers to be used;
• relationship between received SDUs and group discard timers;
• a duration for a first group discard timer; and
• relationship between expiration of group discard timers and discarding of SDUs.
In such embodiments, initiating the at least one group discard timer (e.g., in block 750) and discarding the first plurality of SDUs (e.g., in block 770) can be based on the received discard timer configuration.
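As a rough, non-normative illustration of such a configuration (this is not an RRC ASN.1 definition; all field names, types, and values are hypothetical), the items listed above could be collected as follows:

# Hypothetical container for the discard timer configuration items listed above.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class GroupDiscardTimerConfig:
    num_group_timers: int                  # number of group discard timers to be used
    sdu_group_to_timer: Dict[int, int]     # relationship between received SDUs (by group) and timers
    timer_durations_ms: List[int]          # duration(s) for the group discard timer(s)
    discard_whole_group_on_expiry: bool    # relationship between timer expiry and SDU discarding

# Example of a configuration a network node might signal to a UE (arbitrary values):
cfg = GroupDiscardTimerConfig(
    num_group_timers=1,
    sdu_group_to_timer={0: 0},
    timer_durations_ms=[10],
    discard_whole_group_on_expiry=True,
)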
In other embodiments, the node can be a network node in a wireless network. In such embodiments, the exemplary method can also include the operations of block 710b, where the network node can send, to a UE, a discard timer configuration including one or more of the above-mentioned items. For example, the network node can use the same or similar discard timer configuration as provided to the UE, thereby facilitating communication interoperability and/or compatibility between the two nodes.
In some embodiments where the node is a network node in the wireless network, the exemplary method can also include the operations of block 740, where the node can determine remaining durations of validity for the respective first plurality of SDUs based on the following for one or more higher-layer PDUs associated with the first plurality of SDUs: a maximum latency requirement, and a time of arrival in the wireless network. Additionally, initiating the at least one group discard timer (e.g., in block 750) is based on the remaining durations of validity.
In some variants, determining remaining durations of validity for the respective first plurality of SDUs (e.g., in block 740) can be based on one of the following:
• a time of departure for the one or more higher-layer PDUs from an application server, and a per-PDU processing time in the higher layer; or
• a comparison of the time of arrival in the wireless network versus an expected time of arrival.
Although various embodiments are described above in terms of methods, techniques, and/or procedures, the person of ordinary skill will readily comprehend that such methods, techniques, and/or procedures can be embodied by various combinations of hardware and software in various systems, communication devices, computing devices, control devices, apparatuses, non-transitory computer-readable media, computer program products, etc.
For example, embodiments can include a UE configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer. The UE can include radio transceiver circuitry configured to communicate with a network node, in a wireless network, that has a compatible protocol stack. The UE can also include processing circuitry operatively coupled to the radio transceiver circuitry. The processing circuitry and the radio transceiver circuitry can be configured to perform operations corresponding to any of the relevant embodiments discussed above in relation to Figure 7.
As another example, embodiments can include a UE configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer. The UE can include various modules with functionality that corresponds to respective operations discussed above in relation to Figure 7. As a specific example, the UE can include:
• a receiving module with functionality that corresponds to the operations of block 720;
• an initiating module with functionality that corresponds to the operations of block 750; and
• a discarding module with functionality that corresponds to the operations of block 770. The functionality of these and other modules of the UE can be implemented by any appropriate combination of hardware and software, such as by processing circuitry, radio transceiver circuitry, and/or executable program instructions stored on a computer-readable medium (e.g., a memory).
As another example, embodiments can include a network node, of a wireless network, that is configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer. The network node can include radio network interface circuitry configured to communicate with a UE that has a compatible protocol stack. The network node can also include processing circuitry operatively coupled to the radio network interface circuitry. The processing circuitry and the radio network interface circuitry can be configured to perform operations corresponding to any of the relevant embodiments discussed above in relation to Figure 7.
As another example, embodiments can include a network node, of a wireless network, that is configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer. The network node can include various modules with functionality that corresponds to respective operations discussed above in relation to Figure 7. As a specific example, the network node can include:
• a receiving module with functionality that corresponds to the operations of block 720;
• an initiating module with functionality that corresponds to the operations of block 750; and
• a discarding module with functionality that corresponds to the operations of block 770. The functionality of these and other modules of the network node can be implemented by any appropriate combination of hardware and software, such as by processing circuitry, radio network interface circuitry, and/or executable program instructions stored on a computer-readable medium (e.g., a memory).
Some specific examples of UE and network node embodiments are discussed in more detail below.
Figure 8 shows a block diagram of an exemplary wireless device or UE 800 (hereinafter referred to as “UE 800”) according to various embodiments of the present disclosure, including those described above with reference to other figures. For example, UE 800 can be configured by execution of instructions, stored on a computer-readable medium, to perform operations corresponding to one or more of the exemplary methods described herein.
UE 800 can include a processor 810 (also referred to as “processing circuitry”) that can be operably connected to a program memory 820 and/or a data memory 830 via a bus 870 that can comprise parallel address and data buses, serial ports, or other methods and/or structures known to those of ordinary skill in the art. Program memory 820 can store software code, programs, and/or instructions (collectively shown as computer program product (CPP) 821 in Figure 8) that, when executed by processor 810, can configure and/or facilitate UE 800 to perform various operations, including operations corresponding to various exemplary methods described herein. As part of or in addition to such operations, execution of such instructions can configure and/or facilitate UE 800 to communicate using one or more wired or wireless communication protocols, including one or more wireless communication protocols standardized by 3GPP, 3GPP2, or IEEE, such as those commonly known as 5G/NR, LTE, LTE-A, UMTS, HSPA, GSM, GPRS, EDGE, 1xRTT, CDMA2000, 802.11 WiFi, HDMI, USB, Firewire, etc., or any other current or future protocols that can be utilized in conjunction with radio transceiver 840, user interface 850, and/or control interface 860.
As another example, processor 810 can execute program code stored in program memory 820 that corresponds to MAC, RLC, PDCP, SDAP, RRC, and NAS layer protocols standardized by 3GPP (e.g., for NR and/or LTE). As a further example, processor 810 can execute program code stored in program memory 820 that, together with radio transceiver 840, implements corresponding PHY layer protocols, such as Orthogonal Frequency Division Multiplexing (OFDM), Orthogonal Frequency Division Multiple Access (OFDMA), and Single-Carrier Frequency Division Multiple Access (SC-FDMA). As another example, processor 810 can execute program code stored in program memory 820 that, together with radio transceiver 840, implements device-to-device (D2D) communications with other compatible devices and/or UEs.
Program memory 820 can also include software code executed by processor 810 to control the functions of UE 800, including configuring and controlling various components such as radio transceiver 840, user interface 850, and/or control interface 860. Program memory 820 can also comprise one or more application programs and/or modules comprising computer-executable instructions embodying any of the exemplary methods described herein. Such software code can be specified or written using any known or future developed programming language, such as e.g., Java, C++, C, Objective C, HTML, XHTML, machine code, and Assembler, as long as the desired functionality, e.g., as defined by the implemented method steps, is preserved. In addition, or as an alternative, program memory 820 can comprise an external storage arrangement (not shown) remote from UE 800, from which the instructions can be downloaded into program memory 820 located within or removably coupled to UE 800, so as to enable execution of such instructions.
Data memory 830 can include memory area for processor 810 to store variables used in protocols, configuration, control, and other functions of UE 800, including operations corresponding to, or comprising, any of the exemplary methods described herein. Moreover, program memory 820 and/or data memory 830 can include non-volatile memory (e.g., flash memory), volatile memory (e.g., static or dynamic RAM), or a combination thereof. Furthermore, data memory 830 can comprise a memory slot by which removable memory cards in one or more formats (e.g., SD Card, Memory Stick, Compact Flash, etc.) can be inserted and removed.
Persons of ordinary skill will recognize that processor 810 can include multiple individual processors (including, e.g., multi-core processors), each of which implements a portion of the functionality described above. In such cases, multiple individual processors can be commonly connected to program memory 820 and data memory 830 or individually connected to multiple individual program memories and/or data memories. More generally, persons of ordinary skill in the art will recognize that various protocols and other functions of UE 800 can be implemented in many different computer arrangements comprising different combinations of hardware and software including, but not limited to, application processors, signal processors, general-purpose processors, multi-core processors, ASICs, fixed and/or programmable digital circuitry, analog baseband circuitry, radio-frequency circuitry, software, firmware, and middleware.
Radio transceiver 840 can include radio-frequency transmitter and/or receiver functionality that facilitates the UE 800 to communicate with other equipment supporting like wireless communication standards and/or protocols. In some exemplary embodiments, the radio transceiver 840 includes one or more transmitters and one or more receivers that enable UE 800 to communicate according to various protocols and/or methods proposed for standardization by 3GPP and/or other standards bodies. For example, such functionality can operate cooperatively with processor 810 to implement a PHY layer based on OFDM, OFDMA, and/or SC-FDMA technologies, such as described herein with respect to other figures.
In some embodiments, radio transceiver 840 includes one or more transmitters and one or more receivers that can facilitate the UE 800 to communicate with various LTE, LTE-Advanced (LTE-A), and/or NR networks according to standards promulgated by 3GPP. In some exemplary embodiments of the present disclosure, the radio transceiver 840 includes circuitry, firmware, etc. necessary for the UE 800 to communicate with various NR, NR-U, LTE, LTE-A, LTE-LAA, UMTS, and/or GSM/EDGE networks, also according to 3GPP standards. In some embodiments, radio transceiver 840 can include circuitry supporting D2D communications between UE 800 and other compatible devices.
In some embodiments, radio transceiver 840 includes circuitry, firmware, etc. necessary for the UE 800 to communicate with various CDMA2000 networks, according to 3GPP2 standards. In some embodiments, the radio transceiver 840 can be capable of communicating using radio technologies that operate in unlicensed frequency bands, such as IEEE 802.11 WiFi that operates using frequencies in the regions of 2.4, 5.6, and/or 60 GHz. In some embodiments, radio transceiver 840 can include a transceiver that is capable of wired communication, such as by using IEEE 802.3 Ethernet technology. The functionality particular to each of these embodiments can be coupled with and/or controlled by other circuitry in the UE 800, such as the processor 810 executing program code stored in program memory 820 in conjunction with, and/or supported by, data memory 830.
User interface 850 can take various forms depending on the particular embodiment of UE 800, or can be absent from UE 800 entirely. In some embodiments, user interface 850 can comprise a microphone, a loudspeaker, slidable buttons, depressible buttons, a display, a touchscreen display, a mechanical or virtual keypad, a mechanical or virtual keyboard, and/or any other user-interface features commonly found on mobile phones. In other embodiments, the UE 800 can comprise a tablet computing device including a larger touchscreen display. In such embodiments, one or more of the mechanical features of the user interface 850 can be replaced by comparable or functionally equivalent virtual user interface features (e.g., virtual keypad, virtual buttons, etc.) implemented using the touchscreen display, as familiar to persons of ordinary skill in the art. In other embodiments, the UE 800 can be a digital computing device, such as a laptop computer, desktop computer, workstation, etc. that comprises a mechanical keyboard that can be integrated, detached, or detachable depending on the particular exemplary embodiment. Such a digital computing device can also comprise a touch screen display. Many exemplary embodiments of the UE 800 having a touch screen display are capable of receiving user inputs, such as inputs related to exemplary methods described herein or otherwise known to persons of ordinary skill.
In some embodiments, UE 800 can include an orientation sensor, which can be used in various ways by features and functions of UE 800. For example, the UE 800 can use outputs of the orientation sensor to determine when a user has changed the physical orientation of the UE 800's touch screen display. An indication signal from the orientation sensor can be available to any application program executing on the UE 800, such that an application program can change the orientation of a screen display (e.g., from portrait to landscape) automatically when the indication signal indicates an approximate 90-degree change in physical orientation of the device. In this exemplary manner, the application program can maintain the screen display in a manner that is readable by the user, regardless of the physical orientation of the device. In addition, the output of the orientation sensor can be used in conjunction with various exemplary embodiments of the present disclosure.
A control interface 860 of the UE 800 can take various forms depending on the particular exemplary embodiment of UE 800 and of the particular interface requirements of other devices that the UE 800 is intended to communicate with and/or control. For example, the control interface 860 can comprise an RS-232 interface, a USB interface, an HDMI interface, a Bluetooth interface, an IEEE (“Firewire”) interface, an I2C interface, a PCMCIA interface, or the like. In some exemplary embodiments of the present disclosure, control interface 860 can comprise an IEEE 802.3 Ethernet interface such as described above. In some exemplary embodiments of the present disclosure, the control interface 860 can comprise analog interface circuitry including, for example, one or more digital-to-analog converters (DACs) and/or analog-to-digital converters (ADCs).
Persons of ordinary skill in the art can recognize the above list of features, interfaces, and radio-frequency communication standards is merely exemplary, and not limiting to the scope of the present disclosure. In other words, the UE 800 can comprise more functionality than is shown in Figure 8 including, for example, a video and/or still-image camera, microphone, media player and/or recorder, etc. Moreover, radio transceiver 840 can include circuitry necessary to communicate using additional radio-frequency communication standards including Bluetooth, GPS, and/or others. Moreover, the processor 810 can execute software code stored in the program memory 820 to control such additional functionality. For example, directional velocity and/or position estimates output from a GPS receiver can be available to any application program executing on the UE 800, including any program code corresponding to and/or embodying any exemplary embodiments (e.g., of methods) described herein.
Figure 9 shows a block diagram of an exemplary network node 900 according to various embodiments of the present disclosure, including those described above with reference to other figures. For example, exemplary network node 900 can be configured by execution of instructions, stored on a computer-readable medium, to perform operations corresponding to one or more of the exemplary methods described herein. In some exemplary embodiments, network node 900 can comprise a base station, eNB, gNB, or one or more components thereof. For example, network node 900 can be configured as a central unit (CU) and one or more distributed units (DUs) according to NR gNB architectures specified by 3GPP. More generally, the functionally of network node 900 can be distributed across various physical devices and/or functional units, modules, etc.
Network node 900 can include processor 910 (also referred to as “processing circuitry”) that is operably connected to program memory 920 and data memory 930 via bus 970, which can include parallel address and data buses, serial ports, or other methods and/or structures known to those of ordinary skill in the art.
Program memory 920 can store software code, programs, and/or instructions (collectively shown as computer program product (CPP) 921 in Figure 9) that, when executed by processor 910, can configure and/or facilitate network node 900 to perform various operations, including operations corresponding to various exemplary methods described herein. As part of and/or in addition to such operations, program memory 920 can also include software code executed by processor 910 that can configure and/or facilitate network node 900 to communicate with one or more other UEs or network nodes using other protocols or protocol layers, such as one or more of the PHY, MAC, RLC, PDCP, SDAP, RRC, and NAS layer protocols standardized by 3GPP for LTE, LTE-A, and/or NR, or any other higher-layer protocols utilized in conjunction with radio network interface 940 and/or core network interface 950. By way of example, core network interface 950 can comprise the S1 or NG interface and radio network interface 940 can comprise the Uu interface, as standardized by 3GPP. Program memory 920 can also comprise software code executed by processor 910 to control the functions of network node 900, including configuring and controlling various components such as radio network interface 940 and core network interface 950.
Data memory 930 can comprise memory area for processor 910 to store variables used in protocols, configuration, control, and other functions of network node 900. As such, program memory 920 and data memory 930 can comprise non-volatile memory (e.g., flash memory, hard disk, etc.), volatile memory (e.g., static or dynamic RAM), network-based (e.g., “cloud”) storage, or a combination thereof. Persons of ordinary skill in the art will recognize that processor 910 can include multiple individual processors (not shown), each of which implements a portion of the functionality described above. In such cases, multiple individual processors may be commonly connected to program memory 920 and data memory 930 or individually connected to multiple individual program memories and/or data memories. More generally, persons of ordinary skill will recognize that various protocols and other functions of network node 900 may be implemented in many different combinations of hardware and software including, but not limited to, application processors, signal processors, general-purpose processors, multi-core processors, ASICs, fixed digital circuitry, programmable digital circuitry, analog baseband circuitry, radio-frequency circuitry, software, firmware, and middleware.
Radio network interface 940 can comprise transmitters, receivers, signal processors, ASICs, antennas, beamforming units, and other circuitry that enables network node 900 to communicate with other equipment such as, in some embodiments, a plurality of compatible user equipment (UE). In some embodiments, interface 940 can also enable network node 900 to communicate with compatible satellites of a satellite communication network. In some exemplary embodiments, radio network interface 940 can comprise various protocols or protocol layers, such as the PHY, MAC, RLC, PDCP, and/or RRC layer protocols standardized by 3GPP for LTE, LTE-A, LTE-LAA, NR, NR-U, etc., or improvements thereto such as described hereinabove; or any other higher-layer protocols utilized in conjunction with radio network interface 940. According to further exemplary embodiments of the present disclosure, the radio network interface 940 can comprise a PHY layer based on OFDM, OFDMA, and/or SC-FDMA technologies. In some embodiments, the functionality of such a PHY layer can be provided cooperatively by radio network interface 940 and processor 910 (including program code in memory 920).
Core network interface 950 can comprise transmitters, receivers, and other circuitry that enables network node 900 to communicate with other equipment in a core network such as, in some embodiments, circuit-switched (CS) and/or packet-switched (PS) core networks. In some embodiments, core network interface 950 can comprise the S1 interface standardized by 3GPP. In some embodiments, core network interface 950 can comprise the NG interface standardized by 3GPP. In some exemplary embodiments, core network interface 950 can comprise one or more interfaces to one or more AMFs, SMFs, SGWs, MMEs, SGSNs, GGSNs, and other physical devices that comprise functionality found in GERAN, UTRAN, EPC, 5GC, and CDMA2000 core networks that are known to persons of ordinary skill in the art. In some embodiments, these one or more interfaces may be multiplexed together on a single physical interface. In some embodiments, lower layers of core network interface 950 can comprise one or more of asynchronous transfer mode (ATM), Internet Protocol (IP)-over-Ethernet, SDH over optical fiber, T1/E1/PDH over a copper wire, microwave radio, or other wired or wireless transmission technologies known to those of ordinary skill in the art.
In some embodiments, network node 900 can include hardware and/or software that configures and/or facilitates network node 900 to communicate with other network nodes in a RAN, such as with other eNBs, gNBs, ng-eNBs, en-gNBs, IAB nodes, etc. Such hardware and/or software can be part of radio network interface 940 and/or core network interface 950, or it can be a separate functional unit (not shown). For example, such hardware and/or software can configure and/or facilitate network node 900 to communicate with other RAN nodes via the X2 or Xn interfaces, as standardized by 3GPP.
OA&M interface 960 can comprise transmitters, receivers, and other circuitry that enables network node 900 to communicate with external networks, computers, databases, and the like for purposes of operations, administration, and maintenance of network node 900 or other network equipment operably connected thereto. Lower layers of OA&M interface 960 can comprise one or more of asynchronous transfer mode (ATM), Internet Protocol (IP)-over-Ethernet, SDH over optical fiber, T1/E1/PDH over a copper wire, microwave radio, or other wired or wireless transmission technologies known to those of ordinary skill in the art. Moreover, in some embodiments, one or more of radio network interface 940, core network interface 950, and OA&M interface 960 may be multiplexed together on a single physical interface, such as the examples listed above.
Figure 10 is a block diagram of an exemplary communication network configured to provide over-the-top (OTT) data services between a host computer and a user equipment (UE), according to one or more exemplary embodiments of the present disclosure. UE 1010 can communicate with radio access network (RAN) 1030 over radio interface 1020, which can be based on protocols described above including, e.g., LTE, LTE-A, and 5G/NR. For example, UE 1010 can be configured and/or arranged as shown in other figures discussed above.
RAN 1030 can include one or more network nodes (e.g., base stations, eNBs, gNBs, controllers, etc.) operable in licensed spectrum bands, as well as one or more network nodes operable in unlicensed spectrum (using, e.g., LAA or NR-U technology), such as a 2.4-GHz band and/or a 5-GHz band. In such cases, the network nodes comprising RAN 1030 can cooperatively operate using licensed and unlicensed spectrum. In some embodiments, RAN 1030 can include, or be capable of communication with, one or more satellites comprising a satellite access network.
RAN 1030 can further communicate with core network 1040 according to various protocols and interfaces described above. For example, one or more apparatus (e.g., base stations, eNBs, gNBs, etc.) comprising RAN 1030 can communicate to core network 1040 via core network interface 1050 described above. In some exemplary embodiments, RAN 1030 and core network 1040 can be configured and/or arranged as shown in other figures discussed above. For example, eNBs comprising an evolved UTRAN (E-UTRAN) 1030 can communicate with an evolved packet core (EPC) network 1040 via an S1 interface. As another example, gNBs and ng-eNBs comprising an NG-RAN 1030 can communicate with a 5GC network 1040 via an NG interface.
Core network 1040 can further communicate with an external packet data network, illustrated in Figure 10 as Internet 1050, according to various protocols and interfaces known to persons of ordinary skill in the art. Many other devices and/or networks can also connect to and communicate via Internet 1050, such as exemplary host computer 1060. In some exemplary embodiments, host computer 1060 can communicate with UE 1010 using Internet 1050, core network 1040, and RAN 1030 as intermediaries. Host computer 1060 can be a server (e.g., an application server) under ownership and/or control of a service provider. Host computer 1060 can be operated by the OTT service provider or by another entity on the service provider’s behalf.
For example, host computer 1060 can provide an over-the-top (OTT) packet data service to UE 1010 using facilities of core network 1040 and RAN 1030, which can be unaware of the routing of an outgoing/incoming communication to/from host computer 1060. Similarly, host computer 1060 can be unaware of routing of a transmission from the host computer to the UE, e.g., the routing of the transmission through RAN 1030. Various OTT services can be provided using the exemplary configuration shown in Figure 10 including, e.g., streaming (unidirectional) audio and/or video from host computer to UE, interactive (bidirectional) audio and/or video between host computer and UE, interactive messaging or social communication, interactive virtual or augmented reality, cloud gaming, etc.
The exemplary network shown in Figure 10 can also include measurement procedures and/or sensors that monitor network performance metrics including data rate, latency and other factors that are improved by exemplary embodiments disclosed herein. The exemplary network can also include functionality for reconfiguring the link between the endpoints (e.g., host computer and UE) in response to variations in the measurement results. Such procedures and functionalities are known and practiced; if the network hides or abstracts the radio interface from the OTT service provider, measurements can be facilitated by proprietary signaling between the UE and the host computer.
The exemplary embodiments described herein provide flexible and efficient techniques to inform a PDCP layer about which PDCP SDUs are associated with a single application PDU that needs to be delivered according to a maximum latency requirement. By allowing the network (or UE) to discard a set of packets that are no longer valid for an application (e.g., XR), network resources are used more efficiently, thereby increasing capacity. Moreover, these techniques facilitate more efficient resource scheduling for delivery of PDCP PDUs. This can better fulfill QoS requirements and improve QoE by avoiding unnecessary resource usage and interference caused by unwanted application data.
When used in NR UEs (e.g., UE 1010) and gNBs (e.g., gNBs comprising RAN 1030), these improvements can increase the use of OTT data services - including XR applications - by providing better QoS/QoE to OTT service providers and end users. Consequently, this increases the benefits and/or value of such data services to end users and OTT service providers. The foregoing merely illustrates the principles of the disclosure. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements, and procedures that, although not explicitly shown or described herein, embody the principles of the disclosure and can thus be within the spirit and scope of the disclosure. Various exemplary embodiments can be used together with one another, as well as interchangeably therewith, as should be understood by those having ordinary skill in the art.
The term unit, as used herein, can have conventional meaning in the field of electronics, electrical devices and/or electronic devices and can include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those described herein.
Any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units. These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include Digital Signal Processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as Read Only Memory (ROM), Random Access Memory (RAM), cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein. In some implementations, the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.
As described herein, device and/or apparatus can be represented by a semiconductor chip, a chipset, or a (hardware) module comprising such chip or chipset; this, however, does not exclude the possibility that a functionality of a device or apparatus, instead of being hardware implemented, be implemented as a software module such as a computer program or a computer program product comprising executable software code portions for execution or being run on a processor. Furthermore, functionality of a device or apparatus can be implemented by any combination of hardware and software. A device or apparatus can also be regarded as an assembly of multiple devices and/or apparatuses, whether functionally in cooperation with or independently of each other. Moreover, devices and apparatuses can be implemented in a distributed fashion throughout a system, so long as the functionality of the device or apparatus is preserved. Such and similar principles are considered as known to a skilled person.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In addition, certain terms used in the present disclosure, including the specification and drawings, can be used synonymously in certain instances (e.g., “data” and “information”). It should be understood that, although these terms (and/or other terms that can be synonymous to one another) can be used synonymously herein, there can be instances when such words can be intended to not be used synonymously. Further, to the extent that prior art knowledge has not been explicitly incorporated by reference herein above, it is explicitly incorporated herein in its entirety. All publications referenced are incorporated herein by reference in their entireties.
Embodiments of the techniques and apparatus described herein also include, but are not limited to, the following enumerated examples:
A1. A method, for a node in a wireless network, to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, the method comprising: receiving, at the first layer from a higher layer, a first plurality of service data units (SDUs) associated with a common maximum latency requirement; based on the common maximum latency requirement, initiating at least one group discard timer associated with the first plurality of SDUs; upon expiration of the at least one discard timer, discarding the first plurality of SDUs associated with the common maximum latency requirement.
A2. The method of embodiment A1, wherein the first plurality of SDUs is associated with one or more of the following: one or more higher-layer PDUs that have a common maximum latency requirement; a group discard time; and an extended reality (XR) application.
A3. The method of any of embodiments A1-A2, wherein: the first plurality comprises a first SDU and a second SDU received a duration after the first SDU; and initiating the at least one group discard timer comprises initiating a first discard timer with a first value upon receipt of the first SDU.
A4. The method of embodiment A3, wherein an indication of the first value is received from the higher layer in association with the first SDU.
A5. The method of any of embodiments A3-A4, wherein: initiating the at least one group discard timer further comprises initiating a second discard timer with a second value upon receipt of the second SDU, the second value being the first value minus the duration; and discarding comprises: discarding the first SDU upon expiration of the first discard timer; and discarding the second SDU upon expiration of the second discard timer.
A6. The method of any of embodiments A3-A4, wherein: initiating the at least one group discard timer further comprises refraining from initiating a second discard timer upon receipt of the second SDU; and discarding comprises discarding the first plurality of SDUs upon expiration of the first discard timer.
A7. The method of embodiment A6, wherein: the first plurality of SDUs are associated with an extended reality (XR) data flow comprising a plurality of higher-layer PDUs; and initiating the at least one group discard timer further comprises refraining from initiating further discard timers upon receipt, after the second SDU, of further SDUs associated with the XR data flow.
A8. The method of any of embodiments A1-A6, further comprising: forming the first plurality of SDUs into a second plurality of first-layer PDUs; sending the second plurality of first-layer PDUs to a lower layer; and upon expiration of the at least one group discard timer, sending, to the lower layer, respective discard indications associated with the second plurality of first-layer PDUs.

A9. The method of any of embodiments A1-A8, further comprising determining that the first plurality of SDUs are associated with a common maximum latency requirement based on a common sequence number identified by one of the following: inspecting packet headers of the first plurality of SDUs; or receiving the common sequence number from the higher layer in association with each of the first plurality of SDUs.
A10. The method of any of embodiments A1-A9, wherein: the first layer is a Packet Data Convergence Protocol (PDCP) layer; and the higher layer is an Internet Protocol (IP) layer or a Service Data Adaptation Protocol (SDAP) layer.
A11. The method of any of embodiments A1-A10, wherein: the node is a user equipment (UE); and the method further comprises receiving, from a network node in the wireless network, a discard timer configuration including one or more of the following: a number of group discard timers to be used; relationship between received SDUs and group discard timers; a duration for a first group discard timer; and relationship between expiration of group discard timers and discarding of SDUs.
A12. The method of any of embodiments A1-A10, wherein: the node is a network node in the wireless network; and the method further comprises sending, to a user equipment (UE), a discard timer configuration including one or more of the following: a number of group discard timers to be used; relationship between received SDUs and group discard timers; a duration for a first group discard timer; and relationship between expiration of group discard timers and discarding of SDUs.
B1. A user equipment (UE) configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, the UE comprising: radio transceiver circuitry configured to communicate with a network node, in a wireless network, that has a compatible protocol stack; and processing circuitry operatively coupled to the radio transceiver circuitry, whereby the processing circuitry and the radio transceiver circuitry are configured to perform operations corresponding to any of the methods of embodiments A1-A12.
B2. A user equipment (UE) configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, the UE being further arranged to perform operations corresponding to any of the methods of embodiments A1-A12.
B3. A non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry of a user equipment (UE) configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, configure the UE to perform operations corresponding to any of the methods of embodiments A1-A12.
B4. A computer program product comprising computer-executable instructions that, when executed by processing circuitry of a user equipment (UE) configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, configure the UE to perform operations corresponding to any of the methods of embodiments A1-A12.
C1. A network node, of a wireless network, configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, the network node comprising: radio network interface circuitry configured to communicate with a user equipment (UE) that has a compatible protocol stack; and processing circuitry operatively coupled to the radio network interface circuitry, whereby the processing circuitry and the radio network interface circuitry are configured to perform operations corresponding to any of the methods of embodiments A1-A12.
C2. A network node, of a wireless network, configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, the network node being further arranged to perform operations corresponding to any of the methods of embodiments A1-A12.

C3. A non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry of a network node configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, configure the network node to perform operations corresponding to any of the methods of embodiments A1-A12.
C4. A computer program product comprising computer-executable instructions that, when executed by processing circuitry of a network node configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, configure the network node to perform operations corresponding to any of the methods of embodiments A1-A12.

Claims

1. A method, for a node in a wireless network, to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, the method comprising: receiving (720), at the first layer from a higher layer of the protocol stack, a first plurality of service data units, SDUs, associated with a common maximum latency requirement; based on the common maximum latency requirement, initiating (750) at least one group discard timer associated with the first plurality of SDUs; and upon expiration of the at least one discard timer, discarding (770) the first plurality of SDUs associated with the common maximum latency requirement.
2. The method of claim 1, wherein the first plurality of SDUs is associated with one or more of the following: one or more higher-layer protocol data units, PDUs, that have a common maximum latency requirement; a common group discard time; and a single data flow.
3. The method of any of claims 1-2, wherein: the first plurality of SDUs comprises a first SDU and a second SDU received a duration after the first SDU; and initiating (750) the at least one group discard timer comprises initiating (751) a first discard timer with a first value upon receipt of the first SDU.
4. The method of claim 3, wherein an indication of the first value is received from the higher layer in association with the first SDU.
5. The method of any of claims 3-4, wherein the first discard timer is associated with the first and second SDUs.
6. The method of any of claims 3-4, wherein: initiating (750) the at least one group discard timer further comprises initiating (752) a second discard timer with a second value upon receipt of the second SDU, the second value being the first value minus the duration; and discarding (770) the first plurality of SDUs comprises: discarding (771) the first SDU upon expiration of the first discard timer; and discarding (772) the second SDU upon expiration of the second discard timer.
7. The method of any of claims 3-5, wherein: initiating (750) the at least one group discard timer further comprises refraining from initiating (753) a second discard timer upon receipt of the second SDU; and discarding (770) the first plurality of SDUs comprises discarding (773) the first and second SDUs upon expiration of the first discard timer.
8. The method of claim 7, wherein: the first plurality of SDUs are associated with a data flow comprising a plurality of higher-layer protocol data units, PDUs; and initiating (750) the at least one group discard timer further comprises refraining from initiating (754) further discard timers upon receipt, after the second SDU, of further SDUs associated with the data flow.
9. The method of any of claims 1-7, further comprising: forming (760) the first plurality of SDUs into a second plurality of first-layer protocol data units, PDUs, and sending the second plurality of first-layer PDUs to a lower layer of the protocol stack; and upon expiration of the at least one group discard timer, sending (780), to the lower layer, respective discard indications associated with the second plurality of first-layer PDUs.
10. The method of any of claims 1-9, further comprising determining (730) that the first plurality of SDUs is associated with a common maximum latency requirement based on one of the following: identifying a common sequence number in respective packet headers of the first plurality of SDUs; or identifying a common sequence number received from the higher layer in association with each of the first plurality of SDUs.
11. The method of any of claims 1-10, wherein: the first layer is a Packet Data Convergence Protocol, PDCP, layer; and
the higher layer is one of the following: an application layer; an Internet Protocol, IP, layer; or a Service Data Adaptation Protocol, SDAP, layer.
12. The method of any of claims 1-11, wherein the node is a network node in the wireless network.
13. The method of claim 12, further comprising sending (710b), to a user equipment, UE, a discard timer configuration including one or more of the following: a number of group discard timers to be used; a relationship between received SDUs and group discard timers; one or more discard timer durations; and a relationship between expiration of group discard timers and discarding of SDUs.
14. The method of any of claims 12-13, wherein: the method further comprises determining (740) remaining durations of validity for the respective first plurality of SDUs based on the following for one or more higher-layer protocol data units, PDUs, associated with the first plurality of SDUs: a maximum latency requirement, and a time of arrival in the wireless network; and initiating (750) the at least one group discard timer is based on the remaining durations of validity.
15. The method of claim 14, wherein determining (740) the remaining durations of validity is further based on one of the following: a time of departure for the one or more higher-layer PDUs from an application server, and a per-PDU processing time in the higher layer; or a comparison of the time of arrival in the wireless network versus an expected time of arrival.
16. The method of any of claims 1-11, wherein: the node is a user equipment, UE; the method further comprises receiving (710a), from a network node in the wireless network, a discard timer configuration including one or more of the following: a number of group discard timers to be used; a relationship between received SDUs and group discard timers; one or more discard timer durations; and a relationship between expiration of group discard timers and discarding of SDUs; and initiating (750) the at least one group discard timer and discarding (770) the first plurality of SDUs are based on the received discard timer configuration.
17. A user equipment, UE (205, 310, 800, 1010) configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, the UE being further configured to: receive, at the first layer from a higher layer of the protocol stack, a first plurality of service data units, SDUs, associated with a common maximum latency requirement; based on the common maximum latency requirement, initiate at least one group discard timer associated with the first plurality of SDUs; and upon expiration of the at least one group discard timer, discard the first plurality of SDUs associated with the common maximum latency requirement.
18. The UE of claim 17, being further configured to perform operations corresponding to any of the methods of claims 2-11 and 16.
19. A non-transitory, computer-readable medium (820) storing computer-executable instructions that, when executed by processing circuitry (810) of a user equipment, UE (205, 310, 800, 1010) configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, configure the UE to perform operations corresponding to any of the methods of claims 1-11 and 16.
20. A computer program product (821) comprising computer-executable instructions that, when executed by processing circuitry (810) of a user equipment, UE (205, 310, 800, 1010) configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, configure the UE to perform operations corresponding to any of the methods of claims 1-11 and 16.
21. A network node (100, 150, 210, 220, 320, 900) of a wireless network (199, 299, 1030), the network node being configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, the network node being further configured to: receive, at the first layer from a higher layer of the protocol stack, a first plurality of service data units, SDUs, associated with a common maximum latency requirement; based on the common maximum latency requirement, initiate at least one group discard timer associated with the first plurality of SDUs; and upon expiration of the at least one group discard timer, discard the first plurality of SDUs associated with the common maximum latency requirement.
22. The network node of claim 21, being further configured to perform operations corresponding to any of the methods of claims 2-15.
23. A non-transitory, computer-readable medium (920) storing computer-executable instructions that, when executed by processing circuitry (910) of a network node (100, 150, 210, 220, 320, 900), of a wireless network (199, 299, 1030), that is configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, configure the network node to perform operations corresponding to any of the methods of claims 1-15.
24. A computer program product (921) comprising computer-executable instructions that, when executed by processing circuitry (910) of a network node (100, 150, 210, 220, 320, 900), of a wireless network (199, 299, 1030), that is configured to communicate data using a protocol stack that includes a first layer comprising at least one group discard timer, configure the network node to perform operations corresponding to any of the methods of claims 1-15.
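Note on implementation (illustrative only, not part of the claims): the helpers below sketch, in Python, the timer-value handling of claims 6, 14 and 15. The function names remaining_validity and second_timer_value are hypothetical, and the exact composition of the latency budget (departure time plus higher-layer processing time versus time of arrival in the wireless network) is an assumption made for illustration.

import time
from typing import Optional

def remaining_validity(max_latency_s: float,
                       arrival_in_network_s: float,
                       departure_from_server_s: Optional[float] = None,
                       higher_layer_processing_s: float = 0.0,
                       now_s: Optional[float] = None) -> float:
    # Remaining duration of validity for an SDU: the associated PDU's maximum
    # latency budget reduced by the time already consumed before the SDU
    # reached the first layer (assumed composition of the budget).
    now_s = time.time() if now_s is None else now_s
    if departure_from_server_s is not None:
        consumed = (now_s - departure_from_server_s) + higher_layer_processing_s
    else:
        consumed = now_s - arrival_in_network_s
    return max(0.0, max_latency_s - consumed)

def second_timer_value(first_timer_value_s: float, inter_arrival_s: float) -> float:
    # A later SDU of the same group gets a timer equal to the first timer's
    # value minus the time elapsed since the first SDU, so that both timers
    # expire, and both SDUs are discarded, at the same point in time.
    return max(0.0, first_timer_value_s - inter_arrival_s)

Initializing each SDU's discard timer with remaining_validity(...) rather than a fixed value lets a later SDU's timer be shortened by the time already consumed, so that all SDUs associated with the same higher-layer PDU can be discarded together.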
PCT/SE2021/050981 2020-10-08 2021-10-06 Group pdcp discard timer for low-latency services WO2022075912A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21794662.3A EP4226596A1 (en) 2020-10-08 2021-10-06 Group pdcp discard timer for low-latency services

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063089180P 2020-10-08 2020-10-08
US63/089,180 2020-10-08

Publications (1)

Publication Number Publication Date
WO2022075912A1 true WO2022075912A1 (en) 2022-04-14

Family

ID=78294044

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2021/050981 WO2022075912A1 (en) 2020-10-08 2021-10-06 Group pdcp discard timer for low-latency services

Country Status (3)

Country Link
EP (1) EP4226596A1 (en)
AR (1) AR123720A1 (en)
WO (1) WO2022075912A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160286426A1 (en) * 2007-09-18 2016-09-29 Lg Electronics Inc. Method for qos guarantees in a multilayer structure
WO2018085209A1 (en) * 2016-11-01 2018-05-11 Intel IP Corporation Avoidance of hfn desynchronization in uplink over wlan in lte-wlan aggregation
US20180219789A1 (en) * 2017-01-31 2018-08-02 Wipro Limited System and method for processing data packets for transmission in a wireless communication network
EP3609229A1 (en) * 2017-04-25 2020-02-12 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Data transmission method and communication device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
3GPP TR 23.758

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023211941A1 (en) * 2022-04-27 2023-11-02 Interdigital Patent Holdings, Inc. Methods for robust link performance management for xr
WO2023217090A1 (en) * 2022-05-09 2023-11-16 维沃移动通信有限公司 Data processing method and apparatus, communication device and system, and storage medium
WO2023231550A1 (en) * 2022-05-31 2023-12-07 大唐移动通信设备有限公司 Data unit processing method and apparatus, and communication device
WO2024017395A1 (en) * 2022-07-22 2024-01-25 中国移动通信有限公司研究院 Data processing method and apparatus, and communication device

Also Published As

Publication number Publication date
EP4226596A1 (en) 2023-08-16
AR123720A1 (en) 2023-01-04

Similar Documents

Publication Publication Date Title
US20220182943A1 (en) Discovery of and Recovery From Missed Wake-Up Signal (WUS) Reception
WO2022075912A1 (en) Group pdcp discard timer for low-latency services
US11917635B2 (en) Network node, user equipment (UE), and associated methods for scheduling of the UE by the network node
US20230231779A1 (en) Enhanced Network Control Over Quality-of-Experience (QoE) Measurement Reports by User Equipment
US20210345369A1 (en) Enhanced uplink scheduling in integrated access backhaul (iab) networks
US20230413178A1 (en) Methods for Discontinuous Reception (DRX) in Conjunction with Guaranteed Low-Latency Services
US20220104122A1 (en) Selective Cross-Slot Scheduling for NR User Equipment
WO2022081063A1 (en) Methods for lightweight quality-of-experience (qoe) measurement and reporting in a wireless network
WO2022075904A1 (en) Linked radio-layer and application-layer measurements in a wireless network
US20240098532A1 (en) Methods for RAN-Visible (Lightweight) QoE Configuration and Measurement Coordination Among RAN Notes
WO2022086409A1 (en) Direct current (dc) location reporting for intra-band uplink carrier aggregation (ca)
US20230163893A1 (en) Methods and Apparatus for Disabled HARQ Processes
US20230319606A1 (en) User Equipment (UE) Reporting of Non-Cellular Receiver Status
US20230354453A1 (en) Beam Failure Recovery in Multi-Cell Configuration
US20230319607A1 (en) Inter-Cell Group Messages for User Equipment Operating in Multi-Connectivity
US20220286899A1 (en) Interface between a radio access network and an application
US20230156817A1 (en) Handling of Uplink Listen-Before-Talk Failures for Handover
EP4115653A1 (en) Selective transmission or reception for reducing ue energy consumption
US20240114367A1 (en) Mobility Measurement Reporting for XR Services
US20220271866A1 (en) Controlling Packet Data Convergence Protocol (PDCP) Duplication with Different Medium Access Control (MAC) Control Elements (CES)
WO2022129390A1 (en) Methods for automatic update of configured grant and semi-persistent scheduling timing for xr services
WO2022207903A1 (en) Logical channel priotization within configured grants
WO2023148335A1 (en) Enhanced configurability for semi-persistent scheduling and configured grants

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021794662

Country of ref document: EP

Effective date: 20230508