WO2024102684A2 - Modulation based uep-hierarchical modulation - Google Patents

Modulation based UEP-hierarchical modulation

Info

Publication number
WO2024102684A2
WO2024102684A2 PCT/US2023/078882
Authority
WO
WIPO (PCT)
Prior art keywords
wtru
constellation
video
bits
modulation
Prior art date
Application number
PCT/US2023/078882
Other languages
French (fr)
Inventor
Salah ELHOUSHY
Pascal Adjakple
Umer Salim
Ravikumar Pragada
Guodong Zhang
Milind Kulkarni
Original Assignee
Interdigital Patent Holdings, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Interdigital Patent Holdings, Inc. filed Critical Interdigital Patent Holdings, Inc.
Publication of WO2024102684A2 publication Critical patent/WO2024102684A2/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/0001 Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L1/0002 Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the transmission rate
    • H04L1/0003 Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the transmission rate by switching between different modulation schemes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/0001 Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L1/0009 Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the channel coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L2001/0098 Unequal error protection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L27/00 Modulated-carrier systems
    • H04L27/32 Carrier systems characterised by combinations of two or more of the types covered by groups H04L27/02, H04L27/10, H04L27/18 or H04L27/26
    • H04L27/34 Amplitude- and phase-modulated carrier systems, e.g. quadrature-amplitude modulated carrier systems
    • H04L27/3488 Multiresolution systems

Definitions

  • Evolving wireless networks used for mobile media services, cloud augmented and virtual reality (AR/VR), cloud gaming, and video-based tele-control for machines or drones are expected to carry significantly increased traffic in the near future. All of these types of media traffic, in spite of the differing compression techniques and codecs used, share some common characteristics. These characteristics can be very useful for potential improvement of transmission control and efficiency as radio access network (RAN) architectures evolve.
  • RAN radio access network
  • current architectures generally handle media services together with other data services without taking full advantage of these commonalities
  • packets within an application data frame have dependencies on each other, since the application needs all of these packets to decode the frame. Hence, the loss of one packet will make the other correlated packets useless even if they are successfully transmitted.
  • certain applications may impose requirements in terms of Media Units (Application Data Units), rather than in terms of single packets/PDUs.
  • packets of the same video stream but of different frame types (I/P/B frames), or even at different positions in a group of pictures (GoP), make different contributions to user experience (e.g., a frame corresponding to a base-layer picture at a first resolution, and a frame corresponding to an enhancement layer providing a second, higher-resolution picture), so a layered QoS approach within the video stream can potentially relax the requirements, leading to higher efficiency.
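  • The frame-type prioritization above can be sketched in code. This is an illustrative example only (the priority values and function names are assumptions, not from the application): packets are bucketed by GoP frame type, with I-frames treated as highest priority because dependent P- and B-frames are useless without them.

```python
# Hypothetical sketch (names are illustrative, not from the patent):
# assign a relative priority to video packets by GoP frame type,
# since I-frames anchor decoding of dependent P- and B-frames.
FRAME_PRIORITY = {"I": 0, "P": 1, "B": 2}  # 0 = highest priority

def classify_packets(packets):
    """Group (frame_type, payload) packets into per-priority lists."""
    layers = {0: [], 1: [], 2: []}
    for frame_type, payload in packets:
        layers[FRAME_PRIORITY[frame_type]].append(payload)
    return layers

stream = [("I", b"hdr"), ("P", b"p1"), ("B", b"b1"), ("I", b"hdr2")]
layers = classify_packets(stream)
# I-frame payloads land in the highest-priority bucket
```

A scheduler could then map each bucket to a different QoS treatment or, as described below, to a different constellation subset.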
  • current implementations of systems lack the ability to properly differentiate between types of data at lower layers of the network stack, such as a physical (PHY) layer.
  • PHY physical
  • a wireless transmit and receive unit may differentiate received packets based on the modulation configuration applied, and prioritize/group them, for example, into data for a video base layer (BL) and data for a video enhancement layer (EL).
  • the WTRU applies a single-constellation method configured in the WTRU to demodulate signals received in the downlink.
  • the WTRU may distinguish the packets received, e.g., BL or EL, based on the points in the constellation by which they are modulated.
  • PPDUs Physical packet data units
  • the mapping or grouping may follow a hierarchy or form a hierarchical constellation, with packets from a first set mapped to a first constellation subset and packets from a second set mapped to a second constellation subset, with the second constellation subset being a child or subset of the first constellation.
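  • One way to picture such a hierarchical constellation is a two-layer 16-QAM mapper in which the base-layer bits select the quadrant and the enhancement-layer bits select the point within it. The sketch below is a generic hierarchical-QAM illustration, not the specific mapping claimed here; the distance parameters d1 and d2 are assumed names.

```python
# Illustrative hierarchical 16-QAM mapper (parameters are assumptions,
# not values from the patent). Two base-layer (BL) bits pick the
# quadrant; two enhancement-layer (EL) bits pick the point inside it.
def hqam16_map(bl_bits, el_bits, d1=2.0, d2=1.0):
    """Map (b1, b0) BL bits and (e1, e0) EL bits to a complex symbol.

    d1 sets the quadrant offset (protects the BL bits), d2 the
    intra-quadrant offset; a larger d1/d2 ratio gives the BL
    stronger error protection at the cost of EL reliability.
    """
    b1, b0 = bl_bits
    e1, e0 = el_bits
    # BL bits choose the quadrant sign, EL bits the offset within it.
    i = (1 if b1 == 0 else -1) * (d1 + (d2 if e1 == 0 else -d2))
    q = (1 if b0 == 0 else -1) * (d1 + (d2 if e0 == 0 else -d2))
    return complex(i, q)

sym = hqam16_map((0, 0), (1, 1))  # first-quadrant point, inner position
```

A receiver that decides only the quadrant recovers the BL even at SNRs where the EL bits are unreliable, which is the essence of unequal error protection by constellation design.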
  • FIG. 1A is a system diagram illustrating an example communications system in which one or more disclosed embodiments may be implemented
  • FIG. 1B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A according to an embodiment;
  • WTRU wireless transmit/receive unit
  • FIG. 1C is a system diagram illustrating an example radio access network (RAN) and an example core network (CN) that may be used within the communications system illustrated in FIG. 1A according to an embodiment;
  • RAN radio access network
  • CN core network
  • FIG. 1D is a system diagram illustrating a further example RAN and a further example CN that may be used within the communications system illustrated in FIG. 1A according to an embodiment
  • FIG. 2 is a block diagram illustrating an example multi-modal interactive system
  • FIG. 3 is an example illustration comparing an example of video game group-of-pictures (GOP) frame viewing order versus an example frame transmission order;
  • GOP group-of-pictures
  • FIG. 4 illustrates effects of errors in the example of FIG. 3
  • FIG. 5 illustrates an example architecture of a layered video scheme, where video quality is refined gradually
  • MVC multi-view video coding
  • FIG. 9 shows an example video stream packetized into a real-time protocol (RTP) packet data unit (PDU) stream;
  • FIG. 10 is a block diagram illustrating a system using a quality of service (QoS) Model with extension for Media PDU Classification;
  • QoS quality of service
  • FIG. 11 illustrates an example of PDU sets within a QoS flow of packets
  • FIG. 12 is a representation of an example of control plane protocol stack layers according to various embodiments.
  • FIG. 13 is a representation of an example user plane protocol stack layers according to various embodiments.
  • FIG. 14A is a sequence diagram between network entities demonstrating an overview for video layer-aware scheduling according to one example embodiment
  • FIG. 14B is a sequence diagram between network entities demonstrating an overview for video layer-aware scheduling according to another example embodiment
  • FIG. 15 illustrates an example of single-constellation based operation signaling according to one embodiment
  • FIG. 16 illustrates an example embodiment of signaling using single constellation- bits-allocation
  • FIG. 17 is an example embodiment of signaling using single constellation - joint bits and distance allocation
  • FIG. 18 shows a flow diagram and method for a wireless transmit and receive unit (WTRU) to enable the single-constellation unequal error protection (UEP) framework during DL transmission according to one embodiment
  • FIG. 19 shows a flow diagram and method for a WTRU to enable the single-constellation UEP framework during UL transmission according to one embodiment
  • FIG. 20 is a representation illustrating an embodiment using a split of six quadrature amplitude modulation (QAM) bits for use in three different priority layers of traffic;
  • QAM quadrature amplitude modulation
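  • The six-bit split of FIG. 20 can be illustrated with a small helper. This is a hedged sketch (the function name and the (2, 2, 2) allocation are illustrative assumptions; the actual allocation in the application may differ): in a Gray-mapped constellation the leading bit pairs are the most error-robust, so the highest-priority layer takes the leading bits.

```python
# Hypothetical split of one 64-QAM symbol's six bits across three
# traffic priority layers (a sketch of the FIG. 20 idea, not the
# exact allocation from the patent). Bits are MSB-first, and in a
# Gray-mapped constellation the MSBs are the most error-robust.
def split_bits(six_bits, alloc=(2, 2, 2)):
    """Partition six bits into per-layer groups, highest priority first."""
    assert len(six_bits) == sum(alloc) == 6
    groups, start = [], 0
    for n in alloc:
        groups.append(tuple(six_bits[start:start + n]))
        start += n
    return groups  # [high-priority bits, mid, low]

hi, mid, lo = split_bits((1, 0, 1, 1, 0, 0))
```

An uneven allocation such as (4, 1, 1) would devote more of each symbol to the high-priority layer, trading throughput for protection.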
  • FIG. 21 is a flow diagram illustrating a method of a WTRU communicating in a wireless network according to an embodiment using single constellations in the downlink;
  • FIG. 22 is a flow diagram illustrating a method of a WTRU communicating in a wireless network according to an embodiment using single constellations in the uplink;
  • FIG. 23 is a flow diagram illustrating a method of a WTRU communicating in a wireless network according to an embodiment using single constellations in the uplink with feedback.
  • FIG. 1A is a diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented.
  • the communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users.
  • the communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth.
  • the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word discrete Fourier transform spread OFDM (ZT-UW-DFT-S-OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
  • CDMA code division multiple access
  • TDMA time division multiple access
  • FDMA frequency division multiple access
  • OFDMA orthogonal FDMA
  • SC-FDMA single-carrier FDMA
  • ZT-UW-DFT-S-OFDM zero-tail unique-word discrete Fourier transform spread OFDM
  • UW-OFDM unique word OFDM
  • FBMC filter bank multicarrier
  • the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a radio access network (RAN) 104, a core network (CN) 106, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements.
  • WTRUs wireless transmit/receive units
  • RAN radio access network
  • CN core network
  • PSTN public switched telephone network
  • Each of the WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment
  • the WTRUs 102a, 102b, 102c, 102d may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in an industrial and
  • UE user equipment
  • PDA personal digital assistant
  • HMD head-mounted display
  • the communications systems 100 may also include a base station 114a and/or a base station 114b.
  • Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106, the Internet 110, and/or the other networks 112.
  • the base stations 114a, 114b may be a base transceiver station (BTS), a NodeB, an eNode B (eNB), a Home Node B, a Home eNode B, a next generation NodeB, such as a gNode B (gNB), a new radio (NR) NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
  • the base station 114a may be part of the RAN 104, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, and the like.
  • BSC base station controller
  • RNC radio network controller
  • the base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum
  • a cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors.
  • the cell associated with the base station 114a may be divided into three sectors.
  • the base station 114a may include three transceivers, i.e., one for each sector of the cell.
  • the base station 114a may employ multiple-input multiple output (MIMO) technology and may utilize multiple transceivers for each sector of the cell.
  • MIMO multiple-input multiple output
  • beamforming may be used to transmit and/or receive signals in desired spatial directions.
  • the base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.).
  • the air interface 116 may be established using any suitable radio access technology (RAT).
  • RAT radio access technology
  • the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like.
  • the base station 114a in the RAN 104 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 116 using wideband CDMA (WCDMA).
  • WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+).
  • HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed Uplink (UL) Packet Access (HSUPA).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
  • E-UTRA Evolved UMTS Terrestrial Radio Access
  • LTE Long Term Evolution
  • LTE-A LTE-Advanced
  • LTE-A Pro LTE-Advanced Pro
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using NR.
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies.
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles.
  • DC dual connectivity
  • the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
  • IEEE 802.11 i.e., Wireless Fidelity (WiFi)
  • IEEE 802.16 i.e., Worldwide Interoperability for Microwave Access (WiMAX)
  • CDMA2000 Code Division Multiple Access 2000
  • IS-95 Interim Standard 95
  • IS-856 Interim Standard 856
  • GSM Global System for Mobile communications
  • the base station 114b in FIG. 1A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, an industrial facility, an air corridor (e.g., for use by drones), a roadway, and the like.
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN).
  • WLAN wireless local area network
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN).
  • the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR etc.) to establish a picocell or femtocell.
  • the base station 114b may have a direct connection to the Internet 110.
  • the base station 114b may not be required to access the Internet 110 via the CN 106.
  • the RAN 104 may be in communication with the CN 106, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d.
  • the data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like.
  • QoS quality of service
  • the CN 106 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication.
  • the RAN 104 and/or the CN 106 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104 or a different RAT.
  • the CN 106 may also be in communication with another RAN (not shown) employing a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or WiFi radio technology.
  • the CN 106 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112.
  • the PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS).
  • POTS plain old telephone service
  • the Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite.
  • the networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers.
  • the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104 or a different RAT.
  • Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links).
  • the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
  • FIG. 1B is a system diagram illustrating an example WTRU 102.
  • the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others.
  • GPS global positioning system
  • the processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), any other type of integrated circuit (IC), a state machine, and the like.
  • the processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment.
  • the processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
  • the transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116.
  • the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals.
  • the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example.
  • the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
  • the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
  • the transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11, for example.
  • the processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit).
  • the processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128.
  • the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132.
  • the non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
  • the removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
  • SIM subscriber identity module
  • SD secure digital
  • the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
  • the processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102.
  • the power source 134 may be any suitable device for powering the WTRU 102.
  • the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
  • the processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102.
  • the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
  • the processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
  • the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a handsfree headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like.
  • FM frequency modulated
  • the peripherals 138 may include one or more sensors.
  • the sensors may be one or more of a gyroscope, an accelerometer, a Hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, a humidity sensor, and the like.
  • the WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and the DL (e.g., for reception)) may be concurrent and/or simultaneous.
  • the full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118).
  • the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the UL (e.g., for transmission) or the DL (e.g., for reception)).
  • FIG. 1C is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment.
  • the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the RAN 104 may also be in communication with the CN 106.
  • the RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment.
  • the eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the eNode-Bs 160a, 160b, 160c may implement MIMO technology.
  • the eNode-B 160a for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
  • Each of the eNode-Bs 160a, 160b, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, and the like. As shown in FIG. 1C, the eNode-Bs 160a, 160b, 160c may communicate with one another over an X2 interface.
  • the CN 106 shown in FIG. 1C may include a mobility management entity (MME) 162, a serving gateway (SGW) 164, and a packet data network (PDN) gateway (PGW) 166. While the foregoing elements are depicted as part of the CN 106, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
  • MME mobility management entity
  • SGW serving gateway
  • PGW packet data network gateway
  • the MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node.
  • the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like.
  • the MME 162 may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM and/or WCDMA.
  • the SGW 164 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via the S1 interface.
  • the SGW 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c.
  • the SGW 164 may perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when DL data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.
  • the SGW 164 may be connected to the PGW 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the CN 106 may facilitate communications with other networks
  • the CN 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices.
  • the CN 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 106 and the PSTN 108.
  • IMS IP multimedia subsystem
  • the CN 106 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
  • Although the WTRU is described in FIGS. 1A-1D as a wireless terminal, it is contemplated that in certain representative embodiments such a terminal may use (e.g., temporarily or permanently) wired communication interfaces with the communication network.
  • the other network 112 may be a WLAN.
  • a WLAN in Infrastructure Basic Service Set (BSS) mode may have an Access Point (AP) for the BSS and one or more stations (STAs) associated with the AP.
  • the AP may have access or an interface to a Distribution System (DS) or another type of wired/wireless network that carries traffic in to and/or out of the BSS.
  • Traffic to STAs that originates from outside the BSS may arrive through the AP and may be delivered to the STAs.
  • Traffic originating from STAs to destinations outside the BSS may be sent to the AP to be delivered to respective destinations.
  • Traffic between STAs within the BSS may be sent through the AP, for example, where the source STA may send traffic to the AP and the AP may deliver the traffic to the destination STA
  • the traffic between STAs within a BSS may be considered and/or referred to as peer-to-peer traffic.
  • the peer-to- peer traffic may be sent between (e.g., directly between) the source and destination STAs with a direct link setup (DLS).
  • the DLS may use an 802.11e DLS or an 802.11z tunneled DLS (TDLS).
  • a WLAN using an Independent BSS (IBSS) mode may not have an AP, and the STAs (e.g., all of the STAs) within or using the IBSS may communicate directly with each other.
  • the IBSS mode of communication may sometimes be referred to herein as an “ad-hoc” mode of communication.
  • the AP may transmit a beacon on a fixed channel, such as a primary channel.
  • the primary channel may be a fixed width (e.g., 20 MHz wide bandwidth) or a dynamically set width.
  • the primary channel may be the operating channel of the BSS and may be used by the STAs to establish a connection with the AP.
  • Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) may be implemented, for example in 802.11 systems.
  • the STAs (e.g., every STA, including the AP) may sense the primary channel. If the primary channel is sensed/detected and/or determined to be busy by a particular STA, the particular STA may back off.
  • One STA (e.g., only one station) may transmit at any given time in a given BSS.
  • High Throughput (HT) STAs may use a 40 MHz wide channel for communication, for example, via a combination of the primary 20 MHz channel with an adjacent or nonadjacent 20 MHz channel to form a 40 MHz wide channel.
  • VHT STAs may support 20MHz, 40 MHz, 80 MHz, and/or 160 MHz wide channels
  • the 40 MHz, and/or 80 MHz, channels may be formed by combining contiguous 20 MHz channels.
  • a 160 MHz channel may be formed by combining 8 contiguous 20 MHz channels, or by combining two noncontiguous 80 MHz channels, which may be referred to as an 80+80 configuration.
  • the data, after channel encoding, may be passed through a segment parser that may divide the data into two streams.
  • Inverse Fast Fourier Transform (IFFT) processing and time domain processing may be done on each stream separately
  • the streams may be mapped on to the two 80 MHz channels, and the data may be transmitted by a transmitting STA.
  • the above described operation for the 80+80 configuration may be reversed, and the combined data may be sent to the Medium Access Control (MAC).
  • Sub 1 GHz modes of operation are supported by 802.11af and 802.11ah.
  • the channel operating bandwidths, and carriers, are reduced in 802.11af and 802.11ah relative to those used in 802.11n and 802.11ac.
  • 802.11af supports 5 MHz, 10 MHz, and 20 MHz bandwidths in the TV White Space (TVWS) spectrum
  • 802.11ah supports 1 MHz, 2 MHz, 4 MHz, 8 MHz, and 16 MHz bandwidths using non-TVWS spectrum.
  • 802.11ah may support Meter Type Control/Machine-Type Communications (MTC), such as MTC devices in a macro coverage area.
  • MTC devices may have certain capabilities, for example, limited capabilities including support for (e.g., only support for) certain and/or limited bandwidths.
  • the MTC devices may include a battery with a battery life above a threshold (e.g., to maintain a very long battery life).
  • WLAN systems which may support multiple channels and channel bandwidths, such as 802.11n, 802.11ac, 802.11af, and 802.11ah, include a channel which may be designated as the primary channel.
  • the primary channel may have a bandwidth equal to the largest common operating bandwidth supported by all STAs in the BSS.
  • the bandwidth of the primary channel may be set and/or limited by the STA, from among all STAs operating in the BSS, which supports the smallest bandwidth operating mode.
  • the primary channel may be 1 MHz wide for STAs (e.g., MTC type devices) that support (e.g., only support) a 1 MHz mode, even if the AP, and other STAs in the BSS support 2 MHz, 4 MHz, 8 MHz, 16 MHz, and/or other channel bandwidth operating modes.
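The primary-channel rule above reduces to taking the minimum supported width across the BSS. The following sketch illustrates this (function and parameter names are invented for illustration, not from any 802.11 API):

```python
def primary_channel_width_mhz(sta_supported_widths):
    """Return the widest primary channel usable by every STA in the BSS.

    sta_supported_widths: per-STA maximum channel widths in MHz, e.g.
    [16, 8, 1] for an 802.11ah BSS containing a 1 MHz-only MTC device.
    """
    if not sta_supported_widths:
        raise ValueError("BSS must contain at least one STA")
    # The STA with the smallest bandwidth operating mode limits the
    # primary channel, even if the AP supports wider modes.
    return min(sta_supported_widths)

# A 1 MHz-only MTC device forces a 1 MHz primary channel.
assert primary_channel_width_mhz([16, 8, 1]) == 1
```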
  • Carrier sensing and/or Network Allocation Vector (NAV) settings may depend on the status of the primary channel. If the primary channel is busy, for example, due to a STA (which supports only a 1 MHz operating mode) transmitting to the AP, all available frequency bands may be considered busy even though a majority of the available frequency bands remains idle.
  • in the United States, the available frequency bands which may be used by 802.11ah are from 902 MHz to 928 MHz. In Korea, the available frequency bands are from 917.5 MHz to 923.5 MHz. In Japan, the available frequency bands are from 916.5 MHz to 927.5 MHz. The total bandwidth available for 802.11ah is 6 MHz to 26 MHz depending on the country code.
  • FIG. 1D is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment.
  • the RAN 104 may employ an NR radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the RAN 104 may also be in communication with the CN 106.
  • the RAN 104 may include gNBs 180a, 180b, 180c, though it will be appreciated that the RAN 104 may include any number of gNBs while remaining consistent with an embodiment.
  • the gNBs 180a, 180b, 180c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the gNBs 180a, 180b, 180c may implement MIMO technology.
  • the gNBs 180a, 180b, 180c may utilize beamforming to transmit signals to and/or receive signals from the WTRUs 102a, 102b, 102c.
  • the gNB 180a may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
  • the gNBs 180a, 180b, 180c may implement carrier aggregation technology.
  • the gNB 180a may transmit multiple component carriers to the WTRU 102a (not shown). A subset of these component carriers may be on unlicensed spectrum while the remaining component carriers may be on licensed spectrum.
  • the gNBs 180a, 180b, 180c may implement Coordinated Multi-Point (CoMP) technology.
  • WTRU 102a may receive coordinated transmissions from gNB 180a and gNB 180b (and/or gNB 180c).
  • the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using transmissions associated with a scalable numerology. For example, the OFDM symbol spacing and/or OFDM subcarrier spacing may vary for different transmissions, different cells, and/or different portions of the wireless transmission spectrum.
  • the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using subframe or transmission time intervals (TTIs) of various or scalable lengths (e.g., containing a varying number of OFDM symbols and/or lasting varying lengths of absolute time).
  • the gNBs 180a, 180b, 180c may be configured to communicate with the WTRUs 102a, 102b, 102c in a standalone configuration and/or a non-standalone configuration.
  • WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c without also accessing other RANs (e.g., such as eNode-Bs 160a, 160b, 160c).
  • WTRUs 102a, 102b, 102c may utilize one or more of gNBs 180a, 180b, 180c as a mobility anchor point.
  • WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using signals in an unlicensed band.
  • WTRUs 102a, 102b, 102c may communicate with/connect to gNBs 180a, 180b, 180c while also communicating with/connecting to another RAN such as eNode-Bs 160a, 160b, 160c.
  • WTRUs 102a, 102b, 102c may implement DC principles to communicate with one or more gNBs 180a, 180b, 180c and one or more eNode-Bs 160a, 160b, 160c substantially simultaneously.
  • eNode-Bs 160a, 160b, 160c may serve as a mobility anchor for WTRUs 102a, 102b, 102c and gNBs 180a, 180b, 180c may provide additional coverage and/or throughput for servicing WTRUs 102a, 102b, 102c.
  • Each of the gNBs 180a, 180b, 180c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, support of network slicing, DC, interworking between NR and E-UTRA, routing of user plane data towards User Plane Function (UPF) 184a, 184b, routing of control plane information towards Access and Mobility Management Function (AMF) 182a, 182b and the like. As shown in FIG. 1D, the gNBs 180a, 180b, 180c may communicate with one another over an Xn interface.
  • the CN 106 shown in FIG. 1D may include at least one AMF 182a, 182b, at least one UPF 184a, 184b, at least one Session Management Function (SMF) 183a, 183b, and possibly a Data Network (DN) 185a, 185b. While the foregoing elements are depicted as part of the CN 106, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
  • the AMF 182a, 182b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 104 via an N2 interface and may serve as a control node.
  • the AMF 182a, 182b may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, support for network slicing (e.g., handling of different protocol data unit (PDU) sessions with different requirements), selecting a particular SMF 183a, 183b, management of the registration area, termination of non-access stratum (NAS) signaling, mobility management, and the like.
  • Network slicing may be used by the AMF 182a, 182b in order to customize CN support for WTRUs 102a, 102b, 102c based on the types of services being utilized by WTRUs 102a, 102b, 102c.
  • the AMF 182a, 182b may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as LTE, LTE-A, LTE-A Pro, and/or non-3GPP access technologies such as WiFi.
  • the SMF 183a, 183b may be connected to an AMF 182a, 182b in the CN 106 via an N11 interface.
  • the SMF 183a, 183b may also be connected to a UPF 184a, 184b in the CN 106 via an N4 interface.
  • the SMF 183a, 183b may select and control the UPF 184a, 184b and configure the routing of traffic through the UPF 184a, 184b.
  • the SMF 183a, 183b may perform other functions, such as managing and allocating UE IP address, managing PDU sessions, controlling policy enforcement and QoS, providing DL data notifications, and the like.
  • a PDU session type may be IP-based, non-IP based, Ethernet-based, and the like.
  • the UPF 184a, 184b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 104 via an N3 interface, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the UPF 184a, 184b may perform other functions, such as routing and forwarding packets, enforcing user plane policies, supporting multi-homed PDU sessions, handling user plane QoS, buffering DL packets, providing mobility anchoring, and the like.
  • the CN 106 may facilitate communications with other networks
  • the CN 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 106 and the PSTN 108.
  • the CN 106 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers
  • the WTRUs 102a, 102b, 102c may be connected to a local DN 185a, 185b through the UPF 184a, 184b via the N3 interface to the UPF 184a, 184b and an N6 interface between the UPF 184a, 184b and the DN 185a, 185b.
  • one or more, or all, of the functions described herein with regard to one or more of: WTRU 102a-d, Base Station 114a-b, eNode-B 160a-c, MME 162, SGW 164, PGW 166, gNB 180a-c, AMF 182a-b, UPF 184a-b, SMF 183a-b, DN 185a-b, and/or any other device(s) described herein, may be performed by one or more emulation devices (not shown).
  • the emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein.
  • the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
  • the emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment.
  • the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network.
  • the one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network
  • the emulation device may be directly coupled to another device for purposes of testing and/or performing testing using over-the-air wireless communications.
  • the one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network.
  • the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components.
  • the one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
  • Some advanced XR or media services may include more modalities besides video and audio streams, such as information from different sensors and tactile or emotion data (e.g., haptic data or sensor data) for a more immersive experience.
  • multi-modality communication services involve multi-modal data, a term used to describe the input data from different kinds of devices/sensors or the output data to different kinds of destinations (e.g., one or more WTRUs) required for the same task or application.
  • Multi-modal data consists of more than one single-modal data, and there is strong dependency among each single-modal data.
  • Single-modal data can be seen as one type of data
  • multi-modal outputs 212 are generated based on the inputs 204 from multiple sources.
  • modality is a type or representation of information in a specific interactive system.
  • Multi-modal interaction is the process during which information of multiple modalities is exchanged.
  • Modal types consist of motion, sentiment, gesture, etc.
  • Modal representations consist of video, audio, tactition (vibrations or other movements which provide haptic or tactile feelings to a person or a machine), etc.
  • Examples of multi-modality communication services may include immersive multi-modal virtual reality (VR) applications, remote control robots, immersive VR games, skillset sharing for cooperative perception and maneuvering of robots, live event selective immersion, haptic feedback for a person exclusion zone in a dangerous remote environment, etc.
  • a video traffic stream (denoted in FIG. 2 for simplicity as video stream) is typically structured as a Group of Pictures (GOP), where each picture constitutes a video frame
  • FIG. 3 is an example illustration comparing an example of video game group-of-pictures (GOP) frame viewing order 302 versus an example frame transmission order 304.
  • the frames are of different types; different frame types serve varying purposes and are of different importance for video application rendering.
  • An “I” frame is a frame that is compressed solely based on the information contained in the frame; no reference is made to any of the other video frames before or after it.
  • the “I” stands for “intra” coded.
  • a “P” frame is a frame that has been compressed using the data contained in the frame itself and data from the closest preceding I or P frame.
  • the “P” stands for “predicted.”
  • a “B” frame is a frame that has been compressed using data from the closest preceding I or P frame and the closest following I or P frame.
  • the “B” stands for “bidirectional,” meaning that the frame data can depend on frames that occur before and after it in the video sequence.
  • a group of pictures, or GOP, is a series of frames consisting of a single I frame and zero or more P and B frames.
  • a GOP always begins with an I frame and ends with the last frame before the next subsequent I frame. All of the frames in the GOP depend (directly or indirectly) on the data in the initial I frame.
  • Open GOP and closed GOP are terms that refer to the relationship between one GOP and another.
  • a closed GOP is self-contained; that is, none of the frames in the GOP refer to or are based on any of the frames outside the GOP.
  • An open GOP uses data from the I frame of the following GOP for calculating some of the B frames in the GOP.
  • the B frames 11 and 12 are based on the second I frame (shown in gray in FIG. 3, at the right side of frame viewing order 302 and approximately one-third from the right in frame transmission order 304) because this is an open GOP structure.
  • packets of a same video stream but different frame types (I/P/B frames), or even different positions in the GoP, may make different contributions to user experience
  • an error 402 on the P frame 4 will cause induced errors 404 on B frames 2, 3, 6 and 7.
  • the error 402 can also propagate as propagated errors 406A, 406B to P frame 7 and P frame 10 respectively, causing further induced errors 408A, 408B to B frames 8, 9, 11 and 12.
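The error propagation described above can be computed by following the frame dependency graph transitively. The following sketch illustrates this for a simplified version of the FIG. 4 example (the frame labels and dependency map are illustrative assumptions, not an actual codec API):

```python
def propagate_error(corrupted, depends_on):
    """Return the set of frames lost, directly or transitively.

    depends_on maps each frame to the frames it is predicted from.
    """
    lost = {corrupted}
    changed = True
    while changed:  # iterate until no further frames become undecodable
        changed = False
        for frame, refs in depends_on.items():
            if frame not in lost and any(r in lost for r in refs):
                lost.add(frame)
                changed = True
    return lost

# Simplified dependencies: an error on P4 induces errors on B2/B3/B6/B7,
# propagates to P7 and P10, and then induces errors on B8/B9/B11/B12.
deps = {
    "B2": ["I1", "P4"], "B3": ["I1", "P4"],
    "B6": ["P4", "P7"], "B7": ["P4", "P7"],
    "P7": ["P4"], "P10": ["P7"],
    "B8": ["P7", "P10"], "B9": ["P7", "P10"],
    "B11": ["P10"], "B12": ["P10"],
}
assert propagate_error("P4", deps) == {
    "P4", "B2", "B3", "B6", "B7", "P7", "P10", "B8", "B9", "B11", "B12"
}
```

Note that the I frame is unaffected, since nothing it needs depends on the corrupted P frame.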
  • a video compression algorithm may encode a video stream into multiple video layers, which enables a progressive refinement of the reconstructed video quality at the receiver. This is motivated by the fact that video distribution needs to support scenarios with heterogeneous devices, unreliable networks, and bandwidth fluctuations. Generally, the most important layer is referred to as the video base layer (BL) and the less important layers are termed video enhancement layers (ELs), which rely on the video BL. Furthermore, an EL may be further relied upon by less important ELs. When the video BL or a video EL is lost or corrupted during its transmission, the dependent layers cannot be utilized by the decoder and must be dropped.
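The layer-dropping rule can be sketched as follows (layer names and the single-parent dependency model are illustrative assumptions; real scalable codecs may have richer dependency structures):

```python
def usable_layers(received, depends_on):
    """Return the layers the decoder can actually use.

    received: set of layers that arrived intact.
    depends_on: maps a layer to the layer it relies on (None for the BL);
                parents must be listed before their dependents.
    """
    usable = set()
    for layer, parent in depends_on.items():
        # A layer is usable only if it arrived AND its parent is usable;
        # a lost layer forces all dependent layers to be dropped.
        if layer in received and (parent is None or parent in usable):
            usable.add(layer)
    return usable

# BL <- EL1 <- EL2: losing EL1 forces EL2 to be dropped even if received.
chain = {"BL": None, "EL1": "BL", "EL2": "EL1"}
assert usable_layers({"BL", "EL2"}, chain) == {"BL"}
assert usable_layers({"BL", "EL1", "EL2"}, chain) == {"BL", "EL1", "EL2"}
```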
  • FIG. 5 shows examples of a multi-layer video stream over the length of a GoP.
  • a layered video encoder 502 may provide a multiplexed stream via multiplexer 504 comprising sub-streams or layers 506A-506D, which may be at different bit rates or throughputs
  • Each stream may be decoded by a corresponding layered video decoder 508A-508D for output to various devices.
  • a single video decoder may provide for decoding of different channels, and accordingly, multiple decoders 508 may be replaced with a single decoder capable of receiving and decoding multiple sub-streams 506
  • the layers of the video stream are referred to as partitions A, B and C.
  • “B->A” indicates that partition B depends on partition A
  • “B->I” indicates frame B 604A is predicted from frame I 602
  • frame P 606A may be predicted from frames B 604A, 604B
  • frame P 606B may be predicted from frame B 604A.
  • in FIG. 7, the dependency of the layers in the Scalable Video Coding (SVC) stream is exemplified.
  • the video layers L0, L1, and L2 represent the video BL, the video spatial EL and the video temporal EL respectively.
  • “=>” indicates “depends on,” while “->” indicates “is predicted from.”
  • frame B 704A is predicted from frame I 702 and frame B 708A; frame B 704B is predicted from frame B 708A and frame P 706A, etc.
  • video data may be interpreted as a multi-modal data that consists of more than one single-modal data. Each single-modal data can be interpreted as a video frame or a video layer within the video frame depending on whether the video frame is made of more than one video layer.
  • An example of a video stream packetized into real-time protocol (RTP) PDU packets is depicted in FIG. 9, with I frame 802, B frames 804A, 804B, and P frame 806.
  • a PDU Set may be defined, and is composed of one or more PDUs carrying the payload of one unit of information (media data unit) generated at the application level (e.g., a video frame, or video slice, or video layer for video XRM Services, or single-modal data within a multi-modal data)
  • all PDUs in a PDU Set are needed by the application layer to use the corresponding data unit of information
  • the application layer can still recover all or parts of the information unit when some PDUs are missing.
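The two cases above, a unit that needs every PDU of its PDU Set versus one recoverable from a partial set, can be sketched as a simple receiver-side check (function and field names are invented for illustration):

```python
def pdu_set_usable(num_pdus, received_seq_nums, all_needed=True):
    """Decide if the media unit carried by a PDU Set is usable.

    all_needed=True models units where every PDU is required;
    all_needed=False models units where parts can still be recovered.
    """
    received = set(received_seq_nums)
    if all_needed:
        # Every sequence number of the set must have arrived.
        return received == set(range(num_pdus))
    # Partial recovery: anything received contributes.
    return len(received) > 0

assert pdu_set_usable(4, [0, 1, 2, 3]) is True
assert pdu_set_usable(4, [0, 1, 3]) is False            # one PDU missing
assert pdu_set_usable(4, [0, 1, 3], all_needed=False) is True
```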
  • packets of a packetized media data unit may have different levels of importance as illustrated in FIG. 10.
  • a QoS Flow is the finest granularity of QoS differentiation in the PDU Session in the 5G core network.
  • a bearer is the finest granularity of QoS differentiation for bearer-level QoS control in the radio access network (RAN) or in generations of core networks earlier than the 5G core network
  • One or more QoS flows may be mapped to a bearer in the RAN.
  • the bearer corresponds to the AN network resources illustrated by the tunnels between the Access Network (AN) 1004 and the WTRU 1002 (an example of which is denoted as “UE”).
  • User Plane Function (UPF) 1006 may provide for classification and QoS marking of packets.
  • the XR Media (XRM) service PDUs have dependencies on each other; some PDUs (e.g., an I frame, a video base layer, a first single-modal data of a multi-modal data) may be relied upon by other PDUs (e.g., a P frame, a B frame, a video enhancement layer, a second single-modal data of a multi-modal data).
  • in some cases, P frames and B frames are as important as the I frame for constructing fluent video; dropping those P frames and B frames causes jitter in the QoE, which is no better than giving up the whole service.
  • in other cases, P frames and B frames are used to enhance the definition, e.g., from 720p to 1080p; dropping those P frames and B frames makes sense to keep the service running when network resources cannot transmit all of the service data.
  • the PDUs with the same importance level within a QoS flow or bearer can be treated as a PDU Set (e.g., a video frame, or a video layer, or single-modal data within multi-modal data).
  • XRM service data can be categorized into a list of consecutive PDU Sets. Except for importance level, the QoS requirements for the XRM service flows are consistent. Hence, XRM service flows can be mapped into a single QoS flow, and that QoS flow should include a list of PDU Sets with different importance levels.
  • a PDU Set may include a list of PDUs and in one example embodiment, each PDU Set may have the following factors:
  • the boundary information of the PDU Set; for example, in one embodiment, (i) the Start Mark of the PDU Set, which is only valid for the first PDU of the PDU Set (as shown in the example figure, unless the next PDU is the first PDU of another PDU Set, the network cannot know whether the current PDU is the last PDU of the current PDU Set; to avoid always waiting for the next PDU to determine whether the current received PDU is the last PDU of the PDU Set, it is proposed not to mark the last PDU of the PDU Set; instead, the first PDU of the PDU Set is marked); and (ii) the sequence number of the PDU within the PDU Set, which may be used to allow support for out-of-order detection and reordering; and
  • PDU Sets 1102A, 1102B are depicted in FIG. 11 and an example illustration of a PDU set header is shown in the Table 1 below:
  • the above fields may be of fixed or variable length in various implementations, and may appear in any order. In some implementations, one or more fields may be absent.
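The per-PDU marking described above can be sketched as follows (the field names and widths are illustrative assumptions, not the actual header layout of Table 1): only the first PDU of a PDU Set carries the start mark, and each PDU carries its sequence number within the set.

```python
from dataclasses import dataclass

@dataclass
class PduSetHeader:
    start_of_set: bool       # set only on the first PDU of the PDU Set
    seq_in_set: int          # supports out-of-order detection/reordering
    importance: int          # relative importance level of the PDU Set

def mark_pdu_set(payloads, importance):
    """Attach a header to each PDU of one PDU Set."""
    return [
        (PduSetHeader(start_of_set=(i == 0), seq_in_set=i,
                      importance=importance), payload)
        for i, payload in enumerate(payloads)
    ]

headers = [h for h, _ in mark_pdu_set([b"a", b"b", b"c"], importance=1)]
assert [h.start_of_set for h in headers] == [True, False, False]
assert [h.seq_in_set for h in headers] == [0, 1, 2]
```

Marking the start rather than the end of the set matches the rationale above: the sender of the markings need not wait for the next PDU to know a set has ended.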
  • the term video layer is used generically, in reference to the PDU Sets previously described, where a PDU Set (and the PDUs within the PDU Set) may be provided differentiated transmission treatment or reception treatment in the cellular system access stratum (AS) or non-access stratum (NAS), based on the PDU Set’s relative importance as described in FIG. 10 and Table 1, above.
  • the functions of telecommunication systems, and particularly cellular communication systems, are typically structured in distinct groups of related functions traditionally referred to as “protocol layers.” Examples of 5G protocol stack layers for the control plane and user plane are illustrated in FIG. 12 and FIG. 13, respectively. As shown in FIG. 12, the control plane may include the following protocol layers: PHY, MAC, RLC, PDCP, RRC and NAS.
  • as shown in FIG. 13, the user plane may include the following protocol layers: PHY, MAC, RLC, PDCP, SDAP.
  • the access stratum may be comprised of the PHY, MAC, RLC, PDCP, and SDAP protocol layers. Therefore, the terms “video layers” and “protocol layers” are used in relation to the exemplary embodiments to mean two very different concepts. From the perspective of a given protocol stack layer, a protocol stack upper layer means the one or more protocol stack layers above that protocol layer, and a protocol stack lower layer means the one or more protocol stack layers below that protocol layer.
  • a protocol stack upper layer may be the RRC layer, while from RRC layer perspective, a protocol stack upper layer may be the NAS layer or the application layer.
  • a protocol stack upper layer may be the network Internet Protocol (IP) layer or the transport RTP protocol stack layer, or the application layer, while the protocol stack lower layer may be a PDCP layer.
  • UE 1202, gNB 1204, AMF 1206; and UE 1302 and gNB 1304 may provide functionality at different layers of the protocol stack.
  • an AMF 1206 may provide NAS functionality to a UE 1202 that is not provided by a gNB 1204; such communications may be provided by a lower layer of the network stack (e.g., RRC protocol, PDCP protocol, etc.) and the gNB may be agnostic to such communications.
  • an RTP PDU may be treated as an access stratum protocol layer PDU for the purpose of differentiated PDU Set transmission treatment or reception treatment. It should be noted that, for all practical purposes, an RTP PDU can be segmented into access stratum protocol layer PDUs or aggregated into an access stratum protocol layer PDU.
  • MIMO Layers are the independent data streams that can be transmitted between a base station and one or more users simultaneously.
  • Single-user MIMO (SU-MIMO) is the ability to transmit one or multiple data streams, i.e., MIMO layers, from one transmitting array to a single user.
  • the number of layers that can be supported, called the rank, depends on the radio channel.
  • in multi-user MIMO (MU-MIMO), the base station simultaneously sends different MIMO layers in separate beams to different users using the same time and frequency resources, thereby increasing the network capacity.
  • Media application attributes which may take advantage of this potential might include information such as relative importance of a PDU set within PDU sets derived from the packetization of a media data stream, scheduling deadline of PDUs within a PDU set, content delivery criteria for PDUs within a PDU set such as “all or nothing”, “good until first loss”, or “forward error correction (FEC) with either static or variable code rate”.
  • the content delivery criteria may help define whether to deliver or discard a PDU in a PDU set after it misses its deadline, or the content criteria of its associated PDU set no longer can be fulfilled, or the content criteria of its associated PDU set have already been fulfilled.
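The delivery-or-discard decision described above can be sketched as follows (the criteria names follow the text; the decision logic and field names are illustrative assumptions):

```python
def should_deliver(pdu, pdu_set, now, criteria):
    """Decide whether to deliver or discard a PDU of a PDU Set."""
    if now > pdu["deadline"]:
        return False                       # missed its scheduling deadline
    if criteria == "all_or_nothing":
        # the set is useless once any PDU of the set is known lost
        return not pdu_set["any_lost"]
    if criteria == "good_until_first_loss":
        # PDUs after the first loss no longer contribute
        first_lost = pdu_set.get("first_lost_seq")
        return first_lost is None or pdu["seq"] < first_lost
    return True                            # e.g., FEC-protected sets

pdu = {"seq": 3, "deadline": 100}
assert should_deliver(pdu, {"any_lost": True}, 50, "all_or_nothing") is False
assert should_deliver(pdu, {"first_lost_seq": 2}, 50, "good_until_first_loss") is False
assert should_deliver(pdu, {"first_lost_seq": 5}, 50, "good_until_first_loss") is True
```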
  • Various embodiments described herein may enable solutions to provide differentiated transmission treatment or reception treatment to PDU sets and their corresponding PDUs, considering the relative importance or priority within a QoS flow or bearer of the said PDU sets, as well as their corresponding PDUs, as per the QoS framework for media PDU classification illustrated in FIG. 10.
  • certain embodiments may enable the application of different modulation and coding schemes (i.e., modulation orders and coding rates) in support of differentiated transmission treatment or reception treatment of PDU sets and their corresponding PDUs with the assumption that their relative importance or priority is visible to the physical layer.
  • video base layers should be provided with more robust error protection than video enhancement layers (video ELs) through the application of a less aggressive MCS.
  • more important video ELs should be provided with more robust error protection than less important video ELs.
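The importance-to-MCS mapping above can be sketched as a lookup (the MCS table entries here are placeholders for illustration, not values from a 3GPP MCS table): more important video layers get a more robust, less aggressive MCS; less important ELs trade error protection for spectral efficiency.

```python
# (modulation order in bits/symbol, code rate) per importance rank,
# rank 0 = video base layer = most important.
MCS_BY_IMPORTANCE = [
    (2, 0.33),   # BL:  QPSK, low rate   -> strongest protection
    (4, 0.50),   # EL1: 16QAM, mid rate
    (6, 0.75),   # EL2: 64QAM, high rate -> least protection
]

def select_mcs(importance_rank):
    """Return (modulation_order, code_rate); ranks beyond the table clamp."""
    rank = min(importance_rank, len(MCS_BY_IMPORTANCE) - 1)
    return MCS_BY_IMPORTANCE[rank]

assert select_mcs(0) == (2, 0.33)          # base layer
assert select_mcs(5) == (6, 0.75)          # deep EL clamps to last entry
# spectral efficiency (order * rate) grows as importance falls
effs = [m * r for m, r in MCS_BY_IMPORTANCE]
assert effs == sorted(effs)
```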
  • the term video layer is generically used herein, in reference to a PDU or a PDU set comprising PDUs of the same importance, as defined herein.
  • a video base layer is a PDU or a PDU set with the importance of a video base layer.
  • a video enhancement layer is a PDU or PDU set with the importance of the corresponding video enhancement layer.
  • while the term video layer is used generically throughout the description of PHY protocol layer processing in support of differentiated transmission treatment or reception treatment, it should be understood that the embodiments described herein broadly apply to any indicators by which RAN protocol stack PDUs can be differentiated on the basis of the video layer they represent, so they can be processed according to the importance or priority level of the video layer to which they correspond.
  • a first video EL may be higher priority or more important than a second video EL if the second video EL depends upon the first video EL (e.g., uses frames from the first video EL for prediction or decoding). Accordingly, a layer or frame may be more important and merit a higher QoS setting if it is used for decoding or predicting other frames or layers.
  • a WTRU may be enabled to report its capability to enable unequal error protection (UEP) for video.
  • WTRU capabilities may be reported to the base station (BS) to enable highly reliable video transmissions.
  • the WTRU reports various important capabilities to the BS to enable features of the embodiments, such as supported modulation orders for DL and UL, maximum bandwidth, subcarrier spacing, and others.
  • the UE may report to the BS one or more capabilities to support:
  • LA-FEC (video layer-aware forward error correction)
  • IL-FEC (inter-layer forward error correction)
  • Example WTRUs might include not only smart phones, tablets and the like, but also IoT devices for low-cost, low-power wide area network applications, and mid-tier cost-reduced capability (RedCap) devices, for example for industrial wireless sensor network (IWSN) applications, examples of which include power meters, parking meters, secure monitoring video cameras, connected fire hydrants, connected post boxes, etc.
  • Intended use cases of the embodiments described herein may include applications requiring both uplink and downlink video, or applications requiring only uplink or only downlink video traffic.
  • any multi-modality traffic which might not include video traffic, but two or more traffic types of different modalities, for example, audio, sensor-related traffic (e.g., temperature, humidity, pressure, smell, etc.), or haptic data, for example when touching a surface (e.g., pressure, texture, vibration, temperature, etc.), in support of immersive reality applications, generically denoted here XR applications.
  • traffic might be formatted in different levels of resolution, accuracy level/quantization, or precision level/quantization.
  • the level of resolution, accuracy, or precision can be equated to layers of video traffic or equivalent terms as described in the embodiments herein.
  • the unequal error protection methods described herein apply to that traffic as well.
  • the embodiment described herein may also apply to any RAT including cellular RAT, such as 5G cellular RAT or future 6G cellular RAT, and 802.11 WLAN (Wi-Fi) RAT that may support these capabilities.
  • while embodiments may be described in terms of the Uu interface with interactions between a WTRU and a base station, these embodiments may equally apply to communication over a sidelink interface, e.g., the PC5 interface, D2D, or any wireless link where advantages may be obtained.
  • FIG. 14A shows an overview of a video layer-aware scheduling method and apparatus according to one embodiment.
  • the steps may include one or more of the following.
  • the WTRU (1402, also referred to as a target UE) signals its capability to a scheduler (e.g. gNB 1404, base station, or other such device or network node), either autonomously, for example based on a trigger from the protocol stack upper layers of the WTRU 1402, or optionally upon request from the scheduler at 1406 (shown in dashed lines).
  • the WTRU 1402 establishes an RRC connection and the associated one or more signaling bearers, as well as one or more data radio bearers, in this example through an RRC reconfiguration procedure.
  • the WTRU 1402 may be configured with measurements and reporting configuration as part of the RRC reconfiguration procedure at 1412.
  • the WTRU 1402 may report measurements to the scheduler 1404.
  • the measurements may include measurements to support scheduling operation, including transport volume measurements, RRM measurements, and other link quality evaluation related measurements such as experienced block error rate, bit error rate, or packet error rate, or other metrics/quantities that measure the deviation/delta between the targeted QoS/QoE and the experienced QoS/QoE.
  • Examples of measurement reports include a buffer status report (BSR), an SR to request resources for a BSR report, and a power headroom report (PHR).
  • the WTRU may report these measurements on a per-video-layer basis, so the scheduler has visibility into the uplink scheduling requirements at the WTRU, at the level of a video layer or any other granularity of video partition. It should be noted that two or more video layers might be associated with the same bearer or QoS flow.
  • the WTRU may report these measurements at a granularity level that enables the scheduler to have visibility into the scheduling requirement beyond the granularity of QoS treatment differentiation traditionally offered by the existing QoS flow or bearer framework, for either communication direction, i.e., uplink or downlink.
  • Other examples of measurements reported by the WTRU may include RSRP, RSRQ, RSSI, SINR, CSI, etc.
  • the WTRU 1402 may receive scheduling downlink control information (DCI) with one or more scheduling parameters, for DL reception with video layer-aware MCS based processing, or for UL transmission with video layer-aware MCS based processing.
  • the WTRU may perform DL reception with video layer-aware based MCS processing as per the received RRC configuration and the DCI scheduling information, and may process the received data at 1422.
  • the WTRU may process UL data for transmission with video layer-aware based MCS processing as per the received RRC configuration and the DCI scheduling information, and at 1426, may perform UL transmission of the processed data.
  • the WTRU may receive DCI scheduling uplink transmission but not downlink reception scheduling. Specifically, in some implementations, the WTRU receives a scheduling DCI with one or more scheduling parameters for UL transmission with video layer-aware MCS based processing. At 1432, the WTRU may process UL data for transmission with video layer-aware based MCS processing as per the received RRC configuration and the DCI scheduling information, and at 1434, the WTRU may perform UL transmission of the processed data.
  • the WTRU provides feedback to the scheduler.
  • the feedback may include additional measurements in support of scheduling information, HARQ feedback, WTRU recommendations for per-video-layer MCS selection for subsequent DL or UL scheduling, the WTRU's recommendation for switching to a single constellation-based method, separate constellation-based method, or hybrid constellation-based scheme, or any combination of these or other information or measurements.
  • FIG. 14B is a sequence diagram between network entities (e.g. UE or WTRU 1402 and gNB, RSU, or scheduling entity 1404) demonstrating an overview for video layer-aware scheduling according to another example embodiment.
  • a WTRU may send to a base station or other network device an identification of its capabilities for handling video layers, including its ability to differentiate between different video layers belonging to the same QoS flow; its ability to treat different video layers belonging to the same QoS flow differently; and/or its ability to create a hierarchical modulation constellation diagram.
  • the base station or other network device may send, and the WTRU may receive, a modulation configuration.
  • the configuration may include one or more configured modulation schemes for layered video coding transmission.
  • the configuration may include one or more hierarchical modulation configuration parameters.
  • the configuration may include both one or more configured modulation schemes and one or more hierarchical modulation configuration parameters.
  • the WTRU may receive the configuration information and configure codecs accordingly.
  • the WTRU may perform one or more radio-related and/or layer-specific measurements and/or may transmit such measurements or identifiers of channel characteristics to the base station or other network device.
  • the measurements or identifiers may be provided via one or more management reports or quality reports, or via header fields or payloads of other data packets.
  • the measurements may include one or more of RSSI, RSRP, RSRQ, SINR, or CSI measurements.
  • the measurements may include one or more layer-specific BSRs.
  • the measurements may include one or more BSRs indicating the amount of data of each video layer.
  • the measurements may include one or more channel time variation indicators.
  • the measurements may include one or more channel frequency selectivity indicators.
  • the measurements may include any combination of the above measurements and/or any other type and form of measurement.
  • the base station or network device may generate and transmit dynamic scheduling information and/or modulation scheme updates to the WTRU.
  • the dynamic scheduling information can include one or more resource allocations.
  • the dynamic scheduling information can include indications of one or more HARQ redundancy versions (RVs).
  • the dynamic scheduling information can include one or more MCS configurations.
  • the dynamic scheduling information can include any combination of the above or other such information.
  • modulation scheme updates can include activating or deactivating of specific hierarchical modulation schemes (e.g. bit allocations or joint bits and distance allocations).
  • modulation scheme updates can include one or more hierarchical modulation configuration parameters.
  • hierarchical modulation configuration parameters may include one or more hierarchical modulation schemes.
  • hierarchical modulation configuration parameters may include one or more bounds or parameters for bits allocation for different video layers (e.g. 4 bits to a first base layer, 2 bits to an enhancement layer, 2 bits to a second enhancement layer, etc.).
  • hierarchical modulation configuration parameters may include a minimum distance between constellation points carrying different base layer/enhancement layer symbols.
  • modulation scheme updates can include any combination of the above updates or other such configuration information.
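The hierarchical modulation configuration parameters listed above (scheme selection, per-layer bit counts, and minimum distances) could, purely for illustration, be collected into a small structure; the field names and default values below are hypothetical and are not drawn from any signaling specification:

```python
from dataclasses import dataclass, field

@dataclass
class HierModConfig:
    """Illustrative container for hierarchical modulation parameters.

    Field names and defaults are hypothetical, not actual RRC IEs.
    """
    scheme: str = "joint-bits-and-distance"   # or "bits-allocation"
    bits_per_layer: dict = field(
        default_factory=lambda: {"BL": 4, "EL1": 2, "EL2": 2})
    min_distance: dict = field(
        default_factory=lambda: {"BL": 1.0, "EL1": 0.5, "EL2": 0.25})

    def modulation_order(self) -> int:
        # The modulation symbol carries the bits of all layers together.
        return sum(self.bits_per_layer.values())

cfg = HierModConfig()
assert cfg.modulation_order() == 8   # 4 + 2 + 2 bits => a 256-point constellation
```

A real configuration would arrive via RRC signaling; the structure only illustrates which quantities the WTRU must hold.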
  • the WTRU may encode the bit streams of different video layers.
  • the WTRU may capture video for transmission (e.g. via a camera of the WTRU or via another device), or may receive a video stream for transmission from an application, from another device, etc.
  • the WTRU may encode the bit stream via any suitable encoding scheme, and may perform any other required processing (e.g. compression, upscaling or downscaling, color adjustments, etc.).
  • the WTRU may determine a number of bits and constellation distance allocation for each video layer. The number of bits and/or constellation distance may be determined in accordance with configuration parameters received from the base station or other network device at 1456.
  • the WTRU may also construct a hierarchical constellation diagram. Although referred to and described in terms of a constellation diagram, in some implementations, the WTRU may construct the hierarchical constellation as a data array, data string, index of constellation points allocated to each layer, or any other suitable type and form of data structure.
  • the WTRU may determine or identify a modulation constellation symbol from the hierarchical constellation based on the determined bits allocation and/or distance allocation. The determination may be performed for each video layer in parallel or serially (e.g. iteratively for a base layer, then enhancement layer, etc.) in various implementations.
  • the WTRU may multiplex the determined or identified bits of the different layers onto the determined modulation constellation symbol for transmission.
  • a number of bits may be allocated to each video layer (e.g. base layer and enhancement layers).
  • the symbol may be transmitted or broadcast to the base station and/or other network devices or WTRUs.
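Steps 1460-1464 above, determining the bit allocation and multiplexing the layers' bits onto one constellation symbol, can be sketched as follows, assuming base layer bits occupy the most significant positions of the symbol index (the function name and bit ordering are illustrative assumptions):

```python
def mux_layers_to_symbol_index(bl_bits, el_bits):
    """Pack base-layer bits (as MSBs) and enhancement-layer bits (as LSBs)
    into a single constellation symbol index (illustrative sketch)."""
    idx = 0
    for b in bl_bits + el_bits:   # BL bits first => most significant positions
        idx = (idx << 1) | b
    return idx

# 2 BL bits + 2 EL bits address a 16-point (e.g., 16-QAM) constellation.
assert mux_layers_to_symbol_index([1, 0], [1, 1]) == 0b1011  # index 11
```

The resulting index would then select a point from the constructed hierarchical constellation before transmission.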
  • the WTRU may send feedback information to the base station or other network device.
  • the feedback information may include its bits and distance allocation (e.g. as determined at 1460).
  • the feedback information may be transmitted via a PUCCH transmission in some embodiments.
  • the feedback information may be multiplexed with the modulated video symbols and transmitted via a PUSCH transmission.
  • the WTRU may send additional radio-related and/or layer-specific measurements, as at 1454, allowing for dynamic reconfiguration of the modulation scheme and/or scheduling as needed when channel conditions or characteristics change.
  • differential treatment of video traffic may include, in one embodiment, a protocol stack physical layer capable of identifying the data belonging to different video layers and of applying differential treatment to each of the video layers, for example between the video base layer and each of the video enhancement layers, at various PHY processing stages. The protocol stack physical layer may also be capable of simultaneous transmission of different video layers.
  • a video data or bit stream might be partitioned into blocks of data, where each block of data might be characterized by one or more of the following: (1) The video frame that the block of data belongs to; (2) the video layer that the block of data belongs to; and/or (3) the video GOP that the block belongs to.
  • although the UEP of the various embodiments is expressed in terms of differentiated treatment of video layers, it may equally apply to differentiated treatment of video data blocks, wherein the data block might be characterized by one or more of the video frame the data block belongs to, the video layer the data block belongs to, or the GOP to which the video data block belongs.
  • the differentiated treatment may be applied to video frames, or a combination of video frames and video layers.
  • embodiments may use differentiated treatments of video layers within a video frame, or video layers across video frames. While embodiments are described in terms of one video base layer and one video enhancement layer, the various embodiments may equally apply to use cases where there is one video base layer and two or more video enhancement layers.
  • modulation constellation assignment schemes are described below, although others might be utilized. Three examples of differentiating video layers based on modulation constellation assignment include: (i) a single root constellation-based scheme; (ii) a separate roots or multi-roots constellation-based scheme; and/or (iii) a hybrid constellation scheme.
  • a root constellation may be defined and configured into the WTRU by its maximum modulation order, which defines the possible modulation constellation points.
  • the modulation constellations applied to the various layers of the video are all derived from the same root constellation, for example by being based on the video layer-specific minimum distance between modulation constellation points and the number of bits within the set of bits allocated to a modulation symbol, as shown in FIG. 15.
  • constellations might be assigned to each of the video enhancement layers in a hierarchical manner.
  • the modulation constellation of video layer L1 might be derived from the modulation constellation of the video BL, while the modulation constellation of the video layer L2 might be derived from the modulation constellation of the video enhancement L1.
  • the terms hierarchical modulation and single-constellation scheme/diagram may be used interchangeably.
  • the modulation constellations applied to the various layers of the video may be derived from two or more root constellations.
  • the scheduler may use different constellation sets for different video layers.
  • the different layers of a video might be grouped into subgroups of video layers. The scheduler uses the same constellation for the video layers within the same subgroup of video layers, and different constellations for the video layers of different subgroups of video layers.
  • a combination of the single root constellation scheme and separate roots constellation scheme may be applied.
  • a first root constellation is assigned to a video base layer
  • a second root constellation is assigned to the enhancement layers, wherein a single root constellation scheme is used for modulation constellation assignment to each video enhancement layer, using the second root constellation.
  • the second root constellation is assigned to a first video enhancement layer
  • the one or more modulation constellations of the remaining one or more video enhancement layers are derived from the second root constellation in a hierarchical manner following the single root constellation scheme.
  • the first root constellation might be assigned to the video base layer BL
  • the second root constellation is assigned to the video enhancement layer L1
  • the modulation constellation of video layer L2 might be derived from the second root constellation
  • the modulation constellation of the video layer L3 might be derived from the modulation constellation of the video enhancement layer L2.
  • Various combinations and alternatives could also be applied.
  • the mapping or grouping may follow a hierarchy or form a hierarchical constellation, with packets from a first set mapped to a first constellation subset or root constellation and packets from a second set mapped to a second constellation subset or constellation derived from the root constellation, with the second constellation subset being a child or subset of or otherwise created or determined based on the first constellation.
  • single root constellation-based scheme and single constellation-based scheme or simply single constellation scheme will be used interchangeably.
  • separate root constellation-based scheme, multi-root constellation-based scheme, separate constellation-based scheme, multi-constellation-based scheme or simply multi-root constellation scheme or separate constellation scheme may be used interchangeably.
  • Modulation-based UEP is used herein to differentiate different video layers by modulating them differently according to their importance. For instance, bit streams from high-importance video layers (i.e., video BL) can be modulated using a low modulation order while bit streams from low-importance video layers (i.e., video EL) can be modulated using a high modulation order.
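A minimal sketch of this importance-to-modulation-order rule (the specific orders chosen here are assumptions for illustration, not mandated values):

```python
def modulation_order_for_layer(layer_importance: int) -> int:
    """Map layer importance (0 = most important) to bits per symbol.

    Illustrative policy only: more important layers get a lower,
    more robust modulation order.
    """
    orders = [2, 4, 6]   # QPSK, 16-QAM, 64-QAM (bits per modulation symbol)
    return orders[min(layer_importance, len(orders) - 1)]

assert modulation_order_for_layer(0) == 2   # video BL -> QPSK
assert modulation_order_for_layer(2) == 6   # low-importance video EL -> 64-QAM
```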
  • the BS can decide to leverage this scheme based on the capability reported by the WTRU to the BS. Then, the WTRU may receive from the BS a configuration (e.g., via RRC signaling) indicating the modulation-based UEP scheme to use for the modulation of the transmit data or the demodulation of the received data.
  • a hierarchical modulation scheme may be applied within the framework of a single-constellation diagram.
  • HQAM hierarchal quadrature amplitude modulation
  • a modulator 1506 of a device may map different bit streams of different video layers to certain bits in the constellation diagram. More specifically, bit streams from the video BL 1502 are assigned to the most significant bits (MSBs) in the constellation diagram while bit streams from the video EL 1504 are assigned to the least significant bits (LSBs).
  • a constellation diagram may be referred to as constellation set, which may comprise constellation subsets. Each constellation subset may include one or more constellation points. Also, the terms constellation region and constellation subset may be used interchangeably.
  • the WTRU might receive, from the BS or eNB, the modulation scheme to be utilized via RRC signaling. If the configuration received by the WTRU includes more than one modulation scheme, the configuration might also include whether a modulation scheme is activated or deactivated.
  • a WTRU might receive through MAC CE, DCI or SCI signaling, the activation or deactivation command of a modulation scheme configured previously into the WTRU by RRC signaling.
  • the WTRU may receive from the BS, UL/DL scheduling parameters it uses to provide differentiated PHY-based treatment of different video layers in terms of applied modulation and coding scheme for different video layers.
  • the WTRU might receive one or more scheduling parameters from BS for DL data reception through DCI messages in support of dynamic or semi-static scheduling.
  • the WTRU might receive one or more scheduling parameters from BS for UL data transmission either through DCI messages in support of one or more of dynamic scheduling or through RRC signaling.
  • Certain embodiments focus on the enablement of a differentiated treatment of video layers, namely the ability of the WTRU to receive differentiated configuration modulation and coding scheme parameters for a video base layer, or for one or more video enhancement layers, and the ability of the WTRU to process the video base layer versus the one or more video enhancement layers differently, for example, by mapping the modulated symbols to one or more of different time-frequency resources (REs), modulations, or coding schemes, according to the video layer to which a modulated symbol is assigned.
  • a single constellation may be used for two video layers.
  • the following aspects might be applied to enable differentiated error protection between various video layers including: (i) a number of allocated bits to video BL and video EL; and/or (ii) a provided level of protection to each layer via distance between the constellation regions.
  • the scheduler can control the provided level of protection to video BL bits by changing the minimum distance between the constellation regions with different values of MSBs, referred to as d_1 in FIG. 15.
  • the provided protection to LSBs, assigned to video EL data, can be altered by changing the distance d_2, shown in FIG. 15, between constellation points located in regions with the same video BL information. Consequently, two (or more) single-constellation based modulation schemes can be configured into the WTRU, for example, the bits-allocation based scheme of FIG. 16 and the joint bits and distance allocation scheme shown in FIG. 17.
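The joint bits and distance allocation idea can be illustrated with a toy 16-point hierarchical constellation in which d_1 sets the minimum spacing between regions carrying different video BL bits and d_2 sets the spacing between points of the same region carrying different video EL bits; the exact point placement below is an assumption for illustration, not the mapping of FIG. 15:

```python
import itertools

def hqam16_constellation(d1: float, d2: float):
    """Build a 16-point hierarchical QAM constellation (illustrative).

    Per axis: the BL bit selects the half-plane (regions separated by
    at least d1) and the EL bit selects the point inside the region
    (same-region spacing d2).
    """
    def axis_level(bl_bit, el_bit):
        inner = d1 / 2            # closest point of the region to the origin
        outer = d1 / 2 + d2       # its same-region neighbour
        mag = inner if el_bit == 0 else outer
        return mag if bl_bit == 1 else -mag

    points = {}
    for b in itertools.product([0, 1], repeat=4):   # (bl_i, el_i, bl_q, el_q)
        points[b] = (axis_level(b[0], b[1]), axis_level(b[2], b[3]))
    return points

pts = hqam16_constellation(d1=2.0, d2=0.5)
# Closest points of different BL regions on the I axis are exactly d1 apart,
# while same-region neighbours are d2 apart:
assert pts[(1, 0, 0, 0)][0] - pts[(0, 0, 0, 0)][0] == 2.0
assert pts[(1, 1, 0, 0)][0] - pts[(1, 0, 0, 0)][0] == 0.5
```

Increasing d1 relative to d2 strengthens BL protection at the expense of EL protection, which is the trade-off the scheduler controls.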
  • One example embodiment may define the mapping of the bits of video enhancement layer as a function of the mapping of the video base layer bits.
  • in differential mapping, a given symbol is mapped as a function of the information bits (that it represents) and the previous symbol.
  • Embodiments of the present invention may map the bits of the video enhancement layer in a differentiated way, where the bits of the video enhancement layer (sub-symbol) are mapped as a function of the video enhancement layer bits and the video base layer bits. For higher order modulations, this scheme can provide an additional tool to control the UEP performance and improve the performance of video enhancement layer decoding.
  • a variant embodiment for the single-constellation scheme may be applied with one video BL and more than one video EL.
  • each constellation region in FIG. 15 may include more than one video EL.
  • as the priority of the video EL increases, the order of the bits assigned to this video EL increases.
  • the video EL with the least priority can be assigned a certain number of LSBs, while another video EL with a higher priority can be assigned a certain number of bits succeeding the LSBs assigned to the least-priority video EL.
  • the minimum distance between each video EL constellation points can be altered based on the level of protection that should be provided to this video layer.
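The priority-ordered bit assignment described in this variant, where higher-priority layers occupy more significant bit positions, might be sketched as follows (layer names and the MSB-first convention are illustrative assumptions):

```python
def allocate_bit_positions(layers):
    """Assign modulation-symbol bit positions by priority.

    `layers` lists (name, n_bits) pairs, highest priority first; the
    highest-priority layer receives the most significant positions
    (position total-1 is the MSB, position 0 the LSB). Illustrative only.
    """
    total = sum(n for _, n in layers)
    alloc, top = {}, total
    for name, n in layers:
        alloc[name] = list(range(top - 1, top - 1 - n, -1))
        top -= n
    return alloc

alloc = allocate_bit_positions([("BL", 2), ("EL1", 2), ("EL2", 2)])
assert alloc["BL"] == [5, 4]    # the two MSBs of a 64-QAM symbol
assert alloc["EL2"] == [1, 0]   # the two LSBs, for the least-priority EL
```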
  • FIG. 17 is an example representation where a video stream is encoded in one video base layer and one video enhancement layer, termed as video BL 1702 and video EL1 1704.
  • a 64-QAM constellation is used to provide UEP for these two video layers.
  • the two layers in this example use 2 bits each within each constellation symbol.
  • the legend shows that the two MSBs are used to encode the two bits from the video BL, and the two LSBs are used to transmit the video EL1.
  • a WTRU will decode the video BL first, shown as the symbol detection over the green diamond symbols 1706 in the leftmost part of the figure.
  • the WTRU will then process the video EL1 carried in the two LSBs of the constellation symbols at 1708. Detecting the video EL1 bits translates to detecting the red diamond symbols in the middle of this figure.
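The two-stage detection sequence described here, BL region first and then the EL point inside it, can be sketched for a toy per-axis geometry in which each region's inner point sits at d_1/2 from the origin and its outer point at d_1/2 + d_2 (an illustrative assumption, not the exact constellation of FIG. 17):

```python
def detect_hierarchical(rx_i: float, rx_q: float, d1: float, d2: float):
    """Two-stage detection sketch: BL bits from the region (sign of each
    axis), then EL bits from the point within that region."""
    def detect_axis(r):
        bl = 1 if r >= 0 else 0                    # stage 1: region decision
        # stage 2: inner point (d1/2) vs outer point (d1/2 + d2)
        el = 0 if abs(r) < d1 / 2 + d2 / 2 else 1
        return bl, el

    bl_i, el_i = detect_axis(rx_i)
    bl_q, el_q = detect_axis(rx_q)
    return [bl_i, bl_q], [el_i, el_q]              # (BL bits, EL bits)

bl, el = detect_hierarchical(1.1, -2.4, d1=2.0, d2=0.5)
assert bl == [1, 0] and el == [0, 1]
```

Because the BL decision depends only on the region, it tolerates more noise than the EL decision, which is the intended unequal protection.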
  • the WTRU may receive one or more single-constellation based modulation schemes, through either dynamic signaling or in a semi-static manner via RRC signaling.
  • the activation or deactivation command of the configured modulation scheme via RRC signaling may be accomplished, for example, through MAC CE, DCI or SCI signaling.
  • the WTRU should receive an extra set of parameters to be able to separate the received stream of bits into video BL and video EL bits.
  • the WTRU uses the received parameters to adjust the mapping between modulated symbols and time- frequency resources.
  • the WTRU uses the received parameters to identify the appropriate modulation order and code rate for signal transmission/reception.
  • the WTRU can use one of different defined MCS tables for determining the modulation order and coding rate based on the received MCS index. In one example, the WTRU can determine the table from which it identifies the modulation and coding scheme based on a received RRC signaling, the received DCI format, and the RNTI used to scramble the attached CRC to the received DCI.
  • a table that defines the different possible numbers of bits to be allocated to one of the video layers (i.e., video BL) under different modulation orders should be available at both the BS and the WTRU.
  • the WTRU receives configuration parameters for differentiated treatment of video layers accordingly.
  • a table that defines the different combinations of bits allocation and distance allocation for each modulation order should be available at both the WTRU and the BS.
  • the WTRU receives configuration parameters for differentiated treatment of video layers accordingly.
  • the WTRU can be configured with any number of bits less than the configured modulation order for each video layer.
  • the WTRU receives configuration parameters for differentiated treatment of video layers accordingly.
  • the number of bits received by the WTRU for each video layer should be a multiple of 2 to be able to construct the new constellation sets (HQAM-based constellation sets).
  • the WTRU receives configuration parameters for differentiated treatment of video layers accordingly.
  • the WTRU can receive the aforementioned parameters: (i) as a part of DCI messages preceding DL transmission of PDSCH under both dynamic and semi-persistent scheduling; (ii) as a part of DCI messages granting UL transmission of PUSCH under both dynamic scheduling and CS type 2; and/or (iii) as a part of RRC signaling granting UL transmission of PUSCH under CS type 1.
  • a method for WTRU communicating using a single-constellation UEP framework during DL transmission is described.
  • the WTRU might perform one or more of the following actions for the processing of a PDSCH that includes the data of both video layers within the allocated slot for DL reception.
  • the WTRU determines whether it is configured with the single-constellation modulation scheme or not. If yes, it proceeds with the following steps; otherwise, it performs other actions at 1804, as will be discussed later. If the single-constellation modulation scheme is configured, then in some implementations, at 1806, the WTRU uses the received time-frequency resources through DCI messages to detect the DL transmitted symbols over the allocated time-frequency resources for DL reception.
  • the WTRU identifies the received MCS index and determines from which table this index is selected to identify the used modulation order (M) and the code rate.
  • the WTRU is already (pre)configured with MCS configuration tables (e.g., one or more MCS configuration look-up tables).
  • the received MCS index points to an MCS configuration in an MCS configuration table.
  • the MCS configuration pointed to by the received MCS index includes the modulation order (M) and the code rate used.
  • the WTRU determines whether it is configured with the bits allocation or the joint bits and distance allocation scheme. It can also determine whether it is configured with a dynamic or semi-static single-constellation based operation. The WTRU then determines the applied modulation order for video BL and video EL to demodulate the received modulated symbols, and applies the single-constellation option configured into the WTRU to demodulate the received PDSCH. In the following, the actions to be performed by the WTRU to demodulate the received PDSCH signals over allocated time-frequency resources in a frequency-first, time-second manner are described for the different considered single-constellation approaches.
  • the WTRU identifies the received number of assigned modulation symbol MSBs to the BL (N_BL), and also identifies the constellation set and the constellation subsets for both the video base layer and the video enhancement layer using the modulation order M, the minimum distance d_1, and N_BL. Note that in semi-static operation, the WTRU uses the received index to identify the number of assigned MSBs from the configured lookup table, as explained in Table 2.
  • the WTRU determines the received number of assigned modulation symbol MSBs to the BL (N_BL) to know the number of regions representing the different video BL data (2^N_BL).
  • the WTRU also determines the minimum distance d_1 between the constellation points (symbols) carrying different video BL bits and the minimum distance d_2 between the modulation constellation points carrying different video EL bits.
  • the WTRU creates an HQAM-based constellation set depending on M, N_BL, d_1, and d_2. Specifically, the constellation set includes 2^M constellation points, equally divided among the 2^N_BL regions.
  • the WTRU identifies the constellation subset for the video base layer using the minimum distance d_1 and N_BL. Then, within each one of the 2^N_BL regions, the WTRU identifies the constellation subset for the video enhancement layer by creating 2^(M-N_BL) constellation points using the minimum distance d_2. Note that in the semi-static approach, the WTRU uses the received index for the selected constellation to obtain the number of MSBs assigned to the video BL, the minimum distance between the modulation symbols for the video BL, and the minimum distance between the modulation symbols of the video EL.
  • the WTRU demodulates the received symbols using the constellation set corresponding to this modulation order. Then, the WTRU assigns the N_BL bits (identified in step 4) of the demodulated symbols to the video BL and assigns the other bits to the video EL.
  • the WTRU assembles the obtained video BL bit streams from allocated time-frequency resources in a frequency-first, time-second manner to reconstruct the protocol stack physical layer (PHY) code block(s) for the video BL. Similarly, the WTRU then reconstructs the protocol stack physical layer code blocks for the video EL.
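The frequency-first, time-second reassembly can be sketched as a nested loop over a demodulated resource grid; the grid layout (one bit list per resource element) is an illustrative assumption:

```python
def assemble_bits_frequency_first(grid):
    """Collect demodulated bits in frequency-first, time-second order.

    grid[t][f] holds the bit list demodulated from the resource element
    at OFDM symbol t and subcarrier f (illustrative layout).
    """
    out = []
    for t in range(len(grid)):           # time varies second (outer loop)
        for f in range(len(grid[t])):    # frequency varies first (inner loop)
            out.extend(grid[t][f])
    return out

grid = [[[0, 1], [1, 0]],    # OFDM symbol 0: two subcarriers
        [[1, 1], [0, 0]]]    # OFDM symbol 1
assert assemble_bits_frequency_first(grid) == [0, 1, 1, 0, 1, 1, 0, 0]
```

The same ordering would be applied separately to the BL bit stream and the EL bit stream before code block reconstruction.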
  • the WTRU first decodes the protocol stack physical layer code block(s) for the video BL based on the determined code rate at 1808.
  • the WTRU checks if the PHY code block(s) of the video BL are correctly decoded. If not, at 1826, the WTRU drops the received protocol stack physical layer code block(s) that correspond to the video EL, and at 1828, the WTRU sends a negative acknowledgment (NACK) to the base station or serving node for retransmission. If the PHY code blocks of the video BL are correctly decoded, then at 1830, the WTRU decodes the PHY code blocks of the video EL based on the same code rate used to decode the PHY code block(s) for the video BL; and at 1832, the WTRU checks if the PHY code block(s) of the video EL are correctly decoded.
  • the WTRU may or may not require retransmission of the video EL based on its required QoS. If the WTRU requires retransmission of the video EL, then at 1834, the WTRU sends a NACK to the serving node or base station. If the WTRU does not require retransmission of the video EL, or if the video EL code block(s) are correctly decoded, then at 1836, the WTRU sends an ACK to the serving node or base station, and at 1838, the WTRU concatenates the correctly decoded code blocks of both video layers to construct the transport block to be transferred to the protocol stack upper layers.
  • the WTRU may determine whether retransmission of the video EL is required based on its priority, importance (e.g., whether it is needed for decoding or predicting other frames), QoS level, measurements of transmission characteristics (e.g., received signal strength, noise or interference, bandwidth, latency, etc.), or any other type and form of factors or combination of factors.
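The BL-first decode-and-feedback flow above can be sketched as follows. `process_received_layers` and the toy `decode` callable are hypothetical helper names, and the string return values stand in for actual HARQ ACK/NACK signaling.

```python
def process_received_layers(decode, bl_blocks, el_blocks, el_needs_retx):
    """Sketch of the BL-first decoding flow: `decode` returns (bits, ok).
    The EL is only attempted when the BL decoded correctly, mirroring the
    drop-and-NACK behavior described above."""
    bl_bits, bl_ok = decode(bl_blocks)
    if not bl_ok:
        # BL failed: EL blocks are useless without it -> drop them, send NACK
        return None, "NACK"
    el_bits, el_ok = decode(el_blocks)
    if not el_ok and el_needs_retx:
        # EL errors matter for the required QoS -> request retransmission
        return bl_bits, "NACK"
    # concatenate the correctly decoded layers into one transport block
    return bl_bits + (el_bits if el_ok else []), "ACK"

ok = lambda blocks: (list(blocks), True)   # toy decoder: always succeeds
tb, feedback = process_received_layers(ok, [1, 2], [3, 4], el_needs_retx=True)
```

When both layers decode, the transport block is the concatenation of the BL and EL bits and an ACK is sent; when only the EL fails and its QoS does not require retransmission, the BL bits alone are delivered with an ACK.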
  • the WTRU might perform one or more of the following for the processing of a PUSCH that includes the data of both video layers within the allocated slot for UL transmission.
  • the WTRU sends a buffer status report (BSR) to a base station (BS), access point, or other serving node over PUSCH as a part of a MAC CE.
  • the sent BSR can be used to notify the BS about the amount of data the WTRU needs to send for each video layer.
  • the WTRU may receive a UL grant along with scheduling-related parameters either through DCI messages if it is configured with dynamic scheduling or CS type 2, or through RRC signaling if it is configured with CS type 1.
  • the WTRU determines the coding rate it will use to encode the video BL and video EL bit streams by determining the received MCS index and the table from which this MCS index is selected. As described earlier, the WTRU is already (pre)configured with MCS configuration tables (e.g., one or more MCS configuration look-up tables). The received MCS index points to an MCS configuration in an MCS configuration table. The MCS configuration pointed to by the received MCS index includes the code rate to be used by the WTRU.
  • the WTRU encodes both the video BL bit streams and the video EL bit streams with the code rate it determined at 1906 to generate encoded PHY code block(s) for the video BL and encoded PHY code block(s) for the video EL.
  • the WTRU may identify which modulation approach it will apply to modulate the bit streams of each video layer. To this end, and based on the RRC signaling as well as the activation/deactivation of different UEP-based modulation schemes that might be received through DCI messages, MAC CE, or SCI signaling, the WTRU determines whether it is configured with the single-constellation modulation scheme. If yes, it proceeds to 1914. Otherwise, at 1912, it performs other actions as discussed later.
  • the WTRU identifies the received MCS index and determines the table from which the MCS index is selected to identify the modulation order to use (M). As described earlier, the WTRU is already (pre)configured with MCS configuration tables (e.g., one or more MCS configuration look-up tables).
  • the received MCS index points to an MCS configuration in an MCS configuration table.
  • the MCS configuration pointed to by the received MCS index includes the modulation order M.
  • the WTRU determines whether it is configured with the bit-allocation or the joint bits and distance allocation scheme. Also, it can determine whether it is configured with a dynamic or semi-static single-constellation based operation.
  • the WTRU determines the modulation order it applies for video BL and video EL.
  • the WTRU applies a single-constellation option configured into it to modulate the encoded video BL and video EL code blocks.
  • a method for a WTRU to modulate the PHY code block(s) for the video BL and video EL is described.
  • the WTRU identifies the received number of modulation symbol MSBs assigned to the BL (NBL) and also identifies the constellation set and the constellation subsets for both the video base layer and the video enhancement layer using M, d1, and NBL.
  • the WTRU identifies the received number of MSBs assigned to the video BL (NBL), the minimum distance between constellation points carrying different video BL symbols (d1), and the minimum distance between constellation points carrying different EL symbols (d2).
  • the WTRU may create an HQAM-based constellation set depending on M, NBL, d1, and d2.
  • the constellation set includes 2^M constellation points, equally divided among 2^NBL regions.
  • the WTRU identifies the constellation subset for the video base layer using the minimum distance d1 and NBL.
  • the WTRU identifies the constellation subsets for the enhancement layer by creating 2^(M-NBL) constellation points using the minimum distance d2.
  • such constellations including constellation subsets may be referred to as a hierarchical constellation.
  • the WTRU combines the NBL bits (identified at 1918 or 1920) for the video BL and the (M - NBL) bits for the video EL to create M bits in which the modulation symbol bits for the BL are the MSBs.
  • the WTRU performs modulation by mapping the created M bits to the corresponding constellation point.
  • the WTRU determines the allocated time-frequency resources for PUSCH transmission based on the received parameters for time allocation and frequency allocation. At 1930, the WTRU maps the modulated symbols to the time-frequency resources (identified at 1928) in a frequency-first, time-second manner.
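The bit-combining step above, in which NBL video-BL bits become the MSBs of each M-bit modulation label, can be sketched as follows. `multiplex_to_labels` is a hypothetical helper; the integer labels it returns would index into the constellation set built earlier.

```python
def multiplex_to_labels(bl_stream, el_stream, n_bl, m):
    """Per modulation symbol, take n_bl video-BL bits as the MSBs and
    m - n_bl video-EL bits as the LSBs, returning integer symbol labels
    that index into a 2^m-point constellation set."""
    n_el = m - n_bl
    n_syms = min(len(bl_stream) // n_bl, len(el_stream) // n_el)
    labels = []
    for i in range(n_syms):
        bits = bl_stream[i * n_bl:(i + 1) * n_bl] \
             + el_stream[i * n_el:(i + 1) * n_el]
        label = 0
        for b in bits:
            label = (label << 1) | b   # BL bits shift into the MSB positions
        labels.append(label)
    return labels

# two 16-QAM symbols (m=4, n_bl=2): labels 0b0111 = 7 and 0b1000 = 8
labels = multiplex_to_labels([0, 1, 1, 0], [1, 1, 0, 0], n_bl=2, m=4)
```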
  • the previous discussion kept its focus on the usage where the application video data comprises two video layers of different priorities to help explain concepts around UEP.
  • the video streams are typically encoded in more than two video layers.
  • an exemplary embodiment is disclosed to show a single constellation based UEP used to provide differentiated transmission treatment (or reception) for three video layers.
  • FIG. 20 is an example representation where a video stream is encoded in one video base layer 2002 and two video enhancement layers, termed as video EL1 2004 and video EL2 2006.
  • A 64-QAM constellation is used to provide UEP for these three video layers. All three layers in this example use 2 bits each within each constellation symbol.
  • the legend shows two MSBs are used to encode the two bits from the video BL, the next two bits in the middle of the symbol bits are used to transmit two bits from video EL1, and the two LSBs are used to transmit video EL2. From the WTRU processing perspective, a WTRU will first decode the video BL, shown as the symbol detection over the green diamond symbols 2008.
  • the WTRU will then process the video EL1 in the two middle bits of the constellation symbols at 2010. Detecting the video EL1 bits translates to detecting the red diamond symbols in the middle of the figure. Having detected the video EL1 bits, the WTRU can zoom in further to detect the two LSBs of video EL2, shown on the right side of the figure at 2012.
  • the relative error probabilities in this UEP constellation diagram with three video layers are dictated by the careful choice of the constellation parameters d1, d2, and d3. Additional control and tuning of the relative error probabilities can be enabled by additional constellation design parameters, such as vertical distance and unequal distance among the constellation points.
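The three-layer hierarchy with parameters d1, d2, and d3 can be illustrated with a recursive one-dimensional placement. `hierarchical_pam` is a hypothetical helper and the symmetric per-axis construction is an illustrative assumption, not the exact constellation of the disclosure.

```python
def hierarchical_pam(dists):
    """Recursive 1-D hierarchical placement sketch: each distance in
    `dists` (e.g. [d1, d2, d3]) adds one bit level that splits every
    existing point into two points offset by +/- d/2. Applying the result
    independently on the I and Q axes yields a square hierarchical QAM:
    8 amplitudes per axis -> 64-QAM for three two-bit layers."""
    points = [0.0]
    for d in dists:
        points = [p + s * d / 2 for p in points for s in (+1.0, -1.0)]
    return points

axis = hierarchical_pam([8.0, 4.0, 2.0])   # 2^3 = 8 amplitudes per axis
```

Choosing d1 > d2 > d3 makes BL-level decision regions the widest and EL2-level regions the narrowest, which is exactly the graded error protection the diagram conveys.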
  • Regarding channel time variation estimation and feedback, a measure of how fast radio channel conditions are changing with time can play an important role in the dynamic adaptation of UEP embodiments.
  • the measurement quantity may involve a rate of change including the phase of the estimated channel, or it can be based only upon the channel magnitude, ignoring the phase. This measurement can be made more precise in the form of a Doppler estimate among the available channel estimates at different time instants. Additional conditions in terms of averaging and filtering can be defined to make this quantity stable prior to feedback and use in dynamic adaptation.
  • a device can estimate the rate of change of channel conditions through estimates made over one or a combination of existing reference signals (RSs).
  • the RSs can be the DMRS of the SSB, the DMRS of data, CSI-RS, or even the SSBs themselves. New, more suitable RSs could also be defined exclusively for this purpose.
  • the RS can be WTRU-dedicated, group-common, or cell/beam-specific, which may allow a WTRU to estimate Doppler.
  • the indication of channel time variation can be transmitted in the form of a single-bit flag, which may indicate a channel time variation larger than a pre-defined or configured threshold.
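The single-bit time-variation flag described above can be sketched with a simple correlation metric between two channel estimates. The normalized-correlation metric and the helper name `time_variation_flag` are illustrative assumptions; the disclosure leaves the exact measurement quantity open.

```python
def time_variation_flag(h_prev, h_curr, threshold):
    """One-bit channel time-variation indicator sketch: the magnitude of
    the normalized correlation between two channel estimate vectors. Low
    correlation means the channel changed quickly, so the flag is set."""
    num = sum(a * b.conjugate() for a, b in zip(h_curr, h_prev))
    den = (sum(abs(a) ** 2 for a in h_curr)
           * sum(abs(b) ** 2 for b in h_prev)) ** 0.5
    rho = abs(num) / den if den else 1.0
    return 1 if (1.0 - rho) > threshold else 0

h0 = [1 + 0j, 0 + 1j]                                    # estimate at time t
static_flag = time_variation_flag(h0, h0, threshold=0.1)         # unchanged
fast_flag = time_variation_flag(h0, [0 + 1j, 1 + 0j], 0.1)       # changed
```

In a real system, the estimates would come from the RSs listed above and the quantity would be averaged/filtered before comparison with the configured threshold.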
  • the network can configure the size/pattern of the channel time variation feedback, which could be selected among a number of options defined for a given network. A subset of these options can be indicated to the WTRU as part of semi-static configuration.
  • An estimated channel time variation indication, after suitable processing/filtering, can be provided as feedback to the network, for example, as part of uplink control information (UCI).
  • the UCI carrying channel time variation indication can be transmitted either in PUCCH or in PUSCH.
  • the channel time variation feedback can be configured as periodic, semi-static or aperiodic.
  • the network can configure suitable parameters controlling the periodicity of this feedback.
  • channel frequency selectivity estimation and feedback may be desirable.
  • Channel variation in the frequency domain or channel frequency selectivity can be another important parameter to choose judicious use of UEP embodiments to combat the frequency selectivity and avoid the deep fades hitting the prioritized video layers/data.
  • the measurement quantity may involve a rate of change including the phase of the estimated channel, or it can be based only upon the channel magnitude, ignoring the phase. Additional conditions in terms of averaging and filtering can be defined to make this quantity stable prior to feedback and use in dynamic adaptation.
  • a device can estimate the channel frequency selectivity through multiple channel estimates made over different parts of the bandwidth. These estimates can be made using a suitable RS or a combination of RSs, examples of which include the DMRS of the SSB, the DMRS of data, CSI-RS, or even the SSBs themselves. New RSs may also be defined exclusively for this purpose. These RSs can be WTRU-dedicated, group-common, or cell/beam-specific, which may allow a WTRU to estimate the channel over different frequency portions. In one embodiment, DMRS type 1 and type 2 could be suitably used to estimate the channel frequency selectivity as they span all the physical resource blocks (PRBs) in the scheduled resource.
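The per-subband estimation described above can be turned into a one-bit selectivity indicator analogous to the time-variation flag. The magnitude-spread metric and the helper name `freq_selectivity_flag` are illustrative assumptions.

```python
def freq_selectivity_flag(h_subbands, threshold):
    """One-bit frequency-selectivity indicator sketch: compares the spread
    of per-subband channel magnitudes to their mean. A large spread
    suggests deep fades within the scheduled bandwidth."""
    mags = [abs(h) for h in h_subbands]
    mean = sum(mags) / len(mags)
    spread = max(mags) - min(mags)
    return 1 if mean and spread > threshold * mean else 0

flat = freq_selectivity_flag([1 + 0j] * 4, threshold=0.5)          # flat channel
selective = freq_selectivity_flag([1, 0.1, 1, 0.2], threshold=0.5) # deep fades
```

A set flag could steer the network toward, for instance, mapping the prioritized video BL bits away from the faded frequency portions, as the surrounding text suggests.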
  • an indication of channel frequency selectivity can be transmitted in the form of a single-bit flag, which may indicate a channel frequency selectivity larger than a pre-defined or configured threshold.
  • the network can configure the size/pattern of the channel frequency selectivity feedback, which could be selected among several options defined in the specification. A subset of these options can be indicated to the WTRU as part of semi-static configuration.
  • the estimated channel frequency selectivity indication after suitable processing/filtering, can be provided as feedback to the network as part of uplink control information (UCI).
  • the UCI carrying channel frequency selectivity indication can be transmitted either in PUCCH or in PUSCH.
  • the channel frequency selectivity feedback can be configured as periodic, semi-static or aperiodic and a network can configure suitable parameters controlling the periodicity of this feedback.
  • the WTRU can estimate/measure these quantities and report them to the network in a suitable format.
  • the WTRU can make a direct request for the constellation and video layer mapping specific parameters that it desires to use. In an example, these can be the expected receive parameters through which it prefers to receive DL layered video.
  • the requested parameters may be those that the WTRU prefers to transmit the layered video to the base station in the uplink direction.
  • the indication of modulation based UEP parameters can comprise the constellation design parameters (for example distance parameters), the bit allocation for various video layers, the relative mapping for video layers as discussed previously.
  • the direct feedback for requested modulation based UEP can be transmitted in the uplink direction to the base station by transmitting this feedback as part of uplink control information.
  • This information can be transmitted as part of PUCCH or PUSCH.
  • the base station can configure this reporting as periodic, semi-static or aperiodic. To cover dynamic embodiments, this reporting can be event triggered, where the suitable triggers can be defined to report this feedback.
  • suitable triggers can be in terms of channel variations in time or frequency being more than a configured threshold.
  • dynamic adaptation for single-constellation differentiation may involve the assignment and split of the available resource/capacity/bits for different video layers having different priorities. Variable factors such as traffic flows (video layers), multiple system design considerations, WTRU capabilities, and some long-term channel characteristics for the relevant WTRU may be used to dynamically assign different resources or different priorities to different video layers in the UEP schemes.
  • the UEP embodiments discussed herein should utilize dynamic adjustment in the face of network dynamics, which may include variations in the system load, different cell capacities while the WTRU is in mobility, and changing radio conditions. Different measurement quantities were identified previously, which a WTRU can estimate and report back to the network in suitable formats reflecting the current channel conditions.
  • the probability that a WTRU may decode successfully a given higher order constellation reduces with the channel time variations as the quality of channel estimates goes down in direct proportion to the channel time variation.
  • the network can use the time variation indication to adjust the rates for both video base layer and one or more of the video enhancement layers. Another possibility could be to stop transmitting one of the video enhancement layers if the network estimates that WTRU will not be able to decode it anyway.
  • the network decisions in the updated dynamic split of bits assignment and transmission of a given number of video layers can be indicated to the WTRU through dynamic signaling.
  • the previous embodiments have identified how a transmitting device can make dynamic updates to the layered video parameters, the allocation of different video layers bits to constellation bits, and the size of each video layer bits for a given constellation.
  • the constellation design parameters can be updated themselves to obtain a more suitable form of constellation in view of the system considerations, WTRU capability, feedback from the receiver about the channel variations, etc.
  • the constellation design parameters, for example d1, d2, and d3 as discussed previously, can be updated with respect to the available information elements.
  • the update of constellation design parameters results in a change in the expected probability of detection for various video layers. This change can thus provide an outcome in achieving a given prioritization of different video layers when the design considerations are changing.
  • the constellation can be made to switch from a single-root constellation to a multi-root constellation, or a multi-root constellation can be updated to another multi-root constellation of different parameters.
  • mapping of different video layers to the constellation can be updated, for example updating mapping of a given video enhancement layer as a function of detected bits (subsymbol) corresponding to the video base layer
  • one configuration may build a hierarchical modulation constellation for transmission/reception of layered video, and another configuration may base the design on video layer specific modulation constellations.
  • the base station may provide the relevant configurations for hierarchical constellation and separate video layer specific constellation while providing an indication of active configuration.
  • This configuration may then be used for UL or DL transmission of layered video data.
  • the network can switch the active configuration.
  • the signaling to switch the active configuration can be sent through semi-static signaling or in a more dynamic manner by indicating in the DCI. This can be easily achieved by a single-bit flag providing the active configuration indication.
  • the WTRU can request the base station to switch the active configuration for the layered video transmission/reception.
  • This configuration switch request can be transmitted to the base station in the uplink direction.
  • One signaling example to achieve this is to add this active configuration indication in the UEP feedback.
  • FIG. 21 shows an exemplary embodiment for the single constellation based UEP layered video transmission in the DL direction.
  • a WTRU may report its capability and assistance information to the base station (BS), access point, gNB, or serving node (referred to generally as a BS or serving node) at 2102.
  • the BS or serving node provides the single hierarchical constellation based UEP configuration for layered video transmission.
  • This configuration provides parameters of video layer specific bits size, constellation part, distance, and bit mapping for the constellation.
  • the configuration may indicate the dynamic update option for a subset of these parameters.
  • scheduling DCI provides the time frequency resource and may complete the constellation design information.
  • the configuration provides a set of parameters which are completed by the dynamic indication later.
  • the configuration may provide a set of parameters, e.g., related to the constellation choice and construction with suitable distances, and then some of these parameters may be overwritten by the dynamic indication.
  • the overwriting of UEP parameters as part of dynamic indication at 2106 provides the network the ability to respond to dynamic traffic changes, the network system load variations and to respond well to the channel variations.
  • Upon decoding the DCI, the WTRU will receive the scheduled data from the indicated time-frequency resource at 2108. At 2110, the WTRU will prepare the constellation for the received video layers using the received information from the BS or serving node.
  • the WTRU may demodulate the video base layer, and then at 2114 the video enhancement layers using the relevant parts of the hierarchical constellation. After the demodulation, at 2116, the WTRU may proceed to channel decoding of the demodulated video layers.
  • the WTRU may prepare the UEP feedback which can request a specific set of target constellations from the BS or serving node for the next transmission. UEP feedback may additionally include the indication of channel time and frequency variation to allow suitable UEP processing/scheduling for the subsequent transmissions. These estimates can be prepared over the reference symbols which are part of the scheduled resource, or the BS or serving node can transmit dedicated ones for such estimation.
  • the WTRU will transmit the UEP feedback in the UL direction at 2120.
  • a method for a WTRU communicating in a wireless network using layered video transmission in the uplink using single-constellation based UEP is shown.
  • a WTRU may report its capability and assistance information to the BS or serving node.
  • the BS or serving node provides the single constellation based UEP configuration for layered video transmission in the UL direction.
  • This configuration provides parameters of video layer specific mapping over the hierarchical constellation, distance, and bit mapping rules for the single constellation.
  • the configuration may indicate the dynamic update option for a subset of these parameters.
  • the scheduling DCI provides the UL time frequency resource and may complete the constellation design information either by providing any missing elements or updating certain elements configured in RRC configuration at 2206.
  • Upon decoding the DCI, the WTRU will perform channel encoding of the different video layers at 2208, followed by multiplexing of the layered coded bits according to the configuration at 2210. At 2212, the WTRU may prepare the indicated constellation for the video layers to be transmitted. At 2214, the WTRU will then modulate the multiplexed layered video data over the constellation. The WTRU then transmits the UEP-modulated layered video data over the scheduled UL time-frequency resource at 2216.
  • FIG. 23 illustrates one example embodiment for a method for WTRU communicating in a wireless network using the single constellation-based UEP layered video transmission in the UL direction and providing feedback to the base station in the UL direction.
  • a WTRU may report its capability and assistance information to the BS or serving node.
  • the BS or serving node provides the single constellation based UEP configuration for layered video transmission in the UL direction.
  • This configuration provides parameters of video layer specific mapping over the hierarchical constellation, distance, and bit mapping rules for the single constellation.
  • the configuration may indicate the dynamic update option for a subset of these parameters.
  • the scheduling DCI provides the UL time frequency resource and may complete the constellation design information either by providing the missing elements or updating certain elements configured in RRC configuration.
  • the WTRU will perform channel encoding of different video layers, followed by multiplexing of layered video coded bits according to the configuration at 2310.
  • the WTRU may prepare the indicated constellation for the video layers to be transmitted.
  • the WTRU may modulate the multiplexed layered video data over the constellation.
  • the WTRU may prepare WTRU feedback which can comprise a target constellation for subsequent transmission(s) indicating in addition video layer specific mapping and constellation design parameters. The WTRU may then multiplex the UEP feedback with the layered video data at 2318.
  • the WTRU may transmit the multiplexed UEP feedback and layered video data over the scheduled UL time frequency resource.
  • the present disclosure is directed to a method including determining, by a wireless transmit/receive unit (WTRU), a first number of bits for a first media packet, and a second number of bits for a second media packet.
  • the method also includes determining, by the WTRU, a modulation constellation symbol based on the determined first number of bits and the second number of bits.
  • the method also includes multiplexing, by the WTRU, the first number of bits of the first media packet and the second number of bits of the second media packet onto the determined modulation constellation symbol for transmission.
  • the method also includes sending, by the WTRU, feedback information to a network device, the feedback information comprising an identification of the first number of bits, the first constellation distance, the second number of bits, and the second constellation distance.
  • the method includes transmitting, by the WTRU, the multiplexed modulation constellation symbol.
  • the method includes identifying, by the WTRU, a first constellation subset based on the first number of bits and a second constellation subset based on the second number of bits, wherein the first constellation subset and the second constellation subset constitute a hierarchical constellation.
  • the method includes determining, by the WTRU, a first constellation distance for the first media packet, and a second constellation distance for the second media packet. In a further implementation, the determination of the modulation constellation symbol is further based on the first constellation distance and the second constellation distance.
  • the method includes reporting, by the WTRU to the network device, an identification of capabilities of the WTRU for differentiating between different video layers; and receiving, by the WTRU from the network device, a modulation configuration for transmission of the different video layers.
  • the method includes reporting, by the WTRU to the network node, one or more layer-specific buffer status reports.
  • multiplexing the first number of bits of the first media packet and the second number of bits of the second media packet onto the determined modulation constellation symbol comprises: assigning the first number of bits of the first media packet to a first most significant bits of the modulation constellation symbol and the second number of bits of the second media packet to a second most significant bits of the modulation constellation symbol.
  • the first most significant bits are more reliable than the second most significant bits.
  • the first media packet comprises data of a base layer of a video signal and the second media packet comprises data of an enhancement layer of the video signal.
  • the present disclosure is directed to a wireless transmit/receive unit (WTRU).
  • the WTRU comprises one or more transceivers and one or more processors.
  • the one or more processors are configured to: determine a first number of bits for a first media packet, and a second number of bits for a second media packet; determine a modulation constellation symbol based on the determined first number of bits and the second number of bits; multiplex the first number of bits of the first media packet and the second number of bits of the second media packet onto the determined modulation constellation symbol for transmission; and send feedback information to a network device, the feedback information comprising an identification of the first number of bits, the first constellation distance, the second number of bits, and the second constellation distance.
  • the one or more processors are further configured to transmit, via the one or more transceivers, the multiplexed modulation constellation symbol.
  • the one or more processors are further configured to: identify a first constellation subset based on the first number of bits and a second constellation subset based on the second number of bits, wherein the first constellation subset and the second constellation subset constitute a hierarchical constellation.
  • the one or more processors are further configured to determine a first constellation distance for the first media packet, and a second constellation distance for the second media packet.
  • the determination of the modulation constellation symbol is further based on the first constellation distance and the second constellation distance.
  • the one or more processors are further configured to: report, to the network device, an identification of capabilities of the WTRU for differentiating between different video layers; and receive, from the network device, a modulation configuration for transmission of the different video layers.
  • the one or more processors are further configured to report, to the network node, one or more layer-specific buffer status reports.
  • the one or more processors are further configured to: assign the first number of bits of the first media packet to a first most significant bits of the modulation constellation symbol and the second number of bits of the second media packet to a second most significant bits of the modulation constellation symbol.
  • the first most significant bits are more reliable than the second most significant bits.
  • the first media packet comprises data of a base layer of a video signal and the second media packet comprises data of an enhancement layer of the video signal.


Description

MODULATION BASED UEP-HIERARCHICAL MODULATION
RELATED APPLICATIONS
[0001] The present application claims the benefit of and priority to U.S. Provisional Patent Application No. 63/423,367, entitled “Modulation Based UEP-Hierarchical Modulation,” filed November 7, 2022, the entirety of which is incorporated by reference herein.
BACKGROUND
[0002] Evolving wireless networks used for mobile media services, cloud augmented and virtual reality (AR/VR), cloud gaming, and video-based tele-control for machines or drones are expected to have significantly increased traffic in the near future. All of these types of media traffic, in spite of differing compression techniques and codecs being used, have some common characteristics. These characteristics can be very useful for potential improvement of transmission control and efficiency in the evolution of radio access network (RAN) architectures. However, current architectures generally handle media services together with other data services without taking full advantage of these commonalities. By way of example, packets within an application data frame have dependency on each other since the application needs all of these packets for decoding the frame. Hence, one packet loss will make other correlative packets useless even if they are successfully transmitted. For example, certain applications may impose requirements in terms of Media Units (Application Data Units), rather than in terms of single packets/PDUs.
[0003] In another example, packets of the same video stream but of different frame types (I/P/B frames), or even different positions in a group of pictures (GoP), contribute differently to user experience (e.g., a frame corresponding to a base layer picture at a first resolution, and a frame corresponding to an enhancement layer for providing a second, higher resolution picture), so a layered QoS approach to handling within the video stream can potentially relax the requirement, thus leading to higher efficiency. However, current implementations of systems lack the ability to properly differentiate between types of data at lower layers of the network stack, such as a physical (PHY) layer.
SUMMARY
[0004] To address the above and other problems in existing implementations, the present application is directed to systems and methods for unequal error protection (UEP)-hierarchical modulation. In particular, certain embodiments relate to differentiating data in a PHY layer, e.g., packets for transmission and/or reception, based on a relative importance of corresponding data packets they contain from an application layer. For example, a wireless transmit and receive unit (WTRU) may differentiate received packets based on an applied modulation configuration used, and prioritize/group them, for example, into data for a video base layer (BL) and data for a video enhancement layer (EL). In one example, the WTRU applies a single-constellation method configured in the WTRU to demodulate signals received in the downlink. The WTRU may distinguish the packets received, e.g., BL or EL, based on the points in the constellation by which they are modulated. Physical packet data units (PPDUs) may be grouped into sets based on their priority and mapped to modulation, so that, for example, base layer video packets are demodulated with higher priority. Accordingly, in some implementations, the mapping or grouping may follow a hierarchy or form a hierarchical constellation, with packets from a first set mapped to a first constellation subset and packets from a second set mapped to a second constellation subset, with the second constellation subset being a child or subset of the first constellation. Multiple other embodiments are also described.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] A more detailed understanding may be had from the following description, given by way of example in conjunction with the accompanying drawings, wherein like reference numerals in the figures indicate like elements, and wherein:
[0006] FIG. 1A is a system diagram illustrating an example communications system in which one or more disclosed embodiments may be implemented;
[0007] FIG. 1B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A according to an embodiment;
[0008] FIG. 1C is a system diagram illustrating an example radio access network (RAN) and an example core network (CN) that may be used within the communications system illustrated in FIG. 1A according to an embodiment;
[0009] FIG. 1D is a system diagram illustrating a further example RAN and a further example CN that may be used within the communications system illustrated in FIG. 1A according to an embodiment;
[0010] FIG. 2 is a block diagram illustrating an example multi-modal interactive system;
[0011] FIG. 3 is an example illustration comparing an example of video game group-of-pictures (GOP) frame viewing order versus an example frame transmission order;
[0012] FIG. 4 illustrates effects of errors in the example of FIG. 3;
[0013] FIG. 5 illustrates an example architecture of a layered video scheme, where video quality is refined gradually;
[0014] FIG. 6 is a representation of an example of dependencies for the GOP=2 partition mode of an H.264/AVC video stream;
[0015] FIG. 7 is a representation of an example of dependencies for GOP=4 in a scalable video coding (SVC) stream;
[0016] FIG. 8 is a representation of an example with GOP=4 in a multi-view video coding (MVC) encoded stereoscopic video stream;
[0017] FIG. 9 shows an example video stream packetized into a real-time transport protocol (RTP) packet data unit (PDU) stream;
[0018] FIG. 10 is a block diagram illustrating a system using a quality of service (QoS) Model with extension for Media PDU Classification;
[0019] FIG. 11 illustrates an example of PDU sets within a QoS flow of packets;
[0020] FIG. 12 is a representation of an example of control plane protocol stack layers according to various embodiments;
[0021] FIG. 13 is a representation of an example of user plane protocol stack layers according to various embodiments;
[0022] FIG. 14A is a sequence diagram between network entities demonstrating an overview for video layer-aware scheduling according to one example embodiment;
[0023] FIG. 14B is a sequence diagram between network entities demonstrating an overview for video layer-aware scheduling according to another example embodiment;
[0024] FIG. 15 illustrates an example of single-constellation based operation signaling according to one embodiment;
[0025] FIG. 16 illustrates an example embodiment of signaling using single constellation- bits-allocation;
[0026] FIG. 17 is an example embodiment of signaling using single constellation - joint bits and distance allocation;
[0027] FIG. 18 shows a flow diagram and method for a wireless transmit and receive unit (WTRU) to enable the single-constellation unequal error protection (UEP) framework during DL transmission according to one embodiment;
[0028] FIG. 19 shows a flow diagram and method for a WTRU to enable the single-constellation UEP framework during UL transmission according to one embodiment;
[0029] FIG. 20 is a representation illustrating an embodiment using a split of six quadrature amplitude modulation (QAM) bits for use in three different priority layers of traffic;
[0030] FIG. 21 is a flow diagram illustrating a method of a WTRU communicating in a wireless network according to an embodiment using single constellations in the downlink;
[0031] FIG. 22 is a flow diagram illustrating a method of a WTRU communicating in a wireless network according to an embodiment using single constellations in the uplink; and
[0032] FIG. 23 is a flow diagram illustrating a method of a WTRU communicating in a wireless network according to an embodiment using single constellations in the uplink with feedback.
DETAILED DESCRIPTION
[0033] FIG. 1A is a diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented. The communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users. The communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth. For example, the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), singlecarrier FDMA (SC-FDMA), zero-tail unique-word discrete Fourier transform Spread OFDM (ZT-UW-DFT-S- OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
[0034] As shown in FIG. 1A, the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a radio access network (RAN) 104, a core network (CN) 106, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment. By way of example, the WTRUs 102a, 102b, 102c, 102d, any of which may be referred to as a station (STA), may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in an industrial and/or automated processing chain context), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like. Any of the WTRUs 102a, 102b, 102c, and 102d may be interchangeably referred to as a UE.
[0035] The communications systems 100 may also include a base station 114a and/or a base station 114b. Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106, the Internet 110, and/or the other networks 112. By way of example, the base stations 114a, 114b may be a base transceiver station (BTS), a NodeB, an eNode B (eNB), a Home Node B, a Home eNode B, a next generation NodeB, such as a gNode B (gNB), a new radio (NR) NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
[0036] The base station 114a may be part of the RAN 104, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, and the like. The base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum. A cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors. For example, the cell associated with the base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, i.e., one for each sector of the cell. In an embodiment, the base station 114a may employ multiple-input multiple-output (MIMO) technology and may utilize multiple transceivers for each sector of the cell. For example, beamforming may be used to transmit and/or receive signals in desired spatial directions.
[0037] The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 116 may be established using any suitable radio access technology (RAT).
[0038] More specifically, as noted above, the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 104 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 116 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed Uplink (UL) Packet Access (HSUPA).
[0039] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
[0040] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using NR.
[0041] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies. For example, the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles. Thus, the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
[0042] In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
[0043] The base station 114b in FIG. 1A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, an industrial facility, an air corridor (e.g., for use by drones), a roadway, and the like. In one embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN). In an embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN). In yet another embodiment, the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR, etc.) to establish a picocell or femtocell. As shown in FIG. 1A, the base station 114b may have a direct connection to the Internet 110. Thus, the base station 114b may not be required to access the Internet 110 via the CN 106.
[0044] The RAN 104 may be in communication with the CN 106, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d. The data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like. The CN 106 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in FIG. 1A, it will be appreciated that the RAN 104 and/or the CN 106 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104 or a different RAT. For example, in addition to being connected to the RAN 104, which may be utilizing a NR radio technology, the CN 106 may also be in communication with another RAN (not shown) employing a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or WiFi radio technology.
[0045] The CN 106 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112. The PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite. The networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers. For example, the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104 or a different RAT.
[0046] Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links). For example, the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
[0047] FIG. 1B is a system diagram illustrating an example WTRU 102. As shown in FIG. 1B, the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others. It will be appreciated that the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
[0048] The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1 B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
[0049] The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In an embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
[0050] Although the transmit/receive element 122 is depicted in FIG. 1B as a single element, the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
[0051] The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11, for example.
[0052] The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
[0053] The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li- ion), etc.), solar cells, fuel cells, and the like.
[0054] The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
[0055] The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a handsfree headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like. The peripherals 138 may include one or more sensors. The sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, a humidity sensor, and the like.
[0056] The WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and the DL (e.g., for reception)) may be concurrent and/or simultaneous. The full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118). In an embodiment, the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the UL (e.g., for transmission) or the DL (e.g., for reception)) may not be concurrent.
[0057] FIG. 1C is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment. As noted above, the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. The RAN 104 may also be in communication with the CN 106.
[0058] The RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment. The eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In one embodiment, the eNode-Bs 160a, 160b, 160c may implement MIMO technology. Thus, the eNode-B 160a, for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
[0059] Each of the eNode-Bs 160a, 160b, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, and the like. As shown in FIG. 1C, the eNode-Bs 160a, 160b, 160c may communicate with one another over an X2 interface.
[0060] The CN 106 shown in FIG. 1C may include a mobility management entity (MME) 162, a serving gateway (SGW) 164, and a packet data network (PDN) gateway (PGW) 166. While the foregoing elements are depicted as part of the CN 106, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
[0061] The MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node. For example, the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like. The MME 162 may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM and/or WCDMA.
[0062] The SGW 164 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via the S1 interface. The SGW 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c. The SGW 164 may perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when DL data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.
[0063] The SGW 164 may be connected to the PGW 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
[0064] The CN 106 may facilitate communications with other networks. For example, the CN 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices. For example, the CN 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 106 and the PSTN 108. In addition, the CN 106 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
[0065] Although the WTRU is described in FIGS. 1A-1D as a wireless terminal, it is contemplated that in certain representative embodiments such a terminal may use (e.g., temporarily or permanently) wired communication interfaces with the communication network.
[0066] In representative embodiments, the other network 112 may be a WLAN.
[0067] A WLAN in Infrastructure Basic Service Set (BSS) mode may have an Access Point (AP) for the BSS and one or more stations (STAs) associated with the AP. The AP may have access or an interface to a Distribution System (DS) or another type of wired/wireless network that carries traffic into and/or out of the BSS. Traffic to STAs that originates from outside the BSS may arrive through the AP and may be delivered to the STAs. Traffic originating from STAs to destinations outside the BSS may be sent to the AP to be delivered to respective destinations. Traffic between STAs within the BSS may be sent through the AP, for example, where the source STA may send traffic to the AP and the AP may deliver the traffic to the destination STA. The traffic between STAs within a BSS may be considered and/or referred to as peer-to-peer traffic. The peer-to-peer traffic may be sent between (e.g., directly between) the source and destination STAs with a direct link setup (DLS). In certain representative embodiments, the DLS may use an 802.11e DLS or an 802.11z tunneled DLS (TDLS). A WLAN using an Independent BSS (IBSS) mode may not have an AP, and the STAs (e.g., all of the STAs) within or using the IBSS may communicate directly with each other. The IBSS mode of communication may sometimes be referred to herein as an “ad-hoc” mode of communication.
[0068] When using the 802.11ac infrastructure mode of operation or a similar mode of operation, the AP may transmit a beacon on a fixed channel, such as a primary channel. The primary channel may be a fixed width (e.g., 20 MHz wide bandwidth) or a dynamically set width. The primary channel may be the operating channel of the BSS and may be used by the STAs to establish a connection with the AP. In certain representative embodiments, Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) may be implemented, for example, in 802.11 systems. For CSMA/CA, the STAs (e.g., every STA), including the AP, may sense the primary channel. If the primary channel is sensed/detected and/or determined to be busy by a particular STA, the particular STA may back off. One STA (e.g., only one station) may transmit at any given time in a given BSS.
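The CSMA/CA behavior described above may be sketched as a toy contention model (purely illustrative; the function name, the fixed contention window, and the single-round model are assumptions and not part of the 802.11 specification): each STA draws a random backoff, and the STA whose backoff expires first gains the channel while the others defer.

```python
import random

# Toy CSMA/CA sketch: STAs sense a shared primary channel; a STA
# transmits only after its random backoff (drawn from a hypothetical
# contention window "cw") expires first. Only one STA transmits at
# any given time in a given BSS.

def csma_ca_round(stas, cw=8):
    """One contention round: the STA with the smallest backoff wins
    the channel; everyone else freezes and retries later."""
    backoffs = {sta: random.randrange(cw) for sta in stas}
    winner = min(backoffs, key=backoffs.get)
    return winner, backoffs

random.seed(0)
winner, backoffs = csma_ca_round(["STA-1", "STA-2", "AP"])
assert winner in backoffs
assert backoffs[winner] == min(backoffs.values())
```

Real CSMA/CA additionally freezes backoff counters while the channel is busy and doubles the contention window after collisions; this sketch only illustrates the deferral principle.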
[0069] High Throughput (HT) STAs may use a 40 MHz wide channel for communication, for example, via a combination of the primary 20 MHz channel with an adjacent or nonadjacent 20 MHz channel to form a 40 MHz wide channel.
[0070] Very High Throughput (VHT) STAs may support 20 MHz, 40 MHz, 80 MHz, and/or 160 MHz wide channels. The 40 MHz and/or 80 MHz channels may be formed by combining contiguous 20 MHz channels. A 160 MHz channel may be formed by combining 8 contiguous 20 MHz channels, or by combining two non-contiguous 80 MHz channels, which may be referred to as an 80+80 configuration. For the 80+80 configuration, the data, after channel encoding, may be passed through a segment parser that may divide the data into two streams. Inverse Fast Fourier Transform (IFFT) processing and time domain processing may be done on each stream separately. The streams may be mapped onto the two 80 MHz channels, and the data may be transmitted by a transmitting STA. At the receiver of the receiving STA, the above-described operation for the 80+80 configuration may be reversed, and the combined data may be sent to the Medium Access Control (MAC).
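The 80+80 split-and-recombine step above can be sketched as follows. This is an illustrative round-robin parser only: the actual 802.11ac segment parser operates on blocks of coded bits with sizes defined by the standard, so the bit-by-bit alternation and the function names here are assumptions.

```python
# Illustrative sketch of the 80+80 MHz chain: after channel encoding,
# a segment parser divides the coded bits into two streams, one per
# 80 MHz frequency segment. A simple round-robin split stands in for
# the standard's block-based parser (an assumption, not the spec).

def segment_parse(coded_bits):
    """Split coded bits alternately onto two 80 MHz segments."""
    seg0 = coded_bits[0::2]
    seg1 = coded_bits[1::2]
    return seg0, seg1

def segment_deparse(seg0, seg1):
    """Receiver side: re-interleave the two segments, reversing the
    transmit-side split so combined data can be sent to the MAC."""
    out = []
    for a, b in zip(seg0, seg1):
        out.extend([a, b])
    out.extend(seg0[len(seg1):])  # odd trailing bit, if any
    return out

bits = [1, 0, 1, 1, 0, 0, 1, 0]
s0, s1 = segment_parse(bits)
assert segment_deparse(s0, s1) == bits
```

Each of `s0` and `s1` would then go through its own IFFT and time-domain processing before being mapped onto one of the two 80 MHz channels.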
[0071] Sub-1 GHz modes of operation are supported by 802.11af and 802.11ah. The channel operating bandwidths and carriers are reduced in 802.11af and 802.11ah relative to those used in 802.11n and 802.11ac. 802.11af supports 5 MHz, 10 MHz, and 20 MHz bandwidths in the TV White Space (TVWS) spectrum, and 802.11ah supports 1 MHz, 2 MHz, 4 MHz, 8 MHz, and 16 MHz bandwidths using non-TVWS spectrum. According to a representative embodiment, 802.11ah may support Meter Type Control/Machine-Type Communications (MTC), such as MTC devices in a macro coverage area. MTC devices may have certain capabilities, for example, limited capabilities including support for (e.g., only support for) certain and/or limited bandwidths. The MTC devices may include a battery with a battery life above a threshold (e.g., to maintain a very long battery life).
[0072] WLAN systems which may support multiple channels and channel bandwidths, such as 802.11n, 802.11ac, 802.11af, and 802.11ah, include a channel which may be designated as the primary channel. The primary channel may have a bandwidth equal to the largest common operating bandwidth supported by all STAs in the BSS. The bandwidth of the primary channel may be set and/or limited by the STA, from among all STAs operating in the BSS, that supports the smallest bandwidth operating mode. In the example of 802.11ah, the primary channel may be 1 MHz wide for STAs (e.g., MTC-type devices) that support (e.g., only support) a 1 MHz mode, even if the AP and other STAs in the BSS support 2 MHz, 4 MHz, 8 MHz, 16 MHz, and/or other channel bandwidth operating modes. Carrier sensing and/or Network Allocation Vector (NAV) settings may depend on the status of the primary channel. If the primary channel is busy, for example, due to a STA (which supports only a 1 MHz operating mode) transmitting to the AP, all available frequency bands may be considered busy even though a majority of the available frequency bands remains idle.
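The rule above — the primary channel bandwidth is limited by the STA with the smallest bandwidth operating mode — reduces to taking the minimum over each STA's maximum supported bandwidth, as in this sketch (the helper name is hypothetical):

```python
# Sketch: the primary channel bandwidth equals the largest common
# operating bandwidth supported by all STAs in the BSS, i.e., the
# minimum over all STAs of each STA's maximum supported bandwidth.

def primary_channel_mhz(sta_max_bandwidths_mhz):
    """Largest common operating bandwidth supported by all STAs."""
    return min(sta_max_bandwidths_mhz)

# 802.11ah example from the text: an MTC-type STA that only supports
# the 1 MHz mode pins the primary channel at 1 MHz even if the AP and
# other STAs support 2/4/8/16 MHz modes.
assert primary_channel_mhz([16, 8, 2, 1]) == 1
```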
[0073] In the United States, the available frequency bands, which may be used by 802.11 ah, are from 902 MHz to 928 MHz. In Korea, the available frequency bands are from 917.5 MHz to 923.5 MHz. In Japan, the available frequency bands are from 916.5 MHz to 927.5 MHz. The total bandwidth available for 802.11 ah is 6 MHz to 26 MHz depending on the country code.
[0074] FIG. 1D is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment. As noted above, the RAN 104 may employ an NR radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. The RAN 104 may also be in communication with the CN 106.
[0075] The RAN 104 may include gNBs 180a, 180b, 180c, though it will be appreciated that the RAN 104 may include any number of gNBs while remaining consistent with an embodiment. The gNBs 180a, 180b, 180c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In one embodiment, the gNBs 180a, 180b, 180c may implement MIMO technology. For example, gNBs 180a, 180b may utilize beamforming to transmit signals to and/or receive signals from the WTRUs 102a, 102b, 102c. Thus, the gNB 180a, for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a. In an embodiment, the gNBs 180a, 180b, 180c may implement carrier aggregation technology. For example, the gNB 180a may transmit multiple component carriers to the WTRU 102a (not shown). A subset of these component carriers may be on unlicensed spectrum while the remaining component carriers may be on licensed spectrum. In an embodiment, the gNBs 180a, 180b, 180c may implement Coordinated Multi-Point (CoMP) technology. For example, WTRU 102a may receive coordinated transmissions from gNB 180a and gNB 180b (and/or gNB 180c).
[0076] The WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using transmissions associated with a scalable numerology. For example, the OFDM symbol spacing and/or OFDM subcarrier spacing may vary for different transmissions, different cells, and/or different portions of the wireless transmission spectrum. The WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using subframe or transmission time intervals (TTIs) of various or scalable lengths (e.g., containing a varying number of OFDM symbols and/or lasting varying lengths of absolute time).
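As a numeric aside, the scalable numerology described above follows a simple rule in 5G NR: subcarrier spacing is 15 kHz scaled by a power of two (the numerology index), and slot duration shrinks by the same factor. A small sketch (the function name is illustrative, not from any specification):

```python
def numerology(mu: int) -> dict:
    """Return subcarrier spacing and slot timing for 5G NR numerology index mu.

    Per 3GPP TS 38.211, subcarrier spacing scales as 15 kHz * 2^mu, and the
    slot duration shrinks proportionally (1 ms / 2^mu for normal cyclic prefix).
    """
    scs_khz = 15 * (2 ** mu)
    slot_ms = 1.0 / (2 ** mu)
    return {"scs_khz": scs_khz, "slot_ms": slot_ms, "slots_per_subframe": 2 ** mu}

# mu = 0 matches LTE-like 15 kHz spacing; mu = 3 gives 120 kHz, used at mmWave
print(numerology(0))   # {'scs_khz': 15, 'slot_ms': 1.0, 'slots_per_subframe': 1}
print(numerology(3))   # {'scs_khz': 120, 'slot_ms': 0.125, 'slots_per_subframe': 8}
```

This is why "TTIs of various or scalable lengths" arise naturally: a slot at a higher numerology simply occupies less absolute time.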
[0077] The gNBs 180a, 180b, 180c may be configured to communicate with the WTRUs 102a, 102b, 102c in a standalone configuration and/or a non-standalone configuration. In the standalone configuration, WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c without also accessing other RANs (e.g., such as eNode-Bs 160a, 160b, 160c). In the standalone configuration, WTRUs 102a, 102b, 102c may utilize one or more of gNBs 180a, 180b, 180c as a mobility anchor point. In the standalone configuration, WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using signals in an unlicensed band. In a non-standalone configuration, WTRUs 102a, 102b, 102c may communicate with/connect to gNBs 180a, 180b, 180c while also communicating with/connecting to another RAN such as eNode-Bs 160a, 160b, 160c. For example, WTRUs 102a, 102b, 102c may implement DC principles to communicate with one or more gNBs 180a, 180b, 180c and one or more eNode-Bs 160a, 160b, 160c substantially simultaneously. In the non-standalone configuration, eNode-Bs 160a, 160b, 160c may serve as a mobility anchor for WTRUs 102a, 102b, 102c and gNBs 180a, 180b, 180c may provide additional coverage and/or throughput for servicing WTRUs 102a, 102b, 102c.
[0078] Each of the gNBs 180a, 180b, 180c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, support of network slicing, DC, interworking between NR and E-UTRA, routing of user plane data towards User Plane Function (UPF) 184a, 184b, routing of control plane information towards Access and Mobility Management Function (AMF) 182a, 182b and the like. As shown in FIG. 1D, the gNBs 180a, 180b, 180c may communicate with one another over an Xn interface.
[0079] The CN 106 shown in FIG. 1D may include at least one AMF 182a, 182b, at least one UPF 184a, 184b, at least one Session Management Function (SMF) 183a, 183b, and possibly a Data Network (DN) 185a, 185b. While the foregoing elements are depicted as part of the CN 106, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
[0080] The AMF 182a, 182b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 104 via an N2 interface and may serve as a control node. For example, the AMF 182a, 182b may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, support for network slicing (e.g., handling of different protocol data unit (PDU) sessions with different requirements), selecting a particular SMF 183a, 183b, management of the registration area, termination of non-access stratum (NAS) signaling, mobility management, and the like. Network slicing may be used by the AMF 182a, 182b in order to customize CN support for WTRUs 102a, 102b, 102c based on the types of services being utilized by the WTRUs 102a, 102b, 102c. For example, different network slices may be established for different use cases such as services relying on ultra-reliable low latency communication (URLLC) access, services relying on enhanced mobile broadband (eMBB) access, services for MTC access, and the like. The AMF 182a, 182b may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as LTE, LTE-A, LTE-A Pro, and/or non-3GPP access technologies such as WiFi.
[0081] The SMF 183a, 183b may be connected to an AMF 182a, 182b in the CN 106 via an N11 interface. The SMF 183a, 183b may also be connected to a UPF 184a, 184b in the CN 106 via an N4 interface. The SMF 183a, 183b may select and control the UPF 184a, 184b and configure the routing of traffic through the UPF 184a, 184b. The SMF 183a, 183b may perform other functions, such as managing and allocating UE IP addresses, managing PDU sessions, controlling policy enforcement and QoS, providing DL data notifications, and the like. A PDU session type may be IP-based, non-IP based, Ethernet-based, and the like.
[0082] The UPF 184a, 184b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 104 via an N3 interface, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices. The UPF 184a, 184b may perform other functions, such as routing and forwarding packets, enforcing user plane policies, supporting multi-homed PDU sessions, handling user plane QoS, buffering DL packets, providing mobility anchoring, and the like.
[0083] The CN 106 may facilitate communications with other networks. For example, the CN 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 106 and the PSTN 108. In addition, the CN 106 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers. In one embodiment, the WTRUs 102a, 102b, 102c may be connected to a local DN 185a, 185b through the UPF 184a, 184b via the N3 interface to the UPF 184a, 184b and an N6 interface between the UPF 184a, 184b and the DN 185a, 185b.
[0084] In view of FIGs. 1A-1D, and the corresponding description of FIGs. 1A-1D, one or more, or all, of the functions described herein with regard to one or more of: WTRU 102a-d, Base Station 114a-b, eNode-B 160a-c, MME 162, SGW 164, PGW 166, gNB 180a-c, AMF 182a-b, UPF 184a-b, SMF 183a-b, DN 185a-b, and/or any other device(s) described herein, may be performed by one or more emulation devices (not shown). The emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein. For example, the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
[0085] The emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment. For example, the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network. The one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network. The emulation device may be directly coupled to another device for purposes of testing and/or performing testing using over-the-air wireless communications.
The one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network. For example, the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components. The one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
[0086] As mentioned previously, there is significant research and development investigating enhancements of QoS mechanisms considering the characteristics of extended reality (XR) and other media services. It is also recognized that, in order to help applications adapt to network status and provide better quality of experience (QoE), the exposure of network information to applications should be investigated and enhanced, especially for media services that have large traffic bursts. XR/media traffic has the characteristics of high throughput, low latency, and high reliability requirements, and the UE battery level may impact the user's experience, since high throughput requires high power consumption on the terminal side. Considering these requirements, which are expected to become more stringent going forward, along with the limited radio resources and end-to-end QoS policy control from a system perspective, further system optimizations and enhancements beyond ongoing 5G enhancements are needed in support of further trade-offs among throughput, latency, reliability, and device battery life.
[0087] Some advanced XR or media services may include more modalities besides video and audio streams, such as information from different sensors and tactile or emotion data for a more immersive experience, e.g., haptic data or sensor data. To support early requirements of such tactile and multi-modality communication services, efforts are already investigating how to address the service requirements of different types of traffic streams with coordinated QoS selection and packet processing, guaranteed latency and reliability, and time synchronization of these parallel information streams, in order to ensure the best service experience. The service requirements are only expected to become more extreme.
[0088] Referring to multi-modality communication services, these services involve multi-modal data, a term used to describe the input data from different kinds of devices/sensors, or the output data to different kinds of destinations (e.g., one or more WTRUs), required for the same task or application. Multi-modal data consists of more than one single-modal data, and there is strong dependency among the single-modal data. Single-modal data can be seen as one type of data.
[0089] Referring to FIG. 2, an example of a multi-modal interactive system 200 is depicted. As shown in this figure, multi-modal outputs 212 are generated based on the inputs 204 from multiple sources. In a multi-modal interactive system, a modality is a type or representation of information in a specific interactive system. Multi-modal interaction is the process during which information of multiple modalities is exchanged. Modal types consist of motion, sentiment, gesture, etc. Modal representations consist of video, audio, tactition (vibrations or other movements which provide haptic or tactile feelings to a person or a machine), etc. Examples of multi-modality communication services may include immersive multi-modal virtual reality (VR) applications, remote control robots, immersive VR games, skillset sharing for cooperative perception and maneuvering of robots, live event selective immersion, haptic feedback for a person exclusion zone in a dangerous remote environment, etc.
[0090] A video traffic stream (denoted in FIG. 2 for simplicity as video stream) is typically structured as a Group of Pictures (GOP), where each picture constitutes a video frame. FIG. 3 is an example illustration comparing an example of video game group-of-pictures (GOP) frame viewing order 302 versus an example frame transmission order 304. The frames are of different types, and the different frame types serve varying purposes and are of different importance for the video application rendering. An "I" frame is a frame that is compressed solely based on the information contained in the frame; no reference is made to any of the other video frames before or after it. The "I" stands for "intra" coded. A "P" frame is a frame that has been compressed using the data contained in the frame itself and data from the closest preceding I or P frame. The "P" stands for "predicted." A "B" frame is a frame that has been compressed using data from the closest preceding I or P frame and the closest following I or P frame. The "B" stands for "bidirectional," meaning that the frame data can depend on frames that occur before and after it in the video sequence. A group of pictures, or GOP, is a series of frames consisting of a single I frame and zero or more P and B frames. A GOP always begins with an I frame and ends with the last frame before the next subsequent I frame. All of the frames in the GOP depend (directly or indirectly) on the data in the initial I frame. Open GOP and closed GOP are terms that refer to the relationship between one GOP and another. A closed GOP is self-contained; that is, none of the frames in the GOP refer to or are based on any of the frames outside the GOP. An open GOP uses data from the I frame of the following GOP for calculating some of the B frames in the GOP.
[0091] It should be noted that the second I frame (shown in gray in FIG. 3 at the right side of frame viewing order 302 and approximately one-third from the right in frame transmission order 304) is the first frame of the next GOP. The B frames 11 and 12 are based on this I frame because this is an open GOP structure.
[0092] As mentioned previously, packets of the same video stream but different frame types (I/P/B frames), or even different positions in the GOP, may make different contributions to user experience. Referring to FIG. 4, for example, an error 402 on the P frame 4 will cause induced errors 404 on B frames 2, 3, 6 and 7. The error 402 can also propagate as propagated errors 406A, 406B to P frame 7 and P frame 10 respectively, causing further induced errors 408A, 408B to B frames 8, 9, 11 and 12.
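The error-propagation pattern just described can be sketched as a graph walk: any frame that transitively depends on the errored frame is also lost. The GOP structure below is a hypothetical 12-frame example loosely modeled on FIG. 4, not the exact figure:

```python
from collections import defaultdict, deque

def frames_affected_by(error_frame, depends_on):
    """Return the set of frames rendered undecodable by an error on one frame.

    depends_on maps each frame to the frames it is predicted from; any frame
    that (transitively) depends on the errored frame is also lost.
    """
    # Invert the dependency map: anchor frame -> frames predicted from it
    dependents = defaultdict(list)
    for frame, anchors in depends_on.items():
        for a in anchors:
            dependents[a].append(frame)

    lost, queue = {error_frame}, deque([error_frame])
    while queue:
        for child in dependents[queue.popleft()]:
            if child not in lost:
                lost.add(child)
                queue.append(child)
    return lost

# Hypothetical 12-frame GOP: I1, then P anchors every third position, B frames
# predicted from the surrounding I/P anchors (illustrative, not FIG. 4 exactly).
gop = {
    "P4": ["I1"], "P7": ["P4"], "P10": ["P7"],
    "B2": ["I1", "P4"], "B3": ["I1", "P4"],
    "B5": ["P4", "P7"], "B6": ["P4", "P7"],
    "B8": ["P7", "P10"], "B9": ["P7", "P10"],
    "B11": ["P10"], "B12": ["P10"],
}
print(sorted(frames_affected_by("P4", gop)))
```

As in the figure, an error on a mid-GOP P frame wipes out every later P frame and all B frames that lean on them, while the I frame and any frames decodable from it alone survive.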
[0093] A video compression algorithm may encode a video stream into multiple video layers, which enables a progressive refinement of the reconstructed video quality at the receiver. This is motivated by the fact that video distribution needs to support scenarios with heterogeneous devices, unreliable networks, and bandwidth fluctuations. Generally, the most important layer is referred to as the video base layer (BL) and the less important layers are termed video enhancement layers (ELs), which rely on the video BL. Furthermore, an EL may be further relied upon by less important ELs. When the video BL or a video EL is lost or corrupted during its transmission, the dependent layers cannot be utilized by the decoder and must be dropped.
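The drop rule for dependent layers can be sketched in a few lines; the layer names and dependency map below are illustrative assumptions, not part of any codec specification:

```python
def usable_layers(received, requires):
    """Return the layers the decoder can actually use.

    requires maps each enhancement layer to the layer it builds on; a layer is
    usable only if it was received and its whole dependency chain was too.
    """
    usable = set()
    for layer in received:
        chain, ok = layer, True
        while chain in requires:          # walk down toward the base layer
            chain = requires[chain]
            if chain not in received:
                ok = False                # a lower layer is lost -> drop this one
                break
        if ok:
            usable.add(layer)
    return usable

# Hypothetical three-layer stream: EL2 refines EL1, which refines the BL.
deps = {"EL1": "BL", "EL2": "EL1"}
print(usable_layers({"BL", "EL2"}, deps))   # EL2 dropped (EL1 missing) -> {'BL'}
print(usable_layers({"BL", "EL1", "EL2"}, deps))
```

Losing EL1 makes EL2 useless even though it arrived intact, which is exactly why the embodiments argue for stronger error protection on the layers others depend on.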
[0094] A number of video coding techniques have been investigated and standardized. FIG. 5 shows examples of a multi-layer video stream over the length of a GOP. As shown, a layered video encoder 502 may provide a multiplexed stream via multiplexer 504 comprising sub-streams or layers 506A-506D, which may be at different bit rates or throughputs. Each stream may be decoded by a corresponding layered video decoder 508A-508D for output to various devices. In many implementations, a single video decoder may provide for decoding of different channels, and accordingly, multiple decoders 508 may be replaced with a single decoder capable of receiving and decoding multiple sub-streams 506.
[0095] In FIG. 6, the layers of the video stream are referred to as partitions A, B and C. In this example, "B->A" indicates that partition B depends on partition A, while "B->I" indicates that frame B 604A is predicted from frame I 602. Similarly, frame P 606A may be predicted from frames B 604A, 604B, and frame P 606B may be predicted from frame B 604A.
[0096] In FIG. 7, the dependency of the layers in a Scalable Video Coding (SVC) stream is exemplified. The video layers L0, L1, and L2 represent the video BL, the video spatial EL and the video temporal EL, respectively. In this example, "=>" indicates "depends on," while "->" indicates "is predicted from." As shown, frame B 704A is predicted from frame I 702 and frame B 708A; frame B 704B is predicted from frame B 708A and frame P 706A, etc.
[0097] In FIG. 8, the dependencies in multi-view video coding (MVC) from the Joint Video Team (JVT) standards are exemplified. In this example, "=>" indicates "depends on," while "->" indicates "is predicted from." As shown, frames of a first view 710A may serve as bases for prediction of other frames in the view (e.g., frame B 704A predicted from frame I 702) as well as frames in a second view 710B (e.g., frame P 706A predicted from frame I 702).

[0098] In certain embodiments, video data may be interpreted as multi-modal data that consists of more than one single-modal data. Each single-modal data can be interpreted as a video frame, or as a video layer within the video frame, depending on whether the video frame is made of more than one video layer.
[0099] Handling of a PDU set within a bearer or a QoS Flow according to certain example embodiments will now be described. In one example, application data needs to be transported over a transport network to the cellular system. This requires the application data to be packetized. Examples of such packetization may use real-time transport protocol (RTP) packets or RTP PDUs. An example of a video stream packetized into RTP PDU packets is depicted in FIG. 9, with I frame 802, B frames 804A, 804B, and P frame 806.
[0100] As previously mentioned, packets within an application data frame have dependency on each other, since the application needs all of these packets for decoding the frame. Hence, one packet loss will make other correlated packets useless even if they are successfully transmitted. For example, XR applications impose requirements in terms of Media Units (Application Data Units), rather than in terms of single packets/PDUs. Therefore, in one embodiment, a PDU Set may be defined, composed of one or more PDUs carrying the payload of one unit of information (media data unit) generated at the application level (e.g., a video frame, video slice, or video layer for video XRM services, or single-modal data within multi-modal data). In some implementations, all PDUs in a PDU Set are needed by the application layer to use the corresponding unit of information. In other implementations, the application layer can still recover parts or all of the information unit when some PDUs are missing.
[0101] Within a QoS flow or a bearer, packets of a packetized media data unit (PDU) may have different levels of importance, as illustrated in FIG. 10. It should be noted that a QoS Flow is the finest granularity of QoS differentiation in the PDU Session in the 5G core network. Similarly, a bearer is the finest granularity of QoS differentiation for bearer-level QoS control in the radio access network (RAN), or in generations of the core network earlier than the 5G core network. One or more QoS flows may be mapped to a bearer in the RAN. In FIG. 10, the bearer corresponds to the AN network resources illustrated by the tunnels between the Access Network (AN) 1004 and the WTRU 1002 (an example of which is denoted as "UE"). User Plane Function (UPF) 1006 may provide for classification and QoS marking of packets.
[0102] For example, the XR Media (XRM) service PDUs have dependency on each other. The PDUs (e.g., an I frame, a base video layer, a first single-modal data of a multi-modal data) on which other PDUs depend (e.g., a P frame, a B frame, a video enhancement layer, a second single-modal data of a multi-modal data) are expected to be more important and should be transmitted first, or be provided with different levels of packet scheduling and error resiliency treatment. For example, in some video XRM services, P frames and B frames are as important as the I frame to construct fluent video; dropping those P frames and B frames causes jitter to the QoE, which is no better than giving up the whole service. In some other video XRM services, P frames and B frames are used to enhance the definition, e.g., from 720p to 1080p; dropping those P frames and B frames makes sense to keep the service alive when the network resources cannot transmit all of the service data.

[0103] The PDUs with the same importance level within a QoS flow or bearer can be treated as a PDU Set (e.g., a video frame, a video layer, or single-modal data within multi-modal data). In one embodiment, XRM service data can be categorized into a list of consecutive PDU Sets. Except for the importance level, the QoS requirements for the XRM service flows are consistent. Hence, XRM service flows can be mapped into a QoS flow, and the QoS flow should include a list of PDU Sets with different importance levels. A PDU Set may include a list of PDUs, and in one example embodiment, each PDU Set may have the following factors:
-The sequence number of the PDU Set;
-The Importance Level of the PDU Set;
-The boundary information of the PDU Set; for example, in one embodiment, (i) the Start Mark of the PDU Set, which is only valid for the first PDU of the PDU Set (as shown in the example figure, unless the next PDU is the first PDU of another PDU Set, the network cannot know whether the current PDU is the last PDU of the current PDU Set; to avoid always waiting for the next PDU to determine whether the currently received PDU is the last PDU of the PDU Set, it is proposed not to mark the last PDU of the PDU Set, and instead the first PDU of the PDU Set is marked); and (ii) the sequence number of the PDU within the PDU Set, which may be used to allow support for out-of-order detection and reordering; and
-Dependent PDU Set's sequence number. (If the current PDU Set 2 is dependent on PDU Set 1, PDU Set 2 should carry PDU Set 1's sequence number.)
[0104] An example of PDU Sets 1102A, 1102B according to one embodiment, is depicted in FIG. 11 and an example illustration of a PDU set header is shown in the Table 1 below:
[Table 1, rendered as an image in the original publication, illustrates an example PDU Set header comprising the fields described above: the PDU Set sequence number, the importance level, the start mark, the PDU sequence number within the PDU Set, and the dependent PDU Set's sequence number.]
Table 1
[0105] The above fields may be of fixed or variable length in various implementations, and may appear in any order. In some implementations, one or more fields may be absent.
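A possible in-memory representation of such a header is sketched below; the field widths and byte layout are illustrative assumptions only, since, as noted above, the fields may be of fixed or variable length, appear in any order, or be absent:

```python
import struct
from dataclasses import dataclass

@dataclass
class PduSetHeader:
    """Sketch of the PDU Set header fields described above.

    Field widths are illustrative assumptions, not a standardized layout:
    one byte each for importance and the start-mark flag, two bytes for
    each sequence number, network byte order throughout.
    """
    set_seq: int          # sequence number of the PDU Set
    importance: int       # importance level of the PDU Set
    start_mark: bool      # True only on the first PDU of the set
    pdu_seq: int          # PDU sequence number within the set (reordering)
    dep_set_seq: int      # sequence number of the PDU Set this set depends on

    _FMT = "!HBBHH"       # 8 bytes total under these assumed widths

    def pack(self) -> bytes:
        return struct.pack(self._FMT, self.set_seq, self.importance,
                           int(self.start_mark), self.pdu_seq, self.dep_set_seq)

    @classmethod
    def unpack(cls, data: bytes) -> "PduSetHeader":
        s, imp, mark, pseq, dep = struct.unpack(cls._FMT, data[:8])
        return cls(s, imp, bool(mark), pseq, dep)

# PDU Set 2 depends on PDU Set 1; this is its first PDU
hdr = PduSetHeader(set_seq=2, importance=1, start_mark=True, pdu_seq=0, dep_set_seq=1)
assert PduSetHeader.unpack(hdr.pack()) == hdr
```

The pack/unpack round trip shows the dependency carried in-band: a receiver reading `dep_set_seq=1` knows PDU Set 2 is useless if PDU Set 1 was lost.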
[0106] The descriptions of the various embodiments herein may include the following: the term "layer" is used in different contexts in the embodiments described herein to mean very different things depending on the context of its use. At least the following different contexts and distinctions in the use of the term layer are present in the description of the embodiments and are articulated as follows.
[0107] The term video layer, as used herein, refers generically to the PDU sets previously described, where a PDU set (and the PDUs within the PDU set) may be provided differentiated transmission treatment or reception treatment in the cellular system access stratum (AS) or non-access stratum (NAS), based on the PDU set's relative importance as described in FIG. 10 and Table 1, above. Moreover, the functions of telecommunication systems, and particularly cellular communication systems, are typically structured in distinct groups of related functions traditionally referred to as "protocol layers." Examples of 5G protocol stack layers for the control plane and user plane are illustrated in FIG. 12 and FIG. 13, respectively. As shown in FIG. 12, the control plane may include the following protocol layers: PHY, MAC, RLC, PDCP, RRC and NAS. As shown in FIG. 13, the user plane may include the following protocol layers: PHY, MAC, RLC, PDCP, SDAP. The access stratum may be comprised of the PHY, MAC, RLC, PDCP, and SDAP protocol layers. Therefore, the terms "video layers" and "protocol layers" are used in relation to the exemplary embodiments to mean two very different concepts. From the perspective of a given protocol stack layer, a protocol stack upper layer means the one or more protocol stack layers above the given protocol layer, and the protocol stack lower layer means the one or more protocol stack layers below the mentioned protocol layer. For example, from the PHY protocol stack layer, a protocol stack upper layer may be the RRC layer, while from the RRC layer perspective, a protocol stack upper layer may be the NAS layer or the application layer. Similarly, for example, from the SDAP perspective, a protocol stack upper layer may be the network Internet Protocol (IP) layer, the transport RTP protocol stack layer, or the application layer, while the protocol stack lower layer may be the PDCP layer. As shown in FIGs. 12 and 13, different devices or network nodes or functions (e.g.,
UE 1202, gNB 1204, AMF 1206; and UE 1302 and gNB 1304) may provide functionality at different layers of the protocol stack. For example, an AMF 1206 may provide NAS functionality to a UE 1202 that is not provided by a gNB 1204; such communications may be provided by a lower layer of the network stack (e.g., RRC protocol, PDCP protocol, etc.) and the gNB may be agnostic to such communications.
[0108] Except for the term RTP PDU, as previously referenced, the various embodiments contemplate a PDU as an access stratum protocol layer PDU for the purpose of differentiated PDU set transmission treatment or reception treatment. It should be noted that, for all practical purposes, an RTP PDU can be segmented into access stratum protocol layer PDUs or aggregated into an access stratum protocol layer PDU.
[0109] Multiple input multiple output (MIMO) Layers or MIMO Spatial Layers: MIMO Layers are the independent data streams that can be transmitted between a base station and one or more users simultaneously. Single-user MIMO (SU-MIMO) is the ability to transmit one or multiple data streams, i.e., MIMO layers, from one transmitting array to a single user. The number of layers that can be supported, called the rank, depends on the radio channel. In multi-user MIMO (MU-MIMO), the base station simultaneously sends different MIMO layers in separate beams to different users using the same time and frequency resource, thereby increasing the network capacity.
[0110] As previously mentioned, for XR and multi-modal traffic, regardless of which codec is used, some common characteristics are present, which can be useful for better transmission control and efficiency. Media application attributes which may take advantage of this potential might include information such as the relative importance of a PDU set within the PDU sets derived from the packetization of a media data stream, the scheduling deadline of PDUs within a PDU set, and content delivery criteria for PDUs within a PDU set such as "all or nothing", "good until first loss", or "forward error correction (FEC) with either static or variable code rate". The content delivery criteria may help define whether to deliver or discard a PDU in a PDU set after it misses its deadline, or after the content criteria of its associated PDU set can no longer be fulfilled, or after the content criteria of its associated PDU set have already been fulfilled.
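The delivery decision implied by these criteria can be sketched as follows; the dictionary fields and policy handling are illustrative assumptions rather than standardized structures (only the policy names come from the text above):

```python
def should_deliver(pdu, pdu_set, criterion, now):
    """Decide whether to deliver or discard a PDU under a delivery criterion.

    pdu carries its own scheduling deadline and sequence number; pdu_set holds
    the loss state of the whole set. Both shapes are assumptions of this sketch.
    """
    if now > pdu["deadline"]:
        return False                          # missed its scheduling deadline
    if criterion == "all or nothing":
        # the PDU is useless unless every PDU of the set can still arrive
        return not pdu_set["any_lost"]
    if criterion == "good until first loss":
        # only the in-order prefix before the first gap remains useful
        return pdu["seq"] < pdu_set["first_loss_seq"]
    return True                               # e.g., FEC can absorb some losses

pdu_set = {"any_lost": True, "first_loss_seq": 3}
print(should_deliver({"seq": 1, "deadline": 10}, pdu_set, "good until first loss", now=5))  # True
print(should_deliver({"seq": 1, "deadline": 10}, pdu_set, "all or nothing", now=5))         # False
```

The same PDU is worth delivering under one criterion and worth discarding under another, which is the point of signaling the criterion to the network.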
[0111] Various embodiments described herein may enable solutions to provide differentiated transmission treatment or reception treatment to PDU sets and their corresponding PDUs, considering the relative importance or priority within a QoS flow or bearer of the said PDU sets, as well as their corresponding PDUs, as per the QoS framework for media PDU classification illustrated in FIG. 10. Specifically, certain embodiments may enable the application of different modulation and coding schemes (i.e., modulation orders and coding rates) in support of differentiated transmission treatment or reception treatment of PDU sets and their corresponding PDUs, with the assumption that their relative importance or priority is visible to the physical layer.

[0112] For instance, video base layers (video BLs) should be provided with more robust error protection than video enhancement layers (video ELs) through the application of a less aggressive MCS. Also, more important video ELs should be provided with more robust error protection than less important video ELs. As previously mentioned, the term video layer is used generically herein, in reference to a PDU or a PDU set comprising PDUs of the same importance, as defined herein. In certain embodiments, a video base layer is a PDU or a PDU set with the importance of a video base layer. Further, in the present embodiments, a video enhancement layer is a PDU or PDU set with the importance of the corresponding video enhancement layer.
While the term video layer is generically used throughout the description in describing PHY protocol layer processing in support of differentiated transmission treatment or reception treatment, it should be understood that the embodiments described herein broadly apply to any indicators by which the RAN protocol stack PDUs can be differentiated on the basis of which video layer they represent, so they can be processed according to the importance or priority level of the video layer to which they correspond. In some implementations, a first video EL may be higher priority or more important than a second video EL if the second video EL depends upon the first video EL (e.g., uses frames from the first video EL for prediction or decoding). Accordingly, a layer or frame may be more important and merit a higher QoS setting if it is used for decoding or predicting other frames or layers.
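One way to realize this importance-ordered protection is a simple lookup from importance level to an MCS; the table values below are hypothetical, chosen only to show the monotonic robustness ordering (BL most protected, deeper ELs less protected):

```python
def select_mcs(importance, table=None):
    """Pick a modulation order and code rate from a layer's importance level.

    A hypothetical mapping: importance 0 (video BL) gets the most robust MCS;
    higher levels (less important ELs) get progressively more aggressive ones.
    """
    table = table or [
        (2, 0.33),   # QPSK,  low rate  -> video base layer
        (4, 0.50),   # 16QAM, mid rate  -> first enhancement layer
        (6, 0.75),   # 64QAM, high rate -> least important enhancement layer
    ]
    # Clamp so unexpectedly deep layer hierarchies reuse the last (least robust) entry
    bits_per_symbol, code_rate = table[min(importance, len(table) - 1)]
    return bits_per_symbol, code_rate

assert select_mcs(0) == (2, 0.33)   # BL: most robust protection
assert select_mcs(5) == (6, 0.75)   # deep EL clamps to the least robust entry
```

Spectral efficiency per entry is simply `bits_per_symbol * code_rate`, which makes the protection/throughput trade-off across layers explicit.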
[0113] In one example embodiment, a WTRU is enabled to report its capability to enable unequal error protection (UEP) for video. For example, WTRU capabilities may be reported to the base station (BS) to enable highly reliable video transmissions. In one embodiment, the WTRU reports various important capabilities to the BS to enable features of the embodiments, such as supported modulation order for DL and UL, max BW, subcarrier spacing, and others. In one embodiment, the UE may report to the BS one or more capabilities to:
-Differentiate between different video layers at the lower protocol stack layers;
-Enable differential treatment of video layers;
-Enable differential treatment of video frames;
-Enable differential treatment of video frames within a GOP;
-Enable differential treatment of video frames across GOPs;
-Modulate/Demodulate different video layers using different constellation diagrams simultaneously;
-Code/Decode different video layers separately; and/or
-Jointly encode/decode different video layers in the protocol stack upper layers, or in the protocol stack lower layers, or both, for example in support of video layer-aware forward error correction (LA-FEC), also referred to as inter-layer forward error correction (IL-FEC).
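Such a capability report could, purely as a sketch, be assembled as a set of boolean flags packed into a bitmap; the flag names below mirror the bullet list above but are not standardized information elements:

```python
# Hypothetical capability record a WTRU might assemble before reporting; the
# field names mirror the capability list above and are assumptions of this sketch.
UEP_CAPABILITIES = {
    "differentiate_video_layers": True,
    "differential_treatment_layers": True,
    "differential_treatment_frames": True,
    "differential_treatment_within_gop": True,
    "differential_treatment_across_gop": False,
    "simultaneous_multi_constellation": True,   # hierarchical modulation support
    "separate_layer_codec": True,
    "joint_layer_codec_la_fec": False,          # LA-FEC / IL-FEC support
}

def encode_capabilities(caps) -> int:
    """Pack the boolean capability flags into a bitmap.

    Flags are assigned bit positions in sorted key order so that both ends
    of the link derive the same layout without extra signaling.
    """
    bitmap = 0
    for bit, key in enumerate(sorted(caps)):
        if caps[key]:
            bitmap |= 1 << bit
    return bitmap

print(f"capability bitmap: 0b{encode_capabilities(UEP_CAPABILITIES):08b}")
```

The scheduler would then consult the decoded flags when deciding, for instance, whether hierarchical modulation across constellations can be applied to this WTRU at all.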
[0114] The identifiers above may inform the BS what the WTRU can handle and which embodiments may be used for video transmission. Example WTRUs might include not only smart phones, tablets and the like, but also IoT devices for low cost, low power wide area network applications, and mid-tier cost reduced capability (REDCAP) devices, for example for industrial wireless sensor network (IWSN) applications, examples of which include power meters, parking meters, secure monitoring video cameras, connected fire hydrants, connected post boxes, etc. Intended use cases of the embodiments described herein may include applications requiring both uplink and downlink video, or applications requiring only either uplink or downlink video traffic. Furthermore, the ideas described in this disclosure equally apply to any multi-modality traffic which might not include video traffic, but rather two or more traffic streams of different modalities, for example audio, sensor-related traffic (e.g., temperature, humidity, pressure, smell, etc.), or haptic data, for example when touching a surface (e.g., pressure, texture, vibration, temperature, etc.), in support of immersive reality applications, generically denoted here as XR applications. Such traffic might be formatted at different levels of resolution, accuracy or precision (e.g., quantization). Such levels of resolution, accuracy or precision can be equated to layers of video traffic or equivalent terms as described in the embodiments herein. As such, the unequal error protection methods described herein apply to that traffic as well.
[0115] The embodiments described herein may also apply to any RAT, including a cellular RAT, such as a 5G cellular RAT or future 6G cellular RAT, or an 802.11 WLAN (Wi-Fi) RAT that may support these capabilities. Further, while embodiments may be described in terms of the Uu interface with interactions between a WTRU and a base station, these embodiments may equally apply to communication over a sidelink interface, e.g., the PC5 interface, D2D, or any wireless link where advantages may be obtained.
[0116] FIG. 14A shows an overview of a video layer-aware scheduling method and apparatus according to one embodiment. At a high level, the steps may include one or more of the following.
[0117] At 1408, in some embodiments, the WTRU (1402, also referred to as a target UE) signals its capability to a scheduler (e.g., gNB 1404, base station, or other such device or network node), either autonomously, for example based on a trigger from the protocol stack upper layers of the WTRU 1402, or optionally upon request from the scheduler at 1406 (shown in dashed line).
[0118] At 1410, the WTRU 1402 establishes an RRC connection and the associated one or more signaling bearers, as well as one or more data radio bearers, in this example through an RRC reconfiguration procedure. The WTRU 1402 may be configured with a measurement and reporting configuration as part of the RRC reconfiguration procedure at 1412. [0119] At 1414, the WTRU 1402 may report measurements to the scheduler 1404. The measurements may include measurements to support scheduling operation, including transport volume measurements, RRM measurements, and other link-quality measurements such as experienced block error rate, bit error rate, or packet error rate, or other metrics/quantities that measure the deviation/delta between the targeted QoS/QoE and the experienced QoS/QoE. Examples of measurement reports include the buffer status report (BSR), the scheduling request (SR) to request resources for a BSR report, and the power headroom report (PHR). For BSR, PHR, or SR, the WTRU may report these measurements on a per-video-layer basis, so the scheduler has visibility into the uplink scheduling requirements at the WTRU at the level of a video layer or any other granularity of video partition. It should be noted that two or more video layers might be associated with the same bearer or QoS flow. In many implementations, the WTRU may report these measurements at a granularity level that enables the scheduler to have visibility into the scheduling requirement beyond the granularity of QoS treatment differentiation traditionally offered by the existing QoS flow or bearer framework, for either communication direction, i.e., uplink or downlink. Other examples of measurements reported by the WTRU may include RSRP, RSRQ, RSSI, SINR, CSI, etc.
[0120] In some aspects, illustrated in dashed box 1416, at 1418, in some implementations, the WTRU 1402 may receive scheduling downlink control information (DCI) with one or more scheduling parameters, for DL reception with video layer-aware MCS based processing, or for UL transmission with video layer-aware MCS based processing.
[0121] At 1420, the WTRU may perform DL reception with video layer-aware based MCS processing as per the received RRC configuration and the DCI scheduling information, and may process the received data at 1422.
[0122] At 1424, the WTRU may process UL data for transmission with video layer-aware based MCS processing as per the received RRC configuration and the DCI scheduling information, and at 1426, may perform UL transmission of the processed data.
[0123] In some other aspects, illustrated in dashed box 1428, at 1430, the WTRU may receive DCI scheduling uplink transmission but not downlink reception scheduling. Specifically, in some implementations, the WTRU receives a scheduling DCI with one or more scheduling parameters for UL transmission with video layer-aware MCS based processing. At 1432, the WTRU may process UL data for transmission with video layer-aware based MCS processing as per the received RRC configuration and the DCI scheduling information, and at 1434, the WTRU may perform UL transmission of the processed data.
[0124] In some implementations, at 1436, the WTRU provides feedback to the scheduler. The feedback may include additional measurements in support of scheduling information, HARQ feedback, WTRU recommendations for per-video-layer MCS selection for subsequent DL or UL scheduling, the WTRU's recommendation for switching to a single constellation-based method, a separate constellation-based method, or a hybrid constellation-based scheme, or any combination of these or other information or measurements. [0125] Similar to FIG. 14A, FIG. 14B is a sequence diagram between network entities (e.g., UE or WTRU 1402 and gNB, RSU, or scheduling entity 1404) demonstrating an overview of video layer-aware scheduling according to another example embodiment. Similar to 1408, at 1450, a WTRU may send to a base station or other network device an identification of its capabilities for handling video layers, including its ability to differentiate between different video layers belonging to the same QoS flow; its ability to treat different video layers belonging to the same QoS flow differently; and/or its ability to create a hierarchical modulation constellation diagram.
[0126] At 1452, the base station or other network device may send, and the WTRU may receive, a modulation configuration. In some embodiments, the configuration may include one or more configured modulation schemes for layered video coding transmission. In some embodiments, the configuration may include one or more hierarchical modulation configuration parameters. In some embodiments, the configuration may include both one or more configured modulation schemes and one or more hierarchical modulation configuration parameters. The WTRU may receive the configuration information and configure codecs accordingly.
[0127] At 1454, in some embodiments, the WTRU may perform one or more radio-related and/or layer-specific measurements and/or may transmit such measurements or identifiers of channel characteristics to the base station or other network device. The measurements or identifiers may be provided via one or more management reports or quality reports, or via header fields or payloads of other data packets. In some embodiments, the measurements may include one or more of RSSI, RSRP, RSRQ, SINR, or CSI measurements. In some embodiments, the measurements may include one or more layer-specific BSRs. In some embodiments, the measurements may include one or more BSRs indicating the amount of data of each video layer. In some embodiments, the measurements may include one or more channel time variation indicators. In some embodiments, the measurements may include one or more channel frequency selectivity indicators. In some embodiments, the measurements may include any combination of the above measurements and/or any other type and form of measurement.
[0128] At 1456, in some embodiments, in response to the reported measurements, the base station or network device may generate and transmit dynamic scheduling information and/or modulation scheme updates to the WTRU. In some embodiments, the dynamic scheduling information can include one or more resource allocations. In some embodiments, the dynamic scheduling information can include indications of one or more HARQ redundancy versions (RVs). In some embodiments, the dynamic scheduling information can include one or more MCS configurations. In some embodiments, the dynamic scheduling information can include any combination of the above or other such information. In some embodiments, modulation scheme updates can include activating or deactivating specific hierarchical modulation schemes (e.g. bit allocations or joint bits and distance allocations). In some embodiments, modulation scheme updates can include one or more hierarchical modulation configuration parameters. In some embodiments, hierarchical modulation configuration parameters may include one or more hierarchical modulation schemes. In some embodiments, hierarchical modulation configuration parameters may include one or more bounds or parameters for bits allocation for different video layers (e.g. 4 bits to a first base layer, 2 bits to an enhancement layer, 2 bits to a second enhancement layer, etc.). In some embodiments, hierarchical modulation configuration parameters may include a minimum distance between constellation points carrying different base layer/enhancement layer symbols. In some embodiments, modulation scheme updates can include any combination of the above updates or other such configuration information.
[0129] At 1458, in some embodiments, the WTRU may encode the bit streams of different video layers. For example, the WTRU may capture video for transmission (e.g. via a camera of the WTRU or via another device), or may receive a video stream for transmission from an application, from another device, etc. The WTRU may encode the bit stream via any suitable encoding scheme, and may perform any other required processing (e.g. compression, upscaling or downscaling, color adjustments, etc.).
[0130] At 1460, in some embodiments, the WTRU may determine a number of bits and a constellation distance allocation for each video layer. The number of bits and/or constellation distance may be determined in accordance with configuration parameters received from the base station or other network device at 1456. The WTRU may also construct a hierarchical constellation diagram. Although referred to and described in terms of a constellation diagram, in some implementations, the WTRU may construct the hierarchical constellation as a data array, data string, index of constellation points allocated to each layer, or any other suitable type and form of data structure.
[0131] At 1462, in some embodiments, the WTRU may determine or identify a modulation constellation symbol from the hierarchical constellation based on the determined bits allocation and/or distance allocation. The determination may be performed for each video layer in parallel or serially (e.g. iteratively for a base layer, then enhancement layer, etc.) in various implementations.
[0132] At 1464, in some embodiments, the WTRU may multiplex the determined or identified bits of the different layers onto the determined modulation constellation symbol for transmission. As discussed in more detail below, a number of bits allocated to each video layer (e.g. base layer and enhancement layers) may be concatenated together in accordance with the modulation configuration and a constellation symbol determined. The symbol may be transmitted or broadcast to the base station and/or other network devices or WTRUs.
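The bit multiplexing described above can be sketched as follows. This is an illustrative sketch, not the normative procedure: the function name, the 64-QAM parameters, and the bit ordering (BL bits placed in the most significant positions) are assumptions for illustration.

```python
# Illustrative sketch (assumption): multiplexing base-layer (BL) and
# enhancement-layer (EL) bits into a single modulation symbol index,
# with the BL bits occupying the most significant bit positions.

def multiplex_layers(bl_bits, el_bits, mod_order):
    """Concatenate BL bits (MSBs) and EL bits (LSBs) into one symbol index.

    bl_bits/el_bits are lists of 0/1; mod_order is M, the bits per symbol.
    """
    assert len(bl_bits) + len(el_bits) == mod_order
    index = 0
    for b in bl_bits + el_bits:   # BL first -> most significant positions
        index = (index << 1) | b
    return index

# Example: 64-QAM (M=6), 2 BL bits and 4 EL bits per symbol.
symbol_index = multiplex_layers([1, 0], [0, 1, 1, 0], 6)
print(symbol_index)  # 0b100110 = 38
```

The resulting index selects one of the 2^M constellation points, so the BL bits determine the coarse constellation region while the EL bits select the point within it.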
[0133] At 1466, in some embodiments, the WTRU may send feedback information to the base station or other network device. The feedback information may include its bits and distance allocation (e.g. as determined at 1460). The feedback information may be transmitted via a PUCCH transmission in some embodiments. In other embodiments, the feedback information may be multiplexed with the modulated video symbols and transmitted via a PUSCH transmission. In some embodiments, the WTRU may send additional radio-related and/or layer-specific measurements, as at 1454, allowing for dynamic reconfiguration of the modulation scheme and/or scheduling as needed when channel conditions or characteristics change.
[0134] In various embodiments, differential treatment of video traffic may be provided as follows. In one embodiment, the protocol stack physical layer is capable of identifying the data belonging to different video layers and of applying differential treatment to each of the video layers, for example between the video base layer and each of the video enhancement layers, at various PHY processing stages. The protocol stack physical layer may also be capable of simultaneous transmission of different video layers.
[0135] In one example, a video data or bit stream might be partitioned into blocks of data, where each block of data might be characterized by one or more of the following: (1) The video frame that the block of data belongs to; (2) the video layer that the block of data belongs to; and/or (3) the video GOP that the block belongs to.
[0136] It should be noted that while the UEP of the various embodiments are expressed in terms of differentiated treatment of video layers, it may equally apply to differentiated treatment of video data blocks, wherein the data block might be characterized by one or more of the video frames the data block belongs to, the video layer the data block belongs to, or the GOP to which the video data block belongs.
[0137] In various embodiments, the differentiated treatment may be applied to video frames, or a combination of video frames and video layers. For example, embodiments may use differentiated treatments of video layers within a video frame, or video layers across video frames While embodiments are described in terms of one video base layer and one video enhancement layer, the various embodiments may equally apply to the use cases where there is one video base layer and two or more video enhancement layers.
[0138] Embodiments for modulation constellation assignment to video layers will now be described. In certain embodiments, the application of different modulation and coding schemes is one example method of differentiating video layers at the UE, the base station, or any other controlling or scheduling entity.
[0139] Some example modulation constellation assignment schemes are described below, although others might be utilized. Three examples of differentiating video layers based on modulation constellation assignment include: (i) a single root constellation-based scheme; (ii) a separate roots or multi-roots constellation-based scheme; and/or (iii) a hybrid constellation scheme.
[0140] In one example, a root constellation may be defined and configured into the WTRU by its maximum modulation order, which defines the possible modulation constellation points. In a single root constellation scheme, the modulation constellations applied to the various layers of the video are all derived from the same root constellation, for example by being based on the video-layer-specific minimum distance between modulation constellation points and the number of bits within the set of bits allocated to a modulation symbol, as shown in FIG. 15. Furthermore, constellations might be assigned to each of the video enhancement layers in a hierarchical manner. For example, assuming a video has a video base layer BL and video enhancement layers L1 and L2, the modulation constellation of video layer L1 might be derived from the modulation constellation of the video BL, while the modulation constellation of the video layer L2 might be derived from the modulation constellation of the video enhancement layer L1. As used herein, the terms hierarchical modulation and single-constellation scheme/diagram may be used interchangeably.
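One way to picture the hierarchical derivation above is to group the points of a root constellation by bit-label prefixes, with each layer refining the grouping of the layer above it. The following is an illustrative sketch only; the function name and the prefix-grouping representation are assumptions, not the normative procedure.

```python
# Illustrative sketch (assumption): per-layer groupings of a root
# constellation's 2**M point labels. BL resolves the coarsest grouping;
# each enhancement layer refines the grouping of the layer above it.

def derive_layer_groups(mod_order, bits_per_layer):
    """For each layer, group the 2**mod_order root point labels by the
    bit prefix resolved up to and including that layer."""
    assert sum(bits_per_layer) == mod_order
    labels = list(range(2 ** mod_order))
    groups_per_layer = []
    resolved = 0
    for n in bits_per_layer:
        resolved += n
        shift = mod_order - resolved
        groups = {}
        for lbl in labels:
            groups.setdefault(lbl >> shift, []).append(lbl)
        groups_per_layer.append(groups)
    return groups_per_layer

# BL, L1, L2 each carry 2 bits of a 64-point root constellation (M=6).
layers = derive_layer_groups(6, [2, 2, 2])
print([len(g) for g in layers])  # [4, 16, 64]
```

Here BL alone distinguishes 4 regions; BL plus L1 distinguish 16; all three layers together resolve the full 64 points, mirroring the BL, L1, L2 hierarchy in the example above.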
[0141] In another embodiment, referred to as a separate root constellation scheme, the modulation constellations applied to the various layers of the video may be derived from two or more root constellations. For example, in this embodiment, the scheduler may use different constellation sets for different video layers. In yet another embodiment, the different layers of a video might be grouped into subgroups of video layers. The scheduler uses the same constellation for the video layers within the same subgroup of video layers, and different constellations for the video layers of different subgroups of video layers.
[0142] In an embodiment referred to as the hybrid constellation scheme, a combination of the single root constellation scheme and the separate roots constellation scheme may be applied. For example, a first root constellation is assigned to a video base layer, and a second root constellation is assigned to the enhancement layers, wherein a single root constellation scheme is used for modulation constellation assignment to each video enhancement layer, using the second root constellation. In this example, assuming one or more video enhancement layers may be differentiated, the second root constellation is assigned to a first video enhancement layer, and the one or more modulation constellations of the remaining one or more video enhancement layers are derived from the second root constellation in a hierarchical manner following the single root constellation scheme. By way of example, assuming video data is structured into a video base layer BL and video enhancement layers L1, L2 and L3, the first root constellation might be assigned to the video base layer BL, the second root constellation is assigned to the video enhancement layer L1, the modulation constellation of video layer L2 might be derived from the second root constellation, while the modulation constellation of the video layer L3 might be derived from the modulation constellation of the video enhancement layer L2. Various combinations and alternatives could also be applied. Accordingly, in some implementations, the mapping or grouping may follow a hierarchy or form a hierarchical constellation, with packets from a first set mapped to a first constellation subset or root constellation and packets from a second set mapped to a second constellation subset or constellation derived from the root constellation, with the second constellation subset being a child or subset of, or otherwise created or determined based on, the first constellation.
[0143] As used hereafter, the terms hierarchical modulation and single-constellation scheme/diagram may be used interchangeably. Furthermore, the terms single root constellation-based scheme and single constellation-based scheme or simply single constellation scheme will be used interchangeably. Similarly, the terms separate root constellation-based scheme, multi-root constellation-based scheme, separate constellation-based scheme, multi-constellation-based scheme or simply multi-root constellation scheme or separate constellation scheme may be used interchangeably.
[0144] An embodiment for UEP-Based PHY Operation using Hierarchical Modulation-Based UEP will now be described. This embodiment focuses on how a hierarchical modulation based UEP scheme may be applied according to the reported WTRU capabilities, channel conditions, scheduling constraints, and other system considerations.
[0145] Modulation-based UEP is used herein to differentiate video layers by modulating them differently according to their importance. For instance, bit streams from high-importance video layers (i.e., video BL) can be modulated using a low modulation order, while bit streams from low-importance video layers (i.e., video EL) can be modulated using a high modulation order. The BS can decide to leverage this scheme based on the capability reported by the WTRU to the BS. Then, the WTRU may receive from the BS a configuration (e.g., via RRC signaling) indicating the modulation-based UEP scheme to use for the modulation of the transmit data or the demodulation of the received data.
[0146] A hierarchical modulation scheme may be applied within the framework of a single-constellation diagram. Referring to FIG. 15, a single-constellation diagram overview is shown in which hierarchical quadrature amplitude modulation (HQAM) is used by a modulator 1506 of a device to map different bit streams of different video layers to certain bits in the constellation diagram. More specifically, bit streams from video BL 1502 are assigned to the most significant bits (MSBs) in the constellation diagram while bit streams from video EL 1504 are assigned to the least significant bits (LSBs). As used herein, a constellation diagram may be referred to as a constellation set, which may comprise constellation subsets. Each constellation subset may include one or more constellation points. Also, the terms constellation region and constellation subset may be used interchangeably.
[0147] The WTRU might receive, from the BS or eNB, the modulation scheme to be utilized via RRC signaling. If the configuration received by the WTRU includes more than one modulation scheme, the configuration might also include whether a modulation scheme is activated or deactivated. A WTRU might receive through MAC CE, DCI or SCI signaling, the activation or deactivation command of a modulation scheme configured previously into the WTRU by RRC signaling.
[0148] To enable some of the embodiments described herein, the WTRU may receive from the BS the UL/DL scheduling parameters it uses to provide differentiated PHY-based treatment of different video layers in terms of the applied modulation and coding scheme for each video layer. In one example, the WTRU might receive one or more scheduling parameters from the BS for DL data reception through DCI messages in support of dynamic or semi-static scheduling. Similarly, the WTRU might receive one or more scheduling parameters from the BS for UL data transmission either through DCI messages in support of dynamic scheduling or through RRC signaling. Certain embodiments focus on the enablement of a differentiated treatment of video layers, namely the ability of the WTRU to receive differentiated modulation and coding scheme configuration parameters for a video base layer, or for one or more video enhancement layers, and the ability of the WTRU to process the video base layer versus the one or more video enhancement layers differently, for example, by mapping the modulated symbols to one or more of different time-frequency resource elements (REs), modulations, or coding schemes, according to the video layer to which a modulated symbol is assigned.
[0149] In one embodiment, a single constellation may be used for two video layers. In the single constellation embodiment, the following aspects might be applied to enable differentiated error protection between various video layers including: (i) a number of allocated bits to video BL and video EL; and/or (ii) a provided level of protection to each layer via distance between the constellation regions.
[0150] For instance, the scheduler can control the provided level of protection to video BL bits by changing the minimum distance between the constellation regions with different values of MSBs, referred to as d_1 in FIG. 15. Additionally, the protection provided to the LSBs, assigned to video EL data, can be altered by changing the distance d_2, shown in FIG. 15, between constellation points located in regions with the same video BL information. Consequently, two (or more) single-constellation based modulation schemes can be configured into the WTRU, for example, the bits-allocation based scheme of FIG. 16 and the joint bits and distance allocation scheme shown in FIG. 17.
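The roles of d_1 and d_2 can be illustrated numerically. The sketch below builds a hierarchical 16-QAM constellation in which 2 MSBs (video BL) select a quadrant and 2 LSBs (video EL) select a point within it; the specific geometry and the function name are illustrative assumptions, not taken from the text.

```python
# Illustrative sketch (assumed geometry): hierarchical 16-QAM where d1 is
# the minimum distance between points in different quadrants (BL protection)
# and d2 is the minimum distance between points within a quadrant (EL
# protection).
import itertools

def hierarchical_16qam(d1, d2):
    c = d1 / 2 + d2 / 2            # quadrant-centre offset from each axis
    points = []
    for qi, qq in itertools.product((+1, -1), repeat=2):        # BL: quadrant
        for oi, oq in itertools.product((+1, -1), repeat=2):    # EL: offset
            points.append((qi * c + oi * d2 / 2, qq * c + oq * d2 / 2))
    return points

pts = hierarchical_16qam(d1=4.0, d2=1.0)
# Nearest points in adjacent quadrants are d1 apart; within a quadrant, d2.
dists = sorted(
    ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    for (ax, ay), (bx, by) in itertools.combinations(pts, 2)
)
print(round(dists[0], 6))  # 1.0 (= d2, the weaker EL protection)
```

Increasing d1 at fixed average power forces d2 to shrink, which is exactly the UEP trade-off: stronger BL protection at the cost of weaker EL protection.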
[0151] One example embodiment may define the mapping of the bits of the video enhancement layer as a function of the mapping of the video base layer bits. In differential mapping, a given symbol is mapped as a function of the information bits (that it represents) and the previous symbol. Embodiments of the present invention may map the bits of the video enhancement layer in a differentiated way, where the bits of the video enhancement layer (sub-symbol) are mapped as a function of both the video enhancement layer bits themselves and the bits of the video base layer. For higher order modulations, this scheme can provide an additional tool to control the UEP performance and improve the performance of video enhancement layer decoding.
[0152] A variant embodiment for the single-constellation scheme may be applied with one video BL and more than one video EL. For instance, each constellation region in FIG. 15 may include more than one video EL. In this case, as the priority of the video EL increases, the order of the assigned bits to this video EL increases. For example, the video EL with the least priority can be assigned to a certain number of LSBs while another video EL with a higher priority can be assigned a certain number of bits succeeding the LSBs assigned to the least priority video EL. Additionally, the minimum distance between each video EL constellation points can be altered based on the level of protection that should be provided to this video layer.
[0153] FIG. 17 is an example representation where a video stream is encoded into one video base layer and one video enhancement layer, termed video BL 1702 and video EL1 1704. In this exemplary figure, a 64-QAM constellation is used to provide UEP for these two video layers. The two layers in this example use 2 bits each within each constellation symbol. At the top, the legend shows that the two MSBs are used to encode the two bits from the video BL, and the two LSBs are used to transmit video EL1. From the WTRU processing perspective, a WTRU will decode the video BL first, shown as the symbol detection over the green diamond symbols 1706 in the leftmost part of the figure. Once video BL decoding is done, the WTRU will process the video EL1 carried in the two LSB bits of the constellation symbols at 1708. Detecting the video EL1 bits translates to detecting the red diamond symbols in the middle of the figure.
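The two-stage detection described above can be sketched as follows. This is an illustrative sketch under assumed receiver behaviour, using a toy 16-point constellation; all names, the centroid-based stage-1 rule, and the geometry are hypothetical.

```python
# Illustrative sketch (assumption): the WTRU first detects the BL bits from
# the region (quadrant) of the received sample, then detects the EL bits
# from the nearest constellation point within that region.

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def detect_two_stage(rx, region_points):
    """rx: received (I, Q) sample.
    region_points: dict mapping BL bit-tuple -> {EL bit-tuple: (I, Q) point}.
    Returns (bl_bits, el_bits)."""
    # Stage 1: BL detection by nearest region centroid.
    def centroid(pts):
        xs, ys = zip(*pts.values())
        return (sum(xs) / len(xs), sum(ys) / len(ys))
    bl = min(region_points,
             key=lambda b: dist2(rx, centroid(region_points[b])))
    # Stage 2: EL detection by nearest point within the chosen region.
    el = min(region_points[bl],
             key=lambda e: dist2(rx, region_points[bl][e]))
    return bl, el

# Toy 16-point example: quadrant centres at +-2.5, in-region offsets +-1.
regions = {
    (b0, b1): {(e0, e1): ((1 - 2 * b0) * 2.5 + (1 - 2 * e0),
                          (1 - 2 * b1) * 2.5 + (1 - 2 * e1))
               for e0 in (0, 1) for e1 in (0, 1)}
    for b0 in (0, 1) for b1 in (0, 1)
}
print(detect_two_stage((3.2, -2.1), regions))  # ((0, 1), (0, 0))
```

The two-stage structure is what makes the BL more robust: a noisy sample must leave the whole region, not merely the nearest point, before the BL bits are corrupted.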
[0154] The WTRU may receive one or more single-constellation based modulation schemes, through either dynamic signaling, or through semi-static manner via RRC signaling. The activation or deactivation command of the configured modulation scheme via RRC signaling may be accomplished, for example, through MAC CE, DCI or SCI signaling.
[0155] According to the differential treatment provided for different video layers in terms of the number of allocated bits as well as the provided level of protection to each video layer, the WTRU should receive an extra set of parameters to be able to separate the received stream of bits into video BL and video EL bits. In one embodiment, the WTRU uses the received parameters to adjust the mapping between modulated symbols and time-frequency resources. Similarly, the WTRU uses the received parameters to identify the appropriate modulation order and code rate for signal transmission/reception.
[0156] The parameters that WTRU should receive from BS to enable the single-constellation based UEP framework depend mainly on the applied single-constellation approach as illustrated in Table 2 below:
Table 2. Modulation-based parameters to be received by the UE under the single-constellation modulation scheme [0157] In one example embodiment, the modulation-based parameters received by the WTRU are discussed herein. In one example embodiment, the WTRU can use one of several defined MCS tables for determining the modulation order and coding rate based on the received MCS index. In one example, the WTRU can determine the table from which it identifies the modulation and coding scheme based on a received RRC signaling, the received DCI format, and the RNTI used to scramble the CRC attached to the received DCI. [0158] For embodiments using semi-static bits allocation, a table that defines the different possible numbers of bits to be allocated to one of the video layers (i.e., video BL) under different modulation orders should be available at both the BS and the WTRU. The WTRU receives configuration parameters for differentiated treatment of video layers accordingly.
[0159] For embodiments using semi-static joint bits and distance allocation, a table that defines the different combinations of bits allocation and distance allocation for each modulation order should be available at both the WTRU and the BS. The WTRU receives configuration parameters for differentiated treatment of video layers accordingly.
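Semi-static operation of this kind can be sketched as follows. The table entries below are hypothetical placeholders (the real tables are configured by the network and are not specified in the text); the point is that the WTRU resolves received indices against preconfigured lookup tables rather than receiving the raw parameters in each message.

```python
# Illustrative sketch with hypothetical table entries (assumptions).

# Hypothetical MCS table: index -> (modulation order M, code rate)
MCS_TABLE = {0: (2, 0.12), 5: (4, 0.37), 10: (6, 0.55), 15: (6, 0.75)}

# Hypothetical joint bits-and-distance table for M = 6:
# index -> (N_BL, d1, d2)
JOINT_TABLE = {0: (2, 4.0, 1.0), 1: (2, 3.0, 1.5), 2: (4, 2.0, 0.5)}

def resolve_config(mcs_index, joint_index):
    """Resolve received indices into the full modulation configuration."""
    m, rate = MCS_TABLE[mcs_index]
    n_bl, d1, d2 = JOINT_TABLE[joint_index]
    assert n_bl < m, "BL bits must be fewer than the modulation order"
    return {"mod_order": m, "code_rate": rate,
            "n_bl": n_bl, "n_el": m - n_bl, "d1": d1, "d2": d2}

print(resolve_config(10, 0))
# {'mod_order': 6, 'code_rate': 0.55, 'n_bl': 2, 'n_el': 4,
#  'd1': 4.0, 'd2': 1.0}
```

Because both ends hold the same tables, only the two small indices need to be signaled, which is what makes the semi-static variants lighter-weight than signaling N_BL, d_1, and d_2 explicitly.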
[0160] For embodiments using the bits allocation scheme, the WTRU can be configured with any number of bits less than the configured modulation order for each video layer. The WTRU receives configuration parameters for differentiated treatment of video layers accordingly.
[0161] For embodiments using joint bits and distance allocation, the number of bits received for each video layer by the WTRU should be a multiple of 2 to be able to construct the new constellation sets (HQAM-based constellation sets). The WTRU receives configuration parameters for differentiated treatment of video layers accordingly.
[0162] According to one example embodiment, the WTRU can receive the aforementioned parameters: (i) as a part of DCI messages preceding DL transmission of PDSCH under both dynamic and semi-persistent scheduling; (ii) as a part of DCI messages granting UL transmission of PUSCH under both dynamic scheduling and CS type 2; and/or (iii) as a part of RRC signaling granting UL transmission of PUSCH under CS type 1.
[0163] Referring to FIG. 18, a method for a WTRU communicating using a single-constellation UEP framework during DL transmission is described. For example, the WTRU might perform one or more of the following actions for the processing of a PDSCH that includes the data of both video layers within the allocated slot for DL reception.
[0164] In some implementations, at 1802, based on the RRC signaling as well as the activation/deactivation of different UEP-based modulation schemes that might be received through DCI messages, MAC CE, or SCI signaling, the WTRU determines whether it is configured with the single-constellation modulation scheme or not. If yes, it proceeds with the following steps. Otherwise, it performs other actions at 1804, as will be discussed later. [0165] If the single-constellation modulation scheme is configured, then in some implementations, at 1806, the WTRU uses the received time-frequency resources through DCI messages to detect the DL transmitted symbols over the allocated time-frequency resources for DL reception.
[0166] At 1808, the WTRU identifies the received MCS index and determines from which table this index is selected to identify the used modulation order (M) and the code rate. As described earlier, the WTRU is already (pre)configured with MCS configuration tables (e.g., one or more MCS configuration look-up tables). The received MCS index points to an MCS configuration in an MCS configuration table. The MCS configuration pointed to by the received MCS index includes the modulation order (M) and the code rate used.
[0167] At 1810, and based on the RRC signaling as well as the activation/deactivation of different UEP-based modulation schemes that might be received through DCI messages, MAC CE, or SCI signaling, the WTRU determines whether it is configured with the bits allocation or the joint bits and distance allocation scheme. Also, it can determine whether it is configured with a dynamic or semi-static single-constellation based operation. [0168] The WTRU determines the applied modulation order for video BL and video EL to demodulate the received modulated symbols. The WTRU applies a single-constellation option configured into the WTRU to demodulate the received PDSCH. In the following, the actions to be performed by the WTRU to demodulate the received PDSCH signals over allocated time-frequency resources are described in a frequency-first, time-second manner for the different considered single-constellation approaches.
[0169] If the WTRU is configured with bits allocation only, then at 1812, the WTRU identifies the received number of assigned modulation symbol MSBs to BL (N_BL), and also identifies the constellation set and the constellation subsets for both the video base layer and the video enhancement layer using the modulation order M, the minimum distance d_1, and N_BL. Note that in semi-static operation, the WTRU uses the received index to identify the number of assigned MSBs from the configured lookup table as explained in Table 2.
[0170] Else, if the WTRU is configured with joint bits and distance allocation, then at 1814, the WTRU determines the received number of assigned modulation symbol MSBs to BL (N_BL) to know the number of regions representing the different video BL data (2^N_BL). The WTRU also determines the minimum distance d_1 between the constellation points (symbols) carrying different video BL bits and the minimum distance d_2 between the modulation constellation points carrying different video EL bits. At 1816, the WTRU creates the HQAM-based constellation set depending on M, N_BL, d_1, and d_2. Specifically, the constellation set includes 2^M constellation points, equally divided between the 2^N_BL regions. The WTRU identifies the constellation subset for the video base layer using the minimum distance d_1 and N_BL. Then, within each one of the 2^N_BL regions, the WTRU identifies the constellation subset for the video enhancement layer by creating 2^(M-N_BL) constellation points using the minimum distance d_2. Note that in the semi-static approach, the WTRU uses the received index for the selected constellation to obtain the number of MSBs assigned to video BL, the minimum distance between the modulation symbols for the video BL, and the minimum distance between the modulation symbols of the video EL.
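The constellation set creation at 1816 can be sketched as follows, assuming a square constellation with even M and N_BL; the exact point geometry (region-centre spacing, offsets) is an illustrative assumption rather than the normative construction.

```python
# Illustrative sketch (assumed geometry): build a square HQAM constellation
# set of 2**M points equally divided among 2**n_bl regions, with d1 the
# minimum distance between points carrying different BL bits and d2 the
# minimum distance between points within one region.

def build_hqam(M, n_bl, d1, d2):
    assert M % 2 == 0 and n_bl % 2 == 0 and 0 < n_bl < M
    R = 2 ** (n_bl // 2)          # regions per axis
    K = 2 ** ((M - n_bl) // 2)    # points per axis inside a region
    step = (K - 1) * d2 + d1      # region-centre spacing per axis
    centres = [(i - (R - 1) / 2) * step for i in range(R)]
    offsets = [(j - (K - 1) / 2) * d2 for j in range(K)]
    return [(ci + oi, cq + oq)
            for ci in centres for cq in centres
            for oi in offsets for oq in offsets]

pts = build_hqam(M=6, n_bl=2, d1=2.0, d2=0.5)
print(len(pts))  # 64 points in 4 regions of 16
```

With this spacing, the nearest points of adjacent regions sit exactly d_1 apart while points inside a region are d_2 apart, matching the two protection levels the text attributes to d_1 and d_2.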
[0171] At 1818, based on the identified modulation order M, the WTRU demodulates the received symbols using the constellation set corresponding to this modulation order. Then, the WTRU assigns the NBL MSBs (identified at 1812 or 1814) of the demodulated symbols to the video BL and assigns the other bits to the video EL.
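The per-symbol bit split at 1818 can be sketched as below, where the label is the M-bit integer recovered by the demodulator; names are illustrative.

```python
def split_demodulated_symbol(label, m, n_bl):
    """Split an m-bit demodulated symbol label as in [0171]: the
    n_bl MSBs go to the video BL, the remaining m - n_bl bits to
    the video EL."""
    bl_bits = label >> (m - n_bl)
    el_bits = label & ((1 << (m - n_bl)) - 1)
    return bl_bits, el_bits
```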
[0172] At 1820, the WTRU assembles the obtained video BL bit streams from the allocated time-frequency resources in a frequency-first, time-second manner to reconstruct the protocol stack physical layer (PHY) code block(s) for the video BL. Similarly, the WTRU then reconstructs the protocol stack physical layer code block(s) for the video EL.
[0173] At 1822, the WTRU first decodes the protocol stack physical layer code block(s) for the video BL based on the code rate determined at 1808.
[0174] At 1824, the WTRU checks whether the PHY code block(s) of the video BL are correctly decoded. If not, at 1826, the WTRU drops the received protocol stack physical layer code block(s) that correspond to the video EL, and at 1828, the WTRU sends a negative acknowledgment (NACK) to the base station or serving node for retransmission. [0175] If the PHY code block(s) of the video BL are correctly decoded, then at 1830, the WTRU decodes the PHY code block(s) of the video EL based on the same code rate used to decode the PHY code block(s) for the video BL; and at 1832, the WTRU checks whether the PHY code block(s) of the video EL are correctly decoded. If not, the WTRU may or may not require retransmission of the video EL based on its required QoS. If the WTRU requires retransmission of the video EL, then at 1834, the WTRU sends a NACK to the serving node or base station. If the WTRU does not require retransmission of the video EL, or if the video EL code block(s) are correctly decoded, then at 1836, the WTRU sends an ACK to the serving node or base station, and at 1838, the WTRU concatenates the correctly decoded code blocks of both video layers to construct the transport block to be transferred to the protocol stack upper layers.
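The BL-first decode flow of 1822–1838 can be sketched as follows, where `decode_bl` and `decode_el` stand in for the PHY channel decoder and return a success flag plus decoded bits; these names and return types are assumptions for illustration only.

```python
def process_received_layers(decode_bl, decode_el, el_retx_required):
    """Sketch of the decode flow at 1822-1838: decode the video BL
    first; on BL failure, drop the EL code blocks and send a NACK.
    On BL success, decode the EL and send a NACK only if EL
    retransmission is required by its QoS; otherwise send an ACK and
    concatenate the correctly decoded layers into a transport block."""
    bl_ok, bl_bits = decode_bl()
    if not bl_ok:
        return "NACK", None          # BL failed: drop EL, request retx
    el_ok, el_bits = decode_el()
    if not el_ok and el_retx_required:
        return "NACK", bl_bits       # keep BL, request EL retx
    # ACK: concatenate the correctly decoded code blocks
    tb = bl_bits + (el_bits if el_ok else b"")
    return "ACK", tb
```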
[0176] In some implementations, the WTRU may determine whether retransmission of the video EL is required based on its priority, importance (e.g., whether it is needed for decoding or predicting other frames), QoS level, measurements of transmission characteristics (e.g., received signal strength, noise or interference, bandwidth, latency, etc.), or any other type and form of factors or combination of factors.
[0177] Referring to FIG. 19, a method for a WTRU communicating using the single-constellation UEP framework during UL transmission is described. For UL transmission, the WTRU might perform one or more of the following for the processing of a PUSCH that includes the data of both video layers within the allocated slot for UL transmission.
[0178] In some implementations, at 1902, the WTRU sends a buffer status report (BSR) to a base station (BS), access point, or other serving node over PUSCH as a part of a MAC CE. The sent BSR can be used to notify the BS about the amount of data the WTRU needs to send for each video layer.
[0179] At 1904, the WTRU may receive a UL grant along with scheduling-related parameters either through DCI messages if it is configured with dynamic scheduling or CS type 2, or through RRC signaling if it is configured with CS type 1.
[0180] At 1906, the WTRU determines the coding rate it will use to encode the video BL and video EL bit streams by identifying the received MCS index and the table from which this MCS index is selected. As described earlier, the WTRU is already (pre)configured with MCS configuration tables (e.g., one or more MCS configuration look-up tables). The received MCS index points to an MCS configuration in an MCS configuration table. The MCS configuration pointed to by the received MCS index includes the code rate to be used by the WTRU.
[0181] At 1908, the WTRU encodes both the video BL bit streams and the video EL bit streams with the code rate it determined at 1906 to generate encoded PHY code block(s) for the video BL and encoded PHY code block(s) for the video EL.
[0182] At 1910, before proceeding with the modulation, the WTRU may identify which modulation approach it will apply to modulate the bit streams of each video layer. To this end, and based on the RRC signaling as well as the activation/deactivation of different UEP-based modulation schemes that might be received through DCI messages, MAC CE, or SCI signaling, the WTRU determines whether it is configured with the single-constellation modulation scheme or not. If yes, it proceeds to 1914. Otherwise, at 1912, it performs other actions, as will be discussed later.
[0183] At 1914, the WTRU identifies the received MCS index and determines the table from which the MCS index is selected to identify the modulation order to use (M). As described earlier, the WTRU is already (pre)configured with MCS configuration tables (e.g., one or more MCS configuration look-up tables). The received MCS index points to an MCS configuration in an MCS configuration table. The MCS configuration pointed to by the received MCS index includes the modulation order M.
[0184] At 1916, based on the RRC signaling as well as the activation/deactivation of different UEP-based modulation schemes that might be received through DCI messages, MAC CE, or SCI signaling, the WTRU determines whether it is configured with the bits-allocation scheme or the joint bits-and-distance allocation scheme. It can also determine whether it is configured with a dynamic or semi-static single-constellation based operation.
[0185] The WTRU determines the modulation order it applies for the video BL and the video EL. The WTRU applies the single-constellation option it is configured with to modulate the encoded video BL and video EL code blocks. In the following, a method for a WTRU to modulate the PHY code block(s) for the video BL and the video EL is described.
[0186] If the WTRU is configured with bits allocation only, then at 1918, the WTRU identifies the received number of modulation symbol MSBs assigned to the video BL (NBL), and also identifies the constellation set and the constellation subsets for both the video base layer and the video enhancement layer using M, the minimum distance d1, and NBL.
[0187] If the WTRU is configured with joint bits and distance allocation, then at 1920, the WTRU identifies the received number of MSBs assigned to the video BL (NBL), the minimum distance between constellation points carrying different video BL bits (d1), and the minimum distance between constellation points carrying different video EL bits (d2).
[0188] At 1922, the WTRU may create an HQAM-based constellation set depending on M, NBL, d1, and d2. Specifically, the constellation set includes 2^M constellation points, equally divided between 2^NBL regions. The WTRU identifies the constellation subset for the video base layer using the minimum distance d1 and NBL. Then, within each one of the 2^NBL regions, the WTRU identifies the constellation subsets for the enhancement layer by creating 2^(M−NBL) constellation points using the minimum distance d2. As discussed above, such constellations including constellation subsets may be referred to as a hierarchical constellation. At 1924, the WTRU combines the NBL bits (identified at 1918 or 1920) for the video BL and the (M − NBL) bits for the video EL to create M bits in which the modulation symbol bits for the BL are the MSBs. [0189] At 1926, the WTRU performs modulation by mapping the created M bits to the corresponding constellation point.
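The bit combining at 1924 and the mapping at 1926 can be sketched as follows; `constellation` is any label-to-symbol mapping (for example, one built per [0188]), and the names are illustrative.

```python
def modulate_layers(bl_bits, el_bits, m, n_bl, constellation):
    """Combine the n_bl video BL bits (placed as MSBs, per [0188])
    with the m - n_bl video EL bits (placed as LSBs) into one m-bit
    label, then map the label to the corresponding constellation
    point.  `constellation` maps integer labels to symbol values."""
    assert 0 <= bl_bits < 2 ** n_bl
    assert 0 <= el_bits < 2 ** (m - n_bl)
    label = (bl_bits << (m - n_bl)) | el_bits
    return constellation[label]
```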
[0190] At 1928, the WTRU determines the allocated time-frequency resources for PUSCH transmission based on the received parameters for time allocation and frequency allocation. [0191] At 1930, the WTRU maps the modulated symbols to the time-frequency resources (identified at 1928) in a frequency-first, time-second manner.
[0192] The mapped data over the scheduled resource is then transmitted in the uplink direction.
[0193] The previous discussion focused on the case where the application video data comprises two video layers of different priorities, to help explain the concepts around UEP. In reality, video streams are typically encoded in more than two video layers. In the following, an exemplary embodiment is disclosed to show single-constellation-based UEP used to provide differentiated transmission (or reception) treatment for three video layers.
[0194] FIG. 20 is an example representation where a video stream is encoded in one video base layer 2002 and two video enhancement layers, termed video EL1 2004 and video EL2 2006. In this exemplary figure, a 64-QAM constellation is used to provide UEP for these three video layers. All three layers in this example use 2 bits each within each constellation symbol. At the top, the legend shows that the two MSBs are used to encode the two bits from the video BL, the next two bits, in the middle of the symbol bits, are used to transmit two bits from video EL1, and the two LSBs are used to transmit video EL2. From the WTRU processing perspective, a WTRU will first decode the video BL, shown as symbol detection over the green diamond symbols 2008 in the leftmost part of the figure. Once video BL decoding is done, the WTRU will process the video EL1 in the two middle bits of the constellation symbols at 2010. Detecting the video EL1 bits translates to detecting the red diamond symbols in the middle of the figure. Having detected the video EL1 bits, the WTRU can zoom in further to detect the two LSBs of video EL2, shown on the right side of the figure at 2012.
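The FIG. 20 bit layout (two MSBs for the BL, the middle two bits for EL1, the two LSBs for EL2) corresponds to the following split of a 6-bit 64-QAM label; the function name is illustrative.

```python
def split_64qam_label(label):
    """Split a 6-bit 64-QAM symbol label into three 2-bit fields,
    matching the FIG. 20 mapping: video BL in the two MSBs, video
    EL1 in the middle two bits, video EL2 in the two LSBs."""
    bl  = (label >> 4) & 0b11
    el1 = (label >> 2) & 0b11
    el2 = label & 0b11
    return bl, el1, el2
```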
[0195] The relative error probabilities in this UEP constellation diagram with three video layers are dictated by the careful choice of the constellation parameters d1, d2, and d3. Additional control and tuning of the relative error probabilities can be enabled by additional constellation design parameters, such as the vertical distance and unequal distances among the constellation points.
[0196] Embodiments for dynamic adaptation of UEP will now be discussed. The previous sections have provided some tools whereby different flows or video layers can receive different prioritized treatment in the protocol stack PHY layer, especially through UEP over the active modulation constellations. One open question is the dynamic adaptation of UEP as a function of the inherent priority of the data content (video layers), device capabilities, and system aspects including scheduling decisions, available capacity, system load, and changing radio conditions. [0197] In embodiments utilizing dynamic UEP adaptation, measurement quantities and feedback may be important. The following are measurement quantities which can be used to dynamically adapt different UEP schemes.
[0198] Channel time variation and estimation for feedback: a measure of how fast the channel radio conditions are changing with time can play an important role in the dynamic adaptation of UEP embodiments. The measurement quantity may involve a rate of change including the phase of the estimated channel, or it can be based only upon the channel magnitude, ignoring the phase. This measurement can be made more precise in the form of a Doppler estimate derived from the available channel estimates at different time instants. Additional conditions in terms of averaging and filtering can be defined to make this quantity stable prior to feedback and use in dynamic adaptation.
[0199] For the DL direction, a device can estimate the rate of change of the channel conditions through estimates made over one or a combination of existing reference signals (RSs). These RSs can be the DMRS of the SSB, the DMRS of data, CSI-RS, or even the SSBs themselves. New, more suitable RSs could also be defined exclusively for this purpose. These RSs can be WTRU-dedicated, group-common, or cell/beam-specific, which may allow a WTRU to estimate the Doppler.
[0200] Once a suitable measure of channel time variation has been estimated, the WTRU should feed back this quantity to the network so that it can be used for dynamic adaptation of UEP in combination with other parameters/constraints. The indication of channel time variation can be transmitted in the form of a single-bit flag, which may indicate a channel time variation larger than a pre-defined or configured threshold. In an alternate embodiment, the network can configure the size/pattern of the channel time variation feedback, which could be selected among a number of options defined for a given network. A subset of these options can be indicated to the WTRU as part of the semi-static configuration. An estimated channel time variation indication, after suitable processing/filtering, can be provided as feedback to the network, for example, as part of uplink control information (UCI). In one embodiment, the UCI carrying the channel time variation indication can be transmitted either in PUCCH or in PUSCH. The channel time variation feedback can be configured as periodic, semi-static, or aperiodic. The network can configure suitable parameters controlling the periodicity of this feedback.
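A minimal sketch of the single-bit-flag option follows. The averaging choice (mean magnitude change between consecutive channel estimates) and all names are illustrative assumptions, not the specified processing; in practice the quantity might instead be a filtered Doppler estimate.

```python
def time_variation_flag(channel_estimates, threshold):
    """One-bit channel time variation indication per [0200]: compare
    the mean magnitude change between consecutive channel estimates
    against a (pre)configured threshold and return a single-bit flag.
    The averaging rule here is an illustrative assumption."""
    diffs = [abs(abs(b) - abs(a))
             for a, b in zip(channel_estimates, channel_estimates[1:])]
    avg_change = sum(diffs) / len(diffs)
    return 1 if avg_change > threshold else 0
```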
[0201] Additionally, channel frequency selectivity estimation and feedback may be desirable. Channel variation in the frequency domain, or channel frequency selectivity, can be another important parameter for choosing judicious use of UEP embodiments to combat the frequency selectivity and avoid deep fades hitting the prioritized video layers/data. In one embodiment, the measurement quantity may involve a rate of change including the phase of the estimated channel, or it can be based only upon the channel magnitude, ignoring the phase. Additional conditions in terms of averaging and filtering can be defined to make this quantity stable prior to feedback and use in dynamic adaptation.
[0202] For the DL direction, a device can estimate the channel frequency selectivity through multiple channel estimates made over different parts of the bandwidth. These estimates can be made using a suitable RS or a combination of RSs, examples of which include the DMRS of the SSB, the DMRS of data, CSI-RS, or even the SSBs themselves. New RSs may also be defined exclusively for this purpose. These RSs can be WTRU-dedicated, group-common, or cell/beam-specific, which may allow a WTRU to estimate the channel over different frequency portions. In one embodiment, DMRS type 1 and type 2 could be suitably used to estimate the channel frequency selectivity, as they span all the physical resource blocks (PRBs) in the scheduled resource. Similarly, many of the existing CSI-RS patterns can be used to estimate the channel frequency selectivity. Once a suitable measure of channel frequency selectivity has been estimated, the WTRU should feed back this quantity to the network so that it can be used for dynamic adaptation of UEP in combination with other parameters/constraints. Various options on how to perform averaging or filtering, or other requirements such as the minimum number of measurements to be averaged over prior to feeding back this quantity to the network, may be defined. In one example, an indication of channel frequency selectivity can be transmitted in the form of a single-bit flag, which may indicate a channel frequency selectivity larger than a pre-defined or configured threshold.
[0203] In some embodiments, the network can configure the size/pattern of the channel frequency selectivity feedback, which could be selected among several options defined in the specification. A subset of these options can be indicated to the WTRU as part of the semi-static configuration. The estimated channel frequency selectivity indication, after suitable processing/filtering, can be provided as feedback to the network as part of uplink control information (UCI). The UCI carrying the channel frequency selectivity indication can be transmitted either in PUCCH or in PUSCH. The channel frequency selectivity feedback can be configured as periodic, semi-static, or aperiodic, and the network can configure suitable parameters controlling the periodicity of this feedback.
[0204] The previous discussion provided certain quantities that the WTRU can estimate/measure and report to the network in a suitable format. In a compatible embodiment, the WTRU can make a direct request for the constellation and video-layer-mapping-specific parameters that it desires to use. In an example, these can be the expected receive parameters through which it prefers to receive DL layered video. For the UL case, the requested parameters may be those with which the WTRU prefers to transmit the layered video to the base station in the uplink direction.
[0205] In certain embodiments, the indication of modulation-based UEP parameters can comprise the constellation design parameters (for example, distance parameters), the bit allocation for the various video layers, and the relative mapping for the video layers, as discussed previously.
[0206] The direct feedback for the requested modulation-based UEP can be transmitted in the uplink direction to the base station as part of uplink control information. This information can be transmitted as part of PUCCH or PUSCH. Furthermore, the base station can configure this reporting as periodic, semi-static, or aperiodic. To cover dynamic embodiments, this reporting can be event-triggered, where suitable triggers can be defined to report this feedback. One example of a suitable trigger is the channel variation in time or frequency exceeding a configured threshold. After having received the direct request of modulation-based UEP parameters, the base station can use this feedback and the radio measurement feedback (if available), combined with other system considerations, to adapt the UEP parameters for the subsequent transmissions.
[0207] An embodiment for dynamic adaptation for single-constellation differentiation may involve the assignment and split of the available resource/capacity/bits among different video layers having different priorities. Variable factors such as traffic flows (video layers), multiple system design considerations, WTRU capabilities, and some long-term channel characteristics for the relevant WTRU may be used to dynamically assign different resources or different priorities to different video layers in the UEP schemes. [0208] To make the best use of the available resources for multi-layer video transmission, the UEP embodiments discussed herein should utilize dynamic adjustment in the face of network dynamics, which may include variations in the system load, different cell capacities while the WTRU is in mobility, and changing radio conditions. Different measurement quantities were identified previously, which a WTRU can estimate and report back to the network in suitable formats reflecting the current channel conditions.
[0209] As an example for the single-constellation case, if the channel time variations exceed some threshold, the probability that a WTRU successfully decodes a given higher-order constellation decreases, as the quality of the channel estimates degrades in direct proportion to the channel time variation. The network can use the time variation indication to adjust the rates for both the video base layer and one or more of the video enhancement layers. Another possibility could be to stop transmitting one of the video enhancement layers if the network estimates that the WTRU will not be able to decode it anyway. The network decisions on the updated dynamic split of the bits assignment and the transmission of a given number of video layers can be indicated to the WTRU through dynamic signaling.
[0210] The previous embodiments have identified how a transmitting device can make dynamic updates to the layered video parameters, the allocation of the bits of different video layers to constellation bits, and the number of bits of each video layer for a given constellation. In another embodiment, the constellation design parameters themselves can be updated to obtain a more suitable form of constellation in view of the system considerations, WTRU capability, feedback from the receiver about the channel variations, etc. The constellation design parameters, for example d1, d2, and d3 as discussed previously, can be updated with respect to the available information elements. The update of the constellation design parameters results in a change in the expected probability of detection for the various video layers. This change can thus help achieve a given prioritization of the different video layers when the design considerations are changing.
[0211] In another example, the constellation can be made to switch from a single-root constellation to a multi-root constellation, or a multi-root constellation can be updated to another multi-root constellation with different parameters.
[0212] In yet another compatible design, the mapping of different video layers to the constellation can be updated, for example, updating the mapping of a given video enhancement layer as a function of the detected bits (sub-symbol) corresponding to the video base layer.
[0213] In various other embodiments, dynamic switching of UEP configurations may be performed without feedback. For modulation-based UEP embodiments, one configuration may build a hierarchical modulation constellation for transmission/reception of layered video, and another configuration may base the design on video-layer-specific modulation constellations. For example, the base station may provide the relevant configurations for the hierarchical constellation and the separate video-layer-specific constellation while providing an indication of the active configuration. This configuration may then be used for UL or DL transmission of layered video data. In view of the changing requirements, channel conditions, and system considerations, the network can switch the active configuration. The signaling to switch the active configuration can be sent through semi-static signaling or in a more dynamic manner by indicating it in the DCI. This can be easily achieved by a single-bit flag providing the active configuration indication.
[0214] In another compatible design, the WTRU can request the base station to switch the active configuration for the layered video transmission/reception. This configuration switch request can be transmitted to the base station in the uplink direction. One signaling example to achieve this is to add the active configuration indication to the UEP feedback.
[0215] FIG. 21 shows an exemplary embodiment for single-constellation-based UEP layered video transmission in the DL direction. In this embodiment, a WTRU may report its capability and assistance information to the base station (BS), access point, gNB, or serving node (referred to generally as a BS or serving node) at 2102. In response, at 2104, the BS or serving node provides the single hierarchical constellation based UEP configuration for layered video transmission. This configuration provides parameters of the video-layer-specific bits size, constellation part, distance, and bit mapping for the constellation. The configuration may indicate the dynamic update option for a subset of these parameters. In one example, the scheduling DCI provides the time-frequency resource and may complete the constellation design information. In one design, the configuration provides a set of parameters which are completed by the dynamic indication later. In another compatible design, the configuration may provide a set of parameters, e.g., related to the constellation choice and construction with suitable distances, and then some of these parameters may be overwritten by the dynamic indication. The overwriting of UEP parameters as part of the dynamic indication at 2106 provides the network the ability to respond to dynamic traffic changes, network system load variations, and channel variations. Upon decoding the DCI, the WTRU will receive the scheduled data from the indicated time-frequency resource at 2108. At 2110, the WTRU will prepare the constellation for the received video layers using the received information from the BS or serving node. At 2112, the WTRU may demodulate the video base layer, and then at 2114 the video enhancement layers, using the relevant parts of the hierarchical constellation. After the demodulation, at 2116, the WTRU may proceed to channel decoding of the demodulated video layers.
At 2118, the WTRU may prepare the UEP feedback which can request a specific set of target constellations from the BS or serving node for the next transmission. UEP feedback may additionally include the indication of channel time and frequency variation to allow suitable UEP processing/scheduling for the subsequent transmissions. These estimates can be prepared over the reference symbols which are part of the scheduled resource, or the BS or serving node can transmit dedicated ones for such estimation. The WTRU will transmit the UEP feedback in the UL direction at 2120.
[0216] Referring to FIG. 22, a method for a WTRU communicating in a wireless network using layered video transmission in the uplink using single-constellation-based UEP is shown. In this embodiment, at 2202, a WTRU may report its capability and assistance information to the BS or serving node. In response, at 2204, the BS or serving node provides the single-constellation-based UEP configuration for layered video transmission in the UL direction. This configuration provides parameters of the video-layer-specific mapping over the hierarchical constellation, distance, and bit mapping rules for the single constellation. The configuration may indicate the dynamic update option for a subset of these parameters. At 2206, the scheduling DCI provides the UL time-frequency resource and may complete the constellation design information, either by providing any missing elements or by updating certain elements configured in the RRC configuration. Upon decoding the DCI, the WTRU will perform channel encoding of the different video layers at 2208, followed by multiplexing of the layered coded bits according to the configuration at 2210. At 2212, the WTRU may prepare the indicated constellation for the video layers to be transmitted. At 2214, the WTRU will then modulate the multiplexed video layered data over the constellation. The WTRU then transmits the UEP-modulated layered video data over the scheduled UL time-frequency resource at 2216.
[0217] FIG. 23 illustrates one example embodiment of a method for a WTRU communicating in a wireless network using single-constellation-based UEP layered video transmission in the UL direction and providing feedback to the base station in the UL direction. In this embodiment, at 2302, a WTRU may report its capability and assistance information to the BS or serving node. In response, at 2304, the BS or serving node provides the single-constellation-based UEP configuration for layered video transmission in the UL direction. This configuration provides parameters of the video-layer-specific mapping over the hierarchical constellation, distance, and bit mapping rules for the single constellation. The configuration may indicate the dynamic update option for a subset of these parameters. At 2306, the scheduling DCI provides the UL time-frequency resource and may complete the constellation design information, either by providing the missing elements or by updating certain elements configured in the RRC configuration. Upon decoding the DCI, at 2308, the WTRU will perform channel encoding of the different video layers, followed by multiplexing of the layered video coded bits according to the configuration at 2310. At 2312, the WTRU may prepare the indicated constellation for the video layers to be transmitted. At 2314, the WTRU may modulate the multiplexed layered video data over the constellation. At 2316, the WTRU may prepare WTRU feedback which can comprise a target constellation for subsequent transmission(s), indicating in addition the video-layer-specific mapping and constellation design parameters. The WTRU may then multiplex the UEP feedback with the layered video data at 2318. At 2320, the WTRU may transmit the multiplexed UEP feedback and layered video data over the scheduled UL time-frequency resource.
[0218] In some aspects, the present disclosure is directed to a method including determining, by a wireless transmit/receive unit (WTRU), a first number of bits for a first media packet and a second number of bits for a second media packet. The method also includes determining, by the WTRU, a modulation constellation symbol based on the determined first number of bits and the second number of bits. The method also includes multiplexing, by the WTRU, the first number of bits of the first media packet and the second number of bits of the second media packet onto the determined modulation constellation symbol for transmission. The method also includes sending, by the WTRU, feedback information to a network device, the feedback information comprising an identification of the first number of bits and the first constellation distance and the second number of bits and the second constellation distance.
[0219] In some implementations, the method includes transmitting, by the WTRU, the multiplexed modulation constellation symbol. In some implementations, the method includes identifying, by the WTRU, a first constellation subset based on the first number of bits and a second constellation subset based on the second number of bits, wherein the first constellation subset and the second constellation subset constitute a hierarchical constellation. In some implementations, the method includes determining, by the WTRU, a first constellation distance for the first media packet and a second constellation distance for the second media packet. In a further implementation, the determination of the modulation constellation symbol is further based on the first constellation distance and the second constellation distance.
[0220] In some implementations, the method includes reporting, by the WTRU to the network device, an identification of capabilities of the WTRU for differentiating between different video layers; and receiving, by the WTRU from the network device, a modulation configuration for transmission of the different video layers. In a further implementation, the method includes reporting, by the WTRU to the network node, one or more layer-specific buffer status reports.
[0221] In some implementations of the method, multiplexing the first number of bits of the first media packet and the second number of bits of the second media packet onto the determined modulation constellation symbol comprises: assigning the first number of bits of the first media packet to a first most significant bits of the modulation constellation symbol and the second number of bits of the second media packet to a second most significant bits of the modulation constellation symbol. In a further implementation, the first most significant bits are more reliable than the second most significant bits. In some implementations, the first media packet comprises data of a base layer of a video signal and the second media packet comprises data of an enhancement layer of the video signal.
[0222] In another aspect, the present disclosure is directed to a wireless transmit/receive unit (WTRU). The WTRU comprises one or more transceivers and one or more processors. The one or more processors are configured to: determine a first number of bits for a first media packet and a second number of bits for a second media packet; determine a modulation constellation symbol based on the determined first number of bits and the second number of bits; multiplex the first number of bits of the first media packet and the second number of bits of the second media packet onto the determined modulation constellation symbol for transmission; and send feedback information to a network device, the feedback information comprising an identification of the first number of bits and the first constellation distance and the second number of bits and the second constellation distance.
[0223] In some implementations, the one or more processors are further configured to transmit, via the one or more transceivers, the multiplexed modulation constellation symbol. In some implementations, the one or more processors are further configured to: identify a first constellation subset based on the first number of bits and a second constellation subset based on the second number of bits, wherein the first constellation subset and the second constellation subset constitute a hierarchical constellation. In some implementations, the one or more processors are further configured to determine a first constellation distance for the first media packet, and a second constellation distance for the second media packet. In a further implementation, the determination of the modulation constellation symbol is further based on the first constellation distance and the second constellation distance.
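The role of the two constellation distances can be illustrated with a hedged sketch (not from the disclosure; names and parameters are assumptions). Per axis, a first distance separates the two MSB decision regions and a second distance separates the points within each region, so the ratio of the distances trades base-layer protection against enhancement-layer protection:

```python
def hierarchical_pam(msb, lsb, d1, d2):
    """One axis of a two-level hierarchical constellation.

    d1: offset of each MSB cluster from the origin (base-layer distance).
    d2: offset of each point within its cluster (enhancement distance).
    Choosing d1 > d2 makes the MSB (base-layer) decision more reliable.
    """
    sign = 1 if msb == 0 else -1
    return sign * (d1 + (d2 if lsb == 0 else -d2))

def hierarchical_symbol(base_bits, enh_bits, d1, d2):
    # One base-layer bit and one enhancement-layer bit per axis.
    i = hierarchical_pam(base_bits[0], enh_bits[0], d1, d2)
    q = hierarchical_pam(base_bits[1], enh_bits[1], d1, d2)
    return complex(i, q)

# With d1 = 2, d2 = 1 this reduces to uniform 16-QAM levels (+/-1, +/-3);
# increasing d1 relative to d2 pushes the four clusters further apart.
assert hierarchical_symbol((0, 0), (0, 0), 2, 1) == complex(3, 3)
assert hierarchical_symbol((0, 0), (1, 1), 2, 1) == complex(1, 1)
assert hierarchical_symbol((0, 1), (1, 0), 4, 1) == complex(3, -5)
```

This is one plausible parameterization of the first and second constellation distances mentioned above, not the specific constellation construction claimed.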
[0224] In some implementations, the one or more processors are further configured to: report, to the network device, an identification of capabilities of the WTRU for differentiating between different video layers; and receive, from the network device, a modulation configuration for transmission of the different video layers. In a further implementation, the one or more processors are further configured to report, to the network device, one or more layer-specific buffer status reports. In some implementations, the one or more processors are further configured to: assign the first number of bits of the first media packet to a first most significant bits of the modulation constellation symbol and the second number of bits of the second media packet to a second most significant bits of the modulation constellation symbol. In a further implementation, the first most significant bits are more reliable than the second most significant bits. In some implementations, the first media packet comprises data of a base layer of a video signal and the second media packet comprises data of an enhancement layer of the video signal.
[0225] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable media include electronic signals (transmitted over wired or wireless connections) and computer-readable storage media. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

Claims

What is Claimed:
1. A method, comprising: determining, by a wireless transmit/receive unit (WTRU), a first number of bits for a first media packet, and a second number of bits for a second media packet; determining, by the WTRU, a modulation constellation symbol based on the determined first number of bits and the second number of bits; multiplexing, by the WTRU, the first number of bits of the first media packet and the second number of bits of the second media packet onto the determined modulation constellation symbol for transmission; and sending, by the WTRU, feedback information to a network device, the feedback information comprising an identification of the first number of bits, a first constellation distance, the second number of bits, and a second constellation distance.
2. The method of claim 1, further comprising transmitting, by the WTRU, the multiplexed modulation constellation symbol.
3. The method of either of claim 1 or claim 2, further comprising: identifying, by the WTRU, a first constellation subset based on the first number of bits and a second constellation subset based on the second number of bits, wherein the first constellation subset and the second constellation subset constitute a hierarchical constellation.
4. The method of any preceding claim, further comprising determining, by the WTRU, a first constellation distance for the first media packet, and a second constellation distance for the second media packet.
5. The method of claim 4, wherein the determination of the modulation constellation symbol is further based on the first constellation distance and the second constellation distance.
6. The method of any preceding claim, further comprising: reporting, by the WTRU to the network device, an identification of capabilities of the WTRU for differentiating between different video layers; and receiving, by the WTRU from the network device, a modulation configuration for transmission of the different video layers.
7. The method of claim 6, further comprising reporting, by the WTRU to the network device, one or more layer-specific buffer status reports.
8. The method of any preceding claim, wherein multiplexing the first number of bits of the first media packet and the second number of bits of the second media packet onto the determined modulation constellation symbol comprises: assigning the first number of bits of the first media packet to a first most significant bits of the modulation constellation symbol and the second number of bits of the second media packet to a second most significant bits of the modulation constellation symbol.
9. The method of claim 8, wherein the first most significant bits are more reliable than the second most significant bits.
10. The method of any preceding claim, wherein the first media packet comprises data of a base layer of a video signal and wherein the second media packet comprises data of an enhancement layer of the video signal.
11. A wireless transmit/receive unit (WTRU), comprising one or more transceivers and one or more processors; wherein the one or more processors are configured to: determine a first number of bits for a first media packet, and a second number of bits for a second media packet; determine a modulation constellation symbol based on the determined first number of bits and the second number of bits; multiplex the first number of bits of the first media packet and the second number of bits of the second media packet onto the determined modulation constellation symbol for transmission; and send feedback information to a network device, the feedback information comprising an identification of the first number of bits, a first constellation distance, the second number of bits, and a second constellation distance.
12. The WTRU of claim 11, wherein the one or more processors are further configured to transmit, via the one or more transceivers, the multiplexed modulation constellation symbol.
13. The WTRU of either of claim 11 or claim 12, wherein the one or more processors are further configured to: identify a first constellation subset based on the first number of bits and a second constellation subset based on the second number of bits, wherein the first constellation subset and the second constellation subset constitute a hierarchical constellation.
14. The WTRU of any of claims 11 through 13, wherein the one or more processors are further configured to determine a first constellation distance for the first media packet, and a second constellation distance for the second media packet.
15. The WTRU of claim 14, wherein the determination of the modulation constellation symbol is further based on the first constellation distance and the second constellation distance.
16. The WTRU of any of claims 11 through 15, wherein the one or more processors are further configured to: report, to the network device, an identification of capabilities of the WTRU for differentiating between different video layers; and receive, from the network device, a modulation configuration for transmission of the different video layers.
17. The WTRU of claim 16, wherein the one or more processors are further configured to report, to the network device, one or more layer-specific buffer status reports.
18. The WTRU of any of claims 11 through 17, wherein the one or more processors are further configured to: assign the first number of bits of the first media packet to a first most significant bits of the modulation constellation symbol and the second number of bits of the second media packet to a second most significant bits of the modulation constellation symbol.
19. The WTRU of claim 18, wherein the first most significant bits are more reliable than the second most significant bits.
20. The WTRU of any of claims 11 through 19, wherein the first media packet comprises data of a base layer of a video signal and wherein the second media packet comprises data of an enhancement layer of the video signal.
PCT/US2023/078882 2022-11-07 2023-11-07 Modulation based uep-hierarchical modulation WO2024102684A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263423367P 2022-11-07 2022-11-07
US63/423,367 2022-11-07

Publications (1)

Publication Number Publication Date
WO2024102684A2 2024-05-16

Family

ID=89164600

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/078882 WO2024102684A2 (en) 2022-11-07 2023-11-07 Modulation based uep-hierarchical modulation

Country Status (1)

Country Link
WO (1) WO2024102684A2 (en)


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23821441

Country of ref document: EP

Kind code of ref document: A2