WO2023194558A1 - Improved subblock-based motion vector prediction (SbTMVP) - Google Patents

Improved subblock-based motion vector prediction (SbTMVP)

Info

Publication number
WO2023194558A1
WO2023194558A1 PCT/EP2023/059208 EP2023059208W
Authority
WO
WIPO (PCT)
Prior art keywords
sbtmvp
candidates
processor
current block
sbtmvp candidates
Prior art date
Application number
PCT/EP2023/059208
Other languages
English (en)
Inventor
Antoine Robert
Franck Galpin
Philippe DE LAGRANGE
Philippe Bordes
Original Assignee
Interdigital Ce Patent Holdings, Sas
Priority date
Filing date
Publication date
Application filed by Interdigital Ce Patent Holdings, Sas filed Critical Interdigital Ce Patent Holdings, Sas
Publication of WO2023194558A1 publication Critical patent/WO2023194558A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding
    • H04N19/52Processing of motion vectors by encoding by predictive encoding

Definitions

  • Video coding systems may be used to compress digital video signals, e.g., to reduce the storage and/or transmission bandwidth needed for such signals.
  • Video coding systems may include, for example, block-based, wavelet-based, and/or object-based systems.
  • SbTMVP subblock-based temporal motion vector prediction
  • Multiple subblock temporal motion vector predictor (SbTMVP) candidates may be determined (e.g., by a video decoding device). Each of the multiple SbTMVP candidates may be obtained using a motion shift associated with an input candidate, and the input candidate may include a neighboring block of a current block. The multiple SbTMVP candidates may be reordered based on a template matching cost of the multiple SbTMVP candidates. The current block may be decoded using one of the reordered multiple SbTMVP candidates.
  • the template matching cost may indicate a degree of accuracy in predicting motion of video content
  • the multiple SbTMVP candidates may be reordered based on the template matching cost of the multiple SbTMVP candidates.
  • the multiple SbTMVP candidates may be reordered based on the lowest template matching cost.
  • the input candidate may include one or more of the following: spatial neighbors of a current block, a merge list, history-based motion vector predictors (HMVP), bi-directional motion vectors, or a zero motion vector.
  • HMVP history-based motion vector predictors
  • a subset of the reordered multiple SbTMVP candidates may be selected. One of the selected subset of the reordered multiple SbTMVP candidates may be used to decode the current block.
  • a maximum number of seeds to be used for determining the plurality of SbTMVP candidates may be determined. Multiple seeds may be determined in accordance with the determined maximum number of seeds.
  • the plurality of SbTMVP candidates may be reordered using adaptive reordering of merge candidates (ARMC). Based on a threshold value, an SbTMVP candidate may be selected from the plurality of SbTMVP candidates.
  • ARMC adaptive reordering of merge candidates
  • a flag indicating a number of seeds may be received. Based on the flag, the plurality of SbTMVP candidates may be determined based on the number of seeds.
  • a flag indicating usage of the SbTMVP ARMC process may be received. Based on the flag, the plurality of SbTMVP candidates may be reordered using ARMC.
  • the neighboring block of the current block may have a motion vector that uses a collocated picture as a reference picture.
  • SbTMVP subblock-based temporal motion vector prediction
  • Multiple subblock temporal motion vector predictor (SbTMVP) candidates may be determined (e.g., by a video encoding device). Each of the multiple SbTMVP candidates may be obtained using a motion shift associated with an input candidate, and the input candidate may include a neighboring block of a current block. The multiple SbTMVP candidates may be reordered based on a template matching cost of the multiple SbTMVP candidates. The current block may be encoded using one of the reordered multiple SbTMVP candidates.
  • the template matching cost may indicate a degree of accuracy in predicting motion of video content
  • the multiple SbTMVP candidates may be reordered based on the template matching cost of the multiple SbTMVP candidates.
  • the multiple SbTMVP candidates may be reordered based on the lowest template matching cost.
  • the input candidate may include one or more of the following: spatial neighbors of a current block, a merge list, history-based motion vector predictors (HMVP), bi-directional motion vectors, or a zero motion vector.
  • HMVP history-based motion vector predictors
  • a subset of the reordered multiple SbTMVP candidates may be selected.
  • One of the selected subset of the reordered multiple SbTMVP candidates may be used to encode the current block.
  • a maximum number of seeds to be used for determining the plurality of SbTMVP candidates may be determined. Multiple seeds may be determined in accordance with the determined maximum number of seeds.
  • the plurality of SbTMVP candidates may be reordered using adaptive reordering of merge candidates (ARMC). Based on a threshold value, an SbTMVP candidate may be selected from the plurality of SbTMVP candidates.
  • ARMC adaptive reordering of merge candidates
  • the plurality of SbTMVP candidates may be determined based on a number of seeds. A flag indicating the number of seeds may be included in video data. The plurality of SbTMVP candidates may be reordered using ARMC. A flag indicating usage of the SbTMVP ARMC process may be included in video data.
  • the neighboring block of the current block may have a motion vector that uses a collocated picture as a reference picture.
  • Systems, methods, and instrumentalities described herein may involve a decoder.
  • the systems, methods, and instrumentalities described herein may involve an encoder.
  • the systems, methods, and instrumentalities described herein may involve a signal (e.g., from an encoder and/or received by a decoder).
  • a computer-readable medium may include instructions for causing one or more processors to perform methods described herein.
  • a computer program product may include instructions which, when the program is executed by one or more processors, may cause the one or more processors to carry out the methods described herein.
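In an example provided purely for illustration (not part of the disclosure), the Python sketch below mirrors the candidate handling summarized above: one SbTMVP candidate is built per seed (e.g., a spatial neighbor, an HMVP entry, a merge candidate, or a zero motion vector), the candidates are reordered by template matching cost in an ARMC-like step, and a subset is kept. The Seed and SbtmvpCandidate types, the derive_field and tm_cost callables, and the threshold parameter are illustrative assumptions rather than normative syntax or derivation rules.

```python
# Illustrative sketch only; derive_field and tm_cost stand in for the SbTMVP
# sub-CU motion derivation and the template matching cost described herein.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Seed:
    name: str                        # e.g., "A1 spatial neighbor", "HMVP[0]", "zero MV"
    motion_shift: Tuple[int, int]    # shift used to reach the collocated picture

@dataclass
class SbtmvpCandidate:
    seed: Seed
    subblock_mvs: List[Tuple[int, int]]   # one motion vector per sub-CU (illustrative)

def build_candidates(seeds: List[Seed], max_seeds: int,
                     derive_field: Callable[[Seed], Optional[List[Tuple[int, int]]]]
                     ) -> List[SbtmvpCandidate]:
    """Derive one SbTMVP candidate per usable seed, up to the allowed maximum."""
    out: List[SbtmvpCandidate] = []
    for seed in seeds[:max_seeds]:
        field = derive_field(seed)           # None if no usable collocated motion exists
        if field is not None:
            out.append(SbtmvpCandidate(seed, field))
    return out

def reorder_and_prune(cands: List[SbtmvpCandidate],
                      tm_cost: Callable[[SbtmvpCandidate], float],
                      max_kept: int, threshold: float = 1.5) -> List[SbtmvpCandidate]:
    """ARMC-like step: sort by template matching cost (lowest first), then keep at
    most max_kept candidates whose cost stays within an assumed threshold of the
    best cost."""
    scored = sorted(((tm_cost(c), c) for c in cands), key=lambda pair: pair[0])
    if not scored:
        return []
    best_cost = scored[0][0]
    kept = [c for cost, c in scored if cost <= threshold * best_cost]
    return kept[:max_kept]
```

A decoder or encoder would then predict the current block with one of the remaining candidates, e.g., the first one or one identified by a signaled index.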
  • FIG. 1A is a system diagram illustrating an example communications system in which one or more disclosed embodiments may be implemented.
  • FIG. 1B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
  • WTRU wireless transmit/receive unit
  • FIG. 1C is a system diagram illustrating an example radio access network (RAN) and an example core network (CN) that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
  • RAN radio access network
  • CN core network
  • FIG. 1D is a system diagram illustrating a further example RAN and a further example CN that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
  • FIG. 2 illustrates an example video encoder
  • FIG. 3 illustrates an example video decoder.
  • FIG. 4 illustrates an example of a system in which various aspects and examples may be implemented.
  • FIG. 5 illustrates an example spatial neighboring block (e.g., A1) that may be used by a subblock-based motion vector prediction (SbTMVP).
  • SbTMVP subblock-based motion vector prediction
  • FIG. 6 illustrates an example method and/or process for deriving a sub-coding unit (sub-CU) motion field by applying a motion shift from a spatial neighbor and scaling the motion information from the corresponding collocated sub-CU.
  • sub-CU sub-coding unit
  • FIG. 7 illustrates an example template and reference samples of the template in one or more reference pictures.
  • FIG. 8 illustrates an example template and reference samples of the template for a block with subblock motion, using the motion information of the subblocks of the current block.
  • FIG. 9 illustrates a method and/or process of a motion shift design.
  • FIG. 10 illustrates a method and/or process that may be used to add a top spatial neighbor, for example, if a left spatial neighbor may not respect one or more parameters.
  • FIG. 11 illustrates another method and/or process that may be used to add a top spatial neighbor, for example, if a left spatial neighbor may not respect one or more parameters.
  • FIG. 12 illustrates a method and/or process of a motion shift design (e.g., a simplified motion shift design).
  • FIG. 13 illustrates an example template that may correspond to SbTMVP.
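To make FIGS. 6-8 concrete, the following illustrative sketch derives a sub-CU motion field by applying a motion shift (e.g., taken from the A1 spatial neighbor) into the collocated picture, scales the fetched motion by the ratio of picture-order-count distances, and scores a candidate with a simple above/left template SAD. The 8x8 sub-CU size, the scaling rule, integer-pel motion, the single-line template, and the assumption that all sample positions stay inside the pictures are simplifications made for illustration, not the normative process.

```python
import numpy as np

def scale_mv(mv, poc_cur, poc_ref, poc_col, poc_col_ref):
    """Temporal scaling: stretch the collocated motion vector to the current
    reference distance (guarding against a zero collocated distance)."""
    num, den = poc_cur - poc_ref, poc_col - poc_col_ref
    if den == 0:
        return mv
    scale = num / den
    return (round(mv[0] * scale), round(mv[1] * scale))

def derive_sub_cu_field(block_pos, block_size, motion_shift, col_motion,
                        poc_cur, poc_ref, poc_col, poc_col_ref, sub=8):
    """Apply the motion shift to each sub-CU centre, fetch the collocated motion
    there (col_motion(x, y) -> MV or None), and scale it; default to (0, 0)."""
    x0, y0 = block_pos
    w, h = block_size
    field = {}
    for sy in range(0, h, sub):
        for sx in range(0, w, sub):
            cx = x0 + sx + sub // 2 + motion_shift[0]
            cy = y0 + sy + sub // 2 + motion_shift[1]
            mv = col_motion(cx, cy)
            field[(sx, sy)] = (scale_mv(mv, poc_cur, poc_ref, poc_col, poc_col_ref)
                               if mv is not None else (0, 0))
    return field

def template_sad(cur, ref, x, y, w, h, mv, lines=1):
    """SAD between the current above/left template and the reference template
    displaced by an integer motion vector (no interpolation, for brevity)."""
    dx, dy = mv
    top = np.abs(cur[y - lines:y, x:x + w].astype(int) -
                 ref[y - lines + dy:y + dy, x + dx:x + dx + w].astype(int)).sum()
    left = np.abs(cur[y:y + h, x - lines:x].astype(int) -
                  ref[y + dy:y + h + dy, x - lines + dx:x + dx].astype(int)).sum()
    return top + left
```

For a block with subblock motion (FIG. 8), the same SAD may be accumulated per boundary subblock using that subblock's own motion vector rather than a single displacement.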
  • FIG. 1A is a diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented.
  • the communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users.
  • the communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth.
  • the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word DFT-Spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
  • CDMA code division multiple access
  • TDMA time division multiple access
  • FDMA frequency division multiple access
  • OFDMA orthogonal FDMA
  • SC-FDMA single-carrier FDMA
  • ZT UW DTS-s OFDM zero-tail unique-word DFT-Spread OFDM
  • UW-OFDM unique word OFDM
  • FBMC filter bank multicarrier
  • the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a RAN 104/113, a CN 106/115, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements.
  • WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment.
  • the WTRUs 102a, 102b, 102c, 102d may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like.
  • UE user equipment
  • PDA personal digital assistant
  • HMD head-mounted display
  • the communications systems 100 may also include a base station 114a and/or a base station 114b.
  • Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106/115, the Internet 110, and/or the other networks 112.
  • the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, a NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
  • the base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc.
  • BSC base station controller
  • RNC radio network controller
  • the base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum.
  • a cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors.
  • the cell associated with the base station 114a may be divided into three sectors.
  • the base station 114a may include three transceivers, i.e., one for each sector of the cell.
  • the base station 114a may employ multiple-input multiple output (MIMO) technology and may utilize multiple transceivers for each sector of the cell.
  • MIMO multiple-input multiple output
  • beamforming may be used to transmit and/or receive signals in desired spatial directions.
  • the base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.).
  • the air interface 116 may be established using any suitable radio access technology (RAT).
  • RAT radio access technology
  • the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like.
  • the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 115/116/117 using wideband CDMA (WCDMA).
  • WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+).
  • HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
  • E-UTRA Evolved UMTS Terrestrial Radio Access
  • LTE Long Term Evolution
  • LTE-A LTE-Advanced
  • LTE-A Pro LTE-Advanced Pro
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
  • NR New Radio
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies.
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles.
  • DC dual connectivity
  • the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
  • IEEE 802.11 i.e., Wireless Fidelity (WiFi)
  • IEEE 802.16 i.e., Worldwide Interoperability for Microwave Access (WiMAX)
  • CDMA2000 Code Division Multiple Access 2000
  • IS-2000 Interim Standard 2000
  • IS-856 Interim Standard 856
  • GSM Global System for Mobile communications
  • the base station 114b in FIG. 1 A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, an industrial facility, an air corridor (e.g., for use by drones), a roadway, and the like.
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN).
  • WLAN wireless local area network
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN).
  • the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR etc.) to establish a picocell or femtocell.
  • the base station 114b may have a direct connection to the Internet 110.
  • the base station 114b may not be required to access the Internet 110 via the CN 106/115.
  • the RAN 104/113 may be in communication with the CN 106/115, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d.
  • the data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like.
  • QoS quality of service
  • the CN 106/115 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication.
  • the RAN 104/113 and/or the CN 106/115 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104/113 or a different RAT.
  • the CN 106/115 may also be in communication with another RAN (not shown) employing a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or WiFi radio technology.
  • the CN 106/115 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112.
  • the PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS).
  • POTS plain old telephone service
  • the Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite.
  • the networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers.
  • the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
  • Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links).
  • the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
  • the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others.
  • the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
  • the processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like.
  • the processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment.
  • the processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
  • the transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116.
  • the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals.
  • the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example.
  • the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
  • the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
  • the transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11, for example.
  • the processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit).
  • the processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128.
  • the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132.
  • the non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
  • the removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
  • SIM subscriber identity module
  • SD secure digital
  • the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
  • the processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102.
  • the power source 134 may be any suitable device for powering the WTRU 102.
  • the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
  • the processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102.
  • the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
  • the processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
  • the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like.
  • FM frequency modulated
  • the peripherals 138 may include one or more sensors, the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor; a geolocation sensor; an altimeter, a light sensor, a touch sensor, a magnetometer, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
  • the WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and downlink (e.g., for reception)) may be concurrent and/or simultaneous.
  • the full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118).
  • the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the UL (e.g., for transmission) or the downlink (e.g., for reception)).
  • FIG. 1C is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment.
  • the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the RAN 104 may also be in communication with the CN 106.
  • the RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment.
  • the eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the eNode-Bs 160a, 160b, 160c may implement MIMO technology.
  • the eNode-B 160a for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
  • Each of the eNode-Bs 160a, 160b, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, and the like. As shown in FIG. 1C, the eNode-Bs 160a, 160b, 160c may communicate with one another over an X2 interface.
  • the CN 106 shown in FIG. 1C may include a mobility management entity (MME) 162, a serving gateway (SGW) 164, and a packet data network (PDN) gateway (or PGW) 166. While each of the foregoing elements is depicted as part of the CN 106, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
  • MME mobility management entity
  • SGW serving gateway
  • PGW packet data network gateway
  • the MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node.
  • the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like.
  • the MME 162 may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM and/or WCDMA.
  • the SGW 164 may be connected to each of the eNode Bs 160a, 160b, 160c in the RAN 104 via the S1 interface.
  • the SGW 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c.
  • the SGW 164 may perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when DL data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.
  • the SGW 164 may be connected to the PGW 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the CN 106 may facilitate communications with other networks.
  • the CN 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices.
  • the CN 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 106 and the PSTN 108.
  • IMS IP multimedia subsystem
  • the CN 106 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
  • Although the WTRU is described in FIGS. 1A-1D as a wireless terminal, it is contemplated that, in certain representative embodiments, such a terminal may use (e.g., temporarily or permanently) wired communication interfaces with the communication network.
  • the other network 112 may be a WLAN.
  • a WLAN in Infrastructure Basic Service Set (BSS) mode may have an Access Point (AP) for the BSS and one or more stations (STAs) associated with the AP.
  • the AP may have an access or an interface to a Distribution System (DS) or another type of wired/wireless network that carries traffic in to and/or out of the BSS.
  • Traffic to STAs that originates from outside the BSS may arrive through the AP and may be delivered to the STAs.
  • Traffic originating from STAs to destinations outside the BSS may be sent to the AP to be delivered to respective destinations.
  • Traffic between STAs within the BSS may be sent through the AP, for example, where the source STA may send traffic to the AP and the AP may deliver the traffic to the destination STA.
  • the traffic between STAs within a BSS may be considered and/or referred to as peer-to-peer traffic.
  • the peer-to-peer traffic may be sent between (e.g., directly between) the source and destination STAs with a direct link setup (DLS).
  • the DLS may use an 802.11e DLS or an 802.11z tunneled DLS (TDLS).
  • a WLAN using an Independent BSS (IBSS) mode may not have an AP, and the STAs (e.g., all of the STAs) within or using the IBSS may communicate directly with each other.
  • the IBSS mode of communication may sometimes be referred to herein as an “ad-hoc” mode of communication.
  • the AP may transmit a beacon on a fixed channel, such as a primary channel.
  • the primary channel may be a fixed width (e.g., 20 MHz wide bandwidth) or a dynamically set width via signaling.
  • the primary channel may be the operating channel of the BSS and may be used by the STAs to establish a connection with the AP.
  • Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) may be implemented, for example, in 802.11 systems.
  • the STAs (e.g., every STA), including the AP, may sense the primary channel. If the primary channel is sensed/detected and/or determined to be busy by a particular STA, the particular STA may back off.
  • One STA (e.g., only one station) may transmit at any given time in a given BSS.
  • High Throughput (HT) STAs may use a 40 MHz wide channel for communication, for example, via a combination of the primary 20 MHz channel with an adjacent or nonadjacent 20 MHz channel to form a 40 MHz wide channel.
  • Very High Throughput (VHT) STAs may support 20 MHz, 40 MHz, 80 MHz, and/or 160 MHz wide channels.
  • the 40 MHz, and/or 80 MHz, channels may be formed by combining contiguous 20 MHz channels.
  • a 160 MHz channel may be formed by combining 8 contiguous 20 MHz channels, or by combining two noncontiguous 80 MHz channels, which may be referred to as an 80+80 configuration.
  • the data, after channel encoding may be passed through a segment parser that may divide the data into two streams.
  • Inverse Fast Fourier Transform (IFFT) processing, and time domain processing may be done on each stream separately.
  • IFFT Inverse Fast Fourier Transform
  • the streams may be mapped on to the two 80 MHz channels, and the data may be transmitted by a transmitting STA.
  • the above described operation for the 80+80 configuration may be reversed, and the combined data may be sent to the Medium Access Control (MAC).
  • MAC Medium Access Control
  • Sub 1 GHz modes of operation are supported by 802.11af and 802.11ah.
  • The channel operating bandwidths, and carriers, are reduced in 802.11af and 802.11ah relative to those used in 802.11n and 802.11ac.
  • 802.11af supports 5 MHz, 10 MHz, and 20 MHz bandwidths in the TV White Space (TVWS) spectrum.
  • 802.11ah supports 1 MHz, 2 MHz, 4 MHz, 8 MHz, and 16 MHz bandwidths using non-TVWS spectrum.
  • 802.11ah may support Meter Type Control/Machine-Type Communications, such as MTC devices in a macro coverage area.
  • MTC devices may have certain capabilities, for example, limited capabilities including support for (e.g., only support for) certain and/or limited bandwidths.
  • the MTC devices may include a battery with a battery life above a threshold (e.g., to maintain a very long battery life).
  • WLAN systems which may support multiple channels, and channel bandwidths, such as 802.11n, 802.11ac, 802.11af, and 802.11ah, include a channel which may be designated as the primary channel.
  • the primary channel may have a bandwidth equal to the largest common operating bandwidth supported by all STAs in the BSS.
  • the bandwidth of the primary channel may be set and/or limited by a STA, from among all STAs operating in a BSS, which supports the smallest bandwidth operating mode.
  • the primary channel may be 1 MHz wide for STAs (e.g., MTC type devices) that support (e.g., only support) a 1 MHz mode, even if the AP, and other STAs in the BSS support 2 MHz, 4 MHz, 8 MHz, 16 MHz, and/or other channel bandwidth operating modes.
  • Carrier sensing and/or Network Allocation Vector (NAV) settings may depend on the status of the primary channel. If the primary channel is busy, for example, due to a STA (which supports only a 1 MHz operating mode), transmitting to the AP, the entire available frequency bands may be considered busy even though a majority of the frequency bands remains idle and may be available.
  • NAV Network Allocation Vector
  • In the United States, the available frequency bands which may be used by 802.11ah are from 902 MHz to 928 MHz. In Korea, the available frequency bands are from 917.5 MHz to 923.5 MHz. In Japan, the available frequency bands are from 916.5 MHz to 927.5 MHz. The total bandwidth available for 802.11ah is 6 MHz to 26 MHz depending on the country code.
  • FIG. 1D is a system diagram illustrating the RAN 113 and the CN 115 according to an embodiment.
  • the RAN 113 may employ an NR radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the RAN 113 may also be in communication with the CN 115.
  • the RAN 113 may include gNBs 180a, 180b, 180c, though it will be appreciated that the RAN 113 may include any number of gNBs while remaining consistent with an embodiment.
  • the gNBs 180a, 180b, 180c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the gNBs 180a, 180b, 180c may implement MIMO technology.
  • gNBs 180a, 180b may utilize beamforming to transmit signals to and/or receive signals from the gNBs 180a, 180b, 180c.
  • the gNB 180a may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
  • the gNBs 180a, 180b, 180c may implement carrier aggregation technology.
  • the gNB 180a may transmit multiple component carriers to the WTRU 102a (not shown). A subset of these component carriers may be on unlicensed spectrum while the remaining component carriers may be on licensed spectrum.
  • the gNBs 180a, 180b, 180c may implement Coordinated Multi-Point (CoMP) technology.
  • WTRU 102a may receive coordinated transmissions from gNB 180a and gNB 180b (and/or gNB 180c).
  • CoMP Coordinated Multi-Point
  • the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using transmissions associated with a scalable numerology. For example, the OFDM symbol spacing and/or OFDM subcarrier spacing may vary for different transmissions, different cells, and/or different portions of the wireless transmission spectrum.
  • the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using subframe or transmission time intervals (TTIs) of various or scalable lengths (e.g., containing varying number of OFDM symbols and/or lasting varying lengths of absolute time).
  • TTIs subframe or transmission time intervals
  • the gNBs 180a, 180b, 180c may be configured to communicate with the WTRUs 102a, 102b, 102c in a standalone configuration and/or a non-standalone configuration.
  • WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c without also accessing other RANs (e.g., such as eNode-Bs 160a, 160b, 160c).
  • WTRUs 102a, 102b, 102c may utilize one or more of gNBs 180a, 180b, 180c as a mobility anchor point.
  • WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using signals in an unlicensed band.
  • WTRUs 102a, 102b, 102c may communicate with/connect to gNBs 180a, 180b, 180c while also communicating with/connecting to another RAN such as eNode-Bs 160a, 160b, 160c.
  • WTRUs 102a, 102b, 102c may implement DC principles to communicate with one or more gNBs 180a, 180b, 180c and one or more eNode-Bs 160a, 160b, 160c substantially simultaneously.
  • eNode-Bs 160a, 160b, 160c may serve as a mobility anchor for WTRUs 102a, 102b, 102c and gNBs 180a, 180b, 180c may provide additional coverage and/or throughput for servicing WTRUs 102a, 102b, 102c.
  • Each of the gNBs 180a, 180b, 180c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, support of network slicing, dual connectivity, interworking between NR and E-UTRA, routing of user plane data towards User Plane Function (UPF) 184a, 184b, routing of control plane information towards Access and Mobility Management Function (AMF) 182a, 182b and the like. As shown in FIG. 1D, the gNBs 180a, 180b, 180c may communicate with one another over an Xn interface.
  • UPF User Plane Function
  • AMF Access and Mobility Management Function
  • the CN 115 shown in FIG. 1D may include at least one AMF 182a, 182b, at least one UPF 184a, 184b, at least one Session Management Function (SMF) 183a, 183b, and possibly a Data Network (DN) 185a, 185b. While each of the foregoing elements is depicted as part of the CN 115, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
  • SMF Session Management Function
  • the AMF 182a, 182b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N2 interface and may serve as a control node.
  • the AMF 182a, 182b may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, support for network slicing (e.g., handling of different PDU sessions with different requirements), selecting a particular SMF 183a, 183b, management of the registration area, termination of NAS signaling, mobility management, and the like.
  • Network slicing may be used by the AMF 182a, 182b in order to customize CN support for WTRUs 102a, 102b, 102c based on the types of services being utilized by the WTRUs 102a, 102b, 102c.
  • different network slices may be established for different use cases such as services relying on ultra-reliable low latency (URLLC) access, services relying on enhanced massive mobile broadband (eMBB) access, services for machine type communication (MTC) access, and/or the like.
  • URLLC ultra-reliable low latency
  • eMBB enhanced massive mobile broadband
  • MTC machine type communication
  • the AMF 182a, 182b may provide a control plane function for switching between the RAN 113 and other RANs (not shown) that employ other radio technologies, such as LTE, LTE-A, LTE-A Pro, and/or non-3GPP access technologies such as WiFi.
  • the SMF 183a, 183b may be connected to an AMF 182a, 182b in the CN 115 via an N11 interface.
  • the SMF 183a, 183b may also be connected to a UPF 184a, 184b in the CN 115 via an N4 interface.
  • the SMF 183a, 183b may select and control the UPF 184a, 184b and configure the routing of traffic through the UPF 184a, 184b.
  • the SMF 183a, 183b may perform other functions, such as managing and allocating UE IP address, managing PDU sessions, controlling policy enforcement and QoS, providing downlink data notifications, and the like.
  • a PDU session type may be IP-based, non-IP based, Ethernet-based, and the like.
  • the UPF 184a, 184b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N3 interface, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the UPF 184a, 184b may perform other functions, such as routing and forwarding packets, enforcing user plane policies, supporting multi-homed PDU sessions, handling user plane QoS, buffering downlink packets, providing mobility anchoring, and the like.
  • the CN 115 may facilitate communications with other networks.
  • the CN 115 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 115 and the PSTN 108.
  • IMS IP multimedia subsystem
  • the CN 115 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
  • the WTRUs 102a, 102b, 102c may be connected to a local Data Network (DN) 185a, 185b through the UPF 184a, 184b via the N3 interface to the UPF 184a, 184b and an N6 interface between the UPF 184a, 184b and the DN 185a, 185b.
  • DN local Data Network
  • one or more, or all, of the functions described herein with regard to one or more of: WTRU 102a-d, Base Station 114a-b, eNode-B 160a-c, MME 162, SGW 164, PGW 166, gNB 180a-c, AMF 182a-b, UPF 184a-b, SMF 183a-b, DN 185a-b, and/or any other device(s) described herein, may be performed by one or more emulation devices (not shown).
  • the emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein.
  • the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
  • the emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment.
  • the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network.
  • the one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network.
  • the emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
  • the one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network.
  • the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components.
  • the one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
  • FIGS. 5-13 described herein may provide some examples, but other examples are contemplated.
  • the discussion of FIGS. 5-13 does not limit the breadth of the implementations.
  • At least one of the aspects generally relates to video encoding and decoding, and at least one other aspect generally relates to transmitting a bitstream generated or encoded.
  • These and other aspects may be implemented as a method, an apparatus, a computer readable storage medium having stored thereon instructions for encoding or decoding video data according to any of the methods described, and/or a computer readable storage medium having stored thereon a bitstream generated according to any of the methods described.
  • the terms “reconstructed” and “decoded” may be used interchangeably; the terms “pixel” and “sample” may be used interchangeably; and the terms “image,” “picture,” and “frame” may be used interchangeably.
  • each of the methods comprises one or more steps or actions for achieving the described method. Unless a specific order of steps or actions is required for proper operation of the method, the order and/or use of specific steps and/or actions may be modified or combined. Additionally, terms such as “first”, “second”, etc. may be used in various examples to modify an element, component, step, operation, etc., such as, for example, a “first decoding” and a “second decoding”. Use of such terms does not imply an ordering to the modified operations unless specifically required. So, in this example, the first decoding need not be performed before the second decoding, and may occur, for example, before, during, or in an overlapping time period with the second decoding.
  • modules for example, decoding modules, of a video encoder 200 and decoder 300 as shown in FIG. 2 and FIG. 3.
  • FIG. 2 is a diagram showing an example video encoder. Variations of example encoder 200 are contemplated, but the encoder 200 is described below for purposes of clarity without describing all expected variations.
  • the video sequence may go through pre-encoding processing (201), for example, applying a color transform to the input color picture (e.g., conversion from RGB 4:4:4 to YCbCr 4:2:0), or performing a remapping of the input picture components in order to get a signal distribution more resilient to compression (for instance using a histogram equalization of one of the color components).
  • Metadata may be associated with the pre-processing, and attached to the bitstream.
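As an illustration of the pre-encoding processing (201), the sketch below converts an RGB 4:4:4 picture to YCbCr 4:2:0. The BT.709 coefficients and the 2x2 averaging used for chroma subsampling are assumptions made for this example; the actual transform and any remapping are application choices, with associated metadata signaled in the bitstream.

```python
import numpy as np

def rgb_to_ycbcr420(rgb: np.ndarray):
    """rgb: float array of shape (H, W, 3) with values in [0, 1]; H and W even.
    Returns full-resolution luma and 2x-subsampled chroma planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b           # BT.709 luma (assumed choice)
    cb = (b - y) / 1.8556 + 0.5
    cr = (r - y) / 1.5748 + 0.5
    def down(c):
        # 4:2:0 subsampling by averaging each 2x2 chroma neighbourhood
        return 0.25 * (c[0::2, 0::2] + c[1::2, 0::2] + c[0::2, 1::2] + c[1::2, 1::2])
    return y, down(cb), down(cr)
```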
  • a picture is encoded by the encoder elements as described below.
  • the picture to be encoded is partitioned (202) and processed in units of, for example, coding units (CUs).
  • Each unit is encoded using, for example, either an intra mode (intra prediction 260) or an inter mode (motion estimation and compensation 270).
  • the encoder decides (205) which one of the intra mode or inter mode to use for encoding the unit, and indicates the intra/inter decision by, for example, a prediction mode flag.
  • Prediction residuals are calculated, for example, by subtracting (210) the predicted block from the original image block.
  • the prediction residuals are then transformed (225) and quantized (230).
  • the quantized transform coefficients, as well as motion vectors and other syntax elements, are entropy coded (245) to output a bitstream.
  • the encoder can skip the transform and apply quantization directly to the non-transformed residual signal.
  • the encoder can bypass both transform and quantization, i.e., the residual is coded directly without the application of the transform or quantization processes.
  • the encoder decodes an encoded block to provide a reference for further predictions.
  • the quantized transform coefficients are de-quantized (240) and inverse transformed (250) to decode prediction residuals. Combining (255) the decoded prediction residuals and the predicted block, an image block is reconstructed.
  • In-loop filters (265) are applied to the reconstructed picture to perform, for example, deblocking, SAO (Sample Adaptive Offset), and ALF (Adaptive Loop Filter) filtering to reduce encoding artifacts.
  • the filtered image is stored at a reference picture buffer (280).
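The block-level flow of FIG. 2 may be illustrated with the simplified sketch below: form the prediction residual (210), transform it (225), quantize it (230), and then reproduce the path the encoder keeps for future predictions, i.e., de-quantization (240), inverse transform (250), and reconstruction (255). An orthonormal DCT-II, a single uniform quantization step, and 8-bit sample clipping are assumptions made for this example; entropy coding (245) and in-loop filtering (265) are omitted. The same de-quantize, inverse-transform, and combine steps correspond to (340), (350), and (355) in the decoder of FIG. 3.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis (a common, but here assumed, transform choice)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0, :] = np.sqrt(1.0 / n)
    return c

def encode_block(orig: np.ndarray, pred: np.ndarray, qstep: float):
    """Return the quantized coefficient levels (to be entropy coded) and the
    reconstructed block used as a reference for further predictions."""
    n = orig.shape[0]
    c = dct_matrix(n)
    residual = orig.astype(float) - pred.astype(float)   # (210) prediction residual
    coeff = c @ residual @ c.T                           # (225) 2-D transform
    level = np.round(coeff / qstep)                      # (230) uniform quantization
    recon_residual = c.T @ (level * qstep) @ c           # (240) + (250)
    recon = np.clip(pred + recon_residual, 0, 255)       # (255) reconstruction, 8-bit range
    return level, recon
```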
  • FIG. 3 is a diagram showing an example of a video decoder.
  • a bitstream is decoded by the decoder elements as described below.
  • Video decoder 300 generally performs a decoding pass reciprocal to the encoding pass as described in FIG. 2.
  • the encoder 200 also generally performs video decoding as part of encoding video data.
  • the input of the decoder includes a video bitstream, which may be generated by video encoder 200.
  • the bitstream is first entropy decoded (330) to obtain transform coefficients, motion vectors, and other coded information.
  • the picture partition information indicates how the picture is partitioned.
  • the decoder may therefore divide (335) the picture according to the decoded picture partitioning information.
  • the transform coefficients are de-quantized (340) and inverse transformed (350) to decode the prediction residuals. Combining (355) the decoded prediction residuals and the predicted block, an image block is reconstructed.
  • the predicted block may be obtained (370) from intra prediction (360) or motion-compensated prediction (i.e., inter prediction) (375).
  • In-loop filters (365) are applied to the reconstructed image.
  • the filtered image is stored at a reference picture buffer (380). Note that, for a given picture, the contents of the reference picture buffer 380 on the decoder 300 side may be similar (e.g., identical) to the contents of the reference picture buffer 280 on the encoder 200 side for the same picture.
  • the decoded picture can further go through post-decoding processing (385), for example, an inverse color transform (e.g., conversion from YCbCr 4:2:0 to RGB 4:4:4) or an inverse remapping performing the inverse of the remapping process performed in the pre-encoding processing (201).
  • the post-decoding processing may use metadata derived in the pre-encoding processing and signaled in the bitstream.
  • the decoded images (e.g., after application of the in-loop filters (365) and/or after post-decoding processing (385), if post-decoding processing is used) may be provided for display or further processing.
  • System 400 may be embodied as a device including the various components described below and is configured to perform one or more of the aspects described in this document. Examples of such devices include, but are not limited to, various electronic devices such as personal computers, laptop computers, smartphones, tablet computers, digital multimedia set top boxes, digital television receivers, personal video recording systems, connected home appliances, and servers. Elements of system 400, singly or in combination, may be embodied in a single integrated circuit (IC), multiple ICs, and/or discrete components. For example, in at least one example, the processing and encoder/decoder elements of system 400 are distributed across multiple ICs and/or discrete components.
  • IC integrated circuit
  • system 400 is communicatively coupled to one or more other systems, or other electronic devices, via, for example, a communications bus or through dedicated input and/or output ports.
  • system 400 is configured to implement one or more of the aspects described in this document.
  • the system 400 includes at least one processor 410 configured to execute instructions loaded therein for implementing, for example, the various aspects described in this document.
  • Processor 410 can include embedded memory, input output interface, and various other circuitries as known in the art.
  • the system 400 includes at least one memory 420 (e.g., a volatile memory device, and/or a non-volatile memory device).
  • System 400 includes a storage device 440, which can include non-volatile memory and/or volatile memory, including, but not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), flash, magnetic disk drive, and/or optical disk drive.
  • the storage device 440 can include an internal storage device, an attached storage device (including detachable and non-detachable storage devices), and/or a network accessible storage device, as non-limiting examples.
  • System 400 includes an encoder/decoder module 430 configured, for example, to process data to provide an encoded video or decoded video, and the encoder/decoder module 430 can include its own processor and memory.
  • the encoder/decoder module 430 represents module(s) that may be included in a device to perform the encoding and/or decoding functions. As is known, a device can include one or both of the encoding and decoding modules. Additionally, encoder/decoder module 430 may be implemented as a separate element of system 400 or may be incorporated within processor 410 as a combination of hardware and software as known to those skilled in the art.
  • Program code to be loaded onto processor 410 or encoder/decoder 430 to perform the various aspects described in this document may be stored in storage device 440 and subsequently loaded onto memory 420 for execution by processor 410.
  • processor 410, memory 420, storage device 440, and encoder/decoder module 430 can store one or more of various items during the performance of the processes described in this document. Such stored items can include, but are not limited to, the input video, the decoded video or portions of the decoded video, the bitstream, matrices, variables, and intermediate or final results from the processing of equations, formulas, operations, and operational logic.
  • memory inside of the processor 410 and/or the encoder/decoder module 430 is used to store instructions and to provide working memory for processing that is needed during encoding or decoding.
  • a memory external to the processing device (for example, the processing device may be either the processor 410 or the encoder/decoder module 430) is used for one or more of these functions.
  • the external memory may be the memory 420 and/or the storage device 440, for example, a dynamic volatile memory and/or a non-volatile flash memory.
  • an external non-volatile flash memory is used to store the operating system of, for example, a television.
  • a fast external dynamic volatile memory such as a RAM is used as working memory for video encoding and decoding operations.
  • the input to the elements of system 400 may be provided through various input devices as indicated in block 445.
  • Such input devices include, but are not limited to, (i) a radio frequency (RF) portion that receives an RF signal transmitted, for example, over the air by a broadcaster, (ii) a Component (COMP) input terminal (or a set of COMP input terminals), (iii) a Universal Serial Bus (USB) input terminal, and/or (iv) a High Definition Multimedia Interface (HDMI) input terminal.
  • RF radio frequency
  • COMP Component
  • USB Universal Serial Bus
  • HDMI High Definition Multimedia Interface
  • the input devices of block 445 have associated respective input processing elements as known in the art.
  • the RF portion may be associated with elements suitable for (i) selecting a desired frequency (also referred to as selecting a signal, or band-limiting a signal to a band of frequencies), (ii) downconverting the selected signal, (iii) band-limiting again to a narrower band of frequencies to select (for example) a signal frequency band which may be referred to as a channel in certain examples, (iv) demodulating the downconverted and band-limited signal, (v) performing error correction, and/or (vi) demultiplexing to select the desired stream of data packets.
  • a desired frequency also referred to as selecting a signal, or band-limiting a signal to a band of frequencies
  • downconverting the selected signal for example
  • band-limiting again to a narrower band of frequencies to select (for example) a signal frequency band which may be referred to as a channel in certain examples
  • demodulating the downconverted and band-limited signal (v) performing error correction, and/or (vi) demultiplexing to select the desired stream of data
  • the RF portion of various examples includes one or more elements to perform these functions, for example, frequency selectors, signal selectors, band-limiters, channel selectors, filters, downconverters, demodulators, error correctors, and demultiplexers.
  • the RF portion can include a tuner that performs various of these functions, including, for example, downconverting the received signal to a lower frequency (for example, an intermediate frequency or a near-baseband frequency) or to baseband.
  • the RF portion and its associated input processing element receives an RF signal transmitted over a wired (for example, cable) medium, and performs frequency selection by filtering, downconverting, and filtering again to a desired frequency band.
  • Adding elements can include inserting elements in between existing elements, such as, for example, inserting amplifiers and an analog-to-digital converter.
  • the RF portion includes an antenna.
  • the USB and/or HDMI terminals can include respective interface processors for connecting system 400 to other electronic devices across USB and/or HDMI connections. It is to be understood that various aspects of input processing, for example, Reed-Solomon error correction, may be implemented, for example, within a separate input processing IC or within processor 410 as necessary. Similarly, aspects of USB or HDMI interface processing may be implemented within separate interface ICs or within processor 410 as necessary.
  • the demodulated, error corrected, and demultiplexed stream is provided to various processing elements, including, for example, processor 410, and encoder/decoder 430 operating in combination with the memory and storage elements to process the datastream as necessary for presentation on an output device.
  • connection arrangement 425, for example, an internal bus as known in the art, including the Inter-IC (I2C) bus, wiring, and printed circuit boards.
  • I2C Inter-IC
  • the system 400 includes communication interface 450 that enables communication with other devices via communication channel 460.
  • the communication interface 450 can include, but is not limited to, a transceiver configured to transmit and to receive data over communication channel 460.
  • the communication interface 450 can include, but is not limited to, a modem or network card and the communication channel 460 may be implemented, for example, within a wired and/or a wireless medium.
  • Data is streamed, or otherwise provided, to the system 400, in various examples, using a wireless network such as a Wi-Fi network, for example IEEE 802.11 (IEEE refers to the Institute of Electrical and Electronics Engineers).
  • the Wi-Fi signal of these examples is received over the communications channel 460 and the communications interface 450 which are adapted for Wi-Fi communications.
  • the communications channel 460 of these examples is typically connected to an access point or router that provides access to external networks including the Internet for allowing streaming applications and other over-the-top communications.
  • Other examples provide streamed data to the system 400 using a set-top box that delivers the data over the HDMI connection of the input block 445.
  • Still other examples provide streamed data to the system 400 using the RF connection of the input block 445.
  • various examples provide data in a non-streaming manner.
  • various examples use wireless networks other than Wi-Fi, for example a cellular network or a Bluetooth® network.
  • the system 400 can provide an output signal to various output devices, including a display 475, speakers 485, and other peripheral devices 495.
  • the display 475 of various examples includes one or more of, for example, a touchscreen display, an organic light-emitting diode (OLED) display, a curved display, and/or a foldable display.
  • the display 475 may be for a television, a tablet, a laptop, a cell phone (mobile phone), or other device.
  • the display 475 can also be integrated with other components (for example, as in a smart phone), or separate (for example, an external monitor for a laptop).
  • the other peripheral devices 495 include, in various examples, one or more of a stand-alone digital video disc (or digital versatile disc) (DVD, for both terms), a disk player, a stereo system, and/or a lighting system.
  • Various examples use one or more peripheral devices 495 that provide a function based on the output of the system 400. For example, a disk player performs the function of playing the output of the system 400.
  • control signals are communicated between the system 400 and the display 475, speakers 485, or other peripheral devices 495 using signaling such as AV.Link, Consumer Electronics Control (CEC), or other communications protocols that enable device-to-device control with or without user intervention.
  • the output devices may be communicatively coupled to system 400 via dedicated connections through respective interfaces 470, 480, and 490. Alternatively, the output devices may be connected to system 400 using the communications channel 460 via the communications interface 450.
  • the display 475 and speakers 485 may be integrated in a single unit with the other components of system 400 in an electronic device such as, for example, a television.
  • the display interface 470 includes a display driver, such as, for example, a timing controller (TCon) chip.
  • the display 475 and speakers 485 can alternatively be separate from one or more of the other components, for example, if the RF portion of input 445 is part of a separate set-top box.
  • the output signal may be provided via dedicated output connections, including, for example, HDMI ports, USB ports, or COMP outputs.
  • the examples may be carried out by computer software implemented by the processor 410 or by hardware, or by a combination of hardware and software. As a non-limiting example, the examples may be implemented by one or more integrated circuits.
  • the memory 420 may be of any type appropriate to the technical environment and may be implemented using any appropriate data storage technology, such as optical memory devices, magnetic memory devices, semiconductor-based memory devices, fixed memory, and removable memory, as non-limiting examples.
  • the processor 410 may be of any type appropriate to the technical environment, and can encompass one or more of microprocessors, general purpose computers, special purpose computers, and processors based on a multi-core architecture, as non-limiting examples.
  • Various implementations involve decoding.
  • Decoding can encompass all or part of the processes performed, for example, on a received encoded sequence in order to produce a final output suitable for display.
  • processes include one or more of the processes typically performed by a decoder, for example, entropy decoding, inverse quantization, inverse transformation, and differential decoding.
  • such processes also, or alternatively, include processes performed by a decoder of various implementations described in this application, for example, for determining a plurality of subblock temporal motion vector predictor (SbTMVP) candidates, wherein each of the plurality of SbTMVP candidates is obtained using a motion shift associated with an input candidate, and wherein the input candidate comprises a neighboring block of a current block; reordering the plurality of SbTMVP candidates based on a template matching cost of the plurality of SbTMVP candidates; and decoding the current block using one of the reordered plurality of SbTMVP candidates.
  • SBTMVP subblock temporal motion vector predictor
  • decoding refers only to entropy decoding
  • decoding refers only to differential decoding
  • decoding refers to a combination of entropy decoding and differential decoding.
  • encoding can encompass all or part of the processes performed, for example, on an input video sequence in order to produce an encoded bitstream.
  • processes include one or more of the processes typically performed by an encoder, for example, partitioning, differential encoding, transformation, quantization, and entropy encoding.
  • such processes also, or alternatively, include processes performed by an encoder of various implementations described in this application, for example, for determining a plurality of subblock temporal motion vector predictor (SbTMVP) candidates, wherein each of the plurality of SbTMVP candidates is obtained using a motion shift associated with an input candidate, and wherein the input candidate comprises a neighboring block of a current block; reordering the plurality of SbTMVP candidates based on a template matching cost of the plurality of SbTMVP candidates; and decoding the current block using one of the reordered plurality of SbTMVP candidates.
  • SBTMVP subblock temporal motion vector predictor
  • encoding refers only to entropy encoding
  • encoding refers only to differential encoding
  • encoding refers to a combination of differential encoding and entropy encoding.
  • syntax elements as used herein for example, coding syntax on motion information, motion vectors, motion shifts, seeds, precision factors, shifts, input candidates, SbTMVP candidates, ARMC, number of fraction bits etc., are descriptive terms. As such, they do not preclude the use of other syntax element names.
  • the implementations and aspects described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed can also be implemented in other forms (for example, an apparatus or program).
  • An apparatus may be implemented in, for example, appropriate hardware, software, and firmware.
  • the methods may be implemented in, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs”), and other devices that facilitate communication of information between end-users.
  • PDAs portable/personal digital assistants
  • references to “one example” or “an example” or “one implementation” or “an implementation”, as well as other variations thereof, mean that a particular feature, structure, characteristic, and so forth described in connection with the example is included in at least one example.
  • the appearances of the phrase “in one example” or “in an example” or “in one implementation” or “in an implementation”, as well as any other variations, appearing in various places throughout this application are not necessarily all referring to the same example.
  • this application may refer to “determining” various pieces of information. Determining the information can include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory.
  • Obtaining may include receiving, retrieving, constructing, generating, and/or determining.
  • Further, this application may refer to “accessing” various pieces of information. Accessing the information can include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, moving the information, copying the information, calculating the information, determining the information, predicting the information, or estimating the information.
  • this application may refer to “receiving” various pieces of information.
  • Receiving is, as with “accessing”, intended to be a broad term.
  • Receiving the information can include one or more of, for example, accessing the information, or retrieving the information (for example, from memory).
  • “receiving” is typically involved, in one way or another, during operations such as, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
  • use of any of the following “/”, “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B”, and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B).
  • such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C).
  • This may be extended, as is clear to one of ordinary skill in this and related arts, for as many items as are listed.
  • the word “signal” refers to, among other things, indicating something to a corresponding decoder.
  • Encoder signals may include, for example, indications that SbTMVP may be enabled, etc.
  • the same parameter is used at both the encoder side and the decoder side.
  • an encoder can transmit (explicit signaling) a particular parameter to the decoder so that the decoder can use the same particular parameter.
  • signaling may be used without transmitting (implicit signaling) to simply allow the decoder to know and select the particular parameter.
  • signaling may be accomplished in a variety of ways. For example, one or more syntax elements, flags, and so forth are used to signal information to a corresponding decoder in various examples. While the preceding relates to the verb form of the word “signal”, the word “signal” can also be used herein as a noun.
  • implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted.
  • the information can include, for example, instructions for performing a method, or data produced by one of the described implementations.
  • a signal may be formatted to carry the bitstream of a described example.
  • Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
  • the formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
  • the information that the signal carries may be, for example, analog or digital information.
  • the signal may be transmitted over a variety of different wired or wireless links, as is known.
  • the signal may be stored on, or accessed or received from, a processor-readable medium.
  • features described herein may be implemented in a bitstream or signal that includes information generated as described herein. The information may allow a decoder to decode a bitstream, the encoder, bitstream, and/or decoder according to any of the embodiments described.
  • features described herein may be implemented by creating and/or transmitting and/or receiving and/or decoding a bitstream or signal.
  • features described herein may be implemented in a method, process, apparatus, medium storing instructions, medium storing data, or signal.
  • features described herein may be implemented by a TV, set-top box, cell phone, tablet, or other electronic device that performs decoding.
  • the TV, set-top box, cell phone, tablet, or other electronic device may display (e.g., using a monitor, screen, or other type of display) a resulting image (e.g., an image from residual reconstruction of the video content). In ECM, there may be a few ways to construct an SbTMVP candidate.
  • SbTMVP may be constructed by using the displacement observed with the left spatial neighbor if it respects one or more parameters.
  • SbTMVP may be constructed by using the collocated position.
  • video codec such as ECM
  • tools that may be based on a template around a block (e.g., the current block) may be used to reduce the signaling cost of the motion information candidate.
  • Adaptive Reordering of Merge Candidates (ARMC) may be used to evaluate one or more SbTMVP candidates.
  • Embodiments disclosed herein may provide one or more ways to construct a SbTMVP candidate, and may provide one or more ways to construct several SbTMVP candidates before evaluating them using an ARMC process.
  • SbTMVP may be referred to as ATMVP mode
  • a subblock-based motion vector prediction may be created.
  • SbTMVP may be similar to temporal motion vector prediction (TMVP) in that SbTMVP may use a motion field in the collocated picture to improve motion vector prediction and merge mode for CUs in the current picture.
  • TMVP temporal motion vector prediction
  • a collocated picture used by TMVP may be used for SbTMVP.
  • TMVP may predict motion at a CU level
  • SbTMVP may predict motion at a sub-CU level
  • TMVP may fetch the temporal motion vector from the collocated block in the collocated picture (e.g., the collocated block may be the bottom-right or center block relative to the current CU), and SbTMVP may apply a motion shift before fetching the temporal motion information from the collocated picture.
  • the motion shift may be obtained from the motion vector from one of the spatial neighboring blocks of the current CU.
  • FIG. 5 illustrates an example of a spatial neighboring block (e.g., A1) that may be used by a subblock-based motion vector prediction (SbTMVP).
  • SbTMVP may predict the motion vectors of the sub-CUs within a CU (e.g., the current CU).
  • the left spatial neighbor A1 in FIG. 5 may be examined. If a spatial neighboring block that has a motion vector using the collocated picture as its reference picture is identified, this motion vector may be selected as the motion shift to be applied. If no such motion is identified from the spatial neighbor, the motion shift may be set to (0, 0).
  • FIG. 6 illustrates an example for deriving a sub-coding unit (sub-CU) motion field by applying a motion shift from a spatial neighbor and scaling the motion information from the corresponding collocated sub-CU.
  • sub-CU sub-coding unit
  • the motion shift which may have been previously identified, may be applied (e.g., may be added to the coordinates of the current block) to obtain sub-CU-level motion information (e.g., motion vectors and reference indices) from the collocated picture, as shown in Fig. 6.
  • sub-CU-level motion information e.g., motion vectors and reference indices
  • the motion information of a corresponding block e.g., the smallest motion grid that covers the center sample
  • the motion information at the translated position in the collocated picture may be used to derive the motion information for the sub-CU.
  • the motion information of the collocated sub-CU may be identified, and it may be converted to the motion vectors and reference indices of the current sub-CU, for example, in a similar way as a TMVP process (e.g., where temporal motion scaling may be applied to align the reference pictures of the temporal motion vectors to those of the current CU).
  • TMVP process e.g., where temporal motion scaling may be applied to align the reference pictures of the temporal motion vectors to those of the current CU.
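  • As an illustration only (not the normative process), the following Python sketch shows how a per-sub-CU motion field might be derived by applying a motion shift and fetching motion information from a collocated picture; the function and variable names (e.g., col_motion_field) are assumptions, the motion shift is taken in full-sample units, and the TMVP-like temporal scaling step is omitted.

    # Hypothetical sketch: derive a per-sub-CU motion field for SbTMVP.
    # col_motion_field maps (x, y) grid positions of the collocated picture to
    # (mv, ref_idx) entries; the result holds None where no inter motion is found.
    def derive_sbtmvp_motion_field(cu_x, cu_y, cu_w, cu_h, motion_shift,
                                   col_motion_field, sub_size=8):
        sub_motion = {}
        for sy in range(cu_y, cu_y + cu_h, sub_size):
            for sx in range(cu_x, cu_x + cu_w, sub_size):
                # Center sample of the sub-CU, translated by the motion shift.
                cx = sx + sub_size // 2 + motion_shift[0]
                cy = sy + sub_size // 2 + motion_shift[1]
                # Smallest motion grid that covers the translated center sample.
                key = ((cx // sub_size) * sub_size, (cy // sub_size) * sub_size)
                col = col_motion_field.get(key)
                if col is not None:
                    mv, ref_idx = col
                    # A TMVP-like scaling of mv to the current reference picture
                    # would be applied here (omitted in this sketch).
                    sub_motion[(sx, sy)] = (mv, ref_idx)
                else:
                    sub_motion[(sx, sy)] = None  # default motion may be used instead
        return sub_motion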
  • a combined subblock based merge list which contains both SbTMVP candidates and affine merge candidates may be used for the signaling of subblock based merge mode.
  • the SbTMVP mode may be enabled/disabled by using a sequence parameter set (SPS) flag. If the SbTMVP mode is enabled, the SbTMVP predictor may be added as the first entry of the list of subblock based merge candidates and followed by the affine merge candidates.
  • the size of the subblock based merge list may be signaled in SPS and the maximum allowed size of the subblock based merge list may be set to 5.
  • the sub-CU size used in SbTMVP may be fixed to be 8x8, and as may be done for affine merge mode, SbTMVP mode may be applicable to the CU when the width and height are larger than or equal to 8.
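  • As a non-normative illustration, the following Python sketch assembles a subblock based merge list in the order described above (SbTMVP first when enabled and available, then affine candidates, up to a maximum size); all names are hypothetical.

    def build_subblock_merge_list(sbtmvp_enabled, sbtmvp_candidate,
                                  affine_candidates, max_size=5):
        merge_list = []
        # SbTMVP predictor, if enabled and available, is the first entry.
        if sbtmvp_enabled and sbtmvp_candidate is not None:
            merge_list.append(("SbTMVP", sbtmvp_candidate))
        # Affine merge candidates follow, up to the maximum list size.
        for cand in affine_candidates:
            if len(merge_list) >= max_size:
                break
            merge_list.append(("Affine", cand))
        return merge_list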
  • the handling of an SbTMVP merge candidate (e.g., an additional SbTMVP merge candidate) may be the same as for other merge candidates. For example, for a CU in a P or B slice, an additional rate distortion (RD) check may be performed to decide whether to use the SbTMVP candidate.
  • RD rate distortion
  • Adaptive Reordering of Merge Candidates with template matching (TM) may be provided.
  • Merge candidates may be adaptively reordered with template matching (TM).
  • the reordering may be applied to candidates from a regular merge mode, template matching (TM) merge mode, and/or affine merge mode (e.g., excluding the SbTMVP candidate).
  • TM merge mode merge candidates may be reordered before the refinement process.
  • merge candidates may be divided into subgroups (e.g., several subgroups).
  • a subgroup may be of a size.
  • the subgroup size may be set to 5 for regular merge mode and/or TM merge mode.
  • the subgroup size may be set to 3 for affine merge mode.
  • Merge candidates in a subgroup may be reordered ascendingly according to cost values based on template matching. For simplification, merge candidates in the last but not the first subgroup may not be reordered.
  • the template matching cost of a merge candidate may be measured by a sum of differences (e.g., a sum of absolute differences (SAD)) between samples of a template of the current block and their corresponding reference samples.
  • the template may comprise a set of reconstructed samples neighboring the current block. Reference samples of the template may be located by the motion information of the merge candidate.
  • FIG. 7 illustrates an example template and reference samples of the template in one or more reference pictures. If a merge candidate utilizes bi-directional prediction, the reference samples of the template of the merge candidate may also be generated by bi-prediction, as shown in FIG. 7.
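  • The following Python sketch illustrates (under stated assumptions, not as a normative definition) the reordering described above: candidates are split into subgroups and each subgroup, except the last one when it is not the first, is sorted by ascending template matching cost; ref_template_fn is an assumed helper returning the reference samples of the template located by a candidate's motion information.

    def sad(a, b):
        # Sum of absolute differences between two equal-length sample sequences.
        return sum(abs(x - y) for x, y in zip(a, b))

    def reorder_merge_candidates(candidates, cur_template, ref_template_fn,
                                 subgroup_size=5):   # e.g., 5 for regular/TM, 3 for affine
        def tm_cost(cand):
            return sad(cur_template, ref_template_fn(cand))

        reordered = []
        for start in range(0, len(candidates), subgroup_size):
            subgroup = candidates[start:start + subgroup_size]
            is_last = start + subgroup_size >= len(candidates)
            if is_last and start > 0:
                reordered.extend(subgroup)  # last (but not first) subgroup unchanged
            else:
                reordered.extend(sorted(subgroup, key=tm_cost))
        return reordered

  • In this sketch, candidates only change position within their own subgroup; they never move across subgroups.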
  • the above template (e.g., as shown in FIG. 8) may include sub-templates (e.g., several sub-templates) with a size of Wsub x 1.
  • the left template may include sub-templates (e.g., several sub-templates) with a size of 1 x Hsub.
  • the motion information of the subblocks in the first row and the first column of the current block may be used to derive the reference samples of each sub-template.
  • FIG. 8 illustrates an example template and reference samples of the template for a block with subblock motion determined using the motion information of the subblocks of the current block.
  • one or more SbTMVP candidates may be improved.
  • One or more spatial neighboring motions and/or other motions e.g., seeds
  • One or more SbTMVP candidates may be derived based on the multiple seeds.
  • a SbTMVP candidate e.g., a best SbTMVP candidate
  • FIG. 9 illustrates a method and/or process of a motion shift design.
  • SbTMVP candidate construction may define a motion shift based on the left spatial neighbor A1 motion information, which may be referred to as a seed.
  • a seed may have one or more parameters. As illustrated in FIG. 9, a spatial neighbor, for example a left spatial neighbor, may be considered.
  • it may be determined if the seed exists and is inter coded (e.g., holds a motion information). Assuming that the seed exists and is inter coded, it may be determined whether the seed uses the collocated picture as a reference picture (e.g., in one direction if bidirectional).
  • it may be determined whether list L0 of the seed uses the collocated picture as a reference picture. If list L0 of the seed uses the collocated picture as a reference picture, at 910, the motion vector associated with list L0 (Mv L0) may be used as the motion shift.
  • it may be determined whether list L1 of the seed uses the collocated picture as a reference picture.
  • the motion vector associated with list L1 may be used as the motion shift. If the seed does not respect these parameters and/or conditions (e.g., does not use the collocated picture in either L0 or L1), at 914, the motion shift may be set to (0, 0). The motion shift (e.g., coming from a left spatial neighbor or (0, 0)) may also respect a rule to be able to define a SbTMVP candidate. At 916, it may be determined whether the seed, at a CU center position, points to an inter coded part. At 918, the seed may be designated as a SbTMVP candidate.
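  • A minimal Python sketch of the motion shift derivation outlined above (FIG. 9) is given below; the Seed structure and the helper names are assumptions introduced for illustration, not elements of the described design.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class Seed:
        is_inter: bool
        mv_l0: Optional[Tuple[int, int]] = None
        ref_l0_is_col: bool = False      # list L0 uses the collocated picture
        mv_l1: Optional[Tuple[int, int]] = None
        ref_l1_is_col: bool = False      # list L1 uses the collocated picture

    def derive_motion_shift(seed: Optional[Seed]) -> Tuple[int, int]:
        if seed is not None and seed.is_inter:
            if seed.ref_l0_is_col and seed.mv_l0 is not None:
                return seed.mv_l0        # Mv L0 used as the motion shift
            if seed.ref_l1_is_col and seed.mv_l1 is not None:
                return seed.mv_l1        # Mv L1 used as the motion shift
        return (0, 0)                    # fallback motion shift

    def sbtmvp_candidate_from_shift(motion_shift, center_points_to_inter):
        # The shifted CU-center position must point to an inter coded part of the
        # collocated picture for a SbTMVP candidate to be defined.
        return motion_shift if center_points_to_inter(motion_shift) else None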
  • FIG. 10 illustrates a method and/or process that may be used to add a top spatial neighbor, for example, if a left spatial neighbor may not respect one or more parameters.
  • in a motion shift design, if the left spatial neighbor is bi-directional, one of its motion information may be considered as a motion shift.
  • no SbTMVP candidate may result by using the motion shift (0, 0).
  • the left spatial neighbor may not respect a seed parameter (e.g., a constraint)
  • a seed parameter e.g., a constraint
  • a candidate from a merge list e.g., the regular merge list
  • bi-directional motion vectors and/or a zero motion vector may be used as a seed.
  • the input seeds may be reordered by using an ARMC process (e.g., the regular ARMC process). As illustrated in FIG. 10, a spatial neighbor, for example a left spatial neighbor, may be considered.
  • it may be determined whether the seed exists and is inter coded (e.g., holds motion information). Assuming that the seed exists and is inter coded, at 1006, it may be determined whether list L0 of the seed uses the collocated picture as a reference picture. If list L0 of the seed uses the collocated picture as a reference picture, at 1012, the motion vector associated with list L0 (Mv L0) may be used as the motion shift. At 1008, it may be determined whether list L1 of the seed uses the collocated picture as a reference picture.
  • the motion vector associated with list L1 may be used as the motion shift. If the seed does not respect these parameters and/or conditions (e.g., does not use the collocated picture in either L0 or L1), at 1010, the motion shift may be set to (0, 0). If the left spatial neighbor does not respect some seed constraints (e.g., the seed does not exist, or the seed is not inter coded), at 1020, another seed associated with a different neighbor, for example a top spatial neighbor, may be used. In such a case, at 1022, it may be determined whether the additional seed exists and is inter coded.
  • seed constraints e.g., the seed does not exist, or the seed is not inter coded
  • Assuming that the different neighbor exists and is inter coded, it may then be determined whether L0 or L1 uses the collocated picture, as illustrated in FIG. 10. At 1016, it may be determined whether the seed, at a CU center position, points to an inter coded part. At 1018, the seed may be designated as a SbTMVP candidate. Otherwise, no SbTMVP candidate may be selected.
  • FIG. 11 illustrates an example that may be used to add a top spatial neighbor, for example, if a left spatial neighbor may not respect one or more parameters.
  • FIG. 11 may show how to overcome a constraint that a seed may exist, that the seed may be inter-coded, and that the seed may use the collocated picture as reference picture (e.g., in one direction if bi-directional).
  • one or more unique motion shifts may be collected (e.g., all unique motion shifts) that fulfill the following parameters: that a seed may exist, that the seed may be inter-coded, and that the seed may use the collocated picture as a reference picture (e.g., in one direction if bi-directional).
  • the set may be examined to find the one that respects the motion shift parameter.
  • constructed motion shifts may be added to the (0, 0) motion shift.
  • the SbTMVP candidate may be defined (e.g., in any case).
  • a spatial neighbor, for example a left spatial neighbor, may be considered.
  • it may be determined whether the seed exists and is inter coded (e.g., holds motion information).
  • the seed exists and is inter coded
  • it may be determined whether list L0 of the seed uses the collocated picture as a reference picture. If list L0 of the seed uses the collocated picture as a reference picture, at 1112, the motion vector associated with list L0 (Mv L0) may be used as the motion shift.
  • Mv L0 motion vector associated with the list L0
  • the motion vector associated with list L1 may be used as the motion shift. If the seed does not respect these parameters and/or conditions (e.g., does not use the collocated picture in either L0 or L1), at 1110, it may be determined whether another spatial neighbor, e.g., a top spatial neighbor, has been tested. If the top spatial neighbor has already been tested, at 1116, the motion shift may be set to (0, 0).
  • a top spatial neighbor may be used and/or tested. In such a case, the conditions from 1104 to 1110 may be tested.
  • the seed may be designated as a SbTMVP candidate. Otherwise, no SbTMVP candidate may be selected.
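  • Assuming the same hypothetical Seed structure as in the earlier sketch, the fallback order of FIG. 11 might be expressed as follows (illustrative only): the left neighbor is examined first, then the top neighbor, and (0, 0) is used only if neither yields a usable motion shift.

    def derive_motion_shift_with_fallback(left_seed, top_seed):
        for seed in (left_seed, top_seed):       # left neighbor first, then top
            if seed is not None and seed.is_inter:
                if seed.ref_l0_is_col and seed.mv_l0 is not None:
                    return seed.mv_l0
                if seed.ref_l1_is_col and seed.mv_l1 is not None:
                    return seed.mv_l1
        return (0, 0)                            # fallback motion shift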
  • FIG. 12 illustrates a process of a motion shift design (e.g., a simplified motion shift design).
  • An ARMC process for SbTMVP may be provided.
  • several seeds may be used to ensure a motion shift may be defined and used to derive a SbTMVP candidate.
  • These motion shifts may be valid and may allow for deriving SbTMVP candidates (e.g., different SbTMVP candidates).
  • an ARMC process that may be dedicated to the SbTMVP candidate may be designed. The process may allow reordering of the different candidates and may allow an acceptable candidate (e.g., the best candidate) to be found.
  • SbTMVP candidates may be collected using the motion shifts coming from (e.g., several) input candidates (spatial neighbors, HMVP, and motion shift (0, 0)) associated with the collocated picture following the design shown in FIG. 12.
  • a spatial neighbor, for example a left spatial neighbor, may be considered.
  • Mv L0 motion vector associated with the list L0
  • it may be determined whether list L1 of the seed uses the collocated picture as a reference picture. If list L1 of the seed uses the collocated picture as a reference picture, at 1212, the motion vector associated with list L1 (Mv L1) may be used as the motion shift. If the seed does not respect these parameters and/or conditions (e.g., does not use the collocated picture in either L0 or L1), no SbTMVP candidate is collected. At 1214, it may be determined whether the seed, at a CU center position, points to an inter coded part. At 1216, the seed may be designated as a SbTMVP candidate. Otherwise, no SbTMVP candidate may be selected.
  • Mv L1 motion vector associated with the list L1
  • each of its motion information lists (L0 and L1) may be used as a motion shift to define an SbTMVP candidate.
  • an ARMC process may be provided to reorder the SbTMVP candidates.
  • the SbTMVP candidates may be reordered based on template matching cost where the first SbTMVP candidate may have a lower cost (e.g., the smallest cost) and may be selected (e.g., may be considered as the best one).
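  • As a hedged illustration of the two steps just described (collecting candidates from several seeds, then reordering them), the Python sketch below assumes helper functions make_sbtmvp_candidate and tm_cost that are not defined by the text.

    def collect_and_reorder_sbtmvp(seeds, make_sbtmvp_candidate, tm_cost):
        candidates = []
        for seed in seeds:
            cand = make_sbtmvp_candidate(seed)   # None if the seed yields no candidate
            if cand is not None and cand not in candidates:
                candidates.append(cand)
        # Lowest template matching cost first (preferred candidate).
        return sorted(candidates, key=tm_cost)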
  • a subset of the reordered multiple SbTMVP candidates may be selected, and one of the selected subset of SbTMVP candidates may be used to decode the current block, based on template matching cost, with the SbTMVP candidate with the lowest template matching cost being the preferred one. SbTMVP subblocks may be 8x8 (e.g., in VVC) and 4x4 (e.g., in ECM).
  • a (e.g., each) subblock may have its own motion information that can be uni- or bi-directional.
  • FIG. 13 illustrates an example template that may correspond to SbTMVP.
  • the reference template in FIG. 13 may correspond to the SbTMVP example of FIG. 6.
  • a subblock part of the reference template may be calculated (e.g., independently calculated) by using the affine motion field and by applying LIC.
  • the subblock parts may be aggregated to constitute the reference template.
  • a subblock part of the reference template may be calculated (e.g., calculated independently) by using the affine motion field and aggregated for a (e.g., each) reference frame, as described herein.
  • a reference template from L0 and L1 may be averaged by considering bi-prediction with CU-level weights (BCW).
  • a subblock may be treated independently. If no motion information is available for a subblock, a predefined motion information (e.g., default motion information) may be used. The default motion information may be the one picked at the center position (e.g., the one checked for the motion shift constraint).
  • the reference template may be calculated using the corresponding reference frame.
  • the reference templates may be calculated for each reference frame and the subblock parts of the reference template for both reference frames (e.g., reference frame L0 and reference frame L1) may be averaged.
  • once one or more (e.g., all) subblock reference templates are calculated, they may be aggregated to constitute the SbTMVP reference template.
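  • The aggregation described above may be sketched as follows (illustrative only); fetch_template_part is an assumed helper returning the reference samples of one subblock's template part, and subblocks is assumed to contain only the first-row/first-column subblocks whose motion is used for the template.

    def build_sbtmvp_reference_template(subblocks, fetch_template_part, default_motion):
        template = []
        for sb in subblocks:
            # Use the subblock's own motion, or the default (center) motion if none.
            motion = sb.get("motion") or default_motion
            parts = []
            for lst in ("L0", "L1"):
                mv = motion.get(lst)
                if mv is not None:
                    parts.append(fetch_template_part(sb, mv))
            if not parts:
                continue
            if len(parts) == 2:
                # Bi-directional subblock: average the L0 and L1 reference parts.
                part = [(a + b + 1) >> 1 for a, b in zip(parts[0], parts[1])]
            else:
                part = parts[0]
            template.extend(part)        # aggregate subblock parts into the template
        return template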
  • the difference e.g., sum of absolute differences (SAD)
  • SAD sum of absolute differences
  • One or more (e.g., all) SbTMVP candidates may be reordered depending on their template matching cost; the first one may be the one with a lower cost (e.g., the smallest cost) and may be selected (e.g., may be the best candidate).
  • one SbTMVP candidate may be considered, and the first one of the reordered list may be kept.
  • one or more (e.g., a plurality) of SbTMVP candidates may be considered, and the first n candidates of the reordered list may be used for that purpose.
  • the two first SbTMVP candidates may be used.
  • the first n SbTMVP candidates may be selected from the multiple SbTMVP candidates.
  • the two SbTMVP candidates may both be considered if they are different (e.g., with motions different for subblocks (e.g., several subblocks)) and if their template matching costs are close (e.g., under a threshold th as: second < th * first).
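  • The selection rule described above might look as follows in Python (a sketch under assumptions: the threshold value and the inequality used for "close" costs are illustrative).

    def select_sbtmvp_candidates(reordered, costs, th=1.1, max_kept=2):
        # reordered: SbTMVP candidates sorted by ascending template matching cost;
        # costs: the corresponding template matching costs.
        if not reordered:
            return []
        kept = [reordered[0]]                    # best (lowest-cost) candidate
        if max_kept > 1 and len(reordered) > 1:
            if reordered[1] != reordered[0] and costs[1] < th * costs[0]:
                kept.append(reordered[1])        # keep the second if different and close
        return kept

  • For example, with the illustrative value th = 1.1, the second reordered candidate would be kept only if its cost is within 10% of the best cost.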
  • Associated signaling may be provided.
  • a flag indicating a number of seeds may be received. Based on the flag, the plurality of SbTMVP candidates may be determined based on the number of seeds.
  • the number of seeds that may be used to derive the SbTMVP candidate may be signaled at the high level syntax (HLS) (e.g., at SPS level or picture header level) when greater than 1 as sps_max_num_sbtmvp_seed_minus_one.
  • HLS high level syntax
  • a maximum number of seeds may be determined to be used for determining the plurality of SbTMVP candidates. Multiple seeds may be determined in accordance with the determined maximum number of input candidates.
  • the multiple SbTMVP candidates may be reordered using adaptive reordering of merge candidates (ARMC).
  • the maximum number of seeds e.g., input candidates
  • the maximum number of seeds may be predefined, for example, as six, and the difference may be signaled as sps_six_minus_max_num_sbtmvp_seed.
  • a flag may also be signaled in HLS (e.g., at SPS level) to indicate if the seeds are reordered by using the ARMC process (sps_sbtmvp_seed_armc_enabled_flag).
  • a flag indicating usage of the SbTMVP ARMC process may be received. Based on the flag, the plurality of SbTMVP candidates may be reordered using ARMC. The usage of the SbTMVP ARMC process may be signaled in HLS (e.g., at SPS level) (sps_sbtmvp_armc_enabled_flag). If the number of considered SbTMVP candidates is greater than 1, it may also be signaled as sps_max_num_sbtmvp_cand_minus_one.
  • the threshold th or an index into a predefined look-up table (LUT) of threshold values may be signaled if needed.
  • sps_max_num_sbtmvp_cand_minus_one may be in the range 0 to sps_max_num_sbtmvp_seed_minus_one + 1.
  • th * best cost e.g., cost of the first reordered SbTMVP candidate
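  • A non-normative Python sketch of how the SPS-level elements named above might be parsed is given below; the reader object, its read_ue/read_flag methods, and the simplified presence conditions are assumptions made for illustration.

    def parse_sbtmvp_sps(reader):
        sps = {}
        # Number of seeds used to derive SbTMVP candidates.
        sps["max_num_sbtmvp_seed"] = reader.read_ue("sps_max_num_sbtmvp_seed_minus_one") + 1
        # Whether the seeds are reordered with the ARMC process.
        sps["seed_armc_enabled"] = bool(reader.read_flag("sps_sbtmvp_seed_armc_enabled_flag"))
        # Whether the SbTMVP ARMC process itself is used.
        sps["sbtmvp_armc_enabled"] = bool(reader.read_flag("sps_sbtmvp_armc_enabled_flag"))
        if sps["sbtmvp_armc_enabled"]:
            # Range noted in the text: 0 to sps_max_num_sbtmvp_seed_minus_one + 1.
            sps["max_num_sbtmvp_cand"] = reader.read_ue("sps_max_num_sbtmvp_cand_minus_one") + 1
        else:
            sps["max_num_sbtmvp_cand"] = 1
        return sps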
  • ROM read only memory
  • RAM random access memory
  • register cache memory
  • semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs).
  • a processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Systems, methods, and instrumentalities are disclosed for using subblock-based temporal motion vector prediction (SbTMVP). Multiple subblock temporal motion vector predictor (SbTMVP) candidates may be determined (for example, by a video encoding and/or decoding device). Each of the multiple SbTMVP candidates may be obtained using a motion shift associated with an input candidate, and the input candidate may comprise a neighboring block of a current block. The multiple SbTMVP candidates may be reordered based on a template matching cost of the multiple SbTMVP candidates. The current block may be processed (for example, encoded and/or decoded) using one of the reordered multiple SbTMVP candidates.
PCT/EP2023/059208 2022-04-08 2023-04-06 Prédiction améliorée de vecteur de mouvement basée sur un sous-bloc (sbtmvp) WO2023194558A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP22305510.4 2022-04-08
EP22305510 2022-04-08

Publications (1)

Publication Number Publication Date
WO2023194558A1 true WO2023194558A1 (fr) 2023-10-12

Family

ID=81388841

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/059208 WO2023194558A1 (fr) 2022-04-08 2023-04-06 Prédiction améliorée de vecteur de mouvement basée sur un sous-bloc (sbtmvp)

Country Status (1)

Country Link
WO (1) WO2023194558A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023056135A1 (fr) * 2021-09-29 2023-04-06 Qualcomm Incorporated Réordonnancement de candidats de vecteur de mouvement (mv)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023056135A1 (fr) * 2021-09-29 2023-04-06 Qualcomm Incorporated Réordonnancement de candidats de vecteur de mouvement (mv)

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHEN J ET AL: "Algorithm description for Versatile Video Coding and Test Model 3 (VTM 3)", no. JVET-L1002, 3 December 2018 (2018-12-03), XP030198628, Retrieved from the Internet <URL:http://phenix.int-evry.fr/jvet/doc_end_user/documents/12_Macao/wg11/JVET-L1002-v1.zip JVET-L1002-v1.docx> [retrieved on 20181203] *
ZHANG (BYTEDANCE) N ET AL: "AHG12: Adaptive Reordering of Merge Candidates with Template Matching", no. JVET-V0099 ; m56511, 23 April 2021 (2021-04-23), XP030294228, Retrieved from the Internet <URL:https://jvet-experts.org/doc_end_user/documents/22_Teleconference/wg11/JVET-V0099-v3.zip JVET-V0099_r2.docx> [retrieved on 20210423] *
ZHAO (BYTEDANCE) L ET AL: "Non-EE2: Enhanced temporal motion information derivation", no. JVET-AA0141 ; m60117, 7 July 2022 (2022-07-07), XP030303011, Retrieved from the Internet <URL:https://jvet-experts.org/doc_end_user/documents/27_Teleconference/wg11/JVET-AA0141-v1.zip JVET-AA0141.docx> [retrieved on 20220707] *

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23716608

Country of ref document: EP

Kind code of ref document: A1