EP4690803A1 - Additional template tool metrics - Google Patents

Additional template tool metrics

Info

Publication number
EP4690803A1
Authority
EP
European Patent Office
Prior art keywords
candidate mode
candidate
metric
template
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP24718138.1A
Other languages
English (en)
French (fr)
Inventor
Tangi POIRIER
Karam NASER
Hassane Guermoud
Thierry DUMAS
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
InterDigital CE Patent Holdings SAS
Original Assignee
InterDigital CE Patent Holdings SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by InterDigital CE Patent Holdings SAS filed Critical InterDigital CE Patent Holdings SAS
Publication of EP4690803A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/11Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Definitions

  • Video coding systems may be used to compress digital video signals, e.g., to reduce the storage and/or transmission bandwidth needed for such signals.
  • Video coding systems may include, for example, block-based, wavelet-based, and/or object-based systems.
  • A device for video decoding, such as a video decoder, may select a candidate mode template metric among multiple candidate mode template metrics.
  • the multiple candidate mode template metrics may be, or may include, at least one of a sum of absolute differences (SAD) metric, a mean-removed sum of absolute differences (MR-SAD) metric, or a sum of absolute transform differences (SATD) metric.
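As an illustration of how these three metrics differ, the following sketch (not taken from the patent text; NumPy-based, with a power-of-two Hadamard transform standing in for whatever transform an actual codec would use) computes SAD, MR-SAD, and SATD between a current template and a candidate template:

```python
import numpy as np

def sad(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of absolute differences."""
    return float(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def mr_sad(a: np.ndarray, b: np.ndarray) -> float:
    """Mean-removed SAD: each block's mean is removed first, which makes
    the comparison robust to a constant brightness offset."""
    return float(np.abs((a - a.mean()) - (b - b.mean())).sum())

def hadamard(n: int) -> np.ndarray:
    """n x n Hadamard matrix (n must be a power of two)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def satd(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of absolute transform differences: transform the residual
    (a Hadamard transform here, on square power-of-two blocks), then sum
    coefficient magnitudes, which tracks coding cost better than SAD."""
    d = a.astype(np.float64) - b.astype(np.float64)
    h = hadamard(d.shape[0])
    return float(np.abs(h @ d @ h.T).sum())
```

For example, `sad(np.full((4, 4), 10), np.full((4, 4), 12))` returns 32.0, while `mr_sad` on the same blocks returns 0.0, because the blocks differ only by a constant offset.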
  • the apparatus for video decoding may select the candidate mode template metric based on a candidate mode template metric indication in video data. For example, the apparatus for video decoding may obtain the candidate mode template metric indication in the video data.
  • the candidate mode template metric indication may indicate, and/or may be configured to indicate, an index associated with the candidate mode template metric to be used.
  • the apparatus for video decoding may select the candidate mode template metric based on the candidate mode template metric indication.
  • the apparatus for video decoding may use the selected candidate mode template metric for decoding a block. For example, the apparatus for video decoding may derive a candidate mode associated with the block based on the selected candidate mode template metric. Based on the derived candidate mode, the apparatus for video decoding may decode the block.
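A minimal decoder-side sketch of that flow, reusing the metric functions from the previous example; the metric table indices, the search-loop structure, and the helper names are illustrative assumptions, not the patent's actual syntax or derivation process:

```python
import numpy as np

# Assumed mapping of indication index to metric (illustrative only).
METRICS = {0: sad, 1: mr_sad, 2: satd}

def derive_candidate(recon: np.ndarray, cur_template: np.ndarray,
                     positions, metric_idx: int):
    """Pick the metric named by the candidate mode template metric
    indication, then return the search-area position whose template best
    matches the current block's template under that metric."""
    metric = METRICS[metric_idx]
    th, tw = cur_template.shape
    best_pos, best_cost = None, float("inf")
    for (y, x) in positions:  # candidate positions within the search area
        cost = metric(cur_template, recon[y:y + th, x:x + tw])
        if cost < best_cost:
            best_pos, best_cost = (y, x), cost
    return best_pos
```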
  • the candidate mode template metric is associated with an intra template matching prediction (intraTMP) and/or a template-based intra mode derivation (TIMD).
  • the apparatus for video decoding may derive an intraTMP candidate using the selected candidate mode template metric and/or derive a TIMD candidate using the selected candidate mode template metric.
  • the apparatus for video decoding may decode the block based on the derived intraTMP candidate and/or the derived TIMD candidate.
  • a device for video encoding may obtain a candidate mode based on a candidate mode template metric that is associated with the candidate mode.
  • an apparatus for video encoding may obtain a first candidate mode based on a first candidate mode template metric that is associated with the first candidate mode and obtain a second candidate mode based on a second candidate mode template metric that is associated with the second candidate mode.
  • Based on the candidate mode template metric (e.g., the first candidate mode template metric and the second candidate mode template metric), the apparatus for video encoding may select a candidate mode.
  • the candidate mode may be or may include, at least one of the first candidate mode or the second candidate mode.
  • the apparatus for video encoding may determine whether an intraTMP has been enabled. Based on a determination that the intraTMP has been enabled, the apparatus for video encoding may include a candidate mode template metric indication in video data, such as a bitstream.
  • the candidate mode template metric indication may indicate, and/or may be configured to indicate, an index associated with an intraTMP candidate mode template metric for an intraTMP candidate.
  • the apparatus for video encoding may determine whether a TIMD has been enabled. Based on a determination that the TIMD has been enabled, the apparatus for video encoding may include a candidate mode template metric indication in video data, such as a bitstream.
  • the candidate mode template metric indication may indicate, and/or may be configured to indicate, an index associated with a TIMD candidate mode template metric for a TIMD candidate.
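A toy sketch of that conditional signaling; the bitstream class and the order and coding of the syntax elements are assumptions for illustration only, not the patent's actual syntax:

```python
class Bitstream:
    """Toy bitstream that records syntax elements in coding order."""
    def __init__(self):
        self.elements = []

    def write_index(self, value: int) -> None:
        self.elements.append(value)

def write_metric_indications(bs: Bitstream,
                             intratmp_enabled: bool, intratmp_metric: int,
                             timd_enabled: bool, timd_metric: int) -> None:
    # A candidate mode template metric indication is included only when
    # the corresponding template tool is enabled.
    if intratmp_enabled:
        bs.write_index(intratmp_metric)  # metric index for IntraTMP candidates
    if timd_enabled:
        bs.write_index(timd_metric)      # metric index for TIMD candidates
```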
  • FIG. 1A is a system diagram illustrating an example communications system in which one or more disclosed embodiments may be implemented.
  • FIG. 1B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
  • FIG. 1C is a system diagram illustrating an example radio access network (RAN) and an example core network (CN) that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
  • FIG. 1D is a system diagram illustrating a further example RAN and a further example CN that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
  • FIG. 2 is a diagram showing an example block-based video encoder.
  • FIG. 3 is a diagram showing an example video decoder.
  • FIG. 4 is a diagram showing an example of a system in which various aspects and examples may be implemented.
  • FIG. 5 illustrates an example of an intra template matching search area used in intra template matching prediction (IntraTMP).
  • FIG. 6 illustrates an example of selection of the two best predictors for IntraTMP.
  • FIG. 7 illustrates an example of left and top template shapes.
  • FIG. 8A illustrates an example template area and reference samples used for deriving template-based intra mode derivation (TIMD) modes and associated weights.
  • FIG. 8B illustrates example reference samples used to derive intra prediction for the current block.
  • FIG. 9 illustrates an example of template matching performed on a search area around an initial MV.
  • FIG. 10 illustrates example spatial geometric partitioning mode (SGPM) candidates.
  • FIG. 11 illustrates an example SGPM template.
  • FIG. 12 illustrates an example of geometric partitioning mode (GPM) blending.
  • FIG. 13 illustrates an example derivation of a best candidate for a template tool at an encoder.
  • FIG. 14 illustrates an example derivation of a template tool predictor at a decoder.
  • FIG. 15 illustrates an example fusion of the best IntraTMP predictors derived with different metrics.
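As a rough sketch of the fusion shown in FIG. 15, the best predictors found with different metrics can be blended into a single prediction; the equal default weights below are an assumption for illustration, not the weighting the description actually derives:

```python
import numpy as np

def fuse_predictors(predictors, weights=None) -> np.ndarray:
    """Weighted blend of the best IntraTMP predictors, one per metric."""
    if weights is None:
        weights = [1.0 / len(predictors)] * len(predictors)  # assumed equal weights
    fused = sum(w * p.astype(np.float64) for w, p in zip(weights, predictors))
    return np.round(fused).astype(predictors[0].dtype)
```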
  • FIG. 1A is a diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented.
  • the communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users.
  • the communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth.
  • the communications system 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word DFT-Spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
  • the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a RAN 104/113, a CN 106/115, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements.
  • WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment.
  • the WTRUs 102a, 102b, 102c, 102d may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like.
  • the communications system 100 may also include a base station 114a and/or a base station 114b.
  • Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106/115, the Internet 110, and/or the other networks 112.
  • the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, an NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
  • the base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc.
  • the base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum.
  • a cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors.
  • the cell associated with the base station 114a may be divided into three sectors.
  • the base station 114a may include three transceivers, i.e., one for each sector of the cell.
  • the base station 114a may employ multiple-input multiple-output (MIMO) technology and may utilize multiple transceivers for each sector of the cell.
  • beamforming may be used to transmit and/or receive signals in desired spatial directions.
  • the base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.).
  • the air interface 116 may be established using any suitable radio access technology (RAT).
  • the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like.
  • the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 116 using wideband CDMA (WCDMA).
  • WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+).
  • HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies.
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles.
  • the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
  • the base station 114b in FIG. 1A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, an industrial facility, an air corridor (e.g., for use by drones), a roadway, and the like.
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN).
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN).
  • the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR, etc.) to establish a picocell or femtocell.
  • the base station 114b may have a direct connection to the Internet 110.
  • the base station 114b may not be required to access the Internet 110 via the CN 106/115.
  • the RAN 104/113 may be in communication with the CN 106/115, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d.
  • the data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like.
  • the CN 106/115 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication.
  • the RAN 104/113 and/or the CN 106/115 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104/113 or a different RAT.
  • the CN 106/115 may also be in communication with another RAN (not shown) employing a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or WiFi radio technology.
  • the CN 106/115 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112.
  • the PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS).
  • the Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite.
  • the networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers.
  • the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
  • Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links).
  • the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
  • FIG. 1B is a system diagram illustrating an example WTRU 102.
  • the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others.
  • the processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like.
  • the processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment.
  • the processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
  • the transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116.
  • the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals.
  • the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example.
  • the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
  • the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
  • the transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122.
  • the WTRU 102 may have multi-mode capabilities.
  • the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11, for example.
  • the processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit).
  • the processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128.
  • the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132.
  • the non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
  • the removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
  • the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
  • the processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102.
  • the power source 134 may be any suitable device for powering the WTRU 102.
  • the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
  • the processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102.
  • the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
  • the processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
  • the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like.
  • the peripherals 138 may include one or more sensors; the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
  • the WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and downlink (e.g., for reception)) may be concurrent and/or simultaneous.
  • the full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118).
  • the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the UL (e.g., for transmission) or the downlink (e.g., for reception)).
  • FIG. 1C is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment.
  • the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the RAN 104 may also be in communication with the CN 106.
  • the RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment.
  • the eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the eNode-Bs 160a, 160b, 160c may implement MIMO technology.
  • the eNode-B 160a for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
  • Each of the eNode-Bs 160a, 160b, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, and the like. As shown in FIG. 1C, the eNode-Bs 160a, 160b, 160c may communicate with one another over an X2 interface.
  • the CN 106 shown in FIG. 1C may include a mobility management entity (MME) 162, a serving gateway (SGW) 164, and a packet data network (PDN) gateway (or PGW) 166. While each of the foregoing elements are depicted as part of the CN 106, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
  • the MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node.
  • the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like.
  • the MME 162 may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM and/or WCDMA.
  • the SGW 164 may be connected to each of the eNode Bs 160a, 160b, 160c in the RAN 104 via the S1 interface.
  • the SGW 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c.
  • the SGW 164 may perform other functions, such as anchoring user planes during inter- eNode B handovers, triggering paging when DL data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.
  • the SGW 164 may be connected to the PGW 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the CN 106 may facilitate communications with other networks.
  • the CN 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices.
  • the CN 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 106 and the PSTN 108.
  • the CN 106 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
  • Although the WTRU is described in FIGS. 1A-1D as a wireless terminal, it is contemplated that, in certain representative embodiments, such a terminal may use (e.g., temporarily or permanently) wired communication interfaces with the communication network.
  • the other network 112 may be a WLAN.
  • a WLAN in Infrastructure Basic Service Set (BSS) mode may have an Access Point (AP) for the BSS and one or more stations (STAs) associated with the AP.
  • the AP may have an access or an interface to a Distribution System (DS) or another type of wired/wireless network that carries traffic in to and/or out of the BSS.
  • Traffic to STAs that originates from outside the BSS may arrive through the AP and may be delivered to the STAs.
  • Traffic originating from STAs to destinations outside the BSS may be sent to the AP to be delivered to respective destinations.
  • Traffic between STAs within the BSS may be sent through the AP, for example, where the source STA may send traffic to the AP and the AP may deliver the traffic to the destination STA.
  • the traffic between STAs within a BSS may be considered and/or referred to as peer-to- peer traffic.
  • the peer-to-peer traffic may be sent between (e.g., directly between) the source and destination STAs with a direct link setup (DLS).
  • the DLS may use an 802.11e DLS or an 802.11z tunneled DLS (TDLS).
  • a WLAN using an Independent BSS (IBSS) mode may not have an AP, and the STAs (e.g., all of the STAs) within or using the IBSS may communicate directly with each other.
  • the IBSS mode of communication may sometimes be referred to herein as an “ad-hoc” mode of communication.
  • the AP may transmit a beacon on a fixed channel, such as a primary channel.
  • the primary channel may be a fixed width (e.g., 20 MHz wide bandwidth) or a dynamically set width via signaling.
  • the primary channel may be the operating channel of the BSS and may be used by the STAs to establish a connection with the AP.
  • Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) may be implemented, for example, in 802.11 systems.
  • the STAs (e.g., every STA, including the AP) may sense the primary channel. If the primary channel is sensed/detected and/or determined to be busy by a particular STA, the particular STA may back off.
  • One STA (e.g., only one station) may transmit at any given time in a given BSS.
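A heavily simplified illustration of the CSMA/CA behavior just described: draw a random backoff and count it down only while the primary channel is sensed idle. The contention-window size and slot model are assumptions, not actual 802.11 parameter values:

```python
import random

def backoff_then_transmit(sense_busy, contention_window: int = 16) -> int:
    """Return the number of sensing iterations spent before transmitting.

    sense_busy: callable returning True while the primary channel is busy.
    """
    backoff = random.randint(0, contention_window - 1)  # random backoff draw
    iterations = 0
    while backoff > 0:
        iterations += 1
        if not sense_busy():  # the counter decreases only during idle slots
            backoff -= 1
    return iterations
```

For instance, `backoff_then_transmit(lambda: False)` models a channel that is always idle, so the backoff simply counts down.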
  • High Throughput (HT) STAs may use a 40 MHz wide channel for communication, for example, via a combination of the primary 20 MHz channel with an adjacent or nonadjacent 20 MHz channel to form a 40 MHz wide channel.
  • Very High Throughput (VHT) STAs may support 20 MHz, 40 MHz, 80 MHz, and/or 160 MHz wide channels.
  • the 40 MHz, and/or 80 MHz, channels may be formed by combining contiguous 20 MHz channels.
  • a 160 MHz channel may be formed by combining 8 contiguous 20 MHz channels, or by combining two non-contiguous 80 MHz channels, which may be referred to as an 80+80 configuration.
  • the data, after channel encoding, may be passed through a segment parser that may divide the data into two streams.
  • Inverse Fast Fourier Transform (IFFT) processing and time domain processing may be done on each stream separately.
  • the streams may be mapped on to the two 80 MHz channels, and the data may be transmitted by a transmitting STA.
  • the above described operation for the 80+80 configuration may be reversed, and the combined data may be sent to the Medium Access Control (MAC).
  • Sub 1 GHz modes of operation are supported by 802.11af and 802.11ah.
  • the channel operating bandwidths, and carriers, are reduced in 802.11af and 802.11ah relative to those used in 802.11n and 802.11ac.
  • 802.11af supports 5 MHz, 10 MHz, and 20 MHz bandwidths in the TV White Space (TVWS) spectrum.
  • 802.11ah supports 1 MHz, 2 MHz, 4 MHz, 8 MHz, and 16 MHz bandwidths using non-TVWS spectrum.
  • 802.11ah may support Meter Type Control/Machine-Type Communications, such as MTC devices in a macro coverage area.
  • MTC devices may have certain capabilities, for example, limited capabilities including support for (e.g., only support for) certain and/or limited bandwidths.
  • the MTC devices may include a battery with a battery life above a threshold (e.g., to maintain a very long battery life).
  • WLAN systems which may support multiple channels, and channel bandwidths, such as 802.11n, 802.11ac, 802.11af, and 802.11ah, include a channel which may be designated as the primary channel.
  • the primary channel may have a bandwidth equal to the largest common operating bandwidth supported by all STAs in the BSS.
  • the bandwidth of the primary channel may be set and/or limited by a STA, from among all STAs operating in a BSS, which supports the smallest bandwidth operating mode.
  • the primary channel may be 1 MHz wide for STAs (e.g., MTC type devices) that support (e.g., only support) a 1 MHz mode, even if the AP and other STAs in the BSS support 2 MHz, 4 MHz, 8 MHz, 16 MHz, and/or other channel bandwidth operating modes.
  • Carrier sensing and/or Network Allocation Vector (NAV) settings may depend on the status of the primary channel. If the primary channel is busy, for example, due to a STA (which supports only a 1 MHz operating mode) transmitting to the AP, the entire available frequency band may be considered busy even though a majority of the frequency band remains idle and may be available.
  • FIG. 1D is a system diagram illustrating the RAN 113 and the CN 115 according to an embodiment.
  • the RAN 113 may employ an NR radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the RAN 113 may also be in communication with the CN 115.
  • the RAN 113 may include gNBs 180a, 180b, 180c, though it will be appreciated that the RAN 113 may include any number of gNBs while remaining consistent with an embodiment.
  • the gNBs 180a, 180b, 180c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the gNBs 180a, 180b, 180c may implement MIMO technology.
  • gNBs 180a, 180b may utilize beamforming to transmit signals to and/or receive signals from the WTRUs 102a, 102b, 102c.
  • the gNB 180a may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
  • the gNBs 180a, 180b, 180c may implement carrier aggregation technology.
  • the gNB 180a may transmit multiple component carriers to the WTRU 102a (not shown). A subset of these component carriers may be on unlicensed spectrum while the remaining component carriers may be on licensed spectrum.
  • the gNBs 180a, 180b, 180c may implement Coordinated Multi-Point (CoMP) technology.
  • WTRU 102a may receive coordinated transmissions from gNB 180a and gNB 180b (and/or gNB 180c).
  • the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using transmissions associated with a scalable numerology. For example, the OFDM symbol spacing and/or OFDM subcarrier spacing may vary for different transmissions, different cells, and/or different portions of the wireless transmission spectrum.
  • the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using subframe or transmission time intervals (TTIs) of various or scalable lengths (e.g., containing varying number of OFDM symbols and/or lasting varying lengths of absolute time).
  • the gNBs 180a, 180b, 180c may be configured to communicate with the WTRUs 102a, 102b, 102c in a standalone configuration and/or a non-standalone configuration.
  • WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c without also accessing other RANs (e.g., such as eNode-Bs 160a, 160b, 160c).
  • WTRUs 102a, 102b, 102c may utilize one or more of gNBs 180a, 180b, 180c as a mobility anchor point.
  • WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using signals in an unlicensed band.
  • WTRUs 102a, 102b, 102c may communicate with/connect to gNBs 180a, 180b, 180c while also communicating with/connecting to another RAN such as eNode-Bs 160a, 160b, 160c.
  • WTRUs 102a, 102b, 102c may implement DC principles to communicate with one or more gNBs 180a, 180b, 180c and one or more eNode-Bs 160a, 160b, 160c substantially simultaneously.
  • eNode-Bs 160a, 160b, 160c may serve as a mobility anchor for WTRUs 102a, 102b, 102c and gNBs 180a, 180b, 180c may provide additional coverage and/or throughput for servicing WTRUs 102a, 102b, 102c.
  • Each of the gNBs 180a, 180b, 180c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, support of network slicing, dual connectivity, interworking between NR and E-UTRA, routing of user plane data towards User Plane Function (UPF) 184a, 184b, routing of control plane information towards Access and Mobility Management Function (AMF) 182a, 182b, and the like. As shown in FIG. 1D, the gNBs 180a, 180b, 180c may communicate with one another over an Xn interface.
  • the CN 115 shown in FIG. 1D may include at least one AMF 182a, 182b, at least one UPF 184a, 184b, at least one Session Management Function (SMF) 183a, 183b, and possibly a Data Network (DN) 185a, 185b. While each of the foregoing elements are depicted as part of the CN 115, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
  • the AMF 182a, 182b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N2 interface and may serve as a control node.
  • the AMF 182a, 182b may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, support for network slicing (e.g., handling of different PDU sessions with different requirements), selecting a particular SMF 183a, 183b, management of the registration area, termination of NAS signaling, mobility management, and the like.
  • Network slicing may be used by the AMF 182a, 182b in order to customize CN support for WTRUs 102a, 102b, 102c based on the types of services being utilized by the WTRUs 102a, 102b, 102c.
  • different network slices may be established for different use cases such as services relying on ultra-reliable low latency (URLLC) access, services relying on enhanced massive mobile broadband (eMBB) access, services for machine type communication (MTC) access, and/or the like.
  • the AMF 182a, 182b may provide a control plane function for switching between the RAN 113 and other RANs (not shown) that employ other radio technologies, such as LTE, LTE-A, LTE-A Pro, and/or non-3GPP access technologies such as WiFi.
  • the SMF 183a, 183b may be connected to an AMF 182a, 182b in the CN 115 via an N11 interface.
  • the SMF 183a, 183b may also be connected to a UPF 184a, 184b in the CN 115 via an N4 interface.
  • the SMF 183a, 183b may select and control the UPF 184a, 184b and configure the routing of traffic through the UPF 184a, 184b.
  • the SMF 183a, 183b may perform other functions, such as managing and allocating UE IP addresses, managing PDU sessions, controlling policy enforcement and QoS, providing downlink data notifications, and the like.
  • a PDU session type may be IP-based, non-IP based, Ethernet-based, and the like.
  • the UPF 184a, 184b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N3 interface, which may provide the WTRUs 102a, 102b, 102c with access to packet- switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the UPF 184a, 184b may perform other functions, such as routing and forwarding packets, enforcing user plane policies, supporting multi-homed PDU sessions, handling user plane QoS, buffering downlink packets, providing mobility anchoring, and the like.
  • the CN 115 may facilitate communications with other networks.
  • the CN 115 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 115 and the PSTN 108.
  • the CN 115 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
  • the WTRUs 102a, 102b, 102c may be connected to a local Data Network (DN) 185a, 185b through the UPF 184a, 184b via the N3 interface to the UPF 184a, 184b and an N6 interface between the UPF 184a, 184b and the DN 185a, 185b.
  • one or more, or all, of the functions described herein with regard to one or more of: WTRU 102a-d, Base Station 114a-b, eNode-B 160a-c, MME 162, SGW 164, PGW 166, gNB 180a-c, AMF 182a-b, UPF 184a-b, SMF 183a-b, DN 185a-b, and/or any other device(s) described herein, may be performed by one or more emulation devices (not shown).
  • the emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein.
  • the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
  • the emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment.
  • the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network.
  • the one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network.
  • the emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
  • the one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network.
  • the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components.
  • the one or more emulation devices may be test equipment.
  • Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
  • At least one of the aspects generally relates to video encoding and decoding, and at least one other aspect generally relates to transmitting a bitstream generated or encoded.
  • These and other aspects may be implemented as a method, an apparatus, a computer readable storage medium having stored thereon instructions for encoding or decoding video data according to any of the methods described, and/or a computer readable storage medium having stored thereon a bitstream generated according to any of the methods described.
  • the terms “reconstructed” and “decoded” may be used interchangeably, the terms “pixel” and “sample” may be used interchangeably, the terms “image,” “picture” and “frame” may be used interchangeably.
  • each of the methods comprises one or more steps or actions for achieving the described method. Unless a specific order of steps or actions is required for proper operation of the method, the order and/or use of specific steps and/or actions may be modified or combined. Additionally, terms such as “first”, “second”, etc. may be used in various embodiments to modify an element, component, step, operation, etc., such as, for example, a "first decoding” and a "second decoding”. Use of such terms does not imply an ordering to the modified operations unless specifically required. So, in this example, the first decoding need not be performed before the second decoding, and may occur, for example, before, during, or in an overlapping time period with the second decoding.
  • Various methods and other aspects described in this application may be used to modify modules, for example, intra prediction and/or decoding modules (260, 330), of a video encoder 200 and decoder 300 as shown in FIG. 2 and FIG. 3.
  • the subject matter disclosed herein presents aspects that are not limited to VVC or HEVC, and may be applied, for example, to any type, format or version of video coding, whether described in a standard or a recommendation, whether pre-existing or future-developed, and extensions of any such standards and recommendations (e.g., including VVC and HEVC).
  • the aspects described in this application may be used individually or in combination.
  • FIG. 2 illustrates a diagram showing an example video encoder. Variations of example encoder 200 are contemplated, but the encoder 200 is described below for purposes of clarity without describing all expected variations.
  • the video sequence may go through pre-encoding processing (201), for example, applying a color transform to the input color picture (e.g., conversion from RGB 4:4:4 to YCbCr 4:2:0), or performing a remapping of the input picture components in order to get a signal distribution more resilient to compression (for instance using a histogram equalization of one of the color components).
  • Metadata may be associated with the pre-processing, and may be attached to video data (e.g., the bitstream).
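As one concrete instance of the color transform mentioned above (the BT.709 full-range matrix is an assumption; the text does not fix a specific matrix), the conversion from RGB 4:4:4 to YCbCr could look like the sketch below, with 4:2:0 chroma subsampling following as a separate step:

```python
import numpy as np

def rgb_to_ycbcr_bt709(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image (8-bit, full range) to YCbCr."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b  # luma (BT.709 weights)
    cb = (b - y) / 1.8556 + 128.0             # blue-difference chroma
    cr = (r - y) / 1.5748 + 128.0             # red-difference chroma
    return np.clip(np.stack([y, cb, cr], axis=-1), 0, 255)
```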
  • a picture is encoded by the encoder elements as described below.
  • the picture to be encoded is partitioned (202) and processed in units of, for example, coding units (CUs).
  • Each unit is encoded using, for example, either an intra or inter mode.
  • When a unit is encoded in an intra mode, it performs intra prediction (260). In an inter mode, motion estimation and compensation (270) are performed.
  • the encoder decides (205) which one of the intra mode or inter mode to use for encoding the unit, and indicates the intra/inter decision by, for example, a prediction mode flag.
  • Prediction residuals are calculated, for example, by subtracting (210) the predicted block from the original image block.
  • the prediction residuals are then transformed (225) and quantized (230).
  • the quantized transform coefficients, as well as motion vectors and other syntax elements, are entropy coded (245) to output a bitstream.
  • the encoder can skip the transform and apply quantization directly to the non-transformed residual signal.
  • the encoder can bypass both transform and quantization, e.g., the residual is coded directly without the application of the transform or quantization processes.
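A compact sketch of that residual path, covering the normal transform route, transform skip, and full bypass; the flat quantization step stands in for a real quantizer, and `transform` is a placeholder for whatever block transform the codec would apply:

```python
import numpy as np

def encode_residual(block: np.ndarray, prediction: np.ndarray,
                    qstep: float, transform=None,
                    bypass: bool = False) -> np.ndarray:
    """Subtract the prediction (210), then transform (225) and quantize
    (230) the residual, honoring the skip/bypass options described above."""
    residual = block.astype(np.int64) - prediction.astype(np.int64)
    if bypass:                    # transform and quantization both bypassed:
        return residual           # the residual is coded directly
    coeffs = transform(residual) if transform else residual  # None = transform skip
    return np.round(coeffs / qstep).astype(np.int64)         # scalar quantization
```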
  • the encoder decodes an encoded block to provide a reference for further predictions.
  • the quantized transform coefficients are de-quantized (240) and inverse transformed (250) to decode prediction residuals.
  • In-loop filters (265) are applied to the reconstructed picture to perform, for example, deblocking/SAO (Sample Adaptive Offset) filtering to reduce encoding artifacts.
  • the filtered image is stored at a reference picture buffer (280).
  • FIG. 3 illustrates a diagram showing an example of a video decoder.
  • a bitstream is decoded by the decoder elements as described below.
  • Video decoder 300 generally performs a decoding pass reciprocal to the encoding pass as described in FIG. 2.
  • the encoder 200 also generally performs video decoding as part of encoding video data.
  • the input of the decoder includes a video bitstream, which may be generated by video encoder 200.
  • the bitstream is first entropy decoded (330) to obtain transform coefficients, motion vectors, and other coded information.
  • the picture partition information indicates how the picture is partitioned.
  • the decoder may therefore divide (335) the picture according to the decoded picture partitioning information.
  • the transform coefficients are de-quantized (340) and inverse transformed (350) to decode the prediction residuals. Combining (355) the decoded prediction residuals and the predicted block, an image block is reconstructed.
  • the predicted block may be obtained (370) from intra prediction (360) or motion-compensated prediction (e.g., inter prediction) (375).
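And the reciprocal decoder-side reconstruction, sketched under the same placeholder assumptions: de-quantize (340), inverse transform (350), combine with the predicted block (355), and clip to the valid sample range:

```python
import numpy as np

def reconstruct_block(levels: np.ndarray, prediction: np.ndarray,
                      qstep: float, inverse_transform=None,
                      bit_depth: int = 8) -> np.ndarray:
    """Mirror of encode_residual above (illustrative placeholders only)."""
    coeffs = levels.astype(np.float64) * qstep        # de-quantize
    residual = inverse_transform(coeffs) if inverse_transform else coeffs
    recon = prediction.astype(np.float64) + residual  # combine with prediction
    return np.clip(np.round(recon), 0, (1 << bit_depth) - 1).astype(np.uint16)
```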
  • In-loop filters (365) are applied to the reconstructed image.
  • the filtered image is stored at a reference picture buffer (380).
  • the decoded picture may further go through post-decoding processing (385), for example, an inverse color transform (e.g., conversion from YCbCr 4:2:0 to RGB 4:4:4) or an inverse remapping performing the inverse of the remapping process performed in the pre-encoding processing (201).
  • post-decoding processing can use metadata derived in the pre-encoding processing and signaled in the bitstream.
  • FIG. 4 illustrates a diagram showing an example of a system in which various aspects and embodiments described herein may be implemented.
  • System 400 may be embodied as a device including the various components described below and is configured to perform one or more of the aspects described in this document. Examples of such devices include, but are not limited to, various electronic devices such as personal computers, laptop computers, smartphones, tablet computers, digital multimedia set top boxes, digital television receivers, personal video recording systems, connected home appliances, and servers.
  • Elements of system 400, singly or in combination, may be embodied in a single integrated circuit (IC), multiple ICs, and/or discrete components.
  • the processing and encoder/decoder elements of system 400 are distributed across multiple ICs and/or discrete components.
  • system 400 is communicatively coupled to one or more other systems, or other electronic devices, via, for example, a communications bus or through dedicated input and/or output ports.
  • system 400 is configured to implement one or more of the aspects described in this document.
  • the system 400 may include at least one processor 410 configured to execute instructions loaded therein for implementing, for example, the various aspects described in this document.
  • Processor 410 may include embedded memory, input output interface, and various other circuitries as known in the art.
  • the system 400 includes at least one memory 420 (e.g., a volatile memory device, and/or a nonvolatile memory device).
  • System 400 may include a storage device 440, which may include non-volatile memory and/or volatile memory, including, but not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), flash, magnetic disk drive, and/or optical disk drive.
  • the storage device 440 may include an internal storage device, an attached storage device (including detachable and non-detachable storage devices), and/or a network accessible storage device, as non-limiting examples.
  • System 400 may include an encoder/decoder module 430 configured, for example, to process data to provide an encoded video or decoded video, and the encoder/decoder module 430 may include its own processor and memory.
  • the encoder/decoder module 430 represents module(s) that may be included in a device to perform the encoding and/or decoding functions. As is known, a device may include one or both of the encoding and decoding modules. Additionally, encoder/decoder module 430 may be implemented as a separate element of system 400 or may be incorporated within processor 410 as a combination of hardware and software as known to those skilled in the art.
  • Program code to be loaded onto processor 410 or encoder/decoder 430 to perform the various aspects described in this application may be stored in storage device 440 and subsequently loaded onto memory 420 for execution by processor 410.
  • one or more of processor 410, memory 420, storage device 440, and encoder/decoder module 430 may store one or more of various items during the performance of the processes described in this document. Such stored items may include, but are not limited to, the input video, the decoded video or portions of the decoded video, the bitstream, matrices, variables, and intermediate or final results from the processing of equations, formulas, operations, and operational logic.
  • memory inside of the processor 410 and/or the encoder/decoder module 430 may be used to store instructions and to provide working memory for processing that is needed during encoding or decoding.
  • a memory external to the processing device (for example, the processing device may be either the processor 410 or the encoder/decoder module 430) is used for one or more of these functions.
  • the external memory may be the memory 420 and/or the storage device 440, for example, a dynamic volatile memory and/or a non-volatile flash memory.
  • an external non-volatile flash memory is used to store the operating system of, for example, a television.
  • a fast external dynamic volatile memory such as a RAM is used as working memory for video coding and decoding operations, such as, for example, MPEG-2 (MPEG refers to the Moving Picture Experts Group, MPEG-2 is also referred to as ISO/IEC 13818, and 13818-1 is also known as H.222, and 13818-2 is also known as H.262), HEVC (HEVC refers to High Efficiency Video Coding, also known as H.265 and MPEG-H Part 2), or VVC (Versatile Video Coding, a new standard being developed by JVET, the Joint Video Experts Team).
  • the input to the elements of system 400 may be provided through various input devices as indicated in block 445.
  • Such input devices include, but are not limited to, (i) a radio frequency (RF) portion that receives an RF signal transmitted, for example, over the air by a broadcaster, (ii) a Component (COMP) input terminal (or a set of COMP input terminals), (iii) a Universal Serial Bus (USB) input terminal, and/or (iv) a High Definition Multimedia Interface (HDMI) input terminal.
  • the input devices of block 445 have associated respective input processing elements as known in the art.
  • the RF portion may be associated with elements suitable for (i) selecting a desired frequency (also referred to as selecting a signal, or band-limiting a signal to a band of frequencies), (ii) downconverting the selected signal, (iii) band-limiting again to a narrower band of frequencies to select (for example) a signal frequency band which may be referred to as a channel in certain embodiments, (iv) demodulating the downconverted and band-limited signal, (v) performing error correction, and (vi) demultiplexing to select the desired stream of data packets.
  • the RF portion of various embodiments includes one or more elements to perform these functions, for example, frequency selectors, signal selectors, band-limiters, channel selectors, filters, downconverters, demodulators, error correctors, and demultiplexers.
  • the RF portion can include a tuner that performs various of these functions, including, for example, downconverting the received signal to a lower frequency (for example, an intermediate frequency or a near-baseband frequency) or to baseband.
  • the RF portion and its associated input processing element receives an RF signal transmitted over a wired (for example, cable) medium, and performs frequency selection by filtering, downconverting, and filtering again to a desired frequency band.
  • Adding elements can include inserting elements in between existing elements, such as, for example, inserting amplifiers and an analog-to-digital converter.
  • the RF portion includes an antenna.
  • the USB and/or HDMI terminals may include respective interface processors for connecting system 400 to other electronic devices across USB and/or HDMI connections.
  • various aspects of input processing, for example, Reed-Solomon error correction, may be implemented, for example, within a separate input processing IC or within processor 410 as necessary.
  • aspects of USB or HDMI interface processing may be implemented within separate interface ICs or within processor 410 as necessary.
  • the demodulated, error corrected, and demultiplexed stream is provided to various processing elements, including, for example, processor 410, and encoder/decoder 430 operating in combination with the memory and storage elements to process the datastream as necessary for presentation on an output device.
  • the various elements of system 400 may be interconnected using a suitable connection arrangement 425, for example, an internal bus as known in the art, including the Inter-IC (I2C) bus, wiring, and printed circuit boards.
  • the system 400 may include a communication interface 450 that enables communication with other devices via communication channel 460.
  • the communication interface 450 can include, but is not limited to, a transceiver configured to transmit and to receive data over communication channel 460.
  • the communication interface 450 may include, but is not limited to, a modem or network card and the communication channel 460 may be implemented, for example, within a wired and/or a wireless medium.
  • Data may be streamed, or otherwise provided, to the system 400, in various embodiments, using a wireless network such as a Wi-Fi network, for example IEEE 802.11 (IEEE refers to the Institute of Electrical and Electronics Engineers).
  • the Wi-Fi signal of these examples is received over the communications channel 460 and the communications interface 450 which are adapted for Wi-Fi communications.
  • the communications channel 460 of the one or more embodiments may be typically connected to an access point or router that provides access to external networks including the Internet for allowing streaming applications and other over-the-top communications.
  • Other embodiments may provide streamed data to the system 400 using a set-top box that delivers the data over the HDMI connection of the input block 445.
  • Still other embodiments may provide streamed data to the system 400 using the RF connection of the input block 445.
  • various embodiments may provide data in a nonstreaming manner. Additionally and/or alternatively, various embodiments may use one or more wireless networks other than Wi-Fi, for example a cellular network or a Bluetooth network.
  • the system 400 may provide an output signal to various output devices, including a display 475, speakers 485, and other peripheral devices 495.
  • the display 475 of various embodiments may include one or more of a touchscreen display, an organic light-emitting diode (OLED) display, a curved display, a foldable display, and/or the like.
  • the display 475 may be for a television, a tablet, a laptop, a cell phone (mobile phone), or other device.
  • the display 475 may also be integrated with other components (for example, as in a smart phone), or separate (for example, an external monitor for a laptop).
  • the other peripheral devices 495 may include, in various examples of embodiments, one or more of a stand-alone digital video disc (or digital versatile disc) (DVD, for both terms), a disk player, a stereo system, and/or a lighting system.
  • Various embodiments may use one or more peripheral devices 495 that provide a function based on the output of the system 400. For example, a disk player may perform the function of playing the output of the system 400.
  • control signals may be communicated between the system 400 and the display 475, speakers 485, or other peripheral devices 495 using signaling such as AV.Link, Consumer Electronics Control (CEC), or other communications protocols that enable device-to-device control with or without user intervention.
  • the output devices may be communicatively coupled to system 400 via dedicated connections through respective interfaces 470, 480, and 490. Additionally and/or alternatively, the output devices may be connected to system 400 using the communications channel 460 via the communications interface 450.
  • the display 475 and speakers 485 may be integrated in a single unit with the other components of system 400 in an electronic device such as, for example, a television.
  • the display interface 470 may include a display driver, such as, for example, a timing controller (T Con) chip.
  • the display 475 and speakers 485 may additionally and/or alternatively be separate from one or more of the other components, for example, if the RF portion of input 445 is part of a separate set-top box.
  • the output signal may be provided via dedicated output connections, including, for example, HDMI ports, USB ports, or COMP outputs.
  • the embodiments may be carried out by computer software implemented by the processor 410 or by hardware, or by a combination of hardware and software. As a non-limiting example, the embodiments may be implemented by one or more integrated circuits.
  • the memory 420 may be of any type appropriate to the technical environment and may be implemented using any appropriate data storage technology, such as optical memory devices, magnetic memory devices, semiconductor-based memory devices, fixed memory, and removable memory, as non-limiting examples.
  • the processor 410 may be of any type appropriate to the technical environment, and can encompass one or more of microprocessors, general purpose computers, special purpose computers, and processors based on a multi-core architecture, as non-limiting examples.
  • Decoding may encompass all or part of the processes performed, for example, on a received encoded sequence in order to produce a final output suitable for display.
  • processes include one or more of the processes typically performed by a decoder, for example, entropy decoding, inverse quantization, inverse transformation, and differential decoding.
  • processes also, or alternatively, include processes performed by a decoder of various implementations described in this application, for example, receiving an indication of a candidate mode template metric; deriving a candidate mode based on the candidate mode template metric; and decoding a block based on the candidate mode.
  • decoding may refer only to entropy decoding.
  • decoding may refer only to differential decoding, and in another embodiment “decoding” refers to a combination of entropy decoding and differential decoding. Whether the phrase “decoding process” is intended to refer specifically to a subset of operations or generally to the broader decoding process may be clear based on the context of the specific descriptions and may be believed to be well understood by those skilled in the art.
  • encoding may encompass all or part of the processes performed, for example, on an input video sequence in order to produce an encoded bitstream.
  • processes include one or more of the processes typically performed by an encoder, for example, partitioning, differential encoding, transformation, quantization, and entropy encoding.
  • such processes also, or alternatively, include processes performed by an encoder of various implementations described in this application, for example, deriving a first candidate mode based on a first candidate mode template metric associated with the first candidate mode; deriving a second candidate mode based on a second candidate mode template metric associated with the second candidate mode; selecting a candidate mode, from the first candidate mode and the second candidate mode; and including, in video data, an indication of the candidate mode template metric associated with the selected candidate mode.
  • encoding may refer only to entropy encoding.
  • encoding may refer only to differential encoding, and in another embodiment “encoding” may refer to a combination of differential encoding and entropy encoding. Whether the phrase “encoding process” is intended to refer specifically to a subset of operations or generally to the broader encoding process may be clear based on the context of the specific descriptions and may be believed to be well understood by those skilled in the art.
  • syntax elements as used herein, for example, sps_intra_template_matching_metric_id, sh_intra_template_matching_metric_id, intra_template_matching_metric_predictor_id, intra_template_matching_metric_fusion_id, etc., may be descriptive terms. As such, the syntax elements may not preclude the use of other syntax element names.
  • When a figure is presented as a flow diagram, it should be understood that it also provides a block diagram of a corresponding apparatus. Similarly, when a figure is presented as a block diagram, it should be understood that it also provides a flow diagram of a corresponding method/process.
  • Various embodiments may refer to rate distortion optimization.
  • the balance or trade-off between the rate and distortion may usually be considered, often given the constraints of computational complexity.
  • the rate distortion optimization may usually be formulated as minimizing a rate distortion function, which is a weighted sum of the rate and of the distortion. There may be one or more different approaches to solve the rate distortion optimization problem.
  • the approaches may be based on an extensive testing of all encoding options, including all considered modes or coding parameters values, with a complete evaluation of their coding cost and related distortion of the reconstructed signal after coding and decoding.
  • Faster approaches may also be used, to save encoding complexity, in particular with computation of an approximated distortion based on the prediction or the prediction residual signal, not the reconstructed one.
  • A mix of these two approaches may also be used, such as by using an approximated distortion for only some of the possible encoding options, and a complete distortion for other encoding options.
  • Other approaches may only evaluate a subset of the possible encoding options. More generally, many approaches may employ any of a variety of techniques to perform the optimization, but the optimization may not be necessarily a complete evaluation of both the coding cost and related distortion.
  • the implementations and aspects described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed can also be implemented in other forms (for example, an apparatus or program).
  • An apparatus may be implemented in, for example, appropriate hardware, software, and firmware.
  • the methods may be implemented in, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs”), and other devices that facilitate communication of information between end-users.
  • references to "one embodiment," "an embodiment," "an example," "one implementation" or "an implementation," as well as other variations thereof, mean that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment.
  • the appearances of the phrase “in one embodiment,” “in an embodiment,” “in an example,” “in one implementation,” or “in an implementation”, as well any other variations, appearing in various places throughout this application are not necessarily all referring to the same embodiment or example.
  • this application may refer to "determining” various pieces of information. Determining the information may include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory. Obtaining may include receiving, retrieving, constructing, generating, and/or determining.
  • this application may refer to "accessing" various pieces of information. Accessing the information may include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, moving the information, copying the information, calculating the information, determining the information, predicting the information, or estimating the information.
  • this application may refer to "receiving” various pieces of information. Receiving may be, as with “accessing”, intended to be a broad term. Receiving the information may include one or more of, for example, accessing the information, or retrieving the information (for example, from memory). Further, “receiving” may typically be involved, in one way or another, during operations such as, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
  • such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C).
  • This may be extended, as is clear to one of ordinary skill in this and related arts, for as many items as are listed.
  • the word "signal” refers to, among other things, indicating something to a corresponding decoder.
  • the same parameter is used at both the encoder side and the decoder side.
  • an encoder may transmit (e.g., explicit signaling) a particular parameter to the decoder so that the decoder can use the same particular parameter.
  • signaling may be used without transmitting (e.g., implicit signaling) to simply allow the decoder to know and select the particular parameter. By avoiding transmission of any actual functions, a bit savings is realized in various embodiments.
  • signaling may be accomplished in a variety of ways. For example, one or more syntax elements, flags, and so forth are used to signal information to a corresponding decoder in various embodiments. While the preceding relates to the verb form of the word "signal”, the word “signal” can also be used herein as a noun.
  • implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted.
  • the information may include, for example, instructions for performing a method, or data produced by one of the described implementations.
  • a signal may be formatted to carry the bitstream of a described embodiment.
  • Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
  • the formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
  • the information that the signal carries may be, for example, analog or digital information.
  • the signal may be transmitted over a variety of different wired or wireless links, as is known.
  • the signal may be stored on a processor- readable medium.
  • embodiments are described herein. Features of embodiments may be provided alone or in any combination, across various claim categories and types. Further, embodiments may include one or more of the features, devices, or aspects described herein, alone or in any combination, across various claim categories and types. For example, features described herein may be implemented in a bitstream or signal that includes information generated as described herein. The information may allow a decoder to decode a bitstream according to any of the embodiments described. For example, features described herein may be implemented by creating and/or transmitting and/or receiving and/or decoding a bitstream or signal.
  • features described herein may be implemented in a method, process, apparatus, medium storing instructions, medium storing data, or signal.
  • features described herein may be implemented by a TV, set-top box, cell phone, tablet, or other electronic device that performs decoding.
  • the TV, set-top box, cell phone, tablet, or other electronic device may display (e.g., using a monitor, screen, or other type of display) a resulting image (e.g., an image from residual reconstruction of the video bitstream).
  • the TV, set-top box, cell phone, tablet, or other electronic device may receive a signal including an encoded image and perform decoding.
  • FIG. 5 illustrates an example of an intra template matching search area used in intra template matching prediction (IntraTMP).
  • IntraTMP is an intra prediction mode that may copy the best prediction block from the reconstructed part of the current picture (e.g., current frame), whose L-shaped template matches the current template.
  • the encoder may search for the most similar template to the current template in a reconstructed part of the current picture and use the corresponding block as the prediction block.
  • the encoder may (e.g., may then) signal the usage of this mode.
  • the same prediction operation may be performed at the decoder side.
  • the prediction signal may be generated by matching the L-shaped causal neighbor of the current block with another block in a predefined search area illustrated in FIG. 5.
  • a sum of absolute differences (SAD) may be used as a cost function.
  • the decoder may search for the template that has the least SAD with respect to the current one and may use its corresponding block as a prediction block.
  • the dimensions of the regions (SearchRange_w, SearchRange_h) may be set proportional to the block dimension (BlkW, BlkH) to have a fixed number of SAD comparisons per pixel. That is:
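A plausible reconstruction of the proportionality rule (the scale factor a is an assumption; its value is not given in the surrounding text):

$$\mathrm{SearchRange\_w} = a \cdot \mathrm{BlkW}, \qquad \mathrm{SearchRange\_h} = a \cdot \mathrm{BlkH}$$

The following sketch illustrates the SAD-driven L-shaped template search under that rule. It is a simplified, hypothetical implementation: it scans only positions above and to the left of the current block, and it omits the exact search-region decomposition and the availability checks a real codec performs.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-shaped arrays."""
    return np.abs(a.astype(np.int64) - b.astype(np.int64)).sum()

def intra_tmp_search(recon, cur_x, cur_y, blk_w, blk_h, tpl=4, a=5):
    """Search the reconstructed picture area `recon` for the position whose
    L-shaped template (tpl-wide top and left strips) best matches the
    current block's template. The search window scales with the block size
    so the SAD cost per pixel stays roughly constant."""
    cur_top  = recon[cur_y - tpl:cur_y, cur_x - tpl:cur_x + blk_w]
    cur_left = recon[cur_y:cur_y + blk_h, cur_x - tpl:cur_x]
    best_cost, best_xy = None, None
    for y in range(max(tpl, cur_y - a * blk_h), cur_y + 1):
        for x in range(max(tpl, cur_x - a * blk_w), cur_x + 1):
            if (x, y) == (cur_x, cur_y):
                continue  # skip the current block itself
            cost = (sad(cur_top,  recon[y - tpl:y, x - tpl:x + blk_w]) +
                    sad(cur_left, recon[y:y + blk_h, x - tpl:x]))
            if best_cost is None or cost < best_cost:
                best_cost, best_xy = cost, (x, y)
    return best_xy, best_cost
```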
  • the IntraTMP tool may be enabled for CUs with size less than or equal to 64 in width and height. This maximum CU size for intraTMP may be configurable.
  • the IntraTMP mode may be signaled at the CU level through a dedicated flag.
  • Multiple candidates may be selected for IntraTMP.
  • An index to indicate the best candidate may be coded into the bitstream.
  • a fusion method of IntraTMP (e.g., where the fusion weights are derived by template costs or fixed values) may be used.
  • the number of fused blocks may be determined by a threshold. Up to three candidate blocks may be fused. If one (e.g., only one) block is selected for fusion, the final predictor may be a fusion of the selected matched block and the intra predictor derived by Planar mode.
  • Sub-pel precision may be supported in IntraTMP.
  • the sub-pel positions around the integer-pel position obtained by template matching may be enabled for IntraTMP.
  • a Discrete Cosine Transform-based interpolation filter (DCT-IF) filter may be used for sub-pel interpolation.
  • FIG. 6 illustrates an example of selection of the two best predictors for IntraTMP.
  • Adaptive IntraTMP fusion (e.g., with up to two prediction blocks corresponding to the best and second-best TM cost) may be used, for example, as shown in FIG. 6, where P1 is the prediction data with the lowest TM cost and P2 is the prediction data with the second-lowest TM cost. Two conditions may be applied to decide whether to apply fusion. The weights derivation may be the same as in TIMD. The fused prediction (e.g., the final fused prediction) may be the weighted sum of P1 and P2.
  • Other (e.g., three additional) IntraTMP modes may be used (e.g., left template mode, above template mode, and L-shape fusion mode).
  • the left and above template modes may use (e.g., only) the left side and above side, respectively, to derive the template matching candidates.
  • the L-shape fusion mode may fuse the best two template matching candidates based on a cost-based linear combination formula.
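A minimal sketch of such a cost-based linear combination. The TIMD-style weights w1 = cost2 / (cost1 + cost2) are an assumption for illustration; the exact formula is not reproduced in the surrounding text.

```python
import numpy as np

def fuse_two_best(p1, p2, cost1, cost2):
    """Fuse the two best template-matching candidates; the lower-cost
    candidate receives the larger weight."""
    w1 = cost2 / (cost1 + cost2)  # assumed TIMD-style weighting
    w2 = cost1 / (cost1 + cost2)
    return w1 * p1.astype(np.float64) + w2 * p2.astype(np.float64)
```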
  • FIG. 7 illustrates an example of left and top template shapes.
  • Feature(s) associated with template-based intra mode derivation are provided herein.
  • FIG. 8A illustrates an example template area (820) and reference samples (810) used for deriving the TIMD modes and associated weights.
  • FIG. 8B illustrates example reference samples (830) used to derive intra prediction for the current block.
  • For an intra prediction mode (e.g., each intra prediction mode in the MPMs, possibly supplemented with the default modes PLANAR and DC), the sum of absolute transformed differences (SATD) between the prediction of the template built from the reconstructed reference samples (810) and the reconstructed samples of the template (820) may be calculated.
  • Two intra prediction modes (IPMs) with the minimum SATD may be selected as the TIMD modes. These two TIMD modes may be fused with the weights (weight1, weight2) after applying a position dependent intra prediction combination (PDPC) process.
  • the weighted intra prediction may be used to code the current CU.
  • Position dependent intra prediction combination (PDPC) may be included in the derivation of the TIMD modes.
  • the costs of the two selected modes may be compared with a threshold.
  • a cost factor of 2 may be applied as follows: costMode2 < 2 * costMode1. If this condition is true, the fusion may be applied. If the condition is not true, mode1 (e.g., only mode1) may be used.
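A compact sketch of the selection and fusion decision just described; satd_cost is an assumed callable returning the template SATD cost of a mode, and at least two candidate modes are assumed.

```python
def select_timd_modes(candidate_modes, satd_cost):
    """Pick the two modes with minimum template SATD and decide whether
    to fuse them using the costMode2 < 2 * costMode1 condition."""
    ranked = sorted(candidate_modes, key=satd_cost)
    mode1, mode2 = ranked[0], ranked[1]
    if satd_cost(mode2) < 2 * satd_cost(mode1):
        return mode1, mode2, True    # fuse the two modes
    return mode1, None, False        # use only mode1
```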
  • Template matching may be a decoder-side Motion Vector (MV) derivation method.
  • TM may refine the motion information of the current CU by finding the closest match between a template (e.g., top and/or left neighboring blocks of the current CU) in the current picture and a block (e.g., of the same size as the template) in a reference picture.
  • FIG. 9 illustrates an example of template matching performed on a search area around an initial MV. As illustrated in FIG. 9, a better MV may be searched around the initial motion of the current CU (e.g., within a [- 8, +8]-pel search range). For a tested MV (e.g., each tested MV), a distortion may be computed between the samples in the template of the current block, and the samples in the template of the reference block associated with the MV.
  • a motion vector prediction (MVP) candidate may be determined based on a template matching error.
  • the MVP candidate that has the minimum difference between the current block template and the reference block template may be selected.
  • TM may be performed for (e.g., only for) this particular MVP candidate for MV refinement.
  • TM may be used to refine this MVP candidate.
  • TM may involve using (e.g., starting from) full-pel motion vector difference (MVD) precision (or 4-pel for 4-pel adaptive motion vector resolution (AMVR) mode) within a [-8, +8]-pel search range by using iterative diamond search.
  • the AMVP candidate may be refined (e.g., further refined) by using cross search with full-pel MVD precision (or 4-pel for 4-pel AMVR mode), and by using (e.g., followed sequentially by) half-pel or quarter-pel cross searches.
  • This search process may ensure that the MVP candidate keeps the same MV precision as indicated by the AMVR mode (e.g., even after the TM process).
  • the search process may be terminated early, for example, when the template matching cost falls below a threshold; the threshold may be equal to the area of the block.
  • TM may be performed to 1/8-pel MVD precision level or may be performed by skipping precisions beyond half-pel MVD precision.
  • the way in which TM is performed may depend on whether the alternative interpolation filter (e.g., that is used when AMVR is of half-pel mode) is used according to merged motion information. If TM mode is enabled, template matching may work as an independent process, or as an extra MV refinement process between block-based and subblock-based bilateral matching (BM) methods (e.g., depending on whether BM may be enabled according to an enabling condition check).
  • FIG. 10 illustrates example spatial GPM candidates.
  • SGPM is an intra mode that may resemble the inter coding tool of GPM.
  • the two prediction parts may be generated by an intra prediction process.
  • a candidate list may be built with each entry containing a partition split and two intra prediction modes, as illustrated in FIG. 10.
  • One or more (e.g., 26) partition modes and one or more (e.g., three) intra prediction modes may be used to form the combinations.
  • the length of the candidate list may be set to 16.
  • the selected candidate index may be signaled.
  • FIG. 11 illustrates an example SGPM template.
  • the list may be reordered using the SGPM template.
  • the SAD between the prediction and reconstruction of the template may be used for ordering.
  • the template size may be fixed to one.
  • an IPM list may be derived (e.g., including a TIMD-derived mode, horizontal mode, and vertical mode).
  • the IPM list size may be set to three.
  • a TIMD-derived mode may be replaced by one or more (e.g., two) derived modes with horizontal and vertical orientations.
  • the SGPM mode may be applied with a restricted block size (e.g., 4 ≤ width ≤ 64 and 4 ≤ height ≤ 64, where width ≤ height * 8, height ≤ width * 8, and width * height > 32).
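A small predicate capturing that restriction as reconstructed above (the inequality directions are inferred from context):

```python
def sgpm_allowed(width: int, height: int) -> bool:
    """Block-size restriction for SGPM: dimensions in [4, 64], aspect
    ratio at most 8, and more than 32 samples."""
    return (4 <= width <= 64 and 4 <= height <= 64
            and width <= height * 8 and height <= width * 8
            and width * height > 32)
```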
  • FIG. 12 illustrates an example of GPM blending.
  • Adaptive blending may be used for SGPM.
  • a blending depth T (e.g., illustrated in FIG. 12) may be derived as follows:
  • One or more template cost metrics may be used, for example:
  • SAD: sum of absolute differences
  • MR-SAD: mean-removed sum of absolute differences
  • SATD (or HAD): sum of absolute transform differences
  • MR-SATD (or MR-HAD): mean-removed sum of absolute transform differences
  • SSE: sum of square errors
  • the SAD metric may use the sum of absolute differences between pixels to compute the difference between blocks (e.g., two blocks). SAD may also be referred to as an L1 norm. The SAD metric may be determined using the following equation:
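The equation body does not survive in this text; the standard SAD definition consistent with the description is:

$$\mathrm{SAD}(P_1, P_2) = \sum_{i,j} \bigl| P_1(i,j) - P_2(i,j) \bigr|$$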
  • the MR-SAD metric may use the sum of absolute differences between pixels, minus the average value of the difference between pixel values, to compute the difference between blocks (e.g., two blocks).
  • the MR-SAD metric may be determined using the following equations:
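The equations do not survive in this text; a standard MR-SAD formulation consistent with the description, with N the number of samples, is:

$$\bar{d} = \frac{1}{N} \sum_{i,j} \bigl( P_1(i,j) - P_2(i,j) \bigr), \qquad \mathrm{MRSAD}(P_1, P_2) = \sum_{i,j} \bigl| P_1(i,j) - P_2(i,j) - \bar{d} \bigr|$$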
  • SATD may involve applying a transform (e.g., a Hadamard transform), and computing a SAD on the transform block. SATD considers that a transform may be applied on the block.
  • MR_SATD may involve applying (e.g., first applying) a transform (e.g., a Hadamard transform), and computing (e.g., then computing) a MR_SAD on the transform block.
  • MR_SATD considers that a transform may be applied on the block.
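A sketch of both transform-domain metrics, assuming a Hadamard transform on power-of-two square blocks. Normalization conventions vary between implementations, and removing the spatial mean before the transform (which zeroes the DC coefficient) is one common realization of the mean removal described above.

```python
import numpy as np
from scipy.linalg import hadamard

def satd(a, b):
    """SATD: Hadamard-transform the difference of two n x n blocks
    (n a power of two), then sum absolute transform coefficients."""
    n = a.shape[0]
    h = hadamard(n)
    d = a.astype(np.int64) - b.astype(np.int64)
    return np.abs(h @ d @ h.T).sum()

def mr_satd(a, b):
    """MR-SATD: remove the mean of the difference, then apply the same
    transform-domain sum of absolute coefficients."""
    d = a.astype(np.float64) - b.astype(np.float64)
    d -= d.mean()
    n = d.shape[0]
    h = hadamard(n)
    return np.abs(h @ d @ h.T).sum()
```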
  • the SSE metric may be determined using the following equation:
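The standard SSE definition consistent with the description is:

$$\mathrm{SSE}(P_1, P_2) = \sum_{i,j} \bigl( P_1(i,j) - P_2(i,j) \bigr)^{2}$$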
  • Feature(s) provided herein may add more diversity to the selection of candidates for tools using a template (e.g., by adding more metric functions during the candidate derivation process).
  • Multiple metrics/metric functions may be available during the derivation of candidates of tools that use a template (e.g., with IntraTMP).
  • MR-SAD or SATD may be used during the derivation of candidates of tools that use a template (e.g., based on high level signaling or block level signaling). Fusion candidates may be computed with different metrics.
  • SAD may be used as the cost function to derive the best intraTMP candidate.
  • SATD may be used on the template to derive TIMD mode(s).
  • The IntraTMP tool is used as an example to describe using different metrics for template-based analysis via the IntraTMP-related indications. It may be appreciated that other indication(s) may be used (e.g., independently) for TIMD or other tools. In some examples, an indication (e.g., a flag, a single flag) may be used to indicate the index of the metric used for one or more modes (e.g., all the modes) that use the template.
  • the template size may be equal to 1 pixel.
  • the SATD computation may be performed in one dimension.
  • IntraTMP may be used for natural content.
  • SATD may be used to derive the best intraTMP candidate (e.g., because a residual may be coded for natural content).
  • MR-SAD may be used for DC coefficients, as a DC coefficient may be easier to code.
  • Table 1: SPS syntax to signal the metric used for intra template matching
  • an intra template matching metric indication such as sps_intra_template_matching_metric_id, may indicate an identifier for the metric used (e.g., to be used) to derive the best candidate for the current sequence (e.g., the pictures in the video sequence).
  • a slice level syntax element (e.g., which may be present in the slice header or the picture header, as in Table 2) may be used to override the SPS parameter for a frame (e.g., each frame) to the content (e.g., because some sequences have a mix of computer generated and natural content).
  • Table 2: slice header syntax to signal the metric used for intra template matching
  • a slice-level intra template matching metric indication such as sh_intra_template_matching_metric_id, may indicate an identifier for the metric used to derive the best candidate for the current slice.
  • the value of sh_intra_template_matching_metric_id may override the value of sps_intra_template_matching_metric_id.
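A minimal sketch of the override rule, using the syntax element names from Tables 1 and 2:

```python
def effective_metric_id(sps_intra_template_matching_metric_id,
                        sh_intra_template_matching_metric_id=None):
    """The slice-level indication, when present, overrides the SPS-level
    indication for the current slice."""
    if sh_intra_template_matching_metric_id is not None:
        return sh_intra_template_matching_metric_id
    return sps_intra_template_matching_metric_id
```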
  • FIG. 13 illustrates the derivation (e.g., at the encoder) of the best candidate for a template tool (e.g., IntraTMP).
  • best candidates may be derived using different metrics.
  • the best candidate may be chosen, and the encoder may indicate the corresponding metric in video data.
  • a block-level indication (e.g., an index, a flag) may be coded to indicate the metric to be used for deriving the intra template matching predictor, as shown in Table 3.
  • a block-level intra template matching metric indication, such as intra_template_matching_metric_predictor_id, may indicate the metric used (e.g., to be used) to derive the best candidate for the current block.
  • an index may be signaled to indicate the best intraTMP predictor using a given derivation metric.
  • the intra_template_matching_metric_predictor_id syntax element may be coded in conjunction with the index of the best predictor among the N best selected.
  • the intra_template_matching_metric_predictor_id syntax element may indicate the derivation metric to be used for identifying the N best IntraTMP predictors.
  • two candidate metrics may be used (e.g., SAD and SATD).
  • a flag may be coded to indicate the metric for template-based analysis.
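A sketch of the encoder-side flow of FIG. 13. derive_best and rd_cost are assumed callables (candidate derivation under a given metric, and rate-distortion cost of a candidate); the returned syntax element name follows the block-level indication above.

```python
def derive_and_signal(metrics, derive_best, rd_cost):
    """Derive the best candidate under each available metric, keep the
    candidate with the lowest RD cost, and signal the winning metric."""
    best = None
    for metric_id, metric in enumerate(metrics):
        cand = derive_best(metric)
        cost = rd_cost(cand)
        if best is None or cost < best[0]:
            best = (cost, metric_id, cand)
    _, metric_id, cand = best
    return cand, {"intra_template_matching_metric_predictor_id": metric_id}
```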
  • FIG. 14 illustrates an example derivation of a template tool predictor (e.g., intraTMP) at the decoder.
  • the decoder may determine the metric to be used to derive the best intraTMP predictor, based on the intra template matching metric indication (e.g., in video data, bitstream).
  • FIG. 15 illustrates an example fusion of the best intraTMP predictors derived with different metrics. Multiple intraTMP candidates may be derived using different metrics (a candidate may be derived using a metric). The candidates may be fused to construct the final predictor, as illustrated in FIG. 15.
  • a fusion method of IntraTMP may be used.
  • the fusion weights may be derived based on template costs.
  • the number of fused blocks may be determined by a threshold. Up to three candidate blocks may be fused. If one (e.g., only one) block is selected for fusion, the final predictor may be a fusion of the selected matched block and the intra predictor derived by Planar mode.
  • An indication may be signaled to indicate which combination of candidates to use for fusion.
  • an intra template matching metric fusion indication may indicate the metrics used to derive the best candidates (e.g., the best candidates to fuse to predict the current block).
  • the fusion ID may be selected from among one or more combinations of metrics (e.g., from among three combinations, for example, SAD-SATD, SAD-MR_SAD, and SATD-MR_SATD). If SAD-SATD is selected, the fusion may be between the candidate derived with the SAD metric and the candidate derived with SATD metric. A threshold may be added to discard a candidate (e.g., one of the candidates).
  • the candidate may be discarded if the cost of the best candidate is lower than N times the cost of the worst candidate. For example, if CostSAD > 2 * CostSATD, the candidate derived with the SATD metric (e.g., only the candidate derived with the SATD metric) may be used. In this case, the SAD candidate may be discarded, and no fusion may be performed.
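A sketch of the multi-metric fusion with the discard threshold. Comparing raw costs across different metrics is a simplification, and the cost-based weights are an assumption; the factor 2 follows the CostSAD > 2 * CostSATD example above.

```python
def fuse_multi_metric(p_sad, cost_sad, p_satd, cost_satd, n=2.0):
    """Fuse the best candidates derived with two different metrics,
    discarding one candidate when it is clearly worse than the other."""
    if cost_sad > n * cost_satd:
        return p_satd                       # discard the SAD candidate
    if cost_satd > n * cost_sad:
        return p_sad                        # discard the SATD candidate
    w = cost_satd / (cost_sad + cost_satd)  # lower cost -> larger weight
    return w * p_sad + (1.0 - w) * p_satd
```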
  • Examples of computer-readable storage media include a read-only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile discs (DVDs).
  • a processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

EP24718138.1A 2023-04-07 2024-04-05 Additional template tool metrics Pending EP4690803A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP23305511 2023-04-07
PCT/EP2024/059333 WO2024209051A1 (en) 2023-04-07 2024-04-05 Additional template tool metrics

Publications (1)

Publication Number Publication Date
EP4690803A1 (de) 2026-02-11

Family

ID=86226769

Family Applications (1)

Application Number Title Priority Date Filing Date
EP24718138.1A Pending EP4690803A1 (de) 2023-04-07 2024-04-05 Additional template tool metrics

Country Status (4)

Country Link
EP (1) EP4690803A1 (de)
KR (1) KR20250169351A (de)
CN (1) CN120917749A (de)
WO (1) WO2024209051A1 (de)


Also Published As

Publication number Publication date
CN120917749A (zh) 2025-11-07
WO2024209051A1 (en) 2024-10-10
KR20250169351A (ko) 2025-12-02


Legal Events

Date Code Title Description
STAA (status of an EP patent application or granted EP patent): UNKNOWN
STAA (status): THE INTERNATIONAL PUBLICATION HAS BEEN MADE
PUAI: Public reference made under article 153(3) EPC to a published international application that has entered the European phase (ORIGINAL CODE: 0009012)
STAA (status): REQUEST FOR EXAMINATION WAS MADE
17P: Request for examination filed (effective date: 20251020)
AK: Designated contracting states (kind code of ref document: A1): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR