US20220224915A1 - Usage of templates for decoder-side intra mode derivation - Google Patents

Usage of templates for decoder-side intra mode derivation

Info

Publication number
US20220224915A1
Authority
US
United States
Prior art keywords
template
sub
block
current video
video block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US17/148,383
Other versions
US11388421B1 (en)
Inventor
Yang Wang
Kai Zhang
Li Zhang
Yuwen He
Hongbin Liu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lemon Inc USA
Original Assignee
Beijing Zitiao Network Technology
Lemon Inc USA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology, Lemon Inc USA filed Critical Beijing Zitiao Network Technology
Priority to US17/148,383 (granted as US11388421B1)
Assigned to BEIJING ZITIAO NETWORK TECHNOLOGY. Assignors: LIU, HONGBIN
Assigned to BEIJING OCEAN ENGINE NETWORK TECHNOLOGY CO., LTD. Assignors: WANG, YANG
Assigned to BYTEDANCE INC. Assignors: HE, YUWEN; ZHANG, KAI; ZHANG, LI
Assigned to LEMON INC. Assignors: BEIJING OCEAN ENGINE NETWORK TECHNOLOGY CO., LTD.
Assigned to LEMON INC. Assignors: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
Assigned to LEMON INC. Assignors: BYTEDANCE INC.
Priority to CN202210028454.4A (published as CN114765688A)
Priority to US17/837,867 (granted as US11902537B2)
Publication of US11388421B1
Application granted
Publication of US20220224915A1
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals (parent group of all classifications below)
    • H04N 19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N 19/593: Predictive coding involving spatial prediction techniques
    • H04N 19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N 19/117: Filters, e.g. for pre-processing or post-processing
    • H04N 19/124: Quantisation
    • H04N 19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N 19/176: Adaptive coding characterised by the coding unit, the unit being an image region that is a block, e.g. a macroblock
    • H04N 19/184: Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
    • H04N 19/186: Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N 19/42: Characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/70: Characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N 19/86: Pre-processing or post-processing involving reduction of coding artifacts, e.g. of blockiness
    • H04N 19/91: Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • H04N 19/96: Tree coding, e.g. quad-tree coding

Definitions

  • the present disclosure relates generally to video coding, and more particularly, to video encoding and decoding based on templates for decoder-side intra-prediction mode derivation.
  • a video may include a plurality of frames, each frame including blocks of samples.
  • the method may include constructing, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates.
  • the method may include deriving at least one intra-prediction mode (IPM) based on cost calculations.
  • the method may include determining, based on the at least one IPM, a final predictor of the current video block.
  • the method may include performing the conversion based on the final predictor.
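  • The four steps above can be pictured with the following minimal Python sketch. It is a non-normative illustration, not the patent's procedure: the template geometry (an above-only template of tl rows), the two toy candidate modes ("DC", "VER"), and all function and variable names are assumptions made for the example.

```python
import numpy as np

def derive_ipm(recon, x0, y0, bw, tl=2, candidates=("DC", "VER")):
    # 1) construct a template set: tl reconstructed rows above the block
    template = recon[y0 - tl:y0, x0:x0 + bw]
    ref_row = recon[y0 - tl - 1, x0:x0 + bw]      # template-reference samples
    best, best_cost = None, float("inf")
    for ipm in candidates:
        # 2) predict the template with each candidate IPM and score it
        if ipm == "VER":
            pred = np.tile(ref_row, (tl, 1))          # copy reference row down
        else:
            pred = np.full((tl, bw), ref_row.mean())  # flat DC prediction
        cost = np.abs(pred - template).sum()          # SAD over the template
        if cost < best_cost:
            best, best_cost = ipm, cost
    # 3) the winning mode forms the final predictor of the block itself,
    # 4) and encoding/decoding (the "conversion") proceeds with it.
    return best

recon = np.arange(256, dtype=float).reshape(16, 16)   # stand-in reconstruction
print(derive_ipm(recon, x0=8, y0=8, bw=4))
```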
  • the plurality of sub-templates includes at least one of the following: a left sub-template, an above sub-template, a right-above sub-template, a left-below sub-template, and a left-above sub-template.
  • the plurality of sub-templates includes non-adjacent samples of the current video block.
  • the at least one template set includes a single sub-template, and wherein the single sub-template is one of a left sub-template, an above sub-template, a right-above sub-template, a left-below sub-template, or a left-above sub-template.
  • the at least one template set includes any one of the following:
  • the at least one template set is selected from the plurality of sub-templates based on coding information for the current video block, wherein the coding information includes a block dimension or block shape.
  • the at least one template set may be selected from the plurality of sub-templates based on a relationship between a block dimension and a pre-defined threshold.
  • BW may represent a block width of the current video block and BH may represent a block height of the current video block.
  • a left sub-template is not selected in a case that the BW divided by the BH is greater than or equal to a first threshold.
  • an above sub-template is not selected in a case that the BH divided by the BW is greater than or equal to a second threshold.
  • a dimension of one of the plurality of sub-templates is based on at least one of the following: a) a dimension of the current video block; b) a block shape of the current video block; c) a slice type of the current video block; or d) a picture type of the current video block. A minimal sketch of this shape-based selection follows.
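  • The following Python sketch illustrates the threshold rule from the preceding items; the thresholds t1 and t2 and the sub-template names are illustrative assumptions, not values defined by this disclosure:

```python
def select_sub_templates(bw, bh, t1=4, t2=4):
    """Drop the left sub-template for very wide blocks and the above
    sub-template for very tall blocks, per the ratio thresholds above."""
    subs = {"left", "above", "right-above", "left-below", "left-above"}
    if bw / bh >= t1:        # wide block: the left column adds little
        subs.discard("left")
    if bh / bw >= t2:        # tall block: the above row adds little
        subs.discard("above")
    return subs

print(select_sub_templates(32, 4))   # wide block -> no "left" sub-template
print(select_sub_templates(4, 32))   # tall block -> no "above" sub-template
```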
  • constructing at least one template set from a plurality of sub-templates is further related to a second syntax element present in the bitstream.
  • in response to the plurality of sub-templates being unavailable, no template set is constructed and no IPM is derived.
  • the final predictor of the current video block may be determined based on a predefined IPM, and the predefined IPM may be one of DC mode, planar mode, horizontal mode, or vertical mode.
  • the conversion includes decoding the current video block from the bitstream.
  • the conversion includes encoding the current video block into the bitstream.
  • the disclosure provides an apparatus for processing video data.
  • the apparatus includes a processor and a non-transitory memory with instructions thereon.
  • the instructions upon execution by the processor, cause the processor to: construct, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates; derive at least one intra-prediction mode (IPM) based on cost calculations; determine, based on the at least one IPM, a final predictor of the current video block; and perform the conversion based on the final predictor.
  • the apparatus may be configured to perform any of the above implementations of the method.
  • the disclosure provides a non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus.
  • the bitstream stored on the non-transitory computer-readable medium may be generated by any of the above implementations of the method.
  • the disclosure provides a non-transitory computer-readable storage medium storing instructions that cause a processor to perform any of the above implementations of the method.
  • the one or more aspects include the features hereinafter fully described and particularly pointed out in the claims.
  • the following description and the annexed drawings set forth in detail some illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of various aspects may be employed, and this description is intended to include all such aspects and their equivalents.
  • FIG. 1 is a block diagram that illustrates an example of a video coding system, in accordance with some aspects of the present disclosure.
  • FIG. 2 is a block diagram that illustrates a first example of a video encoder, in accordance with some aspects of the present disclosure.
  • FIG. 3 is a block diagram that illustrates an example of a video decoder, in accordance with some aspects of the present disclosure.
  • FIG. 4 is a block diagram that illustrates a second example of a video encoder, in accordance with some aspects of the present disclosure.
  • FIG. 5 is an example of an encoder block diagram of versatile video coding (VVC) in accordance with some aspects of the present disclosure.
  • FIG. 6 is a schematic diagram of intra mode coding with 67 intra-prediction modes to capture the arbitrary edge directions presented in natural video in accordance with some aspects of the present disclosure.
  • FIGS. 7 and 8 are reference example diagrams of wide-angular intra-prediction in accordance with some aspects of the present disclosure.
  • FIG. 9 is a diagram of discontinuity in case of directions that exceed 45° angle in accordance with some aspects of the present disclosure.
  • FIG. 10 is a schematic diagram of locations of the samples used for the derivation of α and β for the chroma in accordance with some aspects of the present disclosure.
  • FIG. 11 is a schematic diagram of locations of the samples used for the derivation of α and β for the luma in accordance with some aspects of the present disclosure.
  • FIGS. 12-15 illustrate examples of reference samples (R(x, −1) and R(−1, y)) for PDPC applied over various prediction modes in accordance with some aspects of the present disclosure.
  • FIG. 16 is a diagram of multiple reference line (MRL) intra-prediction used in accordance with aspects of the present disclosure.
  • FIGS. 17 and 18 are example diagrams and of an intra sub-partitions (ISP) that divides luma intra-predicted blocks vertically or horizontally into sub-partitions depending on the block size in accordance with some aspects of the present disclosure.
  • FIG. 19 is a diagram of a matrix weighted intra-prediction process (MIP) method for VVC in accordance with some aspects of the present disclosure.
  • FIG. 20 is a diagram of a template based intra mode derivation where the target denotes the current block (of block size N) for which intra-prediction mode is to be estimated in accordance with some aspects of the present disclosure.
  • FIG. 21 is a diagram of a template of a set of chosen pixels on which a gradient analysis may be performed based on intra-prediction mode derivation in accordance with some aspects of the present disclosure.
  • FIG. 22 is a diagram of a convolution of a 3×3 Sobel gradient filter with the template in accordance with aspects of the present disclosure.
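  • The gradient analysis of FIGS. 21-22 can be sketched as below. A real derivation accumulates a histogram of gradients over the template and maps its peak to an angular IPM; this toy, offered only as an assumption-laden illustration, reports the angle of the single strongest gradient:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T   # transpose gives the vertical-gradient filter

def strongest_gradient_angle(template):
    best_mag, best_angle = -1.0, 0.0
    h, w = template.shape
    for y in range(1, h - 1):          # interior positions only
        for x in range(1, w - 1):
            patch = template[y - 1:y + 2, x - 1:x + 2]
            gx = float((patch * SOBEL_X).sum())
            gy = float((patch * SOBEL_Y).sum())
            mag = abs(gx) + abs(gy)    # cheap magnitude proxy
            if mag > best_mag:
                best_mag, best_angle = mag, np.degrees(np.arctan2(gy, gx))
    return best_angle                  # would be quantized to an angular IPM

tpl = np.tile(np.arange(8.0), (8, 1))  # horizontal luminance ramp
print(strongest_gradient_angle(tpl))   # ~0 degrees: gradient along +x
```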
  • FIG. 23 is a schematic diagram of intra mode coding with greater than 67 intra-prediction modes in accordance with some aspects of the present disclosure.
  • FIG. 24 is a diagram of an example template including a left-above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 25 is a diagram of an example template including a left sub-template and an above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 26 is a diagram of an example template including an above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 27 is a diagram of an example template including a left sub-template in accordance with some aspects of the present disclosure.
  • FIG. 28 is a diagram of an example template including a left sub-template and a left-below sub-template in accordance with some aspects of the present disclosure.
  • FIG. 29 is a diagram of an example template including an above sub-template and a right-above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 30 is a diagram of an example template including a left sub-template, a left-below sub-template, an above sub-template, and a right-above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 32 is a diagram of an example template including sub-templates that are spaced apart from a target block in accordance with some aspects of the present disclosure.
  • FIG. 33 is a diagram of example template-reference samples for a template including a left-above sub-template, a left sub-template, and an above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 34 is a diagram of example template-reference samples for a template including a left sub-template and an above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 35 is a diagram of example template-reference samples for a template including an above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 36 is a diagram of example template-reference samples for a template including a left sub-template in accordance with some aspects of the present disclosure.
  • FIG. 37 is a diagram of example template-reference samples with a horizontal gap for a template including an above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 38 is a diagram of example template-reference samples with a vertical gap for a template including an above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 39 is a diagram of example template-reference samples with a vertically shifted portion for a template in accordance with some aspects of the present disclosure.
  • FIG. 40 is a diagram of example template-reference samples with a horizontally shifted portion for a template in accordance with some aspects of the present disclosure.
  • FIG. 41 is a diagram of an example apparatus including a video decoder with a variable template intra-prediction unit in accordance with some aspects of the present disclosure.
  • FIG. 42 is a diagram of an example apparatus including a video encoder for variable template intra-prediction in accordance with some aspects of the present disclosure.
  • FIG. 43 is a flowchart of an example method of decoding a bitstream in accordance with some aspects of the present disclosure.
  • processors include microprocessors, microcontrollers, graphics processing units (GPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems on a chip (SoC), baseband processors, field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure.
  • processors in the processing system may execute software.
  • Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, among other examples, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
  • the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium.
  • Computer-readable media includes computer storage media, which may be referred to as non-transitory computer-readable media. Non-transitory computer-readable media may exclude transitory signals. Storage media may be any available media that can be accessed by a computer.
  • such computer-readable media can include a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer.
  • aspects described herein generally relate to selecting one or more templates used for deriving an intra-prediction mode (IPM) for decoding (or encoding) images.
  • video coding technologies, such as high efficiency video coding (HEVC) and versatile video coding (VVC), use IPMs at the encoding side and decoding side to encode/decode each image or frame of a video, so as to compress the number of bits in the bitstream, which can provide for efficient storage or transmission of the image or frame, and thus the video.
  • the IPMs are determined or specified per block of an image, where a block can include a portion of the image defined by a subset of coding units (CUs) or prediction units (PUs) (e.g., N×N CUs or PUs), where each CU or PU can be a pixel, a chroma, a luma, a collection of such, etc.
  • the IPM is then used to predict a given block based on reference pixels, chromas, lumas, etc. of a previously decoded block. This can avoid storing or communicating values for each pixel, chroma, or luma, etc.
  • the IPM used for encoding a block may be signaled in the bitstream.
  • the IPM used for intra-prediction may be derived at the decoder. Deriving the IPM may reduce the bitrate of the bitstream by reducing the bits used for signaling the IPM.
  • the template used for intra-prediction has been fixed.
  • the present disclosure provides techniques for selecting a template.
  • the template may be a combination of one or more sub-templates. Selecting which sub-templates to include in the template may result in a template that better predicts the samples of the block. For example, the template may be selected based on the shape or dimensions of the block. Additionally, selecting the template may reduce the number or ratio of unavailable samples included in the template. For instance, the template may be selected based on which sub-templates for the current block have been reconstructed, as sketched below. Accordingly, selecting the template may reduce processing of unavailable samples. Additional aspects described herein relate to determining a cost of an IPM based on the selected template.
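  • As a minimal sketch of that availability-driven construction (the names and the preference order are assumptions for illustration only):

```python
def construct_template_set(available):
    """Keep only sub-templates whose samples are already reconstructed;
    return None when nothing is available, so no IPM is derived and a
    predefined fallback mode (e.g., planar) would be used instead."""
    preferred = ("left", "above", "left-above", "right-above", "left-below")
    chosen = [s for s in preferred if available.get(s, False)]
    return chosen or None

# a block on the top picture boundary: only left-side neighbors exist
print(construct_template_set({"left": True, "left-below": True}))
print(construct_template_set({}))   # nothing reconstructed -> None
```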
  • FIG. 1 is a block diagram that illustrates an example of a video coding system 100 that may utilize the techniques of this disclosure.
  • video coding system 100 may include a source device 110 and a destination device 120 .
  • the source device 110 which may be referred to as a video encoding device, may generate encoded video data.
  • the destination device 120 which may be referred to as a video decoding device, may decode the encoded video data generated by the source device 110 .
  • the source device 110 may include a video source 112 , a video encoder 114 , and an input/output (I/O) interface 116 .
  • the video source 112 may include a source such as a video capture device, an interface to receive video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of such sources.
  • the video data may comprise one or more pictures or images.
  • the terms “picture,” “image,” or “frame” can be used interchangeably throughout to refer to a single image in a stream of images that produce a video.
  • the video encoder 114 encodes the video data from the video source 112 to generate a bitstream.
  • the bitstream may include a sequence of bits that form a coded representation of the video data.
  • the bitstream may include coded pictures and associated data.
  • the coded picture is a coded representation of a picture.
  • the associated data may include sequence parameter sets, picture parameter sets, and other syntax structures.
  • the I/O interface 116 may include a modulator/demodulator (modem) and/or a transmitter, a bus, or substantially any mechanism that facilitates transfer of data between devices or within a computing device that may include both the source device 110 and destination device 120 (e.g., where the computing device stores the encoded video generated using functions of the source device 110 for display using functions of the destination device 120 ).
  • the encoded video data may be transmitted directly to destination device 120 via the I/O interface 116 through the network 130 a .
  • the encoded video data may also be stored onto a storage medium/server 130 b for access by destination device 120 .
  • the destination device 120 may include an I/O interface 126 , a video decoder 124 , and a display device 122 .
  • the I/O interface 126 may include a receiver and/or a modem, a bus, or substantially any mechanism that facilitates transfer of data between devices or within a computing device.
  • the I/O interface 126 may acquire encoded video data from the source device 110 or the storage medium/server 130 b .
  • the video decoder 124 may decode the encoded video data.
  • the display device 122 may display the decoded video data to a user.
  • the display device 122 may be integrated with the destination device 120 , or may be external to the destination device 120 , which may be configured to interface with an external display device.
  • the video encoder 114 and the video decoder 124 may operate according to a video compression standard, such as the HEVC standard, the VVC standard, and other current and/or future standards.
  • FIG. 2 is a block diagram illustrating an example of a video encoder 200 , which may be an example of the video encoder 114 in the system 100 illustrated in FIG. 1 , in accordance with some aspects of the present disclosure.
  • the video encoder 200 may be configured to perform any or all of the techniques of this disclosure.
  • the video encoder 200 includes a plurality of functional components.
  • the techniques described in this disclosure may be shared among the various components of the video encoder 200 .
  • a processor may be configured to perform any or all of the techniques described in this disclosure, including those of video encoder 200 .
  • the functional components of video encoder 200 may include one or more of a partition unit 201 , a prediction unit 202 which may include a mode select unit 203 , a motion estimation unit 204 , a motion compensation unit 205 and an intra-prediction unit 206 , a residual generation unit 207 , a transform unit 208 , a quantization unit 209 , an inverse quantization unit 210 , an inverse transform unit 211 , a reconstruction unit 212 , a buffer 213 , and an entropy encoding unit 214 .
  • the video encoder 200 may include more, fewer, or different functional components.
  • the prediction unit 202 may include an intra block copy (IBC) unit.
  • the IBC unit may perform prediction in an IBC mode in which at least one reference picture is a picture where the current video block is located.
  • the motion estimation unit 204 and the motion compensation unit 205 may be highly integrated, but are separately represented in the example of FIG. 2 for purposes of explanation.
  • the partition unit 201 may partition a picture into one or more video blocks.
  • the video encoder 200 and the video decoder 300 may support various video block sizes.
  • the mode select unit 203 may select one of the coding modes, intra or inter, e.g., based on error results, and provide the resulting intra- or inter-coded block to a residual generation unit 207 to generate residual block data and to a reconstruction unit 212 to reconstruct the encoded block for use as a reference picture.
  • the mode select unit 203 may select a combination of intra- and inter-prediction (CIIP) mode in which the prediction is based on an inter-prediction signal and an intra-prediction signal.
  • the mode select unit 203 may also select a resolution for a motion vector (e.g., a sub-pixel or integer pixel precision) for the block in the case of inter-prediction.
  • the motion estimation unit 204 may generate motion information for the current video block by comparing one or more reference frames from buffer 213 to the current video block.
  • each reference frame can correspond to a picture of the video.
  • the motion compensation unit 205 may determine a predicted video block for the current video block based on the motion information and decoded samples of pictures from the buffer 213 other than the picture associated with the current video block.
  • the motion estimation unit 204 and the motion compensation unit 205 may perform different operations for a current video block, for example, depending on whether the current video block is in an I-slice, a P-slice, or a B-slice.
  • an “I-slice” may refer to a portion of a picture composed of macroblocks, all of which are based upon macroblocks within the same picture.
  • “P-slices” and “B-slices” may refer to portions of a picture composed of macroblocks that are not dependent on macroblocks in the same picture.
  • the motion estimation unit 204 may perform uni-directional prediction for the current video block, and the motion estimation unit 204 may search reference pictures of list 0 or list 1 for a reference video block for the current video block. The motion estimation unit 204 may then generate a reference index that indicates the reference picture in list 0 or list 1 that contains the reference video block and a motion vector that indicates a spatial displacement between the current video block and the reference video block. The motion estimation unit 204 may output the reference index, a prediction direction indicator, and the motion vector as the motion information of the current video block. The motion compensation unit 205 may generate the predicted video block of the current block based on the reference video block indicated by the motion information of the current video block.
  • the motion estimation unit 204 may perform bi-directional prediction for the current video block, where the motion estimation unit 204 may search the reference pictures in list 0 for a reference video block for the current video block and may also search the reference pictures in list 1 for another reference video block for the current video block. The motion estimation unit 204 may then generate reference indexes that indicate the reference pictures in list 0 and list 1 containing the reference video blocks and motion vectors that indicate spatial displacements between the reference video blocks and the current video block. The motion estimation unit 204 may output the reference indexes and the motion vectors of the current video block as the motion information of the current video block. The motion compensation unit 205 may generate the predicted video block of the current video block based on the reference video blocks indicated by the motion information of the current video block.
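  • A minimal sketch of such a search: scan a small window of candidate displacements in one reference picture and keep the motion vector with the lowest sum of absolute differences (SAD). The exhaustive window is an illustrative stand-in for the fast search patterns real encoders use:

```python
import numpy as np

def motion_search(cur, ref, x0, y0, bw, bh, radius=2):
    block = cur[y0:y0 + bh, x0:x0 + bw]
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = ref[y0 + dy:y0 + dy + bh, x0 + dx:x0 + dx + bw]
            sad = np.abs(block - cand).sum()
            if sad < best_sad:
                best_mv, best_sad = (dx, dy), sad
    return best_mv, best_sad

cur = np.arange(144, dtype=float).reshape(12, 12)
ref = np.roll(cur, shift=1, axis=1)          # content shifted right by 1 pixel
print(motion_search(cur, ref, 4, 4, 4, 4))   # -> ((1, 0), 0.0)
```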
  • the motion estimation unit 204 may output a full set of motion information for decoding processing of a decoder.
  • the motion estimation unit 204 may not output a full set of motion information for the current video block. Rather, the motion estimation unit 204 may signal the motion information of the current video block with reference to the motion information of another video block. For example, the motion estimation unit 204 may determine that the motion information of the current video block is sufficiently similar to the motion information of a neighboring video block.
  • the motion estimation unit 204 may indicate, in a syntax structure associated with the current video block, a value that indicates to the video decoder 300 that the current video block has the same motion information as another video block.
  • the motion estimation unit 204 may identify, in a syntax structure associated with the current video block, another video block and a motion vector difference (MVD).
  • the motion vector difference indicates a difference between the motion vector of the current video block and the motion vector of the indicated video block.
  • the video decoder 300 may use the motion vector of the indicated video block and the motion vector difference to determine the motion vector of the current video block.
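  • In code form, that relationship is just a per-component addition (a trivial sketch; precision handling and clipping are omitted):

```python
def reconstruct_mv(mvp, mvd):
    """Decoder side: predictor (from the indicated block) plus signaled MVD."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

print(reconstruct_mv(mvp=(4, -2), mvd=(1, 3)))   # -> (5, 1)
```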
  • video encoder 200 may predictively signal the motion vector.
  • Two examples of predictive signaling techniques that may be implemented by video encoder 200 include advanced motion vector prediction (AMVP) and merge mode signaling.
  • the intra-prediction unit 206 may perform intra-prediction on the current video block.
  • the intra-prediction unit 206 may generate prediction data for the current video block based on decoded samples of other video blocks in the same picture.
  • the prediction data for the current video block may include at least one of a predicted video block or one or more syntax elements.
  • the residual generation unit 207 may generate residual data for the current video block by subtracting (e.g., indicated by the minus sign) the predicted video block(s) of the current video block from the current video block.
  • the residual data of the current video block may include residual video blocks that correspond to different sample components of the samples in the current video block.
  • the residual generation unit 207 may not perform the subtracting operation.
  • the transform unit 208 may generate one or more transform coefficient video blocks for the current video block by applying one or more transforms to a residual video block associated with the current video block.
  • the quantization unit 209 may quantize the transform coefficient video block associated with the current video block based on one or more quantization parameter (QP) values associated with the current video block.
  • the inverse quantization unit 210 and the inverse transform unit 211 may apply inverse quantization and inverse transforms to the transform coefficient video block, respectively, to reconstruct a residual video block from the transform coefficient video block.
  • the reconstruction unit 212 may add the reconstructed residual video block to corresponding samples from one or more predicted video blocks generated by the prediction unit 202 to produce a reconstructed video block associated with the current block for storage in the buffer 213 .
  • a loop filtering operation may be performed to reduce video blocking artifacts in the video block.
  • the entropy encoding unit 214 may receive data from other functional components of the video encoder 200 . When entropy encoding unit 214 receives the data, entropy encoding unit 214 may perform one or more entropy encoding operations to generate entropy encoded data and output a bitstream that includes the entropy encoded data.
  • FIG. 3 is a block diagram illustrating an example of video decoder 300 , which may be an example of the video decoder 124 in the system 100 illustrated in FIG. 1 , in accordance with some aspects of the present disclosure.
  • the video decoder 300 may be configured to perform any or all of the techniques of this disclosure.
  • the video decoder 300 includes a plurality of functional components.
  • the techniques described in this disclosure may be shared among the various components of the video decoder 300 .
  • a processor may be configured to perform any or all of the techniques described in this disclosure, including those of video decoder 300 .
  • the video decoder 300 includes one or more of an entropy decoding unit 301 , a motion compensation unit 302 , an intra-prediction unit 303 , an inverse quantization unit 304 , an inverse transform unit 305 , a reconstruction unit 306 , and a buffer 307 .
  • the video decoder 300 may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder 200 ( FIG. 2 ).
  • the video decoder 300 may receive, via the entropy decoding unit 301 or otherwise, an encoded bitstream.
  • the encoded bitstream may include entropy coded video data (e.g., encoded blocks of video data).
  • the entropy decoding unit 301 may decode the entropy coded video data.
  • the motion compensation unit 302 may determine motion information including motion vectors, motion vector precision, reference picture list indexes, and other motion information.
  • the motion compensation unit 302 may, for example, determine such information by performing the AMVP and merge mode.
  • AMVP may be used, including derivation of several most probable candidates based on data from adjacent PBs and the reference picture.
  • Motion information typically includes the horizontal and vertical motion vector displacement values, one or two reference picture indices, and, in the case of prediction regions in B slices, an identification of which reference picture list is associated with each index.
  • a “merge mode” may refer to deriving the motion information from spatially or temporally neighboring blocks.
  • the motion compensation unit 302 may produce motion compensated blocks, possibly performing interpolation based on interpolation filters. Identifiers for interpolation filters to be used with sub-pixel precision may be included in syntax elements received with the encoded bitstream or in separate assistance information, e.g., as specified by a video encoder when encoding the video.
  • the motion compensation unit 302 may use interpolation filters as used by video encoder 200 during encoding of the video block to calculate interpolated values for sub-integer pixels of a reference block.
  • the motion compensation unit 302 may determine the interpolation filters used by video encoder 200 according to received syntax information and use the interpolation filters to produce predictive blocks.
  • the motion compensation unit 302 may use some of the syntax information to determine sizes of blocks used to encode frame(s) and/or slice(s) of the encoded video sequence, partition information that describes how each macroblock of a picture of the encoded video sequence is partitioned, modes indicating how each partition is encoded, one or more reference frames (and reference frame lists) for each inter-encoded block, and other information to decode the encoded video sequence.
  • a “slice” may refer to a data structure that can be decoded independently from other slices of the same picture, in terms of entropy coding, signal prediction, and residual signal reconstruction.
  • a slice can either be an entire picture or a region of a picture.
  • the intra-prediction unit 303 may use intra-prediction modes, for example received in the bitstream, to form a prediction block from spatially adjacent blocks. Intra-prediction can be referred to herein as “intra,” and/or intra-prediction modes can be referred to herein as “intra modes.”
  • the inverse quantization unit 304 inverse quantizes, i.e., de-quantizes, the quantized video block coefficients provided in the bitstream and decoded by entropy decoding unit 301 .
  • Inverse transform unit 305 applies an inverse transform.
  • the reconstruction unit 306 may sum the residual blocks with the corresponding prediction blocks generated by motion compensation unit 302 or intra-prediction unit 303 to form decoded blocks. If desired, a deblocking filter may also be applied to filter the decoded blocks in order to remove blockiness artifacts.
  • the decoded video blocks are then stored in buffer 307 , which provides reference blocks for subsequent motion compensation/intra-prediction and also produces decoded video for presentation on a display device.
  • FIG. 4 shows an example of a block diagram of a HEVC video encoder and decoder 400 , which may be the video encoder 114 and video decoder 124 in the system 100 illustrated in FIG. 1 , video encoder 200 in FIG. 2 and video decoder 300 in FIG. 3 , etc., in accordance with some aspects of the present disclosure.
  • the encoding algorithm for generating HEVC-compliant bitstreams may proceed as follows. Each picture can be divided into block regions (e.g., coding tree units (CTUs)), and the precise block division may be transmitted to the decoder.
  • a CTU consists of a luma coding tree block (CTB) and the corresponding chroma CTBs and syntax elements.
  • HEVC then supports a partitioning of the CTBs into smaller blocks using a tree structure and quadtree-like signaling.
  • the quadtree syntax of the CTU specifies the size and positions of its luma and chroma CBs.
  • the root of the quadtree is associated with the CTU.
  • the size of the luma CTB is the largest supported size for a luma CB.
  • the splitting of a CTU into luma and chroma CBs may be jointly signaled.
  • a CTB may contain only one CU or may be split to form multiple CUs, and each CU has an associated partitioning into prediction units (PUs) and a tree of transform units (TUs).
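  • A minimal sketch of quadtree-style split signaling as described above, with one split flag per node and recursion down to leaves; the flag source, sizes, and minimum size are illustrative assumptions:

```python
def parse_quadtree(read_flag, x, y, size, min_size=8, leaves=None):
    """Return (x, y, size) leaf blocks; sizes and positions fall out of the
    recursion, mirroring how the quadtree syntax conveys them."""
    if leaves is None:
        leaves = []
    if size > min_size and read_flag():
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                parse_quadtree(read_flag, x + dx, y + dy, half, min_size, leaves)
    else:
        leaves.append((x, y, size))
    return leaves

flags = iter([1, 0, 0, 0, 0])          # split the 64x64 root once, then stop
print(parse_quadtree(lambda: next(flags), 0, 0, 64))
```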
  • the first picture of the video sequence (and/or the first picture at each clean random access point into the video sequence) can use only intra-picture prediction, which uses region-to-region spatial data prediction within the same picture but does not rely on other pictures.
  • the inter-picture temporal prediction coding mode may be used for most blocks.
  • the encoding process for inter-picture prediction includes selecting motion data including a selected reference picture and a motion vector (MV) to be applied to predict samples of each block.
  • the decision whether to code a picture area using inter-picture or intra-picture prediction can be made at the CU level.
  • a PU partitioning structure has its root at the CU level.
  • the luma and chroma CBs can then be further split in size and predicted from luma and chroma prediction blocks (PBs).
  • HEVC supports variable PB sizes from 64 ⁇ 64 down to 4 ⁇ 4 samples.
  • the prediction residual is coded using block transforms.
  • a TU tree structure has its root at the CU level.
  • the luma CB residual may be identical to the luma transform block (TB) or may be further split into smaller luma TBs. The same applies to the chroma TBs.
  • the encoder and decoder may apply motion compensation (MC) by using MV and mode decision data to generate the same inter-picture prediction signal, which is transmitted as auxiliary information.
  • the residual signal of intra-picture or inter-picture prediction, which is the difference between the original block and its prediction, can be transformed by a linear spatial transformation. Then the transform coefficients can be scaled, quantized, entropy encoded, and transmitted together with the prediction information.
  • the encoder can duplicate the decoder processing loop so that both can generate the same prediction for subsequent data. Therefore, the quantized transform coefficients can be constructed by inverse scaling, and then can be inversely transformed to replicate the decoding approximation of the residual signal. The residual can then be added to the prediction, and the result of this addition can then be fed into one or two loop filters to smooth the artifacts caused by block-by-block processing and quantization.
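  • A toy sketch of that shared reconstruction path (a plain scalar quantization step and an identity "transform" stand in for the codec's actual scaling lists and inverse transform):

```python
import numpy as np

def reconstruct(quant_coeffs, prediction, qstep):
    dequant = quant_coeffs * qstep    # inverse scaling of quantized values
    residual = dequant                # identity inverse "transform" (toy only)
    return prediction + residual      # result then goes to the loop filters

pred = np.full((4, 4), 128.0)
coeffs = np.zeros((4, 4)); coeffs[0, 0] = 3.0      # one quantized coefficient
print(reconstruct(coeffs, pred, qstep=8.0)[0, 0])  # -> 152.0
```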
  • the final picture representation (i.e., the copy output by the decoder) may be stored in a decoded picture buffer and used for the prediction of subsequent pictures.
  • the order of encoding or decoding processing of pictures may be different from the order in which they arrive from the source. As such, in some examples, it may be necessary to distinguish between the decoding order of the decoder (that is, the bit stream order) and the output order (that is, the display order).
  • Video material encoded by HEVC is generally input as progressive scan imagery (e.g., because the source video originates from that format or is generated by de-interlacing before encoding).
  • There is no explicit coding feature in the HEVC design to support the use of interlaced scanning, because interlaced scanning is no longer used for displays and has become very uncommon for distribution.
  • metadata syntax is provided in HEVC to allow an encoder to indicate that interlaced video has been sent, either by coding each field of the interlaced video (i.e., the even or odd lines of each video frame) as a separate picture, or by coding each interlaced frame as an HEVC coded picture. This can provide an effective method for encoding interlaced video without the need to support special decoding processes for it.
  • FIG. 5 is an example of an encoder block diagram 500 of VVC, which can include multiple in-loop filtering blocks: e.g., deblocking filter (DF), sample adaptive offset (SAO), and adaptive loop filter (ALF).
  • SAO and ALF may utilize the original samples of the current picture to reduce the mean square errors between the original samples and the reconstructed samples by adding an offset and by applying a finite impulse response (FIR) filter, respectively, with coded side information signaling the offsets and filter coefficients.
  • ALF may be located at the last processing stage of each picture and can be regarded as a tool to catch and fix artifacts created by the previous stages.
  • FIG. 6 is a schematic diagram 600 of intra-prediction mode coding with 67 intra-prediction modes to capture the arbitrary edge directions presented in natural video.
  • the number of directional intra modes may be extended from 33, as used in HEVC, to 65 while the planar and the DC modes remain the same.
  • the denser directional intra-prediction modes may apply for all block sizes and for both luma and chroma intra-predictions.
  • every intra-prediction mode coded block may have a square shape (e.g., a coded block of size N×N) and the length of each of its sides may be a power of 2 (e.g., where N is a power of 2).
  • blocks can have a rectangular shape that may necessitate the use of a division operation per block in the general case.
  • the longer side may be used to compute the average for non-square blocks.
  • the exact prediction direction for a given intra-prediction mode index may be further dependent on the block shape.
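  • A one-line sketch of the longer-side DC averaging described above; keeping the divisor a power of two means the division reduces to a shift (the sample values are illustrative):

```python
def dc_value(top_refs, left_refs):
    refs = top_refs if len(top_refs) >= len(left_refs) else left_refs
    return sum(refs) // len(refs)    # power-of-two length -> a simple shift

print(dc_value(list(range(8)), [10, 20]))   # wide block: average the top row
```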
  • Conventional angular intra-prediction directions are defined from 45 degrees to −135 degrees in a clockwise direction.
  • several conventional angular intra-prediction modes may be adaptively replaced with wide-angle intra-prediction modes for non-square blocks.
  • the replaced modes may be signaled using the original mode indexes, which are remapped to the indexes of wide angular modes after parsing.
  • the total number of intra-prediction modes may be unchanged, i.e., 67, and the intra mode coding method may also be unchanged.
  • FIGS. 7 and 8 are reference example diagrams 700 and 800 of wide-angular intra-prediction.
  • FIG. 9 is a diagram 900 of discontinuity in case of directions that exceed 45° angle.
  • two vertically adjacent predicted samples may use two non-adjacent reference samples in the case of wide-angle intra-prediction.
  • a low-pass reference sample filter and side smoothing may be applied to the wide-angle prediction to reduce the negative effect of the increased gap Δpα.
  • when a wide-angle mode represents a non-fractional offset, the samples in the reference buffer can be directly copied without applying any interpolation. With this modification, the number of samples to be smoothed may be reduced.
  • the chroma derived mode (DM) derivation table for the 4:2:2 chroma format was initially ported from HEVC, extending the number of entries from 35 to 67 to align with the extension of intra-prediction modes.
  • because the HEVC specification does not support prediction angles below −135 degrees or above 45 degrees, luma intra-prediction modes ranging from 2 to 5 may be mapped to 2. Therefore, the chroma DM derivation table for the 4:2:2 chroma format can be updated by replacing some values of the entries of the mapping table to convert the prediction angle more precisely for chroma blocks.
  • motion parameters, consisting of motion vectors, reference picture indices, a reference picture list usage index, and additional information needed for new coding features of VVC, may be used for inter-predicted sample generation.
  • the motion parameter can be signaled in an explicit or implicit manner.
  • when a CU is coded with skip mode, the CU may be associated with one PU and may have no significant residual coefficients, no coded motion vector delta, and no reference picture index.
  • a merge mode may be specified where the motion parameters for the current CU can be obtained from neighboring CUs, including spatial and temporal candidates, and additional schedules introduced in VVC.
  • the merge mode can be applied to any inter-predicted CU, not only for skip mode.
  • the alternative to merge mode may be the explicit transmission of motion parameters, where the motion vector, the corresponding reference picture index for each reference picture list, the reference picture list usage flag, and other needed information are signaled explicitly for each CU.
  • intra block copy may be a tool adopted in HEVC extensions on SCC, and thus may be used by a video encoder 114 , 200 , 400 , as described herein in encoding video, and/or by a video decoder 124 , 300 , 400 , as described herein in decoding video. Such a tool may improve the coding efficiency of screen content materials.
  • IBC mode may be implemented as a block level coding mode
  • block matching (BM) may be performed at the encoder to find the optimal block vector (or motion vector) for each CU.
  • a block vector is used to indicate the displacement from the current block to a reference block, which is already reconstructed inside the current picture.
  • the luma block vector of an IBC-coded CU may be in integer precision.
  • the chroma block vector can be rounded to integer precision as well.
  • the IBC mode can switch between 1-pel and 4-pel motion vector precisions.
  • An IBC-coded CU may be treated as the third prediction mode other than intra- or inter-prediction modes.
  • the IBC mode may be applicable to the CUs with both width and height smaller than or equal to 64 luma samples.
  • hash-based motion estimation may be performed for IBC.
  • the encoder performs RD check for blocks with either width or height no larger than 16 luma samples.
  • the block vector search may be performed using a hash-based search first. If the hash search does not return a valid candidate, block matching based local search may be performed.
  • hash key matching may use a 32-bit cyclic redundancy check (CRC) between the current block and a reference block.
  • a hash key may be determined to match that of the reference block when the hash keys of all 4×4 sub-blocks match the hash keys in the corresponding reference locations. If the hash keys of multiple reference blocks are found to match that of the current block, the block vector cost of each matched reference may be calculated and the one with the minimum cost may be selected (see the sketch below).
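A minimal sketch of this hash-based matching follows, assuming zlib.crc32 as the 32-bit CRC and a caller-supplied bv_cost cost model (both assumptions of this sketch); a real encoder would precompute the reference hash keys in a hash table rather than rescanning each candidate.

```python
import zlib

def hash_key(block_rows):
    """32-bit CRC hash key of a 4x4 sub-block (8-bit samples assumed)."""
    return zlib.crc32(bytes(s & 0xFF for row in block_rows for s in row))

def sub_block_keys(pic, x, y, w, h):
    """Hash keys of all 4x4 sub-blocks of the w x h block at (x, y);
    w and h are assumed to be multiples of 4."""
    return [hash_key([pic[y + j + dy][x + i:x + i + 4] for dy in range(4)])
            for j in range(0, h, 4) for i in range(0, w, 4)]

def ibc_hash_search(pic, cur_xy, w, h, candidates, bv_cost):
    """Among candidate reference positions whose 4x4 keys all match the
    current block's keys, pick the block vector with minimum bv_cost."""
    cx, cy = cur_xy
    cur_keys = sub_block_keys(pic, cx, cy, w, h)
    best, best_cost = None, float('inf')
    for rx, ry in candidates:
        if sub_block_keys(pic, rx, ry, w, h) == cur_keys:
            cost = bv_cost(rx - cx, ry - cy)
            if cost < best_cost:
                best, best_cost = (rx - cx, ry - cy), cost
    return best  # None when hash search yields no valid candidate
```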
  • the search range may be set to cover both the previous and current CTUs.
  • IBC mode may be signaled with a flag and it can be signaled as IBC AMVP mode or IBC skip/merge mode.
  • a merge candidate index may be used to indicate which of the block vectors in the list from neighboring candidate IBC coded blocks is used to predict the current block.
  • the merge list may include spatial, HMVP, and pairwise candidates.
  • a block vector difference may be coded in the same way as a motion vector difference.
  • the block vector prediction method uses two candidates as predictors, one from left neighbor and one from above neighbor (if IBC coded). When either neighbor is not available, a default block vector can be used as a predictor. A flag can be signaled to indicate the block vector predictor index.
  • a cross-component linear model (CCLM) prediction mode may be used in VVC, for which the chroma samples are predicted based on the reconstructed luma samples of the same CU by using a linear model as follows: pred C (i, j)=α·rec L (i, j)+β
  • pred C (i, j) may represent the predicted chroma samples in a CU and rec L (i, j) may represent the down-sampled reconstructed luma samples of the same CU.
  • the above neighboring positions may be denoted as S[0, −1] . . . S[W′−1, −1] and the left neighboring positions may be denoted as S[−1, 0] . . . S[−1, H′−1].
  • the four samples are selected as S[W′/4, −1], S[3*W′/4, −1], S[−1, H′/4], S[−1, 3*H′/4] when LM mode is applied and both above and left neighboring samples are available; S[W′/8, −1], S[3*W′/8, −1], S[5*W′/8, −1], S[7*W′/8, −1] when LM-T mode is applied or only the above neighboring samples are available; and S[−1, H′/8], S[−1, 3*H′/8], S[−1, 5*H′/8], S[−1, 7*H′/8] when LM-L mode is applied or only the left neighboring samples are available.
  • the four neighboring luma samples at the selected positions may be down-sampled and compared four times to find two larger values: x 0 A and x 1 A , and two smaller values: x 0 B and x 1 B .
  • Their corresponding chroma sample values may be denoted as y 0 A , y 1 A , y 0 B and y 1 B .
  • x A , x B , y A and y B may be derived as: x A =(x 0 A +x 1 A +1)>>1, x B =(x 0 B +x 1 B +1)>>1, y A =(y 0 A +y 1 A +1)>>1, y B =(y 0 B +y 1 B +1)>>1; the linear model parameters may then be obtained as α=(y A −y B )/(x A −x B ) and β=y B −α·x B .
  • FIG. 10 is a schematic diagram 1000 of the location of the samples used for the derivation of α and β for the chroma.
  • FIG. 11 is a schematic diagram 1100 of the location of the samples used for the derivation of α and β for the luma.
  • the division operation to calculate the parameter α may be implemented with a look-up table.
  • to reduce the memory required, the diff value (the difference between the maximum and minimum values) and the parameter α may be expressed by an exponential notation.
  • the diff value is approximated with a 4-bit significant part and an exponent. Consequently, the table for 1/diff is reduced to 16 elements for 16 values of the significand as follows:
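In place of the elided table, a floating-point sketch of the overall CCLM parameter derivation follows (plain division stands in for the 16-entry 1/diff look-up; the helper names are illustrative):

```python
def cclm_params(luma_nbrs, chroma_nbrs):
    """Derive alpha and beta from four neighboring (down-sampled luma,
    chroma) sample pairs: average the two largest and two smallest luma
    samples (and their chroma), then fit the line through the averages."""
    pairs = sorted(zip(luma_nbrs, chroma_nbrs))          # sort by luma
    (x0B, y0B), (x1B, y1B), (x0A, y0A), (x1A, y1A) = pairs
    xB, yB = (x0B + x1B + 1) >> 1, (y0B + y1B + 1) >> 1  # smaller pair
    xA, yA = (x0A + x1A + 1) >> 1, (y0A + y1A + 1) >> 1  # larger pair
    alpha = (yA - yB) / (xA - xB) if xA != xB else 0.0
    beta = yB - alpha * xB
    return alpha, beta

def cclm_predict(rec_luma_ds, alpha, beta):
    """pred_C(i, j) = alpha * rec_L(i, j) + beta at every position."""
    return [[alpha * s + beta for s in row] for row in rec_luma_ds]

alpha, beta = cclm_params([60, 100, 80, 40], [30, 52, 41, 20])
print(round(alpha, 3), round(beta, 2))  # -> 0.55 -2.5
```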
  • the above template and left template can be used to calculate the linear model coefficients together.
  • the above template and left template can be used alternatively in the other 2 LM modes, called the LM_T and LM_L modes.
  • in LM_T mode, only the above template may be used to calculate the linear model coefficients; to get more samples, the above template may be extended to W+H samples.
  • in LM_L mode, only the left template is used to calculate the linear model coefficients; to get more samples, the left template may be extended to H+W samples.
  • in LM mode, the left and above templates are used to calculate the linear model coefficients.
  • two types of down-sampling filter are applied to luma samples to achieve 2 to 1 down-sampling ratio in both horizontal and vertical directions.
  • the selection of the down-sampling filter is specified by an SPS-level flag.
  • the two down-sampling filters are as follows, corresponding to "type-0" and "type-2" content, respectively.
  • only one luma line (the general line buffer in intra-prediction) may be used to make the down-sampled luma samples when the upper reference line is at the CTU boundary.
  • This parameter computation may be performed as part of the decoding process, and not just as an encoder search operation. As a result, no syntax may be used to convey the ⁇ and ⁇ values to the decoder.
  • for chroma intra-prediction mode coding, a total of 8 intra-prediction modes are allowed. Those modes include five traditional intra-prediction modes and three cross-component linear model modes (LM, LM_T, and LM_L). The chroma mode signaling and derivation process are shown in Table 3 below. Chroma mode coding directly depends on the intra-prediction mode of the corresponding luma block. As a separate block partitioning structure for luma and chroma components is enabled in I slices, one chroma block may correspond to multiple luma blocks. Therefore, for chroma DM mode, the intra-prediction mode of the corresponding luma block covering the center position of the current chroma block can be directly inherited.
  • the first bin indicates whether the mode is regular (0) or an LM mode (1). If it is an LM mode, then the next bin indicates whether it is LM_CHROMA (0) or not (1). If it is not LM_CHROMA, the next bin indicates whether it is LM_L (0) or LM_T (1). For this case, when sps_cclm_enabled_flag is 0, the first bin of the binarization table for the corresponding intra_chroma_pred_mode can be discarded prior to the entropy coding. In other words, the first bin is inferred to be 0 and hence not coded. This single binarization table is used for both the sps_cclm_enabled_flag equal to 0 and 1 cases. The first two bins in Table 4 are context coded, each with its own context model, and the rest of the bins are bypass coded.
  • the chroma CUs in a 32×32/32×16 chroma coding tree node are allowed to use CCLM in the following way: if the 32×32 chroma node is not split or is partitioned with a QT split, all chroma CUs in the 32×32 node can use CCLM; alternatively, if the 32×32 chroma node is partitioned with a horizontal BT, and the 32×16 child node does not split or uses a vertical BT split, all chroma CUs in the 32×16 chroma node can use CCLM. In all the other luma and chroma coding tree split conditions, CCLM is not allowed for the chroma CU.
  • position dependent prediction combination (PDPC) is a prediction method that invokes a combination of the boundary reference samples and HEVC-style prediction with filtered boundary reference samples.
  • PDPC can be applied to the following intra modes without signaling: planar, DC, intra angles less than or equal to horizontal, and intra angles greater than or equal to vertical and less than or equal to 80. If the current block is BDPCM mode or MRL index is larger than 0, PDPC is not applied.
  • the prediction sample pred(x′,y′) is predicted using an intra-prediction mode (DC, planar, angular) and a linear combination of reference samples according to Equation 7 as follows:
  • pred(x′,y′)=Clip(0, (1<<BitDepth)−1, (wL×R −1,y′ +wT×R x′,−1 +(64−wL−wT)×pred(x′,y′)+32)>>6)  (Equation 7)
  • R x,−1 and R −1,y may represent the reference samples located at the top and left boundaries of current sample (x, y), respectively
  • if PDPC is applied to DC, planar, horizontal, and vertical intra modes, additional boundary filters may not be needed, as currently required in the case of the HEVC DC mode boundary filter or horizontal/vertical mode edge filters.
  • the PDPC process for DC and planar modes is identical.
  • for angular modes, if the current angular mode is HOR_IDX or VER_IDX, the left or top reference samples are not used, respectively.
  • the PDPC weights and scale factors are dependent on the prediction modes and the block sizes. PDPC is applied to blocks with both width and height greater than or equal to 4 (see the sketch below).
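A sketch of Equation 7 for the DC/planar case is shown next; the distance-decaying formulas for wT and wL follow the commonly described VVC design and should be read as assumptions of this sketch rather than as quoted from this disclosure.

```python
def pdpc_dc_planar(pred, top, left, bit_depth=10):
    """Apply Equation 7 to a DC/planar prediction block.
    pred: 2-D list of predicted samples; top[x] = R(x, -1); left[y] = R(-1, y).
    The wT/wL halving-with-distance weights are an assumption here."""
    h, w = len(pred), len(pred[0])
    scale = ((w.bit_length() - 1) + (h.bit_length() - 1) - 2) >> 2
    out = [row[:] for row in pred]
    for y in range(h):
        for x in range(w):
            wT = 32 >> min(31, (y << 1) >> scale)   # decays down the rows
            wL = 32 >> min(31, (x << 1) >> scale)   # decays across the columns
            v = (wL * left[y] + wT * top[x]
                 + (64 - wL - wT) * pred[y][x] + 32) >> 6
            out[y][x] = max(0, min((1 << bit_depth) - 1, v))
    return out
```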
  • FIGS. 12-15 illustrate examples of reference samples 1200 , 1300 , 1400 , 1500 (R x,−1 and R −1,y ) for PDPC applied over various prediction modes.
  • the prediction sample pred(x′, y′) is located at (x′, y′) within the prediction block.
  • the reference samples R x,−1 and R −1,y could be located at a fractional sample position. In this case, the sample value of the nearest integer sample location is used.
  • FIG. 16 is a diagram 1600 of multiple reference line (MRL) intra-prediction used in accordance with aspects of the present disclosure.
  • the samples of segments A and F are not fetched from reconstructed neighboring samples but padded with the closest samples from segments B and E, respectively.
  • HEVC intra-picture prediction uses the nearest reference line (i.e., reference line 0).
  • two additional lines (reference line 1 and reference line 3) are used.
  • the index of the selected reference line (mrl_idx) can be signalled and used to generate the intra predictor.
  • the most probable mode (MPM) list may only include additional reference line modes and the MPM index can be signalled without remaining modes.
  • the reference line index can be signalled before intra-prediction modes, and planar mode can be excluded from intra-prediction modes in case a non-zero reference line index is signaled.
  • MRL can be disabled for the first line of blocks inside a CTU to prevent using extended reference samples outside the current CTU line. Also, PDPC can be disabled when an additional line is used.
  • in MRL mode, the derivation of the DC value in DC intra-prediction mode for non-zero reference line indices can be aligned with that of reference line index 0.
  • MRL may store 3 neighboring luma reference lines within a CTU to generate predictions.
  • the cross-component linear model (CCLM) tool may store 3 neighboring luma reference lines for its down-sampling filters. The definition of MRL can be aligned with CCLM to use the same 3 lines, reducing the storage requirements for decoders.
  • FIGS. 17 and 18 are examples of diagrams 1700 and 1800 of an intra sub-partitions (ISP) that divides luma intra-predicted blocks vertically or horizontally into sub-partitions depending on the block size.
  • the minimum block size for ISP is 4×8 (or 8×4). If the block size is greater than 4×8 (or 8×4), then the corresponding block can be divided into 4 sub-partitions.
  • the M×128 (with M≤64) and 128×N (with N≤64) ISP blocks could generate a potential issue with the 64×64 VDPU.
  • an M×128 CU in the single tree case has an M×128 luma TB and two corresponding M/2×64 chroma TBs (assuming the 4:2:0 chroma format).
  • the luma TB can be divided into four M×32 TBs (only the horizontal split is possible), each of them smaller than a 64×64 block.
  • chroma blocks are not divided. Therefore, both chroma components may have a size greater than a 32×32 block.
  • a similar situation could be created with a 128×N CU using ISP. Hence, these two cases may be an issue for the 64×64 decoder pipeline. For this reason, the CU sizes that can use ISP may be restricted to a maximum of 64×64.
  • FIGS. 17 and 18 show examples of the two possibilities. All sub-partitions fulfill the condition of having at least 16 samples.
  • the dependence of 1×N/2×N subblock prediction on the reconstructed values of previously decoded 1×N/2×N subblocks of the coding block is not allowed, so that the minimum width of prediction for subblocks becomes four samples.
  • an 8×N (N>4) coding block that is coded using ISP with vertical split is split into two prediction regions each of size 4×N and four transforms of size 2×N.
  • a 4×N coding block that is coded using ISP with vertical split is predicted using the full 4×N block; four transforms, each of size 1×N, are used.
  • although the transform sizes of 1×N and 2×N are allowed, it is asserted that the transform of these blocks in 4×N regions can be performed in parallel.
  • a 4×N prediction region contains four 1×N transforms
  • the transform in the vertical direction can be performed as a single 4×N vertical transform.
  • the transform operation of the two 2×N blocks in each direction can be conducted in parallel.
  • reconstructed samples are obtained by adding the residual signal to the prediction signal.
  • a residual signal is generated by processes such as entropy decoding, inverse quantization, and inverse transform. Therefore, the reconstructed sample values of each sub-partition can be available to generate the prediction of the next sub-partition, and each sub-partition is processed in turn.
  • the first sub-partition to be processed is the one containing the top-left sample of the CU, with processing then continuing downwards (horizontal split) or rightwards (vertical split), as in the sketch following this bullet.
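Under the rules stated above (4×8/8×4 blocks yield 2 sub-partitions, larger blocks yield 4; processing starts at the top-left), the partitioning can be sketched as:

```python
def isp_sub_partitions(w, h, vertical_split):
    """Return (x, y, width, height) for each ISP sub-partition in
    processing order (top-left first, then rightwards or downwards).
    4x8 / 8x4 blocks yield 2 sub-partitions; larger blocks yield 4."""
    n = 2 if (w, h) in ((4, 8), (8, 4)) else 4
    if vertical_split:
        sw = w // n
        return [(i * sw, 0, sw, h) for i in range(n)]
    sh = h // n
    return [(0, i * sh, w, sh) for i in range(n)]

print(isp_sub_partitions(16, 8, vertical_split=True))
# -> [(0, 0, 4, 8), (4, 0, 4, 8), (8, 0, 4, 8), (12, 0, 4, 8)]
```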
  • reference samples used to generate the sub-partition prediction signals may only be located at the left and above sides of the lines. All sub-partitions can share the same intra mode. The following is a summary of the interaction of ISP with other coding tools.
  • if a block has an MRL index other than 0, the ISP coding mode can be inferred to be 0, and therefore ISP mode information may not be sent to the decoder.
  • the entropy coding coefficient group sizes may be selected so that the entropy coding subblocks have 16 samples in all possible cases, as shown in Table 5. Note that the new sizes may only affect blocks produced by ISP in which one of the dimensions is less than 4 samples. In all other cases coefficient groups may keep the 4×4 dimensions.
  • the encoder may not perform rate distortion (RD) tests for the different available transforms for each resulting sub-partition.
  • the transform choice for the ISP mode may instead be fixed and selected according to the intra mode, the processing order, and the block size utilized. Hence, no signaling may be required, in this example.
  • in ISP mode, all 67 intra-prediction modes are allowed.
  • PDPC can also be applied if the corresponding width and height are at least 4 samples.
  • the reference sample filtering process (reference smoothing) and the condition for intra interpolation filter selection may not exist anymore in ISP mode; the cubic (DCT-IF) filter can be applied for fractional position interpolation.
  • FIG. 19 is an example of a diagram 1900 of matrix weighted intra-prediction process (MIP) for VVC.
  • among the boundary samples, four or eight samples may be selected by averaging, based on block size and shape.
  • the input boundaries bdry top and bdry left are reduced to smaller boundaries bdry red top and bdry red left by averaging neighboring boundary samples according to a predefined rule that depends on block size.
  • the two reduced boundaries bdry red top and bdry red left can be concatenated to a reduced boundary vector bdry red , which is thus of size four for blocks of shape 4×4 and of size eight for blocks of all other shapes. If mode refers to the MIP-mode, this concatenation is defined as follows:
  • a matrix vector multiplication, followed by addition of an offset, is carried out with the averaged samples as an input.
  • the result is a reduced prediction signal on a subsampled set of samples in the original block.
  • a reduced prediction signal pred red which is a signal on the down-sampled block of width W red and height H red is generated.
  • W red and H red are defined as:
  • the reduced prediction signal pred red may be computed by calculating a matrix vector product and adding an offset:
  • b is a vector of size W red ·H red .
  • the matrix A and the offset vector b are taken from one of the sets S 0 , S 1 , S 2 .
  • an index idx=idx(W, H) may be defined as follows:
  • each coefficient of the matrix A is represented with 8 bit precision.
  • the set S 0 consists of 16 matrices A 0 i , i∈{0, . . . , 15}, each of which has 16 rows and 4 columns, and 16 offset vectors b 0 i , i∈{0, . . . , 15}, each of size 16. Matrices and offset vectors of that set are used for blocks of size 4×4.
  • the set S 1 consists of 8 matrices A 1 i , i∈{0, . . . , 7}, each of which has 16 rows and 8 columns, and 8 offset vectors b 1 i , i∈{0, . . . , 7}, each of size 16.
  • the set S 2 consists of 6 matrices A 2 i , i∈{0, . . . , 5}, each of which has 64 rows and 8 columns, and 6 offset vectors b 2 i , i∈{0, . . . , 5}, of size 64.
  • the prediction signal at the remaining positions may be generated from the prediction signal on the subsampled set by linear interpolation which is a single step linear interpolation in each direction.
  • the interpolation can be firstly performed in the horizontal direction and then in the vertical direction regardless of block shape or block size.
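The three MIP stages (boundary averaging, matrix-vector multiplication with offset, then horizontal-before-vertical linear interpolation) can be sketched as follows; A and b stand in for a matrix and offset vector from S 0 , S 1 , or S 2 , and NumPy interpolation replaces the fixed-point filters of a real implementation.

```python
import numpy as np

def mip_predict(bdry_top, bdry_left, A, b, W, H, W_red, H_red):
    """Sketch of the MIP pipeline: average the boundaries to bdry_red,
    compute pred_red = A @ bdry_red + b on the W_red x H_red block,
    then interpolate horizontally first and vertically second."""
    def avg_reduce(v, n):
        # average len(v)/n consecutive samples into each of n outputs
        return np.asarray(v, float).reshape(n, -1).mean(axis=1)
    n = 2 if (W == 4 and H == 4) else 4        # bdry_red size 4 or 8
    bdry_red = np.concatenate([avg_reduce(bdry_top, n), avg_reduce(bdry_left, n)])
    pred_red = (A @ bdry_red + b).reshape(H_red, W_red)
    xs, ys = np.linspace(0, W_red - 1, W), np.linspace(0, H_red - 1, H)
    horiz = np.array([np.interp(xs, np.arange(W_red), row) for row in pred_red])
    return np.array([np.interp(ys, np.arange(H_red), col) for col in horiz.T]).T

# demo: a constant matrix/offset yields a flat prediction
A, b = np.full((16, 4), 0.25), np.zeros(16)
print(mip_predict([10, 20, 30, 40], [15, 25, 35, 45], A, b, 4, 4, 4, 4)[0])
```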
  • a flag indicating whether an MIP mode is to be applied may be sent. If an MIP mode is to be applied, the MIP mode (predModeIntra) may be signaled.
  • for an MIP mode, a transposed flag (isTransposed), which determines whether the mode is transposed, and an MIP mode ID (modeId), which determines which matrix is to be used for the given MIP mode, can be derived as follows:
  • the MIP coding mode may be harmonized with other coding tools by considering the following aspects: (1) low-frequency non-separable transform (LFNST) is enabled for MIP on large blocks;
  • (2) the reference sample derivation for MIP is performed exactly or at least similarly as for the conventional intra-prediction modes; (3) for the up-sampling step used in the MIP-prediction, original reference samples are used instead of down-sampled ones; (4) clipping is performed before up-sampling and not after up-sampling; (5) MIP may be allowed up to 64×64 regardless of the maximum transform size.
  • intra modes are extended to 67 from the 35 modes in HEVC, and they are derived at the encoder and explicitly signaled to the decoder.
  • a significant amount of overhead is spent on intra mode coding in JEM-2.0.
  • the intra mode signaling overhead may be up to 5-10% of the overall bitrate in the all-intra coding configuration.
  • This contribution proposes the decoder-side intra mode derivation approach to reduce the intra mode coding overhead while keeping prediction accuracy.
  • a decoder-side intra mode derivation (DIMD) approach may be used by video decoders 124 , 300 , 400 in decoding video.
  • the information can be derived at both encoder and decoder from the neighboring reconstructed samples of current block.
  • the intra mode derived by DIMD may be used in two ways, for example: 1) for 2N×2N CUs, the DIMD mode is used as the intra mode for intra-prediction when the corresponding CU-level DIMD flag is turned on; 2) for N×N CUs, the DIMD mode is used to replace one candidate of the existing MPM list to improve the efficiency of intra mode coding.
  • FIG. 20 is an example of a diagram 2000 of a template based intra mode derivation where the target denotes the current block (of block size N) for which intra-prediction mode is to be estimated.
  • the template (indicated by the patterned region in FIG. 20 ) specifies a set of already reconstructed samples, which are used to derive the intra mode.
  • the template size is denoted as the number of samples within the template that extends to the above and the left of the target block, i.e., L.
  • the reference of template (indicated by the dotted region in FIG. 20 ) can refer to a set of neighboring samples from above and left of the template, as defined by JEM-2.0. Unlike the template samples which are always from reconstructed region, the reference samples of template may not be reconstructed yet when encoding/decoding the target block. In this case, the existing reference samples substitution algorithm of JEM-2.0 is utilized to substitute the unavailable reference samples with the available reference samples.
  • the DIMD calculates the sum of absolute differences (SAD) between the reconstructed template samples and the template's prediction samples obtained from the reference samples of the template.
  • the intra-prediction mode that yields the minimum SAD may be selected as the final intra-prediction mode of the target block.
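In code, this SAD-minimizing selection is a simple loop; predict_template is a hypothetical caller-supplied helper that predicts the template from its reference samples for one candidate mode.

```python
def dimd_select_mode(template_rec, predict_template, candidate_modes):
    """Template-based intra mode derivation sketch: for each candidate
    mode, predict the template from its reference samples and keep the
    mode with minimum SAD against the reconstructed template samples.
    predict_template(mode) is assumed to return predicted samples in
    the same scan order as template_rec."""
    best_mode, best_sad = None, float('inf')
    for mode in candidate_modes:
        pred = predict_template(mode)
        sad = sum(abs(r - p) for r, p in zip(template_rec, pred))
        if sad < best_sad:
            best_mode, best_sad = mode, sad
    return best_mode
```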
  • the DIMD can be used as one additional intra mode, which can be adaptively selected by comparing the DIMD intra mode with the optimal normal intra mode (i.e., being explicitly signaled).
  • one flag is signaled for each intra 2N×2N CU to indicate the usage of the DIMD. If the flag is one, then the CU can be predicted using the intra mode derived by DIMD; otherwise, the DIMD is not applied and the CU is predicted using the intra mode explicitly signaled in the bit-stream.
  • chroma components can reuse the same intra mode as that derived for luma component, i.e., DM mode.
  • the blocks in the CU can adaptively select to derive their intra modes at either PU-level or TU-level.
  • when the DIMD flag is one, another CU-level DIMD control flag can be signaled to indicate the level at which the DIMD is performed. If this flag is zero, this can indicate that the DIMD is performed at the PU level and all the TUs in the PU use the same derived intra mode for their intra-prediction; otherwise, if the DIMD control flag is one, this can indicate that the DIMD is performed at the TU level and each TU in the PU derives its own intra mode.
  • the number of angular directions increases to 129, and the DC and planar modes still remain the same.
  • the precision of intra interpolation filtering for DIMD-coded CUs increases from 1/32-pel to 1/64-pel.
  • those 129 directions of the DIMD-coded CUs can be converted to “normal” intra modes (i.e., 65 angular intra directions) before they are used as MPM.
  • intra modes of intra N×N CUs are signaled.
  • the intra modes derived from DIMD are used as MPM candidates for predicting the intra modes of four PUs in the CU.
  • the DIMD candidate can be placed at the first place in the MPM list and the last existing MPM candidate can be removed. Also, a pruning operation can be performed such that the DIMD candidate may not be added to the MPM list if it is redundant.
  • one straightforward fast intra mode search algorithm is used for DIMD.
  • one initial estimation process can be performed to provide a good starting point for intra mode search.
  • an initial candidate list can be created by selecting N fixed modes from the allowed intra modes.
  • the SAD can be calculated for all the candidate intra modes and the one that minimizes the SAD can be selected as the starting intra mode.
  • the initial candidate list can include 11 intra modes, including DC, planar and every 4-th mode of the 33 angular intra directions as defined in HEVC, i.e., intra modes 0, 1, 2, 6, 10 . . . 30, 34.
  • if the starting intra mode is either DC or planar, it can be used as the DIMD mode. Otherwise, based on the starting intra mode, one refinement process can then be applied where the optimal intra mode is identified through an iterative search.
  • in the iterative search, at each iteration, the SAD values for three intra modes separated by a given search interval can be compared, and the intra mode that minimizes the SAD can be maintained. The search interval can then be reduced to half, and the selected intra mode from the last iteration can serve as the center intra mode for the current iteration. For the current DIMD implementation with 129 angular intra directions, up to 4 iterations can be used in the refinement process to find the optimal DIMD intra mode (see the sketch below).
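A sketch of this interval-halving refinement, with sad_of standing in for the template SAD computation of a given mode:

```python
def dimd_refine(start_mode, sad_of, interval=8, iterations=4,
                min_mode=2, max_mode=130):
    """Iterative DIMD refinement sketch: at each iteration compare the
    SAD of three modes separated by the search interval, keep the best
    as the new center, and halve the interval. The mode range 2..130
    stands in for the 129 angular directions."""
    center = start_mode
    for _ in range(iterations):
        trio = [max(min_mode, min(max_mode, m))
                for m in (center - interval, center, center + interval)]
        center = min(trio, key=sad_of)
        interval = max(1, interval >> 1)
    return center

# toy cost: the true best mode is 37
print(dimd_refine(34, lambda m: abs(m - 37)))  # -> 37
```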
  • transmission of the luma intra-prediction mode in the bitstream can be avoided. This is done by deriving the luma intra mode using previously encoded/decoded pixels, in an identical fashion at the encoder and at the decoder. This process defines a new coding mode called DIMD, whose selection is signaled in the bitstream for intra coded blocks using a flag. DIMD can compete with other coding modes at the encoder, including the classic intra coding mode (where the intra-prediction mode is coded). Note that in one example, DIMD may only apply to luma. For chroma, classical intra coding mode may apply.
  • a rate-distortion cost can be computed for the DIMD mode, and can then be compared to the coding costs of other modes to decide whether to select it as final coding mode for a current block.
  • the DIMD flag can be first parsed, if present. If the DIMD flag is true, the intra-prediction mode can be derived in the reconstruction process using the same previously encoded neighboring pixels. If not, the intra-prediction mode can be parsed from the bitstream as in classical intra coding mode.
  • FIG. 21 is an example of a diagram 2100 of a template of a set of chosen pixels on which a gradient analysis may be performed based on intra-prediction mode derivation.
  • a template surrounding the current block is chosen, composed of T pixels to the left and T pixels above. For example, T may have a value of 2.
  • a gradient analysis is performed on the pixels of the template. This can facilitate determining a main angular direction for the template, which can be assumed to have a high chance to be identical to the one of the current block.
  • a simple 3×3 Sobel gradient filter can be used, defined by the following matrices that may be convolved with the template:
  • each of these two matrices can be point-by-point multiplied with the 3×3 window centered around the current pixel and composed of its 8 direct neighbors, and the results summed, yielding G x from the multiplication with M x and G y from the multiplication with M y .
  • FIG. 22 is an example of a diagram 2200 of a convolution of a 3×3 Sobel gradient filter with the template in accordance with aspects of the present disclosure.
  • the pixel 2210 is the current pixel.
  • Template pixels 2220 (including the current pixel 2210 ) are pixels on which the gradient analysis is possible.
  • Unavailable pixels 2230 are pixels on which the gradient analysis is not possible due to lack of some neighbors.
  • Reconstructed pixels 2240 are available pixels outside of the considered template, used in the gradient analysis of the template pixels 2220 . In case a reconstructed pixel 2240 is not available (due to blocks being too close to the border of the picture for instance), the gradient analysis of all template pixels 2220 that use the unavailable reconstructed pixel 2240 is not performed.
  • the intensity (G) and the orientation (θ) of the gradient can be calculated from G x and G y , for example as G=|G x |+|G y | and θ=arctan(G y /G x ).
  • the orientation of the gradient can then be converted into an intra angular prediction mode, used to index a histogram (first initialized to zero).
  • the histogram value at that intra angular mode is increased by G.
  • the histogram can include cumulative values of gradient intensities, for each intra angular mode.
  • the mode that shows the highest peak in the histogram can be selected as intra-prediction mode for the current block. If the maximum value in the histogram is 0 (meaning no gradient analysis was able to be made, or the area composing the template is flat), then the DC mode can be selected as intra-prediction mode for the current block.
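The gradient-histogram derivation can be sketched as follows; angle_to_mode is a hypothetical helper mapping a gradient orientation to an intra angular mode index, and the G and θ formulas follow the convention noted above.

```python
import math

def dimd_gradient_mode(template, angle_to_mode, num_modes=130):
    """Gradient-analysis DIMD sketch: convolve the 3x3 Sobel pair over
    the template pixels, convert each gradient orientation to an angular
    mode via angle_to_mode(theta), and accumulate gradient intensity in
    a histogram. DC (index 1 here) is returned when the histogram stays
    empty, i.e., the template area is flat."""
    Mx = [[1, 0, -1], [2, 0, -2], [1, 0, -1]]
    My = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
    hist = [0.0] * num_modes
    h, w = len(template), len(template[0])
    for y in range(1, h - 1):                  # pixels with all 8 neighbors
        for x in range(1, w - 1):
            gx = sum(Mx[j][i] * template[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(My[j][i] * template[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            if gx or gy:
                g = abs(gx) + abs(gy)          # intensity G
                theta = math.atan2(gy, gx)     # orientation
                hist[angle_to_mode(theta)] += g
    peak = max(range(num_modes), key=hist.__getitem__)
    return peak if hist[peak] > 0 else 1       # fall back to DC
```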
  • the gradient analysis of the pixels located in the top part of the template is not performed.
  • the DIMD flag is coded using three possible contexts, depending on the left and above neighboring blocks, similarly to the Skip flag coding.
  • Context 0 corresponds to the case where none of the left and above neighboring blocks are coded with DIMD mode
  • context 1 corresponds to the case where only one neighboring block is coded with DIMD
  • context 2 corresponds to the case where both neighbors are DIMD-coded.
  • Initial symbol probabilities for each context are set to 0.5.
  • an advantage that DIMD offers over classical intra mode coding is that the derived intra mode can have a higher precision, allowing more precise predictions at no additional cost, as it is not transmitted in the bitstream.
  • the derived intra mode spans 129 angular modes, hence a total of 130 modes including DC (e.g., the derived intra mode may not be planar in aspects described herein).
  • the classical intra coding mode is unchanged, i.e., the prediction and mode coding still use 67 modes.
  • the luma intra mode is derived during the reconstruction process, just prior to the block reconstruction. This is done to avoid a dependency on reconstructed pixels during parsing. However, by doing so, the luma intra mode of the block may be undefined for the chroma component of the block, and for the luma component of neighboring blocks. This can cause an issue because for chroma, a fixed mode candidate list is defined. Usually, if the luma mode equals one of the chroma candidates, that candidate may be replaced with the vertical diagonal (VDIA_IDX) intra mode. Since in DIMD the luma mode is unavailable, the initial chroma mode candidate list is not modified.
  • an MPM list is constructed using the luma intra modes of neighboring blocks, which can be unavailable if those blocks were coded using DIMD.
  • DIMD-coded blocks can be treated as inter blocks during MPM list construction, meaning they are effectively considered unavailable.
  • Entropy coding may be a form of lossless compression used at the last stage of video encoding (and the first stage of video decoding), after the video has been reduced to a series of syntax elements. Syntax elements describe how the video sequence can be reconstructed at the decoder. This includes the method of prediction (e.g., spatial or temporal prediction, intra-prediction mode, and motion vectors) and prediction error, also referred to as residual. Arithmetic coding is a type of entropy coding that can achieve compression close to the entropy of a sequence by effectively mapping the symbols (i.e., syntax elements) to codewords with a non-integer number of bits.
  • Context-adaptive binary arithmetic coding involves three main functions: binarization, context modeling, and arithmetic coding. Binarization maps the syntax elements to binary symbols (bins). Context modeling estimates the probability of the bins. Finally, arithmetic coding compresses the bins to bits based on the estimated probability.
  • several binarization processes are used, such as the truncated Rice (TR) binarization process, the truncated binary binarization process, the k-th order Exp-Golomb (EGk) binarization process, and the fixed-length (FL) binarization process.
  • Context modeling provides an accurate probability estimate required to achieve high coding efficiency. Accordingly, it is highly adaptive and different context models can be used for different bins and the probability of that context model is updated based on the values of the previously coded bins. Bins with similar distributions often share the same context model.
  • the context model for each bin can be selected based on the type of syntax element, bin position in syntax element (binIdx), luma/chroma, neighboring information, etc. A context switch can occur after each bin.
  • Arithmetic coding may be based on recursive interval division.
  • a range, with an initial value of 0 to 1, is divided into two subintervals based on the probability of the bin.
  • the encoded bits provide an offset that, when converted to a binary fraction, selects one of the two subintervals, which indicates the value of the decoded bin.
  • the range is updated to equal the selected subinterval, and the interval division process repeats itself.
  • the range and offset have limited bit precision, so renormalization may be used whenever the range falls below a certain value to prevent underflow.
  • Renormalization can occur after each bin is decoded.
  • Arithmetic coding can be done using an estimated probability (context coded), or assuming equal probability of 0.5 (bypass coded).
  • for bypass coded bins, the division of the range into subintervals can be done by a shift, whereas a look-up table may be used for the context coded bins (see the sketch below).
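The recursive interval division can be illustrated with a floating-point decoder sketch; real CABAC uses integer ranges, LUT-based subdivision for context coded bins, and renormalization, and the subinterval ordering here is an arbitrary convention of the sketch.

```python
def decode_bin(state, p1, bypass=False):
    """One step of recursive interval division on the decoder side.
    state = (low, range_, offset); p1 is the estimated probability of a
    1 bin (ignored for bypass, which assumes equal probability 0.5).
    The '1' symbol is assigned the lower subinterval by convention."""
    low, range_, offset = state
    split = range_ * (0.5 if bypass else p1)
    if offset < low + split:       # offset falls in the '1' subinterval
        return 1, (low, split, offset)
    return 0, (low + split, range_ - split, offset)

# decode two bins from an offset of 0.3 within the initial range [0, 1)
state = (0.0, 1.0, 0.3)
b1, state = decode_bin(state, p1=0.6)
b2, state = decode_bin(state, p1=0.6)
print(b1, b2)  # -> 1 1
```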
  • FIG. 23 is a schematic diagram 2300 of intra mode coding with greater than 67 intra-prediction modes to capture the arbitrary edge directions presented in natural video.
  • the number of directional intra modes may be extended from 67, as used in VVC, to 129 while the planar and the DC modes remain the same.
  • the pre-defined IPMs may be IPMs having denser directions than conventional IPMs (e.g., the IPMs denoted by the dashed lines in FIG. 23 ).
  • the N1 IPMs may be some or all of the MPMs for the current block.
  • some pre-defined intra-prediction modes which are not in MPMs may also be contained in the given IPM candidate set.
  • one or more IPMs from DC/Planar/horizontal/vertical/diagonal top-right/diagonal bottom-left/diagonal top-left modes may be contained in the given IPM set.
  • one or more IPMs denoted by the dashed lines in FIG. 23 may be contained in the given IPM set.
  • N1 may be equal to or larger than N2 when one or more IPMs denoted by the dashed lines in FIG. 23 are contained in the given IPM set.
  • N1 may be equal to or larger than N2.
  • FIGS. 24-31 illustrate examples of templates that may be formed from one or more sub-templates.
  • the template for a block may be selected for the specific block.
  • the template may be selected based on decoded information about the specific block or based on availability of the sub-templates for the specific block.
  • other templates may be selected based on different combinations of the sub-templates.
  • FIG. 24 is a diagram of an example of a template 2400 including a left-above sub-template 2420 (Template-LA).
  • the template 2400 may be selected for a block 2410 , which may have dimensions of M samples horizontally and N samples vertically.
  • the left-above sub-template 2420 may include left-above neighboring samples that are located both to the left of the block 2410 and above the block 2410 .
  • the left-above sub-template 2420 may have dimensions of L1 samples horizontally and L2 samples vertically.
  • L1 and L2 may be defined for the block 2410 , a slice including the block 2410 , or a picture including the block 2410 .
  • FIG. 25 is a diagram of an example of a template 2500 including a left sub-template 2440 (Template-L) and an above sub-template 2430 (Template-A).
  • the template 2500 may be selected for a block 2410 , which may have dimensions of M samples horizontally and N samples vertically.
  • the left sub-template 2440 may include samples located to the left of the block 2410 .
  • the left sub-template 2440 may be adjacent the left edge of the block 2410 .
  • the left sub-template 2440 may have dimensions of L1 samples horizontally and N samples vertically.
  • the above sub-template 2430 may include samples located above the block 2410 .
  • the above sub-template 2430 may be adjacent the top edge of the block 2410 .
  • the above sub-template 2430 may have dimensions of M samples horizontally and L2 samples vertically.
  • FIG. 26 is a diagram of an example of a template 2600 including the above sub-template 2430 (Template-A).
  • the template 2600 may be selected for a block 2410 , which may have dimensions of M samples horizontally and N samples vertically.
  • the above sub-template 2430 may include samples located above the block 2410 .
  • the above sub-template 2430 may have dimensions of M samples horizontally and L2 samples vertically.
  • FIG. 27 is a diagram of an example of a template 2700 including a left sub-template (Template-L).
  • the template 2700 may be selected for a block 2410 , which may have dimensions of M samples horizontally and N samples vertically.
  • the left sub-template 2440 may include samples located to the left of the block 2410 .
  • the left sub-template 2440 may have dimensions of L1 samples horizontally and N samples vertically.
  • FIG. 28 is a diagram of an example of a template 2800 including the left sub-template 2440 (Template-L) and a left-below sub-template 2450 (Template-LB).
  • the template 2800 may be selected for a block 2410 , which may have dimensions of M samples horizontally and N samples vertically.
  • the left sub-template 2440 may include samples located to the left of the block 2410 .
  • the left sub-template 2440 may have dimensions of L1 samples horizontally and N samples vertically.
  • the left-below sub-template 2450 may include samples that are located both to the left of the block 2410 and below the block 2410 .
  • the left-below sub-template 2450 may have dimensions of L1 samples horizontally and N samples vertically.
  • FIG. 29 is a diagram of an example of a template 2900 including the above sub-template 2430 (Template-A) and a right-above sub-template 2460 (Template-RA).
  • the template 2900 may be selected for a block 2410 , which may have dimensions of M samples horizontally and N samples vertically.
  • the above sub-template 2430 may include samples located above the block 2410 .
  • the above sub-template 2430 may have dimensions of M samples horizontally and L2 samples vertically.
  • the right-above sub-template 2460 may include samples located both above the block 2410 and to the right of the block 2410 .
  • the right-above sub-template 2460 may have dimensions of M samples horizontally and L2 samples vertically.
  • FIG. 30 is a diagram of an example of a template 3000 including the left sub-template 2440 , the left-below sub-template 2450 , the above sub-template 2430 , and the right-above sub-template 2460 .
  • the template 3000 may be selected for a block 2410 , which may have dimensions of M samples horizontally and N samples vertically.
  • the above sub-template 2430 may include samples located above the block 2410 .
  • the above sub-template 2430 may have dimensions of M samples horizontally and L2 samples vertically.
  • the right-above sub-template 2460 may include samples located above and to the right of the block 2410 .
  • the right-above sub-template 2460 may have dimensions of M samples horizontally and L2 samples vertically.
  • the left sub-template 2440 may include samples located to the left of the block 2410 .
  • the left sub-template 2440 may have dimensions of L1 samples horizontally and N samples vertically.
  • the left-below sub-template 2450 may include samples located to the left of the block 2410 and below the block 2410 .
  • the left-below sub-template 2450 may have dimensions of L1 samples horizontally and N samples vertically.
  • FIG. 31 is a diagram of an example of a template 3100 including the left-above sub-template 2420 , the left sub-template 2440 , the left-below sub-template 2450 , the above sub-template 2430 , and the right-above sub-template 2460 .
  • the template 3100 may be selected for a block 2410 , which may have dimensions of M samples horizontally and N samples vertically.
  • the left-above sub-template 2420 may include samples located to the left and above the block 2410 .
  • the left-above sub-template 2420 may have dimensions of L1 samples horizontally and L2 samples vertically.
  • the above sub-template 2430 may include samples located above the block 2410 .
  • the above sub-template 2430 may have dimensions of M samples horizontally and L2 samples vertically.
  • the right-above sub-template 2460 may include samples located above and to the right of the block 2410 .
  • the right-above sub-template 2460 may have dimensions of M samples horizontally and L2 samples vertically.
  • the left sub-template 2440 may include samples located to the left of the block 2410 .
  • the left sub-template 2440 may have dimensions of L1 samples horizontally and N samples vertically.
  • the left-below sub-template 2450 may include samples located to the left of and below the block 2410 .
  • the left-below sub-template 2450 may have dimensions of L1 samples horizontally and N samples vertically.
  • FIG. 32 is a diagram of an example of a template 3200 including a left-above sub-template 3220 , a left sub-template 3240 , a left-below sub-template 3250 , an above sub-template 3230 , and a right-above sub-template 3260 that are spaced apart from a block.
  • the example template 3200 may be selected for a block 2410 , which may have dimensions of M samples horizontally and N samples vertically.
  • the sub-templates in FIG. 32 may be spaced apart from the block 2410 .
  • the left-above sub-template 3220 , the left sub-template 3240 , and the left-below sub-template 3250 may be spaced horizontally apart from the block 2410 by a gap 3280 .
  • the gap 3280 may have a horizontal dimension of L3 samples.
  • the left-above sub-template 3220 , the above sub-template 3230 , and the right-above sub-template 3260 may be spaced vertically apart from the block 2410 by a gap 3270 .
  • the gap 3270 may have a vertical dimension of L4 samples.
  • each of the sub-templates 3220 , 3230 , 3240 , 3250 , 3260 may have dimensions that are the same as a corresponding sub-template 2420 , 2430 , 2440 , 2450 , 2460 in FIGS. 24-31 . Accordingly, in FIG. 32 , the locations of the sub-templates 3220 , 3230 , 3240 , 3250 , 3260 are different, but the size of the sub-templates 3220 , 3230 , 3240 , 3250 , 3260 may be the same as in FIGS. 24-31 .
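The sub-template geometry of FIGS. 24-32 can be summarized in code; coordinates are relative to the top-left sample of the block, and the L3/L4 gaps default to zero for the adjacent layouts.

```python
def sub_template_rects(M, N, L1, L2, selected, L3=0, L4=0):
    """Return {name: (x, y, w, h)} rectangles for the selected
    sub-templates of an M x N block whose top-left corner is at (0, 0).
    Dimensions follow FIGS. 24-31 (e.g., Template-L is L1 x N, the
    right-above sub-template is M x L2); L3/L4 model the horizontal and
    vertical gaps of FIG. 32."""
    rects = {
        'LA': (-L1 - L3, -L2 - L4, L1, L2),  # left-above: L1 x L2
        'A':  (0, -L2 - L4, M, L2),          # above: M x L2
        'RA': (M, -L2 - L4, M, L2),          # right-above: M x L2
        'L':  (-L1 - L3, 0, L1, N),          # left: L1 x N
        'LB': (-L1 - L3, N, L1, N),          # left-below: L1 x N
    }
    return {k: rects[k] for k in selected}

print(sub_template_rects(8, 8, 2, 2, ('L', 'A')))
# -> {'L': (-2, 0, 2, 8), 'A': (0, -2, 8, 2)}
```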
  • FIG. 33 is a diagram of examples of template-reference samples 3310 for a template 3300 including a left-above sub-template 2420 , a left sub-template 2440 , and an above sub-template 2430 .
  • the example template 3300 may be selected for a block 2410 , which may have dimensions of M samples horizontally and N samples vertically.
  • the left-above sub-template 2420 may include samples located to the left and above the block 2410 .
  • the left-above sub-template 2420 may have dimensions of L1 samples horizontally and L2 samples vertically.
  • the above sub-template 2430 may include samples located above the block 2410 .
  • the above sub-template 2430 may have dimensions of M samples horizontally and L2 samples vertically.
  • the left sub-template 2440 may include samples located to the left of the block 2410 .
  • the left sub-template 2440 may have dimensions of L1 samples horizontally and N samples vertically.
  • the template-reference samples 3310 may be a single row of samples located above the template 3300 and a single column of samples located to the left of the template 3300 .
  • the row of samples may have a length of 2(L1+M)+1.
  • the column of samples may have a height of 2(L2+N)+1.
  • FIG. 34 is a diagram 3400 of example template-reference samples 3410 for the template 2500 including the left sub-template 2440 and the above sub-template 2430 .
  • the example template 2500 may be selected for a block 2410 , which may have dimensions of M samples horizontally and N samples vertically.
  • the above sub-template 2430 may include samples located above the block 2410 .
  • the above sub-template 2430 may have dimensions of M samples horizontally and L2 samples vertically.
  • the left sub-template 2440 may include samples located to the left of the block 2410 .
  • the left sub-template 2440 may have dimensions of L1 samples horizontally and N samples vertically.
  • the template-reference samples may include one or more lines (e.g., rows or columns) of samples.
  • the template-reference samples 3410 may include a single row of samples located above the template 2500 and a single column of samples located to the left of the template 2500 .
  • the row of samples may have a length of 2(L1+M)+1.
  • the column of samples may have a height of 2(L2+N)+1.
  • FIG. 35 is a diagram 3500 of example template-reference samples 3510 for the template 2600 including the above sub-template 2430 .
  • the template-reference samples 3510 may be a single row of samples located above the template 2600 and a single column of samples located to the left of the template 2600 .
  • the row of samples may have a length of 2M+1.
  • the column of samples may have a height of 2(L2+N)+1.
  • FIG. 36 is a diagram 3600 of example template-reference samples 3610 for the template 2700 including the left sub-template 2440 .
  • the template-reference samples 3610 may be a single row of samples located above the template 2700 and a single column of samples located to the left of the template 2700 .
  • the row of samples may have a length of 2(L1+M)+1.
  • the column of samples may have a height of 2N+1.
  • FIG. 37 is a diagram 3700 of example template-reference samples 3710 for the template 2600 including the above sub-template 2430 .
  • the template-reference samples 3710 may be a single row of samples located above the template 2600 and a single column of samples located to the left of the template 2600 .
  • the row of samples may have a length of 2(L1+M)+1.
  • the column of samples may have a height of 2(L2+N)+1. Because the template 2600 does not include the left-above sub-template 2420 or the left sub-template 2440 , the column of samples may be spaced from the template 2600 by a horizontal gap 3720 with a width of L1.
  • FIG. 38 is a diagram 3800 of example template-reference samples 3810 for the template 2700 including the left sub-template 2440 .
  • the template-reference samples 3810 may be a single row of samples located above the template 2700 and a single column of samples located to the left of the template 2700 .
  • the row of samples may have a length of 2(L1+M)+1.
  • the column of samples may have a height of 2(L2+N)+1. Because the template 2700 does not include the left-above sub-template 2420 or the above sub-template 2430 , the row of samples may be spaced from the template 2700 by a vertical gap 3820 with a height of L2.
  • FIG. 39 is a diagram 3900 of example template-reference samples 3910 for the template 2500 including the above sub-template 2430 and the left sub-template 2440 .
  • the template-reference samples 3910 may include a single row of samples located above the template 2500 and a single column of samples located to the left of the template 2500 .
  • the column of samples may have a height of 2(L2+N)+1.
  • a portion 3920 of the row may be moved to a location 3930 in a second row that is adjacent the left sub-template 2440 .
  • the portion 3920 may include L1 samples.
  • the remaining portion in the first row may have a length of 2M+L1+1.
  • selecting template-reference samples that are adjacent a sub-template included within the template may improve the prediction of the template.
  • FIG. 40 is a diagram 4000 of example template-reference samples 4010 for the template 2500 including the above sub-template 2430 and the left sub-template 2440 .
  • the template-reference samples 4010 may include a single row of samples located above the template 2500 and a single column of samples located to the left of the template 2500 .
  • the row of samples may have a length of 2(L1+M)+1.
  • a portion 4020 of the column may be moved to a location 4030 in a second column that is adjacent the above sub-template 2430 .
  • the portion 4020 may include L2 samples.
  • the remaining portion in the first column may have a height of 2N+L2+1.
  • selecting template-reference samples that are adjacent a sub-template included within the template may improve the prediction of the template.
  • both of the portion 3920 and the portion 4020 may be moved to the location 3930 and the location 4030 , respectively.
  • FIG. 41 illustrates an example of a system 4100 for performing video decoding.
  • the system 4100 may include a computing device 4102 that performs video decoding via execution of decoding component 4108 by processor 4104 and/or memory 4106 ; the decoding component 4108 may implement video decoder 124 , video decoder 300 , or HEVC video encoder and decoder 400 .
  • the decoding component 4108 may receive an encoded bitstream 4146 .
  • the decoding component 4108 may perform a decoding operation (e.g., entropy decoding) to determine block information that may be provided to the variable template intra prediction unit 4110 , which may include one or more of a template selection unit 4120 , an IPM deriving unit 4130 , a final predictor unit 4140 , and a conversion unit 4150 .
  • the template selection unit 4120 may receive block information for a current block. Block information for previously reconstructed blocks, including reconstructed samples thereof, may be stored in buffer 4160 .
  • the template selection unit 4120 may select for a current block, one or more sub-templates to form a selected template for DIMD.
  • the one or more sub-templates may be selected from a plurality of sub-templates including any of the example sub-templates described herein.
  • the one or more sub-templates may be selected from the left-above sub-template 2420 , the above sub-template 2430 , the left sub-template 2440 , the left-below sub-template 2450 , and the right-above sub-template 2460 .
  • the selected template may be a combination of the selected sub-templates.
  • the template selection unit 4120 may select the one or more sub-templates based on decoded information for the current block.
  • the template selection unit 4120 may determine which of the plurality of sub-templates for the current block have been reconstructed.
  • the template selection unit 4120 may determine a dimension of the selected template based on the decoded information for the current block.
  • the template selection unit 4120 may provide the selected template to the IPM deriving unit 4130 , 4230 .
  • the IPM deriving unit 4130 may determine a cost of using each of a plurality of candidate IPMs to predict samples in a template region based on the selected template and the current block. For example, the IPM deriving unit 4130 may determine one or more of: a sum of the absolute transformed difference (SATD), a sum of the squared errors (SSE), a subjective quality metric, or a structural similarity index measure (SSIM) between reconstructed samples of the template and predicted samples of the template predicted by the candidate IPM. In some examples, the IPM deriving unit 4130 may determine the predicted samples of each sub-template separately.
  • the IPM deriving unit 4130 may determine the cost of each sub-template separately and sum the costs to determine a final cost for the selected template. In some examples, the IPM deriving unit 4130 may down-sample samples in the selected template and calculate the cost based on the down-sampled selected template. In some examples, the IPM deriving unit 4130 may determine one or more lines of template-reference samples neighboring the selected template based on the selected template. In some examples, the IPM deriving unit 4130 may substitute an unavailable template-reference sample with one of: a nearest available template-reference sample, a value based on a defined formula, or a generated value.
  • the IPM deriving unit 4130 may select a derived IPM from the plurality of candidate IPMs based on the cost. For example the IPM deriving unit 4130 may select the best cost, which may be the lowest cost or the highest cost depending on the cost metric. The IPM deriving unit 4130 may provide the derived IPM to the final predictor unit 4140 .
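As one example of the cost metrics above, a sketch of a Hadamard-based SATD over 4×4 sub-blocks follows (SAD or SSE would simply skip or replace the transform step); the block dimensions are assumed to be multiples of 4.

```python
import numpy as np

def satd4(rec, pred):
    """Sum of absolute transformed differences over 4x4 sub-blocks using
    a 4x4 Hadamard transform, one of the costs the IPM deriving unit
    may compute between reconstructed and predicted template samples."""
    H4 = np.array([[1, 1, 1, 1], [1, -1, 1, -1],
                   [1, 1, -1, -1], [1, -1, -1, 1]])
    diff = np.asarray(rec, float) - np.asarray(pred, float)
    total = 0.0
    for y in range(0, diff.shape[0] - 3, 4):
        for x in range(0, diff.shape[1] - 3, 4):
            d = diff[y:y + 4, x:x + 4]
            total += np.abs(H4 @ d @ H4.T).sum()  # transform, then L1 sum
    return total
```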
  • the final predictor unit 4140 may determine, based on the at least one IPM, a final predictor of the current video block. For example, the final predictor unit 4140 may predict the samples in the current block with intra-prediction using the derived IPM. For example, the final predictor unit 4140 may predict the samples in the current block using any of the techniques described above.
  • the final predictor unit 4140 may provide the predicted samples as a predicted block to the conversion unit 4150 , which may combine (e.g., sum) the predicted block with a residual block determined by the residual decoding unit 4170 .
  • a reconstructed block may be stored in the buffer 4160 to be used as reference samples for predicting other blocks. When all blocks are reconstructed, the frame or picture may be read out from the buffer 4160 as one of the plurality of video frames 4142 .
  • FIG. 42 illustrates an example of a system 4200 for performing video encoding.
  • the system 4200 may include a computing device 4202 that performs video encoding via execution of encoding component 4210 by processor 4204 and/or memory 4206 ; the encoding component 4210 may implement video encoder 114 , video encoder 200 , or HEVC video encoder and decoder 400 .
  • the system 4200 may receive a plurality of video frames 4242 as input and output an encoded bitstream 4246 .
  • encoding component 4210 can include at least one of a template selection unit 4220 for constructing, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates, an IPM deriving unit 4230 for deriving at least one intra-prediction mode (IPM) based on cost calculations, and a final predictor unit 4240 for determining, based on the at least one IPM, a final predictor of the current video block.
  • encoding component 4210 may also include a conversion unit 4250 for performing the conversion based on the final predictor.
  • FIG. 43 is a flowchart of an example method 4300 of processing video data.
  • the method 4300 may be used for encoding a video into a bitstream or decoding a bitstream into a video.
  • a video may include a plurality of frames, each frame including blocks of samples.
  • the method 4300 may be performed for at least a current video block.
  • the method 4300 may be performed by a video decoder such as the system 4100 including the variable template intra prediction unit 4110 or a video encoder such as the system 4200 including the encoding component 4210 .
  • the method 4300 may include constructing, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates.
  • the one or more sub-templates may be selected from a plurality of sub-templates including: a left sub-template, an above sub-template, a right-above sub-template, a left-below sub-template, and a left-above sub-template.
  • the left sub-template 2440 , 3240 (e.g., Template-L) includes left neighboring samples
  • the above sub-template 2430 , 3230 (e.g., Template-A) includes above neighboring samples
  • the right-above sub-template 2460 , 3260 (e.g., Template-RA) includes right-above neighboring samples
  • the left-below sub-template 2450 , 3250 (e.g., Template-LB) includes left-bottom neighboring samples
  • the left-above sub-template 2420 , 3220 (e.g., Template-LA) includes left-above neighboring samples.
  • the template selection unit 4120 may select the one or more sub-templates to form the selected template (e.g., template 2400 , 2500 , 2600 , 2700 , 2800 , 2900 , 3000 , 3100 , 3200 ). In an aspect, selecting the one or more sub-templates to form the selected template is performed for each candidate IPM.
  • the sub-templates described above may be non-adjacent to the current block (e.g., FIG. 32 ).
  • a single sub-template is selected.
  • the single sub-template may refer to Template-L, or Template-A, or Template-RA, or Template-LB, or Template-LA.
  • Template-L may be selected.
  • Template-A may be selected.
  • multiple sub-templates may be selected. For instance (e.g., FIG. 25 ), the combination of Template-L and Template-A may be selected. In another example, the combination of Template-L, Template-A, and Template-LA may be selected.
  • the combination of Template-L and Template-LB may be selected.
  • the combination of Template-A and Template-RA may be selected.
  • the combination of Template-L, Template-LB, Template-A, and Template-RA may be selected.
  • the combination of Template-L, Template-LB, Template-A, Template-RA, and Template-LA may be selected.
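  • To make the enumeration above concrete, the following minimal Python sketch lists the supported combinations; the tuple encoding and helper name are assumptions for illustration, not part of the disclosure:

        # Illustrative enumeration of the template-set combinations listed above.
        # Labels follow the Template-L/A/RA/LB/LA naming used in this disclosure.
        TEMPLATE_SETS = [
            ("L",), ("A",), ("RA",), ("LB",), ("LA",),   # single sub-template
            ("L", "A"),
            ("L", "A", "LA"),
            ("L", "LB"),
            ("A", "RA"),
            ("L", "LB", "A", "RA"),
            ("L", "LB", "A", "RA", "LA"),
        ]

        def is_supported_template_set(subs):
            """Check whether a candidate combination is one of the listed sets."""
            return tuple(subs) in TEMPLATE_SETS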
  • the block 4310 may optionally include selecting the one or more sub-templates based on decoded information for the current block.
  • the selecting may occur on-the-fly, that is, for each block during the decoding process.
  • selecting the sub-templates may depend on the decoded information for the current block.
  • the decoded information may refer to a block dimension and/or a block shape.
  • Template-L may not be used for wide blocks and Template-A may not be used for tall blocks.
  • a block width may be denoted as BW and a block height may be denoted as BH. For example, Template-L may not be selected when BW divided by BH is greater than or equal to a threshold T1, and Template-A may not be selected when BH divided by BW is greater than or equal to a threshold T2.
  • T1 may be equal to T2, or T1 may not be equal to T2; for instance, T1 may be less than T2, or T1 may be larger than T2.
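  • A minimal sketch of such shape-based selection, assuming Template-L is skipped for wide blocks and Template-A for tall blocks; the threshold defaults are placeholders, not values from the disclosure:

        def shape_based_subtemplates(bw, bh, t1=2.0, t2=2.0):
            subs = {"L", "A", "RA", "LB", "LA"}
            if bw / bh >= t1:   # wide block: do not use the left sub-template
                subs.discard("L")
            if bh / bw >= t2:   # tall block: do not use the above sub-template
                subs.discard("A")
            return subs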
  • selecting the sub-templates may depend on a signalled syntax element.
  • the decoded syntax element may indicate the one or more sub-templates.
  • the block 4310 may optionally include determining which of the plurality of sub-templates for the current block have been reconstructed.
  • the reconstructed samples for the template may be available.
  • one or more sub-templates (e.g., 2420, 2430, 2440, 2450, 2460) may be unavailable, in which case the other sub-templates may be selected. For instance, a sub-template may be unavailable near an edge of a picture.
  • when the sub-templates at the left side of the current block (e.g., Template-L, Template-LA, or/and Template-LB) are unavailable, the sub-templates at the above side of the current block (e.g., Template-A or/and Template-RA) may be selected.
  • when the sub-templates at the above side of the current block (e.g., Template-A, Template-LA, or/and Template-RA) are unavailable, the sub-templates at the left side of the current block (e.g., Template-L or/and Template-LB) may be selected.
  • when no sub-template is available, a pre-defined IPM may be used instead of deriving an IPM.
  • the pre-defined IPM may refer to DC, Planar, horizontal mode, or vertical mode.
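  • A minimal sketch of this availability handling, assuming is_reconstructed and derive_ipm are supplied callables (the Planar index follows the usual VVC convention but is illustrative here):

        PLANAR = 0   # Planar mode index in the usual VVC convention (illustrative)

        def select_or_fallback(subs, is_reconstructed, derive_ipm, fallback=PLANAR):
            available = [s for s in subs if is_reconstructed(s)]
            if not available:
                # No usable template (e.g., block at a picture corner):
                # use the pre-defined IPM instead of deriving one.
                return fallback
            return derive_ipm(available)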
  • the method 4300 may optionally include determining a dimension of the selected template based on the decoded information for the current block.
  • the template selection unit 4120 may determine dimensions (e.g., M, N, L1, L2) of the selected template based on the decoded information for the current block (e.g., block dimension, or/and block shape, or/and slice/picture type).
  • the dimensions of Template-L, Template-A, Template-RA, Template-LB, and Template-LA may be L1×BH, BW×L2, BW′×L2, L1×BH′, and L1×L2, respectively.
  • BW′ may represent the width of Template-RA
  • BH′ may represent the height of Template-LB.
  • L1 or/and L2 may be pre-defined values.
  • L1 may be less than L2.
  • L1 may be larger than L2.
  • the values of L1 and/or L2 may depend on slice type. For instance, the values of L1 or/and L2 for I slice may be no less than the values for inter-coded slices (e.g., P/B slice).
  • L1 or/and L2 for I slice may be larger than the values for inter-coded slices (e.g., P/B slice).
  • when BW×BH is less than or equal to a configured threshold T5, L1 may be set equal to E1 and L2 may be set equal to E2; when BW×BH is larger than T5, L1 may be set equal to E3 and L2 may be set equal to E4.
  • E1 may be equal to E2, or/and E3 may be equal to E4.
  • L1 or/and L2 may be determined by a signaled syntax element.
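  • A minimal sketch of deriving template dimensions from the decoded information; T5 and E1..E4 are configuration placeholders, and BW′ = BW, BH′ = BH is shown as one permissible choice:

        def template_dimensions(bw, bh, t5=64, e1=2, e2=2, e3=4, e4=4):
            l1, l2 = (e1, e2) if bw * bh <= t5 else (e3, e4)
            bw_ra, bh_lb = bw, bh   # BW' and BH' (one permissible choice)
            return {
                "L":  (l1, bh),     # (width, height) = L1 x BH
                "A":  (bw, l2),     # BW x L2
                "RA": (bw_ra, l2),  # BW' x L2
                "LB": (l1, bh_lb),  # L1 x BH'
                "LA": (l1, l2),     # L1 x L2
            }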
  • the method 4300 may include deriving at least one intra-prediction mode (IPM) based on cost calculations.
  • the IPM deriving unit 4130, 4230 may determine the cost of using each of a plurality of candidate IPMs to predict samples in a template region based on the selected template and the current block.
  • the block 4330 may optionally include determining one of: a sum of the absolute transformed difference (SATD), a sum of the squared errors (SSE), a subjective quality metric, or a structural similarity index measure (SSIM) between reconstructed samples of the template and predicted samples of the template predicted by the candidate IPM.
  • the IPM deriving unit 4130, 4230 may determine the predicted samples of the template for each candidate IPM. In an aspect, because the samples of the selected template have been reconstructed, the IPM deriving unit 4130, 4230 may compare the predicted samples of the template with the reconstructed samples of the template to determine the cost.
  • the cost may be calculated in the form of D + lambda×R, where D is a metric of distortion such as SAD, SATD, or SSE, R represents the number of bits under consideration, and lambda is a pre-defined factor. In an aspect, where different templates are selected for different IPMs, the cost may be normalized by dividing by the total number of samples in the used templates for each IPM.
  • the IPM deriving unit 4130 , 4230 may select the candidate IPM that has the best cost.
  • the best cost may be the lowest cost when the cost metric is a SATD, SSE, or SAD.
  • the best cost may be the highest value when the cost metric is a subjective quality metric or SSIM.
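  • A minimal sketch of the derivation loop, assuming predict_template and distortion are supplied helpers (a rate term could be added to obtain the D + lambda×R form described above):

        def derive_best_ipm(candidate_ipms, recon_template, predict_template, distortion):
            best_ipm, best_cost = None, float("inf")
            for ipm in candidate_ipms:
                pred = predict_template(ipm)             # predicted template samples
                cost = distortion(pred, recon_template)  # e.g., SATD/SSE/SAD: lower is better
                if cost < best_cost:
                    best_ipm, best_cost = ipm, cost
            return best_ipm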
  • the block 4330 may optionally include determining the predicted samples of each sub-template separately.
  • the block 4330 may optionally include determining a cost of each sub-template separately and summing the costs to determine a final cost for the selected template.
  • the costs of sub-templates may be used to get the final cost.
  • a weight may be used when summing up the cost of each sub-template.
  • the final cost may be computed as J = w1×J1 + w2×J2 + w3×J3 + … + wM×JM, where Ji and wi denote the cost and the weight of the i-th template, respectively, and J denotes the final cost.
  • Template-A or/and Template-L may have larger weights than the other templates.
  • the weights may depend on block dimension or/and block shape of the current block.
  • the cost of each template may be normalized by dividing by the number of samples in the template.
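  • A minimal sketch of combining per-sub-template costs, with optional per-template normalization by sample count (the weights here are assumptions):

        def combine_costs(costs, weights, sample_counts, normalize=True):
            # Final cost J = w1*J1 + w2*J2 + ... + wM*JM
            total = 0.0
            for j, w, n in zip(costs, weights, sample_counts):
                total += w * (j / n if normalize else j)
            return total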
  • the block 4330 may optionally include down-sampling samples in the selected template and calculating the cost based on the down-sampled selected template.
  • Down-sampling may refer to using partial samples to calculate the cost.
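  • For instance, a sketch of a cost computed over every other template sample (the subsampling step is illustrative):

        def downsampled_cost(pred, recon, distortion, step=2):
            return distortion(pred[::step], recon[::step])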
  • the block 4330 may optionally include determining one or more lines of template-reference samples neighboring the selected template based on the selected template.
  • one or more lines (e.g., rows or columns) of template-reference samples are illustrated in FIGS. 33-40.
  • multiple lines of template-reference samples neighboring the selected template may be used.
  • N lines of template-reference samples neighboring the template may be used to derive the prediction of the template, where N is an integer and larger than 1.
  • one of N lines of template-reference samples may be used.
  • which of the N lines of template-reference samples to use may be derived implicitly.
  • which of the N lines of template-reference samples to use may be signalled with a syntax element.
  • which of the N lines of template-reference samples to use may be inherited from the index of the reference line used in MRL.
  • the N lines of template-reference samples neighboring the template may first be down-sampled into one line, and the down-sampled line may be used as the reference.
  • the number of lines of template-reference samples for each template may be different.
  • the number of lines of template-reference samples may depend on the decoded information, such as whether the current block is located at the CTU boundary, the dimension or shape of the current block, or/and a slice or picture type.
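  • A minimal sketch of these options for choosing among N template-reference lines (the mode strings and parameters are assumptions for illustration):

        def pick_reference_line(lines, mode="implicit", mrl_index=0, signalled_index=0):
            if mode == "inherit_mrl":
                return lines[mrl_index]          # reuse the MRL reference-line index
            if mode == "signalled":
                return lines[signalled_index]    # index parsed from a syntax element
            if mode == "downsample":
                n = len(lines)
                return [sum(col) / n for col in zip(*lines)]  # average the N lines
            return lines[0]                      # implicit: e.g., the nearest line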
  • unavailable template-reference samples may be filled using the available template-reference samples.
  • the unavailable template-reference samples may be filled using the nearest available template-reference samples.
  • the unavailable template-reference samples may be filled by a same value.
  • the unavailable template-reference samples may be filled using different values according to their distances from the available template-reference samples.
  • the unavailable template-reference samples may be substituted by generated values. For example, the same reference sample substitution process as in HEVC or VVC may be used to fill these samples.
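  • A simplified sketch in the spirit of that substitution process (a single nearest-neighbour pass; the mid-level default value is illustrative):

        def fill_unavailable(samples, default=512):
            """samples: list where None marks an unavailable position."""
            last = next((s for s in samples if s is not None), default)
            out = []
            for s in samples:
                last = s if s is not None else last  # propagate nearest available value
                out.append(last)
            return out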
  • the method 4300 may include determining, based on the at least one IPM, a final predictor of the current video block.
  • the final predictor unit 4140, 4240 may determine, based on the at least one IPM, a final predictor of the current video block.
  • the final predictor unit 4140, 4240 may predict the samples in the current block with intra prediction using the derived IPM.
  • the final predictor unit 4140, 4240 may determine the positions of reference samples based on the derived IPM. The reference samples may be located within the selected template.
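  • As a minimal sketch, the reference line adjacent to the current block may be taken from inside the selected template (the data layout assumed here is purely illustrative):

        def references_from_template(above_template, left_template):
            # above_template: rows top-to-bottom; left_template: rows of the left region.
            above_refs = above_template[-1]                  # bottom row touches the block
            left_refs = [row[-1] for row in left_template]   # rightmost column touches it
            return above_refs, left_refs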
  • the method 4300 may include performing the conversion based on the final predictor.
  • the conversion unit 4250 may perform the conversion based on the final predictor.
  • whether to and/or how to apply the disclosed methods above may be signaled at the sequence level, picture level, slice level, or tile group level, such as in a sequence header, picture header, SPS, VPS, DPS, DCI, PPS, APS, slice header, or tile group header.
  • whether to and/or how to apply the disclosed methods above may be signaled at PU, TU, CU, VPDU, CTU, CTU row, slice, tile, or sub-picture.
  • whether to and/or how to apply the disclosed methods above may be dependent on coded information, such as block size, color format, single/dual tree partitioning, colour component, slice type or picture type.
  • a method of processing video data comprising:
  • the plurality of sub-templates includes at least one of the following: a left sub-template, an above sub-template, a right-above sub-template, a left-below sub-template, and a left-above sub-template.
  • the at least one template set includes a single sub-template, and wherein the single sub-template is one of a left sub-template, an above sub-template, a right-above sub-template, a left-below sub-template, or a left-above sub-template.
  • the dimension of one of the plurality of sub-templates is one of L1 ⁇ BH, BW ⁇ L2, BW′ ⁇ L2, L1 ⁇ BH′ or L1 ⁇ L2, where L1 is a height, L2 is a width, BW and BH represent width and height of the current video block respectively, BW′ represents the width of a right-above sub-template and BH′ represents the height of a left-below sub-template respectively, and wherein BW′ is equal to BW or BH and BH′ is equal to BH or BW, and L1 or L2 is a predefined value.
  • An apparatus for processing video data comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to:
  • a non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus, wherein the method comprises:
  • a non-transitory computer-readable storage medium storing instructions that cause a processor to:
  • a method of processing video data comprising:
  • An apparatus for processing video data comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to:
  • a non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus, wherein the method comprises:
  • a non-transitory computer-readable storage medium storing instructions that cause a processor to:
  • a method of processing a video wherein the video comprises a plurality of frames, each frame comprising blocks of samples, the method comprising, for at least a current block:
  • selecting the one line of the N lines comprises implicitly deriving the one line.
  • the method of clause 38 further comprises down-sampling the one or more lines to derive a single line.
  • determining one or more lines of template-reference samples comprises substituting an unavailable template-reference sample with an available reference sample.
  • An apparatus for processing video data comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to:
  • a non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus, wherein the method comprises:
  • a non-transitory computer-readable storage medium storing instructions that cause a processor to:
  • Combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” include any combination of A, B, or C, and may include multiples of A, multiples of B, or multiples of C.
  • combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C.

Abstract

Example implementations include a method, apparatus and computer-readable medium of video processing, including constructing, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates. The one or more sub-templates may be selected from a plurality of sub-templates including: a left sub-template, an above sub-template, a right-above sub-template, a left-below sub-template, and a left-above sub-template. The implementations further include deriving at least one intra-prediction mode (IPM) based on cost calculations. The implementations include determining, based on the at least one IPM, a final predictor of the current video block. The implementations include performing the conversion based on the final predictor.

Description

    BACKGROUND
  • The present disclosure relates generally to video coding, and more particularly, to video encoding and decoding based on templates for decoder-side intra-prediction mode derivation.
  • SUMMARY
  • The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
  • In an aspect, the disclosure provides a method of processing video data. A video may include a plurality of frames, each frame including blocks of samples. The method may include constructing, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates. The method may include deriving at least one intra-prediction mode (IPM) based on cost calculations. The method may include determining, based on the at least one IPM, a final predictor of the current video block. The method may include performing the conversion based on the final predictor.
  • In some implementations, the plurality of sub-templates includes at least one of the following: a left sub-template, an above sub-template, a right-above sub-template, a left-below sub-template, and a left-above sub-template.
  • In some implementations, the plurality of sub-templates includes non-adjacent samples of the current video block.
  • In some implementations, the at least one template set includes a single sub-template, and wherein the single sub-template is one of a left sub-template, an above sub-template, a right-above sub-template, a left-below sub-template, or a left-above sub-template.
  • In some implementations, the at least one template set includes any one of the following:
  • a) a left sub-template and an above sub-template; b) a left sub-template, an above sub-template, and a left-above sub-template; c) a left sub-template and a left-below sub-template; d) an above sub-template and a right-above sub-template; e) a left sub-template, a left-below sub-template, an above sub-template, and a right-above sub-template; or f) a left sub-template, a left-below sub-template, an above sub-template, a right-above sub-template, and a left-above sub-template.
  • In some implementations, the at least one template set is selected from the plurality of sub-templates based on coding information for the current block, wherein the coding information includes a block dimension or block shape. The at least one template set may be selected from the plurality of sub-templates based on a relationship between the dimension and a pre-defined threshold. BW may represent a block width of the current video block and BH may represent a block height of the current video block. In some implementations, a left sub-template is not selected in a case that BW divided by BH is greater than or equal to a first threshold. In some implementations, an above sub-template is not selected in a case that BH divided by BW is greater than or equal to a second threshold.
  • In some implementations, a dimension of one of the plurality of sub-templates is based on at least one of the following: a) a dimension of the current video block; b) a block shape of the current video block; c) a slice type of the current video block; or d) a picture type of the current video block.
  • In some implementations, constructing at least one template set from a plurality of sub-templates is further related to a second syntax element presented in the bitstream.
  • In some implementations, in response to the plurality of sub-templates being unavailable, no template set is constructed and no IPM is derived. The final predictor of the current video block may be determined based on a predefined IPM, and the predefined IPM may be one of DC mode, planar mode, horizontal mode, or vertical mode.
  • In some implementations, the conversion includes decoding the current video block from the bitstream.
  • In some implementations, the conversion includes encoding the current video block into the bitstream.
  • In another aspect, the disclosure provides an apparatus for processing video data. The apparatus includes a processor and a non-transitory memory with instructions thereon. The instructions, upon execution by the processor, cause the processor to: construct, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates; derive at least one intra-prediction mode (IPM) based on cost calculations; determine, based on the at least one IPM, a final predictor of the current video block; and perform the conversion based on the final predictor. The apparatus may be configured to perform any of the above implementations of the method.
  • In another aspect, the disclosure provides a non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus. The bitstream may be generated by any of the above implementations of the method.
  • In another aspect, the disclosure provides a non-transitory computer-readable storage medium storing instructions that cause a processor to perform any of the above implementations of the method.
  • To the accomplishment of the foregoing and related ends, the one or more aspects include the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail some illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of various aspects may be employed, and this description is intended to include all such aspects and their equivalents.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram that illustrates an example of a video coding system, in accordance with some aspects of the present disclosure.
  • FIG. 2 is a block diagram that illustrates a first example of a video encoder, in accordance with some aspects of the present disclosure.
  • FIG. 3 is a block diagram that illustrates an example of a video decoder, in accordance with some aspects of the present disclosure.
  • FIG. 4 is a block diagram that illustrates a second example of a video encoder, in accordance with some aspects of the present disclosure.
  • FIG. 5 is an example of an encoder block diagram of versatile video coding (VVC) in accordance with some aspects of the present disclosure.
  • FIG. 6 is a schematic diagram of intra mode coding with 67 intra-prediction modes to capture the arbitrary edge directions presented in natural video in accordance with some aspects of the present disclosure.
  • FIGS. 7 and 8 are reference example diagrams of wide-angular intra-prediction in accordance with some aspects of the present disclosure.
  • FIG. 9 is a diagram of discontinuity in case of directions that exceed 45° angle in accordance with some aspects of the present disclosure.
  • FIG. 10 is a schematic diagram of the location of the samples used for the derivation of α and β for the chroma in accordance with some aspects of the present disclosure.
  • FIG. 11 is a schematic diagram of the location of the samples used for the derivation of α and β for the luma in accordance with some aspects of the present disclosure.
  • FIGS. 12-15 illustrate examples of reference samples (Rx,−1 and R−1,y) for PDPC applied over various prediction modes in accordance with some aspects of the present disclosure.
  • FIG. 16 is a diagram of multiple reference line (MRL) intra-prediction used in accordance with aspects of the present disclosure.
  • FIGS. 17 and 18 are example diagrams of intra sub-partitions (ISP), which divide luma intra-predicted blocks vertically or horizontally into sub-partitions depending on the block size, in accordance with some aspects of the present disclosure.
  • FIG. 19 is a diagram of a matrix weighted intra-prediction process (MIP) method for VVC in accordance with some aspects of the present disclosure.
  • FIG. 20 is a diagram of a template based intra mode derivation where the target denotes the current block (of block size N) for which intra-prediction mode is to be estimated in accordance with some aspects of the present disclosure.
  • FIG. 21 is a diagram of a template of a set of chosen pixels on which a gradient analysis may be performed based on intra-prediction mode derivation in accordance with some aspects of the present disclosure.
  • FIG. 22 is a diagram of a convolution of a 3×3 Sobel gradient filter with the template in accordance with aspects of the present disclosure.
  • FIG. 23 is a schematic diagram of intra mode coding with greater than 67 intra-prediction modes in accordance with some aspects of the present disclosure.
  • FIG. 24 is a diagram of an example template including a left-above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 25 is a diagram of an example template including a left sub-template and an above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 26 is a diagram of an example template including an above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 27 is a diagram of an example template including a left sub-template in accordance with some aspects of the present disclosure.
  • FIG. 28 is a diagram of an example template including a left sub-template and a left-below sub-template in accordance with some aspects of the present disclosure.
  • FIG. 29 is a diagram of an example template including an above sub-template and a right-above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 30 is a diagram of an example template including a left sub-template, a left-below sub-template, an above sub-template, and a right-above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 31 is a diagram of an example template including a left-above sub-template, a left sub-template, a left-below sub-template, an above sub-template, and a right-above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 32 is a diagram of an example template including sub-templates that are spaced apart from a target block in accordance with some aspects of the present disclosure.
  • FIG. 33 is a diagram of example template-reference samples for a template including a left-above sub-template, a left sub-template, and an above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 34 is a diagram of example template-reference samples for a template including a left sub-template and an above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 35 is a diagram of example template-reference samples for a template including an above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 36 is a diagram of example template-reference samples for a template including a left sub-template in accordance with some aspects of the present disclosure.
  • FIG. 37 is a diagram of example template-reference samples with a horizontal gap for a template including an above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 38 is a diagram of example template-reference samples with a vertical gap for a template including an above sub-template in accordance with some aspects of the present disclosure.
  • FIG. 39 is a diagram of example template-reference samples with a vertically shifted portion for a template in accordance with some aspects of the present disclosure.
  • FIG. 40 is a diagram of example template-reference samples with a horizontally shifted portion for a template in accordance with some aspects of the present disclosure.
  • FIG. 41 is a diagram of an example apparatus including a video decoder with a variable template intra-prediction unit in accordance with some aspects of the present disclosure.
  • FIG. 42 is a diagram of an example apparatus including a video encoder for variable template intra-prediction in accordance with some aspects of the present disclosure.
  • FIG. 43 is a flowchart of an example method of decoding a bitstream in accordance with some aspects of the present disclosure.
  • DETAILED DESCRIPTION
  • The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to a person having ordinary skill in the art that these concepts may be practiced without these specific details. In some instances, structures and components are shown in block diagram form in order to avoid obscuring such concepts.
  • Several aspects of video coding and decoding will now be presented with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, among other examples (collectively referred to as “elements”). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
  • By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a “processing system” that includes one or more processors. Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems on a chip (SoC), baseband processors, field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, among other examples, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
  • Accordingly, in one or more examples, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media, which may be referred to as non-transitory computer-readable media. Non-transitory computer-readable media may exclude transitory signals. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can include a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer.
  • Aspects described herein generally relate to selecting one or more templates used for deriving an intra-prediction mode (IPM) for decoding (or encoding) images. For example, video coding technologies, such as high efficiency video coding (HEVC), versatile video coding (VVC), etc., can use IPMs at the encoding side and decoding side to encode/decode each image or frame of a video, so as to compress the number of bits in the bitstream, which can provide for efficient storage or transmission of the image or frame, and thus the video. As described in further detail herein, the IPMs are determined or specified per block of an image, where a block can include a portion of the image defined by a subset of coding units (CUs) or prediction units (PUs) (e.g., N×N CUs or PUs), where each CU or PU can be a pixel, a chroma, a luma, a collection of such, etc. The IPM is then used to predict a given block based on reference pixels, chromas, lumas, etc. of a previously decoded block. This can save from storing or communicating values for each pixel, chroma, or luma, etc. In current video coding technologies, the IPM used for encoding a block may be signaled in the bitstream.
  • In accordance with aspects described herein, the IPM used for intra-prediction may be derived at the decoder. Deriving the IPM may reduce the bitrate of the bitstream by reducing the bits used for signaling the IPM. Conventionally, the template used for intra-prediction has been fixed. In an aspect, the present disclosure provides techniques for selecting a template. The template may be a combination of one or more sub-templates. Selecting which sub-templates to include in the template may result in a template that better predicts the samples of the block. For example, the template may be selected based on the shape or dimensions of the block. Additionally, selecting the template may reduce the ratio of unavailable samples included in the template. For instance, the template may be selected based on which sub-templates for the current block have been reconstructed. Accordingly, selecting the template may reduce processing of unavailable samples. Additional aspects described herein relate to determining a cost of an IPM based on the selected template.
  • FIG. 1 is a block diagram that illustrates an example of a video coding system 100 that may utilize the techniques of this disclosure. As shown in FIG. 1, video coding system 100 may include a source device 110 and a destination device 120. The source device 110, which may be referred to as a video encoding device, may generate encoded video data. The destination device 120, which may be referred to as a video decoding device, may decode the encoded video data generated by the source device 110. The source device 110 may include a video source 112, a video encoder 114, and an input/output (I/O) interface 116.
  • The video source 112 may include a source such as a video capture device, an interface to receive video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of such sources. The video data may comprise one or more pictures or images. The terms “picture,” “image,” or “frame” can be used interchangeably throughout to refer to a single image in a stream of images that produce a video. The video encoder 114 encodes the video data from the video source 112 to generate a bitstream. The bitstream may include a sequence of bits that form a coded representation of the video data. The bitstream may include coded pictures and associated data. The coded picture is a coded representation of a picture. The associated data may include sequence parameter sets, picture parameter sets, and other syntax structures. The I/O interface 116 may include a modulator/demodulator (modem) and/or a transmitter, a bus, or substantially any mechanism that facilitates transfer of data between devices or within a computing device that may include both the source device 110 and destination device 120 (e.g., where the computing device stores the encoded video generated using functions of the source device 110 for display using functions of the destination device 120). In one example, the encoded video data may be transmitted directly to destination device 120 via the I/O interface 116 through the network 130 a. The encoded video data may also be stored onto a storage medium/server 130 b for access by destination device 120.
  • The destination device 120 may include an I/O interface 126, a video decoder 124, and a display device 122. The I/O interface 126 may include a receiver and/or a modem, a bus, or substantially any mechanism that facilitates transfer of data between devices or within a computing device. The I/O interface 126 may acquire encoded video data from the source device 110 or the storage medium/server 130 b. The video decoder 124 may decode the encoded video data. The display device 122 may display the decoded video data to a user. The display device 122 may be integrated with the destination device 120, or may be external to the destination device 120, which may be configured to interface with an external display device.
  • The video encoder 114 and the video decoder 124 may operate according to a video compression standard, such as the HEVC standard, VVC standard and other current and/or further standards.
  • FIG. 2 is a block diagram illustrating an example of a video encoder 200, which may be an example of the video encoder 114 in the system 100 illustrated in FIG. 1, in accordance with some aspects of the present disclosure.
  • The video encoder 200 may be configured to perform any or all of the techniques of this disclosure. In the example of FIG. 2, the video encoder 200 includes a plurality of functional components. The techniques described in this disclosure may be shared among the various components of the video encoder 200. In some examples, a processor may be configured to perform any or all of the techniques described in this disclosure, including those of video encoder 200.
  • The functional components of video encoder 200 may include one or more of a partition unit 201, a prediction unit 202 which may include a mode select unit 203, a motion estimation unit 204, a motion compensation unit 205 and an intra-prediction unit 206, a residual generation unit 207, a transform unit 208, a quantization unit 209, an inverse quantization unit 210, an inverse transform unit 211, a reconstruction unit 212, a buffer 213, and an entropy encoding unit 214.
  • In other examples, the video encoder 200 may include more, fewer, or different functional components. In an example, the prediction unit 202 may include an intra block copy (IBC) unit. The IBC unit may perform prediction in an IBC mode in which at least one reference picture is a picture where the current video block is located.
  • Furthermore, some components, such as the motion estimation unit 204 and the motion compensation unit 205, may be highly integrated, but are separately represented in the example of FIG. 2 for purposes of explanation.
  • The partition unit 201 may partition a picture into one or more video blocks. The video encoder 200 and the video decoder 300 may support various video block sizes.
  • The mode select unit 203 may select one of the coding modes, intra or inter, e.g., based on error results, and provide the resulting intra- or inter-coded block to a residual generation unit 207 to generate residual block data and/or to a reconstruction unit 212 to reconstruct the encoded block for use as a reference picture. In some examples, the mode select unit 203 may select a combination of intra- and inter-prediction (CIIP) mode in which the prediction is based on an inter-prediction signal and an intra-prediction signal. The mode select unit 203 may also select a resolution for a motion vector (e.g., a sub-pixel or integer pixel precision) for the block in the case of inter-prediction.
  • To perform inter-prediction on a current video block, the motion estimation unit 204 may generate motion information for the current video block by comparing one or more reference frames from buffer 213 to the current video block. In an example, each reference frame can correspond to a picture of the video. The motion compensation unit 205 may determine a predicted video block for the current video block based on the motion information and decoded samples of pictures from the buffer 213 other than the picture associated with the current video block.
  • The motion estimation unit 204 and the motion compensation unit 205 may perform different operations for a current video block, for example, depending on whether the current video block is in an I-slice, a P-slice, or a B-slice. As used herein, in some aspects, an “I-slice” may refer to a portion of a picture composed of macroblocks, all of which are based upon macroblocks within the same picture. Further, as used herein, in some aspects, “P-slices” and “B-slices” may refer to portions of a picture composed of macroblocks that are not dependent on macroblocks in the same picture.
  • In some examples, the motion estimation unit 204 may perform uni-directional prediction for the current video block, and the motion estimation unit 204 may search reference pictures of list 0 or list 1 for a reference video block for the current video block. The motion estimation unit 204 may then generate a reference index that indicates the reference picture in list 0 or list 1 that contains the reference video block and a motion vector that indicates a spatial displacement between the current video block and the reference video block. The motion estimation unit 204 may output the reference index, a prediction direction indicator, and the motion vector as the motion information of the current video block. The motion compensation unit 205 may generate the predicted video block of the current block based on the reference video block indicated by the motion information of the current video block.
  • In other examples, the motion estimation unit 204 may perform bi-directional prediction for the current video block, where the motion estimation unit 204 may search the reference pictures in list 0 for a reference video block for the current video block and may also search the reference pictures in list 1 for another reference video block for the current video block. The motion estimation unit 204 may then generate reference indexes that indicate the reference pictures in list 0 and list 1 containing the reference video blocks and motion vectors that indicate spatial displacements between the reference video blocks and the current video block. The motion estimation unit 204 may output the reference indexes and the motion vectors of the current video block as the motion information of the current video block. The motion compensation unit 205 may generate the predicted video block of the current video block based on the reference video blocks indicated by the motion information of the current video block.
  • In some examples, the motion estimation unit 204 may output a full set of motion information for decoding processing of a decoder.
  • In some examples, the motion estimation unit 204 may not output a full set of motion information for the current video. Rather, the motion estimation unit 204 may signal the motion information of the current video block with reference to the motion information of another video block. For example, the motion estimation unit 204 may determine that the motion information of the current video block is sufficiently similar to the motion information of a neighboring video block.
  • In one example, the motion estimation unit 204 may indicate, in a syntax structure associated with the current video block, a value that indicates to the video decoder 300 that the current video block has the same motion information as another video block.
  • In another example, the motion estimation unit 204 may identify, in a syntax structure associated with the current video block, another video block and a motion vector difference (MVD). The motion vector difference indicates a difference between the motion vector of the current video block and the motion vector of the indicated video block. The video decoder 300 may use the motion vector of the indicated video block and the motion vector difference to determine the motion vector of the current video block.
  • As discussed above, video encoder 200 may predictively signal the motion vector. Two examples of predictive signaling techniques that may be implemented by video encoder 200 include advanced motion vector prediction (AMVP) and merge mode signaling.
  • The intra-prediction unit 206 may perform intra-prediction on the current video block. When the intra-prediction unit 206 performs intra-prediction on the current video block, the intra-prediction unit 206 may generate prediction data for the current video block based on decoded samples of other video blocks in the same picture. The prediction data for the current video block may include at least one of a predicted video block or one or more syntax elements.
  • The residual generation unit 207 may generate residual data for the current video block by subtracting (e.g., indicated by the minus sign) the predicted video block(s) of the current video block from the current video block. The residual data of the current video block may include residual video blocks that correspond to different sample components of the samples in the current video block.
  • In other examples, there may be no residual data for the current video block, for example in a skip mode, and the residual generation unit 207 may not perform the subtracting operation.
  • The transform unit 208, which may also be referred to as a transform processing unit, may generate one or more transform coefficient video blocks for the current video block by applying one or more transforms to a residual video block associated with the current video block.
  • After the transform unit 208 generates a transform coefficient video block associated with the current video block, the quantization unit 209 may quantize the transform coefficient video block associated with the current video block based on one or more quantization parameter (QP) values associated with the current video block.
  • The inverse quantization unit 210 and the inverse transform unit 211 may apply inverse quantization and inverse transforms to the transform coefficient video block, respectively, to reconstruct a residual video block from the transform coefficient video block. The reconstruction unit 212 may add the reconstructed residual video block to corresponding samples from one or more predicted video blocks generated by the prediction unit 202 to produce a reconstructed video block associated with the current block for storage in the buffer 213.
  • After the reconstruction unit 212 reconstructs the video block, a loop filtering operation may be performed to reduce video blocking artifacts in the video block.
  • The entropy encoding unit 214 may receive data from other functional components of the video encoder 200. When entropy encoding unit 214 receives the data, entropy encoding unit 214 may perform one or more entropy encoding operations to generate entropy encoded data and output a bitstream that includes the entropy encoded data.
  • FIG. 3 is a block diagram illustrating an example of video decoder 300, which may be an example of the video decoder 124 in the system 100 illustrated in FIG. 1, in accordance with some aspects of the present disclosure.
  • The video decoder 300 may be configured to perform any or all of the techniques of this disclosure. In the example of FIG. 3, the video decoder 300 includes a plurality of functional components. The techniques described in this disclosure may be shared among the various components of the video decoder 300. In some examples, a processor may be configured to perform any or all of the techniques described in this disclosure, including those of video decoder 300.
  • In the example of FIG. 3, the video decoder 300 includes one or more of an entropy decoding unit 301, a motion compensation unit 302, an intra-prediction unit 303, an inverse quantization unit 304, an inverse transform unit 305, a reconstruction unit 306, and a buffer 307. The video decoder 300 may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder 200 (FIG. 2).
  • The video decoder 300 may receive, via the entropy decoding unit 301 or otherwise, an encoded bitstream. The encoded bitstream may include entropy coded video data (e.g., encoded blocks of video data). In this example, the entropy decoding unit 301 may decode the entropy coded video data. Based on the decoded video data, whether entropy decoded or otherwise, the motion compensation unit 302 may determine motion information including motion vectors, motion vector precision, reference picture list indexes, and other motion information. The motion compensation unit 302 may, for example, determine such information by performing the AMVP and merge mode. AMVP may be used, including derivation of several most probable candidates based on data from adjacent PBs and the reference picture. Motion information typically includes the horizontal and vertical motion vector displacement values, one or two reference picture indices, and, in the case of prediction regions in B slices, an identification of which reference picture list is associated with each index. As used herein, in some aspects, a “merge mode” may refer to deriving the motion information from spatially or temporally neighboring blocks.
  • The motion compensation unit 302 may produce motion compensated blocks, possibly performing interpolation based on interpolation filters. Identifiers for interpolation filters to be used with sub-pixel precision may be included in syntax elements received with the encoded bitstream or in separate assistance information, e.g., as specified by a video encoder when encoding the video.
  • The motion compensation unit 302 may use interpolation filters as used by video encoder 200 during encoding of the video block to calculate interpolated values for sub-integer pixels of a reference block. The motion compensation unit 302 may determine the interpolation filters used by video encoder 200 according to received syntax information and use the interpolation filters to produce predictive blocks.
  • The motion compensation unit 302 may use some of the syntax information to determine sizes of blocks used to encode frame(s) and/or slice(s) of the encoded video sequence, partition information that describes how each macroblock of a picture of the encoded video sequence is partitioned, modes indicating how each partition is encoded, one or more reference frames (and reference frame lists) for each inter-encoded block, and other information to decode the encoded video sequence. As used herein, in some aspects, a “slice” may refer to a data structure that can be decoded independently from other slices of the same picture, in terms of entropy coding, signal prediction, and residual signal reconstruction. A slice can either be an entire picture or a region of a picture.
  • The intra-prediction unit 303 may use intra-prediction modes, for example received in the bitstream, to form a prediction block from spatially adjacent blocks. Intra-prediction can be referred to herein as “intra,” and/or intra-prediction modes can be referred to herein as “intra modes.” The inverse quantization unit 304 inverse quantizes, i.e., de-quantizes, the quantized video block coefficients provided in the bitstream and decoded by entropy decoding unit 301. Inverse transform unit 305 applies an inverse transform.
  • The reconstruction unit 306 may sum the residual blocks with the corresponding prediction blocks generated by motion compensation unit 302 or intra-prediction unit 303 to form decoded blocks. If desired, a deblocking filter may also be applied to filter the decoded blocks in order to remove blockiness artifacts. The decoded video blocks are then stored in buffer 307, which provides reference blocks for subsequent motion compensation/intra-prediction and also produces decoded video for presentation on a display device.
  • Although the following description may be focused on High Efficiency Video Coding (HEVC) and/or the standard Versatile Video Coding (VVC), the concepts described herein may be applicable to other coding standards or video codecs.
  • FIG. 4 shows an example of a block diagram of a HEVC video encoder and decoder 400, which may be the video encoder 114 and video decoder 124 in the system 100 illustrated in FIG. 1, video encoder 200 in FIG. 2 and video decoder 300 in FIG. 3, etc., in accordance with some aspects of the present disclosure. The encoding algorithm for generating HEVC-compliant bitstreams may proceed as follows. Each picture can be divided into block regions (e.g., coding tree units (CTUs)), and the precise block division may be transmitted to the decoder. A CTU consists of a luma coding tree block (CTB) and the corresponding chroma CTBs and syntax elements. The size L×L of a luma CTB can be chosen as L=16, 32, or 64 samples, where the larger sizes can enable higher compression. HEVC then supports a partitioning of the CTBs into smaller blocks using a tree structure and quadtree-like signaling. The quadtree syntax of the CTU specifies the size and positions of its luma and chroma CBs. The root of the quadtree is associated with the CTU. Hence, the size of the luma CTB is the largest supported size for a luma CB. The splitting of a CTU into luma and chroma CBs may be jointly signaled. One luma CB and ordinarily two chroma CBs, together with associated syntax, form a coding unit (CU). A CTB may contain only one CU or may be split to form multiple CUs, and each CU has an associated partitioning into prediction units (PUs) and a tree of transform units (TUs).
  • The first picture of the video sequence (and/or the first picture at each clean random access point that enters the video sequence) can use only intra-picture prediction, which uses region-to-region spatial data prediction within the same picture, but does not rely on other pictures to encode the first picture. For the remaining pictures between sequential or random access points, the inter-picture temporal prediction coding mode may be used for most blocks. The encoding process for inter-picture prediction includes selecting motion data including a selected reference picture and a motion vector (MV) to be applied to predict samples of each block.
  • The decision whether to code a picture area using inter-picture or intra-picture prediction can be made at the CU level. A PU partitioning structure has its root at the CU level. Depending on the basic prediction-type decision, the luma and chroma CBs can then be further split in size and predicted from luma and chroma prediction blocks (PBs). HEVC supports variable PB sizes from 64×64 down to 4×4 samples. The prediction residual is coded using block transforms. A TU tree structure has its root at the CU level. The luma CB residual may be identical to the luma transform block (TB) or may be further split into smaller luma TBs. The same applies to the chroma TBs.
  • The encoder and decoder may apply motion compensation (MC) by using MV and mode decision data to generate the same inter-picture prediction signal, which is transmitted as auxiliary information. The residual signal of intra-picture or inter-picture prediction, which is the difference between the original block and its prediction, can be transformed by a linear spatial transformation. Then the transform coefficients can be scaled, quantized, entropy encoded, and transmitted together with the prediction information.
  • The encoder can duplicate the decoder processing loop so that both can generate the same prediction for subsequent data. Therefore, the quantized transform coefficients can be constructed by inverse scaling, and then can be inversely transformed to replicate the decoding approximation of the residual signal. The residual can then be added to the prediction, and the result of this addition can then be fed into one or two loop filters to smooth the artifacts caused by block-by-block processing and quantization. The final picture representation (i.e., the copy output by the decoder) can be stored in the decoded picture buffer for prediction of subsequent pictures. In general, the order of encoding or decoding processing of pictures may be different from the order in which they arrive from the source. As such, in some examples, it may be necessary to distinguish between the decoding order of the decoder (that is, the bit stream order) and the output order (that is, the display order).
  • Video material encoded by HEVC can be input as a progressive image (e.g., because the source video originates from this format or is generated by de-interlacing before encoding). There is no explicit coding feature in the HEVC design to support the use of interlaced scanning, because interlaced scanning is no longer used for displays and has become very uncommon for distribution. However, metadata syntax has been provided in HEVC to allow an encoder to indicate that interlaced video has been sent, either by encoding each field of the interlaced video (i.e., the even or odd lines of each video frame) as a separate picture, or by encoding each interlaced frame as an HEVC coded picture. This provides an effective method for encoding interlaced video without the need to support special decoding processes for it.
  • FIG. 5 is an example of an encoder block diagram 500 of VVC, which can include multiple in-loop filtering blocks: e.g., deblocking filter (DF), sample adaptive offset (SAO), and adaptive loop filter (ALF). Unlike DF, which uses predefined filters, SAO and ALF may utilize the original samples of the current picture to reduce the mean square error between the original samples and the reconstructed samples, by adding an offset and by applying a finite impulse response (FIR) filter, respectively, with coded side information signaling the offsets and filter coefficients. ALF may be located at the last processing stage of each picture and can be regarded as a tool that catches and fixes artifacts created by the previous stages.
  • FIG. 6 is a schematic diagram 600 of intra-prediction mode coding with 67 intra-prediction modes to capture the arbitrary edge directions presented in natural video. In some examples, the number of directional intra modes may be extended from 33, as used in HEVC, to 65 while the planar and the DC modes remain the same.
  • In some examples, the denser directional intra-prediction modes may apply to all block sizes and to both luma and chroma intra-predictions. In HEVC, every intra-prediction mode coded block has a square shape (e.g., a coded block of size N×N) and the length of each of its sides is a power of 2 (e.g., where N is a power of 2).
  • Thus, no division operations are required to generate an intra-predictor using DC mode. In VVC, blocks can have a rectangular shape that may necessitate the use of a division operation per block in the general case. To avoid division operations for DC prediction, the longer side may be used to compute the average for non-square blocks.
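  • As an illustration of this point, the following is a minimal sketch (not the normative VVC process) of a DC predictor that averages only the reference samples along the longer side of a non-square block, so that the division reduces to an add-and-shift because the side length is a power of two; the function name and rounding convention are assumptions for illustration:

```python
import math

def dc_predict(top_refs, left_refs, w, h):
    """Sketch of a DC predictor: for non-square blocks, average only the
    reference samples along the longer side so that the division becomes
    an add-and-shift (the side length is a power of two)."""
    if w == h:
        total = sum(top_refs) + sum(left_refs)
        return (total + w) >> (int(math.log2(w)) + 1)
    refs = top_refs if w > h else left_refs      # longer side only
    side = max(w, h)
    return (sum(refs) + (side >> 1)) >> int(math.log2(side))

# Example: an 8x4 block averages only its 8 above-neighbouring samples.
print(dc_predict([100] * 8, [50] * 4, 8, 4))     # -> 100
```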
  • Although 67 modes are defined in VVC, the exact prediction direction for a given intra-prediction mode index may be further dependent on the block shape. Conventional angular intra-prediction directions are defined from 45 degrees to −135 degrees in a clockwise direction. In VVC, several conventional angular intra-prediction modes may be adaptively replaced with wide-angle intra-prediction modes for non-square blocks. The replaced modes may be signaled using the original mode indexes, which are remapped to the indexes of the wide-angular modes after parsing. In some examples, the total number of intra-prediction modes may be unchanged, i.e., 67, and the intra mode coding method may also be unchanged.
  • FIGS. 7 and 8 are reference example diagrams 700 and 800 of wide-angular intra-prediction. In some examples, the modes replaced by wide-angular modes are as illustrated in Table 1:
  • TABLE 1
    Intra-prediction modes replaced by wide-angular modes

    Aspect ratio    Replaced intra-prediction modes
    W/H == 16       Modes 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
    W/H == 8        Modes 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13
    W/H == 4        Modes 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
    W/H == 2        Modes 2, 3, 4, 5, 6, 7, 8, 9
    W/H == 1        None
    W/H == ½        Modes 59, 60, 61, 62, 63, 64, 65, 66
    W/H == ¼        Modes 57, 58, 59, 60, 61, 62, 63, 64, 65, 66
    W/H == ⅛        Modes 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66
    W/H == 1/16     Modes 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66
  • FIG. 9 is a diagram 900 of the discontinuity that can occur for directions exceeding a 45° angle. In such an instance, two vertically adjacent predicted samples may use two non-adjacent reference samples in the case of wide-angle intra-prediction. Hence, a low-pass reference sample filter and side smoothing may be applied to the wide-angle prediction to reduce the negative effect of the increased gap Δpα. If a wide-angle mode represents a non-fractional offset, there are 8 wide-angle modes satisfying this condition, which are [−14, −12, −10, −6, 72, 76, 78, 80]. When a block is predicted by these modes, the samples in the reference buffer can be directly copied without applying any interpolation. With this modification, the number of samples to be smoothed may be reduced.
  • In VVC, the 4:2:2 and 4:4:4 chroma formats are supported as well as 4:2:0. The chroma derived mode (DM) derivation table for the 4:2:2 chroma format was initially ported from HEVC, extending the number of entries from 35 to 67 to align with the extension of intra-prediction modes. As the HEVC specification does not support prediction angles below −135 degrees or above 45 degrees, luma intra-prediction modes ranging from 2 to 5 may be mapped to 2. Therefore, the chroma DM derivation table for the 4:2:2 chroma format can be updated by replacing some values of the entries of the mapping table to convert the prediction angle more precisely for chroma blocks.
  • In some aspects, for each inter-predicted CU, motion parameters consisting of motion vectors, reference picture indices and a reference picture list usage index, together with additional information needed for new coding features of VVC, may be used for inter-predicted sample generation. The motion parameters can be signaled in an explicit or implicit manner. When a CU is coded with skip mode, the CU may be associated with one PU and may have no significant residual coefficients, no coded motion vector delta, and no reference picture index. A merge mode may be specified whereby the motion parameters for the current CU are obtained from neighboring CUs, including spatial and temporal candidates, as well as additional candidates introduced in VVC. The merge mode can be applied to any inter-predicted CU, not only to skip mode. The alternative to merge mode is the explicit transmission of motion parameters, where the motion vector, the corresponding reference picture index for each reference picture list, the reference picture list usage flag, and other needed information are signaled explicitly per CU.
  • Additionally or alternatively, intra block copy (IBC) is a tool adopted in the HEVC extensions for screen content coding (SCC), and thus may be used by a video encoder 114, 200, 400, as described herein in encoding video, and/or by a video decoder 124, 300, 400, as described herein in decoding video. Such a tool may improve the coding efficiency of screen content materials. As IBC mode may be implemented as a block-level coding mode, block matching (BM) may be performed at the encoder to find the optimal block vector (or motion vector) for each CU. Here, a block vector is used to indicate the displacement from the current block to a reference block, which is already reconstructed inside the current picture. The luma block vector of an IBC-coded CU may be in integer precision. The chroma block vector can be rounded to integer precision as well. When combined with AMVR, the IBC mode can switch between 1-pel and 4-pel motion vector precisions. An IBC-coded CU may be treated as a third prediction mode beyond the intra- and inter-prediction modes. The IBC mode may be applicable to CUs with both width and height smaller than or equal to 64 luma samples.
  • At the encoder side, hash-based motion estimation may be performed for IBC. The encoder performs an RD check for blocks with either width or height no larger than 16 luma samples. For non-merge mode, the block vector search may be performed using a hash-based search first. If the hash search does not return a valid candidate, a block-matching-based local search may be performed. In the hash-based search, hash key matching (32-bit cyclic redundancy check (CRC)) between the current block and a reference block is extended to all allowed block sizes. The hash key calculation for every position in the current picture is based on 4×4 sub-blocks. For a current block of a larger size, a hash key is determined to match that of a reference block when all the hash keys of its 4×4 sub-blocks match the hash keys in the corresponding reference locations. If the hash keys of multiple reference blocks are found to match that of the current block, the block vector cost of each matched reference may be calculated and the one with the minimum cost may be selected. An illustrative sketch of this sub-block hash matching is shown below.
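  • The following is a simplified sketch of the sub-block hash matching just described, assuming an 8-bit picture stored as a list of rows; the function names and the use of zlib.crc32 as the 32-bit CRC are illustrative assumptions, not the reference encoder implementation:

```python
import zlib

def subblock_hash(picture, x, y):
    """32-bit CRC over the 4x4 sub-block whose top-left sample is (x, y)."""
    data = bytes(picture[y + j][x + i] for j in range(4) for i in range(4))
    return zlib.crc32(data)

def block_matches(picture, cur, ref, w, h):
    """A w x h current block matches a reference block only if every 4x4
    sub-block hash matches the hash at the corresponding reference offset."""
    (cx, cy), (rx, ry) = cur, ref
    return all(
        subblock_hash(picture, cx + i, cy + j) == subblock_hash(picture, rx + i, ry + j)
        for j in range(0, h, 4) for i in range(0, w, 4))

# Toy 8-bit picture; identical positions trivially match.
pic = [[(x * 7 + y * 13) % 256 for x in range(64)] for y in range(64)]
print(block_matches(pic, (16, 16), (16, 16), 8, 8))   # -> True
```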
  • In some examples, in block matching search, the search range may be set to cover both the previous and current CTUs. At CU level, IBC mode may be signaled with a flag and it can be signaled as IBC AMVP mode or IBC skip/merge mode. In one example, such as IBC skip/merge mode, a merge candidate index may be used to indicate which of the block vectors in the list from neighboring candidate IBC coded blocks is used to predict the current block. The merge list may include spatial, HMVP, and pairwise candidates.
  • In another example, such as IBC AMVP mode, a block vector difference may be coded in the same way as a motion vector difference. The block vector prediction method uses two candidates as predictors, one from left neighbor and one from above neighbor (if IBC coded). When either neighbor is not available, a default block vector can be used as a predictor. A flag can be signaled to indicate the block vector predictor index.
  • To reduce the cross-component redundancy, a cross-component linear model (CCLM) prediction mode may be used in the VVC, for which the chroma samples are predicted based on the reconstructed luma samples of the same CU by using a linear model as follows:

  • predC(i, j) = α · recL′(i, j) + β   Equation 1
  • In such an instance, predC(i, j) may represent the predicted chroma samples in a CU and recL′(i, j) may represent the down-sampled reconstructed luma samples of the same CU. The CCLM parameters (α and β) may be derived with at most four neighboring chroma samples and their corresponding down-sampled luma samples. For instance, suppose the current chroma block dimensions are W×H; then W′ and H′ are set as W′=W, H′=H when LM mode is applied; W′=W+H when LM-T mode is applied; and H′=H+W when LM-L mode is applied.
  • The above neighboring positions may be denoted as S[0, −1] . . . S[W′−1, −1] and the left neighboring positions may be denoted as S[−1, 0] . . . S[−1, H′−1]. Then the four samples are selected as S[W′/4, −1], S[3*W′/4, −1], S[−1, H′/4], S[−1, 3*H′/4] when LM mode is applied and both above and left neighboring samples are available; S[W′/8, −1], S[3*W′/8, −1], S[5*W′/8, −1], S[7*W′/8, −1] when LM-T mode is applied or only the above neighboring samples are available; and S[−1, H′/8], S[−1, 3*H′/8], S[−1, 5*H′/8], S[−1, 7*H′/8] when LM-L mode is applied or only the left neighboring samples are available.
  • In some aspects, the four neighboring luma samples at the selected positions may be down-sampled and compared four times to find the two larger values, x0A and x1A, and the two smaller values, x0B and x1B. Their corresponding chroma sample values may be denoted as y0A, y1A, y0B and y1B. Then Xa, Xb, Ya and Yb may be derived as:

  • Xa = (x0A + x1A + 1) >> 1; Xb = (x0B + x1B + 1) >> 1; Ya = (y0A + y1A + 1) >> 1; Yb = (y0B + y1B + 1) >> 1   Equation 2
  • Finally, the linear model parameters α and β may be obtained according to the following equations:
  • α = (Ya − Yb) / (Xa − Xb)   Equation 3
  • β = Yb − α · Xb   Equation 4
  • FIG. 10 is a schematic diagram 1000 of location of the samples used for the derivation of α and β for the chroma. FIG. 11 is a schematic diagram 1100 of location of the samples used for the derivation of α and β for the luma. For both FIGS. 10 and 11, the division operation to calculate parameter α may be implemented with a look-up table. To reduce the memory required for storing the table, the diff value (difference between maximum and minimum values) and the parameter α may be expressed by an exponential notation. For example, the diff value is approximated with a 4-bit significant part and an exponent. Consequently, the table for 1/diff is reduced into 16 elements for 16 values of the significand as follows:
  • TABLE 2
    DivTable [ ] = {0, 7, 6, 5, 5, 4, 4, 3, 3, 2, 2, 1, 1, 1, 1, 0}
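  • A hedged sketch of the derivation in Equations 2 through 4 follows, using a floating-point division in place of the DivTable look-up for clarity; the function name and the fallback when the two averaged luma values coincide are illustrative assumptions:

```python
def derive_cclm_params(luma, chroma):
    """Sketch of Equations 2-4: luma/chroma are the four selected
    neighbouring sample pairs; returns (alpha, beta)."""
    order = sorted(range(4), key=lambda k: luma[k])
    x_b = (luma[order[0]] + luma[order[1]] + 1) >> 1    # mean of two smaller
    x_a = (luma[order[2]] + luma[order[3]] + 1) >> 1    # mean of two larger
    y_b = (chroma[order[0]] + chroma[order[1]] + 1) >> 1
    y_a = (chroma[order[2]] + chroma[order[3]] + 1) >> 1
    if x_a == x_b:                                      # flat luma: assumed fallback
        return 0, y_b
    alpha = (y_a - y_b) / (x_a - x_b)                   # Equation 3
    beta = y_b - alpha * x_b                            # Equation 4
    return alpha, beta

print(derive_cclm_params([80, 120, 60, 100], [40, 60, 30, 50]))  # (0.5, 0.0)
```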
  • In an example, the above template and left template can be used together to calculate the linear model coefficients. In another example, the above and left templates can be used alternatively in the other 2 LM modes, called the LM_T and LM_L modes. In LM_T mode, only the above template may be used to calculate the linear model coefficients. To get more samples, the above template is extended to (W+H) samples. In LM_L mode, only the left template is used to calculate the linear model coefficients. To get more samples, the left template may be extended to (H+W) samples. In LM mode, the left and above templates are both used to calculate the linear model coefficients.
  • To match the chroma sample locations for 4:2:0 video sequences, two types of down-sampling filter are applied to the luma samples to achieve a 2-to-1 down-sampling ratio in both the horizontal and vertical directions. The selection of the down-sampling filter is specified by an SPS-level flag. The two down-sampling filters, which correspond to “type-0” and “type-2” content, respectively, are as follows:
  • recL′(i, j) = [recL(2i−1, 2j−1) + 2·recL(2i, 2j−1) + recL(2i+1, 2j−1) + recL(2i−1, 2j) + 2·recL(2i, 2j) + recL(2i+1, 2j) + 4] >> 3   Equation 5
  • recL′(i, j) = [recL(2i, 2j−1) + recL(2i−1, 2j) + 4·recL(2i, 2j) + recL(2i+1, 2j) + recL(2i, 2j+1) + 4] >> 3   Equation 6
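  • A minimal sketch of the two filters of Equations 5 and 6, assuming rec_l is a 2-D array of reconstructed luma samples whose needed neighbors are in range (function names are illustrative):

```python
def downsample_type0(rec_l, i, j):
    # Six-tap filter of Equation 5 (rows 2j-1 and 2j around luma column 2i).
    return (rec_l[2*j - 1][2*i - 1] + 2 * rec_l[2*j - 1][2*i] + rec_l[2*j - 1][2*i + 1]
            + rec_l[2*j][2*i - 1] + 2 * rec_l[2*j][2*i] + rec_l[2*j][2*i + 1] + 4) >> 3

def downsample_type2(rec_l, i, j):
    # Five-tap cross-shaped filter of Equation 6, centred on (2i, 2j).
    return (rec_l[2*j - 1][2*i] + rec_l[2*j][2*i - 1] + 4 * rec_l[2*j][2*i]
            + rec_l[2*j][2*i + 1] + rec_l[2*j + 1][2*i] + 4) >> 3

rec = [[100] * 8 for _ in range(8)]
print(downsample_type0(rec, 1, 1), downsample_type2(rec, 1, 1))   # 100 100
```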
  • Note that only one luma line (the general line buffer in intra-prediction) may be used to generate the down-sampled luma samples when the upper reference line is at the CTU boundary. This parameter computation may be performed as part of the decoding process, not just as an encoder search operation. As a result, no syntax may be needed to convey the α and β values to the decoder.
  • For chroma intra-prediction mode coding, a total of 8 intra-prediction modes are allowed for chroma intra mode coding. Those modes include five traditional intra-prediction modes and three cross-component linear model modes (LM, LM_T, and LM_L). The chroma mode signaling and derivation process are shown in Table 3 below. Chroma mode coding directly depends on the intra-prediction mode of the corresponding luma block. As a separate block partitioning structure for the luma and chroma components is enabled in I slices, one chroma block may correspond to multiple luma blocks. Therefore, for chroma DM mode, the intra-prediction mode of the corresponding luma block covering the center position of the current chroma block can be directly inherited.
  • TABLE 3
    Chroma mode signalling and derivation process

    Chroma prediction   Corresponding luma intra-prediction mode
    mode                0    50   18   1    X (0 <= X <= 66)
    0                   66   0    0    0    0
    1                   50   66   50   50   50
    2                   18   18   66   18   18
    3                   1    1    1    66   1
    4                   0    50   18   1    X
    5                   81   81   81   81   81
    6                   82   82   82   82   82
    7                   83   83   83   83   83
  • TABLE 4
    Unified binarization table for chroma prediction mode

    Value of intra_chroma_pred_mode   Bin string
    4                                 00
    0                                 0100
    1                                 0101
    2                                 0110
    3                                 0111
    5                                 10
    6                                 110
    7                                 111
  • In Table 4, the first bin indicates whether the mode is a regular mode (0) or an LM mode (1). If it is an LM mode, the next bin indicates whether it is LM_CHROMA (0) or not (1). If it is not LM_CHROMA, the next bin indicates whether it is LM_L (0) or LM_T (1). For this case, when sps_cclm_enabled_flag is 0, the first bin of the binarization table for the corresponding intra_chroma_pred_mode can be discarded prior to the entropy coding. In other words, the first bin is inferred to be 0 and hence not coded. This single binarization table is used for both the sps_cclm_enabled_flag equal to 0 and equal to 1 cases. The first two bins in Table 4 are context coded, each with its own context model, and the rest of the bins are bypass coded. A minimal sketch of this binarization is shown below.
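  • The following sketch transcribes the Table 4 binarization, including the first-bin inference when sps_cclm_enabled_flag is 0; the mode numbering follows Table 4 and the function name is illustrative:

```python
BIN_STRINGS = {4: "00", 0: "0100", 1: "0101", 2: "0110",
               3: "0111", 5: "10", 6: "110", 7: "111"}

def binarize_chroma_mode(mode, sps_cclm_enabled_flag):
    bins = BIN_STRINGS[mode]
    if not sps_cclm_enabled_flag:
        # First bin ("regular vs. LM") is inferred to be 0 and not coded.
        assert bins[0] == "0", "LM modes are unavailable when CCLM is off"
        bins = bins[1:]
    return bins

print(binarize_chroma_mode(2, True))    # '0110'
print(binarize_chroma_mode(2, False))   # '110'
```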
  • In addition, in order to reduce luma-chroma latency in dual tree, when the 64×64 luma coding tree node is partitioned with Not Split (and ISP is not used for the 64×64 CU) or QT, the chroma CUs in the 32×32/32×16 chroma coding tree node are allowed to use CCLM in the following way: if the 32×32 chroma node is not split or is partitioned with QT split, all chroma CUs in the 32×32 node can use CCLM; alternatively, if the 32×32 chroma node is partitioned with Horizontal BT, and the 32×16 child node is not split or uses Vertical BT split, all chroma CUs in the 32×16 chroma node can use CCLM. In all other luma and chroma coding tree split conditions, CCLM is not allowed for the chroma CU.
  • In VVC, the results of intra-prediction of DC, planar and several angular modes may further be modified by a position dependent prediction combination (PDPC) method. PDPC is a prediction method that invokes a combination of the boundary reference samples and HEVC style prediction with filtered boundary reference samples. PDPC can be applied to the following intra modes without signaling: planar, DC, intra angles less than or equal to horizontal, and intra angles greater than or equal to vertical and less than or equal to 80. If the current block is BDPCM mode or MRL index is larger than 0, PDPC is not applied.
  • The prediction sample pred(x′,y′) is predicted using an intra-prediction mode (DC, planar, angular) and a linear combination of reference samples according to the Equation 7 as follows:

  • pred(x′, y′) = Clip(0, (1 << BitDepth) − 1, (wL × R−1,y′ + wT × Rx′,−1 + (64 − wL − wT) × pred(x′, y′) + 32) >> 6)   Equation 7
  • In the above equation, Rx,−1 and R−1,y represent the reference samples located at the top and left boundaries of the current sample (x, y), respectively.
  • In some aspects, if PDPC is applied to the DC, planar, horizontal, and vertical intra modes, additional boundary filters may not be needed, as currently required in the case of the HEVC DC mode boundary filter or the horizontal/vertical mode edge filters. The PDPC process for DC and planar modes is identical. For angular modes, if the current angular mode is HOR_IDX or VER_IDX, the left or top reference samples are not used, respectively. The PDPC weights and scale factors depend on the prediction mode and the block size. PDPC is applied to blocks with both width and height greater than or equal to 4.
  • FIGS. 12-15 illustrate examples of reference samples 1200, 1300, 1400, 1500 (Rx,−1 and R−1,y) for PDPC applied over various prediction modes. The prediction sample pred(x′, y′) is located at (x′, y′) within the prediction block. As an example, the coordinate x of the reference sample Rx,−1 is given by x = x′ + y′ + 1, and the coordinate y of the reference sample R−1,y is similarly given by y = x′ + y′ + 1 for the diagonal modes. For the other angular modes, the reference samples Rx,−1 and R−1,y could be located at a fractional sample position. In this case, the sample value of the nearest integer sample location is used.
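  • As an illustration of the Equation 7 blend, the following hedged sketch computes one PDPC output sample; the wL/wT expressions model the usual decay of the weights with distance from the block boundary and are an assumed DC/planar-style rule, not the full mode-dependent derivation:

```python
def pdpc_sample(pred, r_left, r_top, x, y, scale, bit_depth=10):
    """Blend of Equation 7 for the sample at (x, y); r_left = R(-1, y),
    r_top = R(x, -1). The weight decay below is an assumed DC/planar-style
    rule, not the full mode-dependent derivation."""
    w_l = 32 >> ((x << 1) >> scale)
    w_t = 32 >> ((y << 1) >> scale)
    value = (w_l * r_left + w_t * r_top + (64 - w_l - w_t) * pred + 32) >> 6
    return max(0, min((1 << bit_depth) - 1, value))   # clip to sample range

print(pdpc_sample(pred=500, r_left=600, r_top=400, x=0, y=0, scale=2))  # 500
```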
  • FIG. 16 is a diagram 1600 of multiple reference line (MRL) intra-prediction used in accordance with aspects of the present disclosure. In some examples, the samples of segments A and F are not fetched from reconstructed neighboring samples but padded with the closest samples from segments B and E, respectively. HEVC intra-picture prediction uses the nearest reference line (i.e., reference line 0). In MRL, 2 additional lines (reference line 1 and reference line 3) are used.
  • In some examples of video coding, the index of the selected reference line (mrl_idx) can be signaled and used to generate the intra predictor. For a reference line index greater than 0, the most probable mode (MPM) list may only include additional reference line modes, and the MPM index can be signaled without the remaining modes. The reference line index can be signaled before the intra-prediction modes, and planar mode can be excluded from the intra-prediction modes in case a non-zero reference line index is signaled.
  • MRL can be disabled for the first line of blocks inside a CTU to prevent using extended reference samples outside the current CTU line. Also, PDPC can be disabled when an additional line is used. For MRL mode, the derivation of the DC value in DC intra-prediction mode for non-zero reference line indices can be aligned with that of reference line index 0. MRL may store 3 neighboring luma reference lines within a CTU to generate predictions. The cross-component linear model (CCLM) tool also stores 3 neighboring luma reference lines for its down-sampling filters. The definition of MRL to use the same 3 lines can thus be aligned with CCLM to reduce the storage requirements for decoders.
  • FIGS. 17 and 18 are examples of diagrams 1700 and 1800 of intra sub-partitions (ISP), which divide luma intra-predicted blocks vertically or horizontally into sub-partitions depending on the block size. For example, the minimum block size for ISP is 4×8 (or 8×4). If the block size is greater than 4×8 (or 8×4), then the corresponding block can be divided into 4 sub-partitions. It has been noted that the M×128 (with M≤64) and 128×N (with N≤64) ISP blocks could generate a potential issue with the 64×64 VDPU. For example, an M×128 CU in the single tree case has an M×128 luma TB and two corresponding M/2×64 chroma TBs.
  • If the CU uses ISP, then the luma TB can be divided into four M×32 TBs (only the horizontal split is possible), each of them smaller than a 64×64 block. However, in the current design of ISP, chroma blocks are not divided. Therefore, both chroma components would have a size greater than a 32×32 block. Analogously, a similar situation could be created with a 128×N CU using ISP. Hence, these two cases may be an issue for the 64×64 decoder pipeline. For this reason, the CU sizes that can use ISP may be restricted to a maximum of 64×64. FIGS. 17 and 18 show examples of the two possibilities. All sub-partitions fulfill the condition of having at least 16 samples.
  • In ISP, the dependence of 1×N/2×N subblock prediction on the reconstructed values of previously decoded 1×N/2×N subblocks of the coding block is not allowed, so that the minimum width of prediction for subblocks becomes four samples. For example, an 8×N (N>4) coding block that is coded using ISP with vertical split is split into two prediction regions, each of size 4×N, and four transforms of size 2×N. Also, a 4×N coding block that is coded using ISP with vertical split is predicted using the full 4×N block; four transforms, each of size 1×N, are used. Although the transform sizes of 1×N and 2×N are allowed, it is asserted that the transform of these blocks in 4×N regions can be performed in parallel. For example, when a 4×N prediction region contains four 1×N transforms, there is no transform in the horizontal direction; the transform in the vertical direction can be performed as a single 4×N transform in the vertical direction. Similarly, when a 4×N prediction region contains two 2×N transform blocks, the transform operation of the two 2×N blocks in each direction (horizontal and vertical) can be conducted in parallel. In this example, there may be no delay, or reduced delay, added in processing these smaller blocks compared to processing 4×4 regular-coded intra blocks.
  • TABLE 5
    Entropy coding coefficient group sizes

    Block size                      Coefficient group size
    1×N, N ≥ 16                     1×16
    N×1, N ≥ 16                     16×1
    2×N, N ≥ 8                      2×8
    N×2, N ≥ 8                      8×2
    All other possible M×N cases    4×4
  • For each sub-partition, reconstructed samples are obtained by adding the residual signal to the prediction signal. Here, a residual signal is generated by processes such as entropy decoding, inverse quantization and inverse transform. Therefore, the reconstructed sample values of each sub-partition can be available to generate the prediction of the next sub-partition, and each sub-partition is processed in turn. In addition, the first sub-partition to be processed is the one containing the top-left sample of the CU, continuing downwards (horizontal split) or rightwards (vertical split). As a result, reference samples used to generate the sub-partition prediction signals may only be located at the left and above sides of the lines. All sub-partitions can share the same intra mode. The following is a summary of the interaction of ISP with other coding tools.
  • In one example, regarding MRL: if a block has an MRL index other than 0, then the ISP coding mode can be inferred to be 0, and therefore ISP mode information may not be sent to the decoder. In another example, regarding the entropy coding coefficient group size: the sizes of the entropy coding subblocks have been modified so that they have 16 samples in all possible cases, as shown in Table 5. Note that the new sizes may only affect blocks produced by ISP in which one of the dimensions is less than 4 samples. In all other cases, coefficient groups keep the 4×4 dimensions.
  • Additionally or alternatively, with respect to coded block flag (CBF) coding, it is assumed that at least one of the sub-partitions has a non-zero CBF. Hence, if n is the number of sub-partitions and the first n−1 sub-partitions have produced a zero CBF, then the CBF of the n-th sub-partition can be inferred to be 1. Regarding the transform size restriction: all ISP transforms with a length larger than 16 points use the discrete cosine transform (DCT)-II. Regarding the multiple transform selection (MTS) flag: if a CU uses the ISP coding mode, the MTS CU flag may be set to 0 and may not be sent to the decoder. Therefore, the encoder may not perform rate-distortion (RD) tests for the different available transforms for each resulting sub-partition. The transform choice for the ISP mode may instead be fixed and selected according to the intra mode, the processing order and the block size utilized. Hence, no signaling may be required, in this example.
  • For example, let tH and tV be the horizontal and the vertical transforms selected, respectively, for the w×h sub-partition, where w is the width and h is the height. Then the transforms can be selected according to the following rules: if w=1 or h=1, then there is no horizontal or vertical transform, respectively; if w≥4 and w≤16, tH = discrete sine transform (DST)-VII, otherwise tH = DCT-II; if h≥4 and h≤16, tV = DST-VII, otherwise tV = DCT-II.
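  • The stated rules translate directly into the following sketch (transform names are returned as strings for illustration; None marks a dimension with no transform):

```python
def isp_transforms(w, h):
    """Return (tH, tV) for a w x h ISP sub-partition; None means the
    dimension has no transform."""
    t_h = None if w == 1 else ("DST-VII" if 4 <= w <= 16 else "DCT-II")
    t_v = None if h == 1 else ("DST-VII" if 4 <= h <= 16 else "DCT-II")
    return t_h, t_v

print(isp_transforms(1, 16))   # (None, 'DST-VII')
print(isp_transforms(4, 32))   # ('DST-VII', 'DCT-II')
```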
  • In ISP mode, all 67 intra-prediction modes are allowed. PDPC can also be applied if the corresponding width and height are at least 4 samples long. In addition, the reference sample filtering process (reference smoothing) and the condition for intra interpolation filter selection no longer apply, and the Cubic (DCT-IF) filter is applied for fractional position interpolation in ISP mode.
  • FIG. 19 is an example of a diagram 1900 of the matrix weighted intra-prediction (MIP) process for VVC. For predicting the samples of a rectangular block of width W and height H, MIP takes one line of H reconstructed neighboring boundary samples to the left of the block and one line of W reconstructed neighboring boundary samples above the block as input. If the reconstructed samples are unavailable, they can be generated as in conventional intra-prediction.
  • Among the boundary samples, four samples or eight samples can be selected by averaging, based on block size and shape. Specifically, the input boundaries bdry^top and bdry^left are reduced to smaller boundaries bdry_red^top and bdry_red^left by averaging neighboring boundary samples according to a predefined rule that depends on the block size. Then, the two reduced boundaries bdry_red^top and bdry_red^left can be concatenated to a reduced boundary vector bdry_red, which is thus of size four for blocks of shape 4×4 and of size eight for blocks of all other shapes. If mode refers to the MIP-mode, this concatenation is defined as follows:
  • bdry_red = [bdry_red^top, bdry_red^left] for W = H = 4 and mode < 18; [bdry_red^left, bdry_red^top] for W = H = 4 and mode ≥ 18; [bdry_red^top, bdry_red^left] for max(W, H) = 8 and mode < 10; [bdry_red^left, bdry_red^top] for max(W, H) = 8 and mode ≥ 10; [bdry_red^top, bdry_red^left] for max(W, H) > 8 and mode < 6; [bdry_red^left, bdry_red^top] for max(W, H) > 8 and mode ≥ 6   Equation 8
  • A matrix-vector multiplication, followed by the addition of an offset, is carried out with the averaged samples as input. The result is a reduced prediction signal on a subsampled set of samples in the original block. Out of the reduced input vector bdry_red, a reduced prediction signal pred_red, which is a signal on the down-sampled block of width W_red and height H_red, is generated. Here, W_red and H_red are defined as:
  • W_red = 4 for max(W, H) ≤ 8, W_red = min(W, 8) for max(W, H) > 8; H_red = 4 for max(W, H) ≤ 8, H_red = min(H, 8) for max(W, H) > 8   Equation 9
  • The reduced prediction signal pred_red may be computed by calculating a matrix-vector product and adding an offset:

  • pred_red = A · bdry_red + b   Equation 10
  • Here, A is a matrix that has W_red·H_red rows and 4 columns if W = H = 4, and 8 columns in all other cases. b is a vector of size W_red·H_red. The matrix A and the offset vector b are taken from one of the sets S_0, S_1, S_2. One defines an index idx = idx(W, H) as follows:
  • idx(W, H) = 0 for W = H = 4; idx(W, H) = 1 for max(W, H) = 8; idx(W, H) = 2 for max(W, H) > 8   Equation 11
  • Here, each coefficient of the matrix A is represented with 8-bit precision. The set S_0 consists of 16 matrices A_0^i, i∈{0, . . . , 15}, each of which has 16 rows and 4 columns, and 16 offset vectors b_0^i, i∈{0, . . . , 15}, each of size 16. Matrices and offset vectors of that set are used for blocks of size 4×4. The set S_1 consists of 8 matrices A_1^i, i∈{0, . . . , 7}, each of which has 16 rows and 8 columns, and 8 offset vectors b_1^i, i∈{0, . . . , 7}, each of size 16. The set S_2 consists of 6 matrices A_2^i, i∈{0, . . . , 5}, each of which has 64 rows and 8 columns, and 6 offset vectors b_2^i, i∈{0, . . . , 5}, of size 64.
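  • A hedged sketch of Equations 9 through 11 together with the Equation 10 matrix-vector product follows; the trained MIP matrices and offset vectors are stubbed with zeros here, since only the shapes and the arithmetic are being illustrated:

```python
def mip_size_idx(w, h):
    # Equation 11: select the matrix set S0, S1 or S2 from the block size.
    if w == 4 and h == 4:
        return 0
    return 1 if max(w, h) == 8 else 2

def mip_reduced_pred(a_matrix, bdry_red, b_vector):
    # Equation 10: pred_red = A . bdry_red + b.
    return [sum(row[k] * bdry_red[k] for k in range(len(bdry_red))) + off
            for row, off in zip(a_matrix, b_vector)]

# 4x4 block: set S0, a 16-row x 4-column matrix, boundary vector of size 4.
a0 = [[0] * 4 for _ in range(16)]        # zero stub for a trained matrix
b0 = [0] * 16                            # zero stub for a trained offset
print(mip_size_idx(4, 4), len(mip_reduced_pred(a0, [1, 2, 3, 4], b0)))  # 0 16
```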
  • In some examples, the prediction signal at the remaining positions may be generated from the prediction signal on the subsampled set by linear interpolation, which is a single-step linear interpolation in each direction. The interpolation can be performed first in the horizontal direction and then in the vertical direction, regardless of block shape or block size.
  • For each CU in intra mode, a flag indicating whether an MIP mode is to be applied or not is sent. If an MIP mode is to be applied, the MIP mode (predModeIntra) may be signaled. For an MIP mode, a transposed flag (isTransposed), which determines whether the mode is transposed, and a MIP mode ID (modeId), which determines which matrix is to be used for the given MIP mode, can be derived as follows:

  • isTransposed = predModeIntra & 1; modeId = predModeIntra >> 1   Equation 12
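  • The Equation 12 mapping is a pair of bit operations, as this small sketch shows:

```python
def mip_mode_split(pred_mode_intra):
    # Equation 12: LSB selects transposition, remaining bits select the matrix.
    return pred_mode_intra & 1, pred_mode_intra >> 1

print(mip_mode_split(5))   # (1, 2): transposed, matrix id 2
```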
  • The MIP coding mode may be harmonized with other coding tools by considering the following aspects: (1) low-frequency non-separable transform (LFNST) is enabled for MIP on large blocks, where the LFNST transforms of planar mode are used; (2) the reference sample derivation for MIP is performed exactly as, or at least similarly to, that of the conventional intra-prediction modes; (3) for the up-sampling step used in the MIP-prediction, the original reference samples are used instead of the down-sampled ones; (4) clipping is performed before up-sampling and not after up-sampling; (5) MIP may be allowed up to 64×64 regardless of the maximum transform size. In some aspects, the number of MIP modes may be 32 for sizeId=0, 16 for sizeId=1 and 12 for sizeId=2.
  • In joint exploration model (JEM)-2.0, intra modes are extended to 67 from the 35 modes in HEVC, and they are derived at the encoder and explicitly signaled to the decoder. A significant amount of overhead is spent on intra mode coding in JEM-2.0; for example, the intra mode signaling overhead may be up to 5-10% of the overall bitrate in the all-intra coding configuration. To reduce the intra mode coding overhead while keeping prediction accuracy, a decoder-side intra mode derivation (DIMD) approach may be used, for example by video decoders 124, 300, 400 in decoding video. In accordance with aspects of the present disclosure, instead of signaling the intra mode explicitly, the information can be derived at both encoder and decoder from the neighboring reconstructed samples of the current block. The intra mode derived by DIMD may be used in two ways, for example: 1) for 2N×2N CUs, the DIMD mode is used as the intra mode for intra-prediction when the corresponding CU-level DIMD flag is turned on; 2) for N×N CUs, the DIMD mode is used to replace one candidate of the existing MPM list to improve the efficiency of intra mode coding.
  • FIG. 20 is an example of a diagram 2000 of template based intra mode derivation, where the target denotes the current block (of block size N) for which the intra-prediction mode is to be estimated. The template (indicated by the patterned region in FIG. 20) specifies a set of already reconstructed samples used to derive the intra mode. The template size is denoted by L, the number of samples by which the template extends above and to the left of the target block. In some implementations, a template size of 2 (i.e., L=2) can be used for 4×4 and 8×8 blocks and a template size of 4 (i.e., L=4) can be used for 16×16 and larger blocks. The reference of the template (indicated by the dotted region in FIG. 20) refers to a set of neighboring samples above and to the left of the template, as defined by JEM-2.0. Unlike the template samples, which are always from the reconstructed region, the reference samples of the template may not be reconstructed yet when encoding/decoding the target block. In this case, the existing reference sample substitution algorithm of JEM-2.0 is utilized to substitute the unavailable reference samples with available reference samples.
  • For each intra-prediction mode, the DIMD calculates the sum of absolute differences (SAD) between the reconstructed template samples and the prediction samples obtained from the reference samples of the template. The intra-prediction mode that yields the minimum SAD may be selected as the final intra-prediction mode of the target block, as in the sketch below.
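  • The following is a conceptual sketch of that selection loop; predict_fn stands in for an intra predictor applied to the template reference samples and is an assumed callable, not a defined API:

```python
def derive_intra_mode(template, template_refs, candidate_modes, predict_fn):
    """predict_fn(mode, template_refs) -> predicted template samples."""
    best_mode, best_sad = None, float("inf")
    for mode in candidate_modes:
        prediction = predict_fn(mode, template_refs)
        sad = sum(abs(t - p) for t, p in zip(template, prediction))
        if sad < best_sad:
            best_mode, best_sad = mode, sad
    return best_mode

# Dummy flat predictor: every mode predicts the first reference sample.
predict = lambda mode, refs: [refs[0]] * 4
print(derive_intra_mode([10, 10, 10, 10], [10], [0, 1, 18, 50], predict))  # 0
```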
  • For intra 2N×2N CUs, the DIMD can be used as one additional intra mode, which can be adaptively selected by comparing the DIMD intra mode with the optimal normal intra mode (i.e., the one that would be explicitly signaled). One flag is signaled for each intra 2N×2N CU to indicate the usage of the DIMD. If the flag is one, then the CU can be predicted using the intra mode derived by DIMD; otherwise, the DIMD is not applied and the CU is predicted using the intra mode explicitly signaled in the bit-stream. When the DIMD is enabled, chroma components can reuse the same intra mode as that derived for the luma component, i.e., DM mode.
  • Additionally, for each DIMD-coded CU, the blocks in the CU can adaptively select to derive their intra modes at either PU-level or TU-level. Specifically, when the DIMD flag is one, another CU-level DIMD control flag can be signaled to indicate the level at which the DIMD is performed. If this flag is zero, this can indicate that the DIMD is performed at the PU level and all the TUs in the PU use the same derived intra mode for their intra-prediction; otherwise if the DIMD control flag is one, this can indicate that the DIMD is performed at the TU level and each TU in the PU derives its own intra mode.
  • Further, when the DIMD is enabled, the number of angular directions increases to 129, and the DC and planar modes still remain the same. To accommodate the increased granularity of angular intra modes, the precision of intra interpolation filtering for DIMD-coded CUs increases from 1/32-pel to 1/64-pel. Additionally, in order to use the derived intra mode of a DIMD coded CU as MPM candidate for neighboring intra blocks, those 129 directions of the DIMD-coded CUs can be converted to “normal” intra modes (i.e., 65 angular intra directions) before they are used as MPM.
  • In some aspects, intra modes of intra N×N CUs are signaled. However, to improve the efficiency of intra mode coding, the intra modes derived from DIMD are used as MPM candidates for predicting the intra modes of four PUs in the CU. In order to not increase the overhead of MPM index signaling, the DIMD candidate can be placed at the first place in the MPM list and the last existing MPM candidate can be removed. Also, a pruning operation can be performed such that the DIMD candidate may not be added to the MPM list if it is redundant.
  • In order to reduce encoding/decoding complexity, one straightforward fast intra mode search algorithm is used for DIMD. Firstly, one initial estimation process can be performed to provide a good starting point for intra mode search. Specifically, an initial candidate list can be created by selecting N fixed modes from the allowed intra modes. Then, the SAD can be calculated for all the candidate intra modes and the one that minimizes the SAD can be selected as the starting intra mode. To achieve a good complexity/performance trade-off, the initial candidate list can include 11 intra modes, including DC, planar and every 4-th mode of the 33 angular intra directions as defined in HEVC, i.e., intra modes 0, 1, 2, 6, 10 . . . 30, 34.
  • If the starting intra mode is either DC or planar, it can be used as the DIMD mode. Otherwise, based on the starting intra mode, one refinement process can then be applied where the optimal intra mode is identified through one iterative search. In the iterative search, at each iteration, the SAD values for three intra modes separated by a given search interval can be compared and the intra mode that minimizes the SAD can be maintained. The search interval can then be reduced to half, and the selected intra mode from the last iteration can serve as the center intra mode for the current iteration. For the current DIMD implementation with 129 angular intra directions, up to 4 iterations can be used in the refinement process to find the optimal DIMD intra mode.
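  • A sketch of that coarse-to-fine refinement follows, assuming an initial search interval of 8 modes (so that four halvings cover the 129-direction space) and an assumed callable sad_fn that returns the template SAD of a mode:

```python
def dimd_refine(start_mode, sad_fn, interval=8, iterations=4):
    """Keep the SAD-minimising mode of three candidates spaced by the
    current interval, then halve the interval, for up to four iterations."""
    best = start_mode
    for _ in range(iterations):
        best = min((best - interval, best, best + interval), key=sad_fn)
        interval >>= 1
        if interval == 0:
            break
    return best

# Toy SAD surface whose minimum sits at mode 21.
print(dimd_refine(18, lambda m: abs(m - 21)))   # -> 21
```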
  • In some examples, transmitting the luma intra-prediction mode in the bitstream can be avoided. This is done by deriving the luma intra mode using previously encoded/decoded pixels, in an identical fashion at the encoder and at the decoder. This process defines a new coding mode called DIMD, whose selection is signaled in the bitstream for intra coded blocks using a flag. DIMD can compete with other coding modes at the encoder, including the classic intra coding mode (where the intra-prediction mode is coded). Note that in one example, DIMD may only apply to luma. For chroma, the classical intra coding mode may apply. As done for other coding modes (classical intra, inter, merge, etc.), a rate-distortion cost can be computed for the DIMD mode, and can then be compared to the coding costs of the other modes to decide whether to select it as the final coding mode for a current block.
  • At the decoder side, the DIMD flag can be first parsed, if present. If the DIMD flag is true, the intra-prediction mode can be derived in the reconstruction process using the same previously encoded neighboring pixels. If not, the intra-prediction mode can be parsed from the bitstream as in classical intra coding mode.
  • To derive the intra-prediction mode for a block, a set of neighboring pixels may first be selected, on which a gradient analysis is performed. For normativity purposes, these pixels can be in the decoded/reconstructed pool of pixels. FIG. 21 is an example of a diagram 2100 of a template of chosen pixels on which a gradient analysis may be performed as part of intra-prediction mode derivation. As shown in FIG. 21, a template surrounding the current block is chosen, consisting of T pixels to the left and T pixels above. For example, T may have a value of 2.
  • Next, a gradient analysis is performed on the pixels of the template. This can facilitate determining a main angular direction for the template, which can be assumed to have a high chance of being identical to that of the current block. Thus, a simple 3×3 Sobel gradient filter can be used, defined by the following matrices that may be convolved with the template:
  • Mx = [−1 0 1; −2 0 2; −1 0 1] and My = [−1 −2 −1; 0 0 0; 1 2 1], where semicolons separate matrix rows.
  • For each pixel of the template, each of these two matrices can be point-by-point multiplied with the 3×3 window centered on the current pixel and composed of its 8 direct neighbors, and the results summed. Thus, two values Gx (from the multiplication with Mx) and Gy (from the multiplication with My), corresponding to the gradient at the current pixel in the horizontal and vertical direction respectively, can be obtained.
  • FIG. 22 is an example of a diagram 2200 of a convolution of a 3×3 Sobel gradient filter with the template in accordance with aspects of the present disclosure. In some examples, the pixel 2210 is the current pixel. Template pixels 2220 (including the current pixel 2210) are pixels on which the gradient analysis is possible. Unavailable pixels 2230 are pixels on which the gradient analysis is not possible due to lack of some neighbors. Reconstructed pixels 2240 are available pixels outside of the considered template, used in the gradient analysis of the template pixels 2220. In case a reconstructed pixel 2240 is not available (due to blocks being too close to the border of the picture for instance), the gradient analysis of all template pixels 2220 that use the unavailable reconstructed pixel 2240 is not performed.
  • For each template pixel 2220, the intensity (G) and the orientation (O) of the gradient are calculated from Gx and Gy as follows:
  • G = |Gx| + |Gy| and O = atan(Gy/Gx)
  • Note that a fast implementation of the atan function may be used. The orientation of the gradient can then be converted into an intra angular prediction mode, used to index a histogram (first initialized to zero). The histogram value at that intra angular mode is increased by G. Once all the template pixels 2220 in the template have been processed, the histogram contains cumulative values of gradient intensities for each intra angular mode. The mode that shows the highest peak in the histogram can be selected as the intra-prediction mode for the current block. If the maximum value in the histogram is 0 (meaning that no gradient analysis could be made, or that the area composing the template is flat), then the DC mode can be selected as the intra-prediction mode for the current block.
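  • A simplified sketch of this histogram accumulation follows; the orientation-to-mode quantization used here is an illustrative stand-in for the normative conversion:

```python
import math

def gradient_histogram(gradients, num_modes=65):
    """gradients: iterable of (Gx, Gy) pairs, one per analysable template
    pixel. Accumulates |Gx| + |Gy| per angular mode (indices 2..66)."""
    hist = [0] * (num_modes + 2)
    for gx, gy in gradients:
        g = abs(gx) + abs(gy)
        if g == 0:
            continue
        angle = math.atan2(gy, gx) % math.pi      # edge orientation in [0, pi)
        mode = 2 + int(angle / math.pi * (num_modes - 1))  # assumed quantiser
        hist[mode] += g
    return hist

hist = gradient_histogram([(3, 1), (-1, 2), (0, 0)])
best = max(range(len(hist)), key=lambda m: hist[m])
print(best if hist[best] > 0 else "DC")   # histogram peak, else DC
```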
  • For blocks that are located at the top of CTUs, the gradient analysis of the pixels located in the top part of the template is not performed. The DIMD flag is coded using three possible contexts, depending on the left and above neighboring blocks, similarly to the Skip flag coding. Context 0 corresponds to the case where none of the left and above neighboring blocks are coded with DIMD mode, context 1 corresponds to the case where only one neighboring block is coded with DIMD, and context 2 corresponds to the case where both neighbors are DIMD-coded. Initial symbol probabilities for each context are set to 0.5.
  • One advantage that DIMD offers over classical intra mode coding is that the derived intra mode can have a higher precision, allowing more precise predictions at no additional cost as it is not transmitted in the bitstream. The derived intra mode spans 129 angular modes, hence a total of 130 modes including DC (e.g., the derived intra mode may not be planar in aspects described herein). The classical intra coding mode is unchanged, i.e., the prediction and mode coding still use 67 modes.
  • The required changes to wide-angle intra-prediction and simplified PDPC were performed to accommodate prediction using 129 modes. Note that only the prediction process uses the extended intra modes, meaning that for any other purpose (deciding whether to filter the reference samples, for instance), the mode can be converted back to 67-mode precision.
  • In the DIMD mode, the luma intra mode is derived during the reconstruction process, just prior to the block reconstruction. This is done to avoid a dependency on reconstructed pixels during parsing. However, by doing so, the luma intra mode of the block may be undefined for the chroma component of the block, and for the luma component of neighboring blocks. This can cause an issue, because for chroma a fixed mode candidate list is defined. Usually, if the luma mode equals one of the chroma candidates, that candidate is replaced with the vertical diagonal (VDIA_IDX) intra mode. Since in DIMD the luma mode is unavailable, the initial chroma mode candidate list is not modified.
  • In classical intra mode, where the luma intra-prediction mode is to be parsed from the bitstream, an MPM list is constructed using the luma intra modes of neighboring blocks, which can be unavailable if those blocks were coded using DIMD. In this case, for example, DIMD-coded blocks can be treated as inter blocks during MPM list construction, meaning they are effectively considered unavailable.
  • Entropy coding may be a form of lossless compression used at the last stage of video encoding (and the first stage of video decoding), after the video has been reduced to a series of syntax elements. Syntax elements describe how the video sequence can be reconstructed at the decoder. This includes the method of prediction (e.g., spatial or temporal prediction, intra-prediction mode, and motion vectors) and prediction error, also referred to as residual. Arithmetic coding is a type of entropy coding that can achieve compression close to the entropy of a sequence by effectively mapping the symbols (i.e., syntax elements) to codewords with a non-integer number of bits. Context-adaptive binary arithmetic coding (CABAC) involves three main functions: binarization, context modeling, and arithmetic coding. Binarization maps the syntax elements to binary symbols (bins). Context modeling estimates the probability of the bins. Finally, arithmetic coding compresses the bins to bits based on the estimated probability.
  • Several different binarization processes are used in VVC, such as the truncated Rice (TR) binarization process, the truncated binary binarization process, the k-th order Exp-Golomb (EGk) binarization process and the fixed-length (FL) binarization process.
  • Context modeling provides an accurate probability estimate required to achieve high coding efficiency. Accordingly, it is highly adaptive and different context models can be used for different bins and the probability of that context model is updated based on the values of the previously coded bins. Bins with similar distributions often share the same context model. The context model for each bin can be selected based on the type of syntax element, bin position in syntax element (binIdx), luma/chroma, neighboring information, etc. A context switch can occur after each bin.
  • Arithmetic coding may be based on recursive interval division. A range, with an initial value of 0 to 1, is divided into two subintervals based on the probability of the bin. The encoded bits provide an offset that, when converted to a binary fraction, selects one of the two subintervals, which indicates the value of the decoded bin. After every decoded bin, the range is updated to equal the selected subinterval, and the interval division process repeats itself. The range and offset have limited bit precision, so renormalization may be used whenever the range falls below a certain value to prevent underflow.
  • Renormalization can occur after each bin is decoded. Arithmetic coding can be done using an estimated probability (context coded), or assuming equal probability of 0.5 (bypass coded). For bypass coded bins, the division of the range into subintervals can be done by a shift, whereas a look up table may be used for the context coded bins.
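  • The following is a toy sketch of the recursive interval division for one context-coded bin; the 9-bit range, the probability handling, and the renormalization threshold are simplified assumptions that only mirror the CABAC structure, not its table-driven implementation:

```python
def decode_bin(state, p_lps):
    """state: {'range': int, 'offset': int}; p_lps: probability of the
    least probable symbol, in (0, 0.5]. Returns the decoded bin."""
    r_lps = max(1, int(state["range"] * p_lps))
    r_mps = state["range"] - r_lps
    if state["offset"] < r_mps:          # offset falls in the MPS subinterval
        bin_val, state["range"] = 0, r_mps
    else:                                # offset falls in the LPS subinterval
        bin_val = 1
        state["offset"] -= r_mps
        state["range"] = r_lps
    while state["range"] < 256:          # renormalise to avoid underflow
        state["range"] <<= 1
        state["offset"] <<= 1            # next bitstream bit would be ORed in
    return bin_val

s = {"range": 510, "offset": 123}
print(decode_bin(s, 0.3), s["range"])    # 0 357
```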
  • FIG. 23 is a schematic diagram 2300 of intra mode coding with greater than 67 intra-prediction modes to capture the arbitrary edge directions presented in natural video. In some examples, the number of directional intra modes may be extended from 67, as used in VVC, to 129 while the planar and the DC modes remain the same.
  • In one example, the pre-defined IPMs may be IPMs that have denser directions than the conventional IPMs (e.g., the IPMs denoted by the dashed lines in FIG. 23). In one example, the N1 IPMs may be some or all of the MPMs for the current block. In one example, some pre-defined intra-prediction modes which are not in the MPMs may also be contained in the given IPM candidate set.
  • In one example, one or more IPMs from DC/Planar/horizontal/vertical/diagonal top-right/diagonal bottom-left/diagonal top-left modes may be contained in the given IPM set.
  • In one example, one or more IPMs denoted by the dashed lines in FIG. 23 may be contained in the given IPM set.
  • In one example, N1 may be equal to or larger than N2 when one or more IPMs denoted by the dashed lines in FIG. 23 are contained in the given IPM set.
  • In one example, N1 may be equal to or larger than N2.
  • FIGS. 24-31 illustrate examples of templates that may be formed from one or more sub-templates. As discussed in further detail below, the template for a block may be selected for the specific block. For instance, the template may be selected based on decoded information about the specific block or based on availability of the sub-templates for the specific block. Although several examples are illustrated, other templates may be selected based on different combinations of the sub-templates.
  • FIG. 24 is a diagram of an example of a template 2400 including a left-above sub-template 2420 (Template-LA). The template 2400 may be selected for a block 2410, which may have dimensions of M samples horizontally and N samples vertically. The left-above sub-template 2420 may include left-above neighboring samples that are located both to the left of the block 2410 and above the block 2410. The left-above sub-template 2420 may have dimensions of L1 samples horizontally and L2 samples vertically. L1 and L2 may be defined for the block 2410, a slice including the block 2410, or a picture including the block 2410.
  • FIG. 25 is a diagram of an example of a template 2500 including a left sub-template 2440 (Template-L) and an above sub-template 2430 (Template-A). The template 2500 may be selected for a block 2410, which may have dimensions of M samples horizontally and N samples vertically. The left sub-template 2440 may include samples located to the left of the block 2410. The left sub-template 2440 may be adjacent to the left edge of the block 2410. The left sub-template 2440 may have dimensions of L1 samples horizontally and N samples vertically. The above sub-template 2430 may include samples located above the block 2410. The above sub-template 2430 may be adjacent to the top edge of the block 2410. The above sub-template 2430 may have dimensions of M samples horizontally and L2 samples vertically.
  • FIG. 26 is a diagram of an example of a template 2600 including the above sub-template 2430 (Template-A). The template 2600 may be selected for a block 2410, which may have dimensions of M samples horizontally and N samples vertically. The above sub-template 2430 may include samples located above the block 2410. The above sub-template 2430 may have dimensions of M samples horizontally and L2 samples vertically.
  • FIG. 27 is a diagram of an example of a template 2700 including the left sub-template 2440 (Template-L). The template 2700 may be selected for a block 2410, which may have dimensions of M samples horizontally and N samples vertically. The left sub-template 2440 may include samples located to the left of the block 2410. The left sub-template 2440 may have dimensions of L1 samples horizontally and N samples vertically.
  • FIG. 28 is a diagram of an example of a template 2800 including the left sub-template 2440 (Template-L) and a left-below sub-template 2450 (Template-LB). The template 2800 may be selected for a block 2410, which may have dimensions of M samples horizontally and N samples vertically. The left sub-template 2440 may include samples located to the left of the block 2410. The left sub-template 2440 may have dimensions of L1 samples horizontally and N samples vertically. The left-below sub-template 2450 may include samples that are located both to the left of the block 2410 and below the block 2410. The left-below sub-template 2450 may have dimensions of L1 samples horizontally and N samples vertically.
  • FIG. 29 is a diagram of an example of a template 2900 including the above sub-template 2430 (Template-A) and a right-above sub-template 2460 (Template-RA). The template 2900 may be selected for a block 2410, which may have dimensions of M samples horizontally and N samples vertically. The above sub-template 2430 may include samples located above the block 2410. The above sub-template 2430 may have dimensions of M samples horizontally and L2 samples vertically. The right-above sub-template 2460 may include samples located both above the block 2410 and to the right of the block 2410. The right-above sub-template 2460 may have dimensions of M samples horizontally and L2 samples vertically.
  • FIG. 30 is a diagram of an example of a template 3000 including the left sub-template 2440, the left-below sub-template 2450, the above sub-template 2430, and the right-above sub-template 2460. The template 3000 may be selected for a block 2410, which may have dimensions of M samples horizontally and N samples vertically. The above sub-template 2430 may include samples located above the block 2410. The above sub-template 2430 may have dimensions of M samples horizontally and L2 samples vertically. The right-above sub-template 2460 may include samples located above and to the right of the block 2410. The right-above sub-template 2460 may have dimensions of M samples horizontally and L2 samples vertically. The left sub-template 2440 may include samples located to the left of the block 2410. The left sub-template 2440 may have dimensions of L1 samples horizontally and N samples vertically. The left-below sub-template 2450 may include samples located to the left of the block 2410 and below the block 2410. The left-below sub-template 2450 may have dimensions of L1 samples horizontally and N samples vertically.
  • FIG. 31 is a diagram of an example of a template 3100 including the left-above sub-template 2420, the left sub-template 2440, the left-below sub-template 2450, the above sub-template 2430, and the right-above sub-template 2460. The template 3100 may be selected for a block 2410, which may have dimensions of M samples horizontally and N samples vertically. The left-above sub-template 2420 may include samples located to the left and above the block 2410. The left-above sub-template 2420 may have dimensions of L1 samples horizontally and L2 samples vertically. The above sub-template 2430 may include samples located above the block 2410. The above sub-template 2430 may have dimensions of M samples horizontally and L2 samples vertically. The right-above sub-template 2460 may include samples located above and to the right of the block 2410. The right-above sub-template 2460 may have dimensions of M samples horizontally and L2 samples vertically. The left sub-template 2440 may include samples located to the left of the block 2410. The left sub-template 2440 may have dimensions of L1 samples horizontally and N samples vertically. The left-below sub-template 2450 may include samples located to the left of and below the block 2410. The left-below sub-template 2450 may have dimensions of L1 samples horizontally and N samples vertically.
  • FIG. 32 is a diagram of an example of a template 3200 including a left-above sub-template 3220, a left sub-template 3240, a left-below sub-template 3250, an above sub-template 3230, and a right-above sub-template 3260 that are spaced apart from a block. The example template 3200 may be selected for a block 2410, which may have dimensions of M samples horizontally and N samples vertically. In contrast to the sub-templates in FIGS. 24-31, the sub-templates in FIG. 32 may be spaced apart from the block 2410. For example, the left-above sub-template 3220, the left sub-template 3240, and the left-below sub-template 3250 may be spaced horizontally apart from the block 2410 by a gap 3280. The gap 3280 may have a horizontal dimension of L3 samples. The left-above sub-template 3220, the above sub-template 3230, and the right-above sub-template 3260 may be spaced vertically apart from the block 2410 by a gap 3270. The gap 3270 may have a vertical dimension of L4 samples. In an aspect, each of the sub-templates 3220, 3230, 3240, 3250, 3260 may have dimensions that are the same as the corresponding sub-template 2420, 2430, 2440, 2450, 2460 in FIGS. 24-31. Accordingly, in FIG. 32, the locations of the sub-templates 3220, 3230, 3240, 3250, 3260 are different, but their sizes may be the same as in FIGS. 24-31.
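  • To make the sub-template geometry concrete, the following illustrative helper enumerates sample coordinates for a template assembled from the sub-templates above, relative to a block whose top-left corner is at (0, 0); the region names and gap parameters are assumptions for illustration only:

```python
def template_samples(m, n, l1, l2, parts, l3=0, l4=0):
    """Coordinates of template samples for an m x n block at (0, 0);
    l3/l4 are the optional horizontal/vertical gaps of FIG. 32."""
    regions = {
        "LA": [(-l1 - l3 + i, -l2 - l4 + j) for j in range(l2) for i in range(l1)],
        "A":  [(i, -l2 - l4 + j) for j in range(l2) for i in range(m)],
        "RA": [(m + i, -l2 - l4 + j) for j in range(l2) for i in range(m)],
        "L":  [(-l1 - l3 + i, j) for j in range(n) for i in range(l1)],
        "LB": [(-l1 - l3 + i, n + j) for j in range(n) for i in range(l1)],
    }
    return [pos for part in parts for pos in regions[part]]

# Template of FIG. 25 (Template-L + Template-A) for an 8x4 block, L1 = L2 = 2:
print(len(template_samples(8, 4, 2, 2, ["L", "A"])))   # 8*2 + 2*4 = 24
```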
  • FIG. 33 is a diagram of examples of template-reference samples 3310 for a template 3300 including a left-above sub-template 2420, a left sub-template 2440, and an above sub-template 2430. The example template 3300 may be selected for a block 2410, which may have dimensions of M samples horizontally and N samples vertically. The left-above sub-template 2420 may include samples located to the left and above the block 2410. The left-above sub-template 2420 may have dimensions of L1 samples horizontally and L2 samples vertically. The above sub-template 2430 may include samples located above the block 2410. The above sub-template 2430 may have dimensions of M samples horizontally and L2 samples vertically. The left sub-template 2440 may include samples located to the left of the block 2410. The left sub-template 2440 may have dimensions of L1 samples horizontally and N samples vertically. The template-reference samples 3310 may be a single row of samples located above the template 3300 and a single column of samples located to the left of the template 3300. The row of samples may have a length of 2(L1+M)+1. The column of samples may have a height of 2(L2+N)+1.
  • FIG. 34 is a diagram 3400 of example template-reference samples 3410 for the template 2500 including the left sub-template 2440 and the above sub-template 2430. The example template 2500 may be selected for a block 2410, which may have dimensions of M samples horizontally and N samples vertically. The above sub-template 2430 may include samples located above the block 2410. The above sub-template 2430 may have dimensions of M samples horizontally and L2 samples vertically. The left sub-template 2440 may include samples located to the left of the block 2410. The left sub-template 2440 may have dimensions of L1 samples horizontally and N samples vertically. The template-reference samples may include one or more lines (e.g., rows or columns) of samples. For example, the template-reference samples 3410 may include a single row of samples located above the template 2500 and a single column of samples located to the left of the template 2500. The row of samples may have a length of 2(L1+M)+1. The column of samples may have a height of 2(L2+N)+1.
  • FIG. 35 is a diagram 3500 of example template-reference samples 3510 for the template 2600 including the above sub-template 2430. The template-reference samples 3510 may be a single row of samples located above the template 2600 and a single column of samples located to the left of the template 2600. The row of samples may have a length of 2M+1. The column of samples may have a height of 2(L2+N)+1.
  • FIG. 36 is a diagram 3600 of example template-reference samples 3610 for the template 2700 including the left sub-template 2440. The template-reference samples 3610 may be a single row of samples located above the template 2700 and a single column of samples located to the left of the template 2700. The row of samples may have a length of 2(L1+M)+1. The column of samples may have a height of 2N+1.
  • FIG. 37 is a diagram 3700 of example template-reference samples 3710 for the template 2600 including the above sub-template 2430. The template-reference samples 3710 may be a single row of samples located above the template 2600 and a single column of samples located to the left of the template 2600. The row of samples may have a length of 2(L1+M)+1. The column of samples may have a height of 2(L2+N)+1. Because the template 2600 does not include the left-above sub-template 2420 or the left sub-template 2440, the column of samples may be spaced from the template 2600 by a horizontal gap 3720 with a width of L1.
  • FIG. 38 is a diagram 3800 of example template-reference samples 3810 for the template 2700 including the left sub-template 2440. The template-reference samples 3810 may be a single row of samples located above the template 2700 and a single column of samples located to the left of the template 2700. The row of samples may have a length of 2(L1+M)+1. The column of samples may have a height of 2(L2+N)+1. Because the template 2700 does not include the left-above sub-template 2420 or the above sub-template 2430, the row of samples may be spaced from the template 2700 by a vertical gap 3820 with a height of L2.
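  • Taken together, FIGS. 33-38 follow one pattern: the reference row spans 2(L1+M)+1 samples when the left side of the template is covered or the row keeps its full extent, and 2M+1 otherwise; the reference column mirrors this vertically; and when a side of the template is absent but the line keeps its full extent, the line is offset by a gap of L1 or L2 (FIGS. 37-38). A minimal sketch, assuming illustrative names and a boolean extend flag that distinguishes the FIGS. 35-36 variants from the FIGS. 37-38 variants:

        # Non-normative reconstruction of the reference-line sizes in
        # FIGS. 33-38. has_left/has_above indicate which sub-templates the
        # selected template contains; extend=True keeps full-length lines
        # and offsets them from the template by a gap instead (FIGS. 37-38).
        def template_reference_dims(M, N, L1, L2, has_left, has_above,
                                    extend=False):
            row_len = 2 * (L1 + M) + 1 if (has_left or extend) else 2 * M + 1
            col_height = 2 * (L2 + N) + 1 if (has_above or extend) else 2 * N + 1
            h_gap = L1 if (extend and not has_left) else 0   # FIG. 37
            v_gap = L2 if (extend and not has_above) else 0  # FIG. 38
            return row_len, col_height, h_gap, v_gap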
  • FIG. 39 is a diagram 3900 of example template-reference samples 3910 for the template 2500 including the above sub-template 2430 and the left sub-template 2440. The template-reference samples 3910 may include a single column of samples located to the left of the template 2500. The column of samples may have a height of 2(L2+N)+1. Instead of a single row of template-reference samples, a portion 3920 of the row may be moved to a location 3930 in a second row that is adjacent the left sub-template 2440. The portion 3920 may include L1 samples. The remaining portion in the first row may have a length of 2M+L1+1. In an aspect, selecting template-reference samples that are adjacent a sub-template included within the template may improve the prediction of the template.
  • FIG. 40 is a diagram 4000 of example template-reference samples 4010 for the template 2500 including the above sub-template 2430 and the left sub-template 2440. The template-reference samples 4010 may include a single row of samples located above the template 2500. The row of samples may have a length of 2(L1+M)+1. Instead of a single column of template-reference samples, a portion 4020 of the column may be moved to a location 4030 in a second column that is adjacent the above sub-template 2430. The portion 4020 may include L2 samples. The remaining portion in the first column may have a height of 2N+L2+1. In an aspect, selecting template-reference samples that are adjacent a sub-template included within the template may improve the prediction of the template. In another aspect, both the portion 3920 and the portion 4020 may be moved to the location 3930 and the location 4030, respectively.
  • FIG. 41 illustrates an example of a system 4100 for performing video decoding. The system 4100 may include a computing device 4102 that performs video decoding via execution of a decoding component 4108 by a processor 4104 and/or a memory 4106. The decoding component 4108 may correspond to, for example, the video decoder 124, the video decoder 300, or the HEVC video encoder and decoder 400. The decoding component 4108 may receive an encoded bitstream 4146. The decoding component 4108 may perform a decoding operation (e.g., entropy decoding) to determine block information that may be provided to the variable template intra prediction unit 4110, which may include one or more of a template selection unit 4120, an IPM deriving unit 4130, a final predictor unit 4140, and a conversion unit 4150.
  • The template selection unit 4120 may receive block information for a current block. Block information for previously reconstructed blocks, including reconstructed samples thereof, may be stored in the buffer 4160. The template selection unit 4120 may select, for a current block, one or more sub-templates to form a selected template for DIMD. The one or more sub-templates may be selected from a plurality of sub-templates including any of the example sub-templates described herein. For example, in an implementation, the one or more sub-templates may be selected from the left-above sub-template 2420, the above sub-template 2430, the left sub-template 2440, the left-below sub-template 2450, and the right-above sub-template 2460. The selected template may be a combination of the selected sub-templates. The template selection unit 4120 may select the one or more sub-templates based on decoded information for the current block. The template selection unit 4120 may determine which of the plurality of sub-templates for the current block have been reconstructed. The template selection unit 4120 may determine a dimension of the selected template based on the decoded information for the current block. The template selection unit 4120 may provide the selected template to the IPM deriving unit 4130.
  • The IPM deriving unit 4130 may determine a cost of using each of a plurality of candidate IPMs to predict samples in a template region based on the selected template and the current block. For example, the IPM deriving unit 4130 may determine one or more of: a sum of the absolute transformed difference (SATD), a sum of the squared errors (SSE), a subjective quality metric, or a structural similarity index measure (SSIM) between reconstructed samples of the template and predicted samples of the template predicted by the candidate IPM. In some examples, the IPM deriving unit 4130 may determine the predicted samples of each sub-template separately. In some examples, the IPM deriving unit 4130 may determine the cost of each sub-template separately and sum the costs to determine a final cost for the selected template. In some examples, the IPM deriving unit 4130 may down-sample samples in the selected template and calculate the cost based on the down-sampled selected template. In some examples, the IPM deriving unit 4130 may determine one or more lines of template-reference samples neighboring the selected template based on the selected template. In some examples, the IPM deriving unit 4130 may substitute an unavailable template-reference sample with one of: a nearest available template-reference sample, a value based on a defined formula, or a generated value.
  • The IPM deriving unit 4130 may select a derived IPM from the plurality of candidate IPMs based on the cost. For example, the IPM deriving unit 4130 may select the candidate IPM having the best cost, which may be the lowest cost or the highest cost depending on the cost metric. The IPM deriving unit 4130 may provide the derived IPM to the final predictor unit 4140.
  • The final predictor unit 4140 may determine, based on the at least one IPM, a final predictor of the current video block. For example, the final predictor unit 4140 may predict the samples in the current block with intra-prediction using the derived IPM, for instance, using any of the techniques described above. The final predictor unit 4140 may provide the predicted samples as a predicted block to the conversion unit 4150, which may combine (e.g., sum) the predicted block with a residual block determined by the residual decoding unit 4170. A reconstructed block may be stored in the buffer 4160 to be used as reference samples for predicting other blocks. When all blocks are reconstructed, the frame or picture may be read out from the buffer 4160 as one of the plurality of video frames 4142.
  • FIG. 42 illustrates an example of a system 4200 for performing video encoding. The system 4200 may include a computing device 4202 that performs video encoding via execution of an encoding component 4210 by a processor 4204 and/or a memory 4206. The encoding component 4210 may correspond to, for example, the video encoder 114, the video encoder 200, or the HEVC video encoder and decoder 400. The system 4200 may receive a plurality of video frames 4242 as input and output an encoded bitstream 4246. In an aspect, the encoding component 4210 can include at least one of: a template selection unit 4220 for constructing, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates; an IPM deriving unit 4230 for deriving at least one intra-prediction mode (IPM) based on cost calculations; or a final predictor unit 4240 for determining, based on the at least one IPM, a final predictor of the current video block. In an example, the encoding component 4210 may also include a conversion unit 4250 for performing the conversion based on the final predictor.
  • FIG. 43 is a flowchart of an example method 4300 of processing video data. For example, the method 4300 may be used for encoding a video into a bitstream or decoding a bitstream into a video. A video may include a plurality of frames, each frame including blocks of samples. The method 4300 may be performed for at least a current video block. The method 4300 may be performed by a video decoder such as the system 4100 including the variable template intra prediction unit 4110 or a video encoder such as the system 4200 including the encoding component 4210.
  • At block 4310, the method 4300 may include constructing, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates. The one or more sub-templates may be selected from a plurality of sub-templates including: a left sub-template, an above sub-template, a right-above sub-template, a left-below sub-template, and a left-above sub-template. The left sub-template 2440, 3240 (e.g., Template-L) includes left neighboring samples, the above sub-template 2430, 3230 (e.g., Template-A) includes above neighboring samples, the right-above sub-template 2460, 3260 (e.g., Template-RA) includes right-above neighboring samples, the left-below sub-template 2450, 3250 (e.g., Template-LB) includes left-bottom neighboring samples, and the left-above sub-template 2420, 3220 (e.g., Template-LA) includes left-above neighboring samples. In an aspect, for example, the template selection unit 4120 may select the one or more sub-templates to form the selected template (e.g., template 2400, 2500, 2600, 2700, 2800, 2900, 3000, 3100, 3200). In an aspect, selecting the one or more sub-templates to form the selected template is performed for each candidate IPM.
  • In one example, the sub-templates described above may be non-adjacent to the current block (e.g., FIG. 32). In one example, a single sub-template is selected. For instance, the single sub-template may refer to Template-L, or Template-A, or Template-RA, or Template-LB, or Template-LA. In one example (e.g., FIG. 27), Template-L may be selected. In one example (e.g., FIG. 26), Template-A may be selected. In one example, multiple sub-templates may be selected. For instance (e.g., FIG. 25), the combination of Template-L and Template-A may be selected. In another example, the combination of Template-L, Template-A, and Template-LA may be selected. In one example (e.g., FIG. 28), the combination of Template-L and Template-LB may be selected. In one example (e.g., FIG. 29), the combination of Template-A and Template-RA may be selected. In one example (e.g., FIG. 30), the combination of Template-L, Template-LB, Template-A, and Template-RA may be selected. In one example (e.g., FIG. 31), the combination of Template-L, Template-LB, Template-A, Template-RA, and Template-LA may be selected.
  • In an aspect, at sub-block 4312, the block 4310 may optionally include selecting the one or more sub-templates based on decoded information for the current block. The selecting may occur on-the-fly, that is, for each block during the decoding process. In one example, selecting the sub-templates may depend on the decoded information for the current block. For instance, the decoded information may refer to a block dimension and/or a block shape. In an aspect, Template-L may not be used for wide blocks and Template-A may not be used for tall blocks. For instance, a block width may be denoted as BW and a block height may be denoted as BH. In one example, Template-L may not be used for a block when BW/BH>=T1, where T1 is a configured threshold. For instance, T1 may be an integer, such as T1=4 or T1=8, or T1 may not be an integer. In one example, Template-A may not be used for a block when BH/BW>=T2, wherein T2 is a configured threshold. For instance, T2 may be an integer, such as T2=4 or T2=8, or T2 may not be an integer. In one example, T1 may be equal to T2. In one example, T1 may not be equal to T2. In one example, T1 may be less than T2. Alternatively, T1 may be larger than T2. In one example, selecting the sub-templates may depend on a signalled syntax element. For instance, the decoded syntax element may indicate the one or more sub-templates.
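  • As a concrete, hedged illustration of the shape-based rule above (the function name and default threshold values are assumptions, not part of the described design):

        # Hypothetical sketch: wide blocks (BW/BH >= T1) skip Template-L and
        # tall blocks (BH/BW >= T2) skip Template-A.
        def select_sub_templates(BW, BH, T1=4, T2=4):
            candidates = ["Template-LA", "Template-A", "Template-RA",
                          "Template-L", "Template-LB"]
            if BW / BH >= T1:
                candidates.remove("Template-L")   # too wide for a left template
            if BH / BW >= T2:
                candidates.remove("Template-A")   # too tall for an above template
            return candidates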
  • In an aspect, at sub-block 4314, the block 4310 may optionally include determining which of the plurality of sub-templates for the current block have been reconstructed. When a sub-template for the current block has been reconstructed, the reconstructed samples for the template may be available. In one example, when one or more sub-templates (e.g., 2420, 2430, 2440, 2450, 2460) are unavailable, the other sub-templates may be selected. For instance, a sub-template may be unavailable near an edge of a picture. In one example, when the sub-templates at the left side of the current block (e.g., Template-L, Template-LA, and/or Template-LB) are unavailable, the sub-templates at the above side of the current block (e.g., Template-A and/or Template-RA) may be selected. In one example, when the sub-templates at the above side of the current block (e.g., Template-A, Template-LA, and/or Template-RA) are unavailable, the sub-templates at the left side of the current block (e.g., Template-L and/or Template-LB) may be selected. In an aspect, when all templates are unavailable, DIMD may not be applied to the current block and/or no IPM may be derived for the current block. In another aspect, when all templates are unavailable, a pre-defined IPM may be used. The pre-defined IPM may refer to DC, Planar, horizontal mode, or vertical mode.
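  • A minimal sketch of the availability fallback described above, assuming a hypothetical available() predicate; the pre-defined fallback mode shown here (Planar) is only one of the listed options (DC, Planar, horizontal, or vertical):

        # Drop unavailable sub-templates (e.g., outside the picture); if none
        # remain, skip DIMD and fall back to a pre-defined IPM.
        def resolve_template(selected, available, predefined_ipm="Planar"):
            usable = [t for t in selected if available(t)]
            if not usable:
                return None, predefined_ipm   # no derived IPM for this block
            return usable, None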
  • At block 4320, the method 4300 may optionally include determining a dimension of the selected template based on the decoded information for the current block. For example, the template selection unit 4120 may determine dimensions (e.g., M, N, L1, L2) of the selected template based on the decoded information for the current block (e.g., block dimension, or/and block shape, or/and slice/picture type). As illustrated in FIGS. 24-31, the dimensions of Template-L, Template-A, Template-RA, Template-LB, and Template-LA may be L1×BH, BW×L2, BW′×L2, L1×BH′, and L1×L2, respectively. BW′ may represent the width of Template-RA and BH′ may represent the height of Template-LB. In one example, BW′=BW or BH, and BH′=BH or BW. In one example, L1 or/and L2 may be pre-defined values. In one example, L1 may be equal to L2 (i.e., L1=L2=V1, where V1 is an integer, such as V1=3, 5, 6, 7, or 8). In one example, L1 may be less than L2. In one example, L1 may be larger than L2. In one example, the values of L1 and/or L2 may depend on slice type. For instance, the values of L1 or/and L2 for an I slice may be no less than the values for inter-coded slices (e.g., P/B slices). Alternatively, the values of L1 or/and L2 for an I slice may be larger than the values for inter-coded slices (e.g., P/B slices). In one example, L1 and/or L2 may depend on the dimensions of the current block. For instance, when BH is less than or equal to T1, L1 may be set equal to A1; when BH is larger than T1, L1 may be set equal to A2. In some implementations, A1 may be less than or equal to A2. For instance, A1=2 and A2=4, or A1=1 and A2=2. In some implementations, A1 may be larger than A2. In one example, when BW is less than or equal to T2, L2 may be set equal to B1; when BW is larger than T2, L2 may be set equal to B2. In some implementations, B1 may be less than or equal to B2, such as B1=2, B2=4, or B1=1, B2=2. In some implementations, B1 may be larger than B2. In one example, when BW is less than or equal to a configured threshold T3, L1 may be set equal to C1; when BW is larger than T3, L1 may be set equal to C2. In one example, when BH is less than or equal to a configured threshold T4, L2 may be set equal to D1; when BH is larger than T4, L2 may be set equal to D2. In one example, when BW×BH is less than or equal to a configured threshold T5, L1 may be set equal to E1 and L2 may be set equal to E2; when BW×BH is larger than T5, L1 may be set equal to E3 and L2 may be set equal to E4. In one example, E1 may be equal to E2, or/and E3 may be equal to E4. In one example, E1 may be less than E2, or/and E3 may be less than E4, such as E1=2, E2=4, E3=2, E4=4, or E1=1, E2=2, E3=1, E4=2. In one example, L1 or/and L2 may be determined by a signaled syntax element.
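  • One of the dimension rules above, expressed as a hedged sketch; the threshold and size values T1, T2, A1, A2, B1, B2 are example values from the text, and the function name is illustrative:

        # L1 (thickness of the left-side templates) grows with block height;
        # L2 (thickness of the above-side templates) grows with block width.
        def template_thickness(BW, BH, T1=8, T2=8, A1=2, A2=4, B1=2, B2=4):
            L1 = A1 if BH <= T1 else A2
            L2 = B1 if BW <= T2 else B2
            return L1, L2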
  • At block 4330, the method 4300 may include deriving at least one intra-prediction mode (IPM) based on cost calculations. In an aspect, for example, the IPM deriving unit 4130, 4230 may determine the cost of using each of a plurality of candidate IPMs to predict samples in a template region based on the selected template and the current block. For example, at sub-block 4332, the block 4330 may optionally include determining one of: a sum of the absolute transformed difference (SATD), a sum of the squared errors (SSE), a subjective quality metric, or a structural similarity index measure (SSIM) between reconstructed samples of the template and predicted samples of the template predicted by the candidate IPM. For instance, the IPM deriving unit 4130, 4230 may determine the predicted samples of the template for each candidate IPM. In an aspect, because the samples of the selected template have been reconstructed, the IPM deriving unit 4130, 4230 may compare the predicted samples of the template with the reconstructed samples of the template to determine the cost. The cost may be calculated in the form D+lambda×R, where D is a metric of distortion such as SAD, SATD, or SSE, R represents the number of bits under consideration, and lambda is a pre-defined factor. In an aspect, where different templates are selected for different IPMs, the cost may be normalized by dividing by the total number of samples in the templates used for each IPM. The IPM deriving unit 4130, 4230 may select the candidate IPM that has the best cost. The best cost may be the lowest cost when the cost metric is SATD, SSE, or SAD. The best cost may be the highest value when the cost metric is a subjective quality metric or SSIM.
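  • The cost computation can be sketched as follows. For simplicity the sketch uses SAD as the distortion term D in J=D+lambda×R (a real SATD would first apply a Hadamard transform to the difference before summing); the function name and the lam and bits parameters are illustrative assumptions:

        # Minimal sketch of J = D + lambda * R over one template, with
        # optional normalization by the number of template samples.
        def template_cost(reco, pred, lam=0.0, bits=0, normalize=True):
            sad = sum(abs(int(r) - int(p)) for r, p in zip(reco, pred))
            cost = sad + lam * bits
            if normalize:
                cost /= max(len(reco), 1)
            return cost

        # The derived IPM is the candidate with the best cost (lowest for
        # SAD/SATD/SSE), e.g.:
        # best_ipm = min(candidate_ipms,
        #                key=lambda m: template_cost(reco, predict(m)))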
  • For example, at sub-block 4334 the block 4330 may optionally include determining the predicted samples of each sub-template separately.
  • For example, at sub-block 4336, the block 4330 may optionally include determining a cost of each sub-template separately and summing the costs to determine a final cost for the selected template. In one example, the costs of the sub-templates may be used to derive the final cost. In one example, a weight may be used when summing up the cost of each sub-template. In one example, J=w1×J1+w2×J2+w3×J3+ . . . +wM×JM, where Ji and wi denote the cost and the weight of the i-th sub-template, respectively, and J denotes the final cost. In one example, Template-A or/and Template-L may have larger weights than other sub-templates. In one example, the weights may depend on the block dimension or/and block shape of the current block. In one example, the cost of each sub-template may be normalized by dividing by the number of samples in the sub-template.
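  • A hedged sketch of the weighted final cost J=w1×J1+w2×J2+ . . . +wM×JM, including the optional per-sub-template normalization by sample count; all names are illustrative:

        # Weighted sum of per-sub-template costs; the weights might favour
        # Template-A and Template-L, or depend on block dimension/shape.
        def final_cost(sub_costs, weights, sample_counts=None):
            if sample_counts is not None:   # optional normalization
                sub_costs = [j / n for j, n in zip(sub_costs, sample_counts)]
            return sum(w * j for w, j in zip(weights, sub_costs))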
  • For example, at sub-block 4338, the block 4330 may optionally include down-sampling samples in the selected template and calculating the cost based on the down-sampled selected template. Down-sampling may refer to using partial samples to calculate the cost. For example, the selected template may be down-sampled by a scale S first, and the samples in the down-sampled template may be used to calculate the cost. In one example, S=½ or ¼.
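  • For instance, down-sampling by S=½ keeps every second template sample, and S=¼ every fourth; a minimal sketch with an assumed step parameter:

        # Keep one of every `step` samples (step=2 for S=1/2, step=4 for
        # S=1/4), then compute the cost on the reduced sample set.
        def down_sample(samples, step=2):
            return samples[::step]
        # cost = template_cost(down_sample(reco), down_sample(pred))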
  • For example, at sub-block 4340, the block 4330 may optionally include determining one or more lines of template-reference samples neighboring the selected template based on the selected template. In one example, one or more lines (e.g., rows or columns) of template-reference samples neighboring the selected template may be used. Examples of template-reference samples are illustrated in FIGS. 33-40. In one example, multiple lines of template-reference samples neighboring the selected template may be used. In one example, N lines of template-reference samples neighboring the template may be used to derive the prediction of the template, where N is an integer larger than 1. In one example, one of the N lines of template-reference samples may be used. In one example, which of the N lines of template-reference samples to use may be derived implicitly. In one example, which of the N lines of template-reference samples to use may be signalled with a syntax element. In one example, which of the N lines of template-reference samples to use may be inherited from the index of the reference line used in MRL. In one example, the N lines of template-reference samples neighboring the template may be down-sampled into one line first, and the down-sampled line may be used as the reference. In one example, the number of lines of template-reference samples for each template may be different. In one example, the number of lines of template-reference samples may depend on the decoded information, such as whether the current block is located at a CTU boundary, the dimension or shape of the current block, or/and a slice or picture type.
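  • One of the options above, down-sampling N reference lines into a single line, could look like the following sketch; simple per-position averaging is an assumption, and the chosen line could equally be signalled or inherited from the MRL reference-line index:

        # Fuse N parallel lines of template-reference samples into one line
        # by averaging co-located samples across the lines (N >= 1).
        def fuse_reference_lines(lines):
            n = len(lines)
            return [sum(col) // n for col in zip(*lines)]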
  • In one example, unavailable template-reference samples may be filled using the available template-reference samples. In one example, the unavailable template-reference samples may be filled using the nearest available template-reference samples. In one example, the unavailable template-reference samples may be filled with a same value. In one example, the unavailable template-reference samples may be filled with a predefined value V1, such as V1=2^(bitdepth−1), wherein bitdepth is the bit depth of samples. For instance, the bit depth may be different for different color components. In one example, the unavailable template-reference samples may be filled using different values according to the distances from the available template-reference samples. In one example, the unavailable template-reference samples may be substituted by generated values. For example, the same reference sample substitution process as in HEVC or VVC may be used to fill these samples.
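  • A minimal sketch of the substitution rules above, assuming a simple forward fill from the nearest preceding available sample and the default value 2^(bitdepth−1); the actual HEVC/VVC substitution process scans in both directions and is more elaborate:

        # Fill unavailable template-reference samples: copy the last
        # available sample, or use 2^(bitdepth-1) when none has been seen.
        def fill_unavailable(samples, available, bitdepth=10):
            default = 1 << (bitdepth - 1)
            filled, last = [], None
            for s, ok in zip(samples, available):
                if ok:
                    last = s
                filled.append(last if last is not None else default)
            return filled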
  • At block 4350, the method 4300 may include determining, based on the at least one IPM, a final predictor of the current video block. For example, the final predictor unit 4140, 4240 may determine, based on the at least one IPM, a final predictor of the current video block. For instance, the final predictor unit 4140, 4240 may predict the samples in the current block with intra prediction using the derived IPM. The final predictor unit 4140, 4240 may determine the positions of reference samples based on the derived IPM. The reference samples may be located within the selected template.
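  • As a hedged illustration of forming the final predictor, the sketch below shows only the DC case, where the block is filled with the mean of reference samples that may lie within the selected template; directional modes would instead project each block position onto those reference samples. The function name is illustrative:

        # Minimal DC-mode predictor: an M x N block filled with the mean of
        # the reference samples.
        def dc_predictor(ref_samples, M, N):
            dc = sum(ref_samples) // max(len(ref_samples), 1)
            return [[dc] * M for _ in range(N)]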
  • At block 4360, the method 4300 may include performing the conversion based on the final predictor. For example, the conversion unit 4250 may perform the conversion based on the final predictor.
  • In an aspect, whether to and/or how to apply the disclosed methods above may be signaled at the sequence level, picture level, slice level, or tile group level, such as in a sequence header, picture header, SPS, VPS, DPS, DCI, PPS, APS, slice header, or tile group header. In an aspect, whether to and/or how to apply the disclosed methods above may be signaled at the PU, TU, CU, VPDU, CTU, CTU row, slice, tile, or sub-picture level. In an aspect, whether to and/or how to apply the disclosed methods above may be dependent on coded information, such as block size, color format, single/dual tree partitioning, color component, slice type, or picture type.
  • The following clauses describe some embodiments and techniques.
  • 1. A method of processing video data, comprising:
      • constructing, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates;
      • deriving at least one intra-prediction mode (IPM) based on cost calculations;
      • determining, based on the at least one IPM, a final predictor of the current video block; and
      • performing the conversion based on the final predictor.
  • 2. The method of clause 1, wherein the plurality of sub-templates includes at least one of the following: a left sub-template, an above sub-template, a right-above sub-template, a left-below sub-template, and a left-above sub-template.
  • 3. The method of clause 1, wherein the plurality of sub-templates includes non-adjacent samples of the current video block.
  • 4. The method of clause 1, wherein the at least one template set includes a single sub-template, and wherein the single sub-template is one of a left sub-template, an above sub-template, a right-above sub-template, a left-below sub-template, or a left-above sub-template.
  • 5. The method of clause 1, wherein the at least one template set includes any one of the following:
      • a) a left sub-template and an above sub-template;
      • b) a left sub-template, an above sub-template, and a left-above sub-template;
      • c) a left sub-template and a left-below sub-template;
      • d) an above sub-template and a right-above sub-template;
      • e) a left sub-template, a left-below sub-template, an above sub-template, and a right-above sub-template; or
      • f) a left sub-template, a left-below sub-template, an above sub-template, a right-above sub-template, and a left-above sub-template.
  • 6. The method of clause 1, wherein the at least one template set is selected from the plurality of sub-templates based on coding information for the current video block, wherein the coding information includes a block dimension or block shape.
  • 7. The method of clause 6, wherein the at least one template set is selected from the plurality of sub-templates based on a relationship between the block dimension and a pre-defined threshold, and wherein BW represents a block width of the current video block and BH represents a block height of the current video block.
  • 8. The method of clause 7, wherein a left sub-template is not selected in a case that the BW divided by the BH is greater than or equal to a first threshold.
  • 9. The method of clause 8, wherein an above sub-template is not selected in a case that the BH divided by the BW is greater than or equal to a second threshold.
  • 10. The method of clause 1, wherein a dimension of one of the plurality of sub-templates is based on at least one of the following:
      • a) a dimension of the current video block;
      • b) a block shape of the current video block;
      • c) a slice type of the current video block; or
      • d) a picture type of the current video block.
  • 11. The method of clause 10, wherein the dimension of one of the plurality of sub-templates is one of L1×BH, BW×L2, BW′×L2, L1×BH′ or L1×L2, where L1 is a height, L2 is a width, BW and BH represent width and height of the current video block respectively, BW′ represents the width of a right-above sub-template and BH′ represents the height of a left-below sub-template respectively, and wherein BW′ is equal to BW or BH and BH′ is equal to BH or BW, and L1 or L2 is a predefined value.
  • 12. The method of clause 11, wherein values of L1 and L2 depend on a slice type of the current video block or a dimension of the current video block.
  • 13. The method of clause 11, wherein values of L1 and L2 are related to a first syntax element presented in the bitstream.
  • 14. The method of clause 13, wherein constructing at least one template set from a plurality of sub-templates is further related to a second syntax element presented in the bitstream.
  • 15. The method of clause 1, wherein constructing at least one template set is based on availability of the plurality of sub-templates.
  • 16. The method of clause 15, wherein in response to the plurality of sub-templates being unavailable, no template set is constructed and no IPM is derived.
  • 17. The method of clause 16, wherein the final predictor of the current video block is determined based on a predefined IPM, and wherein the predefined IPM is one of DC mode, planar mode, horizontal mode, or vertical mode.
  • 18. The method of clause 1, wherein the conversion includes decoding the current video block from the bitstream.
  • 19. The method of clause 1, wherein the conversion includes encoding the current video block into the bitstream.
  • 20. An apparatus for processing video data comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to:
  • construct, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates;
      • derive at least one intra-prediction mode (IPM) based on cost calculations;
      • determine, based on the at least one IPM, a final predictor of the current video block; and
      • perform the conversion based on the final predictor.
  • 21. A non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus, wherein the method comprises:
      • constructing, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates;
      • deriving at least one intra-prediction mode (IPM) based on cost calculations;
      • determining, based on the at least one IPM, a final predictor of the current video block; and
      • generating the bitstream based on the final predictor.
  • 22. A non-transitory computer-readable storage medium storing instructions that cause a processor to:
      • construct, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates;
      • derive at least one intra-prediction mode (IPM) based on cost calculations;
      • determine, based on the at least one IPM, a final predictor of the current video block; and
      • perform the conversion based on the final predictor.
  • 23. A method of processing video data, comprising:
      • constructing, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates;
      • performing cost calculations based on partial or all samples of the at least one template set, wherein the cost calculation is further based on a calculation rule between partial or all reconstructed samples of the at least one template set and corresponding prediction samples;
      • deriving at least one intra-prediction mode (IPM) based on cost calculations;
      • determining, based on the at least one IPM, a final predictor of the current video block; and
      • performing the conversion based on the final predictor.
  • 24. The method of clause 23, wherein the calculation rule is one of the following rules:
  • 1) a sum of the absolute transformed difference (SATD) rule;
  • 2) a sum of the squared errors (SSE) rule;
  • 3) a sum of the absolute difference (SAD) rule;
  • 4) a subjective quality metric rule;
  • 5) a structural similarity index measure (SSIM) rule.
  • 25. The method of clause 23, wherein the cost calculation rule is based on a metric of distortion plus a defined weighting factor times a number of bits under consideration, wherein the metric of distortion is one of SAD, SATD, or SSE.
  • 26. The method of clause 23, wherein performing the cost calculations comprises: determining a cost of each sub-template separately; and deriving a final cost by summing the cost of each sub-template, and wherein the cost of each sub-template has a corresponding weight.
  • 27. The method of clause 26, wherein an above sub-template or a left sub-template has a greatest weight.
  • 28. The method of clause 26, wherein the respective weights depend on a block dimension or block shape of the current video block.
  • 29. The method of clause 26, wherein the cost of each sub-template is normalized by dividing the cost of each sub-template by a number of samples in each sub-template.
  • 30. The method of clause 23, wherein the cost calculations are performed based on down-sampled partial samples of the at least one template set.
  • 31. The method of clause 23, wherein a same sub-template is selected for a pre-defined IPM.
  • 32. The method of clause 23, wherein different sub-templates are selected for different IPMs.
  • 33. An apparatus for processing video data comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to:
      • construct, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates;
      • perform cost calculations based on partial or all samples of the at least one template set, wherein the cost calculation is further based on a calculation rule between partial or all reconstructed samples of the at least one template set and corresponding prediction samples;
      • derive at least one intra-prediction mode (IPM) based on cost calculations;
      • determine, based on the at least one IPM, a final predictor of the current video block; and
      • perform the conversion based on the final predictor.
  • 34. A non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus, wherein the method comprises:
      • constructing, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates;
      • performing cost calculations based on partial or all samples of the at least one template set, wherein the cost calculation is further based on a calculation rule between partial or all reconstructed samples of the at least one template set and corresponding prediction samples;
      • deriving at least one intra-prediction mode (IPM) based on cost calculations;
      • determining, based on the at least one IPM, a final predictor of the current video block; and
      • performing the conversion based on the final predictor.
  • 35. A non-transitory computer-readable storage medium storing instructions that cause a processor to:
      • construct, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates;
      • perform cost calculations based on partial or all samples of the at least one template set, wherein the cost calculation is further based on a calculation rule between partial or all reconstructed samples of the at least one template set and corresponding prediction samples;
      • derive at least one intra-prediction mode (IPM) based on cost calculations;
      • determine, based on the at least one IPM, a final predictor of the current video block; and
      • perform the conversion based on the final predictor.
  • 36. A method of processing a video, wherein the video comprises a plurality of frames, each frame comprising blocks of samples, the method comprising, for at least a current block:
      • selecting, during a conversion between the current block of the video and a bitstream of the video, a selected template for the current block;
      • determining a cost of using each of a plurality of candidate intra prediction modes (IPMs) to predict samples in a template region, wherein determining the cost of using each of the plurality of candidate IPMs to predict samples comprises determining one or more lines of template-reference samples neighboring the selected template based on the selected template;
      • selecting a derived IPM from the plurality of candidate IPMs based on the cost; and
      • predicting the samples in the current block with intra-prediction using the derived IPM; and
      • performing the conversion based on the predicting.
  • 37. The method of clause 36, wherein the one or more lines of template-reference samples include a horizontal row above the selected template and a vertical column to the left of the selected template.
  • 38. The method of clause 36, wherein the one or more lines of template-reference samples includes N lines of parallel rows or columns, where N is an integer greater than 1.
  • 39. The method of clause 38, wherein predicting the samples in the current block with intra-prediction using the derived IPM comprises selecting one line of the N lines.
  • 40. The method of clause 39, wherein selecting the one line of the N lines comprises implicitly deriving the one line.
  • 41. The method of clause 39, wherein selecting the one line of the N lines is related to a syntax element presented in the bitstream.
  • 42. The method of clause 39, wherein the one line of the N lines is inherited from an index of a reference line for multiple reference line (MRL).
  • 43. The method of clause 38, further comprises down-sampling the one or more lines to derive a single line.
  • 44. The method of clause 38, wherein a number of lines of template-reference samples is based on the selected template.
  • 45. The method of clause 44, wherein the number of lines of template-reference samples is based on coded information for the current block.
  • 46. The method of clause 36, wherein determining one or more lines of template-reference samples comprises substituting an unavailable template-reference sample with an available reference sample.
  • 47. The method of clause 46, wherein the available template-reference sample is a nearest available template-reference sample.
  • 48. The method of clause 46, wherein the available template-reference sample is a predefined value, or a value based on a defined formula.
  • 49. The method of clause 48, wherein the defined formula is based on a bit depth of a color component.
  • 50. The method of clause 48, wherein the defined formula is based on a distance from the available template-reference sample.
  • 51. The method of clause 46, wherein the available template-reference sample is generated based on a reference sample substitution process.
  • 52. The method of clause 36, further comprising receiving signaling at a sequence level, picture level, slice level, or tile group level indicating whether or how to select a template.
  • 53. The method of clause 36, further comprising receiving signaling at PU, TU, CU, VPDU, CTU, CTU row, slice, tile, or sub-picture indicating whether or how to select a template.
  • 54. The method of clause 36, wherein whether or how to select a template is dependent on coded information including at least one of: block size, color format, single or dual tree partitioning, color component, slice type, or picture type.
  • 55. An apparatus for processing video data comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to:
      • select, during a conversion between the current block of the video and a bitstream of the video, a selected template for the current block;
      • determine a cost of using each of a plurality of candidate intra prediction modes (IPMs) to predict samples in a template region, wherein determining the cost of using each of the plurality of candidate IPMs to predict samples comprises determining one or more lines of template-reference samples neighboring the selected template based on the selected template;
      • select a derived IPM from the plurality of candidate IPMs based on the cost; and
      • predict the samples in the current block with intra-prediction using the derived IPM; and
      • perform the conversion based on the predicted samples.
  • 56. A non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus, wherein the method comprises:
      • selecting, during a conversion between the current block of the video and a bitstream of the video, a selected template for the current block;
      • determining a cost of using each of a plurality of candidate intra prediction modes (IPMs) to predict samples in a template region, wherein determining the cost of using each of the plurality of candidate IPMs to predict samples comprises determining one or more lines of template-reference samples neighboring the selected template based on the selected template;
      • selecting a derived IPM from the plurality of candidate IPMs based on the cost;
      • predicting the samples in the current block with intra-prediction using the derived IPM; and
      • performing the conversion based on the predicting.
  • 57. A non-transitory computer-readable storage medium storing instructions that cause a processor to:
      • select, during a conversion between the current block of the video and a bitstream of the video, a selected template for the current block;
      • determine a cost of using each of a plurality of candidate intra prediction modes (IPMs) to predict samples in a template region, wherein determining the cost of using each of the plurality of candidate IPMs to predict samples comprises determining one or more lines of template-reference samples neighboring the selected template based on the selected template;
      • select a derived IPM from the plurality of candidate IPMs based on the cost;
      • predict the samples in the current block with intra-prediction using the derived IPM; and
      • perform the conversion based on the predicted samples.
  • While the foregoing disclosure discusses illustrative aspects and/or embodiments, it should be noted that various changes and modifications could be made herein without departing from the scope of the described aspects and/or embodiments as defined by the appended claims. Furthermore, although elements of the described aspects and/or embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. Additionally, all or a portion of any aspect and/or embodiment may be utilized with all or a portion of any other aspect and/or embodiment, unless stated otherwise.
  • The previous description is provided to enable any person having ordinary skill in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to a person having ordinary skill in the art, and the generic principles defined herein may be applied to other aspects. The claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, where reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. Combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” include any combination of A, B, or C, and may include multiples of A, multiples of B, or multiples of C. Specifically, combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to a person having ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. The words “module,” “mechanism,” “element,” “device,” and the like may not be a substitute for the word “means.” As such, no claim element is to be construed as a means plus function unless the element is expressly recited using the phrase “means for.”

Claims (22)

1. A method of processing video data, comprising:
constructing, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates;
deriving at least one intra-prediction mode (IPM) based on cost calculations, wherein the cost calculations are determined based on the at least one template set and the current block;
determining, based on the at least one IPM, a final predictor of the current video block; and
performing the conversion based on the final predictor,
wherein the at least one template set is selected from the plurality of sub-templates based on a coding information for the current video block, wherein the coding information includes a block dimension of the current video block, and the at least one template set is selected from the plurality of sub-templates based on a relationship between the block dimension of the current video block and a pre-defined threshold, and wherein a BW represents a block width of the current video block and a BH represents a block height of the current video block, and
wherein a left sub-template is not selected in a case that the BW divided by the BH is greater than or equal to a first threshold.
2. The method of claim 1, wherein the plurality of sub-templates includes at least one of the following: a left sub-template, an above sub-template, a right-above sub-template, a left-below sub-template, and a left-above sub-template.
3. The method of claim 1, wherein the plurality of sub-templates includes non-adjacent samples of the current video block.
4. The method of claim 1, wherein the at least one template set includes a single sub-template, and wherein the single sub-template is one of a left sub-template, an above sub-template, a right-above sub-template, a left-below sub-template, or a left-above sub-template.
5. The method of claim 1, wherein the at least one template set includes any one of the following:
a) a left sub-template and an above sub-template;
b) a left sub-template, an above sub-template, and a left-above sub-template;
c) a left sub-template and a left-below sub-template;
d) an above sub-template and a right-above sub-template;
e) a left sub-template, a left-below sub-template, an above sub-template, and a right-above sub-template; or
f) a left sub-template, a left-below sub-template, an above sub-template, a right-above sub-template, and a left-above sub-template.
6. The method of claim 1, wherein the coding information further includes a block shape of the current video block.
7. (canceled)
8. (canceled)
9. The method of claim 1, wherein an above sub-template is not selected in a case that the BH divided by the BW is greater than or equal to a second threshold.
10. The method of claim 1, wherein a dimension of one of the plurality of sub-templates is based on the at least one of the following:
a) a dimension of the current video block;
b) a block shape of the current video block;
c) a slice type of the current video block; or
d) a picture type of the current video block.
11. The method of claim 10, wherein the dimension of one of the plurality of sub-templates is one of L1×BH, BW×L2, BW′×L2, L1×BH′ or L1×L2, where L1 is a height, L2 is a width, BW and BH represent width and height of the current video block respectively, BW′ represents the width of a right-above sub-template and BH′ represents the height of a left-below sub-template respectively, and wherein BW′ is equal to BW or BH and BH′ is equal to BH or BW, and L1 or L2 is a predefined value.
12. The method of claim 11, wherein values of L1 and L2 depend on a slice type of the current video block or a dimension of the current video block.
13. The method of claim 11, wherein values of L1 and L2 are related to a first syntax element presented in the bitstream.
14. The method of claim 13, wherein constructing at least one template set from a plurality of sub-templates is further related to a second syntax element presented in the bitstream.
15. The method of claim 1, wherein constructing the at least one template set is based on availability of the plurality of sub-templates.
16. The method of claim 15, wherein in response to the plurality of sub-templates being unavailable, no template set is constructed and no IPM is derived.
17. The method of claim 16, wherein the final predictor of the current video block is determined based on a predefined IPM, and wherein the predefined IPM is one of DC mode, planar mode, horizontal mode, or vertical mode.
18. The method of claim 1, wherein the conversion includes decoding the current video block from the bitstream.
19. The method of claim 1, wherein the conversion includes encoding the current video block into the bitstream.
20. An apparatus for processing video data comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to:
construct, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates;
derive at least one intra-prediction mode (IPM) based on cost calculations, wherein the cost calculations are determined based on the at least one template set and the current block;
determine, based on the at least one IPM, a final predictor of the current video block; and
perform the conversion based on the final predictor,
wherein the at least one template set is selected from the plurality of sub-templates based on a coding information for the current video block, wherein the coding information includes a block dimension of the current video block, and the at least one template set is selected from the plurality of sub-templates based on a relationship between the block dimension of the current video block and a pre-defined threshold, and wherein a BW represents a block width of the current video block and a BH represents a block height of the current video block, and
wherein a left sub-template is not selected in a case that the BW divided by the BH is greater than or equal to a first threshold.
21. A non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus, wherein the method comprises:
constructing, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates;
deriving at least one intra-prediction mode (IPM) based on cost calculations, wherein the cost calculations are determined based on the at least one template set and the current block;
determining, based on the at least one IPM, a final predictor of the current video block; and
generating the bitstream based on the final predictor,
wherein the at least one template set is selected from the plurality of sub-templates based on a coding information for the current video block, wherein the coding information includes a block dimension of the current video block, and the at least one template set is selected from the plurality of sub-templates based on a relationship between the block dimension of the current video block and a pre-defined threshold, and wherein a BW represents a block width of the current video block and a BH represents a block height of the current video block, and
wherein a left sub-template is not selected in a case that the BW divided by the BH is greater than or equal to a first threshold.
22. A non-transitory computer-readable storage medium storing instructions that cause a processor to:
construct, during a conversion between a current video block of a video and a bitstream of the video, at least one template set for the current video block from a plurality of sub-templates;
derive at least one intra-prediction mode (IPM) based on cost calculations, wherein the cost calculations are determined based on the at least one template set and the current block;
determine, based on the at least one IPM, a final predictor of the current video block; and
perform the conversion based on the final predictor,
wherein the at least one template set is selected from the plurality of sub-templates based on a coding information for the current video block, wherein the coding information includes a block dimension of the current video block, and the at least one template set is selected from the plurality of sub-templates based on a relationship between the block dimension of the current video block and a pre-defined threshold, and wherein a BW represents a block width of the current video block and a BH represents a block height of the current video block, and
wherein a left sub-template is not selected in a case that the BW divided by the BH is greater than or equal to a first threshold.
US17/148,383 2021-01-13 2021-01-13 Usage of templates for decoder-side intra mode derivation Active US11388421B1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/148,383 US11388421B1 (en) 2021-01-13 2021-01-13 Usage of templates for decoder-side intra mode derivation
CN202210028454.4A CN114765688A (en) 2021-01-13 2022-01-11 Use of templates for decoder-side intra mode derivation
US17/837,867 US11902537B2 (en) 2021-01-13 2022-06-10 Usage of templates for decoder-side intra mode derivation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US17/148,383 US11388421B1 (en) 2021-01-13 2021-01-13 Usage of templates for decoder-side intra mode derivation

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/837,867 Continuation US11902537B2 (en) 2021-01-13 2022-06-10 Usage of templates for decoder-side intra mode derivation

Publications (2)

Publication Number Publication Date
US11388421B1 (en) 2022-07-12
US20220224915A1 (en) 2022-07-14

Family

ID=82322243

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/148,383 Active US11388421B1 (en) 2021-01-13 2021-01-13 Usage of templates for decoder-side intra mode derivation
US17/837,867 Active US11902537B2 (en) 2021-01-13 2022-06-10 Usage of templates for decoder-side intra mode derivation

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/837,867 Active US11902537B2 (en) 2021-01-13 2022-06-10 Usage of templates for decoder-side intra mode derivation

Country Status (2)

Country Link
US (2) US11388421B1 (en)
CN (1) CN114765688A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220345692A1 (en) * 2021-04-26 2022-10-27 Tencent America LLC Template matching based intra prediction
US20230049154A1 (en) * 2021-08-02 2023-02-16 Tencent America LLC Method and apparatus for improved intra prediction
US20230059794A1 (en) * 2021-08-23 2023-02-23 Mediatek Inc. Context-based adaptive binary arithmetic coding decoder capable of decoding multiple bins in one cycle and associated decoding method
WO2024034861A1 (en) * 2022-08-09 2024-02-15 Hyundai Motor Company Method and device for video coding using template-based prediction
EP4346203A1 (en) * 2022-09-27 2024-04-03 Sharp Kabushiki Kaisha Image decoding apparatus and image coding apparatus

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3962080A1 (en) * 2020-08-26 2022-03-02 Ateme Method and apparatus for image processing
US20220337875A1 (en) * 2021-04-16 2022-10-20 Tencent America LLC Low memory design for multiple reference line selection scheme
US11943432B2 (en) * 2021-04-26 2024-03-26 Tencent America LLC Decoder side intra mode derivation
US20220394269A1 (en) * 2021-06-03 2022-12-08 Qualcomm Incorporated Derived intra prediction modes and most probable modes in video coding
CN116830581A (en) * 2021-09-15 2023-09-29 Tencent America LLC Improved signaling method and apparatus for motion vector differences
EP4346202A1 (en) * 2022-09-27 2024-04-03 Beijing Xiaomi Mobile Software Co., Ltd. Encoding/decoding video picture data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170353719A1 (en) * 2016-06-03 2017-12-07 Mediatek Inc. Method and Apparatus for Template-Based Intra Prediction in Image and Video Coding
US20180359483A1 (en) * 2017-06-13 2018-12-13 Qualcomm Incorporated Motion vector prediction
US20190166370A1 (en) * 2016-05-06 2019-05-30 Vid Scale, Inc. Method and system for decoder-side intra mode derivation for block-based video coding
US20200128258A1 (en) * 2016-12-27 2020-04-23 Mediatek Inc. Method and Apparatus of Bilateral Template MV Refinement for Video Coding

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10341664B2 (en) * 2015-09-17 2019-07-02 Intel Corporation Configurable intra coding performance enhancements
EP3643062A1 (en) * 2017-07-04 2020-04-29 Huawei Technologies Co., Ltd. Decoder side intra mode derivation (dimd) tool computational complexity reduction

Also Published As

Publication number Publication date
US20220329826A1 (en) 2022-10-13
CN114765688A (en) 2022-07-19
US11388421B1 (en) 2022-07-12
US11902537B2 (en) 2024-02-13

Similar Documents

Publication Publication Date Title
US11889097B2 (en) Techniques for decoding or coding images based on multiple intra-prediction modes
US11388421B1 (en) Usage of templates for decoder-side intra mode derivation
US11563957B2 (en) Signaling for decoder-side intra mode derivation
KR20210145754A (en) Calculations in matrix-based intra prediction
US11647198B2 (en) Methods and apparatuses for cross-component prediction
US11582460B2 (en) Techniques for decoding or coding images based on multiple intra-prediction modes
US20210377519A1 (en) Intra prediction-based video signal processing method and device
US11917196B2 (en) Initialization for counter-based intra prediction mode
US20230283766A1 (en) Methods and apparatuses for cross-component prediction
US20220182665A1 (en) Counter-based intra prediction mode
WO2023016408A1 (en) Method, apparatus, and medium for video processing
US20240137529A1 (en) Method, device, and medium for video processing
US20240031565A1 (en) Intra-Prediction on Non-dyadic Blocks
WO2022214028A1 (en) Method, device, and medium for video processing
WO2023016424A1 (en) Method, apparatus, and medium for video processing
WO2022218316A1 (en) Method, device, and medium for video processing
WO2023274372A1 (en) Method, device, and medium for video processing
WO2023051532A1 (en) Method, device, and medium for video processing
WO2022242727A1 (en) Method, device, and medium for video processing
WO2022242729A9 (en) Method, device, and medium for video processing
WO2023201930A1 (en) Method, apparatus, and medium for video processing
WO2023016439A1 (en) Method, apparatus, and medium for video processing
US20230396812A1 (en) Unsymmetric Binary Tree Partitioning and Non-dyadic Blocks
WO2022247884A1 (en) Method, device, and medium for video processing
WO2023217140A1 (en) Threshold of similarity for candidate list

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: BYTEDANCE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHANG, KAI;ZHANG, LI;HE, YUWEN;SIGNING DATES FROM 20210524 TO 20210526;REEL/FRAME:057435/0485

Owner name: BEIJING OCEAN ENGINE NETWORK TECHNOLOGY CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:WANG, YANG;REEL/FRAME:057435/0483

Effective date: 20210527

Owner name: BEIJING ZITIAO NETWORK TECHNOLOGY, CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LIU, HONGBIN;REEL/FRAME:056630/0907

Effective date: 20210524

AS Assignment

Owner name: LEMON INC., CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BEIJING OCEAN ENGINE NETWORK TECHNOLOGY CO., LTD.;REEL/FRAME:057151/0951

Effective date: 20210528

AS Assignment

Owner name: LEMON INC., CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BEIJING OCEAN ENGINE NETWORK TECHNOLOGY CO., LTD.;REEL/FRAME:057602/0758

Effective date: 20210528

Owner name: LEMON INC., CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BYTEDANCE INC.;REEL/FRAME:057603/0324

Effective date: 20210528

Owner name: LEMON INC., CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.;REEL/FRAME:057603/0168

Effective date: 20210528

STCF Information on status: patent grant

Free format text: PATENTED CASE