WO2020232355A1 - Intra block copy for screen content coding - Google Patents

Intra block copy for screen content coding

Info

Publication number
WO2020232355A1
Authority
WO
WIPO (PCT)
Application number
PCT/US2020/033134
Other languages
English (en)
Inventor
Weijia Zhu
Jizheng Xu
Li Zhang
Kai Zhang
Original Assignee
Bytedance Inc.
Application filed by Bytedance Inc. filed Critical Bytedance Inc.
Priority to CN202080036494.2A priority Critical patent/CN113826390B/zh
Publication of WO2020232355A1 publication Critical patent/WO2020232355A1/fr


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the region being a block, e.g. a macroblock
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96 Tree coding, e.g. quad-tree coding

Definitions

  • This document is related to video and image coding technologies.
  • Digital video accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
  • Devices, systems and methods related to digital video coding, and specifically, to intra block copy (IBC) for screen content coding for video coding are described.
  • the described methods may be applied to both the existing video coding standards (e.g., High Efficiency Video Coding (HEVC)) and future video coding standards (e.g., Versatile Video Coding (VVC)) or codecs.
  • a method of video processing includes performing a conversion between a current video block of a current picture of a chroma component of a video and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule, and wherein the format rule specifies that an indication of a use of an intra block copy (IBC) mode for the current video block is selectively included in the bitstream representation based on whether one or more luma blocks corresponding to the current video block are coded in the bitstream representation using the IBC mode, wherein the IBC mode comprises use of a prediction of the current video block based on samples from the current picture.
  • a method of video processing includes determining, for a current video block of a video, that a use of an intra block copy (IBC) mode for the current video block is disabled or a block vector corresponding to the current video block is invalid, generating, based on the determining, a prediction for the current video block using a default intra mode, and performing, based on the prediction, a conversion between the current video block and a bitstream representation of the video.
  • a method of video processing includes selectively enabling, for a conversion between a current video block of a video and a bitstream representation of the video, an intra block copy (IBC) mode for the current video block, and performing, subsequent to the selectively enabling, the conversion, wherein the current video block comprises one or more sub-blocks, and wherein at least one of the sub-blocks is associated with an invalid block vector.
  • a method of video processing includes performing a conversion between a current video block of a current picture of a video and a bitstream representation of the video, wherein the current video block is represented in the bitstream representation using an intra block copy mode based on prediction from a prediction block, and wherein the prediction block comprises pixels having a default value.
  • a method of video processing includes deriving, for a chroma video block that is coded using an intra block copy (IBC) mode, a motion vector or a block vector of the chroma video block based on a motion vector or a block vector of a neighboring chroma block, and performing, based on the deriving, a conversion between the chroma video block and a bitstream representation of the video.
  • a method of video processing includes making a determination that a current video block is from a video unit of a video having a content type, and performing, based on the determination, a conversion between the current video block and the bitstream representation of the video, wherein the content type is indicated in the bitstream representation at the video unit level, and wherein a coding tool is selectively available for the conversion depending on the content type based on a rule.
  • a method of video processing includes storing, for a current video block that is coded using a triangular partition mode (TPM), uni-prediction information for at least one sub-block of the current video block, and performing, using the uni-prediction information, a conversion between the current video block of a video and a bitstream representation of the video.
  • a method of video processing includes making a decision, based on a coding tree structure of a current video block, regarding a selective enablement of coding mode to the current video block, and performing, based on the decision, a conversion between the current video block and the bitstream representation of the current video block.
  • a method of video processing includes determining, based on a size of a current video block of a video that is coded using an intra block copy (IBC) mode, a predefined transform, and performing, based on the determining, a conversion between the current video block and a bitstream representation of the video, wherein the conversion comprises applying, during encoding, the predefined transform between an IBC prediction of the current video block and a residual coding in the bitstream representation.
  • a method of video processing includes conditionally enabling, for a current video block of a video, a triangular prediction mode (TPM) for the current video block with uni-prediction, and performing, subsequent to the conditionally enabling, a conversion between the current video block and a bitstream representation of the video.
  • the TPM comprises splitting the current video block into multiple sub-blocks, at least one having a non-square shape.
  • the above-described method may be implemented by a video encoder apparatus that comprises a processor.
  • these methods may be embodied in the form of processor-executable instructions and stored on a computer-readable program medium.
  • These, and other, aspects are further described in the present document.
  • FIG. 1 shows an example of intra block copy.
  • FIG. 2 shows an example of five spatial neighboring candidates.
  • FIG. 3 shows an example of a block coded in palette mode.
  • FIG. 4 shows an example of use of a palette predictor to signal palette entries.
  • FIG. 5 shows an example of horizontal and vertical traverse scans.
  • FIG. 6 shows an example of coding of palette indices.
  • FIG. 7 shows an example of multi-type tree splitting modes.
  • FIG. 8 shows an example of samples used to derive parameters in a cross-component linear model (CCLM) prediction mode.
  • FIG. 9 shows an example of 67 intra prediction modes.
  • FIG. 10 shows an example of the left and above neighbors of a current block.
  • FIG. 11 shows an example of four reference lines neighboring a prediction block.
  • FIG. 12A shows an example of divisions of 4x8 and 8x4 blocks for an intra sub-partition method.
  • FIG. 12B shows an example of divisions of all blocks except 4x8, 8x4 and 4x4 for an intra sub-partition method.
  • FIGS. 13A-13D show examples of samples used by a position dependent intra prediction combination (PDPC) method applied to diagonal and adjacent angular intra modes.
  • FIG. 14 shows an example of a triangle partition based inter prediction.
  • FIG. 15 shows an example of spatial and temporal neighboring blocks used to construct a uni-prediction candidate list.
  • FIG. 16 shows an example of the weights used in a blending process.
  • FIG. 17 shows an example of a selected luma block covering a luma region.
  • FIG. 18 shows examples of left and above neighboring blocks.
  • FIGS. 19A and 19B show examples of diagonal and anti-diagonal partitions in the triangular partitioning mode (TPM), respectively.
  • FIG. 20 shows an example of sub-blocks which contain samples located on the diagonal or anti-diagonal lines.
  • FIG. 21A shows an example of sub-blocks which contain samples located on the diagonal or anti-diagonal lines of a block with its width larger than its height.
  • FIG. 21B shows an example of sub-blocks which contain samples located on the diagonal or anti-diagonal lines of a block with its width smaller than its height.
  • FIG. 21C shows an example of sub-blocks which contain samples located on the diagonal or anti-diagonal lines of a block with its width equal to its height.
  • FIGS. 22A-22J are flowcharts for examples of video processing methods.
  • FIG. 23 is a block diagram of an example of a hardware platform for implementing a visual media decoding or a visual media encoding technique described in the present document.
  • FIG. 24 is a block diagram of an example video processing system in which disclosed techniques may be implemented.
  • the present document provides various techniques that can be used by a decoder of image or video bitstreams to improve the quality of decompressed or decoded digital video or images.
  • video is used herein to include both a sequence of pictures (traditionally called video) and individual images.
  • a video encoder may also implement these techniques during the process of encoding in order to reconstruct decoded frames used for further encoding.
  • Section headings are used in the present document for ease of understanding and do not limit the embodiments and techniques to the corresponding sections. As such, embodiments from one section can be combined with embodiments from other sections.
  • This document is related to video coding technologies. Specifically, it is related to IBC for screen content coding. It may be applied to existing video coding standards like HEVC, or to the standard to be finalized (Versatile Video Coding, VVC). It may also be applicable to future video coding standards or video codecs.
  • Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards.
  • The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video and H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/HEVC standards.
  • The Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM).
  • The latest reference software of VVC, named VTM, can be found at:
  • HEVC-SCC: HEVC Screen Content Coding extensions
  • VTM-4.0: the current VVC test model
  • IBC extends the concept of motion compensation from inter-frame coding to intra-frame coding.
  • the current block is predicted by a reference block in the same picture when IBC is applied.
  • the samples in the reference block must have been already reconstructed before the current block is coded or decoded.
  • Although IBC is not very efficient for most camera-captured sequences, it shows significant coding gains for screen content. The reason is that there are lots of repeating patterns, such as icons and text characters, in a screen content picture.
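The basic copy operation behind IBC can be illustrated with a toy sketch (illustrative only, not codec code; `ibc_predict`, the list-of-lists picture layout, and the argument names are all hypothetical):

```python
def ibc_predict(recon, x, y, w, h, bv):
    """Form an IBC prediction for the w x h block at (x, y) by copying
    already-reconstructed samples of the same picture, displaced by the
    block vector bv = (bvx, bvy), which has integer-pixel precision."""
    bvx, bvy = bv
    rx, ry = x + bvx, y + bvy
    return [row[rx:rx + w] for row in recon[ry:ry + h]]

# Toy 8x8 picture with a repeated column pattern, as in screen content.
pic = [list(range(8)) for _ in range(8)]
pred = ibc_predict(pic, x=4, y=4, w=4, h=4, bv=(-4, 0))
```

Because screen content repeats exactly, the copied block can match the current block perfectly, which is why IBC is so effective there.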
  • an inter-coded coding unit can apply IBC if it chooses the current picture as its reference picture.
  • the MV is renamed as block vector (BV) in this case, and a BV always has an integer-pixel precision.
  • the current picture is marked as a "long-term" reference picture in the Decoded Picture Buffer (DPB).
  • the prediction can be generated by copying the reference block.
  • the residual can be obtained by subtracting the reference pixels from the original signal.
  • transform and quantization can be applied as in other coding modes.
  • the whole reference block should be within the current coding tree unit (CTU) and should not overlap with the current block. Thus, there is no need to pad the reference or prediction block.
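The two conditions above can be sketched as a simplified block-vector validity check (illustrative only; the actual VVC constraint involves the reference-sample memory and virtual pipeline areas, which this toy model ignores, and `bv_is_valid` is a hypothetical name):

```python
def bv_is_valid(x, y, w, h, bv, ctu_x, ctu_y, ctu_size):
    """Check (simplified) that the reference block pointed to by bv lies
    entirely inside the current CTU and does not overlap the current
    w x h block at (x, y)."""
    bvx, bvy = bv
    rx, ry = x + bvx, y + bvy
    inside_ctu = (ctu_x <= rx and rx + w <= ctu_x + ctu_size and
                  ctu_y <= ry and ry + h <= ctu_y + ctu_size)
    # Two rectangles overlap unless one is entirely left of / above the other.
    no_overlap = (rx + w <= x or x + w <= rx or
                  ry + h <= y or y + h <= ry)
    return inside_ctu and no_overlap
```

A zero block vector is always invalid under this rule, since the reference block would coincide with the current block.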
  • the IBC flag is coded as a prediction mode of the current CU. Thus, there are in total three prediction modes: MODE_INTRA, MODE_INTER and MODE_IBC.
  • In IBC merge mode, an index pointing to an entry in the IBC merge candidates list is parsed from the bitstream.
  • the construction of the IBC merge list can be summarized according to the following sequence of steps:
  • Step 1: Derivation of spatial candidates
  • Step 2: Insertion of HMVP candidates
  • Step 3: Insertion of pairwise average candidates
  • IBC candidates from the HMVP table may be inserted. Redundancy checks are performed when inserting the HMVP candidates.
  • pairwise average candidates are inserted into the IBC merge list.
  • the merge candidate is called an invalid merge candidate.
  • invalid merge candidates may be inserted into the IBC merge list.
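The three-step list construction above can be sketched as a toy model (the candidate sources, the redundancy check and the pairwise-average rule are all simplified relative to the actual VVC design; `build_ibc_merge_list` and its arguments are illustrative names):

```python
def build_ibc_merge_list(spatial, hmvp, max_size=6):
    """Sketch of the IBC merge list construction order described above:
    spatial candidates first, then HMVP candidates with a redundancy
    check, then a pairwise-average candidate. BVs are (bvx, bvy) tuples."""
    merge_list = []
    for bv in spatial:                       # Step 1: spatial candidates
        if bv not in merge_list and len(merge_list) < max_size:
            merge_list.append(bv)
    for bv in hmvp:                          # Step 2: HMVP candidates
        if bv not in merge_list and len(merge_list) < max_size:
            merge_list.append(bv)
    # Step 3: pairwise average of the first two candidates, if room remains.
    if 2 <= len(merge_list) < max_size:
        a, b = merge_list[0], merge_list[1]
        avg = ((a[0] + b[0]) // 2, (a[1] + b[1]) // 2)
        if avg not in merge_list:
            merge_list.append(avg)
    return merge_list
```

Note how the redundancy check (`bv not in merge_list`) drops an HMVP candidate that duplicates a spatial one, mirroring the pruning described in the text.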
  • JVET-N0843 was adopted into VVC.
  • the BV predictors for merge mode and AMVP mode in IBC share a common predictor list, which consists of the following elements:
  • In addition to the above-mentioned BV predictor candidate list, JVET-N0843 also proposed to simplify the pruning operations between HMVP candidates and the existing merge candidates (A1, B1). In the simplification, there will be up to 2 pruning operations, since only the first HMVP candidate is compared with the spatial merge candidate(s).
  • In IBC AMVP mode, an AMVP index pointing to an entry in the IBC AMVP list is parsed from the bitstream.
  • the construction of the IBC AMVP list can be summarized according to the following sequence of steps:
  • Step 1: Derivation of spatial candidates
  • Step 2: Insertion of HMVP candidates
  • Step 3: Insertion of zero candidates
  • IBC candidates from HMVP table may be inserted.
  • the motion compensation in the chroma IBC mode is performed at the sub-block level.
  • the chroma block will be partitioned into several sub-blocks. Each sub-block determines whether the corresponding luma block has a block vector and, if one is present, whether it is valid.
  • There is an encoder constraint in the current VTM whereby the chroma IBC mode will be tested only if all sub-blocks in the current chroma CU have valid luma block vectors. For example, for a YUV 4:2:0 video, if the chroma block is NxM, then the collocated luma region is 2Nx2M.
  • the sub-block size of a chroma block is 2x2. There are several steps to perform the chroma MV derivation and then the block copy process.
  • the chroma block will first be partitioned into (N >> 1) * (M >> 1) sub-blocks.
  • Each sub-block with its top-left sample located at (x, y) fetches the corresponding luma block covering the same top-left sample, which is located at (2x, 2y).
  • the encoder checks the block vector (BV) of the fetched luma block. If one of the following conditions is satisfied, the BV is considered invalid.
  • A BV of the corresponding luma block does not exist.
  • the chroma motion vector of a sub-block is set to the motion vector of the corresponding luma sub-block.
  • the IBC mode is allowed at the encoder when all sub-blocks find a valid BV.
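The derivation steps above can be sketched for a 4:2:0 chroma block (a toy model: `luma_bv` is a hypothetical lookup returning the collocated luma block's BV or None, and scaling of the BV to chroma units is omitted):

```python
def derive_chroma_bvs(luma_bv, N, M):
    """For an N x M chroma block in 4:2:0, walk the 2x2 chroma sub-blocks,
    fetch the BV of the collocated luma block at (2x, 2y), and collect the
    per-sub-block BVs. Return None if any sub-block lacks a valid luma BV,
    in which case the encoder disallows chroma IBC, as described above."""
    bvs = {}
    for y in range(0, M, 2):
        for x in range(0, N, 2):
            bv = luma_bv(2 * x, 2 * y)
            if bv is None:
                return None           # invalid: chroma IBC not allowed
            bvs[(x, y)] = bv
    return bvs
```

This mirrors the all-or-nothing encoder constraint: one missing luma BV disables chroma IBC for the whole CU.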
  • a luma location ( xCb, yCb ) specifying the top-left sample of the current coding block relative to the top-left luma sample of the current picture
  • variable cbWidth specifying the width of the current coding block in luma samples
  • variable cbHeight specifying the height of the current coding block in luma samples
  • variable treeType specifying whether a single or a dual tree is used and if a dual tree is used, it specifies whether the current tree corresponds to the luma or chroma components.
  • Output of this process is a modified reconstructed picture before in-loop filtering.
  • the derivation process for quantization parameters as specified in clause 8.7.1 is invoked with the luma location ( xCb, yCb ), the width of the current coding block in luma samples cbWidth and the height of the current coding block in luma samples cbHeight, and the variable treeType as inputs.
  • the decoding process for coding units coded in IBC prediction mode consists of the following ordered steps:
  • the motion vector components of the current coding unit are derived as follows:
  • When treeType is equal to SINGLE_TREE or DUAL_TREE_LUMA, the following applies:
  • When treeType is equal to DUAL_TREE_CHROMA:
  • numSbX = ( cbWidth >> 2 ) (8-886)
  • numSbY = ( cbHeight >> 2 ) (8-887)
  • the luma motion vector mvL[ xSbIdx ][ ySbIdx ] is derived as follows:
  • The [Ed. (BB): neighbouring blocks availability checking process tbd] is invoked with the current chroma location ( xCurr, yCurr ) set equal to ( xCb / SubWidthC, yCb / SubHeightC ) and the neighbouring chroma location ( xCb / SubWidthC + ( mvC[ xSbIdx ][ ySbIdx ][ 0 ] >> 5 ), yCb / SubHeightC + ( mvC[ xSbIdx ][ ySbIdx ][ 1 ] >> 5 ) ) as inputs, and the output shall be equal to TRUE.
  • The [Ed. (BB): neighbouring blocks availability checking process tbd] is invoked with the current chroma location ( xCurr, yCurr ) set equal to ( xCb / SubWidthC, yCb / SubHeightC ) and the neighbouring chroma location ( xCb / SubWidthC + ( mvC[ xSbIdx ][ ySbIdx ][ 0 ] >> 5 ) + cbWidth / SubWidthC - 1, yCb / SubHeightC + ( mvC[ xSbIdx ][ ySbIdx ][ 1 ] >> 5 ) + cbHeight / SubHeightC - 1 ) as inputs, and the output shall be equal to TRUE.
  • the prediction samples of the current coding unit are derived as follows:
  • the IBC prediction samples that are a (cbWidth / 2)x(cbHeight / 2) array predSamplesCb of prediction chroma samples for the chroma component Cb as outputs.
  • predSamples that are a (cbWidth / 2)x(cbHeight / 2) array predSamplesCr of prediction chroma samples for the chroma component Cr as outputs.
  • NumSbX [ xCb ][ yCb ] and NumSbY[ xCb ][ yCb ] are set equal to numSbX and numSbY, respectively.
  • the residual samples of the current coding unit are derived as follows:
  • When treeType is equal to SINGLE_TREE or treeType is equal to DUAL_TREE_LUMA, the decoding process for the residual signal of coding blocks coded in inter prediction mode as specified in clause 8.5.8 is invoked with the location ( xTb0, yTb0 ) set equal to the luma location ( xCb, yCb ), the width nTbW set equal to the luma coding block width cbWidth, the height nTbH set equal to the luma coding block height cbHeight and the variable cIdx set equal to 0 as inputs, and the array resSamplesL as output.
  • the decoding process for the residual signal of coding blocks coded in inter prediction mode as specified in clause 8.5.8 is invoked with the location ( xTb0, yTb0 ) set equal to the chroma location ( xCb / 2, yCb / 2 ), the width nTbW set equal to the chroma coding block width cbWidth / 2, the height nTbH set equal to the chroma coding block height cbHeight / 2 and the variable cIdx set equal to 1 as inputs, and the array resSamplesCb as output.
  • the decoding process for the residual signal of coding blocks coded in inter prediction mode as specified in clause 8.5.8 is invoked with the location ( xTb0, yTb0 ) set equal to the chroma location ( xCb / 2, yCb / 2 ), the width nTbW set equal to the chroma coding block width cbWidth / 2, the height nTbH set equal to the chroma coding block height cbHeight / 2 and the variable cIdx set equal to 2 as inputs, and the array resSamplesCr as output.
  • the reconstructed samples of the current coding unit are derived as follows:
  • the picture reconstruction process for a colour component as specified in clause 8.7.5 is invoked with the block location ( xB, yB ) set equal to ( xCb, yCb ), the block width bWidth set equal to cbWidth, the block height bHeight set equal to cbHeight, the variable cIdx set equal to 0, the (cbWidth)x(cbHeight) array predSamples set equal to predSamplesL and the (cbWidth)x(cbHeight) array resSamples set equal to resSamplesL as inputs, and the output is a modified reconstructed picture before in-loop filtering.
  • the picture reconstruction process for a colour component as specified in clause 8.7.5 is invoked with the block location ( xB, yB ) set equal to ( xCb / 2, yCb / 2 ), the block width bWidth set equal to cbWidth / 2, the block height bHeight set equal to cbHeight / 2, the variable cIdx set equal to 1, the (cbWidth / 2)x(cbHeight / 2) array predSamples set equal to predSamplesCb and the (cbWidth / 2)x(cbHeight / 2) array resSamples set equal to resSamplesCb as inputs, and the output is a modified reconstructed picture before in-loop filtering.
  • the picture reconstruction process for a colour component as specified in clause 8.7.5 is invoked with the block location ( xB, yB ) set equal to ( xCb / 2, yCb / 2 ), the block width bWidth set equal to cbWidth / 2, the block height bHeight set equal to cbHeight / 2, the variable cIdx set equal to 2, the (cbWidth / 2)x(cbHeight / 2) array predSamples set equal to predSamplesCr and the (cbWidth / 2)x(cbHeight / 2) array resSamples set equal to resSamplesCr as inputs, and the output is a modified reconstructed picture before in-loop filtering.
  • AMVR: CU-level adaptive motion vector resolution
  • MVDs: motion vector differences
  • the MVDs of the current CU can be adaptively selected as follows:
  • Normal AMVP mode: quarter-luma-sample, integer-luma-sample or four-luma-sample.
  • Affine AMVP mode: quarter-luma-sample, integer-luma-sample or 1/16-luma-sample.
  • the CU-level MVD resolution indication is conditionally signalled if the current CU has at least one non-zero MVD component. If all MVD components (that is, both horizontal and vertical MVDs for reference list L0 and reference list L1) are zero, quarter-luma-sample MVD resolution is inferred.
  • a first flag is signalled to indicate whether quarter-luma-sample MVD precision is used for the CU. If the first flag is 0, no further signaling is needed and quarter-luma-sample MVD precision is used for the current CU. Otherwise, a second flag is signalled to indicate whether integer-luma-sample or four-luma-sample MVD precision is used for a normal AMVP CU. The same second flag is used to indicate whether integer-luma-sample or 1/16-luma-sample MVD precision is used for an affine AMVP CU.
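The two-flag scheme above can be sketched as a small decoder-side mapping (illustrative only; the polarity of the second flag, i.e. which value selects integer-luma-sample precision, is an assumption here, and `decode_amvr_precision` is a hypothetical name):

```python
def decode_amvr_precision(has_nonzero_mvd, first_flag, second_flag, affine):
    """Map the AMVR flags described above to an MVD precision in luma
    samples. If all MVD components are zero, no flags are read and
    quarter-luma-sample precision is inferred."""
    if not has_nonzero_mvd or first_flag == 0:
        return 0.25                               # quarter-luma-sample
    if not affine:
        return 1 if second_flag == 0 else 4       # normal AMVP: 1 or 4
    return 1 if second_flag == 0 else 1 / 16      # affine AMVP: 1 or 1/16
```

The same second flag is reused for both CU types; only its interpretation changes, which keeps the signaling overhead at two flags.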
  • the motion vector predictors for the CU will be rounded to the same precision as that of the MVD before being added together with the MVD.
  • the motion vector predictors are rounded toward zero (that is, a negative motion vector predictor is rounded toward positive infinity and a positive motion vector predictor is rounded toward negative infinity).
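The toward-zero rounding described above can be sketched on a single MVP component (illustrative only; `shift` expresses the precision change in bits, and the function name is hypothetical):

```python
def round_mvp_toward_zero(mvp, shift):
    """Round a motion vector predictor component to a coarser precision
    toward zero, as described above: magnitudes are truncated, so negative
    values effectively round up and positive values round down."""
    if mvp >= 0:
        return (mvp >> shift) << shift
    return -((-mvp >> shift) << shift)
```

Rounding both signs toward zero keeps the rounding symmetric, so a predictor and its negation round to values of equal magnitude.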
  • the encoder determines the motion vector resolution for the current CU using RD check.
  • In VTM4, the RD check of MVD precisions other than quarter-luma-sample is only invoked conditionally.
  • the RD cost of quarter-luma-sample MVD precision and integer-luma sample MV precision is computed first. Then, the RD cost of integer-luma-sample MVD precision is compared to that of quarter-luma-sample MVD precision to decide whether it is necessary to further check the RD cost of four-luma-sample MVD precision.
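The conditional check order above can be sketched as follows (a simplification: the real encoder uses additional cost margins and starting-point reuse not modeled here; `rd_cost` is a hypothetical callable mapping an MVD precision in luma samples to its rate-distortion cost):

```python
def amvr_rd_schedule(rd_cost):
    """Sketch of the conditional AMVR RD check described above:
    quarter-sample and integer-sample costs are computed first, and
    four-sample precision is only evaluated when integer-sample precision
    turns out cheaper than quarter-sample."""
    checked = {0.25: rd_cost(0.25), 1: rd_cost(1)}
    if checked[1] < checked[0.25]:        # integer precision competitive
        checked[4] = rd_cost(4)
    return min(checked, key=checked.get)  # precision with lowest RD cost
```

Skipping the four-sample check when integer precision already loses to quarter precision is what saves encoder time.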
  • For affine AMVP mode, if affine inter mode is not selected after checking rate-distortion costs of affine merge/skip mode, merge/skip mode, quarter-luma-sample MVD precision normal AMVP mode and quarter-luma-sample MVD precision affine AMVP mode, then 1/16-luma-sample MV precision and 1-pel MV precision affine inter modes are not checked. Furthermore, affine parameters obtained in quarter-luma-sample MV precision affine inter mode are used as the starting search point in 1/16-luma-sample and quarter-luma-sample MV precision affine inter modes.
  • Palette mode: the basic idea behind a palette mode is that the samples in the CU are represented by a small set of representative color values. This set is referred to as the palette. It is also possible to indicate a sample that is outside the palette by signaling an escape symbol followed by (possibly quantized) component values. This is illustrated in FIG. 3.
  • a palette predictor is maintained.
  • the maximum size of the palette as well as the palette predictor is signaled in the SPS.
  • In HEVC-SCC, a palette predictor is used for coding of the palette entries.
  • A palette_predictor_initializer_present_flag is introduced in the PPS.
  • this flag is 1, entries for initializing the palette predictor are signaled in the bitstream.
  • the palette predictor is initialized at the beginning of each CTU row, each slice and each tile.
  • the palette predictor is reset to 0 or initialized using the palette predictor initializer entries signaled in the PPS.
  • a palette predictor initializer of size 0 was enabled to allow explicit disabling of the palette predictor initialization at the PPS level.
  • For each entry in the palette predictor, a reuse flag is signaled to indicate whether it is part of the current palette. This is illustrated in FIG. 4.
  • the reuse flags are sent using run-length coding of zeros. After this, the number of new palette entries is signaled using an exponential Golomb code of order 0. Finally, the component values for the new palette entries are signaled.
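Putting the pieces above together, the current palette is assembled from reused predictor entries plus new entries (a sketch; the run-length and exp-Golomb entropy coding of the bitstream is omitted, and the function name and the 63-entry cap are illustrative assumptions):

```python
def build_current_palette(predictor, reuse_flags, new_entries, max_size=63):
    """Sketch of palette construction from the predictor, as described
    above: predictor entries whose reuse flag is set come first, followed
    by the explicitly signaled new entries. Entries are (Y, Cb, Cr)-style
    component tuples."""
    palette = [e for e, reused in zip(predictor, reuse_flags) if reused]
    palette.extend(new_entries)
    return palette[:max_size]
```

Only the new entries cost full component values in the bitstream; reused entries cost a single flag each, which is the point of maintaining a predictor.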
  • palette indices are coded using horizontal and vertical traverse scans as shown in FIG. 5.
  • the scan order is explicitly signaled in the bitstream using the palette transpose flag. For the rest of the subsection it is assumed that the scan is horizontal.
  • the palette indices are coded using two main palette sample modes: 'INDEX' and 'COPY_ABOVE'.
  • the escape symbol is also signaled as an 'INDEX' mode and assigned an index equal to the maximum palette size.
  • the mode is signaled using a flag except for the top row or when the previous mode was 'COPY_ABOVE'.
  • This syntax order is accomplished as follows. First the number of index values for the CU is signaled. This is followed by signaling of the actual index values for the entire CU using truncated binary coding. Both the number of indices as well as the index values are coded in bypass mode. This groups the index-related bypass bins together. Then the palette sample mode (if necessary) and run are signaled in an interleaved manner. Finally, the component escape values corresponding to the escape samples for the entire CU are grouped together and coded in bypass mode.
  • An additional syntax element, last_run_type_flag, is signaled after signaling the index values. This syntax element, in conjunction with the number of indices, eliminates the need to signal the run value corresponding to the last run in the block.
  • each palette entry consists of 3 components.
  • the chroma samples are associated with luma sample indices that are divisible by 2. After reconstructing the palette indices for the CU, if a sample has only a single component associated with it, only the first component of the palette entry is used. The only difference in signaling is for the escape component values. For each escape sample, the number of escape component values signaled may be different depending on the number of components associated with that sample.
  • In JVET-M0464 and JVET-N0280, several modifications are proposed to the coefficient coding in transform skip (TS) mode in order to adapt the residual coding to the statistics and signal characteristics of the transform skip levels.
  • Subblock CBFs: the absence of the last significant scanning position signaling requires the subblock CBF signaling with coded_sub_block_flag for TS to be modified as follows:
  • the coded_sub_block_flag for the subblock covering the DC frequency position presents a special case.
  • the coded_sub_block_flag for this subblock is never signaled and is always inferred to be equal to 1.
  • the DC subblock may contain only zero/non-significant levels although the coded_sub_block_flag for this subblock is inferred to be equal to 1.
  • the coded_sub_block_flag for each subblock is signaled. This also includes the coded_sub_block_flag for the DC subblock, except when all other coded_sub_block_flag syntax elements are already equal to 0.
  • the context modeling for coded_sub_block_flag is changed.
  • the context model index is calculated as the sum of the coded_sub_block_flag of the subblock to the left and the coded_sub_block_flag of the subblock above the current subblock, instead of a logical disjunction of the two.
  • sig_coeff_flag context modelling: the local template in sig_coeff_flag context modeling is modified to only include the neighbor to the left (NB0) and the neighbor above (NB1) of the current scanning position.
  • the context model offset is just the number of significant neighboring positions: sig_coeff_flag[NB0] + sig_coeff_flag[NB1].
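The changed context derivation (a sum over the two neighbors instead of a logical OR) can be illustrated minimally; the function names here are illustrative, not syntax element names:

```python
def sig_ctx_sum(nb_left, nb_above):
    """Modified TS context offset: count of significant neighbours,
    giving three distinct contexts (0, 1 or 2)."""
    return nb_left + nb_above

def sbf_ctx_or(nb_left, nb_above):
    """Previous behaviour for comparison: a logical disjunction can
    only distinguish two contexts (0 or 1)."""
    return int(bool(nb_left) or bool(nb_above))
```

The sum preserves more neighborhood information, which is why it replaces the disjunction for transform-skip residuals.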
  • abs_level_gt1_flag and par_level_flag context modelling: a single context model is employed for abs_level_gt1_flag and par_level_flag.
  • abs_remainder coding: Although the empirical distribution of the transform skip residual absolute levels typically still fits a Laplacian or a geometric distribution, there exist larger instationarities than for transform coefficient absolute levels. Particularly, the variance within a window of consecutive realizations is higher for the residual absolute levels. This motivates the following modifications of the abs_remainder syntax binarization and context modelling: [00131] o Using a higher cutoff value in the binarization, i.e., the transition point from the coding with sig_coeff_flag, abs_level_gt1_flag, par_level_flag, and abs_level_gt3_flag to the Rice codes for abs_remainder, and dedicated context models for each bin position yields higher compression efficiency.
  • the template for the Rice parameter derivation is modified, i.e., only the neighbor to the left and the neighbor above the current scanning position are considered, similar to the local template for sig_coeff_flag context modeling.
  • coeff_sign_flag context modelling: Due to the instationarities inside the sequence of signs and the fact that the prediction residual is often biased, the signs can be coded using context models, even when the global empirical distribution is almost uniformly distributed. A single dedicated context model is used for the coding of the signs, and the sign is parsed after sig_coeff_flag to keep all context-coded bins together.
  • In JVET-M0413, a quantized residual block differential pulse-code modulation (QR-BDPCM) is proposed to code screen content efficiently.
  • the prediction directions used in QR-BDPCM can be vertical and horizontal prediction modes.
  • the intra prediction is done on the entire block by sample copying in prediction direction (horizontal or vertical prediction) similar to intra prediction.
  • the residual is quantized, and the delta between the quantized residual and its predictor's (horizontal or vertical) quantized value is coded. This can be described by the following: For a block of size M (rows) × N (cols), let r(i, j), 0 ≤ i ≤ M − 1, 0 ≤ j ≤ N − 1, be the prediction residual after performing intra prediction horizontally (copying the left neighbor pixel value across the predicted block line by line) or vertically (copying the top neighbor line to each line in the predicted block) using unfiltered samples from the above or left block boundary samples.
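The delta coding described above can be sketched as follows (a minimal illustration of the idea, with the quantized residuals Q(r(i, j)) given as input; the function name is an assumption, not a name from the text):

```python
def qr_bdpcm_deltas(qres, vertical=True):
    """Deltas actually coded by QR-BDPCM: each quantized residual minus
    the quantized residual directly above it (vertical mode) or to its
    left (horizontal mode); the first row/column is coded as-is."""
    M, N = len(qres), len(qres[0])
    out = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            if vertical:
                pred = qres[i - 1][j] if i > 0 else 0
            else:
                pred = qres[i][j - 1] if j > 0 else 0
            out[i][j] = qres[i][j] - pred
    return out
```

The decoder inverts this by a cumulative sum along the prediction direction before inverse quantization.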
  • bdpcm_flag[ x0 ][ y0 ] equal to 1 specifies that a bdpcm_dir_flag is present in the coding unit including the luma coding block at the location ( x0, y0 )
  • a CTU is split into CUs by using a quaternary-tree structure denoted as coding tree to adapt to various local characteristics.
  • the decision whether to code a picture area using inter-picture (temporal) or intra-picture (spatial) prediction is made at the leaf CU level.
  • Each leaf CU can be further split into one, two or four PUs according to the PU splitting type. Inside one PU, the same prediction process is applied and the relevant information is transmitted to the decoder on a PU basis.
  • a leaf CU After obtaining the residual block by applying the prediction process based on the PU splitting type, a leaf CU can be partitioned into transform units (TUs) according to another quaternary-tree structure similar to the coding tree for the CU.
  • One of the key features of the HEVC structure is that it has multiple partition concepts, including CU, PU, and TU.
  • a quadtree with nested multi-type tree (using binary and ternary splits) segmentation structure replaces the concepts of multiple partition unit types, i.e., it removes the separation of the CU, PU and TU concepts except as needed for CUs that have a size too large for the maximum transform length, and supports more flexibility for CU partition shapes.
  • a CU can have either a square or rectangular shape.
  • a coding tree unit (CTU) is first partitioned by a quaternary tree (a.k.a. quadtree) structure. Then the quaternary tree leaf nodes can be further partitioned by a multi-type tree structure. As shown in FIG.
  • the multi-type tree leaf nodes are called coding units (CUs), and unless the CU is too large for the maximum transform length, this segmentation is used for prediction and transform processing without any further partitioning.
  • luma and chroma components have separate partition structures on I tiles.
  • JVET-K0353 and JVET-K0354 propose to signal a flag to determine whether to use the separate partition structures at CTU/CU level.
  • CCLM: cross-component linear model
  • predC(i, j) represents the predicted chroma samples in a CU and recL(i, j) represents the downsampled reconstructed luma samples of the same CU; the prediction is formed as predC(i, j) = a · recL(i, j) + b.
  • Linear model parameters a and b are derived from the relation between luma and chroma values of two samples: the luma sample with the minimum value and the luma sample with the maximum value inside the set of downsampled neighboring luma samples, and their corresponding chroma samples.
  • the linear model parameters a and b are obtained according to the following equations.
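The equations themselves are elided in this text; for illustration, the two-point derivation described above can be sketched in floating point (the specification uses integer arithmetic with a lookup table in place of the division, so this is a conceptual sketch only):

```python
def cclm_params(luma, chroma):
    """Fit a, b of predC = a * recL + b through the (luma, chroma)
    pairs at the minimum and maximum neighbouring luma values."""
    i_min = min(range(len(luma)), key=lambda i: luma[i])
    i_max = max(range(len(luma)), key=lambda i: luma[i])
    if luma[i_max] == luma[i_min]:
        return 0.0, float(chroma[i_min])   # flat luma: constant model
    a = (chroma[i_max] - chroma[i_min]) / (luma[i_max] - luma[i_min])
    b = chroma[i_min] - a * luma[i_min]
    return a, b
```

Once a and b are known, every chroma sample of the CU is predicted from its collocated downsampled luma sample.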
  • FIG. 8 shows an example of the location of the left and above samples and the sample of the current block involved in the CCLM mode.
  • In VTM4, the number of directional intra modes is extended from 33, as used in HEVC, to 65.
  • the new directional modes not in HEVC are depicted as red dotted arrows in FIG. 9, and the planar and DC modes remain the same.
  • These denser directional intra prediction modes apply for all block sizes and for both luma and chroma intra predictions.
  • a unified 6-MPM list is proposed for intra blocks irrespective of whether MRL and ISP coding tools are applied or not.
  • the MPM list is constructed based on the intra modes of the left and above neighboring blocks as in VTM4.0, as shown in FIG. 10.
  • MPM list: {Planar, Left, Left − 1, Left + 1, DC, Left − 2}
  • MPM list: {Planar, DC, V, H, V − 4, V + 4}
  • the first MPM candidate, i.e., the Planar mode, is signaled separately from the remaining MPM candidates.
  • intra_luma_mpm_flag[ x0 ][ y0 ], intra_luma_not_planar_flag[ x0 ][ y0 ], intra_luma_mpm_idx[ x0 ][ y0 ] and intra_luma_mpm_remainder[ x0 ][ y0 ] specify the intra prediction mode for luma samples.
  • the array indices x0, y0 specify the location ( x0, y0 ) of the top-left luma sample of the considered coding block relative to the top-left luma sample of the picture.
  • when intra_luma_mpm_flag[ x0 ][ y0 ] is equal to 1, the intra prediction mode is inferred from a neighbouring intra-predicted coding unit according to clause 8.4.2.
  • when intra_luma_mpm_flag[ x0 ][ y0 ] is not present (e.g., ISP enabled, or MRL enabled with reference index > 0), it is inferred to be equal to 1.
  • when intra_luma_not_planar_flag[ x0 ][ y0 ] is not present (e.g., MRL is enabled), it is inferred to be equal to 1.
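The parse order implied by these syntax elements can be sketched as follows (a hedged sketch, not the normative decoding process; the reader callbacks stand in for the actual CABAC/bypass parsing):

```python
def parse_luma_intra_mode(read_bit, read_mpm_idx, read_remainder, mpm_list):
    """intra_luma_mpm_flag first; if set, the dedicated
    intra_luma_not_planar_flag, then an index into the five remaining
    MPM candidates; otherwise the remainder mode."""
    if read_bit():                          # intra_luma_mpm_flag
        if not read_bit():                  # intra_luma_not_planar_flag == 0
            return 'PLANAR'
        return mpm_list[1 + read_mpm_idx()]  # skip the Planar entry
    return read_remainder()
```

This matches the Planar-first design: Planar costs only two flags, and the remaining five MPM candidates share the mpm_idx codeword space.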
  • variable cbWidth specifying the width of the current coding block in luma samples
  • variable cbHeight specifying the height of the current coding block in luma samples
  • the luma intra prediction mode IntraPredModeY[ xCb ][ yCb ] is derived.
  • Table 8-1 specifies the value for the intra prediction mode IntraPredModeY[ xCb ][ yCb ] and the associated names.
  • IntraPredModeY[ xCb ][ yCb ] is derived as follows:
  • the neighbouring locations ( xNbA, yNbA ) and ( xNbB, yNbB ) are set equal to ( xCb - 1, yCb + cbHeight - 1 ) and ( xCb + cbWidth - 1, yCb - 1 ), respectively.
  • the candidate intra prediction mode candIntraPredModeX is derived as follows:
  • candIntraPredModeX is set equal to INTRA_PLANAR when one or more of the following conditions are true:
  • the variable availableX is equal to FALSE.
  • X is equal to B and yCb − 1 is less than ( ( yCb >> CtbLog2SizeY ) << CtbLog2SizeY ).
  • Otherwise, candIntraPredModeX is set equal to IntraPredModeY[ xNbX ][ yNbX ].
  • when candIntraPredModeB is not equal to candIntraPredModeA, and candIntraPredModeA or candIntraPredModeB is greater than INTRA_DC, the following applies:
  • minAB = Min( candIntraPredModeA, candIntraPredModeB ) (8-24)
  • maxAB = Max( candIntraPredModeA, candIntraPredModeB ) (8-25)
  • IntraPredModeY[ xCb ][ yCb ] is derived by applying the following procedure:
  • IntraPredModeY[ xCb ][ yCb ] is set equal to candModeList[ intra_luma_mpm_idx[ xCb ][ yCb ] ].
  • IntraPredModeY[ xCb ][ yCb ] is derived by applying the following ordered steps:
  • i. IntraPredModeY[ xCb ][ yCb ] is set equal to intra_luma_mpm_remainder[ xCb ][ yCb ]. ii. The value of IntraPredModeY[ xCb ][ yCb ] is incremented by one. iii. For i = 0..4, inclusive, when IntraPredModeY[ xCb ][ yCb ] is greater than or equal to candModeList[ i ], the value of IntraPredModeY[ xCb ][ yCb ] is incremented by one.
  • IntraPredModeY[ xCb ][ yCb ] is set equal to INTRA_PLANAR.
  • the variable IntraPredModeY[ x ][ y ] with x = xCb..xCb + cbWidth − 1 and y = yCb..yCb + cbHeight − 1 is set to be equal to IntraPredModeY[ xCb ][ yCb ].
  • Multiple reference line (MRL) intra prediction uses more reference lines for intra prediction.
  • In FIG. 11, an example of 4 reference lines is depicted, where the samples of Segments A and F are not fetched from reconstructed neighboring samples but padded with the closest samples from Segments B and E, respectively.
  • HEVC intra-picture prediction uses the nearest reference line (i.e., reference line 0).
  • In MRL, 2 additional lines (reference line 1 and reference line 3) are used.
  • the index of the selected reference line (mrl_idx) is signalled and used to generate the intra predictor. For a reference line index greater than 0, only the additional reference line modes are included in the MPM list, and only the MPM index is signalled, without the remaining modes.
  • the reference line index is signalled before intra prediction modes, and Planar and DC modes are excluded from intra prediction modes in case a nonzero reference line index is signalled.
  • MRL is disabled for the first line of blocks inside a CTU to prevent using extended reference samples outside the current CTU line. Also, PDPC is disabled when an additional line is used.
  • the Intra Sub-Partitions (ISP) tool divides luma intra-predicted blocks vertically or horizontally into 2 or 4 sub-partitions depending on the block size. For example, the minimum block size for ISP is 4x8 (or 8x4). If the block size is greater than 4x8 (or 8x4), then the corresponding block is divided into 4 sub-partitions.
  • FIGS. 12A and 12B show examples of the two possibilities. All sub-partitions fulfill the condition of having at least 16 samples.
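The split count described above can be summarized in a small helper (an illustrative sketch; the function name is an assumption):

```python
def isp_num_subpartitions(w, h):
    """Number of ISP sub-partitions for a w x h luma block: the
    minimum 4x8/8x4 size splits into 2, anything larger splits into 4,
    so every sub-partition keeps at least 16 samples."""
    if w * h == 32:       # 4x8 or 8x4, the minimum ISP block size
        return 2
    return 4
```

Note that 4x4 blocks cannot use ISP at all, which is what keeps the 16-sample floor intact.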
  • for each sub-partition, reconstructed samples are obtained by adding the residual signal to the prediction signal.
  • a residual signal is generated by processes such as entropy decoding, inverse quantization and inverse transform. Therefore, the reconstructed sample values of each sub-partition are available to generate the prediction of the next sub-partition, and the sub-partitions are processed sequentially.
  • the first sub-partition to be processed is the one containing the top-left sample of the CU and then continuing downwards (horizontal split) or rightwards (vertical split).
  • reference samples used to generate the sub-partition prediction signals are only located at the left and above sides of the lines. All sub-partitions share the same intra mode.
  • Entropy coding coefficient group size: the sizes of the entropy coding sub-blocks have been modified so that they have 16 samples in all possible cases, as shown in Table 1. Note that the new sizes only affect blocks produced by ISP in which one of the dimensions is less than 4 samples. In all other cases coefficient groups keep the 4 x 4 dimensions.
  • the CBF of the n-th sub-partition is inferred to be 1.
  • the MPM flag will be inferred to be one in a block coded by ISP mode, and the MPM list is modified to exclude the DC mode and to prioritize horizontal intra modes for the ISP horizontal split and vertical intra modes for the vertical one.
  • [00194] MTS flag: if a CU uses the ISP coding mode, the MTS CU flag will be set to 0 and will not be sent to the decoder. Therefore, the encoder will not perform RD tests for the different available transforms for each resulting sub-partition.
  • the transform choice for the ISP mode will instead be fixed and selected according to the intra mode, the processing order and the block size utilized. Hence, no signalling is required. For example, let tH and tV be the horizontal and vertical transforms selected for the w × h sub-partition, where w is the width and h is the height. Then the transform is selected according to the following rules:
  • In VTM4, a simplified 6-bit 4-tap Gaussian interpolation filter is used only for directional intra modes.
  • Non-directional intra prediction process is unmodified.
  • the selection of the 4-tap filters is performed according to the MDIS condition for directional intra prediction modes that provide non-fractional displacements, i.e. to all the directional modes excluding the following: 2,
  • the directional intra-prediction mode is classified into one of the following groups:
  • PDPC: position dependent intra prediction combination
  • PDPC is an intra prediction method which invokes a combination of the un-filtered boundary reference samples and HEVC style intra prediction with filtered boundary reference samples.
  • PDPC is applied to the following intra modes without signaling: planar, DC, horizontal, vertical, bottom-left angular mode and its eight adjacent angular modes, and top-right angular mode and its eight adjacent angular modes.
  • R(x, −1) and R(−1, y) represent the reference samples located at the top and left of the current sample (x, y), respectively, and R(−1, −1) represents the reference sample located at the top-left corner of the current block.
  • FIGS. 13A-13D illustrate the definition of the reference samples (R(x, −1), R(−1, y) and R(−1, −1)) for PDPC applied over various prediction modes.
  • the prediction sample pred (x’, y’) is located at (x’, y’) within the prediction block.
  • the reference samples R(x, −1) and R(−1, y) could be located at fractional sample positions. In this case, the sample value of the nearest integer sample location is used.
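The PDPC combination of the intra prediction with the unfiltered boundary reference samples can be sketched per sample as follows; the weights wL, wT, wTL are taken as inputs here because their mode-dependent derivation is not reproduced in this text:

```python
def pdpc_sample(pred, r_left, r_top, r_topleft, wL, wT, wTL):
    """Weighted combination of the HEVC-style intra prediction `pred`
    with the left, top and top-left reference samples; weights are
    6-bit fixed point, so the result is rounded and shifted by 6."""
    return (wL * r_left + wT * r_top - wTL * r_topleft
            + (64 - wL - wT + wTL) * pred + 32) >> 6
```

With all weights zero the original prediction passes through unchanged; near the block boundary the weights pull the prediction toward the unfiltered reference samples.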
  • the deblocking filtering process is mostly the same as in HEVC.
  • a bilinear filter (stronger deblocking filter) is applied when samples at either side of a boundary belong to a large block.
  • a sample is defined as belonging to a large block when the width is larger than or equal to 32 for a vertical edge, or when the height is larger than or equal to 32 for a horizontal edge.
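The large-block test above reduces to checking the block dimension perpendicular to the edge; a minimal sketch:

```python
def is_large_block(width, height, vertical_edge):
    """'Large block' test for the stronger deblocking filter: the
    dimension crossing the edge must be at least 32 samples."""
    return width >= 32 if vertical_edge else height >= 32
```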
  • the tcPDi and tcPDj terms are position-dependent clipping parameters, and gj, fi and Middles,t are filter parameters defined per filter length.
  • condition 1 is the“large block condition”. This condition detects whether the samples at the P-side and Q-side belong to large blocks.
  • the condition 2 and condition 3 are determined by:
  • In VTM4, a triangle partition mode is supported for inter prediction.
  • the triangle partition mode is only applied to CUs that are 8x8 or larger and are coded in skip or merge mode but not in MMVD or CIIP mode.
  • a CU-level flag is signalled to indicate whether the triangle partition mode is applied or not.
  • a CU is split evenly into two triangle-shaped partitions, using either the diagonal split or the anti-diagonal split (FIG. 14).
  • Each triangle partition in the CU is inter-predicted using its own motion; only uni-prediction is allowed for each partition, that is, each partition has one motion vector and one reference index.
  • the uni-prediction motion constraint is applied to ensure that, the same as conventional bi-prediction, only two motion-compensated predictions are needed for each CU.
  • the uni-prediction motion for each partition is derived from a uni-prediction candidate list constructed using the process in 3.4.10.1.
  • the CU-level flag indicates that the current CU is coded using the triangle partition mode. If triangle partition mode is used, then a flag indicating the direction of the triangle partition (diagonal or anti-diagonal), and two merge indices (one for each partition) are further signalled. After predicting each of the triangle partitions, the sample values along the diagonal or anti-diagonal edge are adjusted using a blending processing with adaptive weights. This is the prediction signal for the whole CU, and transform and quantization process will be applied to the whole CU as in other prediction modes. Finally, the motion field of a CU predicted using the triangle partition mode is stored in 4x4 units as in 2.16.3.
  • the uni-prediction candidate list consists of five uni-prediction motion vector candidates. It is derived from seven neighboring blocks, including five spatial neighboring blocks (labelled 1 to 5 in FIG. 15) and two temporal co-located blocks (labelled 6 to 7 in FIG. 15).
  • the motion vectors of the seven neighboring blocks are collected and put into the uni-prediction candidate list in the following order: first, the motion vectors of the uni-predicted neighboring blocks; then, for the bi-predicted neighboring blocks, the L0 motion vectors (that is, the L0 motion vector part of the bi-prediction MV), the L1 motion vectors (that is, the L1 motion vector part of the bi-prediction MV), and the averaged motion vectors of the L0 and L1 motion vectors of the bi-prediction MVs. If the number of candidates is less than five, zero motion vectors are added to the end of the list.
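The ordering above can be sketched as follows (a hedged illustration of the list-building order only; the data layout and function name are assumptions, and pruning/rounding details are omitted):

```python
def tpm_uni_candidates(neigh_mvs, max_cands=5):
    """neigh_mvs: per neighbouring block, a dict with optional 'L0'
    and/or 'L1' (x, y) motion vectors. Builds the uni-prediction list:
    uni-predicted MVs first, then L0 parts, L1 parts, then L0/L1
    averages of bi-predicted neighbours; padded with zero MVs."""
    uni, l0, l1, avg = [], [], [], []
    for nb in neigh_mvs:
        if ('L0' in nb) != ('L1' in nb):          # uni-predicted block
            uni.append(nb.get('L0', nb.get('L1')))
        elif 'L0' in nb and 'L1' in nb:           # bi-predicted block
            l0.append(nb['L0'])
            l1.append(nb['L1'])
            avg.append(((nb['L0'][0] + nb['L1'][0]) // 2,
                        (nb['L0'][1] + nb['L1'][1]) // 2))
    cands = (uni + l0 + l1 + avg)[:max_cands]
    while len(cands) < max_cands:
        cands.append((0, 0))                      # zero-MV padding
    return cands
```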
  • the motion vectors of a CU coded in triangle partition mode are stored in 4x4 units. Depending on the position of each 4x4 unit, either uni-prediction or bi-prediction motion vectors are stored. Denote Mv1 and Mv2 as the uni-prediction motion vectors for partition 1 and partition 2, respectively. If a 4x4 unit is located in the non-weighted area shown in the example of FIG. 16, either Mv1 or Mv2 is stored for that 4x4 unit. Otherwise, if the 4x4 unit is located in the weighted area, a bi-prediction motion vector is stored. The bi-prediction motion vector is derived from Mv1 and Mv2 according to the following process:
  • Mv2 (or Mv1) is converted to an L1 motion vector using that reference picture in L1. Then the two motion vectors are combined to form the bi-prediction motion vector;
  • Some coding tools (e.g., IBC, intra prediction, deblocking) are designed without considering the features of screen content, which may result in several problems as follows: [00244] (1) The chroma IBC is performed at sub-block level and the chroma IBC flag is always signaled. The signaling of the chroma IBC flag may have redundancy when not all sub-blocks have valid block vectors.
  • (2) Intra prediction may be less efficient for screen content coding due to the filtering process.
  • (3) PDPC may be less efficient for screen content.
  • (4) RDPCM may be less efficient due to the dual tree structure.
  • (5) The IBC mode may prefer transform skip mode because both are designed for screen content coding.
  • (6) Blending in the current triangular prediction mode (TPM) design may be inefficient for screen content.
  • Whether to signal the indication of IBC mode for a chroma block may be based on whether IBC is enabled for one or multiple selected luma blocks.
  • signaling of the indication of IBC mode for a chroma block may be skipped when one or multiple of the selected luma blocks are not coded with IBC mode, e.g., when none of them are coded with IBC mode.
  • the indication of IBC mode for a chroma block may be signaled when at least one of the selected luma blocks is coded with IBC mode. i. Alternatively, the indication of IBC mode for a chroma block may be signaled when all of the selected luma blocks are coded with IBC mode. d. In one example, the size of the selected luma block may be the smallest CU/PU/TU size or the unit for motion/mode storage (such as 4x4).
  • a selected luma block may be the CU/PU/TU covering the center, top-left, top-right, bottom-left or bottom-right position of the corresponding luma region.
  • An example of a corresponding luma region is shown in FIG. 17.
  • the top-left coordinate of the current chroma block is (x0, y0), and the width and height of the current chroma block are w0 and h0, respectively.
  • the coordinate of top-left sample in the corresponding luma region, width and height of the corresponding luma region may be scaled according to the color format.
  • for 4:2:0, the top-left coordinate of the collocated luma region is (2*x0, 2*y0), and its width and height are 2*w0 and 2*h0, respectively.
  • for 4:4:4, the top-left coordinate of the collocated luma region is (x0, y0), and its width and height are w0 and h0, respectively.
  • coordinate of the center position may be:
  • coordinate of the top-right position may be:
  • coordinate of the bottom-left position may be:
  • coordinate of the bottom-right position may be:
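The concrete coordinate expressions are elided in the position bullets above. A hypothetical helper illustrating one plausible reading (the function name, the exact offsets and the scale handling are assumptions, with scale = 2 for 4:2:0 and scale = 1 for 4:4:4 as described earlier) might look like:

```python
def luma_region_positions(x0, y0, w0, h0, scale=2):
    """Candidate positions inside the collocated luma region of a
    chroma block at (x0, y0) with size w0 x h0. All coordinates are
    in luma samples after scaling by the chroma format factor."""
    x, y, w, h = scale * x0, scale * y0, scale * w0, scale * h0
    return {
        'top_left':     (x, y),
        'center':       (x + w // 2, y + h // 2),
        'top_right':    (x + w - 1, y),
        'bottom_left':  (x, y + h - 1),
        'bottom_right': (x + w - 1, y + h - 1),
    }
```

A selected luma block would then be the CU/PU/TU covering one of these positions.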
  • Whether to signal the indication of IBC mode for a chroma block may be based on whether IBC is enabled for one or multiple selected luma blocks and chroma neighboring (adjacent or/and non-adjacent) blocks.
  • when the luma and chroma components have separate partition trees (e.g., dual tree is enabled), the above method may be enabled.
  • signaling of the indication of IBC mode for a chroma block may be skipped when one or multiple of the selected luma blocks and chroma neighboring blocks are not coded with IBC mode, e.g., when none of them are coded with IBC mode. i. Alternatively, furthermore, when it is not signaled, usage of IBC mode of the chroma block may be inferred to be false.
  • the indication of IBC mode for a chroma block may be signaled when at least one of the selected luma blocks and chroma neighboring blocks is coded with IBC mode. i. Alternatively, the indication of IBC mode for a chroma block may be signaled when all of the selected luma blocks and chroma neighboring blocks are coded with IBC mode.
  • two chroma neighboring blocks may be utilized, such as the left and above in FIG. 10.
  • a default intra mode may be used to generate the prediction when IBC mode is inferred to be false or bv is invalid.
  • the intra mode indicated by a certain MPM intra mode may be used when IBC mode is inferred to be false or bv is invalid.
  • the 1 st mode in MPM is used.
  • the 1 st available mode in MPM is used.
  • a predefined intra prediction mode may be used when IBC mode is inferred to be false or bv is invalid.
  • PLANAR mode may be used.
  • DC mode may be used.
  • RDPCM mode may be used.
  • DM mode may be used for chroma block.
  • LM mode may be used for chroma block.
  • an intra mode to be used may be signaled in the DPS/SPS/VPS/PPS/APS/picture header/slice header/tile group header/largest coding unit (LCU)/coding unit (CU)/LCU row/group of LCUs.
  • the above method may be applied to chroma blocks, and the block vector of a chroma sub-block may be derived from the corresponding luma block or derived from the bitstream.
  • a bv of the corresponding luma block may be treated as invalid.
  • the bv may be treated as invalid.
  • a prediction block identified by a bv may be treated as invalid.
  • a prediction block identified by a bv may be treated as invalid.
  • a prediction block of an IBC-coded block may be filled with a default value. Whether to use the default value or a reference block pointed to by a block vector for an IBC-coded chroma sub-block may depend on the availability of the sub-block.
  • a sub-block may be regarded as“unavailable” when it does not have a valid block vector or motion vector.
  • a sub-block may be unavailable if its corresponding luma block is not coded with IBC mode.
  • a sub-block may be unavailable if its corresponding luma block is coded with IBC mode but its motion vector or block vector is not valid for the current chroma sub block.
  • the chroma IBC may use a default value to fill up unavailable sub-blocks only during the block copy process. The available sub-blocks follow the original block copy process.
  • alternatively, the chroma IBC may use a default value to fill up all sub-blocks.
  • an unavailable sub-block denotes one that does not have a valid block vector or motion vector. 1) Alternatively, furthermore, in one example, a sub-block may be unavailable if its corresponding luma block is not coded with IBC mode.
  • a sub block may be unavailable if its corresponding luma block is coded with IBC mode but its motion vector or block vector is not valid for the current sub block.
  • a default value may be an integer sample value (e.g., 128 or 512) and it may be based on the bit depth.
  • the default value may be signaled in a video unit, such as DPS/VPS/SPS/PPS/APS/slice header/tile group header/CTU/CU.
  • the motion/block vector of a block/sub-block/sample in a chroma IBC coded block may be derived based on the motion/block vectors of neighboring chroma blocks. a.
  • the above method may be applied.
  • the motion/block vector of a block/sub-block/sample in a chroma IBC coded block may be derived based on the left neighboring blocks. i.
  • the block vector may be copied from the block vector of the left neighboring block(s)
  • the motion/block vector of a block/sub-block/sample in a chroma IBC coded block may be derived based on the above neighboring blocks.
  • the block vector may be copied from the block vector of the above neighboring block(s) d.
  • multiple neighbouring chroma blocks may be checked in order to find a block vector which is valid for the current chroma block.
  • the left neighboring blocks and above neighboring block may be the blocks marked as L and A, as shown in FIG. 18.
  • a neighboring block may be a basic block (such as 4x4 block) covering the position:
  • i. (x − 1, y + i), where i is an integer number and ranges from 0 to 2*H; ii. (x + i, y − 1), where i is an integer number and ranges from 0 to 2*W.
  • Indication of the video content (e.g., screen or camera-captured) may be
  • Whether to apply and/or how to apply intra prediction process may be based on video contents.
  • the intra prediction may use the nearest integer sample of a sub-sample in the reference sample row or column as the prediction, replacing the interpolation process, based on the video content (e.g., screen content or natural content).
  • the PDPC may be disabled based on the video content (e.g., screen content or natural content).
  • the intra reference smoothing may be disabled based on the video content (e.g., screen content or natural content).
  • matrix-based intra prediction may be disabled for screen content.
  • Whether to apply and/or how to apply inter predictions process may be based on video contents.
  • the AMVR may be disabled based on the video content (e.g., screen content or natural content).
  • fractional motion vectors may be disallowed based on the video content (e.g., screen content or natural content).
  • BDOF may be disabled for screen content
  • DMVR may be disabled for screen content
  • affine motion compensation may be disabled for screen content
  • the blending process (e.g., the weighted predictions derived from motion information of two partitions for the weighted area as defined in section 2.16.2) in the TPM may be disabled for screen contents.
  • samples in the weighted area are treated in the same way as one of the two partitions, i.e., with uni-prediction according to the motion information of one of the two partitions.
  • Let A and B be the motion-compensated blocks obtained from the motion information of partition 0 and partition 1, respectively.
  • the partition 0 and partition 1 are shown in FIGS. 19A and 19B under diagonal and anti-diagonal split directions, respectively.
  • the weights of A and B may be 1 and 0, respectively.
  • this method may be applied.
  • the weights of A and B may be 0 and 1, respectively.
  • this method may be applied.
  • each sample may select its weights for A and B instead of being restricted to be within a partition.
  • the A and B weights for that position may be ⁇ 1, 0 ⁇ or ⁇ 0, 1 ⁇ .
  • weighted area and/or non-weighted area may be fixed, such as using sub-bullet a.
  • splitting direction such as according to sub-bullet b.
  • the decoded information may be the decoded merge indices.
  • weights for B may be set to M.
  • the decoded information may be the reference picture/motion information.
  • “MV0 is smaller than MV1” may be defined as abs(MV0.x) + abs(MV0.y) < abs(MV1.x) + abs(MV1.y), where x and y are the horizontal and vertical components of a MV and abs is an operation to get the absolute value of an input.
  • “MV0 is smaller than MV1” may be defined as max(abs(MV0.x), abs(MV0.y)) < max(abs(MV1.x), abs(MV1.y)), where x and y are the horizontal and vertical components of a MV, max is an operation to get the larger of two inputs and abs is an operation to get the absolute value of an input.
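The two comparison rules can be written directly (function names are illustrative); note that they can disagree for the same pair of motion vectors:

```python
def mv_smaller_l1(mv0, mv1):
    """'MV0 smaller than MV1' by the sum-of-absolute-components rule."""
    return abs(mv0[0]) + abs(mv0[1]) < abs(mv1[0]) + abs(mv1[1])

def mv_smaller_linf(mv0, mv1):
    """Alternative rule: compare the larger absolute component."""
    return max(abs(mv0[0]), abs(mv0[1])) < max(abs(mv1[0]), abs(mv1[1]))
```

For example, (2, 2) versus (3, 0): the sum rule says (2, 2) is not smaller (4 vs 3), while the max rule says it is (2 vs 3).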
  • the decoded information may be where the merge candidates are derived from (e.g., from which spatial neighboring block or from temporal or HMVP).
  • equal weights may be applied to samples in the weighted area.
  • the weighted area may only include the samples located at the diagonal/anti-diagonal lines of a block. a. Alternatively, the weighted area may be the whole block.
  • the above methods may be performed at sub block level.
  • the above methods may be applied on the sub blocks with certain positions.
  • the above methods may be applied on the samples in the sub-blocks containing the samples located on the diagonal or anti-diagonal lines (as the weighted area, marked as grey regions in FIGS. 23A-23C)
  • the number of weighting lines in the blending process in the TPM may be reduced for screen contents.
  • the number of weighting lines for luma and chroma blocks may be N and M diagonal or anti-diagonal lines, respectively, where at least one of the two conditions is true: N is smaller than 7, M is smaller than 3.
  • the M may remain 3 and the N may be 3 as well.
  • the current blending process on chroma blocks may be applied on luma blocks as well.
  • M and N may be based on: i. the video content (e.g., screen content or natural content); ii. a message signaled in the DPS/SPS/VPS/PPS/APS/picture header/slice header/tile group header/largest coding unit (LCU)/coding unit (CU)/LCU row/group of LCUs/TU/PU.
  • Whether to apply and/or how to apply deblocking may be based on video contents.
  • the long tap filters may be disabled based on video contents (e.g., screen contents or natural contents).
  • the deblocking filters may be disabled based on video contents (e.g., screen contents or natural contents).
  • MvInfo0 and MvInfo1 correspond to the motion information of partition 0 and partition 1, respectively, in FIGS. 19A and 19B.
  • the uni-prediction information may be
  • the uni-prediction information may be derived from MvInfo0 and/or MvInfo1.
  • the motion information to be stored may be determined by the following rules:
  • the rule includes the reference picture indices/MV values/POC values of reference pictures.
  • MvInfo0 may be stored.
  • MvInfo0 may be stored.
  • MV0 is no larger than MV1
  • the above methods may be only applied to sub-blocks at certain positions, such as sub-blocks which contains samples located on the diagonal or anti-diagonal lines (The grey regions in FIG. 20).
  • the above methods may be only applied to sub-blocks at the weighted area, such as those depicted in FIGS. 21A-21C.
  • the above methods may be applied when bullet 7 is applied. In one example, the above methods may be applied under conditions, such as when the video content is screen content or when a flag that indicates these methods are enabled is true.
  • Triangular Prediction Mode is conditionally enabled for a video unit with uni-prediction (e.g., P pictures/slices/tiles/bricks/independent sub-region/CU/PU).
  • an indication of allowing TPM for a video unit may be conditionally signaled when each sample within one TPM-coded block is predicted from only one set of motion information and, for each unit for motion storage, only one set of motion information is stored.
  • an indication of allowing TPM for a video unit may be conditionally signaled.
  • the conditions may include: 1) when the flag which disables TPM blending or screen content is true and 2) the current video unit is uni-prediction.
  • TPM may be applied in different ways, e.g., the motion storage and/or motion compensation (e.g., how to apply blending) may be adaptively changed.
  • the way to enable/apply TPM may depend on the prediction unit (e.g., uni- or bi-prediction).
  • Whether to enable RDPCM may depend on the coding tree structure type.
  • the coding tree structure type (e.g., dual tree)
  • signaling of indication of RDPCM mode and/or other syntax related to RDPCM mode may be skipped and the RDPCM information may be inferred.
  • the indication of the RDPCM mode may be inferred as false when dual coding tree structure type is applied.
  • Whether to enable QR-BDPCM may depend on the coding tree structure type.
  • the coding tree structure type (e.g., dual tree)
  • signaling of indication of QR-BDPCM mode and/or other syntax related to QR-BDPCM mode may be skipped and the QR-BDPCM information may be inferred.
  • the indication of the QR-BDPCM mode may be inferred as false when dual coding tree structure type is applied.
  • Whether to enable CCLM may depend on the coding tree structure type.
  • the coding tree structure type (e.g., dual tree)
  • signaling of indication of CCLM mode and/or other syntax related to CCLM mode may be skipped and the CCLM information may be inferred.
  • the indication of the CCLM mode may be inferred as false when dual coding tree structure type is applied.
  • transform skip (TS) mode may be always applied.
  • the indication of TS mode may be inferred to be true when the prediction mode is IBC of a certain size
  • the indication of TS mode may be inferred to be false when the prediction mode is IBC.
  • TS mode may always be applied.
  • In one example, for IBC blocks with size larger than 32x32, TS mode may always be applied.
  • usage of DCT transform may be inferred to be false when the prediction mode is IBC of a certain size.
  • usage of DCT transform may be inferred to be true when the prediction mode is IBC.
  • transform skip (TS) mode could be applied.
  • the transform skip could be applied when the prediction mode is IBC of a certain size (e.g., a 64x64 block coded with IBC mode).
  • a predefined transform, including the identity transform (i.e., transform skip)
  • a predefined transform may be always applied.
  • a predefined transform may be always applied as the horizontal transform.
  • TS may be always applied as the horizontal transform.
  • DCT2 may be always applied as the horizontal transform.
  • T1 is equal to 16.
  • T2 is equal to 4.
  • TS may be always applied as the vertical transform.
  • TS may be always applied as the vertical transform.
  • DCT2 may be always applied as the vertical transform.
  • T3 is equal to 16.
  • T4 is equal to 4.
  • the threshold may be signaled in the DPS/SPS/VPS/PPS/APS/picture header/slice header/tile group header/Largest coding unit (LCU)/Coding unit (CU)/LCU row/group of LCUs.
  • Whether to apply the above mechanism may be controlled by a flag signaled in the DPS/SPS/VPS/PPS/APS/picture header/slice header/tile group header/Largest coding unit (LCU)/Coding unit (CU)/LCU row/group of LCUs.
  • Whether to and/or how to apply the above methods may be based on:
  • Video contents (e.g., screen contents or natural contents)
  • A message signaled in the DPS/SPS/VPS/PPS/APS/picture header/slice header/tile group header/Largest coding unit (LCU)/Coding unit (CU)/LCU row/group of LCUs
  • Color component (e.g., may be only applied on chroma components or luma component)
  • pred_mode_chroma_ibc_flag equal to 1 specifies that the current chroma coding unit is coded in IBC prediction mode when dual tree is enabled.
  • pred_mode_chroma_ibc_flag equal to 0 specifies that the current coding unit is not coded in IBC prediction mode.
  • When pred_mode_ibc_flag is not present, it is inferred as follows:
  • pred_mode_chroma_ibc_flag is inferred to be equal to the value of
  • sps_tpm_blending_off_flag equal to 1 specifies that the blending process is replaced by directly copying.
  • When sps_tpm_blending_off_flag is not present, it is inferred to be equal to 0.
  • slice_tpm_blending_off_flag equal to 1 specifies that the blending process is replaced by directly copying.
  • When slice_scc_flag is not present, it is inferred to be equal to 0.
  • the variable refH specifying the reference samples height
  • x = -1 - refIdx
  • y = -1 - refIdx..refH - 1
  • x = -refIdx..refW - 1
  • y = -1 - refIdx
  • the variable cIdx specifying the colour component of the current block.
  • x = -1 - refIdx
  • y = -1 - refIdx..refH - 1
  • x = -refIdx..refW - 1
  • y = -1 - refIdx.
  • the variable filterFlag is derived as follows: - If all of the following conditions are true, filterFlag is set equal to 1
  • nTbW * nTbH is greater than 32
  • nTbH is greater than or equal to nTbW
  • - predModeIntra is equal to INTRA_ANGULAR66 and nTbW is greater than or equal to nTbH
  • Otherwise, filterFlag is set equal to 0.
  • the variable refH specifying the reference samples height
  • the variable cIdx specifying the colour component of the current block.
  • Outputs of this process are the modified predicted samples predSamples[ x ][ y ] with
  • clip1Cmp is set equal to Clip1Y. - Otherwise, clip1Cmp is set equal to Clip1C.
  • nScale is set to ( ( Log2( nTbW ) + Log2( nTbH ) - 2 ) >> 2 ).
  • mainRef[ x ] = p[ x ][ -1 ] (8-226)
  • predModeIntra is equal to INTRA_PLANAR or INTRA_DC, the following applies:
  • predModeIntra is equal to INTRA_ANGULAR18 or INTRA_ANGULAR50, the following applies:
  • predModeIntra is equal to INTRA_ANGULAR2 or INTRA_ANGULAR66, the following applies:
  • predSamples[ x ][ y ] = clip1Cmp( ( refL[ x ][ y ] * wL[ x ] + refT[ x ][ y ] * wT[ y ] - p[ -1 ][ -1 ] * wTL[ x ][ y ] + ( 64 - wL[ x ] - wT[ y ] + wTL[ x ][ y ] ) * predSamples[ x ][ y ] + 32 ) >> 6 ) (8-254)
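The position-dependent combination in equation (8-254) can be illustrated as follows (an illustrative sketch, not part of the disclosure; clip1Cmp is modeled as a simple clip to [0, max_val], and the weights are taken as already-derived integers in units of 1/64):

```python
def pdpc_sample(pred, ref_l, ref_t, p_tl, wL, wT, wTL, max_val=255):
    # Weighted combination of the left/top reference samples and the
    # top-left corner sample with the intra-predicted sample, followed
    # by a rounding shift by 6 (the weights sum to 64) and a clip.
    v = (ref_l * wL + ref_t * wT - p_tl * wTL
         + (64 - wL - wT + wTL) * pred + 32) >> 6
    return min(max(v, 0), max_val)
```

With all boundary weights zero the predicted sample passes through unchanged; e.g. `pdpc_sample(100, 0, 0, 0, 0, 0, 0)` yields 100.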
  • the variable refH specifying the reference samples height
  • nTbS is set equal to ( Log2( nTbW ) + Log2( nTbH ) ) >> 1.
  • the reference sample array ref[ x ] is specified as follows:
  • the index variable iIdx = ( ( y + 1 + refIdx ) * intraPredAngle ) >> 5 + refIdx (8-137)
  • iFact = ( ( y + 1 + refIdx ) * intraPredAngle ) & 31 (8-138)
  • tile_group_scc_flag is equal to 1
  • predSamples ⁇ x ][ y ] is set to ref[ x + ildx + 1 ]
  • the reference sample array ref[ x ] is specified as follows:
  • the index variable iIdx and the multiplication factor iFact are derived as follows:
  • iIdx = ( ( x + 1 + refIdx ) * intraPredAngle ) >> 5 (8-150)
  • iFact = ( ( x + 1 + refIdx ) * intraPredAngle ) & 31 (8-151)
  • fT[ j ] = filterFlag ? fG[ iFact ][ j ] : fC[ iFact ][ j ] (8-152)
  • predSamples[ x ][ y ] is set to ref[ x + iIdx + 1 ]
  • nCbW specifying the width of the current coding block
  • nCbH specifying the height of the current coding block
  • the variable edgeType specifying whether a vertical (EDGE_VER) or a horizontal (EDGE_HOR) edge is filtered.
  • edgeType is equal to EDGE_VER, the following applies:
  • the variable numEdges is set equal to Max( 1, nCbW / 8 ).
  • the horizontal position x inside the current coding block is set equal to xEdge * 8.
  • edgeFlags[ x ][ y ] is derived as follows:
  • maxFilterLengthQs[ x ][ y ] is derived as follows: - If the width in luma samples of the transform block at luma location
  • maxFilterLengthQs[ x ][ y ] is set equal to 3.
  • maxFilterLengthPs[ x ][ y ] is set equal to 3.
  • edgeType is equal to EDGE_HOR
  • the variable numEdges is set equal to Max( 1, nCbH / 8 ).
  • the vertical position y inside the current coding block is set equal to yEdge * 8.
  • edgeFlags[ x ][ y ] is derived as follows:
  • maxFilterLengthQs[ x ][ y ] is set equal to 3.
  • maxFilterLengthPs[ x ][ y ] is set equal to 3.
  • nCbW and nCbH specifying the width and the height of the current coding block, two (nCbW)x(nCbH) arrays predSamplesLA and predSamplesLB,
  • Output of this process is the (nCbW)x(nCbH) array pbSamples of prediction sample values.
  • The variable nCbR is derived as follows:
  • nCbR = ( nCbW > nCbH ) ? ( nCbW / nCbH ) : ( nCbH / nCbW ) (8-820)
  • the variable bitDepth is derived as follows:
  • bitDepth is set equal to BitDepthY.
  • bitDepth is set equal to BitDepthC.
  • the variable shift1 is set equal to Max( 5, 17 - bitDepth ).
  • the variable offset1 is set equal to 1 << ( shift1 - 1 ).
  • the variable wIdx is derived as follows:
  • triangleDir is equal to 0 and x < y, or triangleDir is equal to 1 and x + y < nCbW
  • pbSamples[ x ][ y ] = predSamplesLB[ x ][ y ]
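The modified derivation above, in which the TPM blending is replaced by directly copying one prediction per sample, can be sketched as follows (an illustrative sketch, not part of the disclosure; a simplified square-block form of the partition test is assumed, and the function name is an assumption):

```python
def tpm_copy(pred_a, pred_b, n_cbw, n_cbh, triangle_dir):
    # Each output sample is taken wholly from one of the two prediction
    # arrays depending on its side of the diagonal (triangle_dir == 0)
    # or anti-diagonal (triangle_dir == 1) partition line; no weighted
    # blending is performed.
    out = [[0] * n_cbw for _ in range(n_cbh)]
    for y in range(n_cbh):
        for x in range(n_cbw):
            if (triangle_dir == 0 and x < y) or \
               (triangle_dir == 1 and x + y < n_cbw):
                out[y][x] = pred_b[y][x]
            else:
                out[y][x] = pred_a[y][x]
    return out
```

This avoids the per-sample multiply/shift of the weighted blending path, which matches the screen-content motivation of sharp, unblended edges.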
  • tile_group_scc_flag is equal to 0
  • a luma location ( xCb, yCb ) specifying the top-left sample of the current coding block relative to the top-left luma sample of the current picture
  • variable cbWidth specifying the width of the current coding block in luma samples
  • variable cbHeight specifying the height of the current coding block in luma samples
  • the luma motion vectors in 1/16 fractional-sample accuracy mvA and mvB
  • the prediction list flags predListFlagA and predListFlagB.
  • predSamplesLAL and predSamplesLBL be (cbWidth)x(cbHeight) arrays of predicted luma sample values and predSamplesLACb, predSamplesLBCb, predSamplesLACr and predSamplesLBCr be
  • predSamplesL, predSamplesCb, and predSamplesCr are derived by the following ordered steps:
  • the reference picture consisting of an ordered two-dimensional array refPicLNL of luma samples and two ordered two-dimensional arrays refPicLNCb and refPicLNCr of chroma samples is derived by invoking the process specified in clause 8.5.6.2 with X set equal to predListFlagN and refIdxX set equal to refIdxN as input.
  • the array predSamplesLNL is derived by invoking the fractional sample interpolation process specified in clause 8.5.6.3 with the luma location ( xCb, yCb ), the luma coding block width sbWidth set equal to cbWidth, the luma coding block height sbHeight set equal to cbHeight, the motion vector offset mvOffset set equal to ( 0, 0 ), the motion vector mvLX set equal to mvN and the reference array refPicLXL set equal to refPicLNL, the variable bdofFlag set equal to FALSE, and the variable cIdx set equal to 0 as inputs.
  • the array predSamplesLNCb is derived by invoking the fractional sample interpolation process specified in clause 8.5.6.3 with the luma location ( xCb, yCb ), the coding block width sbWidth set equal to cbWidth / 2, the coding block height sbHeight set equal to cbHeight / 2, the motion vector offset mvOffset set equal to ( 0, 0 ), the motion vector mvLX set equal to mvCN, and the reference array refPicLXCb set equal to refPicLNCb, the variable bdofFlag set equal to FALSE, and the variable cIdx set equal to 1 as inputs.
  • the array predSamplesLNCr is derived by invoking the fractional sample interpolation process specified in clause 8.5.6.3 with the luma location ( xCb, yCb ), the coding block width sbWidth set equal to cbWidth / 2, the coding block height sbHeight set equal to cbHeight / 2, the motion vector offset mvOffset set equal to ( 0, 0 ), the motion vector mvLX set equal to mvCN, and the reference array refPicLXCr set equal to refPicLNCr, the variable bdofFlag set equal to FALSE, and the variable cIdx set equal to 2 as inputs.
  • the motion vector storing process for merge triangle mode specified in clause 8.5.7.3 is invoked with the luma coding block location ( xCb, yCb ), the luma coding block width cbWidth, the luma coding block height cbHeight, the partition direction triangleDir, the luma motion vectors mvA and mvB, the reference indices refIdxA and refIdxB, and the prediction list flags predListFlagA and predListFlagB as inputs.
  • nCbW and nCbH specifying the width and the height of the current coding block, two (nCbW)x(nCbH) arrays predSamplesLA and predSamplesLB,
  • the variable refIdx1 specifying the reference index of the prediction block B
  • Output of this process is the (nCbW)x(nCbH) array pbSamples of prediction sample values.
  • The variable nCbR is derived as follows:
  • nCbR = ( nCbW > nCbH ) ? ( nCbW / nCbH ) : ( nCbH / nCbW ) (8-820)
  • bitDepth is derived as follows: - If cIdx is equal to 0, bitDepth is set equal to BitDepthY.
  • bitDepth is set equal to BitDepthC.
  • the variable shift1 is set equal to Max( 5, 17 - bitDepth ).
  • the variable offset1 is set equal to 1 << ( shift1 - 1 ).
  • the variable wIdx is derived as follows:
  • the variable wValue specifying the weight of the prediction sample is derived using wIdx and cIdx as follows:
  • triangleDir is equal to 1
  • nCbW and nCbH specifying the width and the height of the current coding block, two (nCbW)x(nCbH) arrays predSamplesLA and predSamplesLB,
  • Output of this process is the (nCbW)x(nCbH) array pbSamples of prediction sample values.
  • The variable nCbR is derived as follows:
  • nCbR = ( nCbW > nCbH ) ? ( nCbW / nCbH ) : ( nCbH / nCbW ) (8-820)
  • the variable bitDepth is derived as follows:
  • bitDepth is set equal to BitDepthY.
  • bitDepth is set equal to BitDepthC.
  • the variable shift1 is set equal to Max( 5, 17 - bitDepth ).
  • the variable offset1 is set equal to 1 << ( shift1 - 1 ).
  • the variable wIdx is derived as follows:
  • the variable wValue specifying the weight of the prediction sample is derived using wIdx and cIdx as follows:
  • triangleDir is equal to 1
  • a luma location ( xCb, yCb ) specifying the top-left sample of the current coding block relative to the top-left luma sample of the current picture
  • variable cbWidth specifying the width of the current coding block in luma samples
  • variable cbHeight specifying the height of the current coding block in luma samples
  • the luma motion vectors in 1/16 fractional-sample accuracy mvA and mvB
  • the prediction list flags predListFlagA and predListFlagB.
  • predSamplesLAL and predSamplesLBL be (cbWidth)x(cbHeight) arrays of predicted luma sample values and predSamplesLACb, predSamplesLBCb, predSamplesLACr and predSamplesLBCr be
  • predSamplesL, predSamplesCb, and predSamplesCr are derived by the following ordered steps:
  • the reference picture consisting of an ordered two-dimensional array refPicLNL of luma samples and two ordered two-dimensional arrays refPicLNCb and refPicLNCr of chroma samples is derived by invoking the process specified in clause 8.5.6.2 with X set equal to predListFlagN and refIdxX set equal to refIdxN as input.
  • the array predSamplesLNL is derived by invoking the fractional sample interpolation process specified in clause 8.5.6.3 with the luma location ( xCb, yCb ), the luma coding block width sbWidth set equal to cbWidth, the luma coding block height sbHeight set equal to cbHeight, the motion vector offset mvOffset set equal to ( 0, 0 ), the motion vector mvLX set equal to mvN and the reference array refPicLXL set equal to refPicLNL, the variable bdofFlag set equal to FALSE, and the variable cIdx set equal to 0 as inputs.
  • the array predSamplesLNCb is derived by invoking the fractional sample interpolation process specified in clause 8.5.6.3 with the luma location ( xCb, yCb ), the coding block width sbWidth set equal to cbWidth / 2, the coding block height sbHeight set equal to cbHeight / 2, the motion vector offset mvOffset set equal to ( 0, 0 ), the motion vector mvLX set equal to mvCN, and the reference array refPicLXCb set equal to refPicLNCb, the variable bdofFlag set equal to FALSE, and the variable cIdx set equal to 1 as inputs.
  • the array predSamplesLNCr is derived by invoking the fractional sample interpolation process specified in clause 8.5.6.3 with the luma location ( xCb, yCb ), the coding block width sbWidth set equal to cbWidth / 2, the coding block height sbHeight set equal to cbHeight / 2, the motion vector offset mvOffset set equal to ( 0, 0 ), the motion vector mvLX set equal to mvCN, and the reference array refPicLXCr set equal to refPicLNCr, the variable bdofFlag set equal to FALSE, and the variable cIdx set equal to 2 as inputs.
  • the motion vector storing process for merge triangle mode specified in clause 8.5.7.3 is invoked with the luma coding block location ( xCb, yCb ), the luma coding block width cbWidth, the luma coding block height cbHeight, the partition direction triangleDir, the luma motion vectors mvA and mvB, the reference indices refIdxA and refIdxB, and the prediction list flags predListFlagA and predListFlagB as inputs.
  • nCbW and nCbH specifying the width and the height of the current coding block, two (nCbW)x(nCbH) arrays predSamplesLA and predSamplesLB,
  • the variable MVb specifying the motion vector of the prediction block B
  • Output of this process is the (nCbW)x(nCbH) array pbSamples of prediction sample values.
  • nCbR = ( nCbW > nCbH ) ? ( nCbW / nCbH ) : ( nCbH / nCbW ) (8-820)
  • the variable bitDepth is derived as follows:
  • bitDepth is set equal to BitDepthY.
  • bitDepth is set equal to BitDepthC.
  • the variable shift1 is set equal to Max( 5, 17 - bitDepth ).
  • the variable offset1 is set equal to 1 << ( shift1 - 1 ).
  • the variable wIdx is derived as follows:
  • the variable wValue specifying the weight of the prediction sample is derived using wIdx and cIdx as follows:
  • triangleDir is equal to 1
  • nCbW and nCbH specifying the width and the height of the current coding block, two (nCbW)x(nCbH) arrays predSamplesLA and predSamplesLB,
  • the variable MVb specifying the motion vector of the prediction block B
  • Output of this process is the (nCbW)x(nCbH) array pbSamples of prediction sample values.
  • The variable nCbR is derived as follows:
  • nCbR = ( nCbW > nCbH ) ? ( nCbW / nCbH ) : ( nCbH / nCbW ) (8-820)
  • variable bitDepth is derived as follows:
  • bitDepth is set equal to BitDepthY.
  • bitDepth is set equal to BitDepthC.
  • the variable shift1 is set equal to Max( 5, 17 - bitDepth ).
  • the variable offset1 is set equal to 1 << ( shift1 - 1 ).
  • xIdx = ( cbWidth > cbHeight ) ? ( xSbIdx / nCbR ) : xSbIdx (8-813)
  • yIdx = ( cbWidth > cbHeight ) ? ySbIdx : ( ySbIdx / nCbR ) (8-814)
  • the variable sType is derived as follows:
  • the variable wIdx is derived as follows:
  • the variable wValue specifying the weight of the prediction sample is derived using wIdx and cIdx as follows:
  • the prediction sample values are derived as follows:
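Equations (8-813)/(8-814) map a 4x4 sub-block position of a non-square block onto a square index grid by compressing the longer dimension by the aspect ratio nCbR. A sketch (illustrative only, not part of the disclosure; the function name is an assumption):

```python
def subblock_index(x_sb, y_sb, cb_width, cb_height):
    # nCbR is the aspect ratio of the block (>= 1 for non-square blocks).
    if cb_width > cb_height:
        n_cbr = cb_width // cb_height
        # Wide block: compress the x coordinate, keep y (eq. 8-813/8-814).
        return x_sb // n_cbr, y_sb
    n_cbr = cb_height // cb_width
    # Tall (or square, n_cbr == 1) block: keep x, compress y.
    return x_sb, y_sb // n_cbr
```

After this mapping, the same square-grid sType/wIdx derivation can serve all block shapes.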
  • nCbW and nCbH specifying the width and the height of the current coding block


Abstract

Devices, systems and methods for intra block copy (IBC) for screen content coding in video coding are described. An example method of video processing includes performing a conversion between a current video block of a current picture of a chroma component of a video and a bitstream representation of the video, where the bitstream representation conforms to a format rule, and the format rule specifies that an indication of a usage of an IBC mode for the current video block is selectively included in the bitstream representation based on whether one or more luma blocks corresponding to the current video block are coded in the bitstream representation using the IBC mode, the IBC mode comprising predicting the current video block based on samples from the current picture.
PCT/US2020/033134 2019-05-16 2020-05-15 Intra block copy for screen content coding WO2020232355A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202080036494.2A CN113826390B (zh) 2019-05-16 2020-05-15 屏幕内容编解码的帧内块复制

Applications Claiming Priority (12)

Application Number Priority Date Filing Date Title
CN2019087235 2019-05-16
CNPCT/CN2019/087235 2019-05-16
CN2019087993 2019-05-22
CNPCT/CN2019/087993 2019-05-22
CNPCT/CN2019/090265 2019-06-06
CN2019090265 2019-06-06
CNPCT/CN2019/092153 2019-06-20
CN2019092153 2019-06-20
CN2019092833 2019-06-25
CNPCT/CN2019/092833 2019-06-25
CNPCT/CN2019/095158 2019-07-08
CN2019095158 2019-07-08

Publications (1)

Publication Number Publication Date
WO2020232355A1 true WO2020232355A1 (fr) 2020-11-19

Family

ID=73289262

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/033134 WO2020232355A1 (fr) 2019-05-16 2020-05-15 Intra block copy for screen content coding

Country Status (2)

Country Link
CN (1) CN113826390B (fr)
WO (1) WO2020232355A1 (fr)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220150485A1 (en) * 2019-03-11 2022-05-12 Interdigital Vc Holdings, Inc. Intra prediction mode partitioning
WO2022256353A1 (fr) * 2021-05-31 2022-12-08 Beijing Dajia Internet Information Technology Co., Ltd. Methods and devices for geometric partition mode with motion vector refinement
WO2023034798A1 (fr) * 2021-08-31 2023-03-09 Tencent America LLC Intra prediction mode information propagation for geometric partition mode with IBC and intra prediction
WO2023040968A1 (fr) * 2021-09-15 2023-03-23 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus and medium for video processing
WO2023246893A1 (fr) * 2022-06-22 2023-12-28 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus and medium for video processing
WO2024012533A1 (fr) * 2022-07-15 2024-01-18 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus and medium for video processing
WO2024046479A1 (fr) * 2022-09-03 2024-03-07 Douyin Vision Co., Ltd. Method, apparatus and medium for video processing
WO2024081261A1 (fr) * 2022-10-10 2024-04-18 Beijing Dajia Internet Information Technology Co., Ltd. Methods and devices for intra block copy
WO2024078630A1 (fr) * 2022-10-14 2024-04-18 Douyin Vision Co., Ltd. Method, apparatus and medium for video processing
WO2024086520A1 (fr) * 2022-10-17 2024-04-25 Tencent America LLC Boundary filtering on intrabc and intratmp coded blocks

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125442B (zh) * 2022-01-29 2022-05-03 Tencent Technology (Shenzhen) Co., Ltd. Screen video coding mode determination method, coding method, apparatus and computing device
US20240022732A1 (en) * 2022-07-13 2024-01-18 Tencent America LLC Weight derivation of multiple reference line for intra prediction fusion

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140086303A1 (en) * 2012-09-24 2014-03-27 Qualcomm Incorporated Bitstream conformance test in video coding
US20150124877A1 (en) * 2012-04-25 2015-05-07 Samsung Electronics Co., Ltd. Multiview video encoding method using reference picture set for multiview video prediction and device therefor, and multiview video decoding method using reference picture set for multiview video prediction and device therefor
US20160100163A1 (en) * 2014-10-07 2016-04-07 Qualcomm Incorporated Deblock filtering for intra block copying
US20160241868A1 (en) * 2013-10-14 2016-08-18 Microsoft Technology Licensing, Llc Features of intra block copy prediction mode for video and image coding and decoding
US20170105014A1 (en) * 2015-10-08 2017-04-13 Qualcomm Incorporated Luma-driven chroma scaling for high dynamic range and wide color gamut contents

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107547901B (zh) * 2012-04-16 2021-07-20 Electronics and Telecommunications Research Institute Method for decoding a video signal
WO2015057438A1 (fr) * 2013-10-14 2015-04-23 Mediatek Singapore Pte. Ltd. Method of residual differential pulse code modulation for HEVC range extension
US10327002B2 (en) * 2014-06-19 2019-06-18 Qualcomm Incorporated Systems and methods for intra-block copy
WO2016200984A1 (fr) * 2015-06-08 2016-12-15 Vid Scale, Inc. Intra block copy mode for screen content coding
US20170099490A1 (en) * 2015-10-02 2017-04-06 Qualcomm Incorporated Constrained intra-prediction for block copy mode
CN109743576B (zh) * 2018-12-28 2020-05-12 Hangzhou Hikvision Digital Technology Co., Ltd. Encoding method, decoding method and apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150124877A1 (en) * 2012-04-25 2015-05-07 Samsung Electronics Co., Ltd. Multiview video encoding method using reference picture set for multiview video prediction and device therefor, and multiview video decoding method using reference picture set for multiview video prediction and device therefor
US20140086303A1 (en) * 2012-09-24 2014-03-27 Qualcomm Incorporated Bitstream conformance test in video coding
US20160241868A1 (en) * 2013-10-14 2016-08-18 Microsoft Technology Licensing, Llc Features of intra block copy prediction mode for video and image coding and decoding
US20160100163A1 (en) * 2014-10-07 2016-04-07 Qualcomm Incorporated Deblock filtering for intra block copying
US20170105014A1 (en) * 2015-10-08 2017-04-13 Qualcomm Incorporated Luma-driven chroma scaling for high dynamic range and wide color gamut contents

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JILL BOYCE; YAN YE; JIANLE CHEN; ADARSH RAMASUBRAMONIAN: "Overview of SHVC: Scalable extensions of the high efficiency video coding standard", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 26, no. 1, 1 January 2016 (2016-01-01), pages 20 - 34, XP055761632 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220150485A1 (en) * 2019-03-11 2022-05-12 Interdigital Vc Holdings, Inc. Intra prediction mode partitioning
WO2022256353A1 (fr) * 2021-05-31 2022-12-08 Beijing Dajia Internet Information Technology Co., Ltd. Methods and devices for geometric partition mode with motion vector refinement
EP4209008A4 (fr) * 2024-03-20 Intra prediction mode information propagation for geometric partition mode with IBC and intra prediction
CN116325745A (zh) * 2021-08-31 2023-06-23 Tencent America LLC Intra prediction mode information propagation for geometric partition mode with IBC and intra prediction
US11876978B2 (en) 2021-08-31 2024-01-16 Tencent America LLC Intra prediction mode information propagation for geometric partition mode with IBC and intra prediction
WO2023034798A1 (fr) * 2021-08-31 2023-03-09 Tencent America LLC Intra prediction mode information propagation for geometric partition mode with IBC and intra prediction
WO2023040968A1 (fr) * 2021-09-15 2023-03-23 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus and medium for video processing
WO2023246893A1 (fr) * 2022-06-22 2023-12-28 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus and medium for video processing
WO2024012533A1 (fr) * 2022-07-15 2024-01-18 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus and medium for video processing
WO2024046479A1 (fr) * 2022-09-03 2024-03-07 Douyin Vision Co., Ltd. Method, apparatus and medium for video processing
WO2024081261A1 (fr) * 2022-10-10 2024-04-18 Beijing Dajia Internet Information Technology Co., Ltd. Methods and devices for intra block copy
WO2024078630A1 (fr) * 2022-10-14 2024-04-18 Douyin Vision Co., Ltd. Method, apparatus and medium for video processing
WO2024086520A1 (fr) * 2022-10-17 2024-04-25 Tencent America LLC Boundary filtering on intrabc and intratmp coded blocks

Also Published As

Publication number Publication date
CN113826390B (zh) 2024-03-08
CN113826390A (zh) 2021-12-21

Similar Documents

Publication Publication Date Title
CN113826390B (zh) Intra block copy for screen content coding
KR102612765B1 (ko) Techniques for modifying quantization parameter in transform skip mode
KR20210090175A (ko) Pulse code modulation technique in video processing
WO2021027776A1 (fr) Buffer management in sub-picture decoding
WO2021073631A1 (fr) Interactions between sub-pictures and in-loop filtering
CN114208174B (zh) Palette mode coding in prediction process
US11445183B2 (en) Chroma intra mode derivation in screen content coding
KR20240024335A (ko) Adjustment method for sub-block based inter prediction
TW201743619A (zh) Confusion of multiple filters in adaptive loop filtering in video coding
AU2020258477A1 (en) Context coding for transform skip mode
US11765367B2 (en) Palette mode with intra block copy prediction
CN113475077B (zh) Independent coding of palette mode usage indication
CA3130472A1 (fr) Independent coding of palette mode usage indication
KR20220002918A (ko) Signaling in transform skip mode
KR20220002917A (ko) Coding mode based on coding tree structure type
WO2020236723A1 (fr) Transform of bypass-coded residual blocks in digital video
WO2020243246A1 (fr) Using a coding tree structure type to control a coding mode

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20805578

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23-03-2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20805578

Country of ref document: EP

Kind code of ref document: A1