WO2021110116A1 - Prediction from multiple cross-components

Prediction from multiple cross-components

Info

Publication number
WO2021110116A1
Authority
WO
WIPO (PCT)
Prior art keywords
component
video
samples
block
prediction
Prior art date
Application number
PCT/CN2020/133786
Other languages
French (fr)
Inventor
Junru LI
Meng Wang
Li Zhang
Kai Zhang
Hongbin Liu
Jizheng Xu
Yue Wang
Siwei Ma
Original Assignee
Beijing Bytedance Network Technology Co., Ltd.
Bytedance Inc.
Priority date
Filing date
Publication date
Application filed by Beijing Bytedance Network Technology Co., Ltd. and Bytedance Inc.
Priority to CN202080084122.7A (publication CN115004697A)
Publication of WO2021110116A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being an image region, e.g. a block or a macroblock
    • H04N19/186 Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/593 Predictive coding involving spatial prediction techniques
    • H04N19/70 Characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • This patent document relates to video coding techniques, devices and systems.
  • a method for video processing includes determining, for a conversion between a video block of a first component of a video and a bitstream representation of the video, prediction values of samples of the video block using representative samples outside of the video block. The representative samples are determined during the conversion. The method also includes performing the conversion based on the determining.
  • a method of video processing includes determining, for a conversion between a video block of a first component of a video and a bitstream representation of the video, a coding mode of a multiple cross-component coding tool. The method also includes performing the conversion based on the determining.
  • the coding mode is determined from multiple modes available for coding the video block, the multiple modes having different parameters for determining prediction values of samples of the video block using representative samples from at least one of a second component, a third component, or a neighboring block of the video block.
  • a method of video processing includes performing a conversion between a video block of a video and a bitstream representation of the video.
  • the video block is coded using a multiple cross-component prediction mode from multiple prediction modes of a prediction from multiple cross-components (PMC) coding tool, and the multiple cross-component prediction mode is signaled in the bitstream representation as an intra prediction mode or an inter prediction mode.
  • a method of video processing includes determining residual information of a video unit for a conversion between a video block of a video and a bitstream representation of the video in case a prediction from multiple cross-component (PMC) coding tool is enabled for a first component. The method also includes performing the conversion based on the determining.
  • a method of video processing includes determining, for a conversion between a video block of a video and a bitstream representation of the video, whether usage of a cross-component prediction (CCP) coding tool is signaled in the bitstream representation, based on availability of neighboring samples of the video block.
  • the neighboring samples are adjacent or non-adjacent to the video block.
  • the method also includes performing the conversion based on the determining.
  • a method of video processing includes determining prediction values of samples of a first component of a video block of a video using representative samples of a second component of the video and/or a third component of the video, and performing a conversion between the video block and a bitstream representation of the video block according to the determined prediction values of the first component.
  • the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
  • a device that is configured or operable to perform the above-described method.
  • the device may include a processor that is programmed to implement this method.
  • a video decoder apparatus may implement a method as described herein.
  • FIG. 1 shows neighboring blocks used in intra mode prediction.
  • FIG. 2 shows 67 intra prediction modes.
  • FIG. 3 shows neighboring blocks used in most probable mode (MPM) list construction process.
  • FIG. 4 shows reference samples for wide-angular intra prediction.
  • FIG. 5 shows the problem of discontinuity for directions beyond 45 degrees.
  • FIG. 6A shows an example definition of samples used by position dependent intra prediction combination (PDPC) applied to diagonal top-right mode.
  • FIG. 6B shows an example definition of samples used by position dependent intra prediction combination (PDPC) applied to diagonal bottom-left mode.
  • FIG. 6C shows an example definition of samples used by position dependent intra prediction combination (PDPC) applied to adjacent top-right mode.
  • FIG. 6D shows an example definition of samples used by position dependent intra prediction combination (PDPC) applied to adjacent bottom-left mode.
  • FIG. 7 shows an example of reference lines to be used for intra prediction.
  • FIG. 8 shows locations of the samples used for the derivation of α and β.
  • FIG. 9A shows a chroma sample (the triangle) and its corresponding four luma samples (circles) .
  • FIG. 9B shows downsampling filtering for cross-component linear model (CCLM) in Versatile Video Coding (VVC) .
  • FIG. 10A shows the linear model top (LM-T) side, assuming a chroma block size of NxN.
  • FIG. 10B shows the linear model left (LM-L) side, assuming a chroma block size of NxN.
  • FIG. 11A shows an example of linear model (LM) parameter derivation process with 4 entries.
  • FIG. 11B shows another example of linear model (LM) parameter derivation process with 4 entries.
  • FIG. 12 shows an illustration of a straight line between minimum and maximum Luma value.
  • FIG. 13 shows the coding flow of the Two-Step Cross-component Prediction Mode (TSCPM), taking the 4:2:0 color format with an 8x8 luma block and a 4x4 chroma block as an example.
  • FIG. 14 shows examples of four neighboring samples, with both left and above reference samples available.
  • FIG. 15A shows 6 representative color component C1 samples (dark gray) used to predict (Xc, Yc).
  • FIG. 15B shows 8 representative color component C1 samples (dark gray) used to predict (Xc, Yc).
  • FIG. 16 shows a decoding flowchart with a proposed method.
  • FIG. 17 shows a flowchart of an example video processing method.
  • FIG. 18 is a block diagram of a video processing apparatus.
  • FIG. 19 is a block diagram showing an example video processing system in which various techniques disclosed herein may be implemented.
  • FIG. 20 is a block diagram that illustrates an example video coding system.
  • FIG. 21 is a block diagram that illustrates an encoder in accordance with some embodiments of the present disclosure.
  • FIG. 22 is a block diagram that illustrates a decoder in accordance with some embodiments of the present disclosure.
  • FIG. 23 is a flowchart representation of a method for video processing in accordance with the present technology.
  • FIG. 24 is a flowchart representation of another method for video processing in accordance with the present technology.
  • FIG. 25 is a flowchart representation of another method for video processing in accordance with the present technology.
  • FIG. 26 is a flowchart representation of another method for video processing in accordance with the present technology.
  • FIG. 27 is a flowchart representation of yet another method for video processing in accordance with the present technology.
  • Section headings are used in the present document to facilitate ease of understanding and do not limit the embodiments disclosed in a section to only that section.
  • While certain embodiments are described with reference to Versatile Video Coding or other specific video codecs, the disclosed techniques are applicable to other video coding technologies as well.
  • The term video processing encompasses video coding or compression, video decoding or decompression, and video transcoding in which video pixels are converted from one compressed format into another compressed format or to a different compressed bitrate.
  • The technology described in this patent application is related to image/video coding technologies. Specifically, it is related to cross-component prediction in image/video coding. It may be applied to existing video coding standards like High Efficiency Video Coding (HEVC), or to the Versatile Video Coding (VVC) standard to be finalized. It may also be applicable to future video coding standards or video codecs.
  • Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards.
  • The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/HEVC standards.
  • the video coding standards are based on the hybrid video coding structure wherein temporal prediction plus transform coding are utilized.
  • The Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015.
  • Chroma subsampling is the practice of encoding images by implementing less resolution for chroma information than for luma information, taking advantage of the human visual system's lower acuity for color differences than for luminance.
  • 4:2:0 is 2:1 subsampling in both the horizontal and vertical directions.
  • a signal with chroma 4:4:4 has no compression (so it is not subsampled) and transports both luminance and color data entirely.
  • 4:2:2 has half the chroma of 4:4:4.
  • 4:2:0 has a quarter of the color information available.
  • suppose one chroma block size is MxN, wherein M is the width and N is the height of the chroma block, and the top-left position within the chroma block is denoted by (x, y). The collocated luma block could then be identified by:
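  • The identification formula itself is elided in the source text. Under the 4:2:0 assumption used throughout this document (and consistent with the (2X, 2Y) representative samples discussed later), the conventional mapping doubles both the coordinates and the dimensions, as this illustrative sketch shows:

```python
def collocated_luma_block(x, y, m, n):
    """Collocated luma block for an MxN chroma block whose top-left position
    is (x, y), assuming the 4:2:0 color format (2:1 chroma subsampling both
    horizontally and vertically).
    Returns (top-left x, top-left y, width, height) in luma coordinates."""
    return 2 * x, 2 * y, 2 * m, 2 * n

# A 4x4 chroma block at (8, 8) maps to an 8x8 luma block at (16, 16).
print(collocated_luma_block(8, 8, 4, 4))  # (16, 16, 8, 8)
```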
  • Intra prediction is the process of predicting pixels of a picture frame. Intra-picture prediction uses neighboring pixels to predict a picture block. Before intra prediction, the frame must be split.
  • The Coding Tree Unit (CTU) is therefore a coding logical unit, which is in turn encoded into an HEVC bitstream. It consists of three blocks, namely luma (Y) and two chroma components (Cb and Cr). Taking the 4:2:0 color format as an example, the luma component has LxL samples and each chroma component has L/2xL/2 samples. Each block is called a Coding Tree Block (CTB).
  • Each CTB has the same size (LxL) as the CTU (64 ⁇ 64, 32 ⁇ 32, or 16 ⁇ 16) .
  • Each CTB can be divided repeatedly in a quad-tree structure, from the same size as the CTB down to as small as 8×8.
  • Each block resulting from this partitioning is called a Coding Block (CB) and becomes the decision point for the prediction type (inter or intra prediction).
  • The prediction type, along with other parameters, is coded in a Coding Unit (CU).
  • So the CU is the basic unit of prediction in HEVC, each of which is predicted from previously coded data.
  • the CU consists of three CBs (Y, Cb and Cr) .
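  • As a toy illustration of the recursive quad-tree partitioning just described, the sketch below splits a 64×64 CTB down to leaf coding blocks; the `should_split` callback is purely an assumption standing in for the encoder's rate-distortion decision.

```python
def split_quadtree(x, y, size, should_split, min_size=8):
    """Recursively partition a CTB into coding blocks (CBs) via a quad-tree,
    as in HEVC. `should_split(x, y, size)` is an illustrative stand-in for
    the encoder's per-CU rate-distortion decision.
    Returns a list of (x, y, size) leaf coding blocks."""
    if size == min_size or not should_split(x, y, size):
        return [(x, y, size)]
    half = size // 2
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            blocks.extend(split_quadtree(x + dx, y + dy, half,
                                         should_split, min_size))
    return blocks

# Example: split any block larger than 32x32 samples.
print(split_quadtree(0, 0, 64, lambda x, y, s: s > 32))
```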
  • There are two kinds of intra prediction modes: PCM (pulse code modulation) and normal intra prediction mode.
  • In I_PCM mode, prediction, transform, quantization and entropy coding are bypassed: the samples of a block are coded by directly representing the sample values without prediction or application of a transform.
  • I_PCM mode is only available for 2Nx2N PU.
  • The maximum and minimum I_PCM CU sizes are signalled in the SPS; legal I_PCM CU sizes are 8x8, 16x16 and 32x32. User-selected PCM sample bit-depths are signalled in the SPS for luma and chroma separately.
  • For the luma component, there are 35 modes, including Planar, DC and 33 angular prediction modes, for all block sizes. To better code these luma prediction modes, a most probable mode (MPM) flag is first coded to indicate whether one of the 3 MPM modes is chosen. If the MPM flag is false, the 32 remaining modes are coded with fixed length coding.
  • the selection of the set of three most probable modes is based on modes of two neighboring PUs, one left and one to the above of the current PU. Let the intra modes of left and above of the current PU be A and B, respectively wherein the two neighboring blocks are depicted in FIG. 1.
  • if a neighboring PU is not coded as intra or is coded with pulse code modulation (PCM) mode, the PU is considered to be a DC predicted one.
  • B is assumed to be DC mode when the above neighboring PU is outside the CTU, to avoid introducing an additional line buffer for intra mode reconstruction.
  • If A is not equal to B, the first two most probable modes, denoted MPM[0] and MPM[1], are set equal to A and B, respectively, and the third most probable mode, denoted MPM[2], is determined as follows:
  • if neither A nor B is planar mode, MPM[2] is set to planar mode;
  • otherwise, if neither A nor B is DC mode, MPM[2] is set to DC mode;
  • otherwise, MPM[2] is set equal to angular mode 26 (directly vertical).
  • If A is equal to B, the three most probable modes are determined as follows. In case they are not angular modes (A and B are less than 2), the three most probable modes are set equal to planar mode, DC mode and angular mode 26, respectively. Otherwise (A and B are greater than or equal to 2), the first most probable mode MPM[0] is set equal to A, and the two remaining most probable modes MPM[1] and MPM[2] are set equal to the neighboring directions of A, calculated as:
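  • The wrap-around formulas for MPM[1] and MPM[2] are elided above; the sketch below reconstructs the whole three-MPM derivation from the rules just described, using standard HEVC mode numbering (0 planar, 1 DC, 2-34 angular, 26 vertical). It is an illustrative reading of the text, not decoder-conformant code.

```python
PLANAR, DC, VER = 0, 1, 26  # HEVC mode numbering: 0 planar, 1 DC, 2..34 angular

def hevc_mpm_list(a, b):
    """Derive the three HEVC most probable modes from the left (A) and
    above (B) neighboring intra modes, per the rules described above."""
    if a != b:
        mpm = [a, b]
        if PLANAR not in mpm:
            mpm.append(PLANAR)
        elif DC not in mpm:
            mpm.append(DC)
        else:
            mpm.append(VER)
        return mpm
    if a < 2:  # A == B and non-angular
        return [PLANAR, DC, VER]
    # A == B and angular: A plus its two neighboring directions (wrapping 2..34)
    return [a, 2 + ((a + 29) % 32), 2 + ((a - 2 + 1) % 32)]

print(hevc_mpm_list(10, 26))  # [10, 26, 0]
print(hevc_mpm_list(10, 10))  # [10, 9, 11]
```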
  • For the chroma component, the candidate intra prediction modes include DM, Planar, DC, Horizontal and Vertical modes.
  • the number of directional intra modes is extended from 33, as used in HEVC, to 65.
  • the additional directional modes are depicted as grey dotted arrows in FIG. 2, and the planar and DC modes remain the same.
  • These denser directional intra prediction modes apply for all block sizes and for both luma and chroma intra predictions.
  • Conventional angular intra prediction directions are defined from 45 degrees to -135 degrees in clockwise direction as shown in FIG. 2.
  • In VTM2, several conventional angular intra prediction modes are adaptively replaced with wide-angle intra prediction modes for non-square blocks.
  • the replaced modes are signaled using the original method and remapped to the indexes of wide angular modes after parsing.
  • the total number of intra prediction modes is unchanged, e.g., 67, and the intra mode coding is unchanged.
  • In HEVC, every intra-coded block has a square shape and the length of each of its sides is a power of 2. Thus, no division operations are required to generate an intra predictor using DC mode.
  • In VVC, blocks can have a rectangular shape, which necessitates the use of a division operation per block in the general case. To avoid division operations for DC prediction, only the longer side is used to compute the average for non-square blocks.
  • In the VVC reference software VTM3.0.rc1, only the intra modes of neighboring positions A and B, denoted LEFT and ABOVE as depicted in FIG. 3, are used for MPM list generation.
  • truncated binary coding is applied for non-MPM coding.
  • if a neighboring CU is not coded as intra or is coded with pulse code modulation (PCM) mode, the CU is considered to be a Planar predicted one.
  • Mode B is assumed to be Planar mode when the above neighboring CU is outside the CTU, to avoid introducing an additional line buffer for intra mode reconstruction.
  • the 6 MPM modes are denoted by MPM[i] (i being 0...5).
  • the following steps are performed in order:
  • MPM[6] = {Mode A, !Mode A, 50, 18, 46, 54};
  • If Mode A is equal to Mode B, the following applies:
  • MPM[6] = {Mode A, planar, DC, 2 + ((candIntraPredModeA + 62) % 65), 2 + ((candIntraPredModeA − 1) % 65), 2 + ((candIntraPredModeA + 61) % 65)};
  • If MPM[biggerIdx] − MPM[!biggerIdx] is equal to neither 64 nor 1, the following applies:
  • MPM[5] = 2 + (candModeList[biggerIdx] % 65)
  • MPM[4] = 2 + ((MPM[biggerIdx] − 1) % 65)
  • Conventional angular intra prediction directions are defined from 45 degrees to -135 degrees in clockwise direction.
  • In VTM2, several conventional angular intra prediction modes are adaptively replaced with wide-angle intra prediction modes for non-square blocks.
  • the replaced modes are signaled using the original method and remapped to the indexes of wide angular modes after parsing.
  • the total number of intra prediction modes for a certain block is unchanged, e.g., 67, and the intra mode coding is unchanged.
  • The top reference with length 2W+1 and the left reference with length 2H+1 are defined as shown in FIG. 4.
  • the mode number of the replaced mode in wide-angular direction mode depends on the aspect ratio of the block.
  • the replaced intra prediction modes are illustrated in Table 2-1.
  • two vertically-adjacent predicted samples may use two non-adjacent reference samples in the case of wide-angle intra prediction.
  • hence, a low-pass reference samples filter and side smoothing are applied to the wide-angle prediction to reduce the negative effect of the increased gap Δpα.
  • PDPC is an intra prediction method which invokes a combination of the un-filtered boundary reference samples and HEVC style intra prediction with filtered boundary reference samples.
  • PDPC is applied to the following intra modes without signalling: planar, DC, horizontal, vertical, bottom-left angular mode and its eight adjacent angular modes, and top-right angular mode and its eight adjacent angular modes.
  • the prediction sample pred(x, y) is predicted using an intra prediction mode (DC, planar, angular) and a linear combination of reference samples according to the following equation:
  • pred(x, y) = (wL × R(−1, y) + wT × R(x, −1) − wTL × R(−1, −1) + (64 − wL − wT + wTL) × pred(x, y) + 32) >> 6
  • R(x, −1) and R(−1, y) represent the reference samples located at the top and left of the current sample (x, y), respectively, and R(−1, −1) represents the reference sample located at the top-left corner of the current block.
  • FIGS. 6A-6D illustrate the definition of reference samples (R(x, −1), R(−1, y) and R(−1, −1)) for PDPC applied to various prediction modes.
  • the prediction sample pred(x′, y′) is located at (x′, y′) within the prediction block.
  • the PDPC weights are dependent on prediction modes and are shown in Table 2-2.
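  • As a rough illustration of the combination in the equation above, the sketch below applies PDPC to a DC-mode prediction. The weight formulas (wT and wL decaying with distance, wTL for DC mode) follow the early VVC design and are an assumption here; the exact weights vary across modes and drafts (see Table 2-2).

```python
def pdpc_dc(pred, top, left, top_left):
    """Position dependent intra prediction combination, sketched for DC mode.
    pred:     HxW list-of-lists intra prediction block
    top[x]:   reference sample R(x, -1)
    left[y]:  reference sample R(-1, y)
    top_left: reference sample R(-1, -1)"""
    h, w = len(pred), len(pred[0])
    shift = ((w.bit_length() - 1) + (h.bit_length() - 1) + 2) >> 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            w_t = 32 >> min(31, (y << 1) >> shift)   # top weight decays with y
            w_l = 32 >> min(31, (x << 1) >> shift)   # left weight decays with x
            w_tl = (w_l >> 4) + (w_t >> 4)           # DC-mode top-left weight
            out[y][x] = (w_l * left[y] + w_t * top[x] - w_tl * top_left
                         + (64 - w_l - w_t + w_tl) * pred[y][x] + 32) >> 6
    return out
```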
  • the MRLIP has the following characteristics:
  • One of three lines may be selected for one luma block: reference line 0, 1 or 3, as depicted in FIG. 7.
  • In HEVC chroma coding, five modes (including one direct mode (DM), which is the intra prediction mode from the top-left corresponding luma block, and four default modes) are allowed for a chroma block.
  • the two color components share the same intra prediction mode.
  • To reduce cross-component redundancy, a cross-component linear model (CCLM) prediction mode is used, in which the chroma samples are predicted based on the reconstructed luma samples of the same CU using a linear model:
  • pred_C(i, j) = α · rec_L′(i, j) + β
  • pred_C(i, j) represents the predicted chroma samples in a CU and rec_L′(i, j) represents the downsampled reconstructed luma samples of the same CU for color formats 4:2:0 or 4:2:2, while rec_L′(i, j) represents the reconstructed luma samples of the same CU for color format 4:4:4.
  • CCLM parameters α and β are derived by minimizing the regression error between the neighboring reconstructed luma and chroma samples around the current block, in the standard least-squares form:
  • α = (N · Σ(L(n) · C(n)) − ΣL(n) · ΣC(n)) / (N · Σ(L(n) · L(n)) − ΣL(n) · ΣL(n)), β = (ΣC(n) − α · ΣL(n)) / N
  • L(n) represents the down-sampled (for color formats 4:2:0 or 4:2:2) or original (for color format 4:4:4) top and left neighboring reconstructed luma samples, and C(n) represents the top and left neighboring reconstructed chroma samples.
  • the value of N is equal to twice the minimum of the width and height of the current chroma coding block.
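  • The sketch below mirrors this least-squares derivation in floating point; the actual codec uses fixed-point arithmetic with lookup tables, so treat it as illustrative only.

```python
def cclm_lms(luma, chroma):
    """Least-squares derivation of the CCLM parameters alpha and beta from
    neighboring reconstructed luma L(n) and chroma C(n) samples, per the
    regression described above."""
    n = len(luma)
    s_l, s_c = sum(luma), sum(chroma)
    s_ll = sum(l * l for l in luma)
    s_lc = sum(l * c for l, c in zip(luma, chroma))
    denom = n * s_ll - s_l * s_l
    alpha = (n * s_lc - s_l * s_c) / denom if denom else 0.0
    beta = (s_c - alpha * s_l) / n
    return alpha, beta

# Chroma exactly half of luma -> alpha = 0.5, beta = 0.0
print(cclm_lms([64, 80, 96, 112], [32, 40, 48, 56]))
```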
  • the CCLM luma-to-chroma prediction mode is added as one additional chroma intra prediction mode.
  • one more RD cost check for the chroma components is added for selecting the chroma intra prediction mode.
  • when intra prediction modes other than the CCLM luma-to-chroma prediction mode are used for the chroma components of a CU, CCLM Cb-to-Cr prediction is used for Cr component prediction.
  • FIG. 8 shows the location of the left and above reconstructed samples and the sample of the current block involved in the CCLM mode.
  • This regression error minimization computation is performed as part of the decoding process, not just as an encoder search operation, so no syntax is used to convey the ⁇ and ⁇ values.
  • the CCLM prediction mode also includes prediction between the two chroma components, e.g., the Cr component is predicted from the Cb component. Instead of using the reconstructed sample signal, the CCLM Cb-to-Cr prediction is applied in the residual domain. This is implemented by adding a weighted reconstructed Cb residual to the original Cr intra prediction to form the final Cr prediction:
  • pred*_Cr(i, j) = pred_Cr(i, j) + α · resi_Cb′(i, j)
  • the scaling factor α is derived in a similar way as in the CCLM luma-to-chroma prediction. The only difference is the addition of a regression cost relative to a default α value in the error function, so that the derived scaling factor is biased towards a default value of −0.5, as follows:
  • α = (N · Σ(Cb(n) · Cr(n)) − ΣCb(n) · ΣCr(n) + λ · (−0.5)) / (N · Σ(Cb(n) · Cb(n)) − ΣCb(n) · ΣCb(n) + λ)
  • Cb(n) represents the neighboring reconstructed Cb samples, Cr(n) represents the neighboring reconstructed Cr samples, and λ is equal to Σ(Cb(n) · Cb(n)) >> 9.
  • to perform cross-component prediction, the reconstructed luma block needs to be downsampled to match the size of the chroma signal.
  • the default downsampling filter used in CCLM mode (e.g., the 6-tap filter depicted in FIG. 9B) is as follows:
  • Rec′_L(i, j) = (2 · Rec_L(2i, 2j) + 2 · Rec_L(2i, 2j+1) + Rec_L(2i−1, 2j) + Rec_L(2i+1, 2j) + Rec_L(2i−1, 2j+1) + Rec_L(2i+1, 2j+1) + 4) >> 3
  • note that this downsampling assumes the "type 0" phase relationship as shown in FIG. 9A for the positions of the chroma samples relative to the positions of the luma samples, e.g., collocated sampling horizontally and interstitial sampling vertically.
  • In multi-directional LM (MDLM), two additional CCLM modes are proposed: LM-T, where the linear model parameters are derived only based on the top neighboring samples as shown in FIG. 10A, and LM-L, where the linear model parameters are derived only based on the left neighboring samples as shown in FIG. 10B.
  • CCLM from luma to chroma prediction as in JEM is adopted in VTM-2.0.
  • JVET-L0338 and JVET-L0191 are further adopted into VTM-3.0.
  • Three CCLM modes are supported: INTRA_LT_CCLM (the one in JEM), plus the two additional modes INTRA_L_CCLM (LM-L) and INTRA_T_CCLM (LM-T).
  • the differences among the three modes are which neighboring samples are utilized to derive the linear model parameters (e.g., α, β).
  • denote the chroma block size by nTbW x nTbH, and the availability of the top and left blocks of the current block by availT and availL, respectively.
  • in LM-LT, the above row and left column may be utilized to derive the linear model parameters:
  • nS = (availL && availT) ? Min(nTbW, nTbH) : (availL ? nTbH : nTbW) (7)
  • xS and yS are set to 1 (e.g., no sub-sampling regardless of whether it is a non-square or square block).
  • the luma samples may be those downsampled luma samples instead of directly using the reconstructed luma samples.
  • the 2 or 4 points are selected with equal distances.
  • the block width and height of the current chroma block are W and H, respectively.
  • the top-left coordinate of current chroma block is [0, 0] .
  • the two above samples’ coordinates are [floor (W/4) , -1] and [floor (3*W/4) , -1] .
  • the two left samples’ coordinates are [-1, floor (H/4) ] and [-1, floor (3*H/4) ] .
  • the selected samples are painted in solid color (e.g., grey or black) as depicted in FIG. 11A.
  • FIG. 11A shows an example when both above and left neighboring samples are available. Subsequently, the 4 samples are sorted according to luma sample intensity and classified into 2 groups. The two larger samples and the two smaller samples are respectively averaged. The cross-component prediction model is derived with the 2 averaged points. Alternatively, the maximum and minimum values of the four samples are used to derive the LM parameters.
  • the four selected above samples’ coordinates are [W/8, -1] , [W/8 + W/4, -1] , [W/8 + 2*W/4, -1] , and [W/8 + 3*W/4, -1] .
  • the selected samples are painted in solid color (e.g., grey or black) as depicted in FIG. 11B.
  • FIG. 11B shows an example when only above neighboring samples are available and top-right is not available.
  • the four selected left samples' coordinates are [-1, H/8], [-1, H/8 + H/4], [-1, H/8 + 2*H/4], and [-1, H/8 + 3*H/4].
  • W’ is the available number of above neighboring samples, which can be 2*W.
  • the four selected above samples’ coordinates are [W’/8, -1] , [W’/8 + W’/4, -1] , [W’/8 + 2*W’/4, -1] , and [W’/8 + 3*W’/4, -1] .
  • H’ is the available number of left neighboring samples, which can be 2*H.
  • the four selected left samples' coordinates are [-1, H'/8], [-1, H'/8 + H'/4], [-1, H'/8 + 2*H'/4], and [-1, H'/8 + 3*H'/4].
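  • The sketch below collects the selection rules just listed into one helper; the parameter names are illustrative, with w2/h2 standing for the available numbers of above/left neighbors (W' and H' in the text).

```python
from math import floor

def select_neighbor_positions(w, h, avail_above, avail_left, w2=None, h2=None):
    """Positions of the neighboring samples used for LM parameter derivation,
    following the coordinate rules listed above. The top-left of the chroma
    block is (0, 0); row -1 is above, column -1 is to the left."""
    if avail_above and avail_left:
        return [(floor(w / 4), -1), (floor(3 * w / 4), -1),
                (-1, floor(h / 4)), (-1, floor(3 * h / 4))]
    if avail_above:
        wp = w2 if w2 is not None else w   # W': available above neighbors
        return [(wp // 8 + i * (wp // 4), -1) for i in range(4)]
    if avail_left:
        hp = h2 if h2 is not None else h   # H': available left neighbors
        return [(-1, hp // 8 + i * (hp // 4)) for i in range(4)]
    return []

print(select_neighbor_positions(8, 8, True, True))
# [(2, -1), (6, -1), (-1, 2), (-1, 6)]
```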
  • JVET-L0191 proposes to replace the LMS algorithm for the linear model parameters α and β with a straight-line equation, the so-called two-point method. The two smaller values among the four points are averaged, denoted as A; the two greater values (the remaining two) are averaged, denoted as B. A and B are depicted in FIG. 12.
  • S is set equal to iShift, α is set equal to a, and β is set equal to b;
  • g_aiLMDivTableLow and g_aiLMDivTableHigh are two tables each with 512 entries. Each entry stores a 16-bit integer.
  • the prediction values are further clipped to the allowed range of chroma values.
  • Clip1C(x) = Clip3(0, (1 << BitDepthC) − 1, x) (15)
  • (a, b) and k (set to S) are the linear model parameters derived from sub-section 2.3.6.3.1, 2.3.6.3.2 or 2.3.6.3.3 depending on the selected CCLM mode for the chroma block, nTbW and nTbH are the width and height of the chroma block, respectively, and pDsY is the downsampled collocated luma reconstructed block.
  • the downsampled collocated luma samples pDsY[x][y], with x = 0..nTbW − 1 and y = 0..nTbH − 1, are derived as follows:
  • pY indicates the collocated luma reconstructed samples before deblocking filter.
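  • The following sketch illustrates the two-point derivation described in this sub-section, in floating point; the specification performs the division via iShift and the g_aiLMDivTableLow/g_aiLMDivTableHigh lookup tables, so this is illustrative rather than bit-exact.

```python
def two_point_model(points):
    """Two-point derivation of the linear model: average the two smaller
    and the two larger luma samples into points A and B, then fit a
    straight line through them. points: list of (luma, chroma) pairs."""
    pts = sorted(points)  # sort by luma sample intensity
    min_l = (pts[0][0] + pts[1][0]) / 2.0  # point A
    min_c = (pts[0][1] + pts[1][1]) / 2.0
    max_l = (pts[2][0] + pts[3][0]) / 2.0  # point B
    max_c = (pts[2][1] + pts[3][1]) / 2.0
    alpha = (max_c - min_c) / (max_l - min_l) if max_l != min_l else 0.0
    beta = min_c - alpha * min_l
    return alpha, beta

print(two_point_model([(60, 30), (70, 36), (90, 44), (100, 52)]))
# (0.5, 0.5)
```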
  • In the Two-Step Cross-component Prediction Mode (TSCPM), a linear model is first applied to the collocated luma block to form an internal (temporary) prediction block.
  • the internal prediction block is then down-sampled to generate the final chroma prediction block.
  • FIG. 13 depicts the basic procedures of the chroma prediction block generation process.
  • the left square denotes the original reconstructed luma sample located at (x, y) of the collocated luma block, denoted by RL(x, y).
  • either 4 or 2 samples may be selected, and the averages of the two larger values and the two smaller values are utilized to calculate the parameters.
  • first, the ratio r of the width and height of the chroma coded block is calculated as in Eq. 19. Then, based on the availability of the above row and left column, four or two samples are selected.
  • FIG. 14 shows an example of the locations regarding to the four neighboring samples. The selected samples are painted in yellow.
  • the 4 samples are sorted according to luma sample intensity and classified into 2 groups.
  • the two larger samples and two smaller samples are respectively averaged.
  • the cross-component prediction model is derived with the 2 averaged points.
  • in one example, a similar way as described in 2.3.6.4 may be utilized to derive α, β and shift, with the average of the two larger selected sample values as (MaxLuma, MaxChroma) and the average of the two smaller selected sample values as (MinLuma, MinChroma).
  • when 2 samples are selected, a chroma prediction model is derived according to their luminance and chrominance values. In one example, a similar way as described in 2.3.6.4 may be utilized to derive α, β and shift.
  • the temporary chroma prediction block is generated with Eq. 20, where P′c(x, y) denotes a temporary prediction sample, α and β are the model parameters, and RL(x, y) is a reconstructed luma sample:
  • P′c(x, y) = α × RL(x, y) + β (20)
  • a six-tap filter (e.g., [1 2 1; 1 2 1]) is introduced for the down-sampling process of the temporary chroma prediction block, as shown in Eq. 21:
  • Pc(x, y) = (2 × P′c(2x, 2y) + 2 × P′c(2x, 2y+1) + P′c(2x−1, 2y) + P′c(2x+1, 2y) + P′c(2x−1, 2y+1) + P′c(2x+1, 2y+1) + offset0) >> 3 (21)
  • the two variables offset0 and offset1 are integer values.
  • in one example, the variables offset0 and offset1 may be set to 4 and 1, respectively.
  • alternatively, offset0 and offset1 may be set to 0.
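  • Below is a minimal sketch of the two steps (the Eq. 20 linear model on the collocated luma block, then the Eq. 21 six-tap downsampling). Left-boundary handling, where the six-tap filter cannot apply and offset1 presumably comes into play, is omitted, so the x = 0 column is left unfiltered; treat this as illustrative only.

```python
def tscpm_predict(rec_luma, alpha, beta, offset0=4):
    """Two-step cross-component prediction sketch.
    rec_luma: collocated reconstructed luma block, indexed [y][x].
    Returns the downsampled chroma prediction block (interior columns)."""
    hh, ww = len(rec_luma), len(rec_luma[0])
    # Step 1: temporary block P' at luma resolution (Eq. 20)
    p = [[alpha * s + beta for s in row] for row in rec_luma]
    # Step 2: six-tap [1 2 1; 1 2 1] downsampling (Eq. 21)
    out = [[0] * (ww // 2) for _ in range(hh // 2)]
    for y in range(hh // 2):
        for x in range(1, ww // 2):
            out[y][x] = int(2 * p[2 * y][2 * x] + 2 * p[2 * y + 1][2 * x]
                            + p[2 * y][2 * x - 1] + p[2 * y][2 * x + 1]
                            + p[2 * y + 1][2 * x - 1] + p[2 * y + 1][2 * x + 1]
                            + offset0) >> 3
    return out
```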
  • In addition to the mode deriving the model from both left and above neighboring samples (denoted TSCPM-LT), two additional TSCPM modes, denoted TSCPM-L and TSCPM-A, are defined, wherein only left or only above neighboring samples are utilized, respectively.
  • a flag is used to signal whether the chroma intra prediction mode is TSCPM or not. This flag (as a 2nd bin) is coded right after the indication of DM mode usage (1st bin).
  • the detailed bin strings for each chroma mode are tabulated in the table below.
  • Table 2-4 Coding bins signaling with TSCPM of chroma intra modes in TAVS3.
  • TSCPM utilizes the luma information to predict a chroma color component (e.g., Cb or Cr). It is noticed that when a second chroma color component (e.g., Cr) is to be coded, the other two color components (e.g., luma and Cb) are already available. How to utilize that information needs to be further studied.
  • the prediction signal of a first color component C0 may be derived using the reconstructed representative samples of corresponding blocks of a second and/or third color components, denoted by C1 and C2.
  • the prediction signal of C0 may further depend on the neighboring (e.g., adjacent or non-adjacent) samples of reconstructed samples of C1 and C2.
  • the prediction signal of C0 may further depend on the neighboring (e.g., adjacent or non-adjacent) samples of reconstructed samples of C0.
  • cross-component prediction may represent any variant of coding methods that derive the reconstruction/prediction signal of a first color component using the information of a second color component.
  • the coding/decoding process of a PMC coded C0 block may depend on reconstructed and/or prediction samples of representative samples with C1 and/or C2 color components corresponding to the current C0 samples.
  • a linear function may be applied to the representative samples with C1 and/or C2 color components and/or the current C0 block's neighboring samples (including adjacent or non-adjacent).
  • a non-linear function may be applied to the representative samples with C1 and/or C2 color components and/or the current C0 block's neighboring samples (including adjacent or non-adjacent).
  • the final predictor of one sample in the C0 block denoted by FPred c0 is derived by using the following equation:
  • FPred_c0 = X × TPred_c0 + Y × (Rec_c2 − FPred_c2) + Z (4-1)
  • TPred c0 represents a temporary prediction value of the sample using existing prediction modes (e.g., intra/inter/IBC prediction modes)
  • Rec c2 and FPred c2 represent the reconstruction and final prediction values of representative C2 samples.
  • the final predictor of one sample in the C0 block denoted by FPred c0 is derived using the following equation:
  • FPred_c0 = X × (α_c0 × Rec_c1 + β_c0) + Y × (Rec_c2 − (α_c2 × Rec_c1 + β_c2)) + Z (4-2)
  • Rec c1 and Rec c2 represent the reconstruction values of representative C1 and C2 samples, respectively.
  • the final predictor of one sample in the C0 block denoted by FPred c0 is derived by using the following equation:
  • FPred_c0 = (X × α_c0 − Y × α_c2) × Rec_c1 + (X × β_c0 − Y × β_c2) + Y × Rec_c2 + Z (4-3)
  • two temporary blocks (with size equal to K × L) for C0 and C2 may be firstly derived, according to the linear model parameters (X × α_c0, X × β_c0) and (Y × α_c2, Y × β_c2) and the corresponding C1 block with size equal to K × L, respectively.
  • the temporary blocks may be further downsampled to K′ × L′ with/without clipping.
  • the two temporary blocks are derived using the linear model parameters applied to the corresponding C1 block, similar to CCLM/TSCPM process.
  • one temporary block (with size equal to K × L) may be derived, according to (X × α_c0 − Y × α_c2, X × β_c0 − Y × β_c2) and the corresponding C1 block with size equal to K × L.
  • the temporary block may be further downsampled to K’ ⁇ L’ with/without clipping.
  • the final prediction may be generated by adding the collocated sample in the temporary block (with/without being downsampled) to Y × Rec_c2, or by subtracting the collocated sample in the temporary block (with/without being downsampled) from Y × Rec_c2.
  • a temporary C1 block may be firstly derived, e.g., using a downsampling filter, from the C1 block with size equal to K × L.
  • linear model parameters (X × α_c0, X × β_c0) and (Y × α_c2, Y × β_c2) may be applied to the temporary C1 block, followed by adding the collocated sample of the temporary block (after the linear model parameters are applied) to Y × Rec_c2, or by subtracting that collocated sample from Y × Rec_c2.
  • X and Y are two variables which may represent weighting factors and Z is an offset value; α_c0 and α_c2 are two variables applied to representative C1 samples; β_c0 and β_c2 are offset values.
  • X or Y or Z is equal to 1.
  • X or Y or Z is equal to 0.
  • X is equal to 1
  • Y is equal to -1
  • Z is equal to 0.
  • X or Y or Z is equal to 2^K or −2^K, wherein K is an integer value, such as a value in the range [−M, N] wherein M and N are no smaller than 0.
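  • A minimal sketch of the per-sample predictor of Eq. 4-2 follows, with the X = 1, Y = −1, Z = 0 choice listed above as defaults; clipping to the valid sample range is omitted, and the function name is illustrative.

```python
def pmc_predict_sample(rec_c1, rec_c2, a_c0, b_c0, a_c2, b_c2,
                       x=1, y=-1, z=0):
    """Final PMC predictor for one C0 sample per Eq. 4-2:
    FPred_c0 = X*(a_c0*Rec_c1 + b_c0) + Y*(Rec_c2 - (a_c2*Rec_c1 + b_c2)) + Z
    rec_c1, rec_c2: representative C1/C2 reconstructed samples."""
    return x * (a_c0 * rec_c1 + b_c0) + y * (rec_c2 - (a_c2 * rec_c1 + b_c2)) + z

# With X=1, Y=-1, Z=0 this reduces to
# (a_c0 + a_c2)*Rec_c1 + (b_c0 + b_c2) - Rec_c2, matching Eq. 4-3.
print(pmc_predict_sample(rec_c1=100, rec_c2=60, a_c0=0.5, b_c0=2,
                         a_c2=0.25, b_c2=1))
```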
  • One or multiple of the variables used in above equations may be same for all samples within one video unit (e.g., coding block/prediction block/transform block) .
  • a first sample in the video unit may select a first set of variable values; and a second sample in the video unit may select a second set of variable values wherein at least one variable value is different in the first and second sets.
  • in one example, with the color components denoted by Ci (i being 0 to 2), C0 is the Cr color component, C1 is the Y color component, and C2 is the Cb color component.
  • alternatively, C0 is the luma color component (Y in YCbCr format; G in RGB format), and C1 and C2 are the remaining two color components.
  • a representative sample may be obtained by down-sampling.
  • Rec c2 may be the corresponding C2 sample.
  • the final prediction values may be further clipped to a specific range.
  • How to select and/or how many representative samples of C1 and/or C2 to be used for predicting one C0 sample may be determined on-the-fly.
  • how to select the representative samples of C1 and/or C2 may be based on the position of the current C0 sample and/or color format.
  • the representative C1/C2 samples may be those surrounding the C1/C2 samples corresponding to the sample.
  • two representative luma reconstruction samples are defined as the samples located at (2X, 2Y) , (2X, 2Y+1) .
  • two representative luma reconstruction samples are defined as the samples: (2X, 2Y) , (2X+1, 2Y) .
  • for example, six representative luma reconstruction samples are depicted in FIG. 15A.
  • FIGS. 15A-B show examples of selection of representative C1 samples in PMC.
  • for example, eight representative luma reconstruction samples are depicted in FIG. 15B.
  • the representative C2 sample may be at the same coordinates as the current C0 sample.
  • the representative samples may be defined as those reconstructed samples before in-loop filtering methods (e.g., deblocking filter/SAO/ALF/CCALF) are applied.
  • the representative samples may be defined as a function of multiple reconstructed samples before in-loop filtering methods (e.g., deblocking filter/SAO/ALF/CCALF) are applied.
  • the function may be defined as a downsample filtering process.
  • the function may be defined as linear function (e.g., weighted average) or non-linear function.
  • Linear model parameters may be applied to representative C1 samples.
  • (α_c0, β_c0) are linear model parameters derived for the current sample/current block.
  • they may be derived using the neighboring samples of current block C0 and neighboring samples of C1 block.
  • (α_c2, β_c2) are linear model parameters derived for the representative C2 samples/the C2 block covering the representative C2 samples.
  • they may be derived using the neighboring samples of C2 block and neighboring samples of C1 block.
  • the linear model parameters may be derived in the same way as that used in VVC/JEM, or as used in TSCPM, or as described in PCT/CN2018/114158, PCT/CN2018/118799, PCT/CN2018/119709, PCT/CN2018/125412, PCT/CN2019/070002, PCT/CN2019/075874, PCT/CN2019/075993, PCT/CN2019/076195, PCT/CN2019/079396, PCT/CN2019/079431, PCT/CN2019/079769, each of which is incorporated by reference herein in its entirety for all purposes.
  • linear model parameters may be derived from the neighboring reconstructed C1 samples without downsampling.
  • linear model parameters may be derived from the neighboring reconstructed C0/C2 samples with upsampling.
  • the linear model parameters may be firstly clipped to a range before being used in a CCP (TSCPM or CCLM) mode.
  • Multiple PMC modes may be allowed, with different variable values, different linear model parameter derivation methods, different downsampling/upsampling methods, and/or different locations of reconstructed/downsampled reconstructed neighboring samples for linear model derivation.
  • in one example, one mode is defined that may only utilize neighboring samples from the above row and/or above-right row.
  • in one example, one mode is defined that may only utilize neighboring samples from the left column and/or below-left column.
  • one mode is defined in which multiple linear models (e.g., multiple sets of linear model parameters) may be derived and applied to one block.
  • the current luma reconstruction blocks and/or neighboring reconstructed samples may be split into M (M > 1) categories. Different categories may utilize different linear models.
  • one mode is defined in which the downsampling filter is defined to be a subsampling filter.
  • the L representative luma reconstruction samples are defined as the samples located at (2x − 1, 2y), (2x − 1, 2y + 1), (2x, 2y), (2x, 2y + 1), (2x + 1, 2y) and (2x + 1, 2y + 1).
  • K samples nearest the position (a, b) may be used.
  • the prediction sample of the chroma block may only depend on K of the L representative luma reconstruction samples (K is an integer value).
  • the prediction sample of the chroma block may only depend on the sample located at (2x, 2y).
  • the prediction sample of the chroma block may only depend on the sample located at (2x + 1, 2y).
  • the prediction sample of the chroma block may only depend on the sample located at (2x + 1, 2y + 1).
  • the prediction sample of the chroma block may only depend on the sample located at (2x, 2y + 1).
  • the prediction sample of the chroma block may only depend on the samples located at (2x, 2y) and (2x, 2y + 1).
  • the residual information of the video unit may be further signaled.
  • signaling of the residual information of the video unit may be omitted, e.g., only zero coefficients are available.
  • in one example, a flag (e.g., the coded block flag (CBF) for the C0 color component) is not signaled and is, in an example, inferred to be equal to 1.
  • in one example, signaling of the flag (e.g., the coded block flag (CBF)) for the C2 color component may be skipped, and the CBF for the C2 color component is inferred to be equal to 1.
  • whether to and/or how to signal the CBF for the C2 block may depend on the usage of PMC and/or which PMC mode.
  • whether to and/or how to signal the CBF for the C0 block may depend on the usage of PMC and/or which PMC mode.
  • in one example, signaling of the flag (e.g., the coded block flag (CBF)) for the C1 color component may be skipped, and the CBF for the C1 color component is inferred to be equal to 1.
  • whether to and/or how to signal the CBF for the C1 block may depend on the usage of PMC and/or which PMC mode.
  • whether to and/or how to signal the CBF for the C0 block may depend on the usage of PMC and/or which PMC mode.
  • PMC modes may be treated as some additional prediction modes.
  • Whether to signal an indication of PMC mode may depend on the coding mode of the current block.
  • the indication is signaled only when the current block is coded with one or multiple specific modes.
  • Whether to signal an indication of PMC mode may depend on the color format.
  • the indication is not signaled if the color format is 4: 0: 0.
  • the bin/flag indicating the utilization of PMC mode for C0 may be signaled/parsed according to the CBF flag and/or prediction mode of C1 and/or C2.
  • the PMC mode may be signaled when the CBF flag of C1 and/or C2 is 1 or 0, and/or the prediction mode of C1 and/or C2 is one of the CCP (e.g., TSCPM/CCLM) modes.
  • the PMC mode may be inferred to be 0 if the CBF flag of C1 and/or C2 is 0 and/or the prediction mode of C1 and/or C2 is not one of the CCP (e.g., TSCPM/CCLM) modes.
  • indication of enabling one of multiple PMC modes may be firstly signaled/parsed in addition to the existing intra prediction modes.
  • an index to the multiple PMC modes may be further signaled.
  • in one example, a first bin may be coded to indicate the usage of DM mode, followed by a 2nd bin coded to indicate the usage of CCP (e.g., TSCPM/CCLM) modes and a 3rd bin coded to indicate the usage of PMC mode.
  • alternatively, the 2nd bin is coded to indicate the usage of PMC and the 3rd bin is coded to indicate the usage of CCP (e.g., TSCPM/CCLM) modes.
  • alternatively, a first bin may be coded to indicate the usage of PMC mode, followed by a bin coded to indicate the usage of DM and/or CCP (e.g., TSCPM/CCLM) modes.
  • PMC modes may be treated as additional variants of cross-component prediction methods, such as being part of a set of CCP (e.g., CCLM/TSCPM) modes.
  • whether to signal/parse the PMC modes may depend on the usage of CCP modes.
  • an index may be further signaled to indicate which of the multiple CCP mode is applied to the block.
  • the indication of the category index may be firstly coded, followed by an index relative to the category if needed.
  • the indication of the category index may be coded after an index relative to the category, if needed.
  • same or different contexts may be utilized to code a first index relative to a first category (e.g., indication of TSCPM) and a second index relative to a second category.
  • the order of signalling DM/CCP/PMC mode may depend on the coded mode information of a spatial block.
  • the indication of PMC mode may be signalled before the indication of other CCP/DM modes.
  • the indication of DM mode may be signalled before the indication of CCP modes.
  • the indication of DM mode may be signalled before the indication of PMC mode.
  • a PMC mode is treated as a new intra prediction mode in addition to existing ones.
  • different PMC modes may be assigned with different mode indices and coded with binary bin strings.
  • in one example, indications (e.g., a flag/bin) of the usage of PMC mode may be bypass coded, e.g., without any context.
  • alternatively, indications (e.g., a flag/bin) of the usage of PMC mode may be context coded, e.g., with one or multiple contexts.
  • the context may be derived using neighboring blocks' mode information (e.g., equal to PMC or equal to CCP) and/or the availability of neighboring blocks.
  • the context may be derived according to the block dimension (e.g., width and/or height) of current block.
  • when three PMC modes are enabled for processing a video unit (e.g., video/picture/slice/brick/tile/subpicture), the following coding methods of indications of usage of one mode may be utilized.
  • denote the three modes by PMC_Mode0, PMC_Mode1 and PMC_Mode2, wherein PMC_Mode0 indicates the PMC mode using both left and above neighboring samples to derive linear model parameters;
  • PMC_Mode1 and PMC_Mode2 indicate the PMC mode using only left and only above neighboring samples to derive linear model parameters, respectively.
  • Some examples are tabulated in Table 4-1 through Table 4-7 to describe the corresponding bin strings for different chroma intra prediction modes. Differences compared to the design before introducing PMC are highlighted in bold italicized text. It is noted that the TSCPM tabulated in those tables may be replaced by other CCP methods, and the bin orders/mode indices may also be switched.
  • Whether to signal/parse the indications of a CCP method may depend on the availability of neighboring samples (e.g., adjacent or non-adjacent) .
  • in one example, when the above neighboring samples are not available, the indications of a CCP method (e.g., LM-T, TSCPM-T, PMC-T) that relies on above neighboring samples may not be signalled.
  • similarly, when the left neighboring samples are not available, the indications of a CCP method (e.g., LM-L, TSCPM-L, PMC-L) that relies on left neighboring samples may not be signalled.
  • in one example, when the required neighboring samples are not available, the indications of a CCP method (e.g., LM-T, LM-L, TSCPM-T, TSCPM-L, PMC-T, PMC-L, LM-LT, TSCPM-LT, PMC-LT, or other variants of CCLM/TSCPM/PMC) may not be signalled.
  • alternatively, the indications of a CCP method (e.g., LM-T, LM-L, TSCPM-T, TSCPM-L, PMC-T, PMC-L, CCLM, TSCPM, PMC) may still be signalled, but the coding method is inferred to be disabled.
  • the above methods may be applied at the level of a video processing unit (e.g., sequence/video/picture/slice/tile/brick/subpicture/CTU row/CTU/VPDU/CU/PU/TU/sub-blocks in a CU/PU).
  • the indications of whether and/or how to use the above-mentioned methods may be signaled in the SPS/VPS/PPS/picture header/slice header/tile group header/group of CTUs/CTU/other kinds of video data units.
  • whether and/or how to use the above-mentioned methods may depend on the decoded information, such as block dimension, position of a block relative to a video processing unit (e.g., relative to a slice), slice/picture type, partitioning types (e.g., dual tree or single tree), etc.
  • a conforming bitstream follows the rule that such methods are disabled when certain conditions (e.g., depending on block dimension) are satisfied.
  • Table 4-1 Bin strings of each chroma intra prediction mode (PMC is treated as one of the TSCPM modes, and one bin (e.g., the 4th bin) is further signaled to indicate whether it belongs to TSCPM or PMC).
  • Table 4-2 Bin strings of each chroma intra prediction mode (PMC is treated as one of the TSCPM modes, and one bin (e.g., the 4th bin) is further signaled to indicate whether it belongs to TSCPM or PMC).
  • Table 4-3 Bin strings of each chroma intra prediction mode (PMC is treated as a new category (indicated by the 1st bin), before the indication of TSCPM represented by the 2nd bin).
  • Table 4-4 Bin strings of each chroma intra prediction mode (PMC is treated as a new category (indicated by the 0th bin)).
  • Table 4-5 Bin strings of each chroma intra prediction mode (indication of PMC modes is signaled after TSCPM modes, denoted by the 2nd bin).
  • Table 4-6 Bin strings of each chroma intra prediction mode (each of the PMC modes is treated as a new chroma intra prediction mode; all PMC modes are added after the existing modes).
  • Table 4-7 Bin strings of each chroma intra prediction mode (mode index signaled after CBF flags) .
  • An example of the decoding process is illustrated as follows. Prediction from Multiple Cross-components (PMC) modes are proposed, in which the prediction of component C0 is derived from the reconstructed samples of the other color components C1 and C2.
  • in this example, C0 is the Cr color component, C1 is the luma color component, and C2 is the Cb color component; that is, the prediction of the Cr component is derived by a linear combination of the Y and Cb reconstructed samples.
  • three PMC modes (e.g., PMC_LT, PMC_L and PMC_T) are allowed, and they are signaled with a flag after TSCPM, as illustrated in Table 5-1.
  • the indication of the explicit PMC mode indices (e.g., PMC_LT, PMC_L and PMC_T) aligns with the representation of the TSCPM mode indices (e.g., TSCPM_LT, TSCPM_L and TSCPM_T).
  • the coded block flag (cbf) of the Cb block is inferred to be 1 if the corresponding Cb/Cr block is coded with PMC mode.
  • TSCPM_LT/PMC_LT is employed for the case that the left and/or above neighboring reference samples are not available.
  • in this way, bin2 and bin3, which indicate the utilization and indices of the enhanced TSCPM/PMC modes (e.g., TSCPM_L, TSCPM_T/PMC_L, PMC_T), can be removed.
  • Table 5-1 Bin strings of each chroma intra prediction mode (PMC is treated as one of the TSCPM modes, and one bin (e.g., the 4th bin) is further signaled to indicate whether it belongs to TSCPM or PMC).
  • first, the inter-channel linear model parameters (α0, β0) of Y-Cb and the model parameters (α1, β1) of Y-Cr are obtained from neighboring reconstructed samples.
  • the linear model parameter derivation methods of PMC_LT, PMC_L and PMC_T are respectively identical to those of TSCPM_LT, TSCPM_L and TSCPM_T in AVS3.
  • second, an internal block IPred, which has the identical dimension of the luma coding block, is generated by the linear model as follows:
  • IPred = (α0 + α1) × Rec_Y + (β0 + β1), (22)
  • where Rec_Y is the reconstructed samples of the Y component.
  • subsequently, IPred′ is generated from IPred, employing the same set of down-sampling filters as those in TSCPM.
  • the final prediction FPred_Cr of Cr can then be formulated as follows:
  • FPred_Cr = Clip(0, (1 << bitDepth) − 1, IPred′ − Rec_Cb). (23)
  • where Rec_Cb is the reconstructed samples of the Cb component.
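  • A compact sketch of this Cr decoding flow (Eqs. 22-23) follows. The down-sampling step is left to a caller-supplied function, since it matches the TSCPM filters described earlier; the function names and signatures are illustrative assumptions, not the reference implementation.

```python
def pmc_cr_prediction(rec_y, rec_cb, a0, b0, a1, b1, bit_depth=10,
                      downsample=None):
    """PMC Cr prediction sketch: build the internal block
    IPred = (a0+a1)*Rec_Y + (b0+b1) at luma resolution (Eq. 22),
    downsample it to IPred', then subtract the reconstructed Cb block
    and clip to the valid sample range (Eq. 23)."""
    ipred = [[(a0 + a1) * s + (b0 + b1) for s in row] for row in rec_y]
    ipred_ds = downsample(ipred) if downsample else ipred  # IPred'
    max_val = (1 << bit_depth) - 1
    return [[min(max_val, max(0, int(p) - cb))
             for p, cb in zip(prow, cbrow)]
            for prow, cbrow in zip(ipred_ds, rec_cb)]
```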
  • a Prediction from Multiple Cross-components (PMC) method is proposed wherein the prediction of Cr component is derived by the linear combination of the Y and Cb reconstructed samples.
  • An internal block IPred is firstly derived according to a linear model applied to the corresponding luma block, and the final prediction of Cr is set to the difference between the downsampled temporary block and the reconstructed Cb block. More specifically, the final prediction of the Cr block is defined as follows:
  • IPred = A × Rec_Y + B, (24)
  • where Rec_Y denotes the reconstruction of the Y component and IPred is an internal block that has the identical dimension of the luma coding block.
  • IPred′ represents the down-sampled IPred, which employs the same set of down-sampling filters as in TSCPM.
  • the linear parameters (A, B) are set to (α0 + α1, β0 + β1), wherein (α0, β0) and (α1, β1) are the two sets of linear model parameters derived for Cb and Cr, respectively, such as using TSCPM/CCLM methods.
  • the flag/bin indicating the enabling of Enhanced-TSCPM (e.g., TSCPM_L, TSCPM_T) is implicitly inferred to be 0 without signaling/parsing.
  • the flag/bin (index 2 in Table 5-1) indicating the type of TSCPM (e.g. TSCPM_LT or Enhanced-TSCPM) is removed.
  • the flag/bin (index 3 in Table 5-1) for discriminating TSCPM-L and TSCPM-T is also excluded.
  • FIG. 17 shows a flowchart of an example video processing method 1700.
  • prediction values of samples of a first component of a video block of a video are determined using representative samples of a second component of the video and/or a third component of the video.
  • a conversion is performed between the video block and a bitstream representation of the video block according to the determined prediction values of the first component.
  • the determining is based on reconstructed values of the representative samples or prediction values of the representative samples.
  • the representative samples are obtained during the conversion.
  • the prediction values of the first component for one sample of the video block are obtained using an equation.
  • X or Y or Z is equal to 1, or X or Y or Z is equal to 0, or X is equal to 1, Y is equal to −1 and Z is equal to 0, or X or Y or Z is equal to 2^K or −2^K, where K is an integer value in the range [−M, N], where M and N are greater than or equal to 0.
  • the equation includes variables that are either pre-defined, or signaled in a bitstream, or derived.
  • the method of FIG. 17 further comprises deriving two temporary video blocks for the first component and the third component according to two sets of linear model parameters corresponding to a second video block associated with the second component, where the two temporary video blocks and the second video block have a first width and a first height that is different from a second width and a second height of the video block.
  • the two temporary blocks are derived using the linear model parameters applied to the second video block associated with the second component.
  • the method of FIG. 17 further comprises deriving one temporary video block according to linear model parameters corresponding to a second video block associated with the second component, where the one temporary video block and the second video block have a first width and a first height that is different from a second width and a second height of the video block.
  • the method of FIG. 17 further comprises deriving a temporary video block for the second component from a second video block associated with the second component, where the second video block has a first width and a first height that is different from a second width and a second height of the video block, applying linear model parameters to the temporary video block, and adding a collocated sample to or subtracting the collocated sample from the temporary video block after the linear model parameters are applied.
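Before down-sampling, rounding and clipping, the two-temporary-block derivation and the single merged-parameter derivation described above yield the same cross-component term (using the weighted-difference combination given later for FPred_c0). A minimal numpy sketch of this equivalence, with hypothetical names:

```python
import numpy as np

def cross_term_two_blocks(rec_c1, a_c0, b_c0, a_c2, b_c2, X=1, Y=1):
    # Variant 1: one temporary block per linear model at the second
    # component's K x L size, combined afterwards.
    t0 = X * (a_c0 * rec_c1 + b_c0)
    t2 = Y * (a_c2 * rec_c1 + b_c2)
    return t0 - t2

def cross_term_one_block(rec_c1, a_c0, b_c0, a_c2, b_c2, X=1, Y=1):
    # Variant 2: a single temporary block with merged parameters
    # (X*a_c0 - Y*a_c2, X*b_c0 - Y*b_c2).
    return (X * a_c0 - Y * a_c2) * rec_c1 + (X * b_c0 - Y * b_c2)

rec = np.arange(16.0).reshape(4, 4)          # toy 4x4 "reconstruction"
assert np.allclose(cross_term_two_blocks(rec, 0.5, 3, 0.25, 1),
                   cross_term_one_block(rec, 0.5, 3, 0.25, 1))
```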
  • the first component is a blue chroma component
  • the second component is a luma component
  • the third component is a red chroma component
  • the first component is the red chroma component
  • the second component is the luma component
  • the third component is the blue chroma component
  • the first component is the luma component or a blue component
  • the second component and the third component are remaining components.
  • a selection of the representative samples and a number of the representative samples of the second component and/or the third component are dynamically determined. In some embodiments, the selection of the representative samples is based on a position of a current sample of the first component and/or a color format. In some embodiments, the color format includes a 4:2:0 color format, and the representative samples of the second component and/or the third component surround the corresponding samples of the second component and/or the third component.
  • the representative samples include reconstructed samples before in-loop filtering methods. In some embodiments, the representative samples are a function of reconstructed samples before in-loop filtering methods. In some embodiments, linear model parameters are applied to the representative samples of the second component. In some embodiments, the linear model parameters include α_c0 and β_c0 that are derived for the samples or the video block, α_c0 is a variable applied to the representative samples of the second component, and β_c0 is an offset value.
  • α_c0 and β_c0 are derived using neighboring samples of the video block and neighboring samples of a second video block associated with the second component.
  • the linear model parameters include α_c2 and β_c2 that are derived for the representative samples of the third component or a third video block associated with the third component, α_c2 is a variable applied to the representative samples of the third component, and β_c2 is an offset value.
  • α_c2 and β_c2 are derived using neighboring samples of a second video block associated with the second component and neighboring samples of the third video block.
  • the linear model parameters are derived using methods from Versatile Video Coding (VVC), the Joint Exploration Model (JEM), or the two-step cross-component prediction mode (TSCPM).
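For intuition, a least-squares derivation of a linear model (alpha, beta) from neighboring sample pairs is sketched below. Actual CCLM/TSCPM derivations use low-complexity integer rules (e.g., min-max or averaged extrema) rather than a full regression, so this is only an illustrative stand-in:

```python
import numpy as np

def derive_linear_model(neigh_src, neigh_dst):
    """Least-squares fit dst ~ alpha * src + beta from neighboring samples.

    neigh_src: neighboring samples of the source component (e.g., luma)
    neigh_dst: collocated neighboring samples of the target component
    """
    src = np.asarray(neigh_src, dtype=np.float64)
    dst = np.asarray(neigh_dst, dtype=np.float64)
    var = src.var()
    if var == 0:                      # flat neighborhood: fall back to offset
        return 0.0, float(dst.mean())
    alpha = ((src - src.mean()) * (dst - dst.mean())).mean() / var
    beta = dst.mean() - alpha * src.mean()
    return alpha, beta
```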
  • the equation includes variables, and the prediction values are determined using one of a plurality of prediction from multiple cross-components (PMC) modes that include: different variable values or different derivation methods for linear model parameters, and/or different downsampling or upsampling methods, and/or different locations of reconstructed or downsampled reconstructed neighboring samples for derivation of linear model parameters.
  • residual information of the video block is further signaled.
  • a prediction from multiple cross-components (PMC) mode is enabled for the video block of the first component
  • residual information of the video block is omitted.
  • a flag that indicates a presence of non-zero coefficients in the video block of the first component is signaled based on a coding mode of the video block.
  • an indication of a prediction from multiple cross-components (PMC) mode for the video block is signaled based on a color format.
  • a bin or flag indicating a utilization of a prediction from multiple cross-components (PMC) mode of a first component is signaled or parsed according to a coded block flag (CBF) and/or a prediction mode of the second component and/or the third component.
  • the prediction values are determined using one of a plurality of prediction from multiple cross-components (PMC) modes, and an indication of the one PMC mode being enabled is signaled or parsed in addition to existing intra prediction modes.
  • an index to the plurality of PMC modes is signaled.
  • the prediction values are determined using one of a plurality of prediction from multiple cross-components (PMC) modes, and an indication of enabling the one PMC mode is signaled or parsed in addition to existing intra prediction modes.
  • the prediction values are determined using one of a plurality of prediction from multiple cross-components (PMC) modes, and the plurality of PMC modes are additional variants of cross-component prediction (CCP) modes or methods.
  • a determination to signal or parse the one PMC mode depends on a usage of one CCP mode.
  • the prediction values are determined using a cross-component prediction (CCP) method, and the CCP method is signaled based on availability of neighboring samples next to the samples of the first component.
  • an indication is not signaled for the CCP method that relies on the neighboring samples that are located above the samples of the first component and are unavailable.
  • an indication is not signaled for the CCP method that relies on the neighboring samples that are located left of the samples of the first component and are unavailable.
  • the prediction values are determined using a cross-component prediction (CCP) method or a prediction from multiple cross-components (PMC) mode, where the CCP method or the PMC mode is indicated via a signal in a video processing unit.
  • the method of FIG. 17 further includes performing a determination, based on decoded information associated with the video block, whether the prediction values are determined using a cross-component prediction (CCP) method or a prediction from multiple cross-components (PMC) mode.
  • the determination is made to disallow using CCP or PMC to determine the prediction values in response to the video block having a number of samples greater than or equal to an integer M, where M is 4096 or 1024.
  • the determination is made to disallow using CCP or PMC to determine the prediction values in response to the video block having a number of samples less than or equal to an integer M, where M is 4, 8, or 16.
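A hedged sketch of such a sample-count gating rule, using the example thresholds above as defaults; the helper name and signature are hypothetical:

```python
def pmc_allowed(width, height, max_samples=4096, min_samples=16):
    # Disallow CCP/PMC for very large blocks (e.g., M = 4096 or 1024 samples)
    # and for very small blocks (e.g., M = 4, 8 or 16 samples).
    n = width * height
    return min_samples < n < max_samples
```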
  • FIG. 18 is a block diagram of a video processing apparatus 1800.
  • the apparatus 1800 may be used to implement one or more of the methods described herein.
  • the apparatus 1800 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on.
  • the apparatus 1800 may include one or more processors 1802, one or more memories 1804 and video processing hardware 1806.
  • the processor (s) 1802 may be configured to implement one or more methods (including, but not limited to, method 1700) described in the present document.
  • the memory (memories) 1804 may be used for storing data and code used for implementing the methods and techniques described herein.
  • the video processing hardware 1806 may be used to implement, in hardware circuitry, some techniques described in the present document. In some embodiments, the video coding methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to FIG. 18.
  • FIG. 19 is a block diagram showing an example video processing system 1900 in which various techniques disclosed herein may be implemented.
  • the system 1900 may include input 1902 for receiving video content.
  • the video content may be received in a raw or uncompressed format, e.g., 8 or 10 bit multi-component pixel values, or may be in a compressed or encoded format.
  • the input 1902 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interface include wired interfaces such as Ethernet, passive optical network (PON) , etc. and wireless interfaces such as Wi-Fi or cellular interfaces.
  • the system 1900 may include a coding component 1904 that may implement the various coding or encoding methods described in the present document.
  • the coding component 1904 may reduce the average bitrate of video from the input 1902 to the output of the coding component 1904 to produce a coded representation of the video.
  • the coding techniques are therefore sometimes called video compression or video transcoding techniques.
  • the output of the coding component 1904 may be either stored, or transmitted via a communication connection, as represented by the component 1906.
  • the stored or communicated bitstream (or coded) representation of the video received at the input 1902 may be used by the component 1908 for generating pixel values or displayable video that is sent to a display interface 1910.
  • the process of generating user-viewable video from the bitstream representation is sometimes called video decompression.
  • while certain video processing operations are referred to as “coding” operations or tools, it will be appreciated that the coding tools or operations are used at an encoder, and corresponding decoding tools or operations that reverse the results of the coding will be performed by a decoder.
  • peripheral bus interface or a display interface may include universal serial bus (USB) or high definition multimedia interface (HDMI) or Displayport, and so on.
  • storage interfaces include SATA (serial advanced technology attachment) , PCI, IDE interface, and the like.
  • FIG. 20 is a block diagram that illustrates an example video coding system 100 that may utilize the techniques of this disclosure.
  • video coding system 100 may include a source device 110 and a destination device 120.
  • Source device 110, which may be referred to as a video encoding device, generates encoded video data.
  • Destination device 120, which may be referred to as a video decoding device, may decode the encoded video data generated by source device 110.
  • Source device 110 may include a video source 112, a video encoder 114, and an input/output (I/O) interface 116.
  • Video source 112 may include a source such as a video capture device, an interface to receive video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of such sources.
  • the video data may comprise one or more pictures.
  • Video encoder 114 encodes the video data from video source 112 to generate a bitstream.
  • the bitstream may include a sequence of bits that form a coded representation of the video data.
  • the bitstream may include coded pictures and associated data.
  • the coded picture is a coded representation of a picture.
  • the associated data may include sequence parameter sets, picture parameter sets, and other syntax structures.
  • I/O interface 116 may include a modulator/demodulator (modem) and/or a transmitter.
  • the encoded video data may be transmitted directly to destination device 120 via I/O interface 116 through network 130a.
  • the encoded video data may also be stored onto a storage medium/server 130b for access by destination device 120.
  • Destination device 120 may include an I/O interface 126, a video decoder 124, and a display device 122.
  • I/O interface 126 may include a receiver and/or a modem. I/O interface 126 may acquire encoded video data from the source device 110 or the storage medium/server 130b. Video decoder 124 may decode the encoded video data. Display device 122 may display the decoded video data to a user. Display device 122 may be integrated with the destination device 120, or may be external to destination device 120, which may be configured to interface with an external display device.
  • Video encoder 114 and video decoder 124 may operate according to a video compression standard, such as the High Efficiency Video Coding (HEVC) standard, the Versatile Video Coding (VVC) standard, and other current and/or future standards.
  • FIG. 21 is a block diagram illustrating an example of video encoder 200, which may be video encoder 114 in the system 100 illustrated in FIG. 20.
  • Video encoder 200 may be configured to perform any or all of the techniques of this disclosure.
  • video encoder 200 includes a plurality of functional components. The techniques described in this disclosure may be shared among the various components of video encoder 200.
  • a processor may be configured to perform any or all of the techniques described in this disclosure.
  • the functional components of video encoder 200 may include a partition unit 201, a prediction unit 202 which may include a mode select unit 203, a motion estimation unit 204, a motion compensation unit 205 and an intra prediction unit 206, a residual generation unit 207, a transform unit 208, a quantization unit 209, an inverse quantization unit 210, an inverse transform unit 211, a reconstruction unit 212, a buffer 213, and an entropy encoding unit 214.
  • video encoder 200 may include more, fewer, or different functional components.
  • prediction unit 202 may include an intra block copy (IBC) unit.
  • the IBC unit may perform prediction in an IBC mode in which at least one reference picture is a picture where the current video block is located.
  • motion estimation unit 204 and motion compensation unit 205 may be highly integrated, but are represented separately in the example of FIG. 21 for purposes of explanation.
  • Partition unit 201 may partition a picture into one or more video blocks.
  • Video encoder 200 and video decoder 300 may support various video block sizes.
  • Mode select unit 203 may select one of the coding modes, intra or inter, e.g., based on error results, and provide the resulting intra- or inter-coded block to a residual generation unit 207 to generate residual block data and to a reconstruction unit 212 to reconstruct the encoded block for use as a reference picture.
  • Mode select unit 203 may select a combination of intra and inter prediction (CIIP) mode in which the prediction is based on an inter prediction signal and an intra prediction signal.
  • Mode select unit 203 may also select a resolution for a motion vector (e.g., a sub-pixel or integer pixel precision) for the block in the case of inter-prediction.
  • motion estimation unit 204 may generate motion information for the current video block by comparing one or more reference frames from buffer 213 to the current video block.
  • Motion compensation unit 205 may determine a predicted video block for the current video block based on the motion information and decoded samples of pictures from buffer 213 other than the picture associated with the current video block.
  • Motion estimation unit 204 and motion compensation unit 205 may perform different operations for a current video block, for example, depending on whether the current video block is in an I slice, a P slice, or a B slice.
  • motion estimation unit 204 may perform uni-directional prediction for the current video block, and motion estimation unit 204 may search reference pictures of list 0 or list 1 for a reference video block for the current video block. Motion estimation unit 204 may then generate a reference index that indicates the reference picture in list 0 or list 1 that contains the reference video block and a motion vector that indicates a spatial displacement between the current video block and the reference video block. Motion estimation unit 204 may output the reference index, a prediction direction indicator, and the motion vector as the motion information of the current video block. Motion compensation unit 205 may generate the predicted video block of the current block based on the reference video block indicated by the motion information of the current video block.
  • motion estimation unit 204 may perform bi-directional prediction for the current video block, motion estimation unit 204 may search the reference pictures in list 0 for a reference video block for the current video block and may also search the reference pictures in list 1 for another reference video block for the current video block. Motion estimation unit 204 may then generate reference indexes that indicate the reference pictures in list 0 and list 1 containing the reference video blocks and motion vectors that indicate spatial displacements between the reference video blocks and the current video block. Motion estimation unit 204 may output the reference indexes and the motion vectors of the current video block as the motion information of the current video block. Motion compensation unit 205 may generate the predicted video block of the current video block based on the reference video blocks indicated by the motion information of the current video block.
  • motion estimation unit 204 may output a full set of motion information for decoding processing of a decoder.
  • motion estimation unit 204 may not output a full set of motion information for the current video block. Rather, motion estimation unit 204 may signal the motion information of the current video block with reference to the motion information of another video block. For example, motion estimation unit 204 may determine that the motion information of the current video block is sufficiently similar to the motion information of a neighboring video block.
  • motion estimation unit 204 may indicate, in a syntax structure associated with the current video block, a value that indicates to the video decoder 300 that the current video block has the same motion information as another video block.
  • motion estimation unit 204 may identify, in a syntax structure associated with the current video block, another video block and a motion vector difference (MVD) .
  • the motion vector difference indicates a difference between the motion vector of the current video block and the motion vector of the indicated video block.
  • the video decoder 300 may use the motion vector of the indicated video block and the motion vector difference to determine the motion vector of the current video block.
  • video encoder 200 may predictively signal the motion vector.
  • Two examples of predictive signaling techniques that may be implemented by video encoder 200 include advanced motion vector prediction (AMVP) and merge mode signaling.
  • Intra prediction unit 206 may perform intra prediction on the current video block. When intra prediction unit 206 performs intra prediction on the current video block, intra prediction unit 206 may generate prediction data for the current video block based on decoded samples of other video blocks in the same picture.
  • the prediction data for the current video block may include a predicted video block and various syntax elements.
  • Residual generation unit 207 may generate residual data for the current video block by subtracting (e.g., indicated by the minus sign) the predicted video block (s) of the current video block from the current video block.
  • the residual data of the current video block may include residual video blocks that correspond to different sample components of the samples in the current video block.
  • residual generation unit 207 may not perform the subtracting operation.
  • Transform processing unit 208 may generate one or more transform coefficient video blocks for the current video block by applying one or more transforms to a residual video block associated with the current video block.
  • quantization unit 209 may quantize the transform coefficient video block associated with the current video block based on one or more quantization parameter (QP) values associated with the current video block.
  • Inverse quantization unit 210 and inverse transform unit 211 may apply inverse quantization and inverse transforms to the transform coefficient video block, respectively, to reconstruct a residual video block from the transform coefficient video block.
  • Reconstruction unit 212 may add the reconstructed residual video block to corresponding samples from one or more predicted video blocks generated by the prediction unit 202 to produce a reconstructed video block associated with the current block for storage in the buffer 213.
  • a loop filtering operation may be performed to reduce video blocking artifacts in the video block.
  • Entropy encoding unit 214 may receive data from other functional components of the video encoder 200. When entropy encoding unit 214 receives the data, entropy encoding unit 214 may perform one or more entropy encoding operations to generate entropy encoded data and output a bitstream that includes the entropy encoded data.
  • FIG. 22 is a block diagram illustrating an example of video decoder 300, which may be video decoder 124 in the system 100 illustrated in FIG. 20.
  • the video decoder 300 may be configured to perform any or all of the techniques of this disclosure.
  • the video decoder 300 includes a plurality of functional components.
  • the techniques described in this disclosure may be shared among the various components of the video decoder 300.
  • a processor may be configured to perform any or all of the techniques described in this disclosure.
  • video decoder 300 includes an entropy decoding unit 301, a motion compensation unit 302, an intra prediction unit 303, an inverse quantization unit 304, an inverse transformation unit 305, and a reconstruction unit 306 and a buffer 307.
  • Video decoder 300 may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder 200 (e.g., FIG. 21) .
  • Entropy decoding unit 301 may retrieve an encoded bitstream.
  • the encoded bitstream may include entropy coded video data (e.g., encoded blocks of video data) .
  • Entropy decoding unit 301 may decode the entropy coded video data, and from the entropy decoded video data, motion compensation unit 302 may determine motion information including motion vectors, motion vector precision, reference picture list indexes, and other motion information. Motion compensation unit 302 may, for example, determine such information by performing the AMVP and merge mode.
  • Motion compensation unit 302 may produce motion compensated blocks, possibly performing interpolation based on interpolation filters. Identifiers for interpolation filters to be used with sub-pixel precision may be included in the syntax elements.
  • Motion compensation unit 302 may use interpolation filters as used by video encoder 200 during encoding of the video block to calculate interpolated values for sub-integer pixels of a reference block. Motion compensation unit 302 may determine the interpolation filters used by video encoder 200 according to received syntax information and use the interpolation filters to produce predictive blocks.
  • Motion compensation unit 302 may use some of the syntax information to determine sizes of blocks used to encode frame (s) and/or slice (s) of the encoded video sequence, partition information that describes how each macroblock of a picture of the encoded video sequence is partitioned, modes indicating how each partition is encoded, one or more reference frames (and reference frame lists) for each inter-encoded block, and other information to decode the encoded video sequence.
  • Intra prediction unit 303 may use intra prediction modes for example received in the bitstream to form a prediction block from spatially adjacent blocks.
  • Inverse quantization unit 304 inverse quantizes, e.g., de-quantizes, the quantized video block coefficients provided in the bitstream and decoded by entropy decoding unit 301.
  • Inverse transformation unit 305 applies an inverse transform.
  • Reconstruction unit 306 may sum the residual blocks with the corresponding prediction blocks generated by motion compensation unit 302 or intra prediction unit 303 to form decoded blocks. If desired, a deblocking filter may also be applied to filter the decoded blocks in order to remove blockiness artifacts.
  • the decoded video blocks are then stored in buffer 307, which provides reference blocks for subsequent motion compensation/intra prediction and also produces decoded video for presentation on a display device.
  • FIG. 23 is a flowchart representation of a method 2300 for video processing in accordance with the present technology.
  • the method 2300 includes, at operation 2310, determining, for a conversion between a video block of a first component of a video and a bitstream representation of the video, prediction values of samples of the video block using representative samples outside of the video block. The representative samples are determined during the conversion.
  • the method 2300 also includes, at operation 2320, performing the conversion based on the determining.
  • the representative samples comprise samples from at least one of a video block of a second component, a video block of a third component, or a neighboring block of the video block, the neighboring block being adjacent or non-adjacent to the video block.
  • the determining is based on reconstructed values of the representative samples or prediction values of the representative samples.
  • the determining is based on applying a linear function to the representative samples. In some embodiments, the determining is based on applying a non-linear function to the representative samples.
  • TPred_c0 represents a temporary prediction value of the sample, where Rec_c2 and FPred_c2 represent a reconstruction value and a final prediction value of a representative sample in the third component C_2.
  • X and Y represent weighting factors and Z represents an offset value, X, Y and Z being real numbers.
  • a final prediction value of a sample in the first component C_0 is denoted as FPred_c0
  • FPred_c0 = X × (α_c0 × Rec_c1 + β_c0) + Y × (Rec_c2 − (α_c2 × Rec_c1 + β_c2)) + Z
  • Rec_c1 represents a reconstruction value of a representative sample in the second component C_1
  • Rec_c2 represents a reconstruction value of a representative sample in the third component C_2.
  • X and Y represent weighting factors and Z represents an offset value
  • α_c0 and β_c0 are linear model parameters for the first component
  • α_c2 and β_c2 are linear model parameters for the third component
  • X, Y, Z, α_c0, β_c0, α_c2, and β_c2 being real numbers.
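A direct transcription of this formula, accepting scalars or numpy arrays; the helper name is hypothetical:

```python
def fpred_c0(rec_c1, rec_c2, a_c0, b_c0, a_c2, b_c2, X=1, Y=1, Z=0):
    # FPred_c0 = X*(a_c0*Rec_c1 + b_c0)
    #          + Y*(Rec_c2 - (a_c2*Rec_c1 + b_c2)) + Z
    base = a_c0 * rec_c1 + b_c0                 # cross-component prediction of C0
    residual = rec_c2 - (a_c2 * rec_c1 + b_c2)  # model error observed on C2
    return X * base + Y * residual + Z
```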
  • a final prediction value of a sample in the first component C_0 is denoted as FPred_c0
  • FPred_c0 = (X × α_c0 − Y × α_c2) × Rec_c1 + (X × β_c0 − Y × β_c2) + Y × Rec_c2 + Z.
  • Rec_c1 represents a reconstruction value of a representative sample in the second component C_1
  • Rec_c2 represents a reconstruction value of a representative sample in the third component C_2.
  • X and Y represent weighting factors and Z represents an offset value.
  • α_c0 and β_c0 are linear model parameters for the first component
  • α_c2 and β_c2 are linear model parameters for the third component
  • X, Y, Z, α_c0, β_c0, α_c2, and β_c2 being real numbers.
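This expanded form is an algebraic regrouping of the preceding formula. A quick numeric check with arbitrary example values:

```python
a_c0, b_c0, a_c2, b_c2 = 0.5, 7.0, 0.25, 3.0
X, Y, Z = 1, 1, 0
rec_c1, rec_c2 = 512.0, 260.0

grouped = X * (a_c0 * rec_c1 + b_c0) + Y * (rec_c2 - (a_c2 * rec_c1 + b_c2)) + Z
expanded = ((X * a_c0 - Y * a_c2) * rec_c1
            + (X * b_c0 - Y * b_c2) + Y * rec_c2 + Z)
assert abs(grouped - expanded) < 1e-9   # both evaluate to 392.0
```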
  • the first component has a size of K′×L′ and the second component has a size of K×L.
  • Two temporary blocks having the size of K×L are derived according to linear model parameters (X×α_c0, X×β_c0) and (Y×α_c2, Y×β_c2), and the two temporary blocks are downsampled to the size of K′×L′.
  • the two temporary blocks are downsampled with or without clipping.
  • the first component has a size of K′×L′ and the second component has a size of K×L.
  • One temporary block having the size of K×L is derived according to linear model parameters (X×α_c0 − Y×α_c2, X×β_c0 − Y×β_c2) and the temporary block is downsampled to the size of K′×L′. In some embodiments, the temporary block is downsampled with or without clipping.
  • the prediction values of the samples of the video block are determined by adding or subtracting collocated samples in the temporary block with or without performing downsampling.
  • the first component has a size of K′×L′ and the second component has a size of K×L.
  • One temporary block is derived based on downsampling the second component to the size of K′×L′, and the prediction values of the samples of the video block are determined by applying linear model parameters (X×α_c0, X×β_c0) and (Y×α_c2, Y×β_c2) to the temporary block and by adding or subtracting collocated samples in the temporary block.
  • α_c0 and β_c0 are derived using neighboring samples of the first component and neighboring samples of the second component.
  • α_c2 and β_c2 are derived using neighboring samples of the third component and neighboring samples of the second component.
  • the linear model parameters are derived in a same manner as parameters for a cross-component linear model (CCLM) prediction mode or a Two-Step Cross-component Prediction Mode.
  • the linear model parameters are derived based on reconstructed values of the neighboring samples of the second component without performing downsampling.
  • the linear model parameters are derived based on reconstructed values of the neighboring samples of the first component or the third component with upsampling. In some embodiments, the linear model parameters are clipped within a range prior to being used by the CCP coding tool.
  • At least one of X, Y, or Z is equal to 1. In some embodiments, at least one of X, Y, or Z is equal to 0. In some embodiments, X is equal to 1, Y is equal to −1, and Z is equal to 0. In some embodiments, at least one of X, Y, or Z is equal to 2^K or −2^K, K being an integer value in a range of [−M, N], where M and N are no smaller than 0. In some embodiments, variables including at least X, Y, Z, α_c0, β_c0, α_c2, or β_c2 are indicated in the bitstream representation.
  • variables including at least X, Y, Z, α_c0, β_c0, α_c2, or β_c2 are derived on the fly.
  • at least one of the variables has a same value for all samples within a video unit, the video unit comprising a coding block, a prediction block, or a transform block.
  • the linear model parameters have multiple sets of values. In some embodiments, different samples within a video unit use different sets of values.
  • the first component is the Cb color component
  • the second component is the Y component
  • the third component is the Cr component.
  • the first component is the Cr color component
  • the second component is the Y component
  • the third component is the Cb component.
  • the first component is a luma component
  • the second and third components are chroma components.
  • the final prediction values are clipped within a predefined range.
  • the representative samples are selected during the conversion based on a characteristic of the first component.
  • the characteristic of the first component comprises a position of a sample of the first component or a color format of the first component.
  • the representative samples are located surrounding a current sample of the first component in case the color format is 4:2:0.
  • the current sample is located at position (x, y), and one representative sample is located at (2x, 2y). In some embodiments, the current sample is located at position (x, y), and one representative sample is located at (2x+1, 2y). In some embodiments, the current sample is located at position (x, y), and one representative sample is located at (2x+1, 2y+1). In some embodiments, the current sample is located at position (x, y), and one representative sample is located at (2x, 2y+1). In some embodiments, the current sample is located at position (x, y), and two representative samples are located at (2x, 2y) and (2x, 2y+1).
  • the current sample is located at position (x, y), and two representative samples are located at (2x, 2y) and (2x+1, 2y). In some embodiments, the current sample is located at position (x, y), and six representative samples are located at (2x-1, 2y), (2x, 2y), (2x+1, 2y), (2x-1, 2y+1), (2x, 2y+1), and (2x+1, 2y+1).
  • the current sample is located at position (x, y), and eight representative samples are located at (2x, 2y-1), (2x-1, 2y), (2x, 2y), (2x+1, 2y), (2x-1, 2y+1), (2x, 2y+1), (2x+1, 2y+1), and (2x, 2y+2).
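A small helper enumerating a few of these 4:2:0 position patterns; the pattern names are illustrative, not from the specification:

```python
def representative_positions(x, y, pattern="six"):
    """Luma positions taken as representative samples for the chroma
    sample at (x, y) in 4:2:0, following the position patterns above."""
    if pattern == "single":
        return [(2 * x, 2 * y)]
    if pattern == "vertical_pair":
        return [(2 * x, 2 * y), (2 * x, 2 * y + 1)]
    if pattern == "six":
        return [(2 * x + dx, 2 * y + dy)
                for dy in (0, 1) for dx in (-1, 0, 1)]
    raise ValueError(pattern)
```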
  • a representative sample is located at a same location as a current sample of the first component.
  • the characteristic of the first component comprises an in-loop filtering method applied to the first component.
  • the representative samples are based on reconstructed samples prior to applying the in-loop filtering method.
  • the representative samples are based on a function of reconstructed samples prior to applying the in-loop filtering method.
  • the function comprises a downsample filtering function.
  • the function comprises a linear function or a non-linear function.
  • the in-loop filtering method comprises a deblocking filtering method, a sample adaptive offset (SAO) method, an adaptive loop filtering method, or a cross-component adaptive loop filtering method.
  • FIG. 24 is a flowchart representation of a method 2400 for video processing in accordance with the present technology.
  • the method 2400 includes, at operation 2410, determining, for a conversion between a video block of a first component of a video and a bitstream representation of the video, a coding mode of a multiple cross-component coding tool.
  • the coding mode is determined from multiple modes available for coding the video block.
  • the multiple modes have different parameters for determining prediction values of samples of the video block using representative samples from at least one of a second component, a third component, or a neighboring block of the video block.
  • the method 2400 also includes, at operation 2420, performing the conversion based on the determining.
  • one mode of the multiple modes specifies that only neighboring samples from a row that is above or right-above the first component are used for the prediction values of the samples of the first component. In some embodiments, one mode of the multiple modes specifies that only neighboring samples from a column that is to the left or left-below the first component are used for the prediction values of the samples of the first component. In some embodiments, one mode of the multiple modes specifies that multiple linear models are applicable to the video block. In some embodiments, the samples of the first component and reconstructed values of samples of the neighboring block are grouped into multiple categories, and different linear models are applicable to different categories of samples. In some embodiments, one mode of the multiple modes specifies that a downsampling filter comprises a subsample filter. In some embodiments, one mode of the multiple modes specifies that K prediction values of the samples depend on L representative samples, K and L being integers.
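As a sketch of the mode-dependent reference-sample selection described above (hypothetical helper; the suffix convention mirrors the TSCPM_LT/TSCPM_L/TSCPM_T and PMC_LT/PMC_L/PMC_T mode names):

```python
def neighbor_samples(mode, above_row, left_col):
    # Mode-dependent choice of reference samples for parameter derivation:
    # "_T" modes use only the above (or above-right) row, "_L" modes only
    # the left (or below-left) column, and "_LT" modes use both sides.
    if mode.endswith("_T"):
        return above_row
    if mode.endswith("_L"):
        return left_col
    return above_row + left_col   # "_LT": both sides
```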
  • FIG. 25 is a flowchart representation of a method 2500 for video processing in accordance with the present technology.
  • the method 2500 includes, at operation 2510, performing a conversion between a video block of a video and a bitstream representation of the video.
  • the video block is coded using a multiple cross-component prediction mode from multiple prediction modes of a prediction from multiple cross-components (PMC) coding tool.
  • the multiple cross-component prediction mode is signaled in the bitstream representation as an intra prediction mode or an inter prediction mode.
  • signaling of the multiple modes in the bitstream representation is based on a characteristic of the video block.
  • the characteristic comprises a coding mode of the video block.
  • the multiple modes are signaled in the bitstream representation in case the video block is coded using a specified coding mode.
  • the characteristic comprises a color format of the video block.
  • the multiple modes are signaled in the bitstream representation in case the color format of the video block is 4:0:0.
  • the characteristic comprises a coded block flag or a prediction mode of the second component or the third component.
  • At least one of the multiple modes is signaled in the bitstream representation in case the coded block flag of the second component or the third component is 1 or 0, and/or the prediction mode of the second component or the third component comprises a cross-component prediction (CCP) mode.
  • one of the multiple modes is determined to be 0 in case the coded block flag of the second component or the third component is 0 and the prediction mode of the second component or the third component is not a cross-component prediction (CCP) mode.
  • signaling that one of the multiple modes is enabled is included in the bitstream representation in addition to signaling of the one or more prediction modes.
  • the bitstream representation further includes an index indicating one of the multiple modes after the signaling that one of the multiple modes is enabled.
  • usage of a DM mode, usage of a cross-component prediction (CCP) mode, and usage of one of the multiple modes of the PMC coding tool are organized in a particular order. In some embodiments, the particular order is based on coded information of the neighboring block. In some embodiments, in case the neighboring block is coded using the PMC coding tool, the usage of the PMC coding tool is signaled before the usage of the DM mode and the usage of the CCP mode. In some embodiments, in case the neighboring block is coded using the DM mode, the usage of the DM mode is signaled before the usage of the CCP mode or the usage of the PMC coding tool.
  • the multiple modes for the PMC coding tool are signaled as a part of information for one or more cross-component prediction (CCP) modes for a CCP coding tool.
  • signaling of the multiple modes for the PMC coding tool is based on usage of the CCP modes.
  • an index is included in the bitstream representation to indicate one of the one or more CCP modes applicable to the video block.
  • the one or more CCP modes are classified into multiple categories, and the bitstream representation includes a category index indicating a corresponding category. In some embodiments, the index and the category index are organized in an order.
  • one of the multiple modes for the PMC coding tool is treated as an intra prediction mode. In some embodiments, different modes for the PMC coding tool are assigned different indices and coded with different binary bin string. In some embodiments, the signaling of the multiple modes in the bitstream representation is bypass coded without any context. In some embodiments, the signaling of the multiple modes in the bitstream representation is context coded using one or more contexts derived based on information of the video block or the neighboring block. In some embodiments, the one or more contexts are derived based on a coding mode or availability of the neighboring block. In some embodiments, the one or more contexts are derived based on a dimension of the video block.
  • FIG. 26 is a flowchart representation of a method 2600 for video processing in accordance with the present technology.
  • the method 2600 includes, at operation 2610, determining residual information of a video unit for a conversion between a video block of a video and a bitstream representation of the video in case a prediction from multiple cross-component (PMC) coding tool is enabled for a first component.
  • the method 2600 also includes, at operation 2620, performing the conversion based on the determining.
  • the residual information of the video unit is indicated in the bitstream representation. In some embodiments, the residual information of the video unit is omitted from the bitstream representation in case only zero coefficients are available. In some embodiments, the bitstream representation includes a flag indicating whether there is a non-zero coefficient in the video unit. In some embodiments, the flag is inferred to be equal to 1 in case the flag is omitted from the bitstream representation. In some embodiments, a manner of signaling the flag is based on usage of the PMC coding tool.
  • prediction values of samples of a first component are determined using representative samples from at least a second component or a third component, and the bitstream representation includes a flag indicating whether there is a non-zero coefficient in the second component or the third component.
  • the flag is inferred to be equal to 1 and omitted from the bitstream representation.
  • a manner of signaling the flag is based on usage of the PMC coding tool.
  • FIG. 27 is a flowchart representation of a method 2700 for video processing in accordance with the present technology.
  • the method includes, at operation 2710, determining, for a conversion between a video block of a video and a bitstream representation of the video, whether usage of a cross-component prediction (CCP) coding tool is signaled in the bitstream representation based on availability of neighboring samples of the video block.
  • the neighboring samples are adjacent or non-adjacent to the video block.
  • the method also includes, at operation 2720, performing the conversion based on the determining.
  • the CCP coding tool relies on neighboring samples above the video block, and signaling of the usage of the CCP coding tool is omitted in case the neighboring samples above the video block are not available. In some embodiments, the CCP coding tool relies on neighboring samples to the left of the video block, and signaling of the usage of the CCP coding tool is omitted in case the neighboring samples to the left of the video block are not available. In some embodiments, the CCP coding tool relies on neighboring samples above and to the left of the video block, and signaling of the usage of the CCP coding tool is omitted in case the neighboring samples above and to the left of the video block are not available.
  • the CCP coding tool relies on neighboring samples on both sides of the video block, and signaling of the usage of the CCP coding tool is omitted in case the neighboring samples on only one side of the video block are available. In some embodiments, the CCP coding tool relies on neighboring samples of the video block, and signaling of the usage of the CCP coding tool is omitted in case the neighboring samples on a left or right side of the video block are unavailable. In some embodiments, the CCP coding tool is considered as disabled in case the signaling of the usage of the CCP is omitted.
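A compact sketch of such an availability-based signaling decision, under the stated assumption that omitted signaling means the tool is treated as disabled (names hypothetical):

```python
def signal_ccp_mode(mode, above_available, left_available):
    # Whether an indication for this CCP variant is present in the
    # bitstream; when signaling is omitted, the variant is disabled.
    if mode.endswith("_T"):          # relies on the above row only
        return above_available
    if mode.endswith("_L"):          # relies on the left column only
        return left_available
    # Variants relying on both sides are signaled only when both sides
    # are available (one embodiment above omits them otherwise).
    return above_available and left_available
```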
  • usage of any of the above methods is indicated in the bitstream representation in a video processing unit.
  • the video processing unit comprises a sequence, a picture, a slice, a tile, a brick, a subpicture, a coding tree unit row, a coding tree unit, a virtual pipeline data unit, a coding unit, a prediction unit, a transform unit, or a sub-block in a coding unit or prediction unit.
  • the usage is included in a sequence parameter set, a video parameter set, a picture parameter set, a picture header, a slice header, a tile group header, a group of coding tree units, or a coding tree unit.
  • usage of the method is based on information about the video.
  • the information about the video comprises a dimension of the video block, a number of samples in the video block, a position of a video block relative to a video processing unit, a slice type, a picture type, or a partitioning type.
  • the method is disabled in case the number of samples in the video block is greater than or equal to a threshold.
  • the threshold is 1024 or 4096.
  • the method is disabled in case the number of samples in the video block is smaller than or equal to a threshold.
  • the threshold is 4, 8, or 16.
  • the method is disabled in case the dimension of the video block is greater than or equal to a threshold.
  • the threshold is 32 or 64. In some embodiments, the method is disabled in case the dimension of the video block is smaller than or equal to a threshold. In some embodiments, the threshold is 2 or 4. In some embodiments, signaling of usage of the method is omitted in case the method is disabled.
  • performing the conversion includes encoding the video block into the bitstream representation. In some embodiments, performing the conversion includes decoding the video block from the bitstream representation.
  • Some embodiments of the disclosed technology include making a decision or determination to enable a video processing tool or mode.
  • the encoder when the video processing tool or mode is enabled, the encoder will use or implement the tool or mode in the processing of a block of video, but may not necessarily modify the resulting bitstream based on the usage of the tool or mode. That is, a conversion from the block of video to the bitstream representation of the video will use the video processing tool or mode when it is enabled based on the decision or determination.
  • the decoder when the video processing tool or mode is enabled, the decoder will process the bitstream with the knowledge that the bitstream has been modified based on the video processing tool or mode. That is, a conversion from the bitstream representation of the video to the block of video will be performed using the video processing tool or mode that was enabled based on the decision or determination.
  • Some embodiments of the disclosed technology include making a decision or determination to disable a video processing tool or mode.
  • the encoder will not use the tool or mode in the conversion of the block of video to the bitstream representation of the video.
  • the decoder will process the bitstream with the knowledge that the bitstream has not been modified using the video processing tool or mode that was disabled based on the decision or determination.
  • Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, e.g., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus.
  • the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
  • the term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) .
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Devices, systems and methods for digital video coding, which includes Prediction from Multiple Cross-components (PMC) methods, are described. An exemplary method for video processing includes determining, for a conversion between a video block of a first component of a video and a bitstream representation of the video, prediction values of samples of the video block using representative samples outside of the video block. The representative samples are determined during the conversion. The method also includes performing the conversion based on the determining.

Description

PREDICTION FROM MULTIPLE CROSS-COMPONENTS
CROSS REFERENCE TO RELATED APPLICATIONS
Under the applicable patent law and/or rules pursuant to the Paris Convention, this application is made to timely claim the priority to and benefits of International Patent Application No. PCT/CN2019/122946, filed on December 04, 2019. For all purposes under the law, the entire disclosure of the aforementioned application is incorporated by reference as part of the disclosure of this application.
TECHNICAL FIELD
This patent document relates to video coding techniques, devices and systems.
BACKGROUND
In spite of the advances in video compression, digital video still accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
SUMMARY
Devices, systems and methods related to cross component prediction methods are described.
In one representative aspect, a method for video processing includes determining, for a conversion between a video block of a first component of a video and a bitstream representation of the video, prediction values of samples of the video block using representative samples outside of the video block. The representative samples are determined during the conversion. The method also includes performing the conversion based on the determining.
In another representative aspect, a method of video processing includes determining, for a conversion between a video block of a first component of a video and a bitstream representation of the video, a coding mode of a multiple cross-component coding tool. The method also includes performing the conversion based on the determining. The coding mode is determined from multiple modes available for coding the video block, the multiple modes having different parameters for determining prediction values of samples of  the video block using representative samples from at least one of a second component, a third component, or a neighboring block of the video block.
In another representative aspect, a method of video processing includes performing a conversion between a video block of a video and a bitstream representation of the video. The video block is coded using a multiple cross-component prediction mode from multiple prediction modes of a prediction from multiple cross-components (PMC) coding tool, and the multiple cross-component prediction mode is signaled in the bitstream representation as an intra prediction mode or an inter prediction mode.
In another representative aspect, a method of video processing includes determining residual information of a video unit for a conversion between a video block of a video and a bitstream representation of the video in case a prediction from multiple cross-component (PMC) coding tool is enabled for a first component. The method also includes performing the conversion based on the determining.
In another representative aspect, a method of video processing includes determining, for a conversion between a video block of a video and a bitstream representation of the video, whether usage of a cross-component prediction (CCP) coding tool is signaled in the bitstream representation, based on availability of neighboring samples of the video block. The neighboring samples are adjacent or non-adjacent to the video block. The method also includes performing the conversion based on the determining.
In another representative aspect, a method of video processing includes determining prediction values of samples of a first component of a video block of a video using representative samples of a second component of the video and/or a third component of the video, and performing a conversion between the video block and a bitstream representation of the video block according to the determined prediction values of the first component.
In another representative aspect, the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
In yet another representative aspect, a device that is configured or operable to perform the above-described method is disclosed. The device may include a processor that is programmed to implement this method.
In yet another representative aspect, a video decoder apparatus may implement a method as described herein.
The above and other aspects and features of the disclosed technology are described in greater detail in the drawings, the description and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows neighboring blocks used in intra mode prediction.
FIG. 2 shows 67 intra prediction modes.
FIG. 3 shows neighboring blocks used in most probable mode (MPM) list construction process.
FIG. 4 shows reference samples for wide-angular intra prediction.
FIG. 5 shows a problem of discontinuity in case of directions beyond 45 degree.
FIG. 6A shows an example definition of samples used by position dependent intra prediction combination (PDPC) applied to diagonal top-right mode.
FIG. 6B shows an example definition of samples used by position dependent intra prediction combination (PDPC) applied to diagonal bottom-left mode.
FIG. 6C shows an example definition of samples used by position dependent intra prediction combination (PDPC) applied to adjacent top-right mode.
FIG. 6D shows an example definition of samples used by position dependent intra prediction combination (PDPC) applied to adjacent bottom-left mode.
FIG. 7 shows an example of reference lines to be used for intra prediction.
FIG. 8 shows locations of the samples used for the derivation of α and β.
FIG. 9A shows a chroma sample (the triangle) and its corresponding four luma samples (circles) .
FIG. 9B shows downsampling filtering for cross-component linear model (CCLM) in Versatile Video Coding (VVC) .
FIG. 10A shows a linear model top (LM-T) side assuming the chroma block size equal to NxN.
FIG. 10B shows a linear model left (LM-L) side assuming the chroma block size equal to NxN.
FIG. 11A shows an example of linear model (LM) parameter derivation process with 4 entries.
FIG. 11B shows another example of linear model (LM) parameter derivation process with 4 entries.
FIG. 12 shows an illustration of a straight line between minimum and maximum Luma value.
FIG. 13 shows a coding flow of Two-Step Cross-component Prediction Mode  (TSCPM) taking 4: 2: 0 and 8x8 luma block, 4x4 chroma block as an example.
FIG. 14 shows examples of four neighboring samples, with both left and above reference samples available.
FIG. 15A shows 6 representative color component C1 samples (dark gray) used to predict (X c, Y c) .
FIG. 15B shows 8 representative color component C1 samples (dark gray) used to predict (X c, Y c) .
FIG. 16 shows decoding flow chart with a proposed method.
FIG. 17 shows a flowchart of an example video processing method.
FIG. 18 is a block diagram of a video processing apparatus.
FIG. 19 is a block diagram showing an example video processing system in which various techniques disclosed herein may be implemented.
FIG. 20 is a block diagram that illustrates an example video coding system.
FIG. 21 is a block diagram that illustrates an encoder in accordance with some embodiments of the present disclosure.
FIG. 22 is a block diagram that illustrates a decoder in accordance with some embodiments of the present disclosure.
FIG. 23 is a flowchart representation of a method for video processing in accordance with the present technology.
FIG. 24 is a flowchart representation of another method for video processing in accordance with the present technology.
FIG. 25 is a flowchart representation of another method for video processing in accordance with the present technology.
FIG. 26 is a flowchart representation of another method for video processing in accordance with the present technology.
FIG. 27 is a flowchart representation of yet another method for video processing in accordance with the present technology.
DETAILED DESCRIPTION
Section headings are used in the present document to facilitate ease of understanding and do not limit the embodiments disclosed in a section to only that section. Furthermore, while certain embodiments are described with reference to Versatile Video Coding or other specific video codecs, the disclosed techniques are applicable to other video coding technologies also. Furthermore, while some embodiments describe video coding steps in detail, it will be understood that corresponding decoding steps that undo the coding will be implemented by a decoder. Furthermore, the term video processing encompasses video coding or compression, video decoding or decompression, and video transcoding in which video pixels are converted from one compressed format into another compressed format or to a different compressed bitrate.
1 Initial introduction
The technology described in this patent application is related to image/video coding technologies. Specifically, it is related to cross-component prediction in image/video coding. It may be applied to existing video coding standards like High Efficiency Video Coding (HEVC), or to the Versatile Video Coding (VVC) standard to be finalized. It may also be applicable to future video coding standards or video codecs.
2 Video coding discussion
Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/HEVC standards. Since H.262, video coding standards have been based on a hybrid video coding structure wherein temporal prediction plus transform coding are utilized. To explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM). In April 2018, the Joint Video Expert Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC standard targeting a 50% bitrate reduction compared to HEVC.
2.1 Color formats
Chroma subsampling is the practice of encoding images by implementing less resolution for chroma information than for luma information, taking advantage of the human visual system's lower acuity for color differences than for luminance.
4:2:0 is 2:1 subsampling in both the horizontal and vertical directions. A signal with 4:4:4 chroma has no chroma subsampling and transports both luminance and color data entirely. In a four-by-two array of pixels, 4:2:2 has half the chroma of 4:4:4, and 4:2:0 has a quarter of the color information available.
Suppose one chroma block size is MxN wherein M is the width and N is the height of the chroma block, and the top-left position within the chroma block is denoted by (x, y) .
Then the collocated luma block could be identified by:
For the 4:2:0 color format, the collocated luma block covers the samples starting from the top-left position (2x, 2y) with size 2M x 2N.
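As a hedged illustration of this mapping (assuming the 4:2:0 format discussed above; the struct layout and function name below are illustrative and not taken from any specification):

struct Block { int x, y, width, height; };

// Sketch: for 4:2:0, the luma plane has twice the chroma resolution in each
// dimension, so the collocated luma block doubles both position and size.
Block collocatedLumaBlock(const Block &chroma)
{
    return { 2 * chroma.x, 2 * chroma.y, 2 * chroma.width, 2 * chroma.height };
}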
2.2 Intra prediction in HEVC/H. 265
In a picture, two distinct kinds of redundancy can be identified: 1) spatial or temporal redundancy, and 2) psycho-visual redundancy. To remove spatial redundancy, a prediction process is used. Intra-prediction is the process of predicting pixels of a picture frame. Intra-picture prediction uses neighboring pixels to predict a picture block. Before intra prediction, the frame must be split.
In HEVC, one picture/slice/tile may be split into multiple coding tree units (CTUs). Depending on parameters like texture complexity, the CTUs can have a size of 64×64, 32×32, or 16×16. A Coding Tree Unit (CTU) is therefore a coding logical unit, which is in turn encoded into an HEVC bitstream. It consists of three blocks, namely luma (Y) and two chroma components (Cb and Cr). Taking the 4:2:0 color format as an example, the luma component has LxL samples and each chroma component has L/2xL/2 samples. Each block is called a Coding Tree Block (CTB). Each CTB has the same size (LxL) as the CTU (64×64, 32×32, or 16×16). Each CTB can be divided repeatedly in a quad-tree structure, from the same size as the CTB down to as small as 8×8. Each block resulting from this partitioning is called a Coding Block (CB) and becomes the decision-making point for the prediction type (inter or intra prediction). The prediction type, along with other parameters, is coded in a Coding Unit (CU). So the CU is the basic unit of prediction in HEVC, each of which is predicted from previously coded data. A CU consists of three CBs (Y, Cb and Cr). CBs could still be too large to store motion vectors (inter-picture (temporal) prediction) or intra-picture (spatial) prediction modes. Therefore, the Prediction Block (PB) was introduced. Each CB can be split into PBs differently depending on the temporal and/or spatial predictability. The PBs can have the size 32×32, 16×16, 8×8 or 4×4.
There are two kinds of intra prediction modes, PCM (pulse code modulation) and  normal intra prediction mode.
2.2.1 PCM (pulse code modulation)
In I_PCM mode, prediction, transform, quantization and entropy coding are bypassed. The samples of a block are coded by directly representing the sample values, without prediction or application of a transform.
In HEVC, I_PCM mode is only available for 2Nx2N PUs. The maximum and minimum I_PCM CU sizes are signalled in the SPS; legal I_PCM CU sizes are 8x8, 16x16 and 32x32. The PCM sample bit-depths are user-selected and signalled in the SPS for luma and chroma separately.
Taking a luma sample as an example: recSamplesL [i, j] = pcm_sample_luma [ (nS * j) + i] << (BitDepthY - PCMBitDepthY) . The coding becomes lossless when PCMBitDepthY = BitDepthY.
2.2.2 Normal intra prediction
For the luma component, there are 35 modes, including Planar, DC and 33 angular prediction modes, for all block sizes. To better code these luma prediction modes, one most probable mode (MPM) flag is firstly coded to indicate whether one of the 3 MPM modes is chosen. If the MPM flag is false, the remaining 32 modes are coded with fixed-length coding.
The selection of the set of three most probable modes is based on the modes of two neighboring PUs, one to the left of and one above the current PU. Let the intra modes of the left and above PUs of the current PU be A and B, respectively, wherein the two neighboring blocks are depicted in FIG. 1.
If a neighboring PU is not coded as intra or is coded with pulse code modulation (PCM) mode, the PU is considered to be a DC predicted one. In addition, B is assumed to be DC mode when the above neighboring PU is outside the CTU to avoid introduction of an additional line buffer for intra mode reconstruction.
If A is not equal to B, the first two most probable modes denoted as MPM [0] and MPM [1] are set equal to A and B, respectively, and the third most probable mode denoted as MPM [2] is determined as follows:
- If neither of A or B is planar mode, MPM [2] is set to planar mode.
- Otherwise, if neither of A or B is DC mode, MPM [2] is set to DC mode.
- Otherwise (one of the two most probable modes is planar and the other is DC) , MPM [2] is set equal to angular mode 26 (directly vertical) .
If A is equal to B, the three most probable modes are determined as follows. In the case they are not angular modes (A and B are less than 2) , the three most probable modes are  set equal to planar mode, DC mode and angular mode 26, respectively. Otherwise (A and B are greater than or equal to 2) , the first most probable mode MPM [0] is set equal to A and two remaining most probable modes MPM [1] and MPM [2] are set equal to the neighboring directions of A and calculated as:
MPM [1] =2 + ( (A-2-1 + 32) %32)
MPM [2] =2 + ( (A-2 + 1) %32)
where % denotes the modulo operator (e.g., a % b denotes the remainder of a divided by b).
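The derivation above may be summarized by the following illustrative C++ sketch (mode numbering follows HEVC: 0 = planar, 1 = DC, 2 to 34 = angular; the function name is not from the specification):

#include <array>

// Derive the three most probable modes from the left (A) and above (B)
// neighboring intra modes, following the HEVC rules described above.
std::array<int, 3> deriveThreeMpms(int A, int B)
{
    const int PLANAR = 0, DC = 1, VER = 26; // angular mode 26: directly vertical
    if (A != B) {
        int third = (A != PLANAR && B != PLANAR) ? PLANAR
                  : (A != DC && B != DC) ? DC
                  : VER;
        return { A, B, third };
    }
    if (A < 2) // both neighboring modes are non-angular
        return { PLANAR, DC, VER };
    // equal angular modes: MPM[1] and MPM[2] are the adjacent directions of A
    return { A, 2 + ((A - 2 - 1 + 32) % 32), 2 + ((A - 2 + 1) % 32) };
}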
For the chroma component, there are 5 modes, including DM, Planar, DC, Horizontal, Vertical.
2.3 Intra prediction in VVC
2.3.1 Intra mode coding with 67 intra prediction modes
To capture the arbitrary edge directions presented in natural video, the number of directional intra modes is extended from 33, as used in HEVC, to 65. The additional directional modes are depicted as grey dotted arrows in FIG. 2, and the planar and DC modes remain the same. These denser directional intra prediction modes apply for all block sizes and for both luma and chroma intra predictions.
Conventional angular intra prediction directions are defined from 45 degrees to -135 degrees in clockwise direction as shown in FIG. 2. In VTM2, several conventional angular intra prediction modes are adaptively replaced with wide-angle intra prediction modes for the non-square blocks. The replaced modes are signaled using the original method and remapped to the indexes of wide angular modes after parsing. The total number of intra prediction modes is unchanged, e.g., 67, and the intra mode coding is unchanged.
In HEVC, every intra-coded block has a square shape and the length of each of its sides is a power of 2. Thus, no division operations are required to generate an intra-predictor using DC mode. In VTM2, blocks can have a rectangular shape that necessitates the use of a division operation per block in the general case. To avoid division operations for DC prediction, only the longer side is used to compute the average for non-square blocks.
2.3.2 Intra mode coding for luma component with 6 MPMs
In the VVC reference software VTM3.0.rc1, only the intra modes of the neighboring positions A and B, denoted as LEFT and ABOVE as depicted in FIG. 3, are used for MPM list generation. For non-MPM coding, truncated binary coding is applied.
Let the intra modes of left and above of the current CU be Mode A and Mode B,  respectively.
If a neighboring CU is not coded as intra or is coded with pulse code modulation (PCM) mode, the CU is considered to be a Planar predicted one. In addition, Mode B is assumed to be Planar mode when the above neighboring CU is outside the CTU to avoid introduction of an additional line buffer for intra mode reconstruction.
The 6 MPM modes are denoted by MPM [i] (i being 0... 5) . The following steps are performed in order:
1. Initialized values: MPM [6] = {Mode A, ! Mode A, 50, 18, 46, 54} ;
2. If Mode A is equal to Mode B, the following applies:
- If Mode A is larger than 1 (non-DC/planar) , MPM [6] = {Mode A, planar, DC, 2 + ( (candIntraPredModeA + 62) %65) , 2 + ( (candIntraPredModeA -1) %65) , 2 + ( (candIntraPredModeA + 61) %65) } ;
3. Otherwise (Mode A is not equal to Mode B) , the following applies:
- MPM [0] = Mode A, MPM [1] = Mode B
- The variable biggerIdx is set as follows:
biggerIdx = candModeList [0] > candModeList [1] ? 0: 1
- If both of Mode A and Mode B are larger than 1, MPM [x] with x = 2.. 5 is derived as follows:
MPM [2] = INTRA_PLANAR
MPM [3] = INTRA_DC
- If MPM [biggerIdx] -MPM [! biggerIdx] is equal to neither 64 nor 1, the following applies:
MPM [4] = 2 + ( (MPM [biggerIdx] + 62) %65)
MPM [5] = 2 + ( (MPM [biggerIdx] -1) %65)
- Otherwise, the following applies:
MPM [4] = 2 + ( (MPM [biggerIdx] + 61) %65)
MPM [5] = 2 + (candModeList [biggerIdx] %65)
- Otherwise, if the sum of Mode A and Mode B is larger than or equal to 2, the following applies:
MPM [2] = ! MPM [! biggerIdx]
MPM [3] = 2 + ( (MPM [biggerIdx] + 62) %65)
MPM [4] = 2 + ( (MPM [biggerIdx] -1) %65)
MPM [5] = 2 + ( (MPM [biggerIdx] + 61) %65)
where % denotes the modulo operator (e.g., a % b denotes the remainder of a divided by b).
2.3.3 Wide-angle intra prediction for non-square blocks
Conventional angular intra prediction directions are defined from 45 degrees to -135 degrees in clockwise direction. In VTM2, several conventional angular intra prediction modes are adaptively replaced with wide-angle intra prediction modes for non-square blocks. The replaced modes are signaled using the original method and remapped to the indexes of wide angular modes after parsing. The total number of intra prediction modes for a certain block is unchanged, e.g., 67, and the intra mode coding is unchanged.
To support these prediction directions, the top reference with length 2W+1, and the left reference with length 2H+ 1, are defined as shown in FIG. 4.
The mode number of replaced mode in wide-angular direction mode is dependent on the aspect ratio of a block. The replaced intra prediction modes are illustrated in Table 2-1.
Table 2-1 -Intra prediction modes replaced by wide-angular modes
Condition Replaced intra prediction modes
W/H==2 Modes  2, 3, 4, 5, 6, 7
W/H>2 Modes  2, 3, 4, 5, 6, 7, 8, 9, 10, 11
W/H==1 None
H/W==1/2 Modes  61, 62, 63, 64, 65, 66
H/W<1/2 Modes 57, 58, 59, 60, 61, 62, 63, 64, 65, 66
As shown in FIG. 5, two vertically-adjacent predicted samples may use two non-adjacent reference samples in the case of wide-angle intra prediction. Hence, a low-pass reference sample filter and side smoothing are applied to the wide-angle prediction to reduce the negative effect of the increased gap Δpα.
2.3.4 Position dependent intra prediction combination
In the VTM2, the results of intra prediction of planar mode are further modified by a position dependent intra prediction combination (PDPC) method. PDPC is an intra prediction method which invokes a combination of the un-filtered boundary reference samples and HEVC style intra prediction with filtered boundary reference samples. PDPC is applied to the following intra modes without signalling: planar, DC, horizontal, vertical, bottom-left angular mode and its eight adjacent angular modes, and top-right angular mode and its eight adjacent angular modes.
The prediction sample pred (x, y) is predicted using an intra prediction mode (DC, planar, angular) and a linear combination of reference samples according to the following equation:
pred (x, y) = ( wL × R (-1, y) + wT × R (x, -1) - wTL × R (-1, -1) + (64 - wL - wT + wTL) × pred (x, y) + 32 ) >> 6
where R (x, -1) and R (-1, y) represent the reference samples located at the top and left of the current sample (x, y) , respectively, and R (-1, -1) represents the reference sample located at the top-left corner of the current block.
If PDPC is applied to DC, planar, horizontal, and vertical intra modes, additional boundary filters are not needed, as required in the case of HEVC DC mode boundary filter or horizontal/vertical mode edge filters.
FIGS. 6A-6D illustrate the definition of reference samples (R (x, -1) , R (-1, y) and R (-1, -1) ) for PDPC applied over various prediction modes. The prediction sample pred (x’, y’) is located at (x’, y’) within the prediction block. The coordinate x of the reference sample R (x, -1) is given by x = x’ + y’ + 1, and the coordinate y of the reference sample R (-1, y) is similarly given by y = x’ + y’ + 1.
The PDPC weights are dependent on prediction modes and are shown in Table 2-2.
Table 2-2 -Example of PDPC weights according to prediction modes
Diagonal top-right: wT = 16 >> ( (y’ << 1) >> shift) , wL = 16 >> ( (x’ << 1) >> shift) , wTL = 0
Diagonal bottom-left: wT = 16 >> ( (y’ << 1) >> shift) , wL = 16 >> ( (x’ << 1) >> shift) , wTL = 0
Adjacent diagonal top-right: wT = 32 >> ( (y’ << 1) >> shift) , wL = 0, wTL = 0
Adjacent diagonal bottom-left: wT = 0, wL = 32 >> ( (x’ << 1) >> shift) , wTL = 0
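As an illustration of the combination equation above, the PDPC blending of a single sample may be sketched as follows. This is a hedged example: the weight derivation shown corresponds to a DC/planar-style weighting (an assumption for illustration, since the weights are mode-dependent per Table 2-2), the shift value (which depends on the block dimensions) is passed in as a parameter, and the function name is illustrative.

// Sketch of the PDPC combination for one prediction sample.
// left = R(-1, y), top = R(x, -1), topLeft = R(-1, -1).
int pdpcSample(int predXY, int left, int top, int topLeft, int x, int y, int shift)
{
    int wT = 32 >> ((y << 1) >> shift); // top weight (DC/planar-style, assumed)
    int wL = 32 >> ((x << 1) >> shift); // left weight
    int wTL = (wL >> 4) + (wT >> 4);    // top-left weight
    return (wL * left + wT * top - wTL * topLeft
            + (64 - wL - wT + wTL) * predXY + 32) >> 6;
}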
2.3.5 Multiple reference line intra prediction (MRLIP)
Instead of always using the reconstructed samples in the adjacent left column and above row (e.g., reference line 0) for intra prediction, it is proposed to allow using reference samples located at different distances.
The MRLIP has the following characteristics:
- Reference line index signaling
- for reference line idx > 0, only those in MPM list and only signal mpm index without remaining mode;
- for reference line index = 0, the same as original design, all kinds of intra prediction modes may be selected
- One of three lines may be selected for one luma block:  reference line  0, 1, 3 as depicted in FIG. 7.
- Top line of CTU restriction
- disable MRL for the first line of blocks inside a CTU
2.3.6 Chroma coding
In HEVC chroma coding, five modes (including one direct mode (DM) , which is the intra prediction mode from the top-left corresponding luma block, and four default modes) are allowed for a chroma block. The two chroma color components share the same intra prediction mode.
Different from the design in HEVC, two new methods have been proposed, including: cross-component linear model (CCLM) prediction mode and multiple DMs.
2.3.6.1 CCLM
To reduce the cross-component redundancy, a cross-component linear model (CCLM) prediction mode, a.k.a. LM, is used in the JEM, for which the chroma samples are predicted based on the reconstructed luma samples of the same CU by using a linear model as follows:
pred C (i, j) = α·rec L′ (i, j) + β      (1)
where pred C (i, j) represents the predicted chroma samples in a CU and rec L′ (i, j) represents the downsampled reconstructed luma samples of the same CU for color formats 4: 2: 0 or 4: 2: 2 while rec L′ (i, j) represents the reconstructed luma samples of the same CU for color format 4: 4: 4. CCLM Parameters α and β are derived by minimizing the regression error between the neighboring reconstructed luma and chroma samples around the current block as follows:
α = ( N·∑L (n) ·C (n) - ∑L (n) ·∑C (n) ) / ( N·∑L (n) ·L (n) - ∑L (n) ·∑L (n) )      (2)
β = ( ∑C (n) - α·∑L (n) ) / N      (3)
where L (n) represents the down-sampled (for color formats 4: 2: 0 or 4: 2: 2) or original (for color format 4: 4: 4) top and left neighboring reconstructed luma samples, C (n) represents the top and left neighboring reconstructed chroma samples, and value of N is equal to twice of the minimum of width and height of the current chroma coding block. For a coding block with a square shape, the above two equations are applied directly.
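For illustration, the least-squares fit of Eqs. (2) and (3) may be sketched as follows (floating point is used for clarity, whereas real codec implementations use integer approximations; the function name is illustrative):

#include <vector>

// Derive the CCLM parameters alpha and beta from the neighboring reconstructed
// luma samples L(n) and chroma samples C(n) by minimizing the regression error.
void deriveCclmParams(const std::vector<int> &L, const std::vector<int> &C,
                      double &alpha, double &beta)
{
    const int N = static_cast<int>(L.size()); // L and C have the same length
    long long sumL = 0, sumC = 0, sumLC = 0, sumLL = 0;
    for (int n = 0; n < N; ++n) {
        sumL += L[n];
        sumC += C[n];
        sumLC += 1LL * L[n] * C[n];
        sumLL += 1LL * L[n] * L[n];
    }
    const long long denom = 1LL * N * sumLL - sumL * sumL;
    alpha = denom ? double(1LL * N * sumLC - sumL * sumC) / double(denom) : 0.0;
    beta = (double(sumC) - alpha * double(sumL)) / N;
}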
The CCLM luma-to-chroma prediction mode is added as one additional chroma intra prediction mode. At the encoder side, one more RD cost check for the chroma components is added for selecting the chroma intra prediction mode. When intra prediction modes other than the CCLM luma-to-chroma prediction mode are used for the chroma components of a CU, CCLM Cb-to-Cr prediction is used for Cr component prediction.
2.3.6.1.1 CCLM for non-square block
For a non-square coding block, the neighboring samples of the longer boundary are first subsampled to have the same number of samples as for the shorter boundary. FIG. 8 shows the location of the left and above reconstructed samples and the sample of the current block involved in the CCLM mode.
This regression error minimization computation is performed as part of the decoding process, not just as an encoder search operation, so no syntax is used to convey the α and β values.
2.3.6.1.2 CCLM between chroma components
The CCLM prediction mode also includes prediction between the two chroma components, e.g., the Cr component is predicted from the Cb component. Instead of using the reconstructed sample signal, the CCLM Cb-to-Cr prediction is applied in the residual domain. This is implemented by adding a weighted reconstructed Cb residual to the original Cr intra prediction to form the final Cr prediction:
pred Cr* (i, j) = pred Cr (i, j) + α·resi Cb′ (i, j)      (4)
wherein resi Cb′ (i, j) represents the reconstructed Cb residual sample at position (i, j) .
The scaling factor α is derived in a similar way as in the CCLM luma-to-chroma prediction. The only difference is an addition of a regression cost relative to a default α value in the error function, so that the derived scaling factor is biased towards a default value of -0.5, as follows:
α = ( N·∑Cb (n) ·Cr (n) - ∑Cb (n) ·∑Cr (n) + λ· (-0.5) ) / ( N·∑Cb (n) ·Cb (n) - ∑Cb (n) ·∑Cb (n) + λ )      (5)
where Cb (n) represents the neighboring reconstructed Cb samples, Cr (n) represents the neighboring reconstructed Cr samples, and λ is equal to ∑ (Cb (n) ·Cb (n) ) >> 9.
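A hedged floating-point sketch of this biased regression (Eq. (5)) is given below; the function name is illustrative:

#include <vector>

// Derive the Cb-to-Cr scaling factor: the CCLM-style fit plus the lambda term
// that pulls the solution toward the default value -0.5.
double deriveCbToCrAlpha(const std::vector<int> &Cb, const std::vector<int> &Cr)
{
    const int N = static_cast<int>(Cb.size());
    long long sCb = 0, sCr = 0, sCbCr = 0, sCbCb = 0;
    for (int n = 0; n < N; ++n) {
        sCb += Cb[n];
        sCr += Cr[n];
        sCbCr += 1LL * Cb[n] * Cr[n];
        sCbCb += 1LL * Cb[n] * Cb[n];
    }
    const long long lambda = sCbCb >> 9; // lambda = sum(Cb(n)*Cb(n)) >> 9
    const double num = double(1LL * N * sCbCr - sCb * sCr) - 0.5 * double(lambda);
    const double den = double(1LL * N * sCbCb - sCb * sCb) + double(lambda);
    return den != 0.0 ? num / den : -0.5;
}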
2.3.6.1.3 Downsampling filters in CCLM mode
To perform cross-component prediction, for the 4:2:0 chroma format, where 4 luma samples correspond to 1 chroma sample, the reconstructed luma block needs to be downsampled to match the size of the chroma signal. The default downsampling filter (e.g., 6-tap as depicted in FIG. 9B) used in CCLM mode is as follows:
Rec L′ (i, j) = ( 2·Rec L (2i, 2j) + 2·Rec L (2i, 2j + 1) + Rec L (2i - 1, 2j) + Rec L (2i + 1, 2j) + Rec L (2i - 1, 2j + 1) + Rec L (2i + 1, 2j + 1) + 4 ) >> 3      (6)
Note that this downsampling assumes the “type 0” phase relationship as shown in FIG. 9A for the positions of the chroma samples relative to the positions of the luma samples, e.g., collocated sampling horizontally and interstitial sampling vertically.
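The 6-tap downsampling of Eq. (6) may be sketched for one chroma position as below (C++17; clamping at the block boundary is an assumption for illustration, since the exact padding behavior is implementation specific, and the function name is illustrative):

#include <algorithm>

// Downsample the reconstructed luma plane recL (width w, height h, row stride
// `stride`) at the chroma position (i, j) with the default 6-tap CCLM filter.
int downsampleLuma6Tap(const int *recL, int stride, int w, int h, int i, int j)
{
    auto lumaAt = [&](int x, int y) {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return recL[y * stride + x];
    };
    return (2 * lumaAt(2 * i, 2 * j) + 2 * lumaAt(2 * i, 2 * j + 1)
            + lumaAt(2 * i - 1, 2 * j) + lumaAt(2 * i + 1, 2 * j)
            + lumaAt(2 * i - 1, 2 * j + 1) + lumaAt(2 * i + 1, 2 * j + 1)
            + 4) >> 3;
}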
2.3.6.2 MDLM (multi-directional LM)
This contribution proposes multi-directional LM (MDLM) . In MDLM, two additional CCLM modes are proposed: LM-T, where the linear model parameters are derived only based on the top neighboring samples as shown in FIG. 10A, and LM-L, where the linear model parameters are derived only based on the left neighboring samples as shown in FIG. 10B.
2.3.6.3 Three CCLM solutions in VVC
CCLM from luma to chroma prediction as in JEM is adopted in VTM-2.0. In addition, JVET-L0338 and JVET-L0191 are further adopted into VTM-3.0.
In total, three modes for CCLM are supported, named INTRA_LT_CCLM (the one in JEM) , INTRA_L_CCLM (LM-L) and INTRA_T_CCLM (LM-T) . The differences among the three modes are which neighboring samples are utilized to derive the linear model parameters (e.g., α, β) .
Suppose the chroma block size is equal to nTbW x nTbH; denote the availability of the top and left blocks of the current block by availT and availL, respectively, and the subsampling ratios of the above row and left column by xS and yS, respectively.
2.3.6.3.1 INTRA_LT_CCLM
In this mode, also denoted as LM-LT, the above row and left column may be utilized to derive linear model parameters. For non-square chroma block, the corresponding longer side may be sub-sampled. That is, up to 2*nS = 2* (min (nTbW, nTbH) ) samples may be utilized for linear model parameter derivation.
More specifically, the following applies:
nS= ( (availL &&availT) ? Min (nTbW, nTbH) : (availL ? nTbH: nTbW) )    (7)
xS = 1 << ( ( (nTbW > nTbH) &&availL &&availT) ? (Log2 (nTbW) -Log2 (nTbH) ) : 0)  (8)
yS = 1 << ( ( (nTbH > nTbW) &&availL &&availT) ? (Log2 (nTbH) -Log2 (nTbW) ) : 0)  (9)
2.3.6.3.2 INTRA_L_CCLM
In this mode, also denoted as LM-L, the left column and below-left side (up to numSampL samples) are both utilized, if needed.
More specifically, the following applies:
xS and yS are set to 1 (e.g., no sub-sampling regardless of whether it is a non-square or square block) .
numSampL= (availL &&predModeIntra = = INTRA_L_CCLM) ? (nTbH + numLeftBelow) : 0 (10)
2.3.6.3.3 INTRA_T_CCLM
In this mode, also denoted as LM-T, the above row and above-right side (up to numSampT samples) are both utilized, if needed.
More specifically, the following applies:
xS and yS are set to 1 (e.g., no sub-sampling regardless of whether it is a non-square or square block) .
numSampT= (availT && predModeIntra= =INTRA_T_CCLM) ? (nTbW + numTopRight) : 0 (11)
2.3.6.4 Four-point based linear model derivation
Either 2 points or 4 points of neighboring luma samples and their corresponding chroma samples are utilized to derive the linear model parameters. According to the color format, the luma samples may be downsampled luma samples instead of the reconstructed luma samples used directly.
Basically, the 2 or 4 points are selected with equal distance. Suppose the block width and height of the current chroma block are W and H, respectively, and the top-left coordinate of the current chroma block is [0, 0] .
1. If the above and the left blocks are both available and the current mode is the normal LM mode (excluding LM-T and LM-L) , 2 chroma samples located at the above row and 2 chroma samples located at the left column are selected.
The two above samples’ coordinates are [floor (W/4) , -1] and [floor (3*W/4) , -1] . The two left samples’ coordinates are [-1, floor (H/4) ] and [-1, floor (3*H/4) ] . The selected samples are painted in solid color (e.g., grey or black) as depicted in FIG. 11A.
FIG. 11A shows an example when both above and left neighboring samples are available. Subsequently, the 4 samples are sorted according to luma sample intensity and classified into 2 groups. The two larger samples and the two smaller samples are respectively averaged. The cross component prediction model is derived with the 2 averaged points. Alternatively, the maximum and minimum values of the four samples are used to derive the LM parameters.
2. If the above block is available while the left block is not available, four chroma samples from above block are selected when W>2 and 2 chroma samples are selected when W=2.
The four selected above samples’ coordinates are [W/8, -1] , [W/8 + W/4, -1] , [W/8 + 2*W/4, -1] , and [W/8 + 3*W/4, -1] . The selected samples are painted in solid color (e.g., grey or black) as depicted in FIG. 11B. FIG. 11B shows an example when only above neighboring samples are available and top-right is not available.
3. If the left block is available while the above block is not available, four chroma samples from left block are selected when H>2 and 2 chroma samples are selected when H=2.
The four selected left samples’ coordinates are [-1, H/8] , [-1, H/8 + H/4] , [-1, H/8 + 2*H/4] , and [-1, H/8 + 3*H/4] .
4. If neither of the left and above blocks is available, a default prediction is used, with α equal to 0 and β equal to 1 << (BitDepth-1) , where BitDepth represents the bit-depth of chroma samples.
5. If the current mode is the LM-T mode, four chroma samples from above block are selected when W’>2 and 2 chroma samples are selected when W’=2. W’ is the available number of above neighboring samples, which can be 2*W.
The four selected above samples’ coordinates are [W’/8, -1] , [W’/8 + W’/4, -1] , [W’/8 + 2*W’/4, -1] , and [W’/8 + 3*W’/4, -1] .
6. If the current mode is the LM-L mode, four chroma samples from left block are selected when H’>2 and 2 chroma samples are selected when H’=2. H’ is the available number of left neighboring samples, which can be 2*H.
The four selected left samples’ coordinates are [-1, H’/8] , [-1, H’/8 + H’/4] , [-1, H’/8 + 2*H’/4] , and [-1, H’/8 + 3*H’/4] .
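Cases 1 to 3 above may be summarized by the following sketch, which returns the selected neighbor coordinates for a W x H chroma block (the reduced-sample cases W = 2 or H = 2 and the extended LM-T/LM-L lengths of cases 5 and 6 are omitted; the function name is illustrative):

#include <utility>
#include <vector>

// Return the neighboring chroma sample positions used for the four-point
// linear model derivation; (x, -1) lies in the above row and (-1, y) in the
// left column, with the block's top-left sample at (0, 0).
std::vector<std::pair<int, int>> selectFourPoints(int W, int H,
                                                  bool availAbove, bool availLeft)
{
    if (availAbove && availLeft) // case 1: normal LM mode, two from each side
        return { {W / 4, -1}, {3 * W / 4, -1}, {-1, H / 4}, {-1, 3 * H / 4} };
    if (availAbove)              // case 2: four samples from the above row
        return { {W / 8, -1}, {W / 8 + W / 4, -1},
                 {W / 8 + 2 * W / 4, -1}, {W / 8 + 3 * W / 4, -1} };
    if (availLeft)               // case 3: four samples from the left column
        return { {-1, H / 8}, {-1, H / 8 + H / 4},
                 {-1, H / 8 + 2 * H / 4}, {-1, H / 8 + 3 * H / 4} };
    return {};                   // case 4: neither side, default prediction
}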
JVET-L0191 proposes to replace the LMS algorithm for the linear model parameters α and β by a straight-line equation, the so-called two-point method. The two smaller values among the four points are averaged and denoted as A; the two greater values (the remaining two) are averaged and denoted as B. A and B are depicted in FIG. 12.
Where the linear model parameters α and β are obtained according to the following equations:
α = (y B - y A) / (x B - x A)      (12)
β = y A - α·x A      (13)
The division to derive α is avoided and replaced by multiplications and shifts as below:
- If either above or left neighboring block is available, the following applies:
a = 0;
iShift = 16;
shift = (InternalBitDepth > 8) ? InternalBitDepth - 9 : 0;
add = shift ? 1 << (shift - 1) : 0;
diff = (MaxLuma - MinLuma + add) >> shift;
if (diff > 0)
{
div = ( (MaxChroma - MinChroma) * g_aiLMDivTableLow [diff - 1] + 32768) >> 16;
a = ( ( (MaxChroma - MinChroma) * g_aiLMDivTableHigh [diff - 1] + div + add) >> shift) ;
}
b = MinChroma - ( (a * MinLuma) >> iShift) ;
- Otherwise, the following applies:
a = 0; iShift = 0; b = 1 << (BitDepth C -1)
wherein S is set equal to iShift, α is set equal to a and β is set equal to b; g_aiLMDivTableLow and g_aiLMDivTableHigh are two tables each with 512 entries. Each entry stores a 16-bit integer.
To derive the Chroma predictor, as for the current VTM implementation, the multiplication is replaced by an integer operation as the following:
pred C (i, j) = ( (α·rec L′ (i, j) ) >> S) + β
The prediction values are further clipped to the allowed range of chroma values.
2.3.6.5 Chroma prediction generation process
For all of the three supported LM modes, the following applies:
The prediction samples predSamples [x] [y] of one chroma block, with x = 0.. nTbW - 1, y = 0.. nTbH - 1, are derived as follows:
predSamples [x] [y] = Clip1C ( ( (pDsY [x] [y] * a) >> k) + b)      (14)
Clip1 C (x) = Clip3 (0, (1 << BitDepth C) -1, x)      (15)
wherein (a, b) and k (set to S) are the linear model parameters derived as in sub-section 2.3.6.3.1, 2.3.6.3.2 or 2.3.6.3.3 depending on the selected CCLM mode for the chroma block, nTbW and nTbH are the width and height of the chroma block, respectively, and pDsY is the downsampled collocated luma reconstructed block.
More specifically, the down-sampled collocated luma samples pDsY [x] [y] with x = 0.. nTbW - 1, y = 0.. nTbH - 1 are derived as follows, with the (1, 2, 1; 1, 2, 1) downsampling filter, or the (1, 1) downsampling filter for the left-most column when the left neighboring samples are unavailable:
- pDsY [x] [y] with x = 1 .. nTbW -1, y = 0.. nTbH -1 is derived as follows:
pDsY [x] [y] = (pY [2 * x - 1] [2 * y] + pY [2 * x - 1] [2 * y + 1] + 2 * pY [2 * x] [2 * y] + 2 * pY [2 * x] [2 * y + 1] + pY [2 * x + 1] [2 * y] + pY [2 * x + 1] [2 * y + 1] + 4) >> 3      (16)
- If availL is equal to TRUE, pDsY [0] [y] with y = 0.. nTbH -1 is derived as follows:
pDsY [0] [y] = (pY [-1] [2 * y] + pY [-1] [2 * y + 1] + 2 * pY [0] [2 * y] + 2 * pY [0] [2 * y + 1] + pY [1] [2 * y] + pY [1] [2 * y + 1] + 4) >> 3      (17)
- Otherwise, pDsY [0] [y] with y = 0.. nTbH -1 is derived as follows:
pDsY [0] [y] = (pY [0] [2*y] +pY [0] [2*y+ 1] + 1) >> 1      (18)
In above examples, pY indicates the collocated luma reconstructed samples before deblocking filter.
2.3.6.6 Syntax design of chroma intra prediction modes
7.3.9.5 Coding unit syntax
The chroma intra prediction mode related part of the coding unit syntax can be summarized as follows: when CCLM is enabled and applicable to the current block, cclm_mode_flag is signalled; if cclm_mode_flag is equal to 1, cclm_mode_idx is signalled to indicate which of the three CCLM modes is applied; otherwise, intra_chroma_pred_mode is signalled.
Table 20 -Specification of IntraPredModeC [xCb] [yCb] depending on cclm_mode_flag, cclm_mode_idx, intra_chroma_pred_mode and lumaIntraPredMode
When cclm_mode_flag is equal to 1, IntraPredModeC is set to the CCLM mode indicated by cclm_mode_idx (INTRA_LT_CCLM, INTRA_L_CCLM or INTRA_T_CCLM for cclm_mode_idx equal to 0, 1 or 2, respectively) . Otherwise, intra_chroma_pred_mode equal to 0, 1, 2 and 3 corresponds to the planar, vertical, horizontal and DC modes, respectively, with the default mode being replaced when it equals lumaIntraPredMode, and intra_chroma_pred_mode equal to 4 (DM) sets IntraPredModeC equal to lumaIntraPredMode.
2.4 Two-Step Cross-component Prediction Mode (TSCPM) in AVS3
This section shows an example of the Two-Step Cross-component Prediction Mode (TSCPM) in AVS3. TSCPM is done in the following steps:
1) Get linear model from neighboring reconstructed samples
2) Apply the linear model to the originally reconstructed luma block to get an internal prediction block.
3) The internal prediction block is down-sampled to generate the final chroma prediction block.
FIG. 13 depicts the basic procedure of the chroma prediction block generation process. The originally reconstructed luma sample located at (x, y) of the collocated luma block, shown as the left square, is denoted by R L (x, y) . By simply applying the linear model with parameters (α, β) to each luma sample, a temporary chroma prediction block is generated. After that, the temporary chroma prediction block is further down-sampled to generate the final chroma prediction block.
The linear model derivation process and down-sampling process is described in the following sub-sections.
2.4.1 Derivation of linear model
In one embodiment, either 4 or 2 samples may be selected and averages of two larger values and two smaller values are utilized to calculate the parameters.
Selection of neighboring samples
Firstly, the ratio r of the width to the height of the chroma coded block is calculated as in Eq. 19. Then, based on the availability of the above row and the left column, four or two samples are selected.
r = width / height      (19)
More specifically, if the above and the left neighboring reference samples are both available, four samples located at [0, -1] , [width - max (1, r) , -1] , [-1, 0] , [-1, height - max (1, r) ] are selected. When only above neighboring reference samples are available, four samples located at [0, -1] , [width/4, -1] , [2*width/4, -1] , [3*width/4, -1] are used. For the case that only left reference samples are accessible, [-1, 0] , [-1, height/4] , [-1, 2*height/4] , [-1, 3*height/4] are employed. FIG. 14 shows an example of the locations of the four neighboring samples. The selected samples are painted in solid color.
Subsequently, the 4 samples are sorted according to luma sample intensity and classified into 2 groups. The two larger samples and two smaller samples are respectively averaged. Cross component prediction model is derived with the 2 averaged points. In one example, the similar way as described in 2.3.6.4 may be utilized to derive α, β and shift with  the average of two larger selected sample values as (MaxLuma, MaxChroma) and the average of two smaller selected sample values as (MinLuma, MinChroma) .
If only either the above block with current chroma block width to 2 or left block with current chroma block height to 2 is available, [0, -1] and [1, -1] of the above line, or [-1, 0] , [-1, 1] of the left line are selected. A chroma prediction model is derived according to the luminance and chrominance values of selected 2 samples. In one example, the similar way as described in 2.3.6.4 may be utilized to derive α, β and shift.
If neither of the left and above blocks are available, a default prediction is used. with α equals 0, β equals to 1<< (BitDepth-1) , where BitDepth represents the bit-depth of chroma samples.
2.4.2 Two step derivation process of chroma prediction block
The temporary chroma prediction block is generated with Eq. 20, where P′ c (x, y) denotes a temporary prediction block, α and β are model parameters, and R L (x, y) is a reconstructed luma sample.
P′ c (x, y) = α × R L (x, y) + β    (20)
Similar to the normal intra prediction process, clipping operations are applied to P′ c (x, y) to make sure it is within [0, (1 << BitDepth) - 1] .
A six-tap filter (e.g., [1 2 1; 1 2 1] ) is introduced for the down-sampling process of the temporary chroma prediction block, as shown in Eq. 21.
P c = (2 × P′ c (2x, 2y) + 2 × P′ c (2x, 2y + 1) + P′ c (2x - 1, 2y) + P′ c (2x + 1, 2y) + P′ c (2x - 1, 2y + 1) + P′ c (2x + 1, 2y + 1) + offset0) >> 3       (21)
In addition, for chroma samples located at the left-most column, a [1 1] downsampling filter is applied instead.
P c = (P′ c (2x, 2y) + P′ c (2x + 1, 2y) + offset1) >> 1
The two variables offset0 and offset1 are integer values. In some examples, the variables offset0 and offset1 may be set to 4 and 1, respectively. In some examples, offset0 and offset1 may be set to 0.
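The two-step procedure of this section may be sketched end-to-end as follows. This is a simplified floating-point illustration: the AVS3 design uses fixed-point model parameters and clips the temporary block, both of which are omitted here, and offset0 = 4 and offset1 = 1 are assumed; the function name is illustrative.

#include <vector>

// TSCPM sketch: apply the linear model to the collocated luma block, then
// downsample the temporary block to the 4:2:0 chroma resolution using the
// 6-tap filter of Eq. 21 (and the 2-tap filter for the left-most column).
std::vector<int> tscpmPredict(const std::vector<int> &recLuma, int lumaW, int lumaH,
                              double alpha, double beta)
{
    std::vector<double> tmp(lumaW * lumaH);
    for (int n = 0; n < lumaW * lumaH; ++n)
        tmp[n] = alpha * recLuma[n] + beta;     // step 1: temporary block

    const int W = lumaW / 2, H = lumaH / 2;     // final chroma dimensions
    std::vector<int> pred(W * H);
    auto T = [&](int x, int y) { return tmp[y * lumaW + x]; };
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            pred[y * W + x] = (x == 0)
                ? int(T(0, 2 * y) + T(1, 2 * y) + 1) >> 1
                : int(2 * T(2 * x, 2 * y) + 2 * T(2 * x, 2 * y + 1)
                      + T(2 * x - 1, 2 * y) + T(2 * x + 1, 2 * y)
                      + T(2 * x - 1, 2 * y + 1) + T(2 * x + 1, 2 * y + 1) + 4) >> 3;
    return pred;
}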
2.4.3 Additional TSCPM modes
In addition to the TSCPM mode (denoted by TSCPM-LT) described in above-mentioned sub-sections, two additional TSCPM modes are further introduced, denoted as TSCPM-L and TSCPM-A wherein only left or above neighboring samples are utilized.
2.4.4 Syntax Design
Based on the AVS3 specification, a flag is used to signal whether the chroma intra-prediction mode is TSCPM or not. This flag (as a 2nd bin) is coded right after the indication of DM mode usage (1st bin) . The detailed bin strings for each chroma mode are tabulated in the table below.
Table 2-4: Coding bins signaling with TSCPM of chroma intra modes in TAVS3.
In this binarization, DM is signalled with the first bin and TSCPM with the second bin; the remaining chroma intra modes are represented by longer bin strings.
3 Problems
The design of cross-component prediction methods, including CCLM and TSCPM, utilizes luma information to predict a chroma color component (e.g., Cb or Cr) . It is noticed that when a second chroma color component (e.g., Cr) is to be coded, the other two color components (e.g., luma and Cb) are already available. How to utilize that information needs to be further studied.
4 Technical Solution (s)
To solve the above-mentioned problem, a method of Prediction from Multiple Cross-components (PMC) is proposed. In PMC, the prediction signal of a first color component C0 may be derived using the reconstructed representative samples of corresponding blocks of a second and/or third color component, denoted by C1 and C2. In yet another example, the prediction signal of C0 may further depend on the neighboring (e.g., adjacent or non-adjacent) samples of reconstructed samples of C1 and C2. In yet another example, the prediction signal of C0 may further depend on the neighboring (e.g., adjacent or non-adjacent) samples of reconstructed samples of C0.
The detailed techniques below should be considered as examples to explain general concepts. These techniques should not be interpreted in a narrow way. Furthermore, these techniques can be combined in any manner.
In the following descriptions, the term ‘cross-component prediction (CCP) ’ may represent any variance of coding methods that derive the reconstruction/prediction signal of a first color component using the information of a second color component.
1. The coding/decoding process of a PMC coded C0 block (e.g., the prediction signal derivation process) may depend on the reconstructed and/or prediction values of representative samples of the C1 and/or C2 color components corresponding to the current C0 samples.
a. In one example, a linear function may be applied to the representative samples with C1 and/or C2 color components and/or the current C0 block’s neighboring samples (including adjacent or non-adjacent) .
b. In one example, a non-linear function may be applied to the representative samples with C1 and/or C2 color components and/or the current C0 block’s neighboring samples (including adjacent or non-adjacent) .
c. In one example, the final predictor of one sample in the C0 block, denoted by FPred c0 is derived by using the following equation:
FPred c0 = X × TPred c0 + Y × (Rec c2 -FPred c2) + Z       (4-1)
wherein TPred c0 represents a temporary prediction value of the sample using existing prediction modes (e.g., intra/inter/IBC prediction modes) , Rec c2 and FPred c2 represent the reconstruction and final prediction values of representative C2 samples.
d. In one example, the final predictor of one sample in the C0 block, denoted by FPred c0, is derived using the following equation (see also the code sketch following item 1 below) :
FPred c0 = X × (α c0 * Rec c1 + β c0) + Y × (Rec c2 - (α c2 * Rec c1 + β c2) ) + Z      (4-2)
wherein Rec c1 and Rec c2 represent the reconstruction values of representative C1 and C2 samples, respectively.
e. In one example, the final predictor of one sample in the C0 block, denoted by FPred c0 is derived by using the following equation:
FPred c0 = (X × α c0 -Y × α c2) *Rec c1 + (X × β c0 -Y × β c2) + Y × Rec c2 + Z      (4-3)
f. In one example, for a current C0 block with size equal to K’×L’, two temporary blocks (with size equal to K×L) for C0 and C2 may be firstly derived, according  to linear model parameters (X × α c0, X × β c0) and (Y × α c2, Y × β c2) , and corresponding C1 block with size equal to K×L, respectively. The temporary blocks may be further downsampled to K’×L’ with/without clipping.
i. In one example, the two temporary blocks are derived using the linear model parameters applied to the corresponding C1 block, similar to CCLM/TSCPM process.
g. In one example, for a current C0 block with size equal to K’×L’, one temporary blocks (with size equal to K×L) may be derived, according to (X × α c0 -Y × α c2, X × β c0 -Y × β c2) and corresponding C1 block with size equal to K×L. The temporary block may be further downsampled to K’×L’ with/without clipping.
i. Alternatively, furthermore, the final prediction may be generated by adding the collocated sample in the temporary block (with/without being downsampled) to Y × Rec c2, or by subtracting the collocated sample in the temporary block (with/without being downsampled) from Y × Rec c2.
h. In one example, for a current C0 block with size equal to K’×L’, a temporary C1 block may be firstly derived, e.g., using a downsampling filter, from the C1 block with size equal to K×L. Linear model parameters (X × α c0, X × β c0) and (Y × α c2, Y × β c2) may be applied to the temporary C1 block, followed by adding the collocated sample in the temporary block, after the linear model parameters are applied, to Y × Rec c2, or by subtracting the collocated sample in the temporary block, after the linear model parameters are applied, from Y × Rec c2.
i. In the above examples, X and Y are two variables which may represent weighting factors and Z is an offset value; α c0 and α c2 are two variables applied to representative C1 samples; β c0 and β c2 are offset values.
i. In one example, X or Y or Z is equal to 1.
ii. In one example, X or Y or Z is equal to 0.
iii. In one example, X is equal to 1, Y is equal to -1 and Z is equal to 0.
iv. In one example, X or Y or Z is equal to 2^K or -2^K, wherein K is an integer value, such as a value in the range [-M, N] wherein M and N are no smaller than 0.
v. The variables used in above equations may be pre-defined or signaled in the bitstream.
1) Alternatively, the variables used in above equations may be derived on-the-fly.
vi. One or multiple of the variables used in above equations may be same for all samples within one video unit (e.g., coding block/prediction block/transform block) .
1) Alternatively, multiple sets of variables used in above equations may be derived or signalled.
2) Alternatively, a first sample in the video unit may select a first set of variable values; and a second sample in the video unit may select a second set of variable values wherein at least one variable value is different in the first and second sets.
j. In above examples, Ci (i being 0 to 2) may be defined as follows:
i. In one example, C0 is Cb; C1 is Y and C2 is Cr color component.
ii. In one example, C0 is Cr; C1 is Y and C2 is Cb color component.
iii. In one example, C0 is the luma color component (Y in YCbCr format; G in RGB format) , and C1 and C2 are the remaining two color components.
k. In one example, a representative sample may be obtained by down-sampling.
l. In above examples, Rec c2 may be the corresponding C2 sample.
m. In above examples, the final prediction values may be further clipped to a specific range.
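As referenced in item 1. d above, a hedged sketch of the PMC predictor of Eq. (4-2) is given below; all names and the default parameter values are illustrative placeholders rather than normative choices:

// Parameters for one PMC configuration: the weights X, Y and offset Z, a
// linear model (alphaC0, betaC0) mapping C1 to C0, and a linear model
// (alphaC2, betaC2) mapping C1 to C2.
struct PmcParams {
    double X = 1.0, Y = 1.0, Z = 0.0; // e.g., X = 1, Y = -1, Z = 0 is another option
    double alphaC0 = 0.0, betaC0 = 0.0;
    double alphaC2 = 0.0, betaC2 = 0.0;
};

// Eq. (4-2): predict a C0 sample from the representative C1 reconstruction,
// corrected by the difference between the C2 reconstruction and its own
// C1-based prediction.
double pmcFinalPredictor(double recC1, double recC2, const PmcParams &p)
{
    const double predC0FromC1 = p.alphaC0 * recC1 + p.betaC0;
    const double predC2FromC1 = p.alphaC2 * recC1 + p.betaC2;
    return p.X * predC0FromC1 + p.Y * (recC2 - predC2FromC1) + p.Z;
}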
2. How to select and/or how many representative samples of C1 and/or C2 to be used for predicting one C0 sample may be determined on-the-fly.
a. In one example, how to select the representative samples of C1 and/or C2 may be based on the position of the current C0 sample and/or color format.
i. Taking the 4: 2: 0 color format for example, the representative C1/C2 samples may be those surrounding the C1/C2 samples corresponding to the current C0 sample (see also the sketch following item 2 below) .
a) Assume a chroma sample to be predicted is located at (X c, Y c) , which may be equal to (X, Y) . The L representative luma reconstruction samples may be defined as:
1) For example, two representative luma reconstruction samples are defined as the samples located at (2X, 2Y) , (2X, 2Y+1) .
2) For example, two representative luma reconstruction samples are defined as the samples: (2X, 2Y) , (2X+1, 2Y) .
3) For example, six representative luma reconstruction samples are depicted in FIG. 15A. FIGS. 15A-B show examples of selection of representative C1 samples in PMC.
4) For example, eight representative luma reconstruction samples are depicted in FIG. 15B.
ii. In one example, the representative C2 sample may have the same coordinates as the current C0 sample.
b. In one example, the representative samples may be defined as those reconstructed samples before in-loop filtering methods (e.g., deblocking filter/SAO/ALF/CCALF) are applied.
c. In one example, the representative samples may be defined as a function of multiple reconstructed samples before in-loop filtering methods (e.g., deblocking filter/SAO/ALF/CCALF) are applied.
i. In one example, the function may be defined as a downsample filtering process.
ii. In one example, the function may be defined as linear function (e.g., weighted average) or non-linear function.
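As referenced in item 2. a above, the following sketch gathers representative C1 (luma) positions for one C0 sample at (x, y) under the 4: 2: 0 format. Whether two or six samples are used is a design choice; the six-sample pattern shown follows the subsampling positions of item 4. d below, and the function name is illustrative.

#include <utility>
#include <vector>

// Return the luma positions that serve as representative C1 samples for the
// chroma sample at (x, y) in 4:2:0 (luma coordinates are twice the chroma's).
std::vector<std::pair<int, int>> representativeLumaPositions(int x, int y, int count)
{
    if (count == 2)
        return { {2 * x, 2 * y}, {2 * x, 2 * y + 1} };
    // six samples: a 3x2 window around the collocated luma position
    return { {2 * x - 1, 2 * y},     {2 * x, 2 * y},     {2 * x + 1, 2 * y},
             {2 * x - 1, 2 * y + 1}, {2 * x, 2 * y + 1}, {2 * x + 1, 2 * y + 1} };
}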
3. Linear model parameters may be applied to representative C1 samples.
a. In one example, (α c0, β c0) are linear model parameters derived for the current sample/current block.
i. In one example, they may be derived using the neighboring samples of current block C0 and neighboring samples of C1 block.
b. In one example, (α c2, β c2) are linear model parameters derived for the representative C2 samples/the C2 block covering the representative C2 samples.
i. In one example, they may be derived using the neighboring samples of C2 block and neighboring samples of C1 block.
c. The linear model parameters may be derived in the same way as that used in VVC/JEM, or those used in TSCPM, or as described in PCT/CN2018/114158, PCT/CN2018/118799, PCT/CN2018/119709, PCT/CN2018/125412, PCT/CN2019/070002, PCT/CN2019/075874, PCT/CN2019/075993, PCT/CN2019/076195, PCT/CN2019/079396, PCT/CN2019/079431, PCT/CN2019/079769, which are incorporated by reference herein in their entirety for all purposes.
i. Alternatively, linear model parameters may be derived from the neighboring reconstructed C1 samples without downsampling.
ii. Alternatively, linear model parameters may be derived from the neighboring reconstructed C0/C2 samples with upsampling.
iii. Alternatively, furthermore, the linear model parameters may be firstly clipped to a range before being used either in CCP (TSCPM or CCLM) mode.
4. Multiple PMC modes may be allowed with different variable values/different linear model parameter derivation methods and/or different downsampling/upsampling methods and/or different locations of reconstructed/downsampled reconstructed neighboring samples for linear model derivation.
a. In one example, one mode is defined that may only utilize neighboring samples from above row and/or right above row.
b. In one example, one mode is defined that may only utilize neighboring samples from left column and/or left below column.
c. In one example, one mode is defined in which multiple linear models (e.g., multiple sets of linear models) may be derived and applied to one block.
i. In one example, the current luma reconstruction blocks and/or neighboring reconstructed samples may be split into M (M>1) categories. Different categories may utilize different linear models.
d. In one example, one mode is defined in which the downsampling filter is defined to be a subsampling filter. Assume a chroma sample to be predicted is located at (x, y) ; the L representative luma reconstruction samples are defined as the samples located at (2*x - 1, 2*y) , (2*x - 1, 2*y + 1) , (2*x, 2*y) , (2*x, 2*y + 1) , (2*x + 1, 2*y) and (2*x + 1, 2*y + 1) .
i. In one example, K samples nearest the position (a, b) may be used. The variable (a, b) may depend on the color format. In one example, a = 2*x and b= 2*y for 4: 2: 0 color format.
ii. In one example, the prediction sample of the chroma block may only depend on K of the L representative luma reconstruction samples (K is an integer value) .
iii. In one example, the prediction sample of the chroma block may only depend on sample located at (2 *x, 2 *y) .
iv. In one example, the prediction sample of the chroma block may only depend on sample located at (2 *x + 1, 2 *y) .
v. In one example, the prediction sample of the chroma block may only depend on sample located at (2 *x + 1, 2 *y + 1) .
vi. In one example, the prediction sample of the chroma block may only depend on sample located at (2 *x, 2 *y +1) .
vii. In one example, the prediction sample of the chroma block may only depend on samples located at (2 *x, 2 *y) and (2 *x, 2 *y +1) .
5. When PMC is enabled for a video unit of C0 color component (e.g., for a Cr block) , the residual information of the video unit may be further signaled.
a. Alternatively, signaling of the residual information of the video unit may be omitted, e.g., only zero coefficients are available.
b. Alternatively, a flag (e.g., coded block flag (CBF) for the C0 color component) may be still signaled to indicate whether there are non-zero coefficients in the video unit.
i. Alternatively, the CBF for the C0 color component is not signaled, and, in an example, inferred to be equal to 1.
c. Alternatively, furthermore, signaling of the flag (e.g., coded block flag (CBF) for the C2 color component) to indicate whether there are non-zero coefficients in corresponding C2 block may be always skipped.
i. Alternatively, furthermore, the CBF for the C2 color component is inferred to be equal to 1 (see the sketch following item 5 below) .
ii. Alternatively, furthermore, whether to and/or how to signal the CBF for the C2 block may depend on the usage of PMC and/or which PMC mode.
iii. Alternatively, furthermore, whether to and/or how to signal the CBF for the C0 block may depend on the usage of PMC and/or which PMC mode.
d. Alternatively, furthermore, signaling of the flag (e.g., coded block flag (CBF) for the C1 color component) to indicate whether there are non-zero coefficients in corresponding C1 block may be always skipped.
i. Alternatively, furthermore, the CBF for the C1 color component is inferred to be equal to 1.
ii. Alternatively, furthermore, whether to and/or how to signal the CBF for the C1 block may depend on the usage of PMC and/or which PMC mode.
iii. Alternatively, furthermore, whether to and/or how to signal the CBF for the C0 block may depend on the usage of PMC and/or which PMC mode.
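One of the alternatives of item 5 (items 5. c and 5. c. i) may be sketched as follows: when PMC is enabled for the C0 block, the CBF of the corresponding C2 block is not parsed from the bitstream and is inferred to be equal to 1. The parsing interface below is purely illustrative.

#include <functional>

// Parse the C2 coded block flag, or infer it when PMC is in use for C0.
int parseOrInferCbfC2(bool pmcEnabledForC0, const std::function<int()> &readBit)
{
    if (pmcEnabledForC0)
        return 1;      // signalling skipped; inferred to be equal to 1
    return readBit();  // otherwise parsed from the bitstream as usual
}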
Signaling related to PMC
6. PMC modes may be treated as additional prediction modes.
a. Whether to signal an indication of PMC mode may depend on the coding mode of the current block.
i. In one example, the indication is signaled only when the current block is coded with one or multiple specific modes.
b. Whether to signal an indication of PMC mode may depend on the color format.
i. For example, the indication is not signaled if the color format is 4: 0: 0.
c. In one example, the bin/flag indicating the utilization of PMC mode of C0 may be signaled/parsed according to the CBF flag and/or prediction mode of C1 and/or C2.
i. In one example, the PMC mode may be signaled when the CBF flag of C1 and/or C2 is 1 or 0, and/or the prediction mode of C1 and/or C2 is one of the CCP (e.g., TSCPM/CCLM) modes.
ii. In one example, the PMC mode may be inferred to be 0 if the CBF flag of C1 and/or C2 is 0 and/or the prediction mode of C1 and/or C2 is not one of the CCP (e.g., TSCPM/CCLM) modes.
d. In one example, indication of enabling one of multiple PMC modes may be firstly signaled/parsed in addition to the existing intra prediction modes.
i. Alternatively, furthermore, when one of multiple PMC modes is enabled for a block, an index to the multiple PMC modes may be further signaled.
ii. In one example, a first bin may be coded to indicate the usage of DM mode, followed by a 2nd bin coded to indicate the usage of CCP (e.g., TSCPM/CCLM) modes and a 3rd bin coded to indicate the usage of PMC mode.
a) Alternatively, the 2nd bin is coded to indicate the usage of PMC and the 3rd bin is coded to indicate the usage of CCP (e.g., TSCPM/CCLM) modes.
b) Alternatively, a first bin may be coded to indicate the usage of PMC mode, followed by a bin coded to indicate the usage of DM and/or CCP (e.g., TSCPM/CCLM) modes.
e. In one example, PMC modes may be treated as additional variances of cross-component prediction methods, such as being part of a set of CCP (e.g., CCLM/TSCPM) modes.
i. Alternatively, furthermore, whether to signal/parse the PMC modes may depend on the usage of CCP modes.
a) In one example, if the CCP mode is enabled (e.g., cclm_mode_flag in VVC, the 2nd bin of the chroma intra prediction mode in AVS3) for a block, an index may be further signaled to indicate which of the multiple CCP modes is applied to the block.
1) Alternatively, furthermore, for those available CCP methods, it may be further classified to multiple categories, such as TSCPM/CCLM/PMC. Indications of the category index may be further signaled.
[1] In one example, the indication of the category index may be firstly coded, followed by an index relative to the category if needed.
[2] In one example, the indication of the category index may be coded after an index relative to the category, if needed.
[3] In one example, same or different contexts may be utilized to code a first index relative to a first category (e.g., indication of TSCPM) and a second index relative to a second category.
f. In one example, the order of signalling DM/CCP/PMC mode (e.g., DM before or after PMC) may depend on the coded mode information of a spatial block.
i. In one example, if the neighboring block is coded with PMC mode, the indication of PMC mode may be signalled before the indication of other CCP/DM modes.
ii. Alternatively, if the neighboring block is coded with DM mode, the indication of DM mode may be signalled before the indication of CCP modes.
iii. Alternatively, if the neighboring block is coded with non-PMC mode (e.g., DM mode or other chroma intra prediction mode unequal to PMC) , the indication of DM mode may be signalled before the indication of PMC mode.
g. Alternatively, a PMC mode is treated as a new intra prediction mode in addition to existing ones.
i. In one example, different PMC modes may be assigned with different mode indices and coded with binary bin strings.
h. In one example, indications (e.g., a flag/bin) of the usage of PMC mode may be bypass coded, e.g., without any context.
i. Alternatively, indications (e.g., a flag/bin) of the usage of PMC mode may be context coded, e.g., with one or multiple contexts.
a) In one example, the context may be derived using neighboring blocks’ mode information (e.g., equal to PMC or equal to CCP) and/or availability of neighboring blocks.
b) In one example, the context may be derived according to the block dimension (e.g., width and/or height) of current block.
i. In one example, when three PMC modes are enabled for processing a video unit (e.g., video/picture/slice/brick/tile/subpicture), the following coding methods for the indications of usage of one mode may be utilized. Denote the three PMC modes by PMC_Mode0, PMC_Mode1, PMC_Mode2, wherein PMC_Mode0 indicates the PMC mode using both left and above neighboring samples to derive linear model parameters; PMC_Mode1 and PMC_Mode2 indicate the PMC modes using only left and only above neighboring samples to derive linear model parameters, respectively.
i. Some examples are tabulated in Table 4-1 through Table 4-7 to describe the corresponding bin strings for different chroma intra prediction modes. Differences compared to the design before introducing PMC are highlighted in bold italicized text. It is noted that the TSCPM tabulated in those tables may be replaced by other CCP methods, and the bin orders/mode indices may also be switched.
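The following is a minimal sketch of the bin ordering described in item d.ii above: the first bin indicates DM, the second indicates the CCP family (e.g., TSCPM/CCLM), and the third indicates PMC. The function name and string labels are illustrative only, not part of any specification.

```python
def binarize_chroma_mode_prefix(mode: str) -> list:
    """Return the leading bins for a chroma intra prediction mode under
    the DM -> CCP -> PMC ordering of item d.ii (illustrative labels)."""
    if mode == "DM":
        return [1]              # 1st bin signals DM
    if mode.startswith("CCP"):  # e.g., the TSCPM/CCLM family
        return [0, 1]           # 2nd bin signals CCP
    if mode.startswith("PMC"):
        return [0, 0, 1]        # 3rd bin signals PMC
    return [0, 0, 0]            # remaining chroma intra modes follow
```

A mode index within the selected family (e.g., PMC_Mode0/1/2) would then be appended after this prefix, as described in items d.i and i above.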
Related to signaling of variants of cross-component prediction (CCP) methods
7. Whether to signal/parse the indications of a CCP method (e.g., LM-T, LM-L, TSCPM-T, TSCPM-L, PMC-T, PMC-L) may depend on the availability of neighboring samples (e.g., adjacent or non-adjacent); see the sketch after this list.
a. In one example, if the above neighboring samples are unavailable, the indications of a CCP method (e.g., LM-T, TSCPM-T, PMC-T) that relies on above neighboring samples may not be signalled.
b. In one example, if the left neighboring samples are unavailable, the indications of a CCP method (e.g., LM-L, TSCPM-L, PMC-L) that relies on left neighboring samples may not be signalled.
c. In one example, if both left and above neighboring samples are unavailable, the indications of a CCP method (e.g., LM-T, LM-L, TSCPM-T, TSCPM-L, PMC-T, PMC-L, LM-LT, TSCPM-LT, PMC-LT, other variants of CCLM/TSCPM/PMC) that relies on neighboring samples may not be signalled.
d. In one example, if neighboring samples located at only one side (either left or above the current block) are available, the indications of a CCP method (e.g., LM-LT, TSCPM-LT, PMC-LT) that relies on neighboring samples of both sides may not be signalled.
e. In one example, if either left or above neighboring samples are unavailable, the indications of a CCP method (e.g., LM-T, LM-L, TSCPM-T, TSCPM-L, PMC-T, PMC-L, CCLM, TSCPM, PMC) that relies on neighboring samples may not be signalled.
f. Alternatively, furthermore, when the indication is not signaled, the coding method is inferred to be disabled.
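A minimal sketch of the availability-dependent gating in bullets 7.a-7.d, assuming the mode names listed above; an empty result means all indications are skipped and the methods are inferred to be disabled per 7.f:

```python
def ccp_indications_to_signal(left_avail: bool, above_avail: bool) -> set:
    """Return the CCP-method indications that may still be signalled,
    given neighboring-sample availability (illustrative mode names)."""
    modes = set()
    if above_avail:
        modes |= {"LM-T", "TSCPM-T", "PMC-T"}     # 7.a: need above samples
    if left_avail:
        modes |= {"LM-L", "TSCPM-L", "PMC-L"}     # 7.b: need left samples
    if left_avail and above_avail:
        modes |= {"LM-LT", "TSCPM-LT", "PMC-LT"}  # 7.c/7.d: need both sides
    return modes
```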
8. Indications of whether and/or how to use the above-mentioned methods may be signalled in a video processing unit (e.g., sequence/video/picture/slice/tile/brick/subpicture/CTU row/CTU/VPDU/CU/PU/TU/sub-blocks in a CU/PU).
a. Alternatively, furthermore, the indications of whether and/or how to use the above-mentioned methods may be signaled in SPS/VPS/PPS/picture header/slice header/tile group header/group of CTUs/CTU/other kinds of video data units.
9. Whether and/or how to use the above-mentioned methods may depend on the decoded information, such as block dimension, position of a block relative to a video processing unit (e.g., relative to a slice), slice/picture type, partitioning types (e.g., dual tree or single tree), etc. (see the sketch after this list).
a. In one example, for blocks (e.g., chroma blocks) with number of samples larger than (or equal to) M (e.g., M=4096, 1024) , such methods are disallowed.
b. In one example, for blocks (e.g., chroma blocks) with number of samples smaller than (or equal to) M (e.g., M=4, 8, 16) , such methods are disallowed.
c. In one example, for blocks (e.g., chroma blocks) with width and/or height larger than (or equal to) M (e.g., M=64, 32) , such methods are disallowed.
d. In one example, for blocks (e.g., chroma blocks) with width and/or height smaller than (or equal to) M (e.g., M=2, 4) , such methods are disallowed.
e. When the above method is disallowed, indications of usage of such methods may be skipped.
f. Alternatively, a conforming bitstream follows the rule that such methods be disabled when certain conditions (e.g., depending on block dimension) are satisfied.
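A sketch of the block-dimension gating in bullets 9.a-9.e, using the example thresholds from the text (the parameter values are examples, not normative):

```python
def cross_component_allowed(width: int, height: int,
                            max_samples: int = 4096, min_samples: int = 16,
                            max_side: int = 64, min_side: int = 2) -> bool:
    """Disallow the above methods for blocks that are too large or too
    small; when this returns False, the usage indication is skipped."""
    num_samples = width * height
    if num_samples >= max_samples or num_samples <= min_samples:
        return False                       # 9.a / 9.b
    if max(width, height) >= max_side:
        return False                       # 9.c
    if min(width, height) <= min_side:
        return False                       # 9.d
    return True
```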
Table 4-1: Bin strings of each chroma intra prediction mode (PMC is treated as one of the TSCPM modes, and one bin (e.g., the 4th bin) is further signaled to indicate whether it belongs to TSCPM or PMC).
(Table image not reproduced.)
Table 4-2: Bin strings of each chroma intra prediction mode (PMC is treated as one of the TSCPM modes, and one bin (e.g., the 4th bin) is further signaled to indicate whether it belongs to TSCPM or PMC).
(Table image not reproduced.)
Table 4-3: Bin strings of each chroma intra prediction mode (PMC is treated as a new category (indicated by the 1st bin), before the indication of TSCPM represented by the 2nd bin).
(Table image not reproduced.)
Table 4-4: Bin strings of each chroma intra prediction mode (PMC is treated as a new category (indicated by the 0th bin)).
(Table image not reproduced.)
Table 4-5: Bin strings of each chroma intra prediction mode (indication of PMC modes is signaled after TSCPM modes, denoted by the 2nd bin).
(Table image not reproduced.)
Table 4-6: Bin strings of each chroma intra prediction mode (each of the PMC modes is treated as a new chroma intra prediction mode; all PMC modes are added after the existing modes).
(Table image not reproduced.)
Table 4-7: Bin strings of each chroma intra prediction mode (mode index signaled after CBF flags) .
(Table image not reproduced.)
5 Embodiments
An example decoding process is illustrated as follows. Prediction from Multiple Cross-components (PMC) modes are proposed, in which the prediction of component C0 is derived from the reconstructed samples of the other color components C1 and C2.
5.1 Embodiment #1
In this embodiment, C0 is the Cr color component, C1 is the luma color component and C2 is the Cb color component.
The prediction of the Cr component is derived by a linear combination of the Y and Cb reconstructed samples. Three PMC modes (e.g., PMC_LT, PMC_L and PMC_T) are proposed. PMC modes are signaled with a flag after TSCPM, as illustrated in Table 5-1. Meanwhile, the indication of the explicit PMC mode indices (e.g., PMC_LT, PMC_L and PMC_T) aligns with the representation of the TSCPM mode indices (e.g., TSCPM_LT, TSCPM_L and TSCPM_T). Moreover, the coded block flag (cbf) of the Cb block is inferred to be 1 if the corresponding Cb/Cr block is coded with PMC mode. For the case that the left and/or above neighboring reference samples are not available, only TSCPM_LT/PMC_LT is employed. In such a scenario, bin2 and bin3, which indicate the utilization and indices of enhanced TSCPM/PMC (e.g., TSCPM_L, TSCPM_T/PMC_L, PMC_T), can be removed.
Table 5-1: Bin strings of each chroma intra prediction mode (PMC is treated as one of the TSCPM modes, and one bin (e.g., the 4th bin) is further signaled to indicate whether it belongs to TSCPM or PMC).
(Table image not reproduced.)
The overall flow is illustrated in FIG. 16. First, the inter-channel linear model parameters (α_0, β_0) of Y-Cb and the model parameters (α_1, β_1) of Y-Cr are obtained from neighboring reconstructed samples. The linear model parameter derivation methods of PMC_LT, PMC_L and PMC_T are identical to those of TSCPM_LT, TSCPM_L and TSCPM_T in AVS3, respectively.
Second, an internal block IPred, which has the identical dimensions of the luma coding block, is generated by the linear model as follows:
IPred = (α_0 + α_1) · Rec_Y + (β_0 + β_1),      (22)
where Rec_Y is the reconstructed samples of the Y component.
Third, a down-sampled block IPred′ is generated from IPred, employing the same set of down-sampling filters as those in TSCPM.
Fourth, the final prediction FPred_Cr of Cr can be formulated as follows:
FPred_Cr = Clip(0, (1 << bitDepth) - 1, IPred′ - Rec_Cb),      (23)
where Rec_Cb is the reconstructed samples of the Cb component.
Alternatively, the following may apply:
A Prediction from Multiple Cross-components (PMC) method is proposed wherein the prediction of the Cr component is derived by a linear combination of the Y and Cb reconstructed samples. An internal block IPred is first derived according to a linear model applied to the corresponding luma block, and the final prediction of Cr is set to the difference between the downsampled temporary block and the reconstructed Cb block. More specifically, the final prediction of the Cr block is defined as follows:
IPred = A · Rec_Y + B,      (24)
FPred_Cr = IPred′ - Rec_Cb,      (25)
where Rec_Y denotes the reconstruction of the Y component and IPred is an internal block that has the identical dimensions of the luma coding block. IPred′ represents the down-sampled IPred, which employs the same set of down-sampling filters as in TSCPM.
To keep the complexity as low as possible and reuse the logic of TSCPM, the linear parameters (A, B) are set to (α_0 + α_1, β_0 + β_1), wherein (α_0, β_0) and (α_1, β_1) are the two sets of linear model parameters derived for Cb and Cr, respectively, such as using the TSCPM/CCLM methods.
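A minimal sketch of this Cr derivation (equations (22)-(25)), assuming the two TSCPM-style parameter pairs have already been derived from neighboring reconstructed samples, and substituting a simple 2x2 mean for the actual TSCPM down-sampling filter set (4:2:0 assumed):

```python
import numpy as np

def pmc_predict_cr(rec_y, rec_cb, a0, b0, a1, b1, bit_depth=10):
    """Sketch of PMC Cr prediction: rec_y is the luma reconstruction
    (2H x 2W), rec_cb the Cb reconstruction (H x W)."""
    # Eq. (22)/(24): internal block at luma resolution with combined
    # parameters (A, B) = (a0 + a1, b0 + b1)
    ipred = (a0 + a1) * rec_y + (b0 + b1)
    # Down-sample to chroma resolution (2x2 mean as a stand-in filter)
    ipred_ds = (ipred[0::2, 0::2] + ipred[1::2, 0::2] +
                ipred[0::2, 1::2] + ipred[1::2, 1::2]) / 4
    # Eq. (23): final Cr prediction, clipped to the sample range
    return np.clip(ipred_ds - rec_cb, 0, (1 << bit_depth) - 1)
```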
5.2 Embodiment#2
If TSCPM mode is disabled (tscpm_enable_flag = 0) , the flag/bin (enhance_tscpm_enable_flag) indicating the enabling of Enhanced-TSCPM (e.g., TSCPM_L, TSCPM_T) is implicitly inferred to be 0 without signaling/parsing.
5.3 Embodiment#3
If Enhanced-TSCPM mode is disabled (e.g., enhance_tscpm_enable_flag = 0), the flag/bin (index 2 in Table 5-1) indicating the type of TSCPM (e.g., TSCPM_LT or Enhanced-TSCPM) is removed. The flag/bin (index 3 in Table 5-1) for discriminating TSCPM_L and TSCPM_T is also excluded.
5.4 Embodiment#4
If Intra Block Copy (IBC) mode is disabled (ibc_enable_flag = 0), the flag/bin (abvr_enable_flag) is implicitly inferred to be 0 without signaling/parsing.
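Embodiments #2-#4 share a common pattern: a dependent flag is parsed only when its parent tool is enabled, and is otherwise inferred to be 0 without consuming any bits. A minimal sketch, assuming a hypothetical bitstream-reader API:

```python
def parse_dependent_flag(reader, parent_enabled: bool) -> int:
    """Parse a flag conditioned on a parent enabling flag; infer 0
    (no bit consumed) when the parent tool is disabled.
    reader.read_flag() is a hypothetical bitstream-reader call."""
    if not parent_enabled:
        return 0
    return reader.read_flag()
```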
Example implementations of the disclosed technology
FIG. 17 shows a flowchart of an example video processing method 1700. At operation 1702, prediction values of samples of a first component of a video block of a video  are determined using representative samples of a second component of the video and/or a third component of the video. At operation 1704, a conversion is performed between the video block and a bitstream representation of the video block according to the determined prediction values of the first component.
In some embodiments, the determining is based on reconstructed values of the representative samples or prediction values of the representative samples. In some embodiments, the representative samples are obtained during the conversion. In some embodiments, the prediction value of the first component for one sample of the video block is obtained using an equation. In some embodiments, the equation includes: FPred_c0 = X × TPred_c0 + Y × (Rec_c2 - FPred_c2) + Z, where FPred_c0 is a prediction value for the one sample, X and Y are weighting factors, Z is an offset value, TPred_c0 is a temporary prediction value of the one sample using a prediction mode, and Rec_c2 and FPred_c2 respectively represent reconstruction values and final prediction values of the representative samples of the third component.
In some embodiments, the equation includes: FPred_c0 = X × (α_c0 * Rec_c1 + β_c0) + Y × (Rec_c2 - (α_c2 * Rec_c1 + β_c2)) + Z, where FPred_c0 is a prediction value for the one sample, X and Y are weighting factors, Z is an offset value, α_c0 and α_c2 are two variables applied to the representative samples of the second component, β_c0 and β_c2 are offset values, and Rec_c1 and Rec_c2 represent reconstruction values of the representative samples of the second component and the third component, respectively.
In some embodiments, the equation includes: FPred_c0 = (X × α_c0 - Y × α_c2) * Rec_c1 + (X × β_c0 - Y × β_c2) + Y × Rec_c2 + Z, where FPred_c0 is a prediction value for the one sample, X and Y are weighting factors, Z is an offset value, α_c0 and α_c2 are two variables applied to the representative samples of the second component, β_c0 and β_c2 are offset values, and Rec_c1 and Rec_c2 represent reconstruction values of the representative samples of the second component and the third component, respectively.
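The two preceding formulations are algebraically equivalent; a short expansion using the symbols already defined makes this explicit:

```latex
\begin{aligned}
\mathrm{FPred}_{c0}
  &= X\,(\alpha_{c0}\,\mathrm{Rec}_{c1} + \beta_{c0})
   + Y\,\bigl(\mathrm{Rec}_{c2} - (\alpha_{c2}\,\mathrm{Rec}_{c1} + \beta_{c2})\bigr) + Z \\
  &= (X\alpha_{c0} - Y\alpha_{c2})\,\mathrm{Rec}_{c1}
   + (X\beta_{c0} - Y\beta_{c2}) + Y\,\mathrm{Rec}_{c2} + Z .
\end{aligned}
```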
In some embodiments, X or Y or Z is equal to 1, or X or Y or Z is equal to 0, or X is equal to 1, Y is equal to -1 and Z is equal to 0, or X or Y or Z is equal to 2^K or -2^K, where K is an integer value in the range [-M, N], where M and N are greater than or equal to 0. In some embodiments, the equation includes variables that are either pre-defined, or signaled in a bitstream, or derived.
In some embodiments, the method of FIG. 17 further comprises deriving two temporary video blocks for the first component and the third component according to two sets of linear model parameters corresponding to a second video block associated with the second  component, where the two temporary video blocks and the second video block have a first width and a first height that is different from a second width and a second height of the video block. In some embodiments, the two temporary blocks are derived using the linear model parameters applied to the second video block associated with the second component.
In some embodiments, the method of FIG. 17 further comprises deriving one temporary video block according to linear model parameters corresponding to a second video block associated with the second component, where the one temporary video block and the second video block have a first width and a first height that is different from a second width and a second height of the video block. In some embodiments, the method of FIG. 17 further comprises deriving a temporary video block for the second component from a second video block associated with the second component, where the second video block has a first width and a first height that is different from a second width and a second height of the video block, applying linear model parameters to the temporary video block, and adding a collocated sample to or subtracting the collocated sample from the temporary video block after the linear model parameters are applied.
In some embodiments, the first component is a blue chroma component, the second component is a luma component, and the third component is a red chroma component, or the first component is the red chroma component, the second component is the luma component, and the third component is the blue chroma component, or the first component is the luma component or a blue component, the second component and the third component are remaining components.
In some embodiments, a selection of the representative samples and a number of the representative samples of the second component and/or the third component are dynamically determined. In some embodiments, the selection of the representative samples is based on a position of a current sample of the first component and/or a color format. In some embodiments, the color format includes a 4:2:0 color format, and the representative samples of the second component and/or the third component surround samples of the second component and/or the third component.
In some embodiments, the representative samples include reconstructed samples before in-loop filtering methods. In some embodiments, the representative samples are a function of reconstructed samples before in-loop filtering methods. In some embodiments, linear model parameters are applied to the representative samples of the second component. In some embodiments, the linear model parameters include α_c0 and β_c0 that are derived for the samples or the video block, where α_c0 is a variable applied to the representative samples of the second component and β_c0 is an offset value.
In some embodiments, α_c0 and β_c0 are derived using neighboring samples of the video block and neighboring samples of a second video block associated with the second component. In some embodiments, the linear model parameters include α_c2 and β_c2 that are derived for the representative samples of the third component or a third video block associated with the third component, where α_c2 is a variable applied to the representative samples of the third component and β_c2 is an offset value. In some embodiments, α_c2 and β_c2 are derived using neighboring samples of a second video block associated with the second component and neighboring samples of the third video block.
In some embodiments, the linear model parameters are derived using versatile video coding (VVC), Joint Exploration Model (JEM), or two-step cross-component prediction mode (TSCPM). In some embodiments, the equation includes variables, and the prediction values are determined using one of a plurality of prediction from multiple cross-components (PMC) modes that include: different variable values or different derivation methods for linear model parameters, and/or different downsampling or upsampling methods, and/or different locations of reconstructed or downsampled reconstructed neighboring samples for derivation of linear model parameters.
In some embodiments, when a prediction from multiple cross-components (PMC) mode is enabled for the video block of the first component, residual information of the video block is further signaled. In some embodiments, when a prediction from multiple cross-components (PMC) mode is enabled for the video block of the first component, residual information of the video block is omitted. In some embodiments, a flag that indicates a presence of non-zero coefficients in the video block of the first component is signaled. In some embodiments, an indication of a prediction from multiple cross-components (PMC) mode for the video block is signaled based on a coding mode of the video block. In some embodiments, an indication of a prediction from multiple cross-components (PMC) mode for the video block is signaled based on a color format.
In some embodiments, a bin or flag indicating a utilization of a prediction from multiple cross-components (PMC) mode of a first component is signaled or parsed according to a coded block flag (CBF) and/or a prediction mode of the second component and/or the third component. In some embodiments, the prediction values are determined using one of a plurality of prediction from multiple cross-components (PMC) modes, and an indication of the one PMC mode being enabled is signaled or parsed in addition to the existing intra prediction modes. In some embodiments, an index to the plurality of PMC modes is signaled.
In some embodiments, the prediction values are determined using one of a plurality of prediction from multiple cross-components (PMC) modes, and an indication of enabling the one PMC mode is signaled or parsed in addition to the existing intra prediction modes. In some embodiments, the prediction values are determined using one of a plurality of prediction from multiple cross-components (PMC) modes, and the plurality of PMC modes are additional variants of cross-component prediction (CCP) modes or methods. In some embodiments, a determination to signal or parse the one PMC mode depends on a usage of one CCP mode.
In some embodiments, the prediction values are determined using a cross-component prediction (CCP) method, and the CCP method is signaled based on availability of neighboring samples next to the samples of the first component. In some embodiments, an indication is not signaled for the CCP method that relies on the neighboring samples that are located above the samples of the first component and are unavailable. In some embodiments, an indication is not signaled for the CCP method that relies on the neighboring samples that are located left of the samples of the first component and are unavailable. In some embodiments, the prediction values are determined using a cross-component prediction (CCP) method or a prediction from multiple cross-components (PMC) mode, where the CCP method or the PMC mode is indicated via signaling in a video processing unit.
In some embodiments, the method of FIG. 17 further includes performing a determination, based on decoded information associated with the video block, whether the prediction values are determined using a cross-component prediction (CCP) method or a prediction from multiple cross-components (PMC) mode. In some embodiments, the determination is made to disallow using CCP or PMC to determine the prediction values in response to the video block having a number of samples greater than or equal to an integer M, where M is 4096 or 1024. In some embodiments, the determination is made to disallow using CCP or PMC to determine the prediction values in response to the video block having a number of samples less than or equal to an integer M, where M is 4, 8, or 16.
FIG. 18 is a block diagram of a video processing apparatus 1800. The apparatus 1800 may be used to implement one or more of the methods described herein. The apparatus 1800 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on. The apparatus 1800 may include one or more processors 1802, one or more memories 1804 and video processing hardware 1806. The processor(s) 1802 may be configured to implement one or more methods (including, but not limited to, method 1700) described in the present document. The memory (memories) 1804 may be used for storing data and code used for implementing the methods and techniques described herein. The video processing hardware 1806 may be used to implement, in hardware circuitry, some techniques described in the present document. In some embodiments, the video coding methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to FIG. 18.
FIG. 19 is a block diagram showing an example video processing system 1900 in which various techniques disclosed herein may be implemented. Various implementations may include some or all of the components of the system 1900. The system 1900 may include input 1902 for receiving video content. The video content may be received in a raw or uncompressed format, e.g., 8 or 10 bit multi-component pixel values, or may be in a compressed or encoded format. The input 1902 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interface include wired interfaces such as Ethernet, passive optical network (PON) , etc. and wireless interfaces such as Wi-Fi or cellular interfaces.
The system 1900 may include a coding component 1904 that may implement the various coding or encoding methods described in the present document. The coding component 1904 may reduce the average bitrate of video from the input 1902 to the output of the coding component 1904 to produce a coded representation of the video. The coding techniques are therefore sometimes called video compression or video transcoding techniques. The output of the coding component 1904 may be either stored, or transmitted via a communication connection, as represented by the component 1906. The stored or communicated bitstream (or coded) representation of the video received at the input 1902 may be used by the component 1908 for generating pixel values or displayable video that is sent to a display interface 1910. The process of generating user-viewable video from the bitstream representation is sometimes called video decompression. Furthermore, while certain video processing operations are referred to as “coding” operations or tools, it will be appreciated that the coding tools or operations are used at an encoder and corresponding decoding tools or operations that reverse the results of the coding will be performed by a decoder.
Examples of a peripheral bus interface or a display interface may include universal serial bus (USB) or high definition multimedia interface (HDMI) or Displayport, and so on. Examples of storage interfaces include SATA (serial advanced technology attachment) , PCI,  IDE interface, and the like. The techniques described in the present document may be embodied in various electronic devices such as mobile phones, laptops, smartphones or other devices that are capable of performing digital data processing and/or video display.
FIG. 20 is a block diagram that illustrates an example video coding system 100 that may utilize the techniques of this disclosure.
As shown in FIG. 20, video coding system 100 may include a source device 110 and a destination device 120. Source device 110, which may be referred to as a video encoding device, generates encoded video data. Destination device 120, which may be referred to as a video decoding device, may decode the encoded video data generated by source device 110.
Source device 110 may include a video source 112, a video encoder 114, and an input/output (I/O) interface 116.
Video source 112 may include a source such as a video capture device, an interface to receive video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of such sources. The video data may comprise one or more pictures. Video encoder 114 encodes the video data from video source 112 to generate a bitstream. The bitstream may include a sequence of bits that form a coded representation of the video data. The bitstream may include coded pictures and associated data. The coded picture is a coded representation of a picture. The associated data may include sequence parameter sets, picture parameter sets, and other syntax structures. I/O interface 116 may include a modulator/demodulator (modem) and/or a transmitter. The encoded video data may be transmitted directly to destination device 120 via I/O interface 116 through network 130a. The encoded video data may also be stored onto a storage medium/server 130b for access by destination device 120.
Destination device 120 may include an I/O interface 126, a video decoder 124, and a display device 122.
I/O interface 126 may include a receiver and/or a modem. I/O interface 126 may acquire encoded video data from the source device 110 or the storage medium/server 130b. Video decoder 124 may decode the encoded video data. Display device 122 may display the decoded video data to a user. Display device 122 may be integrated with the destination device 120, or may be external to destination device 120, which may be configured to interface with an external display device.
Video encoder 114 and video decoder 124 may operate according to a video compression standard, such as the High Efficiency Video Coding (HEVC) standard, the Versatile Video Coding (VVC) standard and other current and/or future standards.
FIG. 21 is a block diagram illustrating an example of video encoder 200, which may be video encoder 114 in the system 100 illustrated in FIG. 20.
Video encoder 200 may be configured to perform any or all of the techniques of this disclosure. In the example of FIG. 21, video encoder 200 includes a plurality of functional components. The techniques described in this disclosure may be shared among the various components of video encoder 200. In some examples, a processor may be configured to perform any or all of the techniques described in this disclosure.
The functional components of video encoder 200 may include a partition unit 201, a prediction unit 202 which may include a mode select unit 203, a motion estimation unit 204, a motion compensation unit 205 and an intra prediction unit 206, a residual generation unit 207, a transform processing unit 208, a quantization unit 209, an inverse quantization unit 210, an inverse transform unit 211, a reconstruction unit 212, a buffer 213, and an entropy encoding unit 214.
In other examples, video encoder 200 may include more, fewer, or different functional components. In an example, prediction unit 202 may include an intra block copy (IBC) unit. The IBC unit may perform prediction in an IBC mode in which at least one reference picture is a picture where the current video block is located.
Furthermore, some components, such as motion estimation unit 204 and motion compensation unit 205, may be highly integrated, but are represented in the example of FIG. 21 separately for purposes of explanation.
Partition unit 201 may partition a picture into one or more video blocks. Video encoder 200 and video decoder 300 may support various video block sizes.
Mode select unit 203 may select one of the coding modes, intra or inter, e.g., based on error results, and provide the resulting intra- or inter-coded block to a residual generation unit 207 to generate residual block data and to a reconstruction unit 212 to reconstruct the encoded block for use as a reference picture. In some examples, mode select unit 203 may select a combination of intra and inter prediction (CIIP) mode in which the prediction is based on an inter prediction signal and an intra prediction signal. Mode select unit 203 may also select a resolution for a motion vector (e.g., a sub-pixel or integer pixel precision) for the block in the case of inter-prediction.
To perform inter prediction on a current video block, motion estimation unit 204 may generate motion information for the current video block by comparing one or more reference frames from buffer 213 to the current video block. Motion compensation unit 205  may determine a predicted video block for the current video block based on the motion information and decoded samples of pictures from buffer 213 other than the picture associated with the current video block.
Motion estimation unit 204 and motion compensation unit 205 may perform different operations for a current video block, for example, depending on whether the current video block is in an I slice, a P slice, or a B slice.
In some examples, motion estimation unit 204 may perform uni-directional prediction for the current video block, and motion estimation unit 204 may search reference pictures of list 0 or list 1 for a reference video block for the current video block. Motion estimation unit 204 may then generate a reference index that indicates the reference picture in list 0 or list 1 that contains the reference video block and a motion vector that indicates a spatial displacement between the current video block and the reference video block. Motion estimation unit 204 may output the reference index, a prediction direction indicator, and the motion vector as the motion information of the current video block. Motion compensation unit 205 may generate the predicted video block of the current block based on the reference video block indicated by the motion information of the current video block.
In other examples, motion estimation unit 204 may perform bi-directional prediction for the current video block, motion estimation unit 204 may search the reference pictures in list 0 for a reference video block for the current video block and may also search the reference pictures in list 1 for another reference video block for the current video block. Motion estimation unit 204 may then generate reference indexes that indicate the reference pictures in list 0 and list 1 containing the reference video blocks and motion vectors that indicate spatial displacements between the reference video blocks and the current video block. Motion estimation unit 204 may output the reference indexes and the motion vectors of the current video block as the motion information of the current video block. Motion compensation unit 205 may generate the predicted video block of the current video block based on the reference video blocks indicated by the motion information of the current video block.
In some examples, motion estimation unit 204 may output a full set of motion information for decoding processing of a decoder.
In some examples, motion estimation unit 204 may not output a full set of motion information for the current video block. Rather, motion estimation unit 204 may signal the motion information of the current video block with reference to the motion information of another video block. For example, motion estimation unit 204 may determine that the motion information of the current video block is sufficiently similar to the motion information of a neighboring video block.
In one example, motion estimation unit 204 may indicate, in a syntax structure associated with the current video block, a value that indicates to the video decoder 300 that the current video block has the same motion information as another video block.
In another example, motion estimation unit 204 may identify, in a syntax structure associated with the current video block, another video block and a motion vector difference (MVD) . The motion vector difference indicates a difference between the motion vector of the current video block and the motion vector of the indicated video block. The video decoder 300 may use the motion vector of the indicated video block and the motion vector difference to determine the motion vector of the current video block.
As discussed above, video encoder 200 may predictively signal the motion vector. Two examples of predictive signaling techniques that may be implemented by video encoder 200 include advanced motion vector prediction (AMVP) and merge mode signaling.
Intra prediction unit 206 may perform intra prediction on the current video block. When intra prediction unit 206 performs intra prediction on the current video block, intra prediction unit 206 may generate prediction data for the current video block based on decoded samples of other video blocks in the same picture. The prediction data for the current video block may include a predicted video block and various syntax elements.
Residual generation unit 207 may generate residual data for the current video block by subtracting (e.g., indicated by the minus sign) the predicted video block (s) of the current video block from the current video block. The residual data of the current video block may include residual video blocks that correspond to different sample components of the samples in the current video block.
In other examples, there may be no residual data for the current video block, for example in a skip mode, and residual generation unit 207 may not perform the subtracting operation.
Transform processing unit 208 may generate one or more transform coefficient video blocks for the current video block by applying one or more transforms to a residual video block associated with the current video block.
After transform processing unit 208 generates a transform coefficient video block associated with the current video block, quantization unit 209 may quantize the transform  coefficient video block associated with the current video block based on one or more quantization parameter (QP) values associated with the current video block.
Inverse quantization unit 210 and inverse transform unit 211 may apply inverse quantization and inverse transforms to the transform coefficient video block, respectively, to reconstruct a residual video block from the transform coefficient video block. Reconstruction unit 212 may add the reconstructed residual video block to corresponding samples from one or more predicted video blocks generated by the prediction unit 202 to produce a reconstructed video block associated with the current block for storage in the buffer 213.
After reconstruction unit 212 reconstructs the video block, a loop filtering operation may be performed to reduce video blocking artifacts in the video block.
Entropy encoding unit 214 may receive data from other functional components of the video encoder 200. When entropy encoding unit 214 receives the data, entropy encoding unit 214 may perform one or more entropy encoding operations to generate entropy encoded data and output a bitstream that includes the entropy encoded data.
FIG. 22 is a block diagram illustrating an example of video decoder 300, which may be video decoder 124 in the system 100 illustrated in FIG. 20.
The video decoder 300 may be configured to perform any or all of the techniques of this disclosure. In the example of FIG. 22, the video decoder 300 includes a plurality of functional components. The techniques described in this disclosure may be shared among the various components of the video decoder 300. In some examples, a processor may be configured to perform any or all of the techniques described in this disclosure.
In the example of FIG. 22, video decoder 300 includes an entropy decoding unit 301, a motion compensation unit 302, an intra prediction unit 303, an inverse quantization unit 304, an inverse transformation unit 305, and a reconstruction unit 306 and a buffer 307. Video decoder 300 may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder 200 (e.g., FIG. 21) .
Entropy decoding unit 301 may retrieve an encoded bitstream. The encoded bitstream may include entropy coded video data (e.g., encoded blocks of video data) . Entropy decoding unit 301 may decode the entropy coded video data, and from the entropy decoded video data, motion compensation unit 302 may determine motion information including motion vectors, motion vector precision, reference picture list indexes, and other motion information. Motion compensation unit 302 may, for example, determine such information by performing the AMVP and merge mode.
Motion compensation unit 302 may produce motion compensated blocks, possibly performing interpolation based on interpolation filters. Identifiers for interpolation filters to be used with sub-pixel precision may be included in the syntax elements.
Motion compensation unit 302 may use interpolation filters as used by video encoder 200 during encoding of the video block to calculate interpolated values for sub-integer pixels of a reference block. Motion compensation unit 302 may determine the interpolation filters used by video encoder 200 according to received syntax information and use the interpolation filters to produce predictive blocks.
Motion compensation unit 302 may use some of the syntax information to determine sizes of blocks used to encode frame(s) and/or slice(s) of the encoded video sequence, partition information that describes how each macroblock of a picture of the encoded video sequence is partitioned, modes indicating how each partition is encoded, one or more reference frames (and reference frame lists) for each inter-encoded block, and other information to decode the encoded video sequence.
Intra prediction unit 303 may use intra prediction modes, for example received in the bitstream, to form a prediction block from spatially adjacent blocks. Inverse quantization unit 304 inverse quantizes, e.g., de-quantizes, the quantized video block coefficients provided in the bitstream and decoded by entropy decoding unit 301. Inverse transform unit 305 applies an inverse transform.
Reconstruction unit 306 may sum the residual blocks with the corresponding prediction blocks generated by motion compensation unit 302 or intra prediction unit 303 to form decoded blocks. If desired, a deblocking filter may also be applied to filter the decoded blocks in order to remove blockiness artifacts. The decoded video blocks are then stored in buffer 307, which provides reference blocks for subsequent motion compensation/intra prediction and also produces decoded video for presentation on a display device.
FIG. 23 is a flowchart representation of a method 2300 for video processing in accordance with the present technology. The method 2300 includes, at operation 2310, determining, for a conversion between a video block of a first component of a video and a bitstream representation of the video, prediction values of samples of the video block using representative samples outside of the video block. The representative samples are determined during the conversion. The method 2300 also includes, at operation 2320, performing the conversion based on the determining.
In some embodiments, the representative samples comprise samples from at least one of a video block of a second component, a video block of a third component, or a neighboring block of the video block, the neighboring block being adjacent or non-adjacent to the video block. In some embodiments, the determining is based on reconstructed values of the representative samples or prediction values of the representative samples.
In some embodiments, the determining is based on applying a linear function to the representative samples. In some embodiments, the determining is based on applying a non-linear function to the representative samples.
In some embodiments, a final prediction value of a sample in the first component C0 is denoted as FPred_c0, and FPred_c0 = X × TPred_c0 + Y × (Rec_c2 - FPred_c2) + Z. TPred_c0 represents a temporary prediction value of the sample, and Rec_c2 and FPred_c2 represent a reconstruction value and a final prediction value of a representative sample in the third component C2. X and Y represent weighting factors and Z represents an offset value, X, Y and Z being real numbers. In some embodiments, a final prediction value of a sample in the first component C0 is denoted as FPred_c0, and FPred_c0 = X × (α_c0 * Rec_c1 + β_c0) + Y × (Rec_c2 - (α_c2 * Rec_c1 + β_c2)) + Z, where Rec_c1 represents a reconstruction value of a representative sample in the second component C1 and Rec_c2 represents a reconstruction value of a representative sample in the third component C2. X and Y represent weighting factors and Z represents an offset value, α_c0 and β_c0 are linear model parameters for the first component, and α_c2 and β_c2 are linear model parameters for the third component, X, Y, Z, α_c0, β_c0, α_c2, and β_c2 being real numbers. In some embodiments, a final prediction value of a sample in the first component C0 is denoted as FPred_c0, and FPred_c0 = (X × α_c0 - Y × α_c2) * Rec_c1 + (X × β_c0 - Y × β_c2) + Y × Rec_c2 + Z. Rec_c1 represents a reconstruction value of a representative sample in the second component C1 and Rec_c2 represents a reconstruction value of a representative sample in the third component C2. X and Y represent weighting factors and Z represents an offset value. α_c0 and β_c0 are linear model parameters for the first component, and α_c2 and β_c2 are linear model parameters for the third component, X, Y, Z, α_c0, β_c0, α_c2, and β_c2 being real numbers.
In some embodiments, the first component has a size of K′×L′ and the second component has a size of K×L. Two temporary blocks having the size of K×L are derived according to linear model parameters (X × α_c0, X × β_c0) and (Y × α_c2, Y × β_c2), and the two temporary blocks are downsampled to the size of K′×L′. In some embodiments, the two temporary blocks are downsampled with or without clipping.
In some embodiments, the first component has a size of K′×L′ and the second component has a size of K×L. One temporary block having the size of K×L is derived according to linear model parameters (X × α_c0 - Y × α_c2, X × β_c0 - Y × β_c2) and the temporary block is downsampled to the size of K′×L′. In some embodiments, the temporary block is downsampled with or without clipping.
In some embodiments, the prediction values of the samples of the video block are determined by adding or subtracting collocated samples in the temporary block with or without performing downsampling. In some embodiments, the first component has a size of K′×L′ and the second component has a size of K×L. One temporary block is derived based on downsampling the second component to the size of K′×L′, and the prediction values of the samples of the video block are determined by applying linear model parameters (X × α_c0, X × β_c0) and (Y × α_c2, Y × β_c2) to the temporary block and by adding or subtracting collocated samples in the temporary block.
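A sketch of the one-temporary-block variant described above, assuming 4:2:0 so that K = 2K′ and L = 2L′, and substituting a simple 2x2 mean for the actual down-sampling filter:

```python
import numpy as np

def pmc_one_temp_block(rec_c1, a_c0, b_c0, a_c2, b_c2, X=1, Y=1):
    """Apply the combined parameters (X*a_c0 - Y*a_c2, X*b_c0 - Y*b_c2)
    to the K x L second-component reconstruction, then downsample the
    temporary block to the K' x L' size of the first component."""
    temp = (X * a_c0 - Y * a_c2) * rec_c1 + (X * b_c0 - Y * b_c2)
    return (temp[0::2, 0::2] + temp[1::2, 0::2] +
            temp[0::2, 1::2] + temp[1::2, 1::2]) / 4
```

The prediction of the first component is then obtained by adding Y × Rec_c2 (plus the offset Z) to collocated samples of this downsampled block, per the combined-parameter equation above.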
In some embodiments, α_c0 and β_c0 are derived using neighboring samples of the first component and neighboring samples of the second component. In some embodiments, α_c2 and β_c2 are derived using neighboring samples of the third component and neighboring samples of the second component. In some embodiments, the linear model parameters are derived in a same manner as parameters for a cross-component linear model (CCLM) prediction mode or a Two-Step Cross-component Prediction Mode. In some embodiments, the linear model parameters are derived based on reconstructed values of the neighboring samples of the second component without performing downsampling. In some embodiments, the linear model parameters are derived based on reconstructed values of the neighboring samples of the first component or the third component, used with upsampling. In some embodiments, the linear model parameters are clipped within a range prior to being used by the CCP coding tool.
In some embodiments, at least one of X, Y, or Z is equal to 1. In some embodiments, at least one of X, Y, or Z is equal to 0. In some embodiments, X is equal to 1, Y is equal to -1, and Z is equal to 0. In some embodiments, at least one of X, Y, or Z is equal to 2^K or -2^K, where K is an integer value in a range of [-M, N], where M and N are no smaller than 0. In some embodiments, variables including at least X, Y, Z, α_c0, β_c0, α_c2, or β_c2 are indicated in the bitstream representation. In some embodiments, variables including at least X, Y, Z, α_c0, β_c0, α_c2, or β_c2 are derived on the fly. In some embodiments, at least one of the variables has a same value for all samples within a video unit, the video unit comprising a coding block, a prediction block, or a transform block.
In some embodiments, the linear model parameters have multiple sets of values. In some embodiments, different samples within a video unit use different sets of values. In some embodiments, the first component is the Cb color component, the second component is the Y component, and the third component is the Cr component. In some embodiments, the first component is the Cr color component, the second component is the Y component, and the third component is the Cb component. In some embodiments, the first component is a luma component, and the second and third components are chroma components.
In some embodiments, the final prediction values are clipped within a predefined range. In some embodiments, the representative samples are selected during the conversion based on a characteristic of the first component. In some embodiments, the characteristic of the first component comprises a position of a sample of the first component or a color format of the first component. In some embodiments, the representative samples are located surrounding a current sample of the first component in case the color format is 4:2:0.
In some embodiments, the current sample is located at position (x, y), and one representative sample is located at (2x, 2y). In some embodiments, the current sample is located at position (x, y), and one representative sample is located at (2x+1, 2y). In some embodiments, the current sample is located at position (x, y), and one representative sample is located at (2x+1, 2y+1). In some embodiments, the current sample is located at position (x, y), and one representative sample is located at (2x, 2y+1). In some embodiments, the current sample is located at position (x, y), and two representative samples are located at (2x, 2y) and (2x, 2y+1). In some embodiments, the current sample is located at position (x, y), and two representative samples are located at (2x, 2y) and (2x+1, 2y). In some embodiments, the current sample is located at position (x, y), and six representative samples are located at (2x-1, 2y), (2x, 2y), (2x+1, 2y), (2x-1, 2y+1), (2x, 2y+1), and (2x+1, 2y+1). In some embodiments, the current sample is located at position (x, y), and eight representative samples are located at (2x, 2y-1), (2x-1, 2y), (2x, 2y), (2x+1, 2y), (2x-1, 2y+1), (2x, 2y+1), (2x+1, 2y+1), and (2x, 2y+2). In some embodiments, a representative sample is located at a same location as a current sample of the first component.
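The following helper illustrates a few of the 4:2:0 position patterns listed above; the function name and pattern labels are illustrative only:

```python
def representative_positions(x: int, y: int, pattern: str = "six") -> list:
    """Candidate collocated-luma positions for a chroma sample at (x, y)
    in 4:2:0, matching examples from the preceding paragraph."""
    if pattern == "six":          # (2x-1..2x+1) x (2y..2y+1) pattern
        return [(2*x - 1, 2*y), (2*x, 2*y), (2*x + 1, 2*y),
                (2*x - 1, 2*y + 1), (2*x, 2*y + 1), (2*x + 1, 2*y + 1)]
    if pattern == "two_vertical": # (2x, 2y) and (2x, 2y+1)
        return [(2*x, 2*y), (2*x, 2*y + 1)]
    return [(2*x, 2*y)]           # single collocated sample
```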
In some embodiments, the characteristic of the first component comprises an in-loop filtering method applied to the first component. In some embodiments, the representative samples are based on reconstructed samples prior to applying the in-loop filtering method. In some embodiments, the representative samples are based on a function of reconstructed samples prior to applying the in-loop filtering method. In some embodiments, the function comprises a downsample filtering function. In some embodiments, the function comprises a  linear function or a non-linear function. In some embodiments, the in-loop filtering method comprises a deblocking filtering method, a sample adaptive offset (SAO) method, an adaptive loop filtering method, or a cross-component adaptive loop filtering method.
FIG. 24 is a flowchart representation of a method 2400 for video processing in accordance with the present technology. The method 2400 includes, at operation 2410, determining, for a conversion between a video block of a first component of a video and a bitstream representation of the video, a coding mode of a multiple cross-component coding tool. The coding mode is determined from multiple modes available for coding the video block. The multiple modes have different parameters for determining prediction values of samples of the video block using representative samples from at least one of a second component, a third component, or a neighboring block of the video block. The method 2400 also includes, at operation 2420, performing the conversion based on the determining.
In some embodiments, one mode of the multiple modes specifies that only neighboring samples from a row that is above or right-above the first component are used for the prediction values of the samples of the first component. In some embodiments, one mode of the multiple modes specifies that only neighboring samples from a column that is to the left or left-below the first component are used for the prediction values of the samples of the first component. In some embodiments, one mode of the multiple modes specifies that multiple linear models are applicable to the video block. In some embodiments, the samples of the first component and reconstructed values of samples of the neighboring block are grouped into multiple categories, and different linear models are applicable to different categories of samples. In some embodiments, one mode of the multiple modes specifies that a downsampling filter comprises a subsample filter. In some embodiments, one mode of the multiple modes specifies that K prediction values of the samples depend on L representative samples, K and L being integers.
FIG. 25 is a flowchart representation of a method 2500 for video processing in accordance with the present technology. The method 2500 includes, at operation 2510, performing a conversion between a video block of a video and a bitstream representation of the video. The video block is coded using a multiple cross-component prediction mode from multiple prediction modes of a prediction from multiple cross-components (PMC) coding tool. The multiple cross-component prediction mode is signaled in the bitstream representation as an intra prediction mode or an inter prediction mode.
In some embodiments, signaling of the multiple modes in the bitstream representation is based on a characteristic of the video block. In some embodiments, the characteristic comprises a coding mode of the video block. In some embodiments, the multiple modes are signaled in the bitstream representation in case the video block is coded using a specified coding mode. In some embodiments, the characteristic comprises a color format of the video block. In some embodiments, the multiple modes are signaled in the bitstream representation in case the color format of the video block is 4:0:0. In some embodiments, the characteristic comprises a coded block flag or a prediction mode of the second component or the third component. In some embodiments, at least one of the multiple modes is signaled in the bitstream representation in case the coded block flag of the second component or the third component is 1 or 0, and/or the prediction mode of the second component or the third component comprises a cross-component prediction (CCP) mode. In some embodiments, one of the multiple modes is determined to be 0 in case the coded block flag of the second component or the third component is 0 and the prediction mode of the second component or the third component is not a cross-component prediction (CCP) mode.
In some embodiments, signaling that one of the multiple modes is enabled is included in the bitstream representation in addition to signaling of the one or more prediction modes. In some embodiments, the bitstream representation further includes an index indicating one of the multiple modes after the signaling that one of the multiple modes is enabled. In some embodiments, usage of a DM mode, usage of a cross-component prediction (CCP) mode, and usage of one of the multiple modes of the PMC coding tool are organized in a particular order. In some embodiments, the particular order is based on coded information of the neighboring block. In some embodiments, in case the neighboring block is coded using the PMC coding tool, the usage of the PMC coding tool is signaled before the usage of the DM mode and the usage of the CCP mode. In some embodiments, in case the neighboring block is coded using the DM mode, the usage of the DM mode is signaled before the usage of the CCP mode or the usage of the PMC coding tool.
In some embodiments, the multiple modes for the PMC coding tool are signaled as a part of information for one or more cross-component prediction (CCP) modes for a CCP coding tool. In some embodiments, signaling of the multiple modes for the PMC coding tool is based on usage of the CCP modes. In some embodiments, in case the CCP coding tool is enabled for the video block, an index is included in the bitstream representation to indicate one of the one or more CCP modes applicable to the video block. In some embodiments, the one or more CCP modes are classified into multiple categories, and the bitstream representation  includes a category index indicating a corresponding category. In some embodiments, the index and the category index are organized in an order.
In some embodiments, one of the multiple modes for the PMC coding tool is treated as an intra prediction mode. In some embodiments, different modes for the PMC coding tool are assigned different indices and coded with different binary bin strings. In some embodiments, the signaling of the multiple modes in the bitstream representation is bypass coded without any context. In some embodiments, the signaling of the multiple modes in the bitstream representation is context coded using one or more contexts derived based on information of the video block or the neighboring block. In some embodiments, the one or more contexts are derived based on a coding mode or availability of the neighboring block. In some embodiments, the one or more contexts are derived based on a dimension of the video block.
FIG. 26 is a flowchart representation of a method 2600 for video processing in accordance with the present technology. The method 2600 includes, at operation 2610, determining residual information of a video unit for a conversion between a video block of a video and a bitstream representation of the video in case a prediction from multiple cross-components (PMC) coding tool is enabled for a first component. The method 2600 also includes, at operation 2620, performing the conversion based on the determining.
In some embodiments, the residual information of the video unit is indicated in the bitstream representation. In some embodiments, the residual information of the video unit is omitted from the bitstream representation in case only zero coefficients are available. In some embodiments, the bitstream representation includes a flag indicating whether there is a non-zero coefficient in the video unit. In some embodiments, the flag is inferred to be equal to 1 in case the flag is omitted from the bitstream representation. In some embodiments, a manner of signaling the flag is based on usage of the PMC coding tool.
In some embodiments, using the PMC coding tool, prediction values of samples of a first component are determined using representative samples from at least a second component or a third component, and the bitstream representation includes a flag indicating whether there is a non-zero coefficient in the second component or the third component. In some embodiments, the flag is inferred to be equal to 1 and omitted from the bitstream representation. In some embodiments, a manner of signaling the flag is based on usage of the PMC coding tool.
FIG. 27 is a flowchart representation of a method 2700 for video processing in accordance with the present technology. The method includes, at operation 2710, determining, for a conversion between a video block of a video and a bitstream representation of the video, whether usage of a cross-component prediction (CCP) coding tool is signaled in the bitstream representation based on availability of neighboring samples of the video block. The neighboring samples are adjacent or non-adjacent to the video block. The method also includes, at operation 2720, performing the conversion based on the determining.
In some embodiments, the CCP coding tool relies on neighboring samples above the video block, and signaling of the usage of the CCP coding tool is omitted in case the neighboring samples above the video block are not available. In some embodiments, the CCP coding tool relies on neighboring samples to the left of the video block, and signaling of the usage of the CCP coding tool is omitted in case the neighboring samples to the left of the video block are not available. In some embodiments, the CCP coding tool relies on neighboring samples above and to the left of the video block, and signaling of the usage of the CCP coding tool is omitted in case the neighboring samples above and to the left of the video block are not available. In some embodiments, the CCP coding tool relies on neighboring samples on both sides of the video block, and signaling of the usage of the CCP coding tool is omitted in case the neighboring samples on only one side of the video block are available. In some embodiments, the CCP coding tool relies on neighboring samples of the video block, and signaling of the usage of the CCP coding tool is omitted in case the neighboring samples on a left or right side of the video block are unavailable. In some embodiments, the CCP coding tool is considered disabled in case the signaling of the usage of the CCP coding tool is omitted.
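The availability-based gating of the CCP usage flag can be sketched as below; which sides a particular CCP mode relies on is passed in as input, since that dependency is mode-specific in the description.

```cpp
// Hypothetical gate for writing the CCP usage flag. If a side the tool
// relies on is unavailable, the flag is omitted from the bitstream and
// CCP is treated as disabled for the block.
bool ccpUsageIsSignaled(bool reliesOnAbove, bool reliesOnLeft,
                        bool aboveAvailable, bool leftAvailable) {
    if (reliesOnAbove && !aboveAvailable) return false;  // omit flag
    if (reliesOnLeft && !leftAvailable)   return false;  // omit flag
    return true;  // flag is present in the bitstream
}
```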
In some embodiments, usage of any of the above methods is indicated in the bitstream representation in a video processing unit. In some embodiments, the video processing unit comprises a sequence, a picture, a slice, a tile, a brick, a subpicture, a coding tree unit row, a coding tree unit, a virtual pipeline data unit, a coding unit, a prediction unit, a transform unit, or a sub-block in a coding unit or prediction unit. In some embodiments, the usage is included in a sequence parameter set, a video parameter set, a picture parameter set, a picture header, a slice header, a tile group header, a group of coding tree units, or a coding tree unit.
In some embodiments, usage of the method is based on information about the video. In some embodiments, the information about the video comprises a dimension of the video block, a number of samples in the video block, a position of a video block relative to a video processing unit, a slice type, a picture type, or a partitioning type. In some embodiments, the method is disabled in case the number of samples in the video block is greater than or equal  to a threshold. In some embodiments, the threshold is 1024 or 4096. In some embodiments, the method is disabled in case the number of samples in the video block is smaller than or equal to a threshold. In some embodiments, the threshold is 4, 8, or 16. In some embodiments, the method is disabled in case the dimension of the video block is greater than or equal to a threshold. In some embodiments, the threshold is 32 or 64. In some embodiments, the method is disabled in case the dimension of the video block is smaller than or equal to a threshold. In some embodiments, the threshold is 2 or 4. In some embodiments, signaling of usage of the method is omitted in case the method is disabled.
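Combining the example thresholds above into a single gate gives the following sketch; note that the description presents these bounds as alternative embodiments, so treating them as simultaneous conditions is an illustrative assumption.

```cpp
// Hypothetical enable test using the example thresholds from the text:
// 4096 / 16 for the sample-count bounds and 64 / 2 for the dimension bounds.
bool toolEnabled(int width, int height) {
    const int samples = width * height;
    if (samples >= 4096 || samples <= 16) return false;  // sample-count bounds
    if (width >= 64 || height >= 64)      return false;  // upper dimension bound
    if (width <= 2 || height <= 2)        return false;  // lower dimension bound
    return true;  // otherwise the tool may be used (and its usage signaled)
}
```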
In some embodiments, performing the conversion includes encoding the video block into the bitstream representation. In some embodiments, performing the conversion includes decoding the video block from the bitstream representation.
Some embodiments of the disclosed technology include making a decision or determination to enable a video processing tool or mode. In an example, when the video processing tool or mode is enabled, the encoder will use or implement the tool or mode in the processing of a block of video, but may not necessarily modify the resulting bitstream based on the usage of the tool or mode. That is, a conversion from the block of video to the bitstream representation of the video will use the video processing tool or mode when it is enabled based on the decision or determination. In another example, when the video processing tool or mode is enabled, the decoder will process the bitstream with the knowledge that the bitstream has been modified based on the video processing tool or mode. That is, a conversion from the bitstream representation of the video to the block of video will be performed using the video processing tool or mode that was enabled based on the decision or determination.
Some embodiments of the disclosed technology include making a decision or determination to disable a video processing tool or mode. In an example, when the video processing tool or mode is disabled, the encoder will not use the tool or mode in the conversion of the block of video to the bitstream representation of the video. In another example, when the video processing tool or mode is disabled, the decoder will process the bitstream with the knowledge that the bitstream has not been modified using the video processing tool or mode that was disabled based on the decision or determination.
From the foregoing, it will be appreciated that specific embodiments of the presently disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the presently disclosed technology is not limited except as by the appended  claims.
Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, e.g., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) . A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic  circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
It is intended that the specification, together with the drawings, be considered exemplary only, where exemplary means an example. As used herein, the use of “or” is intended to include “and/or” , unless the context clearly indicates otherwise.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments  described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims (119)

  1. A method for video processing, comprising:
    determining, for a conversion between a video block of a first component of a video and a bitstream representation of the video, prediction values of samples of the video block using representative samples outside of the video block, wherein the representative samples are determined during the conversion; and
    performing the conversion based on the determining.
  2. The method of claim 1, wherein the representative samples comprise samples from at least one of: a video block of a second component, a video block of a third component, or a neighboring block of the video block, the neighboring block being adjacent or non-adjacent to the video block.
  3. The method of claim 1 or 2, wherein the determining is based on reconstructed values of the representative samples or prediction values of the representative samples.
  4. The method of any of claims 1 to 3, wherein the determining is based on applying a linear function to the representative samples.
  5. The method of any of claims 1 to 3, wherein the determining is based on applying a non-linear function to the representative samples.
  6. The method of any of claims 1 to 5, wherein a final prediction value of a sample in the first component C0 is denoted as FPred_c0, and FPred_c0 = X × TPred_c0 + Y × (Rec_c2 - FPred_c2) + Z, wherein TPred_c0 represents a temporary prediction value of the sample, wherein Rec_c2 and FPred_c2 represent a reconstruction value and a final prediction value of a representative sample in the third component C2, and wherein X and Y represent weighting factors and Z represents an offset value, X, Y, and Z being real numbers.
  7. The method of any of claims 1 to 5, wherein a final prediction value of a sample in the first component C0 is denoted as FPred_c0, and FPred_c0 = X × (α_c0 × Rec_c1 + β_c0) + Y × (Rec_c2 - (α_c2 × Rec_c1 + β_c2)) + Z, wherein Rec_c1 represents a reconstruction value of a representative sample in the second component C1 and Rec_c2 represents a reconstruction value of a representative sample in the third component C2, wherein X and Y represent weighting factors and Z represents an offset value, wherein α_c0 and β_c0 are linear model parameters for the first component, and wherein α_c2 and β_c2 are linear model parameters for the third component, X, Y, Z, α_c0, β_c0, α_c2, and β_c2 being real numbers.
  8. The method of any of claims 1 to 5, wherein a final prediction value of a sample in the first component C0 is denoted as FPred_c0, and FPred_c0 = (X × α_c0 - Y × α_c2) × Rec_c1 + (X × β_c0 - Y × β_c2) + Y × Rec_c2 + Z, wherein Rec_c1 represents a reconstruction value of a representative sample in the second component C1 and Rec_c2 represents a reconstruction value of a representative sample in the third component C2, wherein X and Y represent weighting factors and Z represents an offset value, and wherein α_c0 and β_c0 are linear model parameters for the first component, and α_c2 and β_c2 are linear model parameters for the third component, X, Y, Z, α_c0, β_c0, α_c2, and β_c2 being real numbers.
  9. The method of any of claims 1 to 8, wherein the first component has a size of K’×L’ and the second component has a size of K×L, wherein two temporary blocks having the size of K×L are derived according to linear model parameters (X × α_c0, X × β_c0) and (Y × α_c2, Y × β_c2), and the two temporary blocks are downsampled to the size of K’×L’.
  10. The method of claim 9, wherein the two temporary blocks are downsampled with or without clipping.
  11. The method of any of claims 1 to 8, wherein the first component has a size of K’×L’ and the second component has a size of K×L, wherein one temporary block having the size of K×L is derived according to linear model parameters (X × α_c0 - Y × α_c2, X × β_c0 - Y × β_c2) and the temporary block is downsampled to the size of K’×L’.
  12. The method of claim 11, wherein the temporary block is downsampled with or without clipping.
  13. The method of claim 11 or 12, wherein the prediction values of the samples of the video block are determined by adding or subtracting collocated samples in the temporary block with or without performing downsampling.
  14. The method of any of claims 1 to 8, wherein the first component has a size of K’×L’ and the second component has a size of K×L, wherein one temporary block is derived based on downsampling the second component to the size of K’×L’, and wherein the prediction values of the samples of the video block are determined by applying linear model parameters (X × α_c0, X × β_c0) and (Y × α_c2, Y × β_c2) to the temporary block and by adding or subtracting collocated samples in the temporary block.
  15. The method of any of claims 7 to 14, wherein α_c0 and β_c0 are derived using neighboring samples of the first component and neighboring samples of the second component.
  16. The method of any of claims 7 to 14, wherein α_c2 and β_c2 are derived using neighboring samples of the third component and neighboring samples of the second component.
  17. The method of any of claims 7 to 16, wherein the linear model parameters are derived in a same manner as parameters for a cross-component linear model (CCLM) prediction mode or a Two-Step Cross-component Prediction Mode.
  18. The method of claim 17, wherein the linear model parameters are derived based on reconstructed values of the neighboring samples of the second component without performing downsampling.
  19. The method of claim 17, wherein the linear model parameters are derived based on reconstructed values of the neighboring samples of the first component or the third component with upsampling applied.
  20. The method of any of claims 17 to 19, wherein the linear model parameters are clipped within a range prior to being used by the CCP coding tool.
  21. The method of any of claims 1 to 20, wherein at least one of X, Y, or Z is equal to 1.
  22. The method of any of claims 1 to 20, wherein at least one of X, Y, or Z is equal to 0.
  23. The method of any of claims 1 to 20, wherein X is equal to 1, Y is equal to -1, and Z is equal to 0.
  24. The method of any of claims 1 to 20, wherein at least one of X, Y, or Z is equal to 2^K or -2^K, wherein K is an integer value in a range of [-M, N], where M and N are no smaller than 0.
  25. The method of any of claims 1 to 24, wherein variables including at least X, Y, Z, α_c0, β_c0, α_c2, or β_c2 are indicated in the bitstream representation.
  26. The method of any of claims 1 to 24, wherein variables including at least X, Y, Z, α_c0, β_c0, α_c2, or β_c2 are derived on the fly.
  27. The method of claim 25 or 26, wherein at least one of the variables has a same value for all samples within a video unit, the video unit comprising a coding block, a prediction block, or a transform block.
  28. The method of any of claims 1 to 27, wherein the linear model parameters have multiple sets of values.
  29. The method of claim 28, wherein different samples within a video unit use different sets of values.
  30. The method of any of claims 2 to 29, wherein the first component is a Cb color component, the second component is a Y component, and the third component is a Cr component.
  31. The method of any of claims 2 to 29, wherein the first component is a Cr color component, the second component is a Y component, and the third component is a Cb component.
  32. The method of any of claims 2 to 29, wherein the first component is a luma component, and the second and third components are chroma components.
  33. The method of any of claims 1 to 32, wherein the final prediction values are clipped within a predefined range.
  34. The method of any of claims 1 to 33, wherein the representative samples are selected during the conversion based on a characteristic of the first component.
  35. The method of claim 34, wherein the characteristic of the first component comprises a position of a sample of the first component or a color format of the first component.
  36. The method of any of claims 34 or 35, wherein the representative samples are located surrounding a current sample of the first component in case the color format is 4:2:0.
  37. The method of any of claims 34 to 36, wherein the current sample is located at position (x, y) , and wherein one representative sample is located at (2x, 2y) .
  38. The method of any of claims 34 to 36, wherein the current sample is located at position (x, y) , and wherein one representative sample is located at (2x+1, 2y) .
  39. The method of any of claims 34 to 36, wherein the current sample is located at position (x, y) , and wherein one representative sample is located at (2x+1, 2y+1) .
  40. The method of any of claims 34 to 36, wherein the current sample is located at position (x, y) , and wherein one representative sample is located at (2x, 2y+1) .
  41. The method of any of claims 34 to 36, wherein the current sample is located at position (x, y) , and wherein two representative samples are located at (2x, 2y) and (2x, 2y+1) .
  42. The method of any of claims 34 to 36, wherein the current sample is located at position (x, y) , and wherein two representative samples are located at (2x, 2y) and (2x+1, 2y) .
  43. The method of any of claims 34 to 36, wherein the current sample is located at position (x, y) , and wherein six representative samples are located at (2x-1, 2y) , (2x, 2y) , (2x+1, 2y) , (2x-1, 2y+1) , (2x, 2y+1) , and (2x+1, 2y+1) .
  44. The method of any of claims 34 to 36, wherein the current sample is located at position (x, y) , and wherein eight representative samples are located at (2x, 2y-1) , (2x-1, 2y) , (2x, 2y) , (2x+1, 2y) , (2x-1, 2y+1) , (2x, 2y+1) , (2x+1, 2y+1) , and (2x, 2y+2) .
  45. The method of any of claims 34 or 35, wherein a representative sample is located at a same location as a current sample of the first component.
  46. The method of claim 34, wherein the characteristic of the first component comprises an in-loop filtering method applied to the first component.
  47. The method of claim 46, wherein the representative samples are based on reconstructed samples prior to applying the in-loop filtering method.
  48. The method of claim 46 or 47, wherein the representative samples are based on a function of reconstructed samples prior to applying the in-loop filtering method.
  49. The method of claim 48, wherein the function comprises a downsample filtering function.
  50. The method of claim 48, wherein the function comprises a linear function or a non-linear function.
  51. The method of any of claims 46 to 50, wherein the in-loop filtering method comprises a deblocking filtering method, a sample adaptive offset (SAO) method, an adaptive loop filtering method, or a cross-component adaptive loop filtering method.
  52. A method of video processing, comprising:
    determining, for a conversion between a video block of a first component of a video and a bitstream representation of the video, a coding mode of a multiple cross-component coding tool; and
    performing the conversion based on the determining;
    wherein the coding mode is determined from multiple modes available for coding the video block, the multiple modes having different parameters for determining prediction values of samples of the video block using representative samples from at least one of a second component, a third component, or a neighboring block of the video block.
  53. The method of claim 52, wherein one mode of the multiple modes specifies that only neighboring samples from a row that is above or right-above the first component are used for the prediction values of the samples of the first component.
  54. The method of claim 52 or 53, wherein one mode of the multiple modes specifies that only neighboring samples from a column that is to the left or left-below the first component are used for the prediction values of the samples of the first component.
  55. The method of any of claims 52 to 54, wherein one mode of the multiple modes specifies multiple linear models are applicable to the video block.
  56. The method of claim 55, wherein the samples of the first component and reconstructed values of samples of the neighboring block are grouped into multiple categories, and wherein different linear models are applicable to different categories of samples.
  57. The method of any of claims 52 to 56, wherein one mode of the multiple modes specifies that a downsampling filter comprises a subsample filter.
  58. The method of any of claims 52 to 57, wherein one mode of the multiple modes specifies that K prediction values of the samples depend on L representative samples, K and L being integers.
  59. A method of video processing, comprising:
    performing a conversion between a video block of a video and a bitstream representation of the video;
    wherein the video block is coded using a multiple cross-component prediction mode from multiple prediction modes of a prediction from multiple cross-component (PMC) coding tool,
    wherein the multiple cross-component prediction mode is signaled in the bitstream representation as an intra prediction mode or an inter prediction mode.
  60. The method of claim 59, wherein signaling of the multiple modes in the bitstream representation is based on a characteristic of the video block.
  61. The method of claim 60, wherein the characteristic comprises a coding mode of the video block.
  62. The method of claim 61, wherein the multiple modes are signaled in the bitstream representation in case the video block is coded using a specified coding mode.
  63. The method of claim 60, wherein the characteristic comprises a color format of the video block.
  64. The method of claim 63, wherein the multiple modes are signaled in the bitstream representation in case the color format of the video block is 4:0:0.
  65. The method of claim 60, wherein the characteristic comprises a code block flag or a prediction mode of the second component or the third component.
  66. The method of claim 65, wherein at least one of the multiple modes is signaled in the bitstream representation in case the code block flag of the second component or the third component is 1 or 0, and/or the prediction mode of the second component or the third component comprises a cross-component prediction (CCP) mode.
  67. The method of claim 65, wherein one of the multiple modes is determined to be 0 in case the code block flag of the second component or the third component is 0 and the prediction  mode of the second component or the third component is not a cross-component prediction (CCP) mode.
  68. The method of any of claims 59 to 67, wherein signaling that one of the multiple modes is enabled is included in the bitstream representation in addition to signaling of the one or more prediction modes.
  69. The method of claim 68, wherein the bitstream representation further includes an index indicating one of the multiple modes after the signaling that one of the multiple modes is enabled.
  70. The method of claim 68 or 69, wherein usage of a DM mode, usage of a cross-component prediction (CCP) mode, and usage of one of the multiple modes of the PMC coding tool are organized in a particular order.
  71. The method of claim 70, wherein the particular order is based on coded information of the neighboring block.
  72. The method of claim 71, wherein, in case the neighboring block is coded using the PMC coding tool, the usage of the PMC coding tool is signaled before the usage of the DM mode and the usage of the CCP mode.
  73. The method of claim 71, wherein, in case the neighboring block is coded using the DM mode, the usage of the DM mode is signaled before the usage of the CCP mode or the usage of the PMC coding tool.
  74. The method of any of claims 59 to 73, wherein the multiple modes for the PMC coding tool are signaled as a part of information for one or more cross-component prediction (CCP) modes for a CCP coding tool.
  75. The method of claim 74, wherein signaling of the multiple modes for the PMC coding tool is based on usage of the CCP modes.
  76. The method of claim 74 or 75, wherein, in case the CCP coding tool is enabled for the video block, an index is included in the bitstream representation to indicate one of the one or more CCP modes applicable to the video block.
  77. The method of any of claims 74 to 76, wherein the one or more CCP modes are classified into multiple categories, and wherein the bitstream representation includes a category index indicating a corresponding category.
  78. The method of claim 77, wherein the index and the category index are organized in an order.
  79. The method of any of claims 59 to 78, wherein one of the multiple modes for the PMC coding tool is treated as an intra prediction mode.
  80. The method of claim 79, wherein different modes for the PMC coding tool are assigned different indices and coded with different binary bin strings.
  81. The method of any of claims 59 to 80, wherein the signaling of the multiple modes in the bitstream representation is bypass coded without any context.
  82. The method of any of claims 59 to 81, wherein the signaling of the multiple modes in the bitstream representation is context coded using one or more contexts derived based on information of the video block or the neighboring block.
  83. The method of claim 82, wherein the one or more contexts are derived based on a coding mode or availability of the neighboring block.
  84. The method of claim 82, wherein the one or more contexts are derived based on a dimension of the video block.
  85. A method for video processing, comprising:
    determining residual information of a video unit for a conversion between a video block of a video and a bitstream representation of the video in case a prediction from multiple cross-component (PMC) coding tool is enabled for a first component; and
    performing the conversion based on the determining.
  86. The method of claim 85, wherein the residual information of the video unit is indicated in the bitstream representation.
  87. The method of claim 85, wherein the residual information of the video unit is omitted from the bitstream representation in case only zero coefficients are available.
  88. The method of claim 87, wherein the bitstream representation includes a flag indicating whether there is a non-zero coefficient in the video unit.
  89. The method of claim 88, wherein the flag is inferred to be equal to 1 in case the flag is omitted from the bitstream representation.
  90. The method of claim 88 or 89, wherein a manner of signaling the flag is based on usage of the PMC coding tool.
  91. The method of any of claims 85 to 90, wherein, using the PMC coding tool, prediction values of samples of a first component are determined using representative samples from at least a second component or a third component, and wherein the bitstream representation includes a flag indicating whether there is a non-zero coefficient in the second component or the third component.
  92. The method of claim 91, wherein the flag is inferred to be equal to 1 and omitted from the bitstream representation.
  93. The method of claim 91, wherein a manner of signaling the flag is based on usage of the PMC coding tool.
  94. A method for video processing, comprising:
    determining, for a conversion between a video block of a video and a bitstream representation of the video, whether usage of a cross-component prediction (CCP) coding tool is signaled in the bitstream representation, based on availability of neighboring samples of the video block, the neighboring samples being adjacent or non-adjacent to the video block; and
    performing the conversion based on the determining.
  95. The method of claim 94, wherein the CCP coding tool relies on neighboring samples above the video block, and wherein signaling of the usage of the CCP coding tool is omitted in case the neighboring samples above the video block are not available.
  96. The method of claim 94, wherein the CCP coding tool relies on neighboring samples to the left of the video block, and wherein signaling of the usage of the CCP coding tool is omitted in case the neighboring samples to the left of the video block are not available.
  97. The method of claim 94, wherein the CCP coding tool relies on neighboring samples above and to the left of the video block, and wherein signaling of the usage of the CCP coding tool is omitted in case the neighboring samples above and to the left of the video block are not available.
  98. The method of claim 94, wherein the CCP coding tool relies on neighboring samples on both sides of the video block, and wherein signaling of the usage of the CCP coding tool is omitted in case the neighboring samples on only one side of the video block are available.
  99. The method of claim 94, wherein the CCP coding tool relies on neighboring samples of the video block, and wherein signaling of the usage of the CCP coding tool is omitted in case the neighboring samples on a left or right side of the video block are unavailable.
  100. The method of any of claims 94 to 99, wherein the CCP coding tool is considered disabled in case the signaling of the usage of the CCP coding tool is omitted.
  101. The method of any of claims 1 to 100, wherein usage of the method is indicated in the bitstream representation in a video processing unit.
  102. The method of claim 101, wherein the video processing unit comprises a sequence, a picture, a slice, a tile, a brick, a subpicture, a coding tree unit row, a coding tree unit, a virtual pipeline data unit, a coding unit, a prediction unit, a transform unit, or a sub-block in a coding unit or prediction unit.
  103. The method of claim 101, wherein the usage is included in a sequence parameter set, a video parameter set, a picture parameter set, a picture header, a slice header, a tile group header, a group of coding tree units, or a coding tree unit.
  104. The method of any of claims 1 to 103, wherein usage of the method is based on information about the video.
  105. The method of claim 104, wherein the information about the video comprises a dimension of the video block, a number of samples in the video block, a position of a video block relative to a video processing unit, a slice type, a picture type, or a partitioning type.
  106. The method of claim 104 or 105, wherein the method is disabled in case the number of samples in the video block is greater than or equal to a threshold.
  107. The method of claim 106, wherein the threshold is 1024 or 4096.
  108. The method of claim 104 or 105, wherein the method is disabled in case the number of samples in the video block is smaller than or equal to a threshold.
  109. The method of claim 108, wherein the threshold is 4, 8, or 16.
  110. The method of claim 104 or 105, wherein the method is disabled in case the dimension of the video block is greater than or equal to a threshold.
  111. The method of claim 110, wherein the threshold is 32 or 64.
  112. The method of claim 104 or 105, wherein the method is disabled in case the dimension of the video block is smaller than or equal to a threshold.
  113. The method of claim 112, wherein the threshold is 2 or 4.
  114. The method of any of claims 1 to 113, wherein signaling of usage of the method is omitted in case the method is disabled.
  115. The method of any one or more of claims 1 to 114, wherein performing the conversion includes encoding the video block into the bitstream representation.
  116. The method of any one or more of claims 1 to 114, wherein performing the conversion includes decoding the video block from the bitstream representation.
  117. An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of claims 1 to 116.
  118. A computer program product stored on a non-transitory computer readable media, the computer program product including program code for carrying out the method in any one of claims 1 to 116.
  119. A computer-readable medium having a bitstream representation of a video stored thereupon, the bitstream representation being generated according to a method recited in any one or more of claims 1 to 116.
PCT/CN2020/133786 2019-12-04 2020-12-04 Prediction from multiple cross-components WO2021110116A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202080084122.7A CN115004697A (en) 2019-12-04 2020-12-04 Prediction from multiple cross-components

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019122946 2019-12-04
CNPCT/CN2019/122946 2019-12-04

Publications (1)

Publication Number Publication Date
WO2021110116A1 true WO2021110116A1 (en) 2021-06-10

Family

ID=76222468

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/133786 WO2021110116A1 (en) 2019-12-04 2020-12-04 Prediction from multiple cross-components

Country Status (2)

Country Link
CN (1) CN115004697A (en)
WO (1) WO2021110116A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105723706A (en) * 2013-10-18 2016-06-29 Ge视频压缩有限责任公司 Multi-component picture or video coding concept
WO2018053293A1 (en) * 2016-09-15 2018-03-22 Qualcomm Incorporated Linear model chroma intra prediction for video coding
WO2019054200A1 (en) * 2017-09-15 2019-03-21 ソニー株式会社 Image processing device and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
G. VAN DER AUWERA (QUALCOMM), J. HEO (LGE), A. FILIPPOV (HUAWEI): "Description of Core Experiment 3 (CE3): Intra Prediction and Mode Coding", 11. JVET MEETING; 20180711 - 20180718; LJUBLJANA; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 18 July 2018 (2018-07-18), XP030200049 *
TAO ZHANG; XIAOPENG FAN; DEBIN ZHAO; WEN GAO: "Improving chroma intra prediction for HEVC", 2016 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), IEEE, 11 July 2016 (2016-07-11), pages 1 - 6, XP032970872, DOI: 10.1109/ICMEW.2016.7574735 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023116706A1 (en) * 2021-12-21 2023-06-29 Mediatek Inc. Method and apparatus for cross component linear model with multiple hypotheses intra modes in video coding system
WO2023219290A1 (en) * 2022-05-13 2023-11-16 현대자동차주식회사 Method and apparatus for encoding intra prediction mode for each chroma component
WO2024010832A1 (en) * 2022-07-05 2024-01-11 Beijing Dajia Internet Information Technology Co., Ltd. Methods and apparatus on chroma motion compensation using adaptive cross-component filtering

Also Published As

Publication number Publication date
CN115004697A (en) 2022-09-02

Similar Documents

Publication Publication Date Title
US11570458B2 (en) Indication of two-step cross-component prediction mode
CN114586370B (en) Method, apparatus and medium for using chroma quantization parameters in video encoding and decoding
WO2021083259A1 (en) Signaling of cross-component adaptive loop filter
KR20210145757A (en) Matrix Derivation in Intra Coding Mode
WO2021110116A1 (en) Prediction from multiple cross-components
WO2020228717A1 (en) Block dimension settings of transform skip mode
CN114402601B (en) Method and device for shifting quantization parameter of chroma deblocking filter
US11785260B2 (en) Cross-component adaptive loop filtering in video coding
US20220312020A1 (en) Cross-component prediction using multiple components
CN113826405B (en) Use of transform quantization bypass modes for multiple color components
CN113853787B (en) Using transform skip mode based on sub-blocks
CN114503597B (en) Chroma deblocking method in video coding and decoding
WO2022194197A1 (en) Separate Tree Coding Restrictions

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20896832

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20896832

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12-10-2022)
