WO2024149247A1 - Methods and apparatus of region-wise cross-component model merge mode for video coding - Google Patents

Methods and apparatus of region-wise cross-component model merge mode for video coding Download PDF

Info

Publication number
WO2024149247A1
Authority
WO
WIPO (PCT)
Prior art keywords
candidate
component
cross
subblocks
block
Prior art date
Application number
PCT/CN2024/071360
Other languages
French (fr)
Inventor
Chia-Ming Tsai
Hsin-Yi Tseng
Cheng-Yen Chuang
Chen-Yen LAI
Yu-Ling Hsiao
Chih-Wei Hsu
Yi-Wen Chen
Ching-Yeh Chen
Tzu-Der Chuang
Original Assignee
Mediatek Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mediatek Inc. filed Critical Mediatek Inc.
Publication of WO2024149247A1 publication Critical patent/WO2024149247A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding
    • H04N19/52Processing of motion vectors by encoding by predictive encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present invention is a non-Provisional Application of and claims priority to U.S. Provisional Patent Application No. 63/479,003, filed on January 9, 2023.
  • the U.S. Provisional Patent Application is hereby incorporated by reference in its entirety.
  • The present invention relates to video coding systems.
  • In particular, the present invention relates to a region-wise cross-component merge mode or multiple history tables in a video coding system.
  • VVC Versatile video coding
  • JVET Joint Video Experts Team
  • MPEG ISO/IEC Moving Picture Experts Group
  • ISO/IEC 23090-3:2021, Information technology - Coded representation of immersive media - Part 3: Versatile video coding, published Feb. 2021.
  • VVC is developed based on its predecessor HEVC (High Efficiency Video Coding) by adding more coding tools to improve coding efficiency and also to handle various types of video sources including 3-dimensional (3D) video signals.
  • HEVC High Efficiency Video Coding
  • Fig. 1A illustrates an exemplary adaptive Inter/Intra video encoding system incorporating loop processing.
  • Intra Prediction 110 the prediction data is derived based on previously coded video data in the current picture.
  • Motion Estimation (ME) is performed at the encoder side and Motion Compensation (MC) is performed based on the result of ME to provide prediction data derived from other picture (s) and motion data.
  • Switch 114 selects Intra Prediction 110 or Inter-Prediction 112 and the selected prediction data is supplied to Adder 116 to form prediction errors, also called residues.
  • the prediction error is then processed by Transform (T) 118 followed by Quantization (Q) 120.
  • T Transform
  • Q Quantization
  • the transformed and quantized residues are then coded by Entropy Encoder 122 to be included in a video bitstream corresponding to the compressed video data.
  • the bitstream associated with the transform coefficients is then packed with side information such as motion and coding modes associated with Intra prediction and Inter prediction, and other information such as parameters associated with loop filters applied to underlying image area.
  • the side information associated with Intra Prediction 110, Inter prediction 112 and in-loop filter 130, are provided to Entropy Encoder 122 as shown in Fig. 1A. When an Inter-prediction mode is used, a reference picture or pictures have to be reconstructed at the encoder end as well.
  • the transformed and quantized residues are processed by Inverse Quantization (IQ) 124 and Inverse Transformation (IT) 126 to recover the residues.
  • the residues are then added back to prediction data 136 at Reconstruction (REC) 128 to reconstruct video data.
  • the reconstructed video data may be stored in Reference Picture Buffer 134 and used for prediction of other frames.
  • incoming video data undergoes a series of processing in the encoding system.
  • the reconstructed video data from REC 128 may be subject to various impairments due to a series of processing.
  • in-loop filter 130 is often applied to the reconstructed video data before the reconstructed video data are stored in the Reference Picture Buffer 134 in order to improve video quality.
  • deblocking filter (DF) may be used.
  • SAO Sample Adaptive Offset
  • ALF Adaptive Loop Filter
  • the loop filter information may need to be incorporated in the bitstream so that a decoder can properly recover the required information. Therefore, loop filter information is also provided to Entropy Encoder 122 for incorporation into the bitstream.
  • DF deblocking filter
  • Loop filter 130 is applied to the reconstructed video before the reconstructed samples are stored in the reference picture buffer 134.
  • the system in Fig. 1A is intended to illustrate an exemplary structure of a typical video encoder. It may correspond to the High Efficiency Video Coding (HEVC) system, VP8, VP9, H.264 or VVC.
  • HEVC High Efficiency Video Coding
  • the decoder can use similar or a portion of the same functional blocks as the encoder, except for Transform 118 and Quantization 120, since the decoder only needs Inverse Quantization 124 and Inverse Transform 126.
  • the decoder uses an Entropy Decoder 140 to decode the video bitstream into quantized transform coefficients and needed coding information (e.g. ILPF information, Intra prediction information and Inter prediction information) .
  • the Intra prediction 150 at the decoder side does not need to perform the mode search. Instead, the decoder only needs to generate Intra prediction according to Intra prediction information received from the Entropy Decoder 140.
  • the decoder only needs to perform motion compensation (MC 152) according to Inter prediction information received from the Entropy Decoder 140 without the need for motion estimation.
  • an input picture is partitioned into non-overlapped square block regions referred to as CTUs (Coding Tree Units), similar to HEVC.
  • CTUs Coding Tree Units
  • Each CTU can be partitioned into one or multiple smaller size coding units (CUs) .
  • the resulting CU partitions can be in square or rectangular shapes.
  • VVC divides a CTU into prediction units (PUs) as a unit to apply prediction process, such as Inter prediction, Intra prediction, etc.
  • the VVC standard incorporates various new coding tools to further improve the coding efficiency over the HEVC standard. Some new tools relevant to the present invention are reviewed as follows.
  • methods and apparatus to use a shared buffer to store coding information for multiple coding tools, including the Cross-Component Model (CCM) mode, are disclosed to improve the performance.
  • methods and apparatus to use innovative default cross-component candidates are disclosed to improve the coding performance.
  • a method and apparatus for video coding using coding tools including one or more cross component models related modes are disclosed.
  • input data associated with a current block comprising a first-colour block and a second-colour block are received, wherein the input data comprise pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side.
  • the current block is partitioned into two or more subblocks.
  • a target cross-component model for at least one of said two or more subblocks is derived, wherein the target cross-component model is inherited from a spatial, a temporal or a historical neighbouring block or position.
  • a candidate list comprising the target cross-component model is derived.
  • Said at least one of said two or more subblocks is encoded or decoded using information comprising the candidate list, wherein when a target cross-component candidate corresponding to the target cross-component model is selected for said at least one of said two or more subblocks, a predictor is generated for a second-colour subblock of said at least one of said two or more subblocks by applying the target cross-component model to a first-colour subblock of said at least one of said two or more subblocks.
  • the candidate list is used for a cross-component merge mode.
  • at least another subblock of said two or more subblocks is coded by an inter or intra coding tool.
  • the current block is partitioned into said two or more subblocks symmetrically or asymmetrically.
  • a candidate index for the historical neighbouring block or position is explicitly signalled or parsed. In another embodiment, the candidate index for the historical neighbouring block or position is implicitly indicated.
  • when the current block is partitioned into two subblocks and both of the two subblocks are coded using a cross-component merge mode, the first two target cross-component candidates in the candidate list correspond to two candidate indexes of the two subblocks.
  • the two candidate indexes are assigned to the two subblocks implicitly.
  • a first index of the two candidate indexes is signalled or parsed explicitly and a second index of the two candidate indexes is signalled or parsed depending on the first index.
  • when the current block is partitioned into two subblocks and both of the two subblocks are coded based on a cross-component merge mode, a first target cross-component candidate in the candidate list is used by both of the two subblocks.
  • said at least one of said two or more subblocks corresponds to a first subblock of said two or more subblocks, and the target cross-component model is implicitly derived from a stored cross-component model at a corresponding top-left position of the current block in previously coded slices/pictures.
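As a concrete illustration of the region-wise merge described above, the following Python sketch applies two inherited CCLM-style linear models to the two subblocks of a symmetrically split block. The model form follows the CCLM prediction (scale, shift, offset); the parameter values, split geometry and block sizes are hypothetical, not taken from the disclosure.

```python
import numpy as np

def apply_inherited_model(luma_subblock, alpha, beta, shift, bit_depth=10):
    """Apply an inherited cross-component linear model to a (downsampled)
    luma subblock to predict the corresponding chroma subblock.
    Model form follows CCLM: pred_C = ((alpha * rec_L') >> shift) + beta."""
    pred = ((alpha * luma_subblock.astype(np.int64)) >> shift) + beta
    return np.clip(pred, 0, (1 << bit_depth) - 1)

# Hypothetical example: a block split vertically into two subblocks, each
# using a different inherited model (e.g. from a spatial and a temporal
# neighbouring position).
luma = np.full((4, 8), 512, dtype=np.int64)      # downsampled luma block
left, right = luma[:, :4], luma[:, 4:]           # symmetric vertical split
pred_left = apply_inherited_model(left, alpha=32, beta=8, shift=5)
pred_right = apply_inherited_model(right, alpha=16, beta=100, shift=5)
pred_chroma = np.concatenate([pred_left, pred_right], axis=1)
```

Each subblock can thus select its own candidate from the candidate list while sharing one reconstructed luma block.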
  • the current picture is partitioned into multiple regions and multiple history tables associated with the multiple regions are determined, wherein at least two history tables of the multiple history tables are kept for two different regions.
  • a candidate list comprising one or more cross-component historical candidates is derived from the multiple history tables.
  • the current block is encoded or decoded using information comprising the candidate list, wherein when a target cross-component historical candidate from the candidate list is selected for the current block, a predictor is generated for the second-colour block by applying a target cross-component model associated with the target cross-component historical candidate to the first-colour block.
  • one of the multiple history tables is kept for each of the multiple regions.
  • an initial history table is generated for a current region based on a target history table associated with a target region of the multiple history tables in previously coded pictures.
  • an index is signalled or parsed to indicate the target region in the previously coded pictures.
  • the target region in the previously coded pictures is determined implicitly.
  • a current history table for a current region is generated using at least two of multiple history tables.
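The multiple-history-table scheme above can be sketched as follows. The FIFO table size, the region-to-index mapping and the duplicate-pruning rule are illustrative assumptions, not the normative design.

```python
from collections import deque

class RegionHistoryTables:
    """Per-region history tables for cross-component models: each region
    (e.g. a CTU row) keeps its own FIFO table of recently used models."""
    def __init__(self, num_regions, max_entries=6):
        self.tables = [deque(maxlen=max_entries) for _ in range(num_regions)]

    def add(self, region_idx, model):
        table = self.tables[region_idx]
        if model in table:          # pruning: avoid duplicate entries
            table.remove(model)
        table.append(model)         # most recent model at the tail

    def candidates(self, region_idx):
        # Most recent first when building the candidate list.
        return list(reversed(self.tables[region_idx]))

hist = RegionHistoryTables(num_regions=2)
hist.add(0, ("CCLM", 32, 8))
hist.add(0, ("CCCM", 17, -4))
hist.add(1, ("MMLM", 40, 0))
```

An initial table for a new region could likewise be copied from a selected region's table in a previously coded picture, or formed by merging two such tables.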
  • Fig. 1A illustrates an exemplary adaptive Inter/Intra video encoding system incorporating loop processing.
  • Fig. 1B illustrates a corresponding decoder for the encoder in Fig. 1A.
  • Fig. 2 shows an example of the location of the left and above samples and the sample of the current block involved in the LM_LA mode.
  • Fig. 3 shows an example of classifying the neighbouring samples into two groups.
  • Fig. 4 illustrates an example of spatial part of the convolutional filter.
  • Fig. 5 illustrates an example of reference area with paddings used to derive the filter coefficients.
  • Fig. 6 illustrates the 16 gradient patterns for Gradient Linear Model (GLM) .
  • Fig. 7 illustrates the neighbouring blocks used for deriving spatial merge candidates for VVC.
  • Fig. 8 illustrates the possible candidate pairs considered for redundancy check in VVC.
  • Fig. 9 illustrates an example of temporal candidate derivation, where a scaled motion vector is derived according to POC (Picture Order Count) distances.
  • Fig. 10 illustrates the position for the temporal candidate selected between candidates C0 and C1.
  • Fig. 11 illustrates an exemplary pattern of the non-adjacent spatial merge candidates.
  • Fig. 12 illustrates an example of inheriting temporal neighbouring model parameters.
  • Figs. 13A-B illustrate two search patterns for inheriting non-adjacent spatial neighbouring models.
  • Fig. 14 illustrates an example of multiple history tables for storing cross-component models, where each grid represents a CTU.
  • Figs. 15A-B illustrate examples for constructing the history table of the current region from the history table of the region having the same beginning geometric position of the current region (Fig. 15A) or from the history table of the region containing the centre geometric position of the current region (Fig. 15B) .
  • Fig. 16 illustrates a flowchart of an exemplary video coding system that uses region-wise cross-component merge mode according to an embodiment of the present invention.
  • Fig. 17 illustrates a flowchart of an exemplary video coding system that uses multiple history tables to derive a cross-component candidate list according to an embodiment of the present invention.
  • CCLM cross-component linear model
  • LM mode cross-component linear model
  • pred_C(i, j) represents the predicted chroma samples in a CU and rec_L′(i, j) represents the downsampled reconstructed luma samples of the same CU.
  • the CCLM parameters (α and β) are derived with at most four neighbouring chroma samples and their corresponding down-sampled luma samples. Suppose the current chroma block dimensions are W×H; then W′ and H′ are set as W′ = W, H′ = H when the LM_LA mode is applied, W′ = W + H when the LM_A mode is applied, and H′ = H + W when the LM_L mode is applied.
  • the four neighbouring luma samples at the selected positions are down-sampled and compared four times to find the two larger values, x0A and x1A, and the two smaller values, x0B and x1B.
  • Their corresponding chroma sample values are denoted as y0A, y1A, y0B and y1B.
  • Fig. 2 shows an example of the location of the left and above samples and the sample of the current block involved in the LM_LA mode.
  • Fig. 2 shows the relative sample locations of the N×N chroma block 210, the corresponding 2N×2N luma block 220 and their neighbouring samples (shown as filled circles).
  • the division operation to calculate parameter α is implemented with a look-up table.
  • the diff value is the difference between the maximum and minimum values.
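The min/max derivation described above can be sketched as follows; a plain floating-point division stands in for the specification's look-up-table implementation, and the helper name is illustrative.

```python
def derive_cclm_params(luma4, chroma4):
    """Sketch of CCLM (alpha, beta) derivation from four neighbouring
    luma/chroma sample pairs: average the two larger and the two smaller
    luma samples (and their chroma counterparts), then fit a line."""
    pairs = sorted(zip(luma4, chroma4))                 # sort by luma value
    (x0B, y0B), (x1B, y1B), (x0A, y0A), (x1A, y1A) = pairs
    xA, yA = (x0A + x1A + 1) >> 1, (y0A + y1A + 1) >> 1  # two larger values
    xB, yB = (x0B + x1B + 1) >> 1, (y0B + y1B + 1) >> 1  # two smaller values
    diff = xA - xB
    alpha = (yA - yB) / diff if diff else 0.0
    beta = yB - alpha * xB
    return alpha, beta
```

The prediction then follows pred_C(i, j) = α·rec_L′(i, j) + β for every chroma sample of the CU.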
  • besides the LM_LA mode, 2 additional LM modes (LM_A and LM_L) are supported.
  • in LM_A mode, only the above template is used to calculate the linear model coefficients. To get more samples, the above template is extended to (W+H) samples. In LM_L mode, only the left template is used to calculate the linear model coefficients. To get more samples, the left template is extended to (H+W) samples.
  • LM_LA mode left and above templates are used to calculate the linear model coefficients.
  • two types of down-sampling filter are applied to luma samples to achieve a 2-to-1 down-sampling ratio in both the horizontal and vertical directions.
  • the selection of down-sampling filter is specified by a SPS level flag.
  • the two down-sampling filters are as follows, which correspond to “type-0” and “type-2” content, respectively.
  • rec_L′(i, j) = [rec_L(2i-1, 2j-1) + 2·rec_L(2i, 2j-1) + rec_L(2i+1, 2j-1) + rec_L(2i-1, 2j) + 2·rec_L(2i, 2j) + rec_L(2i+1, 2j) + 4] >> 3 (6)
  • rec_L′(i, j) = [rec_L(2i, 2j-1) + rec_L(2i-1, 2j) + 4·rec_L(2i, 2j) + rec_L(2i+1, 2j) + rec_L(2i, 2j+1) + 4] >> 3 (7)
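The two down-sampling filters in equations (6) and (7) can be sketched directly; the array-indexing convention (first index along the i direction) is an assumption for illustration.

```python
import numpy as np

def downsample_type0(rec_l, i, j):
    """Type-0 6-tap down-sampling filter, per equation (6)."""
    return (rec_l[2*i - 1, 2*j - 1] + 2 * rec_l[2*i, 2*j - 1]
            + rec_l[2*i + 1, 2*j - 1] + rec_l[2*i - 1, 2*j]
            + 2 * rec_l[2*i, 2*j] + rec_l[2*i + 1, 2*j] + 4) >> 3

def downsample_type2(rec_l, i, j):
    """Type-2 5-tap (cross-shaped) down-sampling filter, per equation (7)."""
    return (rec_l[2*i, 2*j - 1] + rec_l[2*i - 1, 2*j] + 4 * rec_l[2*i, 2*j]
            + rec_l[2*i + 1, 2*j] + rec_l[2*i, 2*j + 1] + 4) >> 3
```

Both filters are normalised (weights sum to 8 with a rounding offset of 4), so a flat luma area is preserved exactly.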
  • This parameter computation is performed as part of the decoding process, not just as an encoder search operation. As a result, no syntax is used to convey the α and β values to the decoder.
  • for chroma intra mode coding, a total of 8 intra modes are allowed. Those modes include five traditional intra modes and three cross-component linear model modes (i.e., {LM_LA, LM_L, LM_A}, or {CCLM_LT, CCLM_L, CCLM_T}).
  • the terms {LM_LA, LM_L, LM_A} and {CCLM_LT, CCLM_L, CCLM_T} are used interchangeably in this disclosure.
  • Chroma mode coding directly depends on the intra prediction mode of the corresponding luma block.
  • one chroma block may correspond to multiple luma blocks. Therefore, for Chroma DM mode, the intra prediction mode of the corresponding luma block covering the centre position of the current chroma block is directly inherited.
  • MMLM Multiple Model CCLM
  • MMLM multiple model CCLM mode
  • the samples of the current luma block are also classified based on the same rule for the classification of neighbouring luma samples.
  • Three MMLM model modes (MMLM_LT, MMLM_T, and MMLM_L) are allowed for choosing the neighbouring samples from left-side and above-side, above-side only, and left-side only, respectively.
  • the MMLM uses two models according to the sample level of the neighbouring samples.
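A simplified sketch of the MMLM idea described above: samples are split into two groups by a threshold and each group uses its own linear model. The average-luma threshold matches the classification rule mentioned later in this disclosure; the model values are hypothetical.

```python
def mmlm_classify_and_predict(luma_samples, models):
    """Split luma samples into two groups by a threshold (here the average
    luma value) and predict chroma with each group's own linear model
    (alpha, beta). A sketch of MMLM, not the normative derivation."""
    threshold = sum(luma_samples) / len(luma_samples)
    preds = []
    for l in luma_samples:
        a, b = models[0] if l <= threshold else models[1]
        preds.append(a * l + b)
    return threshold, preds
```

The same classification rule is applied to the current luma block's samples, so each chroma sample is predicted by the model of its luma sample's group.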
  • CCCM Convolutional Cross-Component Model
  • a convolutional model is applied to improve the chroma prediction performance.
  • the convolutional model has a 7-tap filter consisting of a 5-tap plus-sign-shaped spatial component, a nonlinear term and a bias term.
  • the input to the spatial 5-tap component of the filter consists of a centre (C) luma sample which is collocated with the chroma sample to be predicted and its above/north (N) , below/south (S) , left/west (W) and right/east (E) neighbours as shown in Fig. 4.
  • the bias term (denoted as B) represents a scalar offset between the input and output (similarly to the offset term in CCLM) and is set to the middle chroma value (512 for 10-bit content) .
  • the filter coefficients c_i are calculated by minimising the MSE between the predicted and reconstructed chroma samples in the reference area.
  • Fig. 5 illustrates an example of the reference area, which consists of 6 lines of chroma samples above and left of the PU. The reference area extends one PU width to the right and one PU height below the PU boundaries, and is adjusted to include only available samples.
  • Multi-model CCCM mode can be selected for PUs which have at least 128 reference samples available.
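The 7-tap CCCM predictor described above can be sketched as follows. The exact form of the nonlinear term is an assumption for illustration; the spatial inputs and the bias term follow the description.

```python
def cccm_predict(c_coefs, C, N, S, E, W, bit_depth=10):
    """Sketch of the CCCM 7-tap convolutional predictor: a 5-tap plus-sign
    spatial part (centre C and its N/S/E/W luma neighbours), a nonlinear
    term P derived from the centre sample (formula assumed here), and a
    bias term B set to the middle chroma value (512 for 10-bit content)."""
    mid = 1 << (bit_depth - 1)            # 512 for 10-bit content
    P = (C * C + mid) >> bit_depth        # assumed nonlinear term
    B = mid                               # bias term
    inputs = [C, N, S, E, W, P, B]
    return sum(c * x for c, x in zip(c_coefs, inputs))
```

The coefficients c_coefs would come from the MSE minimisation over the reference area; here they are simply supplied by the caller.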
  • the GLM utilizes luma sample gradients to derive the linear model. Specifically, when the GLM is applied, the input to the CCLM process, i.e., the down-sampled luma samples L, are replaced by luma sample gradients G. The other parts of the CCLM (e.g., parameter derivation, prediction sample linear transform) are kept unchanged.
  • C = α·G + β
  • when the CCLM mode is enabled for the current CU, two flags are signalled separately for the Cb and Cr components to indicate whether GLM is enabled for each component. If the GLM is enabled for one component, one syntax element is further signalled to select one of 16 gradient filters (610-640 in Fig. 6) for the gradient calculation.
  • the GLM can be combined with the existing CCLM by signalling one extra flag in bitstream. When such combination is applied, the filter coefficients that are used to derive the input luma samples of the linear model are calculated as the combination of the selected gradient filter of the GLM and the down-sampling filter of the CCLM.
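A sketch of the GLM input substitution described above: a gradient filter is applied to the reconstructed luma samples and the result G replaces the down-sampled luma in the linear model C = α·G + β. The 2×3 filter window and the specific gradient pattern are illustrative assumptions, not one of the 16 normative patterns.

```python
import numpy as np

def glm_gradient(rec_l, x, y, gfilter):
    """Compute the luma gradient G at (x, y) by applying a small gradient
    filter to a window of reconstructed luma samples; G then feeds the
    CCLM parameter derivation in place of the down-sampled luma."""
    window = rec_l[y:y + 2, x:x + 3].astype(int)   # 2x3 luma window
    return int((window * gfilter).sum())

# Hypothetical horizontal gradient pattern
gf = np.array([[1, 0, -1],
               [1, 0, -1]])
rec = np.tile(np.arange(6), (4, 1))   # luma ramp: constant gradient
g = glm_gradient(rec, 1, 1, gf)
```

Since the rest of the CCLM pipeline is unchanged, only this input stage differs between CCLM and GLM.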
  • the derivation of spatial merge candidates in VVC is the same as that in HEVC except that the positions of first two merge candidates are swapped.
  • a maximum of four merge candidates (B0, A0, B1 and A1) for the current CU 710 are selected among candidates located in the positions depicted in Fig. 7.
  • the order of derivation is B0, A0, B1, A1 and B2.
  • position B2 is considered only when one or more neighbouring CUs at positions B0, A0, B1 and A1 are not available (e.g. belonging to another slice or tile) or are intra coded.
  • a scaled motion vector is derived based on the co-located CU 920 belonging to the collocated reference picture as shown in Fig. 9.
  • the reference picture list and the reference index to be used for the derivation of the co-located CU is explicitly signalled in the slice header.
  • the scaled motion vector 930 for the temporal merge candidate is obtained as illustrated by the dotted line in Fig. 9.
  • tb is defined to be the POC difference between the reference picture of the current picture and the current picture
  • td is defined to be the POC difference between the reference picture of the co-located picture and the co-located picture.
  • the reference picture index of temporal merge candidate is set equal to zero.
  • the position for the temporal candidate is selected between candidates C0 and C1, as depicted in Fig. 10. If the CU at position C0 is not available, is intra coded, or is outside of the current row of CTUs, position C1 is used. Otherwise, position C0 is used in the derivation of the temporal merge candidate.
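The POC-based scaling of the temporal merge candidate can be sketched as follows; the rounding here is plain integer rounding, not the specification's exact fixed-point arithmetic.

```python
def scale_temporal_mv(mv_col, poc_cur, poc_cur_ref, poc_col, poc_col_ref):
    """Scale the co-located CU's motion vector by the ratio of POC
    distances: tb is the distance between the current picture and its
    reference picture, td the distance between the co-located picture
    and its reference picture."""
    tb = poc_cur - poc_cur_ref
    td = poc_col - poc_col_ref
    if td == 0:
        return mv_col
    return tuple(round(c * tb / td) for c in mv_col)
```

With tb = 2 and td = 4, a co-located motion vector is halved, matching the shorter temporal distance of the current picture.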
  • Non-Adjacent Motion Vector Prediction (NAMVP)
  • in JVET-L0399, a coding tool referred to as Non-Adjacent Motion Vector Prediction (NAMVP) was proposed (Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting: Macao, CN, 3–12 Oct. 2018, Document: JVET-L0399).
  • the non-adjacent spatial merge candidates are inserted after the TMVP (i.e., the temporal MVP) in the regular merge candidate list.
  • the pattern of spatial merge candidates is shown in Fig. 11.
  • the distances between non-adjacent spatial candidates and current coding block are based on the width and height of current coding block.
  • each small square corresponds to a NAMVP candidate and the candidates are ordered (as shown by the number inside the square) according to the distance.
  • the line buffer restriction is not applied. In other words, NAMVP candidates far away from a current block may have to be stored, which may require a large buffer.
  • the guided parameter set is used to refine the derived model parameters by a specified CCLM mode.
  • the guided parameter set is explicitly signalled in the bitstream. After deriving the model parameters, the guided parameter set is added to the derived model parameters to form the final model parameters.
  • the guided parameter set contains at least one of a differential scaling parameter (dA), a differential offset parameter (dB), and a differential shift parameter (dS).
  • dA differential scaling parameter
  • dB differential offset parameter
  • dS differential shift parameter
  • pred_C(i, j) = (((α′+dA) · rec_L′(i, j)) >> s) + β′.
  • pred_C(i, j) = ((α′ · rec_L′(i, j)) >> s) + (β′+dB).
  • pred_C(i, j) = ((α′ · rec_L′(i, j)) >> (s+dS)) + β′.
  • pred_C(i, j) = (((α′+dA) · rec_L′(i, j)) >> s) + (β′+dB).
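Applying a guided parameter set on top of derived CCLM parameters can be sketched in one function covering all the variants above; the parameter names mirror the disclosure (dA for scaling, dB for offset, dS for shift), while the sample values are hypothetical.

```python
def guided_cclm_predict(rec_l_val, alpha, beta, s, dA=0, dB=0, dS=0):
    """Refine derived CCLM parameters (alpha, beta, shift s) with a
    signalled guided parameter set and predict one chroma sample:
    pred_C = (((alpha + dA) * rec_L') >> (s + dS)) + (beta + dB)."""
    return (((alpha + dA) * rec_l_val) >> (s + dS)) + (beta + dB)
```

Setting any of dA, dB or dS to zero recovers the variants in which only a subset of the differential parameters is signalled.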
  • the guided parameter set can be signalled per colour component.
  • one guided parameter set is signalled for Cb component, and another guided parameter set is signalled for Cr component.
  • one guided parameter set can be signalled and shared among colour components.
  • the signalled dA and dB can be a positive or negative value.
  • signalling dA one bin is signalled to indicate the sign of dA.
  • signalling dB one bin is signalled to indicate the sign of dB.
  • the guided parameter set can also be signalled per model.
  • one guided parameter set is signalled for one model and another guided parameter set is signalled for another model.
  • one guided parameter set is signalled and shared among linear models.
  • only one guided parameter set is signalled for one selected model, and another model is not further refined by guided parameter set.
  • the final scaling parameter of the current block is inherited from the neighbouring blocks and further refined by dA (e.g., dA derivation or signalling can be similar or the same as the method in the previous “Guided parameter set for refining the cross-component model parameters” ) .
  • the offset parameter e.g., ⁇ in CCLM
  • the final scaling parameter is derived based on the inherited scaling parameter and the average value of neighbouring luma and chroma samples of the current block. For example, if the final scaling parameter is inherited from a selected neighbouring block, and the inherited scaling parameter is α′_nei, then the final scaling parameter is (α′_nei + dA).
  • the final scaling parameter is inherited from a historical list and further refined by dA.
  • the historical list records the most recent j entries of final scaling parameters from previous CCLM-coded blocks. Then, the final scaling parameter is inherited from one selected entry of the historical list, α′_list, and the final scaling parameter is (α′_list + dA).
  • the final scaling parameter is inherited from a historical list or the neighbouring blocks, but only the MSB (Most Significant Bit) part of the inherited scaling parameter is taken, and the LSB (Least Significant Bit) of the final scaling parameter is from dA.
  • the final scaling parameter is inherited from a historical list or the neighbouring blocks, but is not further refined by dA.
  • the filter coefficients (c_i) are inherited.
  • the offset parameter (e.g., c_6·B or c_6 in CCCM) can be re-derived based on the inherited parameters and the average value of the neighbouring corresponding-position luma and chroma samples of the current block.
  • only partial filter coefficients are inherited (e.g., only n out of 6 filter coefficients are inherited, where 1 ≤ n < 6); the remaining filter coefficients are re-derived using the neighbouring luma and chroma samples of the current block.
  • the current block shall also inherit the GLM gradient pattern of the candidate and apply it to the current luma reconstructed samples.
  • the classification threshold is also inherited to classify the neighbouring samples of the current block into multiple groups, and the inherited multiple cross-component model parameters are further assigned to each group.
  • the classification threshold is the average value of the neighbouring reconstructed luma samples, and the inherited multiple cross-component model parameters are further assigned to each group.
  • the offset parameter of each group is re-derived based on the inherited scaling parameter and the average value of neighbouring luma and chroma samples of each group of the current block.
  • the offset parameter (e.g., c_6·B or c_6 in CCCM) of each group is re-derived based on the inherited coefficient parameters and the neighbouring luma and chroma samples of each group of the current block.
  • inheriting model parameters may depend on the colour component.
  • Cb and Cr components may inherit model parameters or model derivation method from the same candidate or different candidates.
  • only one of colour components inherits model parameters, and the other colour component derives model parameters based on the inherited model derivation method (e.g., if the inherit candidate is coded by MMLM or CCCM, the current block also derives model parameters based on MMLM or CCCM using the current neighbouring reconstructed samples) .
  • only one of colour components inherits model parameters, and the other colour component derives its model parameters using the current neighbouring reconstructed samples.
  • Cb and Cr components can inherit model parameters or model derivation method from different candidates.
  • the inherited model of Cr can depend on the inherited model of Cb.
  • possible cases include but are not limited to: (1) if the inherited model of Cb is CCCM, the inherited model of Cr shall be CCCM; (2) if the inherited model of Cb is CCLM, the inherited model of Cr shall be CCLM; (3) if the inherited model of Cb is MMLM, the inherited model of Cr shall be MMLM; (4) if the inherited model of Cb is CCLM, the inherited model of Cr shall be CCLM or MMLM; (5) if the inherited model of Cb is MMLM, the inherited model of Cr shall be CCLM or MMLM; (6) if the inherited model of Cb is GLM, the inherited model of Cr shall be GLM.
  • the cross-component model (CCM) information of the current block is derived and stored for the later reconstruction process of neighbouring blocks that use inherited neighbouring model parameters.
  • the CCM information mentioned in this disclosure includes but is not limited to the prediction mode (e.g., CCLM, MMLM, CCCM), GLM pattern index, model parameters, or classification threshold.
  • the cross-component model parameters of the current block can be derived by using the current luma and chroma reconstruction or prediction samples. Later, if another block is predicted by using inherited neighbouring model parameters, it can inherit the model parameters from the current block.
  • the current block is coded by cross-component prediction
  • the cross-component model parameters of the current block are re-derived by using the current luma and chroma reconstruction or prediction samples.
  • the stored cross-component model can be CCCM, LM_LA (i.e., single model LM using both above and left neighbouring samples to derive model) , or MMLM_LA (multi-model LM using both above and left neighbouring samples to derive model) .
  • the cross-component model parameters of the current block are derived by using the current luma and chroma reconstruction or prediction samples.
  • the cross-component model parameters of the current block are re-derived by using the current luma and chroma reconstruction or prediction samples. Later, the re-derived model parameters are combined with the original cross-component models which are used in reconstructing the current block.
  • the inherited model parameters can be from a block that is an immediate neighbouring block.
  • the models from blocks at pre-defined positions are added into the candidate list in a pre-defined order.
  • the pre-defined positions can be the positions depicted in Fig. 7, and the pre-defined order can be B0, A0, B1, A1 and B2, or A0, B0, B1, A1 and B2.
  • the pre-defined positions include the position immediately above at (W >> 1) or ((W >> 1) - 1) if W is greater than or equal to TH, and the position immediately to the left at (H >> 1) or ((H >> 1) - 1) if H is greater than or equal to TH, where W and H are the width and height of the current block, and TH is a threshold value which can be 4, 8, 16, 32, or 64.
  • the maximum number of inherited models from spatial neighbours is smaller than the number of pre-defined positions. For example, if the pre-defined positions are as depicted in Fig. 7, there are 5 pre-defined positions. If the pre-defined order is B0, A0, B1, A1 and B2, and the maximum number of inherited models from spatial neighbours is 4, the model from B2 is added into the candidate list only when one of the preceding blocks is not available or is not coded in a cross-component mode.
  • the inherited model parameters can be from the block in the previous coded slices/pictures. For example, as shown in Fig. 12, the current block position is at (x, y) and the block size is w × h.
  • Δx and Δy are set to 0.
  • Δx and Δy are set to the horizontal and vertical motion vector of the current block.
  • Δx and Δy are set to the horizontal and vertical motion vectors in reference picture list 0.
  • Δx and Δy are set to the horizontal and vertical motion vectors in reference picture list 1.
  • the inherited model parameters can be from the block in the previous coded slices/pictures in the reference lists. For example, if the horizontal and vertical motion vector in reference picture list 0 is (Δx_L0, Δy_L0), the motion vector can be scaled to other reference pictures in reference lists 0 and 1. Suppose the motion vector is scaled to the i-th reference picture in reference list 0 as (Δx_L0,i0, Δy_L0,i0). The model can then be from the block in the i-th reference picture in reference list 0, and Δx and Δy are set to (Δx_L0,i0, Δy_L0,i0).
  • if the horizontal and vertical motion vector in reference picture list 0 is (Δx_L0, Δy_L0) and the motion vector is scaled to the i-th reference picture in reference list 1 as (Δx_L0,i1, Δy_L0,i1), the model can be from the block in the i-th reference picture in reference list 1, and Δx and Δy are set to (Δx_L0,i1, Δy_L0,i1).
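The temporal inheritance above can be sketched as locating a position in a previously coded picture from the current block position plus an offset (Δx, Δy), which is zero for the collocated case or comes from a (possibly scaled) motion vector. Using the block centre and clipping to the picture bounds are illustrative assumptions; the function name is hypothetical.

```python
def temporal_candidate_position(x, y, w, h, dx, dy, pic_w, pic_h):
    """Return the position in a previously coded picture from which
    cross-component model parameters may be inherited: block centre
    shifted by (dx, dy), clipped to the picture area."""
    cx = min(max(x + (w >> 1) + dx, 0), pic_w - 1)
    cy = min(max(y + (h >> 1) + dy, 0), pic_h - 1)
    return cx, cy
```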
  • the inherited model parameters can be from blocks that are spatial neighbouring blocks.
  • the models from blocks at pre-defined positions are added into the candidate list in a pre-defined order.
  • the pattern of the positions and order can be as the pattern depicted in Fig. 11, where the distance between each position is the width and height of the current coding block.
  • the distance between positions that are closer to the current coding block is smaller than that between positions further away from the current block.
  • the maximum number of inherited models from non-adjacent spatial neighbours is smaller than the number of pre-defined positions. For example, the pre-defined positions can be as depicted in Figs. 13A-B, where two search patterns are shown (pattern 1310 in Fig. 13A and pattern 1320 in Fig. 13B). If the maximum number of inherited models from non-adjacent spatial neighbours is N, search pattern 2 is used only when the number of available models from search pattern 1 is smaller than N.
  • the inherited model parameters can be from a cross-component model history table.
  • the cross-component models in the history table can be added into the candidate list according to a pre-defined order.
  • the adding order of historical candidates can be from the beginning of the table to the end of the table.
  • the adding order of historical candidates can be from a certain pre-defined position to the end of the table.
  • the adding order of historical candidates can be from the end of the table to the beginning of the table.
  • the adding order of historical candidates can be from a certain pre-defined position to the beginning of the table.
  • the adding order of historical candidates can be in an interleaved manner (e.g., the first added is from the beginning of the table, the second added candidate from the end of the table and so on) .
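The interleaved adding order described above can be sketched as follows (illustrative Python; `interleaved_order` is a hypothetical helper name, not part of the disclosure):

```python
def interleaved_order(table):
    """Add historical candidates alternately from the beginning and the
    end of a history table: first from the front, second from the back,
    and so on, until the table is exhausted."""
    out, lo, hi = [], 0, len(table) - 1
    from_front = True
    while lo <= hi:
        if from_front:
            out.append(table[lo])
            lo += 1
        else:
            out.append(table[hi])
            hi -= 1
        from_front = not from_front
    return out
```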
  • a single cross-component model history table can be maintained for storing the previous cross-component models, and the cross-component model history table can be reset at the start of the current picture, current slice, current tile, every M CTU rows or every N CTUs, where N and M can be any value greater than 0.
  • the cross-component model history table can be reset at the end of the current picture, current slice, current tile, current CTU row or current CTU.
  • multiple cross-component model history tables can be maintained for storing the previous cross-component model.
  • One picture can be divided into several regions, and for each region, a history table is kept.
  • the size of region is pre-defined, and it can be X by Y CTUs where X and Y can be any value greater than 0.
  • a total of N history tables is used here, denoted as history table 1 to history table N.
  • there can be another history table for storing all the previous cross-component models, which is denoted as history table 0 here.
  • the history table 0 will always be updated during the encoding/decoding process. When the end of the divided region is reached, the history table of this divided region will be updated by the history table 0.
  • Fig. 14 shows an example of multiple history tables for storing the cross-component models, where each grid represents a CTU and the size of region is 4 by 1 CTUs.
  • the number in each CTU indicates the history table where the model parameter associated with the CTU is stored.
  • the dark block 1410 indicates the current CTU.
  • the small square at the end of each region indicates where the table is updated.
  • one picture can be divided into multiple regions, and for each region, a history table is kept.
  • the history table 0 and one additional history table will be updated during the encoding/decoding process.
  • the additional history table can be determined by the current position. For example, if the current CU is located in the second region, the additional history table to be updated is history table 2.
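A minimal sketch of maintaining per-region history tables plus an always-updated history table 0, as described above. The table size, FIFO eviction, and the class/method names are assumptions for illustration, not part of the disclosure.

```python
class RegionHistoryTables:
    """N per-region history tables plus a global history table 0.
    Table 0 is updated on every model; when the end of a region is
    reached, that region's table is refreshed from table 0."""
    def __init__(self, num_regions, max_size=6):
        self.max_size = max_size
        self.tables = {r: [] for r in range(num_regions + 1)}  # key 0 is global

    def _push(self, table_id, model):
        t = self.tables[table_id]
        t.append(model)
        if len(t) > self.max_size:
            t.pop(0)  # FIFO eviction of the oldest model

    def update(self, region_id, model):
        self._push(0, model)          # history table 0 is always updated
        self._push(region_id, model)  # plus the table of the current region

    def end_of_region(self, region_id):
        self.tables[region_id] = list(self.tables[0])
```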
  • multiple history tables are used for different updated frequencies.
  • the first history table is updated every CU
  • the second history table is updated every two CUs
  • the third history table is updated every four CUs and so on.
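The different update frequencies above can be sketched as table k being updated every 2^k CUs (the power-of-two pattern follows the 1/2/4 example; the helper name and the CU-index convention are hypothetical):

```python
def tables_to_update(cu_index, num_tables):
    """Return the indices of the history tables updated for this CU:
    table 0 every CU, table 1 every two CUs, table 2 every four CUs, ..."""
    return [k for k in range(num_tables) if cu_index % (1 << k) == 0]
```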
  • multiple history tables are used for storing different types of cross-component models.
  • the first history table is used for storing single model
  • the second history table is used for storing multi-model.
  • the first history table is used for storing a gradient model
  • the second history table is used for storing a non-gradient model.
  • the second history table is used for storing a complicated model (e.g., CCCM) .
  • multiple history tables are used for different reconstructed luma intensities. For example, if the average of reconstructed luma samples in the current block is greater than a pre-defined threshold, the cross-component model will be stored in the first history table; otherwise, the cross-component model will be stored in the second history table.
  • multiple history tables are used for different reconstructed chroma intensities. For example, if the average of neighbouring reconstructed chroma samples in the current block is greater than a pre-defined threshold, the cross-component model will be stored in the first history table; otherwise, the cross-component model will be stored in the second history table.
  • when adding historical candidates from multiple history tables to the candidate list, the adding order can be from the beginning of a given table to the end of that table, and then the next history table is added in the same order or in a reversed order. In another embodiment, the adding order can be from the end of a given table to the beginning of that table, and then the next history table is added in the same order or in a reversed order. In another embodiment, the adding order can be from a certain pre-defined position of a given table to the end of that table, and then the next history table is added in the same order or in a reversed order.
  • the adding order can be from the certain pre-defined position of the certain table to the beginning of the certain table, and then add the next history table in the same order or in a reversed order.
  • the adding order of historical candidates can be in an interleaved manner in a certain history table (e.g., the first added candidate from the beginning of the certain history table, the second added candidate from the end of the certain history table and so on) , and then add the next history table in the same order or in a reversed order.
  • the adding order can be from the beginning of each history table to the end of each history table. In another embodiment, the adding order can be from the end of each history table to the beginning of each history table. In another embodiment, the adding order can be from a certain pre-defined position of each history table to the end of each history table. In another embodiment, the adding order can be from a certain pre-defined position of each history table to the beginning of each history table. In another embodiment, the adding order of historical candidates can be in an interleaved manner across the history tables (e.g., the first added candidates are from the beginning of all history tables, the second added candidates are from the end of all history tables, and so on).
  • multiple cross-component model history tables are used, but not all history tables will be used for creating the candidate list. Only history tables whose regions are close to the region of current block can be used to create the candidate list.
  • the range for selecting non-adjacent candidates can be reduced by using a smaller distance between each position of non-adjacent candidate.
  • the number of non-adjacent candidates can be reduced by measuring the distance from the left-top position of the current block to the candidate position, and then excluding the candidates whose distance is greater than a pre-defined threshold.
  • the number of non-adjacent candidates can be reduced by skipping the candidates that are not located in the same region.
  • the number of non-adjacent candidates can be reduced by skipping the candidates that are not located in the neighbouring regions.
  • the range of neighbouring regions is pre-defined, and it can be M by N regions, where M and N can be any value greater than 0.
  • the range for selecting non-adjacent candidates can be reduced by skipping the second search pattern.
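The distance-based pruning of non-adjacent candidates can be sketched as follows. The L1 distance metric is an assumption; the disclosure only requires a distance measure from the left-top position of the current block and a pre-defined threshold.

```python
def prune_non_adjacent(candidates, cur_x, cur_y, max_dist):
    """Keep only non-adjacent candidate positions whose distance from
    the top-left of the current block does not exceed the threshold."""
    return [(x, y) for (x, y) in candidates
            if abs(x - cur_x) + abs(y - cur_y) <= max_dist]
```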
  • one picture can be divided into multiple regions, and at least one history table is kept in each region.
  • for a region of the current picture, it can use or combine the history tables of one or multiple regions in the previously coded pictures as the initial history table.
  • a picture can implicitly or explicitly select the history table from one of N regions in the previous coded pictures as the initial history table.
  • the index of one of N regions can be signalled or implicitly derived from the corresponding region in the previous coded pictures.
  • one example is shown in Figs. 15A-B, where the current picture 1520 is a P/B coded picture and the previous picture 1510 is an Intra coded picture.
  • Each picture is divided into 4 regions as shown in 4 rectangular boxes.
  • the corresponding region in the previous coded pictures can be the region 1512 having the same beginning geometric position as the current region 1522 as shown in Fig. 15A or containing the centre geometric position of the current region 1522 as shown in Fig. 15B.
  • it can combine more than one history table in the previously coded regions/pictures to construct the history table of the current region (e.g., the method in the section entitled: Inheriting Candidates from the Candidates in the Candidate List of Neighbours).
  • a single cross-component model can be generated from a multiple cross-component model. For example, if a candidate is coded with multiple cross-component models (e.g., MMLM, or CCCM with multi-model), a single cross-component model can be generated by selecting the first or the second cross-component model in the multiple cross-component models.
  • the candidate list is constructed by adding candidates in a pre-defined order until the maximum candidate number is reached.
  • the candidates added may include all or some of the aforementioned candidates, but are not limited to the aforementioned candidates.
  • the candidate list may include spatial neighbouring candidates, temporal neighbouring candidate, historical candidates, non-adjacent neighbouring candidates, single model candidates generated based on other inherited models or combined model (as mentioned later in section entitled: Inheriting Multiple Cross-Component Models) .
  • the candidate list can include the same candidates as previous example, but the candidates are added into the list in a different order.
  • the default candidates include but are not limited to the candidates described below.
  • the average value of neighbouring luma samples can be calculated from all selected luma samples, taken as the luma DC mode value of the current luma CB, or taken as the average of the maximum and minimum luma samples.
  • the average value of neighbouring chroma samples can be calculated from all selected chroma samples, taken as the chroma DC mode value of the current chroma CB, or taken as the average of the maximum and minimum chroma samples.
  • the default candidates include but are not limited to the candidates described below.
  • the default candidates are α·G + β, where G is the luma sample gradient instead of the down-sampled luma samples L.
  • the 16 GLM filters described in the section, entitled: Gradient Linear Model (GLM) are applied.
  • the final scaling parameter α is from the set {0, 1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8}.
  • the offset parameter β can be 1 << (bit_depth - 1) or derived based on neighbouring luma and chroma samples.
  • a default candidate can be an earlier candidate with a delta scaling parameter refinement.
  • for example, if the scaling parameter of an earlier candidate is α, the scaling parameter of a default candidate is (α + Δα), where Δα can be from the set {1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8}.
  • the offset parameter of a default candidate will be derived by (α + Δα) and the average value of neighbouring luma and chroma samples of the current block.
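The delta-refined default candidate can be sketched as follows, writing α for the inherited scaling parameter and Δα for the refinement. Re-deriving the offset so the refined model passes through the average neighbouring (luma, chroma) point is one reading of the offset derivation above; the function name and floating-point arithmetic are illustrative.

```python
def refined_default_candidate(alpha, delta, neigh_luma, neigh_chroma):
    """Build a default candidate from an earlier candidate's scaling
    parameter: refined scale = alpha + delta, offset re-derived from the
    averages of the neighbouring luma and chroma samples."""
    luma_avg = sum(neigh_luma) / len(neigh_luma)
    chroma_avg = sum(neigh_chroma) / len(neigh_chroma)
    scale = alpha + delta
    offset = chroma_avg - scale * luma_avg
    return scale, offset
```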
  • a default candidate can be a shortcut to indicate a cross-component mode (i.e., using the current neighbouring luma/chroma reconstruction samples to derive cross-component models) rather than inheriting parameters from neighbours.
  • a default candidate can be CCLM_LA, CCLM_L, CCLM_A, MMLM_LA, MMLM_L, MMLM_A, single-model CCCM, multiple-model CCCM or cross-component model with a specified GLM pattern.
  • a default candidate can be a cross-component mode (i.e., using the current neighbouring luma/chroma reconstruction samples to derive cross-component models) rather than inheriting parameters from neighbours, and also with a scaling parameter update (Δα).
  • the scaling parameter of a default candidate is (α + Δα).
  • a default candidate can be CCLM_LA, CCLM_L, CCLM_A, MMLM_LA, MMLM_L, or MMLM_A.
  • Δα can be from the set {1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8}.
  • the offset parameter of a default candidate will be derived by (α + Δα) and the average value of neighbouring luma and chroma samples of the current block.
  • the Δα can be different for each colour component.
  • a default candidate can be an earlier candidate with partially selected model parameters. For example, if an earlier candidate has m parameters, it can choose k out of m parameters from the earlier candidate to be a default candidate, where 0 < k < m and m > 1.
  • a default candidate can be the first model of an earlier MMLM candidate (i.e., the model used when the sample value is less than or equal to a classification threshold).
  • a default candidate can be the second model of an earlier MMLM candidate (i.e., the model used when the sample value is greater than a classification threshold).
  • a default candidate can be the combination of the two models of an earlier MMLM candidate. For example, if the parameters of the two models of an earlier MMLM candidate are p_x,1 and p_x,2, the model parameters of a default candidate can be p_x = γ·p_x,1 + (1 - γ)·p_x,2, where γ is a weighting factor which can be predefined or implicitly derived based on the neighbouring template cost, and p_x,y is the x-th parameter of the y-th model.
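The combination of the two models of an earlier MMLM candidate with a weighting factor γ, i.e., p_x = γ·p_x,1 + (1 - γ)·p_x,2, can be sketched as below. Representing each model as a flat parameter list is an assumption for illustration.

```python
def combine_mmlm_models(model1, model2, gamma):
    """Blend the two linear models of an MMLM candidate into a single
    default candidate, parameter by parameter."""
    return [gamma * p1 + (1 - gamma) * p2 for p1, p2 in zip(model1, model2)]
```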
  • candidates are inserted into the list according to a pre-defined order.
  • the pre-defined order can be spatial adjacent candidates, temporal candidates, spatial non-adjacent candidates, historical candidates, and then default candidates.
  • the candidate models of non-LM coded blocks are included into the list after including candidate models of LM coded blocks.
  • the candidate models of non-LM coded blocks are included into the list before including default candidates.
  • the candidate models of non-LM coded blocks have lower priority to be included into the list than candidate models from LM coded blocks.
  • if the model of a candidate is similar to the existing models, the model will not be included in the candidate list. In one embodiment, it can compare the similarity of (α·lumaAvg + β) or α among existing candidates to decide whether to include the model of a candidate or not.
  • if the difference is smaller than a threshold, the model of the candidate is not included.
  • the threshold can be adaptive based on coding information (e.g., the current block size or area) .
  • when comparing the similarity, if a model from a candidate and the existing model both use CCCM, it can compare similarity by checking the value of (c0·C + c1·N + c2·S + c3·E + c4·W + c5·P + c6·B) to decide whether to include the model of the candidate or not.
  • if a candidate position points to a CU which is the same as that of one of the existing candidates, the model of the candidate is not included.
  • if the model of a candidate is similar to one of the existing candidate models, it can adjust the inherited model parameters so that the inherited model is different from the existing candidate models.
  • the inherited scaling parameter can add a predefined offset (e.g., 1>>S or - (1>>S) , where S is the shift parameter) so that the inherited parameter is different from the existing candidate models.
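The similarity-based pruning can be sketched by comparing the value (α·lumaAvg + β) of a new candidate against the existing candidates (illustrative only; for two CCCM models the seven-term filter output would be compared instead, and the function name is hypothetical):

```python
def is_redundant(cand, existing, luma_avg, thr):
    """Treat a linear candidate (alpha, beta) as redundant if its predicted
    value at the average luma level is within a threshold of an existing
    candidate's predicted value."""
    a, b = cand
    for (ea, eb) in existing:
        if abs((a * luma_avg + b) - (ea * luma_avg + eb)) < thr:
            return True
    return False
```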
  • a default candidate can be a shortcut to indicate a cross-component mode (i.e., using the current neighbouring luma/chroma reconstructed samples to derive cross-component models) rather than inheriting parameters from neighbours.
  • default candidate can be CCLM_LA, CCLM_L, CCLM_A, MMLM_LA, MMLM_L, MMLM_A, single model CCCM, multiple models CCCM or cross-component model with a specified GLM pattern.
  • a default candidate can be a cross-component mode (i.e., using the current neighbouring luma/chroma reconstructed samples to derive cross-component models) rather than inheriting parameters from neighbours, and also with a scaling parameter update (Δα).
  • the scaling parameter of a default candidate is (α + Δα).
  • default candidate can be CCLM_LA, CCLM_L, CCLM_A, MMLM_LA, MMLM_L, or MMLM_A.
  • Δα can be in the set {1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8}.
  • the offset parameter of a default candidate will be derived by (α + Δα) and the average value of neighbouring luma and chroma samples of the current block.
  • the Δα can be different for each colour component.
  • a default candidate can be an earlier candidate with partially selected model parameters. For example, if an earlier candidate has m parameters, it can choose k out of m parameters from the earlier candidate to be a default candidate, where 0 < k < m and m > 1.
  • a default candidate can be the first model of an earlier MMLM candidate (i.e., the model used when the sample value is less than or equal to the classification threshold).
  • a default candidate can be the second model of an earlier MMLM candidate (i.e., the model used when the sample value is greater than the classification threshold).
  • a default candidate can be the combination of the two models of an earlier MMLM candidate. For example, if the parameters of the two models of an earlier MMLM candidate are p_x,1 and p_x,2, the model parameters of a default candidate can be p_x = γ·p_x,1 + (1 - γ)·p_x,2, where γ is a weighting factor which can be predefined or implicitly derived by the neighbouring template cost, and p_x,y is the x-th parameter of the y-th model.
  • the candidates in the list can be reordered to reduce the syntax overhead when signalling the selected candidate index.
  • the reordering rules can depend on the coding information of neighbouring blocks or the model error. For example, if neighbouring above or left blocks are coded by MMLM, the MMLM candidates in the list can be moved to the head of the current list. Similarly, if neighbouring above or left blocks are coded by single model LM or CCCM, the single model LM or CCCM candidates in the list can be moved to the head of the current list. Similarly, if GLM is used by neighbouring above or left blocks, the GLM related candidates in the list can be moved to the head of the current list.
  • the reordering rule is based on the model error, obtained by applying the candidate model to the neighbouring templates of the current block and then comparing the resulting prediction with the reconstructed samples of the neighbouring template.
  • the candidates of different types are reordered separately before the candidates are added into the final candidate list.
  • the candidates are added into a primary candidate list with a pre-defined size N1.
  • the candidates in the primary list are reordered.
  • the N2 candidates with the smallest costs are then added into the final candidate list, where N2 ≤ N1.
  • the candidates are categorized into different types based on the source of the candidates, including but not limited to the spatial neighbouring models, temporal neighbouring models, non-adjacent spatial neighbouring models, and the historical candidates.
  • the candidates are categorized into different types based on the cross-component model mode.
  • the types can be CCLM, MMLM, CCCM, and CCCM multi-model.
  • the types can be GLM non-active or GLM active.
  • the redundancy of the candidate can be further checked.
  • a candidate is considered redundant if the template cost difference between it and its predecessor in the list is smaller than a threshold. If a candidate is considered redundant, it can be removed from the list, or it can be moved to the end of the list.
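A sketch of template-cost reordering combined with the redundancy handling above. Demoting (rather than removing) a redundant candidate is one of the two described options; the function name and data layout are illustrative assumptions.

```python
def reorder_by_template_cost(candidates, costs, redundancy_thr):
    """Sort candidates by ascending template cost; a candidate whose cost
    is within redundancy_thr of its predecessor in the sorted order is
    treated as redundant and moved to the end of the list."""
    order = sorted(range(len(candidates)), key=lambda i: costs[i])
    kept, demoted = [], []
    for rank, i in enumerate(order):
        if rank > 0 and costs[i] - costs[order[rank - 1]] < redundancy_thr:
            demoted.append(candidates[i])
        else:
            kept.append(candidates[i])
    return kept + demoted
```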
  • An on/off flag can be signalled to indicate if the current block inherits the cross-component model parameters from neighbouring blocks or not.
  • the flag can be signalled per CU/CB, per PU, per TU/TB, per colour component, or per chroma colour component.
  • a high level syntax can be signalled in SPS, PPS (Picture Parameter Set) , PH (Picture header) or SH (Slice Header) to indicate if the proposed method is allowed for the current sequence, picture, or slice.
  • the inherit candidate index is signalled.
  • the index can be signalled (e.g., signalled using truncated unary code, Exp-Golomb code, or fixed-length code) and shared between the current Cb and Cr blocks.
  • the index can be signalled per colour component.
  • one inherited index is signalled for Cb component, and another inherited index is signalled for Cr component.
  • it can use chroma intra prediction syntax (e.g., IntraPredModeC [xCb] [yCb] ) to store the inherited index.
  • the current chroma intra prediction mode (e.g., IntraPredModeC [xCb] [yCb] as defined in the VVC standard) can be set to a cross-component mode (e.g., CCLM_LA).
  • the candidate list is derived, and the inherited candidate model is then determined by the inherited candidate index.
  • the coding information of the current block is then updated according to the inherited candidate model.
  • the coding information of the current block includes but is not limited to the prediction mode (e.g., CCLM_LA or MMLM_LA), related sub-mode flags (e.g., CCCM mode flag), prediction pattern (e.g., GLM pattern index), and the current model parameters. Then, the prediction of the current block is generated according to the updated coding information.
  • a current block is partitioned into two or more prediction regions/sub-blocks, where each region can be predicted by an inter or intra coding tool. Furthermore, at least one of the prediction regions is coded by CC merge mode, where the cross-component model of the at least one region is inherited from a spatial, historical, or temporal neighbouring block/position.
  • the current block is partitioned by quad-tree, binary-tree, or ternary-tree split. The split can be symmetric or asymmetric split.
  • the current block is partitioned into two regions, where one of the two regions is predicted by an inter or intra coding tool, and the other is predicted by CC merge mode.
  • the inherited candidate index of the region predicted by CC merge mode can be explicitly or implicitly indicated. For example, it can explicitly signal the candidate index by the method in section entitled: Signalling the Inherited Candidate Index in the List. For another example, it can implicitly select the first candidate in the list as the candidate index.
  • the candidates in the list can be reordered by the method mentioned in the section entitled: Reordering the Candidates in the List.
  • if the current block is partitioned into two regions and both regions are predicted by CC merge mode, the first two candidates in the list can be the candidates of the two regions. It can implicitly set the candidate index of the first region (e.g., the region having the top-left sample of the current block) to the first candidate and set the candidate index of the second region to the second candidate.
  • the list can be reordered by the method mentioned in the section entitled: Reordering the Candidates in the List. For another example, if two regions are both predicted by CC merge mode, an index is explicitly signalled to indicate the candidate index of the first region, and the candidate index of the second region is the signalled index + k or the signalled index - k, where k can be 1, 2, 3, 4, or 5.
  • the candidate model of the first region is implicitly derived from the stored cross-component model at the top-left position of the current block relative to the top-left position in the previous coded slices/pictures as the method mentioned in the section entitled: Inheriting Temporal Neighbouring Model Parameters.
  • the first candidate in the list is the candidate index of the two regions.
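The implicit and derived candidate-index assignments for two CC-merge regions described above can be sketched as follows (the clamping of the derived second index and the function name are assumptions for illustration):

```python
def assign_region_candidates(reordered_list, signalled=None, k=1):
    """If no index is signalled, the first two candidates of the
    (reordered) list serve the two regions; otherwise the first region
    uses the signalled index and the second uses signalled + k."""
    if signalled is None:
        return reordered_list[0], reordered_list[1]
    second = min(signalled + k, len(reordered_list) - 1)  # clamp to list end
    return reordered_list[signalled], reordered_list[second]
```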
  • the candidate list with candidates restricted from one or more specific cross-component mode types as described above can be implemented in an encoder side or a decoder side.
  • any of the proposed candidate derivation methods can be implemented in an Intra/Inter coding module (e.g. Intra Pred. 150/MC 152 in Fig. 1B) in a decoder or an Intra/Inter coding module in an encoder (e.g. Intra Pred. 110/Inter Pred. 112 in Fig. 1A).
  • Any of the proposed shared buffer to store coding information among multiple coding tools including the CCM mode can also be implemented as a circuit coupled to the intra/inter coding module at the decoder or the encoder.
  • the decoder or encoder may also use an additional processing unit to implement the required cross-component prediction processing.
  • while the Intra Pred. units (e.g. units 110/112 in Fig. 1A and units 150/152 in Fig. 1B) are shown as individual processing units, they may correspond to executable software or firmware codes stored on a medium, such as a hard disk or flash memory, for a CPU (Central Processing Unit) or programmable devices (e.g. DSP (Digital Signal Processor) or FPGA (Field Programmable Gate Array)).
  • Fig. 16 illustrates a flowchart of an exemplary video coding system that uses region-wise cross-component merge mode according to an embodiment of the present invention.
  • the steps shown in the flowchart may be implemented as program codes executable on one or more processors (e.g., one or more CPUs) at the encoder side.
  • the steps shown in the flowchart may also be implemented based on hardware such as one or more electronic devices or processors arranged to perform the steps in the flowchart.
  • input data associated with a current block comprising a first-colour block and a second-colour block are received in step 1610, wherein the input data comprise pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side, wherein the current block is partitioned into two or more subblocks.
  • a target cross-component model for at least one of said two or more subblocks is derived in step 1620, wherein the target cross-component model is inherited from a spatial, a temporal or a historical neighbouring block or position.
  • a candidate list comprising the target cross-component model is derived in step 1630.
  • Said at least one of said two or more subblocks is encoded or decoded using information comprising the candidate list in step 1640, wherein when a target cross-component candidate corresponding to the target cross-component model is selected for said at least one of said two or more subblocks, a predictor is generated for a second-colour subblock of said at least one of said two or more subblocks by applying the target cross-component model to a first-colour subblock of said at least one of said two or more subblocks.
  • Fig. 17 illustrates a flowchart of an exemplary video coding system that uses multiple history tables to derive a cross-component candidate list according to an embodiment of the present invention.
  • input data associated with a current block comprising a first-colour block and a second-colour block are received in step 1710, wherein the input data comprise pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side and the current picture is partitioned into multiple regions.
  • Multiple history tables associated with the multiple regions are determined in step 1720, wherein at least two history tables of the multiple history tables are kept for two different regions.
  • a candidate list comprising one or more cross-component historical candidates is derived from the multiple history tables in step 1730.
  • the current block is encoded or decoded using information comprising the candidate list in step 1740, wherein when a target cross-component historical candidate from the candidate list is selected for the current block, a predictor is generated for the second-colour block by applying a target cross-component model associated with the target cross-component historical candidate to the first-colour block.
  • Embodiments of the present invention as described above may be implemented in various hardware, software codes, or a combination of both.
  • an embodiment of the present invention can be one or more circuits integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein.
  • An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein.
  • the invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA).
  • These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
  • the software code or firmware code may be developed in different programming languages and different formats or styles.
  • the software code may also be compiled for different target platforms.
  • different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
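The flow of Fig. 16 (steps 1610-1640) can be sketched in a highly simplified form: inherit candidate cross-component models for a subblock, build a candidate list, and apply the selected model to first-colour samples to predict the second-colour subblock. This is only an illustration under strong assumptions: a model is reduced to a scalar (alpha, beta) pair, a subblock to a list of sample values, and all function names are ours, not from the disclosure.

```python
# Illustrative sketch of region-wise cross-component merge (Fig. 16).

def apply_cc_model(model, first_colour_samples):
    """Step 1640: apply the selected cross-component model to generate
    the predictor for the second-colour subblock."""
    alpha, beta = model
    return [alpha * s + beta for s in first_colour_samples]

def region_wise_cc_merge(subblock_first_colour, inherited_models, selected_index):
    """Steps 1620-1640 for one subblock of the current block.

    inherited_models: models inherited from spatial, temporal or
    historical neighbouring blocks/positions (step 1620).
    """
    candidate_list = list(inherited_models)     # step 1630: build the list
    model = candidate_list[selected_index]      # target candidate selected
    return apply_cc_model(model, subblock_first_colour)
```

For example, with two inherited models and index 0 selected, each second-colour sample is predicted as 2·s + 1 from the corresponding first-colour sample s.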


Abstract

A method and apparatus for video coding using region-wise cross-component merge mode or multiple history tables. According to one method, a target cross-component model for a current subblock of two or more subblocks is derived by inheriting from a spatial, a temporal or a historical neighbouring block or position. A candidate list comprising the target cross-component model is derived. The current subblock is encoded or decoded using information comprising the candidate list. When a target cross-component candidate corresponding to the target cross-component model is selected for the current subblock, a predictor is generated for a current second-colour subblock by applying the target cross-component model to a current first-colour subblock. According to another method, the current picture is partitioned into multiple regions and multiple history tables associated with the multiple regions are determined. A candidate list comprising one or more cross-component historical candidates is derived from the multiple history tables.

Description

METHODS AND APPARATUS OF REGION-WISE CROSS-COMPONENT MODEL MERGE MODE FOR VIDEO CODING
CROSS REFERENCE TO RELATED APPLICATIONS
The present invention is a non-Provisional Application of and claims priority to U.S. Provisional Patent Application No. 63/479,003, filed on January 9, 2023. The U.S. Provisional Patent Application is hereby incorporated by reference in its entirety.
FIELD OF THE INVENTION
The present invention relates to video coding system. In particular, the present invention relates to region-wise cross-component merge mode or multiple history tables in a video coding system.
BACKGROUND
Versatile video coding (VVC) is the latest international video coding standard developed by the Joint Video Experts Team (JVET) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) . The standard has been published as an ISO standard: ISO/IEC 23090-3: 2021, Information technology -Coded representation of immersive media -Part 3: Versatile video coding, published Feb. 2021. VVC is developed based on its predecessor HEVC (High Efficiency Video Coding) by adding more coding tools to improve coding efficiency and also to handle various types of video sources including 3-dimensional (3D) video signals.
Fig. 1A illustrates an exemplary adaptive Inter/Intra video encoding system incorporating loop processing. For Intra Prediction 110, the prediction data is derived based on previously coded video data in the current picture. For Inter Prediction 112, Motion Estimation (ME) is performed at the encoder side and Motion Compensation (MC) is performed based on the result of ME to provide prediction data derived from other picture (s) and motion data. Switch 114 selects Intra Prediction 110 or Inter-Prediction 112 and the selected prediction data is supplied to Adder 116 to form prediction errors, also called residues. The prediction error is then processed by Transform (T) 118 followed by Quantization (Q) 120. The transformed and quantized residues are then coded by Entropy Encoder 122 to be included in a video bitstream corresponding to the compressed video data. The bitstream associated with the transform coefficients is then packed with side information such as motion and coding modes associated with Intra prediction and Inter prediction, and other information such as parameters associated with loop filters applied to underlying image area. The side information associated with Intra Prediction 110, Inter prediction 112 and in-loop filter 130, are provided to Entropy Encoder 122 as shown in Fig. 1A. When an Inter-prediction mode is used, a reference picture or pictures have to be reconstructed at the encoder end as well. Consequently, the transformed and  quantized residues are processed by Inverse Quantization (IQ) 124 and Inverse Transformation (IT) 126 to recover the residues. The residues are then added back to prediction data 136 at Reconstruction (REC) 128 to reconstruct video data. The reconstructed video data may be stored in Reference Picture Buffer 134 and used for prediction of other frames.
As shown in Fig. 1A, incoming video data undergoes a series of processing in the encoding system. The reconstructed video data from REC 128 may be subject to various impairments due to a series of processing. Accordingly, in-loop filter 130 is often applied to the reconstructed video data before the reconstructed video data are stored in the Reference Picture Buffer 134 in order to improve video quality. For example, deblocking filter (DF) , Sample Adaptive Offset (SAO) and Adaptive Loop Filter (ALF) may be used. The loop filter information may need to be incorporated in the bitstream so that a decoder can properly recover the required information. Therefore, loop filter information is also provided to Entropy Encoder 122 for incorporation into the bitstream. In Fig. 1A, Loop filter 130 is applied to the reconstructed video before the reconstructed samples are stored in the reference picture buffer 134. The system in Fig. 1A is intended to illustrate an exemplary structure of a typical video encoder. It may correspond to the High Efficiency Video Coding (HEVC) system, VP8, VP9, H. 264 or VVC.
The decoder, as shown in Fig. 1B, can use similar functional blocks or a portion of the same functional blocks as the encoder, except for Transform 118 and Quantization 120, since the decoder only needs Inverse Quantization 124 and Inverse Transform 126. Instead of Entropy Encoder 122, the decoder uses an Entropy Decoder 140 to decode the video bitstream into quantized transform coefficients and needed coding information (e.g. ILPF information, Intra prediction information and Inter prediction information). The Intra prediction 150 at the decoder side does not need to perform the mode search. Instead, the decoder only needs to generate Intra prediction according to Intra prediction information received from the Entropy Decoder 140. Furthermore, for Inter prediction, the decoder only needs to perform motion compensation (MC 152) according to Inter prediction information received from the Entropy Decoder 140 without the need for motion estimation.
According to VVC, an input picture is partitioned into non-overlapped square block regions referred to as CTUs (Coding Tree Units), similar to HEVC. Each CTU can be partitioned into one or multiple smaller size coding units (CUs). The resulting CU partitions can be in square or rectangular shapes. Also, VVC divides a CTU into prediction units (PUs) as units for applying the prediction process, such as Inter prediction, Intra prediction, etc.
The VVC standard incorporates various new coding tools to further improve the coding efficiency over the HEVC standard. Some new tools relevant to the present invention are reviewed as follows.
In the present invention, methods and apparatus for a shared buffer to store coding information for multiple coding tools including the Cross-Component Model (CCM) mode are disclosed to improve the performance. In addition, methods and apparatus to use innovative default cross-component candidates are disclosed to improve the coding performance.
BRIEF SUMMARY OF THE INVENTION
A method and apparatus for video coding using coding tools including one or more cross component models related modes are disclosed. According to the method, input data associated with a current block comprising a first-colour block and a second-colour block are received, wherein the input data comprise pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side. The current block is partitioned into two or more subblocks. A target cross-component model for at least one of said two or more subblocks is derived, wherein the target cross-component model is inherited from a spatial, a temporal or a historical neighbouring block or position. A candidate list comprising the target cross-component model is derived. Said at least one of said two or more subblocks is encoded or decoded using information comprising the candidate list, wherein when a target cross-component candidate corresponding to the target cross-component model is selected for said at least one of said two or more subblocks, a predictor is generated for a second-colour subblock of said at least one of said two or more subblocks by applying the target cross-component model to a first-colour subblock of said at least one of said two or more subblocks.
In one embodiment, the candidate list is used for a cross-component merge mode. In one embodiment, at least another subblock of said two or more subblocks is coded by an inter or intra coding tool.
In one embodiment, the current block is partitioned into said two or more subblocks symmetrically or asymmetrically.
In one embodiment, when the target cross-component model is inherited from the historical neighbouring block or position, a candidate index for the historical neighbouring block or position is explicitly signalled or parsed. In another embodiment, the candidate index for the historical neighbouring block or position is implicitly indicated.
In one embodiment, when the current block is partitioned into two subblocks and both of the two subblocks are coded using a cross-component merge mode, the first two target cross-component candidates in the candidate list correspond to two candidate indexes of the two subblocks. In one embodiment, the two candidate indexes are assigned to the two subblocks implicitly. In another embodiment, a first index of the two candidate indexes is signalled or parsed explicitly and a second index of the two candidate indexes is signalled or parsed depending on the first index.
In one embodiment, when the current block is partitioned into two subblocks and both of the two subblocks are coded based on a cross-component merge mode, a first target cross-component candidate in the candidate list is used by both of the two subblocks.
In one embodiment, said at least one of said two or more subblocks corresponds to a first subblock of said two or more subblocks and the target cross-component model is implicitly derived from a stored cross-component model at a corresponding top-left position of the current block in previous coded slices/pictures.
According to another method, the current picture is partitioned into multiple regions and multiple history tables associated with the multiple regions are determined, wherein at least two history tables of the multiple history tables are kept for two different regions. A candidate list comprising one or more cross-component historical candidates is derived from the multiple history tables. The current block is encoded or decoded using information comprising the candidate list, wherein when a target cross-component historical candidate from the candidate list is selected for the current block, a predictor is generated for the second-colour block by applying a target cross-component model associated with the target cross-component historical candidate to the first-colour block.
In one embodiment, one of the multiple history tables is kept for each of the multiple regions. In one embodiment, an initial history table is generated for a current region based on a target history table associated with a target region of the multiple history tables in previously coded pictures. In one embodiment, an index is signalled or parsed to indicate the target region in the previously coded pictures. In one embodiment, the target region in the previously coded pictures is determined implicitly.
In one embodiment, a current history table for a current region is generated using at least two of multiple history tables.
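The per-region history tables of this second method can be sketched as below. The table size, the duplicate-removal rule and the FIFO update are our assumptions for illustration (the class name RegionHistoryTables is also ours); the text only requires that at least two history tables are kept for two different regions.

```python
# Sketch of keeping one cross-component-model history table per region.

class RegionHistoryTables:
    def __init__(self, num_regions, max_size=6):
        # One table per region, so models from different regions never mix.
        self.tables = [[] for _ in range(num_regions)]
        self.max_size = max_size

    def add(self, region, model):
        table = self.tables[region]
        if model in table:       # avoid duplicate models in a table
            table.remove(model)
        table.append(model)
        if len(table) > self.max_size:
            table.pop(0)         # FIFO: drop the oldest model

    def candidate_list(self, region):
        # Historical candidates for the region, most recent first.
        return list(reversed(self.tables[region]))
```

As described above, an initial table for a new region could additionally be generated from the table of a chosen region in a previously coded picture, or from a combination of two or more tables.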
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1A illustrates an exemplary adaptive Inter/Intra video encoding system incorporating loop processing.
Fig. 1B illustrates a corresponding decoder for the encoder in Fig. 1A.
Fig. 2 shows an example of the location of the left and above samples and the sample of the current block involved in the LM_LA mode.
Fig. 3 shows an example of classifying the neighbouring samples into two groups.
Fig. 4 illustrates an example of spatial part of the convolutional filter.
Fig. 5 illustrates an example of reference area with paddings used to derive the filter coefficients.
Fig. 6 illustrates the 16 gradient patterns for Gradient Linear Model (GLM) .
Fig. 7 illustrates the neighbouring blocks used for deriving spatial merge candidates for VVC.
Fig. 8 illustrates the possible candidate pairs considered for redundancy check in VVC.
Fig. 9 illustrates an example of temporal candidate derivation, where a scaled motion  vector is derived according to POC (Picture Order Count) distances.
Fig. 10 illustrates the position for the temporal candidate selected between candidates C0 and C1.
Fig. 11 illustrates an exemplary pattern of the non-adjacent spatial merge candidates.
Fig. 12 illustrates an example of inheriting temporal neighbouring model parameters.
Figs. 13A-B illustrate two search patterns for inheriting non-adjacent spatial neighbouring models.
Fig. 14 illustrates an example of multiple history tables for storing cross-component models, where each grid represents a CTU.
Figs. 15A-B illustrate examples for constructing the history table of the current region from the history table of the region having the same beginning geometric position of the current region (Fig. 15A) or from the history table of the region containing the centre geometric position of the current region (Fig. 15B) .
Fig. 16 illustrates a flowchart of an exemplary video coding system that uses region-wise cross-component merge mode according to an embodiment of the present invention.
Fig. 17 illustrates a flowchart of an exemplary video coding system that uses multiple history tables to derive a cross-component candidate list according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the systems and methods of the present invention, as represented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. References throughout this specification to “one embodiment, ” “an embodiment, ” or similar language mean that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, etc. In other instances, well-known structures, or operations are not shown or described in detail to avoid obscuring aspects of the invention. The illustrated embodiments of the invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply  illustrates certain selected embodiments of apparatus and methods that are consistent with the invention as claimed herein.
Cross-Component Linear Model (CCLM) Prediction
To reduce the cross-component redundancy, a cross-component linear model (CCLM, sometimes abbreviated as LM mode) prediction mode is used in the VVC, for which the chroma samples are predicted based on the reconstructed luma samples of the same CU by using a linear model as follows:
predC(i, j) = α·recL′(i, j) + β       (1)
where predC(i, j) represents the predicted chroma samples in a CU and recL′(i, j) represents the down-sampled reconstructed luma samples of the same CU.
The CCLM parameters (α and β) are derived with at most four neighbouring chroma samples and their corresponding down-sampled luma samples. Suppose the current chroma block dimensions are W×H, then W’ and H’ are set as
– W’ = W, H’ = H when LM_LA mode is applied;
– W’ = W + H when LM_A mode is applied;
– H’ = H + W when LM_L mode is applied.
The above neighbouring positions are denoted as S[0, -1] … S[W’-1, -1] and the left neighbouring positions are denoted as S[-1, 0] … S[-1, H’-1]. Then the four samples are selected as
- S[W’/4, -1], S[3*W’/4, -1], S[-1, H’/4], S[-1, 3*H’/4] when LM mode is applied and both above and left neighbouring samples are available;
- S[W’/8, -1], S[3*W’/8, -1], S[5*W’/8, -1], S[7*W’/8, -1] when LM-A mode is applied or only the above neighbouring samples are available;
- S[-1, H’/8], S[-1, 3*H’/8], S[-1, 5*H’/8], S[-1, 7*H’/8] when LM-L mode is applied or only the left neighbouring samples are available.
The four neighbouring luma samples at the selected positions are down-sampled and compared four times to find two larger values, x0A and x1A, and two smaller values, x0B and x1B. Their corresponding chroma sample values are denoted as y0A, y1A, y0B and y1B. Then Xa, Xb, Ya and Yb are derived as:
Xa = (x0A + x1A + 1) >> 1;
Xb = (x0B + x1B + 1) >> 1;
Ya = (y0A + y1A + 1) >> 1;
Yb = (y0B + y1B + 1) >> 1        (2)
Finally, the linear model parameters α and β are obtained according to the following equations:
α = (Ya - Yb) / (Xa - Xb)        (3)
β = Yb - α·Xb         (4)
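Under the simplifying assumption of floating-point division (the standard instead uses the table-based integer division of equation (5) described below), the derivation in equations (2)-(4) can be sketched as follows; the helper names derive_cclm_params and predict_chroma are ours.

```python
# Sketch of CCLM parameter derivation from the four selected neighbouring
# samples (equations (2)-(4)); floating-point division is an assumption.

def derive_cclm_params(luma4, chroma4):
    """luma4/chroma4: four corresponding down-sampled luma and chroma values."""
    # Order positions by luma value: two smaller (B) and two larger (A).
    order = sorted(range(4), key=lambda i: luma4[i])
    x0b, x1b, x0a, x1a = (luma4[i] for i in order)
    y0b, y1b, y0a, y1a = (chroma4[i] for i in order)

    # Equation (2): average the two extremes on each side.
    xa = (x0a + x1a + 1) >> 1
    xb = (x0b + x1b + 1) >> 1
    ya = (y0a + y1a + 1) >> 1
    yb = (y0b + y1b + 1) >> 1

    # Equations (3) and (4).
    alpha = (ya - yb) / (xa - xb) if xa != xb else 0.0
    beta = yb - alpha * xb
    return alpha, beta

def predict_chroma(rec_luma, alpha, beta):
    # Equation (1): predC = alpha * recL' + beta.
    return alpha * rec_luma + beta
```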
Fig. 2 shows an example of the location of the left and above samples and the sample of the current block involved in the LM_LA mode. Fig. 2 shows the relative sample locations of N × N chroma block 210, the corresponding 2N × 2N luma block 220 and their neighbouring samples (shown as filled circles) .
The division operation to calculate parameter α is implemented with a look-up table. To reduce the memory required for storing the table, the diff value (difference between maximum and minimum values) and the parameter α are expressed by an exponential notation. For example, diff is approximated with a 4-bit significant part and an exponent. Consequently, the table for 1/diff is reduced into 16 elements for 16 values of the significand as follows:
DivTable [] = {0, 7, 6, 5, 5, 4, 4, 3, 3, 2, 2, 1, 1, 1, 1, 0}         (5)
This would have a benefit of both reducing the complexity of the calculation as well as the memory size required for storing the needed tables.
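As a rough illustration of this exponential representation (the exact VVC integer arithmetic differs, and the function name split_diff is ours), diff can be reduced to a 4-bit significand plus an exponent:

```python
def split_diff(diff):
    # Keep the 4 most significant bits as the significand; the remaining
    # bit positions become the exponent, so diff is approximately sig << exp.
    exp = max(diff.bit_length() - 4, 0)
    sig = diff >> exp
    return sig, exp
```

The significand can then index a small table such as DivTable, instead of performing a full-precision division.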
Besides using the above template and left template together to calculate the linear model coefficients, they can also be used alternatively in the other 2 LM modes, called LM_A and LM_L modes.
In LM_A mode, only the above template is used to calculate the linear model coefficients. To get more samples, the above template is extended to (W+H) samples. In LM_L mode, only the left template is used to calculate the linear model coefficients. To get more samples, the left template is extended to (H+W) samples.
In LM_LA mode, left and above templates are used to calculate the linear model coefficients.
To match the chroma sample locations for 4:2:0 video sequences, two types of down-sampling filters are applied to luma samples to achieve a 2 to 1 down-sampling ratio in both horizontal and vertical directions. The selection of the down-sampling filter is specified by an SPS level flag. The two down-sampling filters are as follows, which correspond to “type-0” and “type-2” content, respectively.
RecL′(i, j) = [recL(2i-1, 2j-1) + 2·recL(2i, 2j-1) + recL(2i+1, 2j-1) + recL(2i-1, 2j) + 2·recL(2i, 2j) + recL(2i+1, 2j) + 4] >> 3    (6)
RecL′(i, j) = [recL(2i, 2j-1) + recL(2i-1, 2j) + 4·recL(2i, 2j) + recL(2i+1, 2j) + recL(2i, 2j+1) + 4] >> 3    (7)
Note that only one luma line (general line buffer in intra prediction) is used to make the down-sampled luma samples when the upper reference line is at the CTU boundary.
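A sketch of the two down-sampling filters of equations (6) and (7): the reconstructed luma array is indexed as rec[y][x], boundary handling (including the single-line buffer at the CTU boundary) is omitted, and the function names are ours.

```python
def downsample_type0(rec, i, j):
    # Equation (6): 6-tap filter for "type-0" chroma sample locations.
    return (rec[2*j - 1][2*i - 1] + 2 * rec[2*j - 1][2*i] + rec[2*j - 1][2*i + 1]
            + rec[2*j][2*i - 1] + 2 * rec[2*j][2*i] + rec[2*j][2*i + 1]
            + 4) >> 3

def downsample_type2(rec, i, j):
    # Equation (7): 5-tap plus-shape filter for "type-2" chroma sample locations.
    return (rec[2*j - 1][2*i] + rec[2*j][2*i - 1] + 4 * rec[2*j][2*i]
            + rec[2*j][2*i + 1] + rec[2*j + 1][2*i]
            + 4) >> 3
```

Both filters sum to weight 8 before the right shift by 3, so a flat region keeps its value.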
This parameter computation is performed as part of the decoding process, not just as an encoder search operation. As a result, no syntax is used to convey the α and β values to the decoder.
For chroma intra mode coding, a total of 8 intra modes are allowed. Those modes include five traditional intra modes and three cross-component linear model modes (i.e., {LM_LA, LM_L, and LM_A}, or {CCLM_LT, CCLM_L, and CCLM_T}). The terms {LM_LA, LM_L, LM_A} and {CCLM_LT, CCLM_L, CCLM_T} are used interchangeably in this disclosure. Chroma mode coding directly depends on the intra prediction mode of the corresponding luma block. Since a separate block partitioning structure for luma and chroma components is enabled in I slices, one chroma block may correspond to multiple luma blocks. Therefore, for the Chroma DM mode, the intra prediction mode of the corresponding luma block covering the centre position of the current chroma block is directly inherited.
Multiple Model CCLM (MMLM)
In the JEM (J. Chen, E. Alshina, G. J. Sullivan, J. -R. Ohm, and J. Boyce, Algorithm Description of Joint Exploration Test Model 7, document JVET-G1001, ITU-T/ISO/IEC Joint Video Exploration Team (JVET), Jul. 2017), multiple model CCLM mode (MMLM) is proposed for using two models for predicting the chroma samples from the luma samples for the whole CU. In MMLM, neighbouring luma samples and neighbouring chroma samples of the current block are classified into two groups, and each group is used as a training set to derive a linear model (i.e., a particular α and β are derived for a particular group). Furthermore, the samples of the current luma block are also classified based on the same rule for the classification of neighbouring luma samples. Three MMLM modes (MMLM_LT, MMLM_T, and MMLM_L) are allowed for choosing the neighbouring samples from left-side and above-side, above-side only, and left-side only, respectively.
Fig. 3 shows an example of classifying the neighbouring samples into two groups. The threshold is calculated as the average value of the neighbouring reconstructed luma samples. A neighbouring sample with Rec′L[x, y] <= Threshold is classified into group 1, while a neighbouring sample with Rec′L[x, y] > Threshold is classified into group 2.
Accordingly, the MMLM uses two models according to the sample level of the neighbouring samples.
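A minimal sketch of the two-group classification; integer averaging for the threshold is an assumption about the rounding, and the function name is ours.

```python
def classify_mmlm(neigh_luma):
    # Threshold = average of the neighbouring reconstructed luma samples.
    threshold = sum(neigh_luma) // len(neigh_luma)
    group1 = [s for s in neigh_luma if s <= threshold]   # Rec'L <= Threshold
    group2 = [s for s in neigh_luma if s > threshold]    # Rec'L > Threshold
    return threshold, group1, group2
```

A separate (alpha, beta) pair would then be fitted per group, and the current luma samples are routed to a model by the same threshold.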
Convolutional Cross-Component Model (CCCM) -Single Model and Multi- Model
In CCCM, a convolutional model is applied to improve the chroma prediction performance. The convolutional model has a 7-tap filter consisting of a 5-tap plus-sign-shape spatial component, a nonlinear term and a bias term. The input to the spatial 5-tap component of the filter consists of a centre (C) luma sample which is collocated with the chroma sample to be predicted and its above/north (N), below/south (S), left/west (W) and right/east (E) neighbours as shown in Fig. 4.
The nonlinear term (denoted as P) is represented as power of two of the centre luma sample C and scaled to the sample value range of the content:
P = (C*C + midVal) >> bitDepth.
For example, for 10-bit contents, the nonlinear term is calculated as:
P = (C*C + 512) >> 10
The bias term (denoted as B) represents a scalar offset between the input and output (similarly to the offset term in CCLM) and is set to the middle chroma value (512 for 10-bit content) .
Output of the filter is calculated as a convolution between the filter coefficients ci and the input values and clipped to the range of valid chroma samples:
predChromaVal = c0C + c1N + c2S + c3E + c4W + c5P + c6B
The filter coefficients ci are calculated by minimising the MSE between predicted and reconstructed chroma samples in the reference area. Fig. 5 illustrates an example of the reference area, which consists of 6 lines of chroma samples above and left of the PU. The reference area extends one PU width to the right and one PU height below the PU boundaries. The area is adjusted to include only available samples.
Also, similarly to CCLM, there is an option of using a single model or multi-model variant of CCCM. The multi-model variant uses two models, one model derived for samples above the average luma reference value and another model for the rest of the samples (following the spirit of the CCLM design) . Multi-model CCCM mode can be selected for PUs which have at least 128 reference samples available.
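A sketch of the CCCM filter output for one chroma sample, assuming pre-computed coefficients c0..c6; the MSE fit over the reference area is not shown, and the function name cccm_predict is ours.

```python
def cccm_predict(c, n, s, e, w, coeffs, bit_depth=10):
    """Apply the 7-tap CCCM filter; coeffs = [c0..c6] are assumed to have
    been fitted by MSE minimisation over the reference area (not shown)."""
    mid_val = 1 << (bit_depth - 1)
    p = (c * c + mid_val) >> bit_depth       # nonlinear term
    b = mid_val                              # bias term
    inputs = [c, n, s, e, w, p, b]
    val = sum(ci * xi for ci, xi in zip(coeffs, inputs))
    # Clip to the range of valid chroma samples.
    return max(0, min((1 << bit_depth) - 1, int(val)))
```

For 10-bit content, mid_val is 512, matching the nonlinear-term scaling and the bias value given in the text.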
Gradient Linear Model (GLM)
Compared with the CCLM, instead of down-sampled luma values, the GLM utilizes luma sample gradients to derive the linear model. Specifically, when the GLM is applied, the input to the CCLM process, i.e., the down-sampled luma samples L, are replaced by luma sample gradients G. The other parts of the CCLM (e.g., parameter derivation, prediction sample linear transform) are kept unchanged.
C = α·G + β
For signalling, when the CCLM mode is enabled for the current CU, two flags are  signalled separately for Cb and Cr components to indicate whether GLM is enabled for each component. If the GLM is enabled for one component, one syntax element is further signalled to select one of 16 gradient filters (610-640 in Fig. 6) for the gradient calculation. The GLM can be combined with the existing CCLM by signalling one extra flag in bitstream. When such combination is applied, the filter coefficients that are used to derive the input luma samples of the linear model are calculated as the combination of the selected gradient filter of the GLM and the down-sampling filter of the CCLM.
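A sketch of GLM prediction: a luma gradient G replaces the down-sampled luma value in the linear model. The simple horizontal difference below merely stands in for one of the 16 gradient filters of Fig. 6 and is only illustrative; both function names are ours.

```python
def luma_gradient_h(rec, i, j):
    # Illustrative horizontal gradient at down-sampled position (i, j);
    # rec is the reconstructed luma array indexed as rec[y][x].
    return rec[2*j][2*i + 1] - rec[2*j][2*i - 1]

def glm_predict(g, alpha, beta):
    # C = alpha * G + beta, with the gradient G replacing the
    # down-sampled luma value of CCLM.
    return alpha * g + beta
```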
Spatial Candidate Derivation
The derivation of spatial merge candidates in VVC is the same as that in HEVC except that the positions of the first two merge candidates are swapped. A maximum of four merge candidates (B0, A0, B1 and A1) for the current CU 710 are selected among candidates located in the positions depicted in Fig. 7. The order of derivation is B0, A0, B1, A1 and B2. Position B2 is considered only when one or more neighbouring CUs at positions B0, A0, B1, A1 are not available (e.g. belonging to another slice or tile) or are intra coded. After the candidate at position A1 is added, the addition of the remaining candidates is subject to a redundancy check which ensures that candidates with the same motion information are excluded from the list so that coding efficiency is improved. To reduce computational complexity, not all possible candidate pairs are considered in the mentioned redundancy check. Instead, only the pairs linked with an arrow in Fig. 8 are considered, and a candidate is only added to the list if the corresponding candidate used for the redundancy check does not have the same motion information.
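A simplified sketch of the candidate collection: for clarity it checks each new candidate against all previously added ones, whereas VVC only checks the limited pairs of Fig. 8, and the availability/intra-coded conditions are reduced to a None marker. The function name is ours.

```python
def build_spatial_list(candidates, max_count=4):
    """candidates: (position_name, motion_info or None) pairs in the
    derivation order B0, A0, B1, A1, B2; None marks an unavailable or
    intra-coded position."""
    merge_list = []
    for name, mv in candidates:
        if mv is None:
            continue
        # Simplified redundancy check against all added candidates.
        if mv not in [m for _, m in merge_list]:
            merge_list.append((name, mv))
        if len(merge_list) == max_count:
            break
    return merge_list
```

Note that B2 can only enter the list when an earlier position was skipped as unavailable or redundant, since the list is capped at four spatial candidates.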
Temporal Candidates Derivation
In this step, only one candidate is added to the list. Particularly, in the derivation of this temporal merge candidate for a current CU 910, a scaled motion vector is derived based on the co-located CU 920 belonging to the collocated reference picture as shown in Fig. 9. The reference picture list and the reference index to be used for the derivation of the co-located CU is explicitly signalled in the slice header. The scaled motion vector 930 for the temporal merge candidate is obtained as illustrated by the dotted line in Fig. 9, which is scaled from the motion vector 940 of the co-located CU using the POC (Picture Order Count) distances, tb and td, where tb is defined to be the POC difference between the reference picture of the current picture and the current picture and td is defined to be the POC difference between the reference picture of the co-located picture and the co-located picture. The reference picture index of temporal merge candidate is set equal to zero.
The position for the temporal candidate is selected between candidates C0 and C1, as depicted in Fig. 10. If CU at position C0 is not available, is intra coded, or is outside of the current row of CTUs, position C1 is used. Otherwise, position C0 is used in the derivation of the temporal merge candidate.
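The POC-distance scaling of the temporal candidate can be sketched as below. This is a minimal illustration of scaling a motion vector by tb/td; the function name and the integer-pel assumption are hypothetical, and the clipping and fixed-point rounding used in real codecs are omitted.

```python
def scale_temporal_mv(mv_col, poc_cur, poc_cur_ref, poc_col, poc_col_ref):
    """Scale the co-located CU's motion vector by the ratio of POC distances.

    tb: POC difference between the current picture and its reference picture.
    td: POC difference between the co-located picture and its reference picture.
    Names are illustrative, not taken from any standard text.
    """
    tb = poc_cur - poc_cur_ref
    td = poc_col - poc_col_ref
    if td == 0:
        return mv_col  # no scaling possible
    mvx, mvy = mv_col
    return (mvx * tb // td, mvy * tb // td)

# Current picture at POC 8 predicting from POC 4 (tb = 4); co-located CU at
# POC 16 pointing to POC 8 (td = 8), so the motion vector is halved.
print(scale_temporal_mv((16, -8), 8, 4, 16, 8))  # prints (8, -4)
```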
Non-Adjacent Spatial Candidate
During the development of the VVC standard, a coding tool referred to as Non-Adjacent Motion Vector Prediction (NAMVP) was proposed in JVET-L0399 (Yu Han, et al., “CE4.4.6: Improvement on Merge/Skip mode” , Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting: Macao, CN, 3–12 Oct. 2018, Document: JVET-L0399) . According to the NAMVP technique, the non-adjacent spatial merge candidates are inserted after the TMVP (i.e., the temporal MVP) in the regular merge candidate list. The pattern of spatial merge candidates is shown in Fig. 11. The distances between non-adjacent spatial candidates and the current coding block are based on the width and height of the current coding block. In Fig. 11, each small square corresponds to a NAMVP candidate and the candidates are ordered (as shown by the number inside the square) according to the distance. The line buffer restriction is not applied. In other words, NAMVP candidates far away from the current block may have to be stored, which may require a large buffer.
In order to improve the prediction accuracy or coding performance of cross-component prediction, various schemes related to inheriting cross-component models are disclosed.
Guided Parameter Set for Refining the Cross-Component Model Parameters
According to this method, a guided parameter set is used to refine the model parameters derived by a specified CCLM mode. For example, the guided parameter set is explicitly signalled in the bitstream; after the model parameters are derived, the guided parameter set is added to the derived model parameters to form the final model parameters. The guided parameter set contains at least one of a differential scaling parameter (dA) , a differential offset parameter (dB) , and a differential shift parameter (dS) . For example, equation (1) can be rewritten as:
predC (i, j) = ( (α′·recL′ (i, j) ) >>s) + β,
and if dA is signalled, the final prediction is:
predC (i, j) = ( ( (α′+dA) ·recL′ (i, j) ) >>s) + β.
Similarly, if dB is signalled, then the final prediction is:
predC (i, j) = ( (α′·recL′ (i, j) ) >>s) + (β+dB) .
If dS is signalled, then the final prediction is:
predC (i, j) = ( (α′·recL′ (i, j) ) >> (s+dS) ) + β.
If dA and dB are signalled, then the final prediction is:
predC (i, j) = ( ( (α′+dA) ·recL′ (i, j) ) >>s) + (β+dB) .
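The refinement equations above can be collected into one minimal sketch; the function and variable names are hypothetical, and the fixed-point clipping of a real implementation is omitted.

```python
def cclm_predict(rec_luma, alpha, beta, shift, dA=0, dB=0, dS=0):
    """CCLM prediction with an optional guided parameter set (dA, dB, dS).

    The signalled deltas refine the derived scaling (alpha), offset (beta)
    and shift (s) exactly as in the equations above; a delta of 0 means the
    corresponding parameter was not signalled.
    """
    return (((alpha + dA) * rec_luma) >> (shift + dS)) + (beta + dB)

# Base model alpha' = 9, s = 3, beta = 4: pred = ((9*32) >> 3) + 4 = 40
print(cclm_predict(32, 9, 4, 3))
# Signalling dA = 1:                      pred = ((10*32) >> 3) + 4 = 44
print(cclm_predict(32, 9, 4, 3, dA=1))
```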
The guided parameter set can be signalled per colour component. For example, one guided parameter set is signalled for the Cb component, and another guided parameter set is signalled for the Cr component. Alternatively, one guided parameter set can be signalled and shared among colour components. The signalled dA and dB can be positive or negative values. When signalling dA, one bin is signalled to indicate the sign of dA. Similarly, when signalling dB, one bin is signalled to indicate the sign of dB.
For another embodiment, in MMLM, the guided parameter set can also be signalled per model. For example, one guided parameter set is signalled for one model and another guided parameter set is signalled for another model. Alternatively, one guided parameter set is signalled and shared among linear models. Or only one guided parameter set is signalled for one selected model, and another model is not further refined by guided parameter set.
Inherit Neighbouring Model Parameters for Refining the Cross-Component Model Parameters
The final scaling parameter of the current block is inherited from the neighbouring blocks and further refined by dA (e.g., dA derivation or signalling can be similar to or the same as the method in the previous section “Guided Parameter Set for Refining the Cross-Component Model Parameters” ) . Once the final scaling parameter is determined, the offset parameter (e.g., β in CCLM) is derived based on the inherited scaling parameter and the average values of neighbouring luma and chroma samples of the current block. For example, if the final scaling parameter is inherited from a selected neighbouring block, and the inherited scaling parameter is α′_nei, then the final scaling parameter is (α′_nei + dA) . For yet another embodiment, the final scaling parameter is inherited from a historical list and further refined by dA. For example, the historical list records the most recent j entries of final scaling parameters from previous CCLM-coded blocks. Then, the final scaling parameter is inherited from one selected entry of the historical list, α′_list, and the final scaling parameter is (α′_list + dA) . For yet another embodiment, the final scaling parameter is inherited from a historical list or the neighbouring blocks, but only the MSB (Most Significant Bit) part of the inherited scaling parameter is taken, and the LSB (Least Significant Bit) part of the final scaling parameter is from dA. For yet another embodiment, the final scaling parameter is inherited from a historical list or the neighbouring blocks, but is not further refined by dA.
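The offset re-derivation described above can be sketched as follows. This is an illustrative fixed-point version, assuming the scaling parameter carries `shift` fractional bits; the integer-averaging details and names are assumptions, not the reference algorithm.

```python
def rederive_offset(alpha_inherited, dA, luma_nb, chroma_nb, shift):
    """Re-derive the offset after inheriting (and refining) the scaling.

    The final scaling is (alpha_nei + dA); beta is chosen so the linear model
    maps the average neighbouring luma value to the average neighbouring
    chroma value of the current block.
    """
    alpha = alpha_inherited + dA
    luma_avg = sum(luma_nb) // len(luma_nb)
    chroma_avg = sum(chroma_nb) // len(chroma_nb)
    beta = chroma_avg - ((alpha * luma_avg) >> shift)
    return alpha, beta

# alpha = 8 with shift 3 is a slope of 1.0; the offset absorbs the DC gap.
print(rederive_offset(8, 0, [100, 110, 90, 100], [60, 70, 50, 60], 3))  # (8, -40)
```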
For yet another embodiment, if the inherited neighbour block is coded with CCCM, the filter coefficients (ci) are inherited. The offset parameter (e.g., c6×B or c6 in CCCM) can be re-derived based on the inherited parameters and the average values of the corresponding neighbouring luma and chroma samples of the current block. For still another embodiment, only partial filter coefficients are inherited (e.g., only n out of 6 filter coefficients are inherited, where 1≤n<6) , and the remaining filter coefficients are re-derived using the neighbouring luma and chroma samples of the current block.
For still another embodiment, if the inherited candidate applies a GLM gradient pattern to its luma reconstructed samples, the current block shall also inherit the GLM gradient pattern of the candidate and apply it to the current luma reconstructed samples.
For still another embodiment, if the inherited neighbour block is coded with multiple cross-component models (e.g., MMLM, or CCCM with multi-model) , the classification threshold is also inherited to classify the neighbouring samples of the current block into multiple groups, and the inherited multiple cross-component model parameters are further assigned to each group. For yet another embodiment, the classification threshold is the average value of the neighbouring reconstructed luma samples, and the inherited multiple cross-component model parameters are further assigned to each group. Similarly, once the final scaling parameter of each group is determined, the offset parameter of each group is re-derived based on the inherited scaling parameter and the average value of neighbouring luma and chroma samples of each group of the current block. For another example, if CCCM with multi-model is used, once the final coefficient parameter of each group is determined (e.g., c0 to c5 except for c6 in CCCM) , the offset parameter (e.g., c6×B or c6 in CCCM) of each group is re-derived based on the inherited coefficient parameter and the neighbouring luma and chroma samples of each group of the current block.
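The inherited threshold classification can be sketched as follows; a minimal illustration assuming two (alpha, beta, shift) models and a sample-wise comparison against the inherited threshold. Names and the single-sample model form are assumptions.

```python
def mmlm_predict(rec_luma_samples, models, threshold):
    """Apply inherited multi-model parameters after threshold classification.

    Samples at or below the inherited classification threshold use the first
    (alpha, beta, shift) model; the remaining samples use the second.
    """
    out = []
    for y in rec_luma_samples:
        a, b, s = models[0] if y <= threshold else models[1]
        out.append(((a * y) >> s) + b)
    return out

# Threshold 100 splits the samples into two groups with different models.
print(mmlm_predict([90, 120], [(8, 0, 3), (4, 10, 3)], 100))  # [90, 70]
```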
For still another embodiment, inheriting model parameters may depend on the colour component. For example, Cb and Cr components may inherit model parameters or model derivation method from the same candidate or different candidates. For yet another example, only one of colour components inherits model parameters, and the other colour component derives model parameters based on the inherited model derivation method (e.g., if the inherit candidate is coded by MMLM or CCCM, the current block also derives model parameters based on MMLM or CCCM using the current neighbouring reconstructed samples) . For still another example, only one of colour components inherits model parameters, and the other colour component derives its model parameters using the current neighbouring reconstructed samples.
For still another example, Cb and Cr components can inherit model parameters or model derivation methods from different candidates, and the inherited model of Cr can depend on the inherited model of Cb. For example, possible cases include but are not limited to: (1) if the inherited model of Cb is CCCM, the inherited model of Cr shall be CCCM; (2) if the inherited model of Cb is CCLM, the inherited model of Cr shall be CCLM; (3) if the inherited model of Cb is MMLM, the inherited model of Cr shall be MMLM; (4) if the inherited model of Cb is CCLM, the inherited model of Cr shall be CCLM or MMLM; (5) if the inherited model of Cb is MMLM, the inherited model of Cr shall be CCLM or MMLM; (6) if the inherited model of Cb is GLM, the inherited model of Cr shall be GLM.
For yet another embodiment, after decoding a block, the cross-component model (CCM) information of the current block is derived and stored for the later reconstruction process of neighbouring blocks using inherited neighbour model parameters. The CCM information mentioned in this disclosure includes but is not limited to the prediction mode (e.g., CCLM, MMLM, CCCM) , GLM pattern index, model parameters, or classification threshold. For example, even if the current block is coded by inter prediction, the cross-component model parameters of the current block can be derived by using the current luma and chroma reconstruction or prediction samples. Later, if another block is predicted using inherited neighbour model parameters, it can inherit the model parameters from the current block. For another example, if the current block is coded by cross-component prediction, the cross-component model parameters of the current block are re-derived by using the current luma and chroma reconstruction or prediction samples. For another example, the stored cross-component model can be CCCM, LM_LA (i.e., single model LM using both above and left neighbouring samples to derive the model) , or MMLM_LA (multi-model LM using both above and left neighbouring samples to derive the model) . For still another example, even if the current block is coded by non-cross-component intra prediction (e.g., DC, planar, intra angular modes, MIP, or ISP) , the cross-component model parameters of the current block are derived by using the current luma and chroma reconstruction or prediction samples. For another example, even if the current block is coded by cross-component prediction, the cross-component model parameters of the current block are re-derived by using the current luma and chroma reconstruction or prediction samples. Later the re-derived model parameters are combined with the original cross-component models which are used in reconstructing the current block.
For combining with the original cross-component models, it can use the model combination methods mentioned in the sections entitled: Models Generated Based on Other Inherited Models and Inheriting Multiple Cross-Component Models. For example, assume the original cross-component model parameters are P_ori and the re-derived cross-component model parameters are P_new. The final cross-component model is α·P_ori + (1−α) ·P_new, where α is a weighting factor which can be predefined or implicitly derived by a neighbouring template cost.
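The per-parameter weighted combination can be sketched in fixed point; the parameter layout (a flat list per model), the fractional precision of the weight, and the names are all assumptions for illustration.

```python
def combine_models(p_ori, p_new, w, frac_bits=6):
    """Blend original and re-derived model parameters element-wise.

    Fixed-point version of  p = alpha*p_ori + (1 - alpha)*p_new,  with the
    weighting factor alpha expressed as w / 2**frac_bits.
    """
    one = 1 << frac_bits
    return [(w * a + (one - w) * b) >> frac_bits for a, b in zip(p_ori, p_new)]

# Equal weighting (w = 32 out of 64) averages each parameter pair.
print(combine_models([8, 4], [4, 8], 32))  # [6, 6]
```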
Inherit Spatial Neighbouring Model Parameters
For another embodiment, the inherited model parameters can be from a block that is an immediate neighbouring block. The models from blocks at pre-defined positions are added into the candidate list in a pre-defined order. For example, the pre-defined positions can be the positions depicted in Fig. 7, and the pre-defined order can be B0, A0, B1, A1 and B2, or A0, B0, B1, A1 and B2.
For still another embodiment, the pre-defined positions include the position at the immediate above (W >> 1) or ( (W >> 1) - 1) position if W is greater than or equal to TH, and the position at the immediate left (H >> 1) or ( (H >> 1) - 1) position if H is greater than or equal to TH, where W and H are the width and height of the current block, and TH is a threshold value which can be 4, 8, 16, 32, or 64.
For still another embodiment, the maximum number of inherited models from spatial neighbours is smaller than the number of pre-defined positions. For example, if the pre-defined positions are as depicted in Fig. 7, there are 5 pre-defined positions. If the pre-defined order is B0, A0, B1, A1 and B2, and the maximum number of inherited models from spatial neighbours is 4, the model from B2 is added into the candidate list only when one of the preceding blocks is not available or is not coded in a cross-component mode.
Inheriting Temporal Neighbouring Model Parameters
For still another embodiment, if the current slice/picture is a non-intra slice/picture, the inherited model parameters can be from a block in the previously coded slices/pictures. For example, as shown in Fig. 12, the current block position is at (x, y) and the block size is w×h. The inherited model parameters can be from the block at position (x’, y’) , (x’, y’ + h/2) , (x’ + w/2, y’) , (x’ + w/2, y’ + h/2) , (x’ + w, y’) , (x’, y’ + h) , or (x’ + w, y’ + h) of the previously coded slices/pictures, where x’ = x + Δx and y’ = y + Δy. In one embodiment, if the prediction mode of the current block is intra, Δx and Δy are set to 0. If the prediction mode of the current block is inter prediction, Δx and Δy are set to the horizontal and vertical motion vector components of the current block. In another embodiment, if the current block is inter bi-prediction, Δx and Δy are set to the horizontal and vertical motion vector components in reference picture list 0. In still another embodiment, if the current block is inter bi-prediction, Δx and Δy are set to the horizontal and vertical motion vector components in reference picture list 1.
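The anchor position (x’, y’) derivation above can be sketched as follows; the function name, the "intra"/"inter" mode flag, and the integer-pel motion vector are illustrative assumptions.

```python
def temporal_candidate_anchor(x, y, mode, mv=(0, 0)):
    """Anchor position (x', y') in a previously coded picture.

    Intra-coded blocks use a zero displacement (dx = dy = 0); inter-coded
    blocks displace by the block's motion vector, assumed here to be in
    integer-pel units.
    """
    dx, dy = (0, 0) if mode == "intra" else mv
    return x + dx, y + dy

print(temporal_candidate_anchor(64, 32, "inter", mv=(-4, 2)))  # (60, 34)
```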
For still another embodiment, if the current block is inter bi-prediction, the inherited model parameters can be from blocks in the previously coded slices/pictures in the reference lists. For example, if the horizontal and vertical motion vector in reference picture list 0 is (Δx_L0, Δy_L0) , the motion vector can be scaled to other reference pictures in reference lists 0 and 1. If the motion vector is scaled to the i-th reference picture in reference list 0 as (Δx_L0,i0, Δy_L0,i0) , the model can be from the block in the i-th reference picture in reference list 0, and Δx and Δy are set to (Δx_L0,i0, Δy_L0,i0) . For another example, if the horizontal and vertical motion vector in reference picture list 0 is (Δx_L0, Δy_L0) and the motion vector is scaled to the i-th reference picture in reference list 1 as (Δx_L0,i1, Δy_L0,i1) , the model can be from the block in the i-th reference picture in reference list 1, and Δx and Δy are set to (Δx_L0,i1, Δy_L0,i1) .
Inherit Non-Adjacent Spatial Neighbouring Models
For another embodiment, the inherited model parameters can be from blocks that are non-adjacent spatial neighbouring blocks. The models from blocks at pre-defined positions are added into the candidate list in a pre-defined order. For example, the pattern of the positions and the order can be as depicted in Fig. 11, where the distance between positions is based on the width and height of the current coding block. For another embodiment, the spacing between positions closer to the current block is smaller than the spacing between positions further away from the current block.
For still another embodiment, the maximum number of inherited models from non-adjacent spatial neighbours is smaller than the number of pre-defined positions. For example, the pre-defined positions can be as depicted in Figs. 13A-B, where two patterns (pattern 1310 in Fig. 13A and pattern 1320 in Fig. 13B) are shown. If the maximum number of inherited models from non-adjacent spatial neighbours is N, search pattern 2 is used only when the number of available models from search pattern 1 is smaller than N.
Inherit Model Parameters from History Table
In one embodiment, the inherited model parameters can be from a cross-component model history table. The cross-component models in the history table can be added into the candidate list according to a pre-defined order. In one embodiment, the adding order of historical candidates can be from the beginning of the table to the end of the table. In another embodiment, the adding order of historical candidates can be from a certain pre-defined position to the end of the table. In another embodiment, the adding order of historical candidates can be from the end of the table to the beginning of the table. In another embodiment, the adding order of historical candidates can be from a certain pre-defined position to the beginning of the table. In another embodiment, the adding order of historical candidates can be in an interleaved manner (e.g., the first added is from the beginning of the table, the second added candidate from the end of the table and so on) .
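The interleaved adding order mentioned above can be sketched as a simple two-pointer traversal; the function name is illustrative and the entries stand in for stored cross-component models.

```python
def interleaved_order(table):
    """Visit history-table entries alternately from the front and the back.

    First the beginning of the table, then the end, then the second entry,
    the second-to-last entry, and so on inward.
    """
    order, lo, hi = [], 0, len(table) - 1
    while lo <= hi:
        order.append(table[lo])
        if lo != hi:
            order.append(table[hi])
        lo, hi = lo + 1, hi - 1
    return order

print(interleaved_order(["m0", "m1", "m2", "m3", "m4"]))
# ['m0', 'm4', 'm1', 'm3', 'm2']
```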
In one embodiment, a single cross-component model history table can be maintained for storing the previous cross-component models, and the cross-component model history table can be reset at the start of the current picture, current slice, current tile, every M CTU rows or every N CTUs, where N and M can be any value greater than 0. In another embodiment, the cross-component model history table can be reset at the end of the current picture, current slice, current tile, current CTU row or current CTU.
In another embodiment, multiple cross-component model history tables can be maintained for storing the previous cross-component model. One picture can be divided into several regions, and for each region, a history table is kept. The size of region is pre-defined, and it can be X by Y CTUs where X and Y can be any value greater than 0. If there are a total of N regions in one picture, a total of N history tables is used here, denoted as history table 1 to history table N. There can be another history table for storing all the previous cross-component model, which is denoted as history table 0 here. In one embodiment, the history table 0 will always be updated during the encoding/decoding process. When the end of the divided region is reached, the history table of this divided region will be updated by the history table 0.
Fig. 14 shows an example of multiple history tables for storing the cross-component models, where each grid cell represents a CTU and the size of a region is 4 by 1 CTUs. The number in each CTU indicates the history table where the model parameters associated with the CTU are stored. The dark block 1410 indicates the current CTU. The small square at the end of each region indicates where the table is updated.
In another embodiment, one picture can be divided into multiple regions, and for each region, a history table is kept. The history table 0 and one additional history table will be updated  during the encoding/decoding process. The additional history table can be determined by the current position. For example, if the current CU is located in the second region, the additional history table to be updated is history table 2.
In another embodiment, multiple history tables are used for different updated frequencies. For example, the first history table is updated every CU, the second history table is updated every two CUs, the third history table is updated every four CUs and so on.
In another embodiment, multiple history tables are used for storing different types of cross-component models. For example, the first history table is used for storing single model, and the second history table is used for storing multi-model. For another example, the first history table is used for storing a gradient model, and the second history table is used for storing a non-gradient model. For another example, the first history table is used for storing a simple linear model (e.g., y = ax + b) , and the second history table is used for storing a complicated model (e.g., CCCM) .
In another embodiment, multiple history tables are used for different reconstructed luma intensities. For example, if the average of the reconstructed luma samples in the current block is greater than a pre-defined threshold, the cross-component model will be stored in the first history table; otherwise, the cross-component model will be stored in the second history table. In another embodiment, multiple history tables are used for different reconstructed chroma intensities. For example, if the average of the neighbouring reconstructed chroma samples of the current block is greater than a pre-defined threshold, the cross-component model will be stored in the first history table; otherwise, the cross-component model will be stored in the second history table.
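The intensity-based table selection can be sketched as below; a minimal illustration where tables are plain lists and the threshold is a free parameter, with hypothetical names throughout.

```python
def select_history_table(tables, luma_samples, threshold):
    """Pick the history table by average reconstructed luma intensity.

    Blocks whose average luma exceeds the threshold store their model in the
    first table; all other blocks use the second table.
    """
    avg = sum(luma_samples) // len(luma_samples)
    return tables[0] if avg > threshold else tables[1]

bright, dark = [], []
select_history_table((bright, dark), [200, 210, 190], 128).append("modelA")
select_history_table((bright, dark), [40, 60, 50], 128).append("modelB")
print(bright, dark)  # ['modelA'] ['modelB']
```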
In one embodiment, when adding historical candidates from multiple history tables to the candidate list, the adding order can be from the beginning of the certain table to the end of the certain table, and then add the next history table in the same order or in a reversed order. In another embodiment, the adding order can be from the end of the certain table to the beginning of the certain table, and then add the next history table in the same order or in a reversed order. In another embodiment, the adding order can be from the certain pre-defined position of the certain table to the end of the certain table, and then add the next history table in the same order or in a reversed order. In another embodiment, the adding order can be from the certain pre-defined position of the certain table to the beginning of the certain table, and then add the next history table in the same order or in a reversed order. In another embodiment, the adding order of historical candidates can be in an interleaved manner in a certain history table (e.g., the first added candidate from the beginning of the certain history table, the second added candidate from the end of the certain history table and so on) , and then add the next history table in the same order or in a reversed order.
In another embodiment, the adding order can be from the beginning of each history table to the end of each history table. In another embodiment, the adding order can be from the end of each history table to the beginning of each history table. In another embodiment, the adding order  can be from the certain pre-defined position of each history table to the end of each history table. In another embodiment, the adding order can be from the certain pre-defined position of each history table to the beginning of each history table. In another embodiment, the adding order of historical candidates can be in an interleaved manner in each certain history table (e.g., the first added candidates from the beginning of all history table, the second added candidates from the end of all history table and so on) .
In one embodiment, multiple cross-component model history tables are used, but not all history tables will be used for creating the candidate list. Only history tables whose regions are close to the region of current block can be used to create the candidate list.
In one embodiment, if the historical candidates are used, the range for selecting non-adjacent candidates can be reduced by using a smaller distance between each position of the non-adjacent candidates. In another embodiment, if the historical candidates are used, the number of non-adjacent candidates can be reduced by measuring the distance from the top-left position of the current block to each candidate position, and then excluding the candidates whose distance is greater than a pre-defined threshold. In another embodiment, if the historical candidates are used, the number of non-adjacent candidates can be reduced by skipping the candidates that are not located in the same region. In another embodiment, if the historical candidates are used, the number of non-adjacent candidates can be reduced by skipping the candidates that are not located in the neighbouring regions. The range of neighbouring regions is pre-defined, and it can be M by N regions, where M and N can be any value greater than 0. In another embodiment, if the historical candidates are used, the range for selecting non-adjacent candidates can be reduced by skipping the second search pattern.
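The distance-based pruning of non-adjacent candidates can be sketched as follows. The Chebyshev (max-axis) metric and the names are assumptions; the actual distance measure and threshold are codec design choices.

```python
def prune_non_adjacent(candidates, block_xy, max_dist):
    """Drop non-adjacent candidate positions too far from the block's
    top-left corner, keeping only those within max_dist."""
    bx, by = block_xy
    return [(x, y) for (x, y) in candidates
            if max(abs(x - bx), abs(y - by)) <= max_dist]

print(prune_non_adjacent([(0, 0), (-64, 8), (8, -200)], (16, 16), 64))
# [(0, 0)]
```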
In another embodiment, one picture can be divided into multiple regions, and at least one history table is kept in each region. For a region of the current picture, it can use or combine the history tables of one or multiple regions in the previously coded pictures as the initial history table. For example, if a picture is divided into N regions, it can implicitly or explicitly select the history table from one of the N regions in the previously coded pictures as the initial history table. The index of one of the N regions can be signalled or implicitly derived from the corresponding region in the previously coded pictures. As shown in Figs. 15A-B, the current picture 1520 is a P/B coded picture and the previous picture 1510 is an intra coded picture. Each picture is divided into 4 regions, as shown by the 4 rectangular boxes. According to an embodiment of the present invention, the corresponding region in the previously coded pictures can be the region 1512 having the same beginning geometric position as the current region 1522 as shown in Fig. 15A, or containing the centre geometric position of the current region 1522 as shown in Fig. 15B. For another example, it can combine more than one history table in the previously coded regions/pictures to construct the history table of the current region (e.g., the method in the section entitled: Inheriting Candidates from the Candidates in the Candidate List of Neighbours) .
Models Generated Based on Other Inherited Models
In another embodiment, a single cross-component model can be generated from a multiple cross-component model. For example, if a candidate is coded with multiple cross-component models (e.g., MMLM, or CCCM with multi-model) , a single cross-component model can be generated by selecting the first or the second cross-component model among the multiple cross-component models.
Candidate List Construction
In one embodiment, the candidate list is constructed by adding candidates in a pre-defined order until the maximum candidate number is reached. The candidates added may include all or some of the aforementioned candidates, but not limited to the aforementioned candidates. For example, the candidate list may include spatial neighbouring candidates, temporal neighbouring candidate, historical candidates, non-adjacent neighbouring candidates, single model candidates generated based on other inherited models or combined model (as mentioned later in section entitled: Inheriting Multiple Cross-Component Models) . For another example, the candidate list can include the same candidates as previous example, but the candidates are added into the list in a different order.
In another embodiment, if all the pre-defined neighbouring and historical candidates are added but the maximum candidate number is not reached, some default candidates are added into the candidate list until the maximum candidate number is reached.
In one sub-embodiment, the default candidates include but are not limited to the candidates described below. The final scaling parameter α is from the set {0, +1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8} , and the offset parameter β=1/ (1<<bit_depth) or is derived based on neighbouring luma and chroma samples. For example, if the average values of neighbouring luma and chroma samples are lumaAvg and chromaAvg, then β is derived by β=chromaAvg-α·lumaAvg. The average value of neighbouring luma samples (lumaAvg) can be calculated by all selected luma samples, the luma DC mode value of the current luma CB, or the average of the maximum and minimum luma samples (e.g., (lumaMax+lumaMin) >>1 or (lumaMax+lumaMin+1) >>1) . Similarly, the average value of neighbouring chroma samples (chromaAvg) can be calculated by all selected chroma samples, the chroma DC mode value of the current chroma CB, or the average of the maximum and minimum chroma samples (e.g., (chromaMax+chromaMin) >>1 or (chromaMax+chromaMin+1) >>1) .
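Building the default candidates from the fixed scaling set can be sketched as below. The set {0, ±1/8, …, ±4/8} is represented as integers with a shift of 3, and β = chromaAvg − α·lumaAvg for each entry; names and the integer-averaging details are illustrative assumptions.

```python
def default_candidates(luma_nb, chroma_nb, shift=3):
    """Build default (alpha, beta) candidate pairs from the fixed scaling set.

    alpha values 0, +-1, ..., +-4 with shift 3 correspond to the fractions
    0, +-1/8, ..., +-4/8; beta anchors each line at the neighbour averages.
    """
    luma_avg = sum(luma_nb) // len(luma_nb)
    chroma_avg = sum(chroma_nb) // len(chroma_nb)
    alphas = [0, 1, -1, 2, -2, 3, -3, 4, -4]
    return [(a, chroma_avg - ((a * luma_avg) >> shift)) for a in alphas]

# With lumaAvg = 128 and chromaAvg = 64, the alpha = 0 entry is (0, 64).
print(default_candidates([128, 128], [64, 64])[0])  # (0, 64)
```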
In another sub-embodiment, the default candidates include but are not limited to the candidates described below. The default candidates are α·G+β, where G is the luma sample gradient instead of the down-sampled luma samples L. The 16 GLM filters described in the section entitled Gradient Linear Model (GLM) are applied. The final scaling parameter α is from the set {0, +1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8} . The offset parameter β=1/ (1<<bit_depth) or is derived based on neighbouring luma and chroma samples.
In another embodiment, a default candidate can be an earlier candidate with a delta scaling parameter refinement. For example, if the scaling parameter of an earlier candidate is α, the scaling parameter of a default candidate is (α+Δα) , where Δα can be from the set {1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8} . The offset parameter of a default candidate will be derived by (α+Δα) and the average value of neighbouring luma and chroma samples of the current block.
In another embodiment, a default candidate can be a shortcut to indicate a cross-component mode (i.e., using the current neighbouring luma/chroma reconstruction samples to derive cross-component models) rather than inheriting parameters from neighbours. For example, a default candidate can be CCLM_LA, CCLM_L, CCLM_A, MMLM_LA, MMLM_L, MMLM_A, single-model CCCM, multiple-model CCCM or cross-component model with a specified GLM pattern.
In another embodiment, a default candidate can be a cross-component mode (i.e., using the current neighbouring luma/chroma reconstruction samples to derive cross-component models) rather than inheriting parameters from neighbours, and also with a scaling parameter update (Δα) . Then, the scaling parameter of a default candidate is (α+Δα) . For example, a default candidate can be CCLM_LA, CCLM_L, CCLM_A, MMLM_LA, MMLM_L, or MMLM_A. For another example, Δα can be from the set {1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8} . The offset parameter of a default candidate will be derived by (α+Δα) and the average values of neighbouring luma and chroma samples of the current block. For still another example, Δα can be different for each colour component.
In another embodiment, a default candidate can be an earlier candidate with partial selected model parameters. For example, suppose an earlier candidate has m parameters, it can choose k out of m parameters from an earlier candidate to be a default candidate, where 0 < k < m and m > 1.
In another embodiment, a default candidate can be the first model of an earlier MMLM candidate (i.e., the model used when the sample value is less than or equal to a classification threshold) . In still another embodiment, a default candidate can be the second model of an earlier MMLM candidate (i.e., the model used when the sample value is greater than a classification threshold) . In still another embodiment, a default candidate can be the combination of the two models of an earlier MMLM candidate. For example, if the models of an earlier MMLM candidate are {p_x^1} and {p_x^2} , the model parameters of a default candidate can be p_x = α·p_x^1 + (1−α) ·p_x^2, where α is a weighting factor which can be predefined or implicitly derived based on a neighbouring template cost, and p_x^y is the x-th parameter of the y-th model.
When constructing a candidate list, candidates are inserted into the list according to a pre-defined order. For example, the pre-defined order can be spatial adjacent candidates, temporal  candidates, spatial non-adjacent candidates, historical candidates, and then default candidates. In one embodiment, if cross-component models are derived for non-LM coded blocks (e.g., as mentioned in the section entitled: Inherit Neighbouring Model Parameters for Refining the Cross-Component Model Parameters) , the candidate models of non-LM coded blocks are included into the list after including candidate models of LM coded blocks. In another embodiment, if cross-component models are derived for non-LM coded blocks, the candidate models of non-LM coded blocks are included into the list before including default candidates. In still another embodiment, if cross-component models are derived for non-LM coded blocks, the candidate models of non-LM coded blocks have lower priority to be included into the list than candidate models from LM coded blocks.
Removing or Modifying Similar Neighbouring Model Parameters
When inheriting cross-component model parameters from other blocks, it can further check the similarity between the inherited model and the existing models in the candidate list, or those model candidates derived from the neighbouring reconstructed samples of the current block (e.g., models derived by CCLM, MMLM, or CCCM using the neighbouring reconstructed samples of the current block) . If the model of a candidate parameter is similar to the existing models, the model will not be included in the candidate list. In one embodiment, it can compare the similarity of (α×lumaAvg+β) or α among the existing candidates to decide whether or not to include the model of a candidate. For example, if the (α×lumaAvg+β) or α of the candidate is the same as that of one of the existing candidates, the model of the candidate is not included. For another example, if the difference of (α×lumaAvg+β) or α between the candidate and one of the existing candidates is less than a threshold, the model of the candidate is not included. Besides, the threshold can be adaptive based on coding information (e.g., the current block size or area) . For another example, when comparing the similarity, if a model from a candidate and the existing model both use CCCM, it can compare similarity by checking the value of (c0C + c1N + c2S + c3E + c4W + c5P + c6B) to decide whether or not to include the model of a candidate. In another embodiment, if a candidate position points to a CU that is the same as that of one of the existing candidates, the model of the candidate parameter is not included. In still another embodiment, if the model of a candidate is similar to one of the existing candidate models, it can adjust the inherited model parameters so that the inherited model is different from the existing candidate models.
For example, if the inherited scaling parameter is similar to that of one of the existing candidate models, a predefined offset (e.g., 1>>S or - (1>>S) , where S is the shift parameter) can be added to the inherited scaling parameter so that the inherited parameter is different from the existing candidate models.
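The similarity check described above, using the (α×lumaAvg+β) comparison against a threshold, can be sketched as follows; the dictionary-based model representation and the floating-point arithmetic are illustrative assumptions (a codec would use fixed-point values).

```python
# Sketch of the pruning rule: an incoming candidate is redundant if its
# predicted value alpha * lumaAvg + beta is within a threshold of the
# predicted value of any existing candidate.
def is_redundant(cand, existing, luma_avg, threshold):
    val = cand["alpha"] * luma_avg + cand["beta"]
    for e in existing:
        if abs(val - (e["alpha"] * luma_avg + e["beta"])) < threshold:
            return True  # too similar: do not include this candidate
    return False
```

The threshold argument can be made adaptive to coding information such as the current block size or area, as the text notes.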
In another embodiment, a default candidate can be a shortcut to indicate a cross-component mode (i.e., using the current neighbouring luma/chroma reconstructed samples to derive cross-component models) rather than inheriting parameters from neighbours. For example, a default candidate can be CCLM_LA, CCLM_L, CCLM_A, MMLM_LA, MMLM_L, MMLM_A, single-model CCCM, multi-model CCCM, or a cross-component model with a specified GLM pattern.
In another embodiment, a default candidate can be a cross-component mode (i.e., using the current neighbouring luma/chroma reconstructed samples to derive cross-component models) rather than inheriting parameters from neighbours, and also with a scaling parameter update (Δα) . The scaling parameter of a default candidate is then (α+Δα) . For example, a default candidate can be CCLM_LA, CCLM_L, CCLM_A, MMLM_LA, MMLM_L, or MMLM_A. For another example, Δα can be in the set {+1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8} . The offset parameter of a default candidate will be derived from (α+Δα) and the average values of the neighbouring luma and chroma samples of the current block. For still another example, Δα can be different for each colour component.
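A minimal sketch of deriving a default candidate with a scaling parameter update is given below; floating-point arithmetic is used for clarity (a codec implementation would use fixed-point values with a shift parameter), and the exact offset derivation is an assumption consistent with mapping the average neighbouring luma value onto the average neighbouring chroma value.

```python
# Sketch: the default candidate's scaling parameter is alpha + delta_alpha;
# its offset is re-derived from the updated scaling parameter and the
# averages of the neighbouring luma and chroma samples.
def default_candidate_with_delta(alpha, delta_alpha, avg_luma, avg_chroma):
    new_alpha = alpha + delta_alpha
    new_beta = avg_chroma - new_alpha * avg_luma
    return new_alpha, new_beta
```

For instance, with α = 0.5 and Δα = +1/8, the updated scaling parameter is 0.625, and the offset follows from the neighbouring sample averages.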
In another embodiment, a default candidate can be an earlier candidate with partially selected model parameters. For example, if an earlier candidate has m parameters, k out of the m parameters can be chosen from the earlier candidate to form a default candidate, where 0 < k < m and m > 1.
In another embodiment, a default candidate can be the first model of an earlier MMLM candidate (i.e., the model used when the sample value is less than or equal to the classification threshold) . In still another embodiment, a default candidate can be the second model of an earlier MMLM candidate (i.e., the model used when the sample value is greater than the classification threshold) . In still another embodiment, a default candidate can be the combination of the two models of an earlier MMLM candidate. For example, if the parameters of the two models of an earlier MMLM candidate are {p_x^(1)} and {p_x^(2)}, the model parameters of a default candidate can be p_x = α·p_x^(1) + (1−α) ·p_x^(2), where α is a weighting factor which can be predefined or implicitly derived by a neighbouring template cost, and p_x^(y) is the x-th parameter of the y-th model.
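As an illustrative sketch of combining the two models of an earlier MMLM candidate, the parameter-wise blend p_x = w·p_x^(1) + (1−w)·p_x^(2) can be written as below; the symbol w denotes the weighting factor (called α in the text), and equal-length parameter lists are an assumption.

```python
# Sketch: blend the x-th parameters of two MMLM models with weight w,
# where w can be predefined or derived from a neighbouring-template cost.
def blend_mmlm_models(model1, model2, w):
    return [w * p1 + (1.0 - w) * p2 for p1, p2 in zip(model1, model2)]
```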
Reordering the Candidates in the List
The candidates in the list can be reordered to reduce the syntax overhead when signalling the selected candidate index. The reordering rules can depend on the coding information of neighbouring blocks or the model error. For example, if neighbouring above or left blocks are coded by MMLM, the MMLM candidates in the list can be moved to the head of the current list. Similarly, if neighbouring above or left blocks are coded by single model LM or CCCM, the single model LM or CCCM candidates in the list can be moved to the head of the current list. Similarly, if GLM is used by neighbouring above or left blocks, the GLM related candidates in the list can be moved to the head of the current list.
In still another embodiment, the reordering rule is based on the model error obtained by applying the candidate model to the neighbouring templates of the current block and then comparing the result with the reconstructed samples of the neighbouring template.
In still another embodiment, the candidates of different types are reordered separately before the candidates are added into the final candidate list. For each type of candidate, the candidates are added into a primary candidate list with a pre-defined size N1. The candidates in the primary list are reordered. The N2 candidates with the smallest costs are then added into the final candidate list, where N2≤N1. In another embodiment, the candidates are categorized into different types based on the source of the candidates, including but not limited to the spatial neighbouring models, temporal neighbouring models, non-adjacent spatial neighbouring models, and the historical candidates. In another embodiment, the candidates are categorized into different types based on the cross-component model mode. For example, the types can be CCLM, MMLM, CCCM, and multi-model CCCM. For another example, the types can be GLM-inactive or GLM-active.
In still another embodiment, after the candidates are reordered based on the template cost, the redundancy of the candidates can be further checked. A candidate is considered redundant if the template cost difference between it and its predecessor in the list is smaller than a threshold. If a candidate is considered redundant, it can be removed from the list, or it can be moved to the end of the list.
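The template-cost reordering and the redundancy check above can be sketched as follows; the cost function is treated as a black box, and removal (rather than moving the redundant candidate to the tail) is the variant chosen for this example.

```python
# Sketch: sort candidates by template cost (smaller first), then drop a
# candidate whose cost is within cost_threshold of its predecessor.
def reorder_and_prune(candidates, template_cost, cost_threshold):
    ordered = sorted(candidates, key=template_cost)
    pruned = []
    for cand in ordered:
        if pruned and abs(template_cost(cand) - template_cost(pruned[-1])) < cost_threshold:
            continue  # redundant: could alternatively be moved to the tail
        pruned.append(cand)
    return pruned
```

In practice the template cost would be the error between the candidate model's prediction on the neighbouring template and the template's reconstructed samples.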
Signalling the Inherited Candidate Index in the List
An on/off flag can be signalled to indicate whether the current block inherits the cross-component model parameters from neighbouring blocks or not. The flag can be signalled per CU/CB, per PU, per TU/TB, per colour component, or per chroma colour component. A high-level syntax element can be signalled in the SPS, PPS (Picture Parameter Set) , PH (Picture Header) , or SH (Slice Header) to indicate whether the proposed method is allowed for the current sequence, picture, or slice.
If the current block inherits the cross-component model parameters from neighbouring blocks, the inherited candidate index is signalled. The index can be signalled (e.g., using a truncated unary code, Exp-Golomb code, or fixed-length code) and shared by both the current Cb and Cr blocks. For another example, the index can be signalled per colour component; for example, one inherited index is signalled for the Cb component, and another inherited index is signalled for the Cr component. For another example, the chroma intra prediction syntax (e.g., IntraPredModeC [xCb] [yCb] ) can be used to store the inherited index.
If the current block inherits the cross-component model parameters from neighbouring blocks, the current chroma intra prediction mode (e.g., IntraPredModeC [xCb] [yCb] as defined in the VVC standard) is temporarily set to a cross-component mode (e.g., CCLM_LA) at the bitstream syntax parsing stage. Later, at the prediction stage or reconstruction stage, the candidate list is derived, and the inherited candidate model is then determined by the inherited candidate index. After obtaining the inherited model, the coding information of the current block is updated according to the inherited candidate model. The coding information of the current block includes, but is not limited to, the prediction mode (e.g., CCLM_LA or MMLM_LA) , related sub-mode flags (e.g., the CCCM mode flag) , the prediction pattern (e.g., the GLM pattern index) , and the current model parameters. Then, the prediction of the current block is generated according to the updated coding information.
Region-wise Cross-Component Model Merge Method
According to this method, a current block is partitioned into two or more prediction regions/subblocks, where each region can be predicted by an inter or intra coding tool. Furthermore, at least one of the prediction regions is coded by CC merge mode, where the cross-component model of the at least one region is inherited from a spatial, historical, or temporal neighbouring block/position. In one embodiment, the current block is partitioned by a quad-tree, binary-tree, or ternary-tree split. The split can be symmetric or asymmetric.
In another embodiment, the current block is partitioned into two regions, where one of the two regions is predicted by an inter or intra coding tool, and the other is predicted by CC merge mode. The inherited candidate index of the region predicted by CC merge mode can be explicitly or implicitly indicated. For example, the candidate index can be explicitly signalled by the method in the section entitled: Signalling the Inherited Candidate Index in the List. For another example, the first candidate in the list can be implicitly selected as the candidate index. The candidates in the list can be reordered by the method mentioned in the section entitled: Reordering the Candidates in the List.
In still another embodiment, the current block is partitioned into two regions, both regions are predicted by CC merge mode, and the first two candidates in the list are the candidate indexes of the two regions. It can implicitly set the candidate index of the first region (e.g., the region containing the top-left sample of the current block) to the first candidate and set the candidate index of the second region to the second candidate. Besides, the list can be reordered by the method mentioned in the section entitled: Reordering the Candidates in the List. For another example, if the two regions are both predicted by CC merge mode, an index is explicitly signalled to indicate the candidate index of the first region, and the candidate index of the second region is the signalled index + k or the signalled index - k, where k can be 1, 2, 3, 4, or 5. For still another example, the candidate model of the first region is implicitly derived from the stored cross-component model at the top-left position of the current block relative to the top-left position in the previously coded slices/pictures, as in the method mentioned in the section entitled: Inheriting Temporal Neighbouring Model Parameters. In another embodiment, if the two regions are both predicted by CC merge mode, the first candidate in the list is the candidate index of both regions.
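A sketch of the candidate index assignment for the two-region case is given below; clipping out-of-range indexes to the list size is an assumption made for the example, since the disclosure does not fix that policy.

```python
# Sketch: when both regions use CC merge mode, either the first two
# candidates are used implicitly, or the first region's index is signalled
# and the second region uses signalled index + k.
def region_candidate_indexes(cand_list, signalled_idx=None, k=1):
    if signalled_idx is None:
        # implicit: first region -> candidate 0, second region -> candidate 1
        return 0, min(1, len(cand_list) - 1)
    # explicit: second region's index is offset from the signalled index
    return signalled_idx, min(signalled_idx + k, len(cand_list) - 1)
```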
The candidate list with candidates restricted from one or more specific cross-component mode types as described above can be implemented in an encoder side or a decoder side. For example, any of the proposed candidate derivation methods can be implemented in an Intra/Inter coding module (e.g. Intra Pred. 150/MC 152 in Fig. 1B) in a decoder or an Intra/Inter coding module in an encoder (e.g. Intra Pred. 110/Inter Pred. 112 in Fig. 1A) . Any of the proposed shared buffers to store coding information among multiple coding tools, including the CCM mode, can also be implemented as a circuit coupled to the intra/inter coding module at the decoder or the encoder. However, the decoder or encoder may also use additional processing units to implement the required cross-component prediction processing. While the Intra Pred. units (e.g. unit 110/112 in Fig. 1A and unit 150/152 in Fig. 1B) are shown as individual processing units, they may correspond to executable software or firmware codes stored on a media, such as a hard disk or flash memory, for a CPU (Central Processing Unit) or programmable devices (e.g. DSP (Digital Signal Processor) or FPGA (Field Programmable Gate Array) ) .
Fig. 16 illustrates a flowchart of an exemplary video coding system that uses region-wise cross-component merge mode according to an embodiment of the present invention. The steps shown in the flowchart may be implemented as program codes executable on one or more processors (e.g., one or more CPUs) at the encoder side. The steps shown in the flowchart may also be implemented based on hardware, such as one or more electronic devices or processors arranged to perform the steps in the flowchart. According to the method, input data associated with a current block comprising a first-colour block and a second-colour block are received in step 1610, wherein the input data comprise pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side, and wherein the current block is partitioned into two or more subblocks. A target cross-component model for at least one of said two or more subblocks is derived in step 1620, wherein the target cross-component model is inherited from a spatial, a temporal or a historical neighbouring block or position. A candidate list comprising the target cross-component model is derived in step 1630. Said at least one of said two or more subblocks is encoded or decoded using information comprising the candidate list in step 1640, wherein when a target cross-component candidate corresponding to the target cross-component model is selected for said at least one of said two or more subblocks, a predictor is generated for a second-colour subblock of said at least one of said two or more subblocks by applying the target cross-component model to a first-colour subblock of said at least one of said two or more subblocks.
Fig. 17 illustrates a flowchart of an exemplary video coding system that uses multiple history tables to derive a cross-component candidate list according to an embodiment of the present invention. According to the method, input data associated with a current block comprising a first-colour block and a second-colour block are received in step 1710, wherein the input data comprise pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side, and the current picture is partitioned into multiple regions. Multiple history tables associated with the multiple regions are determined in step 1720, wherein at least two history tables of the multiple history tables are kept for two different regions. A candidate list comprising one or more cross-component historical candidates is derived from the multiple history tables in step 1730. The current block is encoded or decoded using information comprising the candidate list in step 1740, wherein when a target cross-component historical candidate from the candidate list is selected for the current block, a predictor is generated for the second-colour block by applying a target cross-component model associated with the target cross-component historical candidate to the first-colour block.
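A minimal sketch of region-wise history tables is given below, assuming a move-to-front update and a fixed table size; both are illustrative choices not mandated by the disclosure, which only requires that at least two tables be kept for two different regions.

```python
# Sketch: one history table (a list of cross-component models, most recent
# first) per picture region.
class RegionHistoryTables:
    def __init__(self, num_regions, max_entries=4):
        self.tables = [[] for _ in range(num_regions)]
        self.max_entries = max_entries

    def push(self, region, model):
        table = self.tables[region]
        if model in table:
            table.remove(model)        # move-to-front style update
        table.insert(0, model)
        del table[self.max_entries:]   # keep the table bounded

    def candidates(self, region):
        # Historical candidates for a block come from its own region's
        # table; a variant could also draw from other regions' tables.
        return list(self.tables[region])
```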
The flowcharts shown are intended to illustrate an example of video coding according to the present invention. A person skilled in the art may modify each step, re-arrange the steps, split a step, or combine steps to practice the present invention without departing from the spirit of the present invention. In this disclosure, specific syntax and semantics have been used to illustrate examples to implement embodiments of the present invention. A skilled person may practice the present invention by substituting the syntax and semantics with equivalent syntax and semantics without departing from the spirit of the present invention.
The above description is presented to enable a person of ordinary skill in the art to practice the present invention as provided in the context of a particular application and its requirements. Various modifications to the described embodiments will be apparent to those with skill in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed. In the above detailed description, various specific details are illustrated in order to provide a thorough understanding of the present invention. Nevertheless, it will be understood by those skilled in the art that the present invention may be practiced without these specific details.
Embodiments of the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be one or more circuits integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA) . These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is therefore indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (19)

  1. A method of coding colour pictures using coding tools including one or more cross component models related modes, the method comprising:
    receiving input data associated with a current block comprising a first-colour block and a second-colour block, wherein the input data comprises pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side, and wherein the current block is partitioned into two or more subblocks;
    deriving a target cross-component model for at least one of said two or more subblocks, wherein the target cross-component model is inherited from a spatial, a temporal or a historical neighbouring block or position;
    deriving a candidate list comprising the target cross-component model; and
    encoding or decoding said at least one of said two or more subblocks using information comprising the candidate list, wherein when a target cross-component candidate corresponding to the target cross-component model is selected for said at least one of said two or more subblocks, a predictor is generated for a second-colour subblock of said at least one of said two or more subblocks by applying the target cross-component model to a first-colour subblock of said at least one of said two or more subblocks.
  2. The method of Claim 1, wherein the candidate list is used for a cross-component merge mode.
  3. The method of Claim 2, wherein at least another subblock of said two or more subblocks is coded by an inter or intra coding tool.
  4. The method of Claim 1, wherein the current block is partitioned into said two or more subblocks symmetrically or asymmetrically.
  5. The method of Claim 1, wherein when the target cross-component model is inherited from the historical neighbouring block or position, a candidate index for the historical neighbouring block or position is explicitly signalled or parsed.
  6. The method of Claim 1, wherein when the target cross-component model is inherited from the historical neighbouring block or position, a candidate index for the historical neighbouring block or position is implicitly indicated.
  7. The method of Claim 1, wherein the current block is partitioned into two subblocks, both of the two subblocks are coded using a cross-component merge mode, and first two target cross-component candidates in the candidate list correspond to two candidate indexes of the two subblocks.
  8. The method of Claim 7, wherein the two candidate indexes are assigned to the two subblocks  implicitly.
  9. The method of Claim 7, wherein a first index of the two candidate indexes is signalled or parsed explicitly and a second index of the two candidate indexes is signalled or parsed depending on the first index.
  10. The method of Claim 1, wherein the current block is partitioned into two subblocks, both of the two subblocks are coded based on a cross-component merge mode, and a first target cross-component candidate in the candidate list is used by both of the two subblocks.
  11. The method of Claim 1, wherein said at least one of said two or more subblocks corresponds to a first subblock of said two or more subblocks, and the target cross-component model is implicitly derived from a stored cross-component model at a corresponding top-left position of the current block in previously coded slices/pictures.
  12. An apparatus for video coding, the apparatus comprising one or more electronics or processors arranged to:
    receive input data associated with a current block comprising a first-colour block and a second-colour block, wherein the input data comprises pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side, and wherein the current block is partitioned into two or more subblocks;
    derive a target cross-component model for at least one of said two or more subblocks, wherein the target cross-component model is inherited from a spatial, a temporal or a historical neighbouring block or position;
    derive a candidate list comprising the target cross-component model; and
    encode or decode said at least one of said two or more subblocks using information comprising the candidate list, wherein when a target cross-component candidate corresponding to the target cross-component model is selected for said at least one of said two or more subblocks, a predictor is generated for a second-colour subblock of said at least one of said two or more subblocks by applying the target cross-component model to a first-colour subblock of said at least one of said two or more subblocks.
  13. A method of coding colour pictures using coding tools including one or more cross component models related modes, the method comprising:
    receiving input data associated with a current block comprising a first-colour block and a second-colour block, wherein the input data comprises pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side, and wherein a current picture is partitioned into multiple regions;
    determining multiple history tables associated with the multiple regions, wherein at least two  history tables of the multiple history tables are kept for two different regions;
    deriving a candidate list comprising one or more cross-component historical candidates from the multiple history tables; and
    encoding or decoding the current block using information comprising the candidate list, wherein when a target cross-component historical candidate from the candidate list is selected for the current block, a predictor is generated for the second-colour block by applying a target cross-component model associated with the target cross-component historical candidate to the first-colour block.
  14. The method of Claim 13, wherein one of the multiple history tables is kept for each of the multiple regions.
  15. The method of Claim 13, wherein an initial history table is generated for a current region based on a target history table associated with a target region of the multiple history tables in previously coded pictures.
  16. The method of Claim 15, wherein an index is signalled or parsed to indicate the target region in the previously coded pictures.
  17. The method of Claim 15, wherein the target region in the previously coded pictures is determined implicitly.
  18. The method of Claim 13, wherein a current history table for a current region is generated using at least two of multiple history tables.
  19. An apparatus for video coding, the apparatus comprising one or more electronics or processors arranged to:
    receive input data associated with a current block comprising a first-colour block and a second-colour block, wherein the input data comprises pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side, and wherein a current picture is partitioned into multiple regions;
    determine multiple history tables associated with the multiple regions, wherein at least two history tables of the multiple history tables are kept for two different regions;
    derive a candidate list comprising one or more cross-component historical candidates from the multiple history tables; and
    encode or decode the current block using information comprising the candidate list, wherein when a target cross-component historical candidate from the candidate list is selected for the current block, a predictor is generated for the second-colour block by applying a target cross-component model associated with the target cross-component historical candidate to the first-colour block.
PCT/CN2024/071360 2023-01-09 2024-01-09 Methods and apparatus of region-wise cross-component model merge mode for video coding WO2024149247A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363479003P 2023-01-09 2023-01-09
US63/479003 2023-01-09

Publications (1)

Publication Number Publication Date
WO2024149247A1 true WO2024149247A1 (en) 2024-07-18


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110267036A (en) * 2019-03-11 2019-09-20 杭州海康威视数字技术股份有限公司 A kind of filtering method and equipment
EP3700201A1 (en) * 2019-02-21 2020-08-26 InterDigital VC Holdings, Inc. Separate coding trees for luma and chroma prediction
US20200288135A1 (en) * 2017-10-09 2020-09-10 Canon Kabushiki Kaisha New sample sets and new down-sampling schemes for linear component sample prediction
CN112235572A (en) * 2019-06-30 2021-01-15 腾讯美国有限责任公司 Video decoding method and apparatus, computer device, and storage medium
US20210029356A1 (en) * 2018-06-21 2021-01-28 Beijing Bytedance Network Technology Co., Ltd. Sub-block mv inheritance between color components
CN114793280A (en) * 2021-01-25 2022-07-26 脸萌有限公司 Method and apparatus for cross-component prediction



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24741244

Country of ref document: EP

Kind code of ref document: A1