WO2023024712A1 - Method and apparatus for joint coding of multi-colour components in a video coding system

Method and apparatus for joint coding of multi-colour components in a video coding system

Info

Publication number
WO2023024712A1
Authority
WO
WIPO (PCT)
Prior art keywords
colour
block
coding
residual block
joint
Application number
PCT/CN2022/103718
Other languages
English (en)
Inventor
Chen-Yen LAI
Tzu-Der Chuang
Ching-Yeh Chen
Chun-Chia Chen
Chih-Wei Hsu
Yu-Wen Huang
Original Assignee
Mediatek Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Mediatek Inc. filed Critical Mediatek Inc.
Priority to TW111131420A (patent TWI811070B)
Publication of WO2023024712A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: using adaptive coding
    • H04N19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/176: the region being a block, e.g. a macroblock
    • H04N19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136: Incoming video signal characteristics or properties
    • H04N19/137: Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/186: the unit being a colour or a chrominance component

Definitions

  • the present invention is a non-Provisional Application of and claims priority to U.S. Provisional Patent Application No. 63/237,555, filed on August 27, 2021.
  • the U.S. Provisional Patent Application is hereby incorporated by reference in its entirety.
  • the present invention relates to video coding system.
  • the present invention relates to efficient joint coding of multi-colour component in a video coding system.
  • VVC (Versatile Video Coding) was developed by the Joint Video Experts Team (JVET) of ITU-T and the ISO/IEC Moving Picture Experts Group (MPEG), and is specified in ISO/IEC 23090-3:2021, Information technology - Coded representation of immersive media - Part 3: Versatile video coding, published Feb. 2021.
  • VVC is developed based on its predecessor HEVC (High Efficiency Video Coding) by adding more coding tools to improve coding efficiency and also to handle various types of video sources including 3-dimensional (3D) video signals.
  • Fig. 1A illustrates an exemplary adaptive Inter/Intra video coding system incorporating loop processing.
  • Intra Prediction the prediction data is derived based on previously coded video data in the current picture.
  • Motion Estimation (ME) is performed at the encoder side and Motion Compensation (MC) is performed based on the result of ME to provide prediction data derived from other picture(s) and motion data.
  • Switch 114 selects Intra Prediction 110 or Inter-Prediction 112 and the selected prediction data is supplied to Adder 116 to form prediction errors, also called residues.
  • the prediction error is then processed by Transform (T) 118 followed by Quantization (Q) 120.
  • the transformed and quantized residues are then coded by Entropy Encoder 122 to be included in a video bitstream corresponding to the compressed video data.
  • the bitstream associated with the transform coefficients is then packed with side information such as motion and coding modes associated with Intra prediction and Inter prediction, and other information such as parameters associated with loop filters applied to underlying image area.
  • the side information associated with Intra Prediction 110, Inter prediction 112 and in-loop filter 130 is provided to Entropy Encoder 122 as shown in Fig. 1A. When an Inter-prediction mode is used, a reference picture or pictures have to be reconstructed at the encoder end as well.
  • the transformed and quantized residues are processed by Inverse Quantization (IQ) 124 and Inverse Transformation (IT) 126 to recover the residues.
  • the residues are then added back to prediction data 136 at Reconstruction (REC) 128 to reconstruct video data.
  • the reconstructed video data may be stored in Reference Picture Buffer 134 and used for prediction of other frames.
  • incoming video data undergoes a series of processing in the encoding system.
  • the reconstructed video data from REC 128 may be subject to various impairments due to a series of processing.
  • in-loop filter 130 is often applied to the reconstructed video data before the reconstructed video data are stored in the Reference Picture Buffer 134 in order to improve video quality.
  • deblocking filter (DF) may be used.
  • SAO Sample Adaptive Offset
  • ALF Adaptive Loop Filter
  • the loop filter information may need to be incorporated in the bitstream so that a decoder can properly recover the required information. Therefore, loop filter information is also provided to Entropy Encoder 122 for incorporation into the bitstream.
  • Loop filter 130 is applied to the reconstructed video before the reconstructed samples are stored in the reference picture buffer 134.
  • the system in Fig. 1A is intended to illustrate an exemplary structure of a typical video encoder. It may correspond to the High Efficiency Video Coding (HEVC) system, VP8, VP9, H.264 or VVC.
  • the decoder can use similar functional blocks, or a portion of the same functional blocks, as the encoder, except for Transform 118 and Quantization 120, since the decoder only needs Inverse Quantization 124 and Inverse Transform 126.
  • the decoder uses an entropy decoder 140 to decode the video bitstream into quantized transform coefficients and needed coding information (e.g., ILPF information, Intra prediction information and Inter prediction information) .
  • the Intra prediction 150 at the decoder side does not need to perform the mode search. Instead, the decoder only needs to generate Intra prediction according to Intra prediction information received from the Entropy Decoder 140.
  • for Inter prediction, the decoder only needs to perform motion compensation (MC 152) according to Inter prediction information received from the Entropy Decoder 140, without the need for motion estimation.
  • an input picture is partitioned into non-overlapped square block regions referred to as CTUs (Coding Tree Units), similar to HEVC.
  • Each CTU can be partitioned into one or multiple smaller size coding units (CUs) .
  • the resulting CU partitions can be in square or rectangular shapes.
  • VVC divides a CTU into prediction units (PUs) as the unit to apply the prediction process, such as Inter prediction and Intra prediction.
  • the VVC standard incorporates various new coding tools to further improve the coding efficiency over the HEVC standard.
  • Among various new coding tools, some have been adopted by the standard and some have not.
  • A technique named Joint Coding of Chroma Residuals (JCCR) has been disclosed.
  • JCCR is briefly reviewed as follows.
  • VVC (Adrian Browne, et al., “Algorithm description for Versatile Video Coding and Test Model 14 (VTM 14)”, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 23rd Meeting, by teleconference, 7–16 July 2021, Document: W2002) supports the joint coding of chroma residual (JCCR) tool (section 3.5.7 of JVET-W2002), where the chroma residuals for colour components are coded jointly.
  • the usage (i.e., activation) of the JCCR mode is indicated by a TU-level flag tu_joint_cbcr_residual_flag and the selected mode is implicitly indicated by the chroma CBFs (i.e., coded block flags) .
  • the flag tu_joint_cbcr_residual_flag is present if either or both chroma CBFs for a TU are equal to 1.
  • chroma QP (quantization parameter) offset values are signalled for the JCCR mode to differentiate from the usual chroma QP offset values signalled for the regular chroma residual coding mode.
  • chroma QP offset values are used to derive the chroma QP values for some blocks coded by the JCCR mode.
  • the JCCR mode has 3 sub-modes.
  • For the corresponding JCCR sub-mode (i.e., sub-mode 2 in Table 3-11), this chroma QP offset is added to the applied luma-derived chroma QP during quantization and decoding of that TU.
  • For the other sub-modes, the chroma QPs are derived in the same way as for conventional Cb or Cr blocks.
  • the reconstruction process of the chroma residuals (resCb and resCr) from the transmitted transform blocks is depicted in Table 3-11 of JVET-W2002.
  • When the JCCR mode is activated, one single joint chroma residual block (resJointC[x][y] in Table 3-11) is signalled, and the residual block for Cb (resCb) and the residual block for Cr (resCr) are derived considering information such as tu_cbf_cb, tu_cbf_cr, and CSign, which is a sign value specified in the slice header.
  • resJointC{1, 2} are generated by the encoder according to the selected sub-mode, as specified in Table 3-11 of JVET-W2002.
  • The value CSign is a sign value (+1 or -1) specified in the slice header, and resJointC[][] is the transmitted residual.
  • the three joint chroma coding sub-modes described above in Table 3-11 are only supported in I slices. In P and B slices, only mode 2 is supported. Hence, in P and B slices, the syntax element tu_joint_cbcr_residual_flag is only present if both chroma cbfs are 1 (i.e., at least one non-zero coefficient existing in the block) .
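The decoder-side reconstruction of resCb and resCr from the single transmitted joint residual, as summarised in Table 3-11 of JVET-W2002, can be sketched as follows. This is an illustrative Python sketch, not the VVC reference software; the function and variable names are hypothetical, and the arithmetic right-shift is used for the halving as in the specification.

```python
def reconstruct_jccr(res_joint, tu_cbf_cb, tu_cbf_cr, c_sign):
    """Derive resCb and resCr from the transmitted joint residual (resJointC).

    res_joint : 2-D list of joint residual samples
    tu_cbf_cb, tu_cbf_cr : chroma coded block flags selecting the sub-mode
    c_sign : +1 or -1, the CSign value signalled in the slice header
    """
    res_cb = [[0] * len(row) for row in res_joint]
    res_cr = [[0] * len(row) for row in res_joint]
    for y, row in enumerate(res_joint):
        for x, c in enumerate(row):
            if tu_cbf_cb and not tu_cbf_cr:       # sub-mode 1
                res_cb[y][x] = c
                res_cr[y][x] = (c_sign * c) >> 1  # arithmetic shift halves
            elif tu_cbf_cb and tu_cbf_cr:         # sub-mode 2
                res_cb[y][x] = c
                res_cr[y][x] = c_sign * c
            else:                                 # sub-mode 3 (cbf_cb=0, cbf_cr=1)
                res_cb[y][x] = (c_sign * c) >> 1
                res_cr[y][x] = c
    return res_cb, res_cr
```

For sub-mode 2, the only sub-mode allowed in P and B slices, Cr is simply the sign-flipped (or identical) copy of the joint residual, which is why both chroma CBFs must be 1 for the flag to be present.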
  • the JCCR mode can be combined with the chroma transform skip (TS) mode (more details of the TS mode can be found in section 3.9.3 of JVET-W2002) .
  • the JCCR transform selection depends on whether the independent coding of Cb and Cr components selects the DCT-2 or the TS as the best transform, and whether there are non-zero coefficients in independent chroma coding. Specifically, if one chroma component selects DCT-2 (or TS) and the other component is all zero, or both chroma components select DCT-2 (or TS) , then only DCT-2 (or TS) will be considered in JCCR encoding. Otherwise, if one component selects DCT-2 and the other selects TS, then both DCT-2 and TS will be considered in JCCR encoding.
  • While the JCCR coding tool can improve coding efficiency, it requires signalling a flag (i.e., jccr_flag) for each block. In some circumstances, this overhead may even hurt the overall coding efficiency. Therefore, it is desirable to develop schemes that retain the coding efficiency under all circumstances.
  • a method and apparatus for video encoding and decoding systems that utilize joint coding of multi-colour components are disclosed.
  • input data comprising a multi-colour block of a video unit are received, where the multi-colour block comprises at least a first colour block corresponding to a first-colour component and a second colour block corresponding to a second-colour component.
  • a first residual block for the first colour block is determined by using a first predictor.
  • a second residual block for the second colour block is determined by using a second predictor.
  • Whether a target condition is satisfied is determined, where the target condition belongs to a group comprising a first condition related to a number of non-zero quantized coefficients of a joint residual block or a second condition related to a coding unit size for the multi-colour block; and the joint residual block is generated based on the first residual block and the second residual block.
  • If the target condition is satisfied, joint multi-colour coding is disabled by always encoding the first residual block and the second residual block individually; otherwise, said joint multi-colour coding is enabled by encoding the joint residual block instead of the first residual block and the second residual block individually, if a mode corresponding to said joint multi-colour coding is activated.
  • A video bitstream comprising coded data for the joint residual block, or for the first residual block and the second residual block individually, is then generated.
  • a video bitstream comprising coded data for a multi-colour block of a video unit is received, where the multi-colour block comprises at least a first colour block corresponding to a first-colour component and a second colour block corresponding to a second-colour component.
  • Whether a target condition is satisfied is determined, where the target condition belongs to a group comprising a first condition related to a number of non-zero quantized coefficients of a joint residual block decoded from the coded data or a second condition related to a coding unit size for the multi-colour block.
  • If the target condition is satisfied, a first residual block for the first colour block and a second residual block for the second colour block are derived individually based on the coded data; otherwise, both the first residual block and the second residual block are derived from the joint residual block if a mode corresponding to joint multi-colour coding is activated for the multi-colour block.
  • the target condition corresponds to the number of non-zero quantized coefficients being smaller than a threshold.
  • the threshold can be pre-defined or decoded from the video bitstream.
  • the threshold can be dependent on the CU size associated with the multi-colour block.
  • the first-colour component corresponds to a luma component and the second-colour component corresponds to one chroma component, or both the first-colour component and the second-colour component correspond to chroma components.
  • the group further comprises a third condition corresponding to dual luma-chroma trees being enabled. In yet another embodiment, the group further comprises a fourth condition corresponding to Cross-Component Linear Model prediction being enabled.
  • In one embodiment, the mode corresponding to the joint multi-colour coding has multiple sub-modes; when the target condition is not satisfied, a first sub-mode is determined implicitly based on boundary matching between candidates of the reconstructed multi-colour block and L-shaped neighbouring reconstructed pixels.
  • In one embodiment, different numbers of sub-modes for the mode are selected for Coding Tree Units (CTUs), slices, tiles or pictures according to flags decoded from the video bitstream.
  • Fig. 1A illustrates an exemplary adaptive Inter/Intra video coding system incorporating loop processing.
  • Fig. 1B illustrates a corresponding decoder for the encoder in Fig. 1A.
  • Fig. 2 illustrates a flowchart of an exemplary video decoding system that utilizes joint coding of multi-colour components according to an embodiment of the present invention.
  • Fig. 3 illustrates a flowchart of an exemplary video encoding system that utilizes joint coding of multi-colour components according to an embodiment of the present invention.
  • the basic flow for encoding a residual includes transform, quantization, inverse quantization and inverse transform as described in Fig. 1A and Fig. 1B.
  • VVC supports the joint coding of chroma residual (JCCR) tool, where the chroma residuals are coded jointly. It helps to encode chroma residuals more efficiently because the Cb and Cr colour components are often strongly correlated. However, the residuals among all colour components (including the luma component) often exhibit some correlation. In order to further improve the coding efficiency of JCCR, some improved methods are disclosed as follows.
  • The proposed technique jointly codes multiple colour components and is referred to as JCMC (Jointly Coding Multi-Colour Components).
  • residual block for Y (resY) , Cb (resCb) and Cr (resCr) are derived considering information signalled in the bitstream and the pre-defined rules.
  • Residual block Y, Cb and Cr can be derived by a pre-defined formula.
  • For example, residual block Y is equal to the joint residual, residual block Cb is equal to half of the joint residual, and residual block Cr is equal to minus half of the joint residual.
  • only one joint residual is encoded and decoded in the decoder.
  • Residual block Y, Cb and Cr can be derived by a pre-defined formula which is indicated by an index (jcmc_mode_idx) signalled in the bitstream. Different formulas correspond to different sub-modes and the index (jcmc_mode_idx) indicates a particular sub-mode being selected.
  • When jcmc_mode_idx is equal to 1: residual block Y is equal to the joint residual, residual block Cb is equal to half of the joint residual, and residual block Cr is equal to minus half of the joint residual.
  • When jcmc_mode_idx indicates another sub-mode: residual block Y is equal to the joint residual, residual block Cb is equal to minus half of the joint residual, and residual block Cr is equal to half of the joint residual.
  • a pre-defined formula is used. The parameters used in the formula can be signalled in the bitstream. In the following formula, A and B can be indicated by the information signalled in the bitstream.
  • the pre-defined formula can be any linear operations with or without non-linear operation, such as minimum, maximum, or clipping operations.
  • Residual block Y = joint residual block
  • Residual block Cb = Residual block Y * A
  • Residual block Cr = Residual block Y * B
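The single-joint-residual derivation with signalled weights can be sketched as follows. This is an illustrative Python sketch; the function name is hypothetical, and the weights a=0.5, b=-0.5 in the usage note reproduce the fixed-formula example given earlier (Cb = half of the joint residual, Cr = minus half).

```python
def derive_jcmc_one_joint(res_joint, a, b):
    """Derive resY, resCb and resCr from one joint residual block.

    a, b : the weights A and B indicated by information in the bitstream
    """
    res_y = [[c for c in row] for row in res_joint]       # resY = joint residual
    res_cb = [[c * a for c in row] for row in res_joint]  # resCb = resY * A
    res_cr = [[c * b for c in row] for row in res_joint]  # resCr = resY * B
    return res_y, res_cb, res_cr
```

Calling `derive_jcmc_one_joint(joint, 0.5, -0.5)` gives the earlier example sub-mode; non-linear operations such as clipping could be added inside the loops as the text allows.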
  • two joint residuals are encoded and decoded in the decoder. They are used to derive residual block Y, Cb and Cr by a pre-defined formula.
  • For example, residual block Y is equal to the average of joint residuals 1 and 2, residual block Cb is equal to half of joint residual 1 plus a quarter of joint residual 2, and residual block Cr is equal to minus half of joint residual 1 minus a quarter of joint residual 2.
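The two-joint-residual example can be sketched directly from those weights. This is an illustrative Python sketch with a hypothetical function name; it implements exactly the averaging/half/quarter combination described above.

```python
def derive_jcmc_two_joint(j1, j2):
    """Derive resY, resCb and resCr from two joint residual blocks.

    resY  = (j1 + j2) / 2        (average of the two joint residuals)
    resCb =  j1 / 2 + j2 / 4
    resCr = -j1 / 2 - j2 / 4
    """
    res_y  = [[(a + b) / 2 for a, b in zip(r1, r2)]
              for r1, r2 in zip(j1, j2)]
    res_cb = [[a / 2 + b / 4 for a, b in zip(r1, r2)]
              for r1, r2 in zip(j1, j2)]
    res_cr = [[-a / 2 - b / 4 for a, b in zip(r1, r2)]
              for r1, r2 in zip(j1, j2)]
    return res_y, res_cb, res_cr
```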
  • In another embodiment, two joint residuals are encoded and decoded in the decoder, and each sample of the residual blocks Y, Cb and Cr can be derived from converted joint residuals. One or more converted joint residuals are derived, and the residual blocks Y, Cb and Cr are generated from them.
  • For example, an inverse Hadamard transform is applied to the two-dimensional data [sample value of joint residual 1 at position (x, y), sample value of joint residual 2 at position (x, y)] to generate two converted joint residual samples [conv-value-1, conv-value-2] for the corresponding position (x, y). From these, the residual blocks Y, Cb, and Cr at position (x, y) can be derived.
  • Sample value of residual block Y at position (x, y) = conv-value-1;
  • sample value of residual block Cb at position (x, y) = -conv-value-1 + conv-value-2;
  • sample value of residual block Cr at position (x, y) = conv-value-1 - conv-value-2.
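The per-sample Hadamard-based conversion can be sketched as follows. This is an illustrative Python sketch with a hypothetical function name; the un-normalised 2-point Hadamard pair (sum and difference) is an assumption, since the text does not fix the scaling of the transform.

```python
def derive_jcmc_hadamard(j1, j2):
    """Derive resY, resCb and resCr from two joint residuals via a
    per-sample 2-point Hadamard conversion, then the example mapping
    resY = c1, resCb = -c1 + c2, resCr = c1 - c2."""
    res_y, res_cb, res_cr = [], [], []
    for r1, r2 in zip(j1, j2):
        ry, rcb, rcr = [], [], []
        for a, b in zip(r1, r2):
            c1, c2 = a + b, a - b   # 2-point Hadamard pair (sum, difference)
            ry.append(c1)           # conv-value-1
            rcb.append(-c1 + c2)    # -conv-value-1 + conv-value-2
            rcr.append(c1 - c2)     # conv-value-1 - conv-value-2
        res_y.append(ry)
        res_cb.append(rcb)
        res_cr.append(rcr)
    return res_y, res_cb, res_cr
```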
  • One or more joint residuals are tested in the encoder, and different pre-defined derivation formulas are tested. They can be treated as different prediction modes.
  • a set of syntax signalled in the bitstream can be used to indicate which mode and which pre-defined derivation formula are used. Based on the formula, residual blocks Y, Cb, and Cr can be derived.
  • An implicit selection between different modes or derivation formulas can be made according to the CU or slice information (e.g. coding mode), transform type, prediction direction, motion information, intra prediction mode, slice type, and so on.
  • The encoder does not need to send the JCMC sub-mode (for indicating a pre-defined formula) to the decoder; instead, the decoder can use boundary matching to implicitly select an appropriate one. For example, if there are 3 sub-modes, the decoder can use each sub-mode to derive the luma residual and add the residual back to a predictor to get a temporary reconstruction; the decoder can then compare the reconstruction with the L-neighbour (i.e., the left/top-neighbouring reconstructed pixels of the current CU), evaluate the boundary smoothness, and choose the sub-mode (from the 3 sub-modes) with the best boundary smoothness.
  • the boundary smoothness is evaluated for all candidates (i.e., all sub-modes) and the candidate that achieves the best boundary smoothness is chosen.
  • The boundary smoothness is a measure of smoothness across the block boundary. A properly reconstructed block is usually expected to show less discontinuity (i.e., be smoother) across the boundary.
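The implicit sub-mode selection by boundary matching can be sketched as follows. This is an illustrative Python sketch; the function name and callback interface are hypothetical, and the smoothness cost (sum of absolute differences across the top and left boundaries) is one assumed measure, since the patent leaves the exact metric open.

```python
def select_submode_by_boundary(predictor, joint_residual, submode_fns,
                               top_neighbours, left_neighbours):
    """Pick the JCMC sub-mode whose temporary reconstruction best matches
    the L-shaped neighbouring reconstructed pixels.

    submode_fns : list of functions mapping the joint residual to a
        candidate luma residual block (one function per sub-mode)
    """
    best_mode, best_cost = None, None
    for mode, derive_residual in enumerate(submode_fns):
        res = derive_residual(joint_residual)
        # Temporary reconstruction: predictor + candidate residual.
        recon = [[p + r for p, r in zip(prow, rrow)]
                 for prow, rrow in zip(predictor, res)]
        # Boundary cost: discontinuity against top and left neighbours.
        cost = sum(abs(recon[0][x] - top_neighbours[x])
                   for x in range(len(top_neighbours)))
        cost += sum(abs(recon[y][0] - left_neighbours[y])
                    for y in range(len(left_neighbours)))
        if best_cost is None or cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```

Because the decoder evaluates the same cost over already-reconstructed neighbours, the encoder and decoder reach the same choice with no signalled sub-mode index.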
  • In another embodiment, the decoder can choose the best sub-mode by implicitly comparing the temporary reconstruction (i.e., adding the residual of an assumed JCMC sub-mode to the predictor) with the reference picture (using the current MC) to compute the SAD/SSD; the best JCMC sub-mode is implicitly decided as the one with the smallest SAD/SSD.
  • JCMC sub-modes can be implicitly derived in the decoder by comparing the temporary reconstruction for a corresponding sub-TB (i.e., adding the residual of the current sub-TB to the corresponding predictor region) with the reference picture (or with the current L-neighbouring pixels for boundary smoothness). In this way, no syntax needs to be signalled for multiple JCMC sub-modes for different sub-TBs.
  • The encoder can decide which modes will be used for the current CTU/slice/tile/picture, and send a flag in the CTU/slice/tile/picture. For example, there are 4 predefined JCMC sub-modes in total, but for some CTUs only 1 is useful. The encoder can send flags to indicate those CTUs that use only 1 JCMC sub-mode, so that for each CU in such a CTU the JCMC mode flag can be saved since there is only one sub-mode available.
  • the JCMC mode is turned off when the number of non-zero coefficients in a residual block is small. Since the decoder can determine whether the number of non-zero coefficients in the residual block is small, the decoder can make the same decision as the encoder without the need of signalling a flag.
  • In another embodiment, based on the current predictor (i.e., the prediction without adding any residual), the JCMC mode can be implicitly turned off, or the number of modes can be implicitly reduced (e.g., reducing 4 modes implicitly to 2 modes for the CU), without signalling the JCMC flag, since signalling the JCMC flag may not be good for coding efficiency.
  • The threshold can be pre-defined or signalled in the video bitstream. Furthermore, the threshold can be CU-size dependent; for example, a larger threshold value may be used for a larger CU size.
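The implicit on/off rule based on the non-zero coefficient count can be sketched as follows. This is an illustrative Python sketch; the function name, the default threshold, and the particular CU-size scaling are assumptions, since the text only requires that the threshold be pre-defined or signalled and that a larger CU may use a larger value.

```python
def jcmc_enabled(quantized_coeffs, cu_width, cu_height, base_threshold=4):
    """Decide whether JCMC stays on for a residual block without a flag.

    Both encoder and decoder count the non-zero quantized coefficients,
    so they reach the same decision and no jcmc flag is signalled.
    """
    nnz = sum(1 for row in quantized_coeffs for c in row if c != 0)
    # Example CU-size-dependent threshold: larger CUs use a larger value.
    threshold = base_threshold * max(1, (cu_width * cu_height) // 256)
    return nnz >= threshold
```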
  • In some embodiments, the jointly coding multi-colour components (JCMC) technique is disabled when there is less correlation between the colour components, in which case JCMC will not be applied.
  • In one embodiment, JCMC will not be enabled at the same time as LFNST (Low-Frequency Non-Separable Transform).
  • LFNST is another new coding tool included in VVC (JVET-W2002) .
  • In another embodiment, JCMC will not be enabled at the same time as MTS (Multiple Transform Selection).
  • MTS is yet another new coding tool included in VVC (JVET-W2002) .
  • Alternatively, JCMC can be enabled with MTS, but the function performed to generate the jointly coded residual can be different from, or the same as, DCT-2.
  • In one embodiment, when the CCLM (Cross-Component Linear Model) mode is used, JCMC is not allowed to be used.
  • In another embodiment, JCMC is applied implicitly; that is, no additional syntax needs to be signalled to the decoder.
  • The use of the CCLM mode implies high correlation among colour components. Therefore, if CCLM is performed, JCMC will always be applied, according to one embodiment of the present invention.
  • CCLM is another new coding tool included in VVC (JVET-W2002) .
  • the intra-prediction predictor for the right-bottom region is much farther away from the L-neighbour and will not be accurate. Therefore, the right-bottom part of the CU tends to have larger residual (in the spatial domain) values.
  • The inaccuracy of intra prediction for the right-bottom region will be coherent between luma intra prediction and chroma intra prediction if the chroma intra angle is similar to the luma intra angle, so for intra prediction the luma residual and the chroma residual are more likely to be similar. Therefore, according to one embodiment of the present invention, the luma-to-chroma residual JCMC mode is only enabled for the intra mode.
  • the luma to chroma residual JCMC mode is disabled for inter mode.
  • For the right-bottom part, JCMC can be used; for the other parts, the conventional transform method can be used. The 4 parts can then be reconstructed accordingly.
  • “Turned on” for the JCMC mode means the JCMC mode is enabled so that a residual block is allowed to use the JCMC mode. Only when the JCMC mode is “turned on” does a residual block have the option to use the JCMC mode; whether a residual block will use JCMC is up to the encoder to decide. If the JCMC mode yields a favourable result, the JCMC mode is applied (or activated) to the residual block. While the JCMC mode is turned on, the encoder may still decide not to apply (or activate) JCMC to a residual block. In the above disclosure, “turned off” for the JCMC mode means the JCMC mode is disabled so that a residual block is not allowed to use the JCMC mode.
  • any of the foregoing proposed methods can be implemented in encoders and/or decoders.
  • any of the proposed methods can be implemented in inter/intra coding of an encoder, and/or a decoder.
  • any of the proposed methods can be implemented as a circuit coupled to the inter/intra coding of the encoder and/or the decoder, so as to provide the information needed by the inter/intra coding.
  • Fig. 2 illustrates a flowchart of an exemplary video decoding system that utilizes joint coding of multi-colour components according to an embodiment of the present invention.
  • The steps shown in the flowchart may be implemented as program codes executable on one or more processors (e.g., one or more CPUs) at the encoder side and/or the decoder side.
  • The steps shown in the flowchart may also be implemented based on hardware, such as one or more electronic devices or processors arranged to perform the steps in the flowchart.
  • a video bitstream comprising coded data for a multi-colour block of a video unit is received in step 210, wherein the multi-colour block comprises at least a first colour block corresponding to a first-colour component and a second colour block corresponding to a second-colour component.
  • Whether a target condition is satisfied is determined in step 220, wherein the target condition belongs to a group comprising a first condition related to a number of non-zero quantized coefficients of a joint residual block decoded from the coded data or a second condition related to a coding unit size for the multi-colour block.
  • If the target condition is satisfied, step 230 is performed; otherwise, step 240 is performed.
  • In step 230, a first residual block for the first colour block and a second residual block for the second colour block are derived individually based on the coded data.
  • In step 240, both the first residual block and the second residual block are derived from the joint residual block if a mode corresponding to joint multi-colour coding is activated for the multi-colour block.
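The decision in steps 220 to 240 can be sketched as a small dispatcher. This is an illustrative Python sketch; the function name and the two callbacks are hypothetical stand-ins for the actual residual parsing and derivation steps.

```python
def decode_residuals(coded, jcmc_active, target_condition_satisfied,
                     derive_individual, derive_from_joint):
    """Mirror the flowchart: individual derivation when the target
    condition holds (step 230), joint derivation otherwise when the
    JCMC mode is activated (step 240)."""
    if target_condition_satisfied:
        return derive_individual(coded)   # step 230: per-component residuals
    if jcmc_active:
        return derive_from_joint(coded)   # step 240: from the joint residual
    return derive_individual(coded)       # JCMC not activated for this block
```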
  • Fig. 3 illustrates a flowchart of an exemplary video encoding system that utilizes joint coding of multi-colour components according to an embodiment of the present invention.
  • input data comprising a multi-colour block of a video unit are received in step 310, wherein the multi-colour block comprises at least a first colour block corresponding to a first-colour component and a second colour block corresponding to a second-colour component.
  • a first residual block for the first colour block is determined by using a first predictor in step 320.
  • a second residual block for the second colour block is determined by using a second predictor in step 330.
  • whether a target condition is satisfied is determined in step 340, wherein the target condition belongs to a group comprising a first condition related to a number of non-zero quantized coefficients of a joint residual block or a second condition related to a coding unit size for the multi-colour block, and wherein the joint residual block is generated based on the first residual block and the second residual block.
  • when the target condition is satisfied, step 350 is performed. Otherwise, step 360 is performed.
  • in step 350, joint multi-colour coding is disabled by encoding the first residual block and the second residual block individually.
  • in step 360, said joint multi-colour coding is enabled by encoding the joint residual block instead of encoding the first residual block and the second residual block individually if a mode corresponding to said joint multi-colour coding is activated for the multi-colour block.
  • a video bitstream comprising coded data for the joint residual block, or for the first residual block and the second residual block individually, is generated in step 370.
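The encoder-side decision of steps 340 through 360 can be sketched in the same style. Again a hedged illustration: forming the joint residual as half the difference of the two colour residuals is just one plausible combination (in the spirit of joint chroma residual coding), and the thresholds are placeholders, not values taken from the patent.

```python
def encode_residuals(first_res, second_res, cu_size, joint_mode_activated,
                     cu_size_threshold=64):
    # Illustrative joint residual derivation (an assumption; the exact
    # combination is left open here): half the difference of the two
    # colour residual blocks.
    joint = [(a - b) // 2 for a, b in zip(first_res, second_res)]
    # Step 340: evaluate the target condition on the joint residual block.
    nonzero_joint = sum(1 for c in joint if c != 0)
    condition_satisfied = (nonzero_joint <= 1) or (cu_size > cu_size_threshold)
    if condition_satisfied:
        # Step 350: joint multi-colour coding is disabled; the two residual
        # blocks are encoded individually and no mode flag is signalled.
        return {"mode": "individual", "payload": (first_res, second_res)}
    if joint_mode_activated:
        # Step 360: encode the single joint residual block instead of the
        # two residual blocks individually.
        return {"mode": "joint", "payload": joint}
    return {"mode": "individual", "payload": (first_res, second_res)}
```

The returned dictionary stands in for the bitstream generation of step 370; a real encoder would entropy-code the chosen payload.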
  • Embodiments of the present invention as described above may be implemented in various hardware, software code, or a combination of both.
  • an embodiment of the present invention can be one or more circuits integrated into a video compression chip, or program code integrated into video compression software, to perform the processing described herein.
  • An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein.
  • the invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA).
  • These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
  • the software code or firmware code may be developed in different programming languages and different formats or styles.
  • the software code may also be compiled for different target platforms.
  • different code formats, styles and languages of software code, and other means of configuring code to perform the tasks in accordance with the invention, will not depart from the spirit and scope of the invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Methods and apparatus for video encoding and decoding systems that use joint coding of multi-colour components are disclosed. At the video encoder side, a first residual block for the first colour block and a second residual block for the second colour block are determined. Whether a target condition is satisfied is determined, where the target condition belongs to a group comprising a first condition related to a number of non-zero quantized coefficients of a joint residual block. When the target condition is satisfied, joint multi-colour coding is disabled. When the target condition is not satisfied, multi-colour coding is allowed. Consequently, no signalling is required to indicate whether multi-colour coding is enabled when the target condition is satisfied.
PCT/CN2022/103718 2021-08-27 2022-07-04 Method and apparatus of joint coding for multi-colour components in a video coding system WO2023024712A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW111131420A TWI811070B (zh) 2021-08-27 2022-08-22 Video encoding and decoding method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163237555P 2021-08-27 2021-08-27
US63/237,555 2021-08-27

Publications (1)

Publication Number Publication Date
WO2023024712A1 true WO2023024712A1 (fr) 2023-03-02

Family

ID=85321370

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/103718 WO2023024712A1 (fr) 2021-08-27 2022-07-04 Method and apparatus of joint coding for multi-colour components in a video coding system

Country Status (2)

Country Link
TW (1) TWI811070B (fr)
WO (1) WO2023024712A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111586408A (zh) * 2019-02-19 2020-08-25 Nokia Technologies Oy Quantization parameter derivation for cross-channel residual encoding and decoding
WO2020223496A1 (fr) * 2019-04-30 2020-11-05 Beijing Dajia Internet Information Technology Co., Ltd. Methods and apparatus of joint coding of chroma residuals
WO2021112950A1 (fr) * 2019-12-05 2021-06-10 Alibaba Group Holding Limited Method and apparatus for chroma sampling
WO2021138476A1 (fr) * 2019-12-30 2021-07-08 Beijing Dajia Internet Information Technology Co., Ltd. Coding of chroma residuals
CN113115047A (zh) * 2019-12-24 2021-07-13 Tencent America LLC Video encoding and decoding method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2589769B (en) * 2018-07-15 2023-02-15 Beijing Bytedance Network Tech Co Ltd Cross-component coding order derivation


Also Published As

Publication number Publication date
TW202310623A (zh) 2023-03-01
TWI811070B (zh) 2023-08-01

Similar Documents

Publication Publication Date Title
US11019338B2 (en) Methods and apparatuses of video encoding or decoding with adaptive quantization of video data
US11438590B2 (en) Methods and apparatuses of chroma quantization parameter derivation in video processing system
US11665337B2 (en) Method and apparatus for encoding/decoding an image signal
EP2548372B1 (fr) Procédés et dispositif de sélection implicite de prédicteur de vecteur de mouvement adaptatif pour codage et décodage vidéo
WO2015113510A1 (fr) Procédé et appareil de précision adaptative de vecteur de mouvement
JP7322285B2 (ja) クロマデブロックフィルタリングのための量子化パラメータオフセット
TWI741584B (zh) 視訊編碼系統之語法傳訊和參照限制的方法和裝置
JP2022552339A (ja) ビデオコーディングにおけるクロマ量子化パラメータの使用
EP3643062A1 (fr) Réduction de complexité de calcul d'outil de dérivation de mode intra côté décodeur (dimd)
KR102293097B1 (ko) 비디오 코딩을 위한 디바이스들 및 방법들
US11871034B2 (en) Intra block copy for screen content coding
WO2016200714A2 (fr) Stratégies de recherche pour des modes de prédiction intra-images
US20240283948A1 (en) Coding of transform coefficients in video coding
CN113132739B (zh) 边界强度确定、编解码方法、装置及其设备
WO2022214055A1 (fr) Interaction de multiples partitions
WO2022218322A1 (fr) Gestion de frontière pour séparation d'arbre de codage
WO2023024712A1 (fr) Procédé et appareil de codage conjoint pour des composants multicolores dans un système de codage vidéo
WO2023246901A1 (fr) Procédés et appareil pour un codage de transformée de sous-bloc implicite
WO2023116716A1 (fr) Procédé et appareil pour modèle linéaire de composante transversale pour une prédiction inter dans un système de codage vidéo
TWI853402B (zh) 視訊編解碼方法及相關裝置
WO2023116706A1 (fr) Procédé et appareil pour modèle linéaire à composantes croisées avec de multiples modes intra d'hypothèses dans un système de codage vidéo
WO2022213966A1 (fr) Contraintes de partitionnement basées sur le voisinage
Tok et al. A parametric merge candidate for high efficiency video coding
KR20130070215A (ko) 적응적 깊이 정보 선택 및 수행에 대한 디블록킹 필터링 방법 및 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22860059

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22860059

Country of ref document: EP

Kind code of ref document: A1