WO2016115736A1 - Additional intra prediction modes using cross-chroma-component prediction - Google Patents

Additional intra prediction modes using cross-chroma-component prediction

Info

Publication number
WO2016115736A1
WO2016115736A1 (PCT/CN2015/071460)
Authority
WO
WIPO (PCT)
Prior art keywords
chroma
mode
modes
block
prediction
Prior art date
Application number
PCT/CN2015/071460
Other languages
French (fr)
Inventor
Xianguo Zhang
Original Assignee
Mediatek Singapore Pte. Ltd.
Priority date
Filing date
Publication date
Application filed by Mediatek Singapore Pte. Ltd. filed Critical Mediatek Singapore Pte. Ltd.
Priority to PCT/CN2015/071460 priority Critical patent/WO2016115736A1/en
Priority to PCT/CN2016/070331 priority patent/WO2016115981A1/en
Priority to US15/541,802 priority patent/US10321140B2/en
Priority to CN201680006803.5A priority patent/CN107211121B/en
Publication of WO2016115736A1 publication Critical patent/WO2016115736A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46: Embedding additional information in the video signal during the compression process
    • H04N19/463: Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission



Abstract

Methods of chroma intra prediction for general videos are disclosed. Several methods are proposed for predicting non-first chroma components from the reconstructed chroma block of the current coding unit, especially the cases of using derived or decoded LLS parameters.

Description

ADDITIONAL INTRA PREDICTION MODES USING CROSS-CHROMA-COMPONENT PREDICTION TECHNICAL FIELD
The invention relates generally to video coding process, including general video, Screen Content (SC) video, multi-view video and Three-Dimensional (3D) video processing. In particular, the present invention relates to methods for how to generate the prediction data for each chroma component, especially predicting the non-first chroma component block from the reconstructed chroma blocks of the current coding unit.
BACKGROUND
In the current HEVC design, the chroma components share the same chroma mode, which is selected from DM, DC, PLANAR, VER, HOR and VER+8. The DM mode has the highest priority and the shortest binarization length (1), while the other four candidate modes have equal priority with a binarization length equal to 3.
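As a rough illustration of the signalling cost described above, a hypothetical binarization table might look as follows. The bit strings are illustrative assumptions, not the normative HEVC codewords; VER+8 would substitute for whichever candidate duplicates the luma-derived DM mode.

```python
# Hypothetical chroma-mode binarization: DM gets the single shortest
# codeword and the four remaining candidates share equal-length codewords.
# The exact bit strings are illustrative only.
CHROMA_MODE_BINS = {
    "DM":     "0",
    "PLANAR": "100",
    "VER":    "101",
    "HOR":    "110",
    "DC":     "111",  # VER+8 replaces any candidate equal to the DM mode
}

def code_length(mode):
    """Number of bits used to signal the given chroma mode."""
    return len(CHROMA_MODE_BINS[mode])
```

This matches the priority ordering in the text: one bit for DM, three bits for each of the four other candidates.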
In JCT-VC meetings, the LM chroma mode, which predicts chroma samples from reconstructed luma samples with a linear model, was proposed to enhance the coding efficiency of HEVC. In LM chroma mode, for a to-be-predicted chroma sample V with its reconstructed luma sample Vcol in the current coding unit, the linear model that generates the LM predictor P is formulated as follows.
P=a·Vcol+b
In the above equation, a and b are called LM parameters. These LM parameters are derived from the reconstructed luma and chroma samples around the current block and are not required to be coded in the bitstream. After deriving the LM parameters, chroma predictors can be generated according to the linear model and the reconstructed luma samples of the current coding unit. For example, if the video format is YUV420, then there are one 8x8 luma block and two 4x4 chroma blocks for one 8x8 coding unit, as shown in Fig. 1, where one square is one sample, black squares are samples in the current coding unit, and gray squares are reconstructed samples around the current coding unit.
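The parameter derivation is a standard least-squares fit of the neighbouring samples. A minimal floating-point sketch is given below; a real codec derives a and b with fixed-point integer arithmetic, and the function names here are assumptions for illustration.

```python
def derive_lm_params(ref, cur):
    """Least-squares fit cur ≈ a * ref + b over neighbouring samples.

    ref: reconstructed reference-component neighbour samples (e.g. luma)
    cur: co-located neighbour samples of the component being predicted
    Floating-point sketch; a real codec uses fixed-point arithmetic.
    """
    n = len(ref)
    sx, sy = sum(ref), sum(cur)
    sxx = sum(x * x for x in ref)
    sxy = sum(x * y for x, y in zip(ref, cur))
    denom = n * sxx - sx * sx
    if denom == 0:                 # flat reference: fall back to a = 0
        return 0.0, sy / n
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

def lm_predict(a, b, ref_block):
    """Apply P = a * Vcol + b to every reference sample of the block."""
    return [a * v + b for v in ref_block]
```

For example, fitting neighbours ref = [1, 2, 3, 4] against cur = [3, 5, 7, 9] yields a = 2 and b = 1, and the predictor for a reference sample 5 is then 11.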
The first step in LM chroma mode is to derive the LM parameters by using the neighboring reconstructed samples of the current coding unit, which are represented as circles in Fig. 1. Because the video format is YUV420, each chroma position is located between two vertically adjacent luma samples. At the left block boundary, the average of the two vertically adjacent luma samples is therefore used to derive the LM parameters instead of either sample alone. At the top block boundary, in order to reduce the line buffer requirement, the average is replaced by the closest sample in the vertical direction.
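The boundary handling just described can be sketched as follows, assuming the single reconstructed luma row above the CU and the reconstructed luma column to its left are available as flat lists. The helper name, the data layout, and the simple every-other-sample horizontal sub-sampling are assumptions for illustration.

```python
def luma_neighbours_420(top_luma, left_luma):
    """Down-sample luma neighbour samples to chroma resolution (4:2:0).

    top_luma:  luma samples of the single reconstructed row above the CU
    left_luma: luma samples of the reconstructed column left of the CU
    Returns one neighbour sample per chroma position.
    """
    # Top boundary: take the closest sample in the vertical direction
    # (only one luma line is buffered), sub-sampled horizontally by 2.
    top = top_luma[::2]
    # Left boundary: the 4:2:0 chroma site sits between two vertically
    # adjacent luma samples, so average each vertical pair.
    left = [(left_luma[2 * i] + left_luma[2 * i + 1]) / 2
            for i in range(len(left_luma) // 2)]
    return top, left
```

For a 4x4 chroma block this turns an 8-sample luma row and 8-sample luma column into 4 top and 4 left neighbour values.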
After deriving LM parameters, the next step is to generate the chroma predictors based on the linear model and current luma reconstructed samples. According to the video format, an average luma value may be used instead of the corresponding luma sample.
As shown in Figs. 2(a), 2(b) and 2(c), according to which neighboring samples are utilized as reference, LM chroma prediction modes can be further classified into Top+Left (both top and left neighboring samples are utilized), TopOnly (only top samples) and LeftOnly (only left samples) LM modes.
For all these LM modes, the common point is to utilize the neighbouring luma data and the neighbouring data of the current chroma block (the above-row and left-column samples in Fig. 1) to derive the LM parameters, and then utilize the reconstructed Y-component data to generate predictors with the LM parameters.
Instead of utilizing luma data as prediction reference, this invention proposes to selectively generate the predictors from non-luma components, following the LM process.
SUMMARY
It is proposed to utilize non-luma data as the prediction reference to derive LM parameters and to generate the current chroma block’s predictors under the newly designed LM modes, LM_P, including Top+Left LM_P, LeftOnly LM_P and TopOnly LM_P.
In a practical video codec with the new LM modes integrated, the predictor generation procedure for the chroma blocks of each intra CU has the following important steps, as shown in Fig. 3.
(1) Coding flags identifying the new kind of LM modes, the LM_P modes. Flags are transmitted to signal whether the current chroma block utilizes non-luma data under LM_P modes to calculate LM parameters and as the prediction reference. For example, when different chroma components share the same chroma mode, one chroma block’s predictor still utilizes luma data as the prediction reference even under LM_P modes, if there are no reconstructed chroma samples for the current coding unit. Otherwise, the current chroma block can utilize a reconstructed chroma block of the current coding unit as the prediction reference by using LM_P modes.
(2) Predictor generation for the first chroma component under LM_P modes. For the first chroma block, whether under LM_P modes or traditional LM modes, because no chroma block has been reconstructed for the current coding unit, one non-LM_P mode is conducted to get the predictor. As shown in Fig. 4 for the Cb component, the luma component block is utilized to derive the parameters and generate the predictor for Cb.
(3) LM parameter derivation without using luma data. When one LM_P mode is utilized and there are reconstructed chroma blocks for the current coding unit, the LM parameters are not derived from luma data, but from the neighbouring samples of the reconstructed chroma block of the current coding unit and the reconstructed neighbouring samples of the current chroma block. As shown in Fig. 4 for the Cr component, the Cb component block is utilized to derive the parameters for Cr.
(4) LM predictor generation with non-luma data as the prediction reference. When one LM_P mode is utilized and there are reconstructed chroma blocks for the current coding unit, the current chroma block’s predictors P are generated from the reconstructed previous-component chroma block Vcol of the current coding unit by P=a·Vcol+b, using the parameters a and b derived in step (3). As shown in Fig. 4 for the Cr component, the Cb component block is utilized to generate the predictor for Cr.
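Taken together, steps (2) through (4) above can be sketched for one intra CU as follows. All helper and argument names are hypothetical, and the least-squares fit is the same floating-point simplification a real codec would replace with fixed-point arithmetic.

```python
def lls_fit(x, y):
    """Least-squares fit y ≈ a*x + b (floating-point sketch)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    d = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / d if d else 0.0
    return a, (sy - a * sx) / n

def predict_cu_chroma(luma_nb, cb_nb, cr_nb, luma_rec, cb_rec):
    """Sketch of the LM_P flow for one intra CU.

    Cb (first chroma component): no chroma block of this CU is
    reconstructed yet, so it falls back to ordinary LM prediction
    from luma (step 2).
    Cr (non-first component): derives LLS parameters from the Cb and
    Cr neighbour samples (step 3) and predicts from the reconstructed
    Cb block of the same CU by P = a*Vcol + b (step 4).
    """
    a_cb, b_cb = lls_fit(luma_nb, cb_nb)          # step 2: luma -> Cb
    cb_pred = [a_cb * v + b_cb for v in luma_rec]
    a_cr, b_cr = lls_fit(cb_nb, cr_nb)            # step 3: Cb -> Cr params
    cr_pred = [a_cr * v + b_cr for v in cb_rec]   # step 4: P = a*Vcol + b
    return cb_pred, cr_pred
```

Note that only the Cr prediction uses non-luma data; the Cb prediction is the conventional luma-referenced LM path, exactly as the flag rule in step (1) requires.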
Other aspects and features of the invention will become apparent to those with ordinary skill in the art upon review of the following descriptions of specific embodiments.
BRIEF DESCRIPTION OF DRAWINGS
The invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
Fig. 1 is a diagram illustrating the basic LM mode proposed for HEVC.
Figs. 2(a), 2(b) and 2(c) are diagrams illustrating the different kinds of LM chroma modes referred to in the text.
Fig. 3 is a diagram illustrating the coding procedure for chroma blocks of one intra CU which utilizes this new LM_P mode.
Fig. 4 is a diagram illustrating an example of predictor generation under the new LM_P mode.
DETAILED DESCRIPTION
The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
The coding procedure for chroma blocks of one intra CU which utilizes LM_P modes can be divided into steps of (1) Coding the flags which can identify the new LM_P modes from chroma mode candidate lists. (2) Prediction generation for the first-chroma block under LM_P modes when chroma components share the same mode. (3) LM predictor generation for non-first-chroma block under LM_P mode with non-luma data as prediction reference.
A first embodiment of step (1) , when different chroma blocks share the same prediction mode flag, LM_P modes are added into the chroma mode candidate list as additional ones.
A second embodiment of step (1), when different chroma component blocks can utilize different prediction modes, LM_P modes are added into the non-first chroma block’s mode candidate list as additional ones.
Another embodiment of step (1), among the chroma mode candidates including LM_P modes, when all chroma components share the same mode, LM_P modes are binarized with longer symbols than other LM modes.
Another embodiment of step (1) , among the chroma mode candidates including LM_P modes, when all different chroma components have different modes, Top+Left LM_P mode is binarized with longer symbols than Top+Left LM mode, but shorter symbols than the other LM modes.
Another embodiment of step (1), among the chroma mode candidates including LM_P, LM_P modes can be binarized with equal-length symbols as all the other LM modes.
Another embodiment of step (1), among the chroma mode candidates including LM_P, when all chroma components share the same mode, LM_P can be selected as one mode of the four chroma mode candidates and binarized with equal-length symbols.
A first embodiment of step (2), when different chroma component blocks share the same prediction mode flag, under LM_P modes, the first chroma component block’s predictor is generated as the Top+Left LM mode does, using reconstructed luma data to derive the LLS parameters and as the prediction reference.
A second embodiment of step (2) , when different chroma component blocks share the same prediction mode flag, under LM_P modes, the first chroma component block’s predictor is generated by using the traditional DM mode, which conducts the luma prediction mode to get the predictor.
Another embodiment of step (2) , when different chroma component blocks share the same prediction mode flag, under LM_P modes, the first chroma component block’s predictor is generated by using the corresponding LM mode of the current LM_P mode. For example, when the current LM_P mode conducts Top+Left LM prediction procedures with chroma samples as prediction reference, the first-chroma block should be predicted by Top+Left LM mode with luma samples as prediction reference.
Another embodiment of step (2), when different chroma blocks share the same prediction mode flag, under LM_P modes, the first chroma component block’s predictor is generated by selecting the best one among traditional chroma modes, in which case additional flags are signalled to indicate which mode is selected.
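The step-(2) embodiments above amount to a fallback rule for the first chroma block. The sketch below covers the first three embodiments with hypothetical variant labels; the fourth embodiment (explicit selection among traditional modes) would instead be driven by decoded flags.

```python
def first_chroma_fallback(variant, lmp_mode):
    """Map an LM_P mode to the mode actually used for the first
    chroma block, which cannot reference a reconstructed chroma
    block of its own CU.  Variant names are hypothetical labels:
      "top_left_lm": always fall back to Top+Left LM from luma
      "dm":          reuse the luma prediction mode (traditional DM)
      "matching_lm": use the LM mode matching the LM_P variant,
                     with luma as the prediction reference
    """
    if variant == "top_left_lm":
        return "Top+Left LM"
    if variant == "dm":
        return "DM"
    if variant == "matching_lm":
        return lmp_mode.replace("LM_P", "LM")
    raise ValueError("unknown variant: %s" % variant)
```

For example, under the "matching_lm" embodiment a block signalled as Top+Left LM_P is predicted with Top+Left LM from luma samples.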
A first embodiment of step (3), when there is one reconstructed previous-component chroma block Vcol for the current coding unit, under Top+Left LM_P mode, the above-row and left-column reconstructed samples of the current chroma block C and Vcol are utilized to derive the LLS parameters a and b.
A second embodiment of step (3) , when there is one reconstructed previous-component chroma block Vcol for the current coding unit, under Top LM_P mode, the above-row reconstructed samples of the current chroma block C and Vcol are utilized to derive the LLS parameters a and b.
A third embodiment of step (3), when there is one reconstructed previous-component chroma block Vcol for the current coding unit, under Left LM_P mode, the left-column reconstructed samples of the current chroma block C and Vcol are utilized to derive the LLS parameters a and b.
Another embodiment of step (3) , when there is one reconstructed chroma block Vcol for the current coding unit, under LM_P modes, samples in Vcol are utilized to generate the predictor P of the current chroma block C from the derived LLS parameters a and b by P=a·Vcol+b.
Another embodiment of step (3) , when there is one reconstructed chroma block Vcol for the current coding unit, under LM_P modes, samples in Vcol are utilized to generate the predictor P of the current chroma block C from the transmitted LLS parameters a and b by P=a·Vcol+b.
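The neighbour-sample selection in the step-(3) embodiments above can be sketched as follows; the function and argument names are assumptions for illustration.

```python
def lmp_neighbour_samples(mode, top_c, left_c, top_v, left_v):
    """Select which reconstructed neighbour samples feed the LLS fit,
    per the three LM_P variants described in the text.

    top_c, left_c: above-row / left-column samples of current chroma block C
    top_v, left_v: co-located neighbour samples of the previous-component
                   reconstructed block Vcol
    Returns (reference samples, current samples) for the LLS derivation.
    """
    if mode == "Top+Left":
        return top_v + left_v, top_c + left_c
    if mode == "Top":
        return top_v, top_c
    if mode == "Left":
        return left_v, left_c
    raise ValueError("unknown LM_P mode: %s" % mode)
```

Whichever pair is returned, the parameters a and b are then either derived from it by least squares or, per the last embodiment, taken from values transmitted in the bitstream.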
The proposed method described above can be used in a video encoder as well as in a video decoder. Embodiments of the method according to the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program codes integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program codes to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software code, and other means of configuring code to perform the tasks in accordance with the invention, will not depart from the spirit and scope of the invention.
The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art) . Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Claims (21)

  1. A method of chroma-component intra prediction for non-4:0:0 video coding, wherein non-first chroma components can get predictors from the reconstructed chroma block of the current coding unit.
  2. The method as claimed in claim 1, wherein the coding procedure for chroma blocks of one intra CU for this method can comprise the following steps:
    Flag Transmission: coding the flags which can identify the new LM_P modes from chroma mode candidate lists;
    Prediction generation for non-LM_P blocks: prediction generation for the first-chroma block under LM_P modes when chroma components share the same mode;
    Prediction generation for LM_P blocks: LM predictor generation for the non-first-chroma block under LM_P mode with non-luma data as prediction reference.
  3. The method as claimed in claim 2, wherein for the Flag Transmission step, either additional flags are added or existing flags are reused to signal whether some chroma component utilizes the LM_P mode; The LM_P mode is one additional mode which can be added into the chroma mode candidate list.
  4. The method as claimed in claim 3, wherein when different chroma blocks share the same prediction mode flag, LM_P modes are directly added into the chroma mode candidate list as additional ones.
  5. The method as claimed in claim 3, wherein when different chroma component blocks can utilize different prediction modes, LM_P modes are only added into the non-first chroma block’s mode candidate list as additional ones.
  6. The method as claimed in claim 4, wherein among the chroma mode candidates including LM_P modes, when all chroma components share the same mode, LM_P modes are binarized with longer symbols than other LM modes.
  7. The method as claimed in claim 5, wherein among the chroma mode candidates including LM_P modes, when all different chroma components have different modes, Top+Left LM_P mode is binarized with longer symbols than Top+Left LM mode, but shorter symbols than the other LM modes.
  8. The method as claimed in claim 3, wherein among the chroma mode candidates including LM_P, LM_P modes can be binarized with equal-length symbols as all the other LM modes.
  9. The method as claimed in claim 3, wherein among the chroma mode candidates including LM_P, when all chroma components share the same mode, LM_P can be selected as one mode of the four chroma mode candidates and binarized with equal-length symbols.
  10. The method as claimed in claim 2, wherein for the step of Prediction generation for non-LM_P blocks, it only happens when chroma components share the same mode; In this case, the first chroma component block cannot utilize LM_P mode to get the predictor.
  11. The method as claimed in claim 10, wherein under LM_P modes, the first chroma component block’s predictor is generated as TOP+Left LM mode does, using reconstructed luma data to derive LLS parameters and as prediction reference.
  12. The method as claimed in claim 10, wherein under LM_P modes, the first chroma component block’s predictor is generated by using the traditional DM mode, which conducts the luma prediction mode to get the predictor.
  13. The method as claimed in claim 10, wherein under LM_P modes, the first chroma component block’s predictor is generated by using the corresponding LM mode of the current LM_P mode; For example, when the current LM_P mode conducts Top+Left LM prediction procedures with chroma samples as prediction reference, the first-chroma block should be predicted by Top+Left LM mode with luma samples as prediction reference.
  14. The method as claimed in claim 10, wherein under LM_P modes, the first chroma component block’s predictor is generated by selecting the best one among traditional chroma modes, when additional flags should be signalized to determine which is selected.
  15. The method as claimed in claim 2, wherein for the step of Prediction generation for LM_P blocks, non-luma data is typically utilized to derive the LLS parameters and generate one chroma component block’s predictors.
  16. The method as claimed in claim 15, wherein supposing the reconstructed chroma block of the current coding unit be Vcol, under Top+Left LM_P mode, the above-row and left-column reconstructed samples of the current chroma block C and Vcol are utilized to derive the LLS parameters a and b.
  17. The method as claimed in claim 15, wherein supposing the reconstructed chroma block of the current coding unit be Vcol, under Top LM_P mode, the above-row reconstructed samples of the current chroma block C and Vcol are utilized to derive the LLS parameters a and b.
  18. The method as claimed in claim 15, wherein supposing the reconstructed chroma block of the current coding unit be Vcol, under Left LM_P mode, the left-column reconstructed samples of the current chroma block C and Vcol are utilized to derive the LLS parameters a and b.
  19. The method as claimed in claim 15, wherein supposing the reconstructed chroma block of the current coding unit be Vcol, under LM_P modes, samples in Vcol are utilized to generate the predictor P of the current chroma block C from the derived LLS parameters a and b by P=a·Vcol+b.
  20. The method as claimed in claim 19, wherein under LM_P modes, the LLS parameters a and b are both derived from reconstructed samples.
  21. The method as claimed in claim 19, wherein under LM_P modes, the LLS parameters a and b are not both derived from reconstructed samples; at least one of them is decoded from flags transmitted in the video stream.
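Claims 15–21 describe deriving the linear parameters a and b by a least-squares (LLS) fit over neighbouring reconstructed samples of the two chroma components, then forming the predictor as P = a·Vcol + b. A minimal sketch of that derivation follows; the function names, the choice of a plain closed-form least-squares fit, and the flat-reference fallback are illustrative assumptions, not part of the claimed method:

```python
def derive_lls_params(ref_v, ref_c):
    """Least-squares fit c ~ a*v + b over the neighbouring reconstructed
    samples. Claims 16-18 select which neighbours feed this fit
    (above-row, left-column, or both)."""
    n = len(ref_v)
    sx = sum(ref_v)
    sy = sum(ref_c)
    sxx = sum(v * v for v in ref_v)
    sxy = sum(v * c for v, c in zip(ref_v, ref_c))
    denom = n * sxx - sx * sx
    if denom == 0:
        # Flat reference samples: fall back to a pure offset (assumption,
        # not specified by the claims).
        return 0.0, sy / n
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b


def lm_p_predict(vcol_block, ref_v, ref_c):
    """Claim 19: predictor P = a*Vcol + b, applied sample-wise to the
    co-located reconstructed chroma block Vcol."""
    a, b = derive_lls_params(ref_v, ref_c)
    return [[a * v + b for v in row] for row in vcol_block]
```

For reference samples that are exactly linearly related, e.g. `ref_v = [1, 2, 3, 4]` and `ref_c = [5, 7, 9, 11]`, the fit recovers a = 2 and b = 3, and the predictor of a Vcol block is that same affine mapping applied to each sample.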
PCT/CN2015/071460 2015-01-22 2015-01-23 Additional intra prediction modes using cross-chroma-component prediction WO2016115736A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2015/071460 WO2016115736A1 (en) 2015-01-23 2015-01-23 Additional intra prediction modes using cross-chroma-component prediction
PCT/CN2016/070331 WO2016115981A1 (en) 2015-01-22 2016-01-07 Method of video coding for chroma components
US15/541,802 US10321140B2 (en) 2015-01-22 2016-01-07 Method of video coding for chroma components
CN201680006803.5A CN107211121B (en) 2015-01-22 2016-01-07 Video encoding method and video decoding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/071460 WO2016115736A1 (en) 2015-01-23 2015-01-23 Additional intra prediction modes using cross-chroma-component prediction

Publications (1)

Publication Number Publication Date
WO2016115736A1 true WO2016115736A1 (en) 2016-07-28

Family

ID=56416316

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/071460 WO2016115736A1 (en) 2015-01-22 2015-01-23 Additional intra prediction modes using cross-chroma-component prediction

Country Status (1)

Country Link
WO (1) WO2016115736A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103227917A (en) * 2012-01-31 2013-07-31 华为技术有限公司 Decoding method and device
CN103260018A (en) * 2012-02-16 2013-08-21 乐金电子(中国)研究开发中心有限公司 Intra-frame image predictive encoding and decoding method and video codec
WO2014204584A1 (en) * 2013-06-19 2014-12-24 Apple Inc. Sample adaptive offset control

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109565603A (en) * 2016-08-15 2019-04-02 高通股份有限公司 It is decoded in video using decoupling tree construction
US11743509B2 (en) 2016-08-15 2023-08-29 Qualcomm Incorporated Intra video coding using a decoupled tree structure
CN109565603B (en) * 2016-08-15 2023-10-10 高通股份有限公司 Video intra coding using decoupled tree structure
CN111095933A (en) * 2017-09-15 2020-05-01 索尼公司 Image processing apparatus and method
EP3684055A4 (en) * 2017-09-15 2020-07-22 Sony Corporation Image processing device and method
US11070824B2 (en) 2017-09-15 2021-07-20 Sony Corporation Image processing device and method
CN111095933B (en) * 2017-09-15 2022-05-13 索尼公司 Image processing apparatus and method
US11695941B2 (en) 2017-09-15 2023-07-04 Sony Group Corporation Image processing device and method
WO2019162117A1 (en) * 2018-02-23 2019-08-29 Canon Kabushiki Kaisha Methods and devices for improvement in obtaining linear component sample prediction parameters
CN109640098A (en) * 2018-12-21 2019-04-16 深圳市网心科技有限公司 A kind of intra-frame prediction method based on AVS2, system and electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US10321140B2 (en) Method of video coding for chroma components
US11546613B2 (en) Method and apparatus for adaptive motion vector precision
TWI627856B (en) Method and apparatus of video coding and decoding
US10750169B2 (en) Method and apparatus for intra chroma coding in image and video coding
WO2015192781A1 (en) Method of sub-pu syntax signaling and illumination compensation for 3d and multi-view video coding
WO2018049594A1 (en) Methods of encoder decision for quad-tree plus binary tree structure
WO2018054269A1 (en) Method and apparatus for video coding using decoder side intra prediction derivation
TWI520579B (en) Picture coding supporting block partitioning and block merging
KR101705277B1 (en) Encoding or decoding method and apparatus
US20210067802A1 (en) Video decoding method and device using cross-component prediction, and video encoding method and device using cross-component prediction
WO2016115708A1 (en) Methods for chroma component coding with separate intra prediction mode
US11445173B2 (en) Method and apparatus for Intra prediction fusion in image and video coding
WO2015149698A1 (en) Method of motion information coding
WO2016115736A1 (en) Additional intra prediction modes using cross-chroma-component prediction
WO2015100731A1 (en) Methods for determining the prediction partitions
WO2016115728A1 (en) Improved escape value coding methods
WO2015192372A1 (en) A simplified method for illumination compensation in multi-view and 3d video coding
WO2016065538A1 (en) Guided cross-component prediction
WO2016044974A1 (en) Palette table signalling
EP3785437A1 (en) Method and apparatus for restricted linear model parameter derivation in video coding
WO2023174426A1 (en) Geometric partitioning mode and merge candidate reordering
WO2016123801A1 (en) Methods for partition mode coding
WO2016165122A1 (en) Inter prediction offset
KR20200062639A (en) Video coding method and apparatus using merge
WO2016070363A1 (en) Merge with inter prediction offset

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15878413

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15878413

Country of ref document: EP

Kind code of ref document: A1