US20210368205A1 - Video coding using intra sub-partition coding mode - Google Patents

Video coding using intra sub-partition coding mode

Info

Publication number
US20210368205A1
Authority
US
United States
Prior art keywords
sub
mode
modes
prediction
partition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/393,367
Other languages
English (en)
Inventor
Yi-Wen Chen
Xiaoyu Xiu
Xianglin Wang
Tsung-Chuan MA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to US17/393,367 priority Critical patent/US20210368205A1/en
Assigned to Beijing Dajia Internet Information Technology Co., Ltd. reassignment Beijing Dajia Internet Information Technology Co., Ltd. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MA, Tsung-Chuan, CHEN, YI-WEN, WANG, XIANGLIN, XIU, Xiaoyu
Publication of US20210368205A1 publication Critical patent/US20210368205A1/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/52: Processing of motion vectors by encoding by predictive encoding
    • H04N 19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N 19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N 19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N 19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N 19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N 19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/186: Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N 19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N 19/593: Predictive coding involving spatial prediction techniques
    • H04N 19/70: Characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N 19/96: Tree coding, e.g. quad-tree coding
    • H04N 19/46: Embedding additional information in the video signal during the compression process

Definitions

  • The present disclosure relates generally to video coding and compression. More specifically, this disclosure relates to systems and methods for performing video coding using an intra sub-partition coding mode.
  • Video coding can be performed according to one or more video coding standards.
  • Some illustrative video coding standards include versatile video coding (VVC), joint exploration test model (JEM) coding, high-efficiency video coding (H.265/HEVC), advanced video coding (H.264/AVC), and moving picture experts group (MPEG) coding.
  • Video coding generally utilizes predictive methods (e.g., inter-prediction, intra-prediction, or the like) that take advantage of redundancy inherent in video images or sequences.
  • One goal of video coding techniques is to compress video data into a form that uses a lower bit rate, while avoiding or minimizing degradations to video quality.
  • the first version of the HEVC standard offers an approximately 50% bit-rate saving or equivalent perceptual quality, compared to the prior-generation video coding standard (H.264/MPEG AVC).
  • Abbreviations used herein include VCEG (Video Coding Experts Group), MPEG (Moving Picture Experts Group), JVET (Joint Video Exploration Team), and VVC (Versatile Video Coding).
  • According to one aspect of the present disclosure, a video coding method is performed at a computing device having one or more processors and memory storing a plurality of programs to be executed by the one or more processors. The method includes independently generating a respective intra prediction for each of a plurality of corresponding sub-partitions, wherein each respective intra prediction is generated using a plurality of reference samples from a current coding block.
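A minimal sketch of the first aspect, in the spirit of FIG. 10, where every sub-partition draws only on reference samples lying outside the current coding block, so no sub-partition waits on another's reconstruction. The DC-style predictor, the horizontal four-way split, and the function name are illustrative assumptions, not the disclosure's exact method.

```python
import numpy as np

def predict_subpartitions_parallel(top_ref, left_ref, num_parts=4):
    """Predict every horizontal sub-partition of an N x N block
    independently, using only reference samples from outside the
    block (a hypothetical DC-style predictor for illustration)."""
    n = len(top_ref)
    part_h = n // num_parts
    preds = []
    for _ in range(num_parts):
        # Each sub-partition reads the same external references, so all
        # predictions can be generated in parallel (no serial dependency
        # on a previously reconstructed sub-partition).
        dc = (int(np.sum(top_ref)) + int(np.sum(left_ref)) + n) // (2 * n)
        preds.append(np.full((part_h, n), dc, dtype=np.int32))
    return preds
```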
  • According to another aspect of the present disclosure, a video coding method is performed at a computing device having one or more processors and memory storing a plurality of programs to be executed by the one or more processors. The method includes generating a respective intra prediction for each of a plurality of corresponding sub-partitions using only N modes out of M possible intra prediction modes for a luma component of an intra sub-partition (ISP) coded block, wherein M and N are positive integers and N is less than M.
  • According to another aspect of the present disclosure, a video coding method is performed at a computing device having one or more processors and memory storing a plurality of programs to be executed by the one or more processors. The method includes generating an intra prediction using only N modes out of M possible intra prediction modes for a chroma component of an intra sub-partition (ISP) coded block, wherein M and N are positive integers and N is less than M.
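The two mode-restriction aspects above (only N of M modes, for luma and for chroma) amount to filtering the candidate mode list. Which N modes survive is not fixed by the text; taking the most-probable modes (MPMs) first is an assumption here, and the function name is illustrative.

```python
def allowed_isp_modes(all_modes, mpm_list, n):
    """Keep only N candidate intra modes out of the M possible modes
    for an ISP-coded block. Assumption: prefer the most-probable modes,
    then fall back to the remaining modes in index order."""
    assert n < len(all_modes)
    kept = [m for m in mpm_list if m in all_modes][:n]
    for m in all_modes:
        if len(kept) == n:
            break  # already have N candidates
        if m not in kept:
            kept.append(m)
    return kept
```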
  • According to another aspect of the present disclosure, a video coding method is performed at a computing device having one or more processors and memory storing a plurality of programs to be executed by the one or more processors. The method includes generating a respective luma intra prediction for each of a plurality of corresponding sub-partitions of an entire intra sub-partition (ISP) coded block for a luma component and, for a chroma component, generating a chroma intra prediction for the entire ISP coded block.
  • According to another aspect of the present disclosure, a video coding method is performed at a computing device having one or more processors and memory storing a plurality of programs to be executed by the one or more processors. The method includes generating a first prediction using an intra sub-partition mode; generating a second prediction using an inter prediction mode; and combining the first and second predictions to generate a final prediction by applying a weighted averaging to the first prediction and the second prediction.
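The final aspect above can be sketched as follows. The integer weights, the right-shift, and the function name are assumptions for illustration; the disclosure only requires some weighted averaging of the ISP (intra) prediction and the inter prediction.

```python
import numpy as np

def combine_predictions(intra_pred, inter_pred, w_intra=1, w_inter=3, shift=2):
    """Weighted average of an ISP (intra) prediction and an inter
    prediction. The weight pair and shift are illustrative; note that
    w_intra + w_inter == 1 << shift, so the result stays in range."""
    intra = np.asarray(intra_pred, dtype=np.int64)
    inter = np.asarray(inter_pred, dtype=np.int64)
    offset = 1 << (shift - 1)  # rounding offset before the shift
    return ((w_intra * intra + w_inter * inter + offset) >> shift).astype(np.int32)
```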
  • FIG. 1 is a block diagram setting forth an illustrative encoder which may be used in conjunction with many video coding standards.
  • FIG. 2 is a block diagram setting forth an illustrative decoder which may be used in conjunction with many video coding standards.
  • FIG. 3 illustrates five types of exemplary block partitions for a multi-type tree structure.
  • FIG. 4 illustrates a set of exemplary intra modes for use with the VVC standard.
  • FIG. 5 illustrates a set of multiple reference lines for performing intra prediction.
  • FIG. 6A illustrates a first set of reference samples and angular directions that are used for performing an intra prediction of a first rectangular block.
  • FIG. 6B illustrates a second set of reference samples and angular directions that are used for performing an intra prediction of a second rectangular block.
  • FIG. 6C illustrates a third set of reference samples and angular directions that are used for performing an intra prediction of a square block.
  • FIG. 7 illustrates a set of exemplary locations for neighboring reconstructed samples that are used for a position-dependent intra prediction combination (PDPC) of one coding block.
  • FIG. 8A illustrates an exemplary set of short-distance intra prediction (SDIP) partitions for an 8×4 block.
  • FIG. 8B illustrates an exemplary set of short-distance intra prediction (SDIP) partitions for a 4×8 block.
  • FIG. 8C illustrates an exemplary set of short-distance intra prediction (SDIP) partitions for an arbitrarily-sized block.
  • FIG. 9A is a plot of chroma values as a function of luma values, where the plot is used to derive a set of linear model parameters.
  • FIG. 9B shows the locations of samples that are used for the derivation of the linear model parameters of FIG. 9A .
  • FIG. 10 illustrates the generation of reference samples for intra prediction for all sub-partitions using only reference samples outside of a current coding block.
  • FIG. 11 illustrates a combining of inter predictor samples and intra predictor samples for a first sub-partition of FIG. 10 .
  • FIG. 12 illustrates a combining of inter predictor samples and intra predictor samples for a second sub-partition of FIG. 10 .
  • Although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one category of information from another. For example, without departing from the scope of the present disclosure, first information may be termed second information; and similarly, second information may also be termed first information. As used herein, the term “if” may be understood to mean “when” or “upon” or “in response to,” depending on the context.
  • Conceptually, many video coding standards are similar, including those previously mentioned in the Background section.
  • For example, many video coding standards use block-based processing and share similar video coding block diagrams to achieve video compression.
  • The VVC standard is built upon the block-based hybrid video coding framework.
  • FIG. 1 shows a block diagram of an illustrative encoder 100 which may be used in conjunction with many video coding standards.
  • A video frame is partitioned into a plurality of video blocks for processing.
  • For each video block, a prediction is formed based on either an inter prediction approach or an intra prediction approach.
  • In inter prediction, one or more predictors are formed through motion estimation and motion compensation, based on pixels from previously reconstructed frames.
  • In intra prediction, predictors are formed based on reconstructed pixels in a current frame. Through mode decision, a best predictor may be chosen to predict a current block.
  • A prediction residual, representing the difference between a current video block and its predictor, is sent to a Transform 102 circuitry.
  • Transform coefficients are then sent from the Transform 102 circuitry to a Quantization 104 circuitry for entropy reduction.
  • Quantized coefficients are then fed to an Entropy Coding 106 circuitry to generate a compressed video bitstream.
  • Prediction-related information 110 from an inter prediction circuitry and/or an Intra Prediction 112 circuitry, such as video block partition info, motion vectors, reference picture index, and intra prediction mode, is also fed through the Entropy Coding 106 circuitry and saved into a compressed video bitstream 114 .
  • decoder-related circuitries are also needed in order to reconstruct pixels for the purpose of prediction.
  • a prediction residual is reconstructed through an Inverse Quantization 116 circuitry and an Inverse Transform 118 circuitry. This reconstructed prediction residual is combined with a Block Predictor 120 to generate un-filtered reconstructed pixels for a current video block.
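The reconstruction path just described can be sketched end to end. The transform pair is replaced by the identity here (an assumption to keep the sketch short), so the quantization round-trip is the only lossy step; the function name and the scalar quantizer are illustrative, not the codec's actual integer transforms.

```python
import numpy as np

def reconstruct_block(current, predictor, qstep=8):
    """Encoder-side reconstruction sketch: residual -> transform ->
    quantize -> inverse quantize -> inverse transform -> + predictor."""
    current = np.asarray(current, dtype=np.int64)
    predictor = np.asarray(predictor, dtype=np.int64)

    residual = current - predictor                       # prediction residual
    coeffs = residual                                    # stand-in forward transform
    levels = np.round(coeffs / qstep).astype(np.int64)   # quantization to levels
    recon_coeffs = levels * qstep                        # inverse quantization
    recon_residual = recon_coeffs                        # stand-in inverse transform
    return predictor + recon_residual                    # un-filtered reconstruction
```

Only the quantizer discards information, so the reconstruction error per sample is bounded by half the quantization step.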
  • an In-Loop Filter 115 is commonly used.
  • a deblocking filter is available in AVC, HEVC, as well as the current version of VVC.
  • In addition to the deblocking filter, further in-loop filters such as a sample adaptive offset (SAO) filter and an adaptive loop filter (ALF) may be applied.
  • These in-loop filter operations are optional. Performing these operations helps to improve coding efficiency and visual quality. They may also be turned off as a decision rendered by the encoder 100 to save computational complexity.
  • intra prediction is usually based on unfiltered reconstructed pixels, while inter prediction is based on filtered reconstructed pixels if these filter options are turned on by the encoder 100 .
  • FIG. 2 is a block diagram setting forth an illustrative decoder 200 which may be used in conjunction with many video coding standards. This decoder 200 is similar to the reconstruction-related section residing in the encoder 100 of FIG. 1 .
  • an incoming video bitstream 201 is first decoded through an Entropy Decoding 202 circuitry to derive quantized coefficient levels and prediction-related information.
  • the quantized coefficient levels are processed through an Inverse Quantization 204 circuitry and an Inverse Transform 206 circuitry to obtain a reconstructed prediction residual.
  • a block predictor mechanism implemented in an Intra/inter Mode Selector 212 , is configured to perform either an Intra Prediction 208 procedure, or a Motion Compensation 210 procedure, based on decoded prediction information.
  • a set of unfiltered reconstructed pixels are obtained by summing up the reconstructed prediction residual from the Inverse Transform 206 circuitry and a predictive output generated by the block predictor mechanism, using a Summer 214 .
  • the reconstructed block may further go through an In-Loop Filter 209 before it is stored in a Picture Buffer 213 which functions as a reference picture store.
  • the reconstructed video in the Picture Buffer 213 can then be sent out to drive a display device, as well as used to predict future video blocks.
  • a filtering operation is performed on these reconstructed pixels to derive a final reconstructed Video Output 222 .
  • an input video signal to the encoder 100 is processed block by block.
  • Each block is called a coding unit (CU).
  • A CU can be up to 128×128 pixels.
  • the basic unit for compression is termed a coding tree unit (CTU).
  • one coding tree unit (CTU) is split into CUs to adapt to varying local characteristics based on a quad/binary/ternary-tree structure.
  • the concept of multiple partition unit type in the HEVC standard is not present in the VVC standard, i.e., the separation of CU, prediction unit (PU) and transform unit (TU) does not exist in the VVC standard; instead, each CU is always used as the basic unit for both prediction and transform without further partitions.
  • the maximum CTU size for HEVC and JEM is defined as being up to 64 by 64 luma pixels, and two blocks of 32 by 32 chroma pixels, for a 4:2:0 chroma format.
  • The maximum allowed size of the luma block in a CTU is specified to be 128×128 (although the maximum size of the luma transform block is 64×64).
  • FIG. 3 illustrates five types of exemplary block partitions for a multi-type tree structure.
  • the five types of exemplary block partitions include quaternary partitioning 301 , horizontal binary partitioning 302 , vertical binary partitioning 303 , horizontal ternary partitioning 304 , and vertical ternary partitioning 305 .
  • one CTU is first partitioned by a quad-tree structure. Then, each quad-tree leaf node can be further partitioned by a binary and ternary tree structure.
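The five splits of FIG. 3 amount to a child-size computation, sketched below. The 1:2:1 ratio for the ternary splits follows the VVC design; the mode strings and function name are illustrative.

```python
def split_children(w, h, mode):
    """Child block sizes for the five multi-type-tree splits of FIG. 3:
    quaternary, horizontal/vertical binary, horizontal/vertical ternary."""
    if mode == "quad":
        return [(w // 2, h // 2)] * 4          # four equal quadrants
    if mode == "hor_bin":
        return [(w, h // 2)] * 2               # two halves, stacked
    if mode == "ver_bin":
        return [(w // 2, h)] * 2               # two halves, side by side
    if mode == "hor_ter":
        return [(w, h // 4), (w, h // 2), (w, h // 4)]  # 1:2:1 heights
    if mode == "ver_ter":
        return [(w // 4, h), (w // 2, h), (w // 4, h)]  # 1:2:1 widths
    raise ValueError(mode)
```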
  • spatial prediction and/or temporal prediction may be performed using the configuration shown in FIG. 1 .
  • Spatial prediction (or “intra prediction”) uses pixels from the samples of already-coded neighboring blocks (which are called reference samples) in the same video picture/slice to predict the current video block. Spatial prediction reduces spatial redundancy inherent in the video signal.
  • Temporal prediction (also referred to as “inter prediction” or “motion compensated prediction”) uses reconstructed pixels from already-coded video pictures to predict the current video block. Temporal prediction reduces temporal redundancy inherent in the video signal. Temporal prediction signal for a given CU is usually signaled by one or more motion vectors (MVs) which indicate the amount and the direction of motion between the current CU and its temporal reference. Also, if multiple reference pictures are supported, one reference picture index is additionally sent, which is used to identify from which reference picture in the reference picture store the temporal prediction signal comes.
  • an intra/inter mode decision 121 circuitry in the encoder 100 chooses the best prediction mode, for example based on the rate-distortion optimization method.
  • the block predictor 120 is then subtracted from the current video block; and the resulting prediction residual is de-correlated using the transform 102 circuitry and the quantization 104 circuitry.
  • The resulting quantized residual coefficients are inverse quantized by the inverse quantization 116 circuitry and inverse transformed by the inverse transform 118 circuitry to form the reconstructed residual, which is then added back to the prediction block to form the reconstructed signal of the CU.
  • in-loop filtering 115 such as a deblocking filter, a sample adaptive offset (SAO), and/or an adaptive in-loop filter (ALF) may be applied on the reconstructed CU before it is put in the reference picture store of the picture buffer 117 and used to code future video blocks.
  • Moreover, coding mode (inter or intra), prediction mode information, motion information, and quantized residual coefficients are all sent to the entropy coding unit 106 to be further compressed and packed to form the bit-stream.
  • the basic intra prediction scheme applied in the VVC standard is kept generally the same as that of the HEVC standard, except that several modules are further extended and/or improved in the VVC standard, such as intra sub-partition (ISP) coding mode, extended intra prediction with wide-angle intra directions, position-dependent intra prediction combination (PDPC) and 4-tap intra interpolation.
  • the VVC standard uses a set of previously-decoded samples that neighbor one current CU (i.e., above or left) to predict the samples of the CU.
  • The number of angular intra modes is extended from 33 modes in the HEVC standard to 93 modes in the VVC standard.
  • both the HEVC standard and the VVC standard provide for a Planar mode (which assumes a gradual changing surface with a horizontal and a vertical slope derived from boundaries), and a DC mode (which assumes a flat surface).
  • FIG. 4 illustrates a set of exemplary intra modes 400 for use with the VVC standard
  • FIG. 5 illustrates a set of multiple reference lines for performing intra prediction.
  • The set of exemplary intra modes 400 includes modes 0, 1, −14, −12, −10, −8, −6, −4, −2, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, and 80.
  • Mode 0 corresponds to the Planar mode
  • mode 1 corresponds to the DC mode.
  • All of the defined intra modes (i.e., planar, DC, and angular directions) in the VVC standard utilize a set of neighboring reconstructed samples above and to the left of the predicted block as the reference for intra prediction.
  • Additionally, a multi-reference line (MRL) is introduced in the VVC standard, where two additional rows/columns (i.e., line 1 503 and line 3 505 of FIG. 5) are used for the intra prediction process.
  • The index of the selected reference row/column is signaled from the encoder 100 (FIG. 1).
  • FIG. 6A illustrates a first set of reference samples and angular directions 602 , 604 that are used for an intra prediction of a rectangular block (width W divided by height H equals 2).
  • the first set of reference samples includes a first sample 601 and a second sample 603 .
  • the second set of reference samples includes a third sample 605 and a fourth sample 607 .
  • the third set of reference samples includes a fifth sample 609 and a sixth sample 610 .
  • FIG. 6C illustrates the locations of the third set of reference samples that can be used in the VVC standard to derive the predicted samples of one intra block.
  • As shown in FIGS. 6A-6C, because the quad/binary/ternary tree partition structure is applied, in addition to coding blocks of square shape, rectangular coding blocks also exist for the intra prediction procedure in the context of the VVC standard.
  • the intra prediction samples are generated from either a non-filtered or a filtered set of neighboring reference samples, which may introduce discontinuities along the block boundaries between the current coding block and its neighbors.
  • Boundary filtering is applied in the HEVC standard by combining the first row/column of prediction samples of the DC, horizontal (i.e., mode 18 of FIG. 4), and vertical (i.e., mode 50) prediction modes with the unfiltered reference samples, utilizing a 2-tap filter (for DC mode) or a gradient-based smoothing filter (for horizontal and vertical prediction modes).
  • the position-dependent intra prediction combination (PDPC) tool in the VVC standard extends the foregoing concept by employing a weighted combination of intra prediction samples with unfiltered reference samples.
  • The PDPC is enabled for the following intra modes without signaling: planar, DC, horizontal (i.e., mode 18), vertical (i.e., mode 50), angular directions close to the bottom-left diagonal direction (i.e., modes 2, 3, 4, . . . , 10), and angular directions close to the top-right diagonal direction (i.e., modes 58, 59, 60, . . . , 66).
  • pred(x, y) = (wL×R−1,y + wT×Rx,−1 − wTL×R−1,−1 + (64 − wL − wT + wTL)×pred(x, y) + 32) >> 6  (1)
  • Rx,−1 and R−1,y represent the reference samples located at the top and left of the current sample (x, y), respectively, and R−1,−1 represents the reference sample located at the top-left corner of the current block.
  • FIG. 7 illustrates a set of exemplary locations for neighboring reconstructed samples that are used for a position-dependent intra prediction combination (PDPC) of one coding block.
  • A first reference sample 701 (Rx,−1) represents a reference sample located above a current prediction sample (x, y).
  • A second reference sample 703 (R−1,y) represents a reference sample located to the left of the current prediction sample (x, y).
  • A third reference sample 705 (R−1,−1) represents the reference sample located at the top-left corner of the current block.
  • Reference samples including the first, second, and third reference samples 701 , 703 and 705 are combined with the current prediction sample (x, y) during the PDPC process.
  • the weights wL, wT, and wTL in equation (1) are adaptively selected depending on the prediction mode and the sample position, as described below, where the current coding block is assumed to be of size W × H:
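As a minimal sketch of the equation (1) combination: the `pdpc_sample` function follows the equation directly, while `dc_weights` is a hypothetical position-dependent weight derivation modeled on the usual VVC-style scaling for DC/planar mode (the exact weight formulas are an assumption, not quoted from this document).

```python
def pdpc_sample(pred_xy, r_left, r_top, r_topleft, wL, wT, wTL):
    """Combine one intra prediction sample with unfiltered reference
    samples per equation (1):
    pred(x,y) = (wL*R(-1,y) + wT*R(x,-1) - wTL*R(-1,-1)
                 + (64 - wL - wT + wTL)*pred(x,y) + 32) >> 6
    """
    return (wL * r_left + wT * r_top - wTL * r_topleft
            + (64 - wL - wT + wTL) * pred_xy + 32) >> 6

def dc_weights(x, y, width, height):
    """Hypothetical weights for a sample at (x, y) in a W x H block,
    decaying with distance from the block's top/left edges (assumed)."""
    # scale ~ (log2(width) + log2(height) - 2) >> 2 for power-of-two sizes
    scale = (width.bit_length() + height.bit_length() - 4) >> 2
    wT = 32 >> min(31, (y << 1) >> scale)   # weight of the above reference
    wL = 32 >> min(31, (x << 1) >> scale)   # weight of the left reference
    wTL = (wL >> 4) + (wT >> 4)             # weight of the corner reference
    return wL, wT, wTL
```

With all weights at zero, equation (1) reduces to the unmodified predictor, which is a quick sanity check on the fixed-point rounding.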
  • the multiple transform selection (MTS) tool is enabled in the VVC standard by introducing the additional core transforms DCT-VIII and DST-VII.
  • the adaptive selection of the transforms is enabled at the coding block level by signaling one MTS flag to a bitstream. Specifically, when the MTS flag is equal to 0 for one block, one pair of fixed transforms (e.g., DCT-II) are applied in the horizontal and vertical directions. Otherwise (when the MTS flag is equal to 1), two additional flags will be further signaled for the block to indicate the transform type (either DCT-VIII or DST-VII) for each direction.
  • one shape-adaptive transform selection method is applied to all intra-coded blocks, in which the DCT-II and DST-VII transforms are implicitly enabled based on the width and height of the current block. More specifically, for each rectangular block, the method uses the DST-VII transform in the direction associated with the shorter side of the block and the DCT-II transform in the direction associated with the longer side. For each square block, DST-VII is applied in both directions. Additionally, to avoid introducing new transforms for different block sizes, the DST-VII transform is only enabled when the shorter side of an intra-coded block is equal to or smaller than 16; otherwise, the DCT-II transform is always applied.
  • Table 2 illustrates the enabled horizontal and vertical transforms for intra-coded blocks based on the shape adaptive transform selection method in the VVC.
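The shape-adaptive selection rule above can be sketched as follows; the string labels and the helper name are illustrative, not drawn from the document.

```python
def implicit_transforms(width, height):
    """Shape-adaptive transform selection: DST-VII on the shorter side
    (and on both sides of a square block) when that side is <= 16
    samples; DCT-II otherwise."""
    def pick(this_side, other_side):
        # DST-VII when this side is the shorter (or equal) one and small enough
        if this_side <= other_side and this_side <= 16:
            return "DST-VII"
        return "DCT-II"
    horizontal = pick(width, height)   # transform applied across the width
    vertical = pick(height, width)     # transform applied across the height
    return horizontal, vertical
```

For example, a 4×32 block would use DST-VII horizontally and DCT-II vertically, while a 32×32 block falls back to DCT-II in both directions.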
  • an 8 ⁇ 8 block may be divided into four 2 ⁇ 8 or four 8 ⁇ 2 sub-blocks.
  • One extreme case of such sub-block based intra prediction is so-called line-based prediction, wherein a block is divided into 1-D lines/columns for prediction.
  • one W ⁇ H (width ⁇ height) block can be split either into H sub-blocks in size of W ⁇ 1 or into W sub-blocks in size of 1 ⁇ H for intra prediction.
  • Each of the resulting lines/columns is coded in the same way as a normal 2-dimensional (2-D) block (as shown in FIGS. 8A, 8B, and 8C ).
  • FIG. 8A illustrates an exemplary set of short-distance intra prediction (SDIP) partitions for an 8 ⁇ 4 block 801
  • FIG. 8B illustrates an exemplary set of short-distance intra prediction (SDIP) partitions for a 4 ⁇ 8 block 803
  • FIG. 8C illustrates an exemplary set of short-distance intra prediction (SDIP) partitions for an arbitrarily-sized block 805 .
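The line-based splitting described above, where a W×H block yields H sub-blocks of size W×1 or W sub-blocks of size 1×H, can be sketched as:

```python
def line_partitions(width, height, direction):
    """Enumerate the 1-D line/column partitions of a W x H block as
    (x, y, w, h) tuples: H sub-blocks of size W x 1 for a horizontal
    split, or W sub-blocks of size 1 x H for a vertical split."""
    if direction == "horizontal":
        return [(0, y, width, 1) for y in range(height)]
    return [(x, 0, 1, height) for x in range(width)]
```

An 8×4 block thus yields four 8×1 rows under a horizontal split, or eight 1×4 columns under a vertical split.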
  • a video coding tool called intra sub-partition (ISP) prediction was introduced into the VVC standard.
  • ISP is very similar to SDIP. Specifically, depending on the block size, the ISP divides the current coding block into 2 or 4 sub-blocks in either a horizontal or vertical direction, and each sub-block contains at least 16 samples.
  • FIGS. 8A, 8B, and 8C illustrate all of the possible partition cases for different coding block sizes. Moreover, the following main aspects are also included in the current ISP design to handle its interaction with the other coding tools in the VVC standard:
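A sketch of the sub-partition count implied by the rule above (the block is split into 2 or 4 sub-blocks and each sub-block must contain at least 16 samples); the exact size thresholds are an assumption inferred from that constraint.

```python
def isp_sub_partitions(width, height):
    """Number of ISP sub-partitions for a coding block: 2 or 4
    sub-blocks, each containing at least 16 samples."""
    samples = width * height
    if samples <= 16:
        return 1   # too small to split; ISP effectively unavailable
    if samples == 32:   # e.g. 4x8 or 8x4 -> two 16-sample sub-partitions
        return 2
    return 4
```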
  • FIG. 9A is a plot of chroma values as a function of luma values, where the plot is used to derive a set of linear model parameters. More specifically, a straight-line 901 relationship between chroma values and luma values is used to derive a set of linear model parameters ⁇ and ⁇ as follows.
  • a cross-component linear model (CCLM) prediction mode is used in VVC, for which the chroma samples are predicted based on the reconstructed luma samples of the same CU by using a linear model as follows:
  • Linear model parameters ⁇ and ⁇ are derived from the straight-line 901 relationship between luma values and chroma values from two samples, which are minimum luma sample A (X A , Y A ) and maximum luma sample B (X B , Y B ) inside the set of neighboring luma samples, as exemplified in FIG. 9A .
  • X_A and Y_A are the x-coordinate (i.e., luma value) and y-coordinate (i.e., chroma value) of sample A.
  • X_B and Y_B are the x-coordinate and y-coordinate of sample B.
  • the linear model parameters ⁇ and ⁇ are obtained according to the following equations.
  • Such a method is also called a min-Max method.
  • the division in the equation above could be avoided and replaced by a multiplication and a shift.
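A fixed-point sketch of the min-Max derivation and the CCLM prediction above, with the division replaced by a multiply and shift as described; the 16-bit precision and the handling of equal luma extremes are assumptions.

```python
def cclm_parameters(x_a, y_a, x_b, y_b, shift=16):
    """Derive the CCLM linear-model parameters from the minimum and
    maximum neighbouring luma samples A=(x_a, y_a) and B=(x_b, y_b),
    replacing alpha = (y_b - y_a) / (x_b - x_a) with fixed point."""
    dx = x_b - x_a
    if dx == 0:
        alpha_fp = 0  # degenerate case: flat luma neighbourhood (assumed)
    else:
        alpha_fp = ((y_b - y_a) << shift) // dx   # alpha in fixed point
    beta = y_a - ((alpha_fp * x_a) >> shift)
    return alpha_fp, beta

def cclm_predict(rec_luma, alpha_fp, beta, shift=16):
    """pred_C = alpha * rec_L + beta, evaluated in fixed point."""
    return ((alpha_fp * rec_luma) >> shift) + beta
```

For A=(100, 40) and B=(200, 90), the slope is 0.5 (32768 in Q16), the offset is −10, and a reconstructed luma value of 150 predicts a chroma value of 65.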
  • FIG. 9B shows the locations of the samples that are used for the derivation of the linear model parameters of FIG. 9A .
  • the above two equations for the linear model parameters ⁇ and ⁇ are applied directly.
  • the neighboring samples of the longer boundary are first subsampled to have the same number of samples as for the shorter boundary.
  • FIG. 9B shows the locations of the left and above samples and the sample of the current block involved in the CCLM mode, including an N ⁇ N set of chroma samples 903 and a 2N ⁇ 2N set of luma samples 905 .
  • these templates can also be used alternatively in the other two LM modes, called the LM_A and LM_L modes.
  • In LM_A mode, only pixel samples in the above template are used to calculate the linear model coefficients. To get more samples, the above template is extended to (W+W). In LM_L mode, only pixel samples in the left template are used to calculate the linear model coefficients. To get more samples, the left template is extended to (H+H). Note that only one luma line (the general line buffer in intra prediction) is used to generate the down-sampled luma samples when the upper reference line is at the CTU boundary.
  • The chroma mode signaling and derivation process is shown in Table 4.
  • Chroma mode coding directly depends on the intra prediction mode of the corresponding luma block. Since a separate block partitioning structure for the luma and chroma components is enabled in I slices, one chroma block may correspond to multiple luma blocks. Therefore, for the chroma DM mode, the intra prediction mode of the corresponding luma block covering the center position of the current chroma block is directly inherited.
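As an illustrative sketch of locating the luma block whose mode is inherited in chroma DM mode; 4:2:0 subsampling is assumed here (so chroma coordinates are doubled to reach luma coordinates), and the function name is hypothetical.

```python
def dm_luma_position(chroma_x, chroma_y, chroma_w, chroma_h):
    """Centre position, in luma coordinates, of the area collocated with
    the current chroma block; the luma block covering this position
    supplies the intra mode inherited by chroma DM mode."""
    centre_x = (chroma_x + chroma_w // 2) * 2
    centre_y = (chroma_y + chroma_h // 2) * 2
    return centre_x, centre_y
```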
  • While the ISP tool in VVC can enhance intra prediction efficiency, there is room to further improve the performance of VVC.
  • some parts of the existing ISP would benefit from further simplification to provide for more efficient codec hardware implementations, and/or to provide improved coding efficiency.
  • several methods are proposed to further improve the ISP coding efficiency, to simplify the existing ISP design, and/or to facilitate improved hardware implementations.
  • FIG. 10 illustrates the generation of reference samples for intra prediction 1007 for all sub-partitions using only reference samples outside of a current coding block 1000 .
  • the current coding block 1000 includes a first sub-partition 1 1001 , a second sub-partition 2 1002 , a third sub-partition 3 1003 , and a fourth sub-partition 4 1004 .
  • it is proposed to generate an intra prediction for each sub-partition 1001 , 1002 , 1003 , and 1004 independently.
  • all of the predictors for the sub-partitions 1001 , 1002 , 1003 , and 1004 can be generated in a parallel manner.
  • the predictors for all the sub-partitions are generated using the same approaches as used in the conventional non-sub-partition intra modes. Specifically, no reconstructed samples of one sub-partition are used to generate the intra prediction samples for any other sub-partition in the same coding unit; all the predictors of each sub-partition 1001 , 1002 , 1003 , 1004 are generated using the reference samples of the current coding block 1000 as shown in FIG. 10 .
  • the width of each sub-partition can be smaller than or equal to two.
  • One detailed example is as follows. Pursuant to the ISP mode in the VVC standard, the dependence of 2 ⁇ N (width by height) subblock prediction on reconstructed values of previously decoded 2 ⁇ N subblocks of the coding block is not allowed so that the minimum width for prediction for subblocks becomes four samples. For example, an 8 ⁇ 8 coding block that is coded using ISP with vertical split is split into four prediction regions each of size 2 ⁇ 8, and the left two 2 ⁇ 8 prediction regions are merged into a first 4 ⁇ 8 prediction region to perform intra prediction. Transform 102 ( FIG. 1 ) circuitry is applied to each 2 ⁇ 8 partition.
  • the right two 2 ⁇ 8 prediction regions are merged into a second 4 ⁇ 8 prediction region to perform intra prediction.
  • Transform 102 circuitry is applied to each 2 ⁇ 8 partition. It is noted that the first 4 ⁇ 8 prediction region uses the neighboring pixels of the current coding block to generate the intra predictors, while the second 4 ⁇ 8 region uses the reconstructed pixels from the first 4 ⁇ 8 region (located to the left of the second 4 ⁇ 8 region) or the neighboring pixels from the current coding block (located to the top of the second 4 ⁇ 8).
  • only the horizontal (HOR) prediction mode (shown as mode 18 in FIG. 4 ), as well as those prediction modes having mode indices smaller than 18 (as indicated in FIG. 4 ) may be used in forming the intra prediction of the horizontal sub-partitions; and only the vertical (VER) prediction mode (i.e. mode 50 in FIG. 4 ), and those prediction modes having mode indices larger than 50 (as indicated in FIG. 4 ) may be used in forming the intra prediction of the vertical sub-partitions.
  • the intra prediction for each horizontal sub-partition can be performed independently and in parallel.
  • using the VER prediction mode and all of the angular prediction modes with mode indices larger than 50, the intra prediction for each vertical sub-partition can be performed independently and in parallel.
  • in some embodiments, only N modes out of the M possible intra prediction modes are allowed for the luma component of an ISP coded block, N being a positive integer.
  • this single allowed mode may be Planar mode.
  • this single allowed mode may be DC mode.
  • this single allowed mode may be one of HOR prediction mode, VER prediction mode, or diagonal (DIA) mode (mode 34 in FIG. 4 ) intra prediction mode.
  • only one mode is allowed for the luma component of an ISP coded block, and this mode may be different according to the sub-partition orientation, i.e. whether it is a horizontal or vertical sub-partition.
  • only the HOR prediction mode is allowed for the horizontal sub-partitioning, while only the VER prediction mode is allowed for the vertical sub-partitioning.
  • only the VER prediction mode is allowed for the horizontal sub-partitioning, while only the HOR prediction mode is allowed for the vertical sub-partitioning.
  • only two modes are allowed for the luma component of an ISP-coded block.
  • Each of respective modes may be selected in response to a corresponding sub-partition orientation, i.e. whether the orientation of the sub-partition is horizontal or vertical.
  • PLANAR and HOR prediction modes are allowed for the horizontal sub-partitioning
  • PLANAR and VER prediction modes are allowed for the vertical sub-partitioning.
  • the codewords may be generated using any of a variety of different processes, including a Truncated Binary (TB) binarization process, a Fixed-length binarization process, a Truncated Rice (TR) binarization process, a k-th order Exp-Golomb binarization process, a Limited EGk binarization process, etc. These binary codeword generation processes are well defined in the specification of HEVC.
  • the truncated Rice binarization with the Rice parameter equal to zero is also known as truncated unary binarization.
  • One set of examples of codewords generated using different binarization methods is presented in Table 5.
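Two of the binarization processes named above can be sketched as follows: truncated unary (i.e., truncated Rice with Rice parameter zero) and fixed-length binarization, both returning codewords as bit strings.

```python
def truncated_unary(value, c_max):
    """Truncated-unary codeword: `value` ones followed by a terminating
    zero, which is dropped when value equals the maximum c_max."""
    bits = "1" * value
    if value < c_max:
        bits += "0"
    return bits

def fixed_length(value, n_bits):
    """Fixed-length binarization: `value` written MSB-first in n_bits bits."""
    return format(value, "0{}b".format(n_bits))
```

So with c_max = 3, the values 0..3 binarize to "0", "10", "110", and "111" under truncated unary, matching the usual HEVC-style construction.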
  • the MPM derivation process for the regular intra mode is directly re-used for ISP mode, and the signaling methods of the MPM flag and MPM index are kept the same as it is in the existing ISP design.
  • in some embodiments, only N_c modes out of the M possible intra prediction modes are allowed for the chroma component of an ISP coded block, where N_c is a positive integer.
  • this single allowed mode may be a direct mode (DM).
  • this single allowed mode may be an LM mode.
  • this single allowed mode may be one of the HOR prediction mode or the VER prediction mode.
  • the direct mode (DM) is configured to apply the same intra prediction mode which is used by the corresponding luma block to the chroma blocks.
  • only two modes are allowed for the chroma component of an ISP coded block.
  • only DM and LM are allowed for the chroma components of an ISP coded block.
  • only four modes are allowed for the chroma component of an ISP coded block.
  • only DM, LM, LM_L and LM_A are allowed for the chroma components of an ISP coded block.
  • the conventional MPM mechanism is not utilized. Instead, fixed binary codewords are used to indicate the selected chroma modes in the bitstream.
  • the chroma intra prediction modes for an ISP coded block can be signaled using the determined binary codewords.
  • the codewords may be generated using different processes including the Truncated Binary (TB) binarization process, the Fixed-length binarization process, the truncated Rice (TR) binarization process, the k-th order Exp-Golomb binarization process, the Limited EGk binarization process, etc.
  • FIG. 11 illustrates a combining of inter predictor samples and intra predictor samples for the first sub-partition 1001 of FIG. 10 .
  • FIG. 12 illustrates a combining of inter predictor samples and intra predictor samples for the second sub-partition 1002 of FIG. 10 .
  • a new prediction mode is provided where the prediction is generated as a weighted combination (e.g. weighted averaging) of the ISP mode and the inter prediction mode.
  • the intra predictor generation for ISP mode is the same as illustrated previously in connection with FIG. 10 .
  • the inter predictor may be generated through the process of merge mode or inter mode.
  • the inter predictor samples 1101 for the current block are generated by performing motion compensation using the merge candidates indicated by the merge index.
  • the intra predictor samples 1103 are generated by performing the intra prediction using the signaled intra mode. It is noted that this process may use the reconstructed samples of the previous sub-partition to generate the intra predictor samples for a non-first sub-partition (i.e., the second sub-partition 1002 ) as shown in FIG. 12 . After the inter and intra predictor samples are generated, they are weight-averaged to generate the final predictor samples for the sub-partition.
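A sketch of the weighted combination described above; the equal 4/8 weights and the rounding offset are illustrative assumptions, since the text only requires a weighted average of the two predictors.

```python
def combine_predictors(inter_pred, intra_pred, w_intra=4, w_total=8):
    """Weight-average per-sample inter and intra predictors to form the
    final predictor samples for one sub-partition."""
    w_inter = w_total - w_intra
    offset = w_total // 2   # rounding offset for integer division
    return [(w_intra * p_intra + w_inter * p_inter + offset) // w_total
            for p_inter, p_intra in zip(inter_pred, intra_pred)]
```

With the default equal weights, an inter sample of 100 and an intra sample of 50 combine to 75.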
  • the combined modes can be treated as an intra mode. Alternatively, the combined modes may be treated as an inter mode or a merge mode instead of intra mode.
  • the value of CBF for the last sub-partition is always inferred as one.
  • the value of CBF for the last sub-partition is always inferred as zero.
  • a method of video coding comprises independently generating a respective intra prediction for each of a plurality of corresponding sub-partitions, wherein each respective intra prediction is generated using a plurality of reference samples from a current coding block.
  • no reconstructed sample from a first sub-partition of the plurality of corresponding sub-partitions is used to generate a respective intra prediction for any other sub-partition of the plurality of corresponding sub-partitions.
  • each of the plurality of corresponding sub-partitions has a width less than or equal to 2.
  • the plurality of corresponding sub-partitions comprises a plurality of vertical sub-partitions and a plurality of horizontal sub-partitions
  • the method further comprises using only a horizontal prediction mode to generate a first set of intra predictions for the plurality of horizontal sub-partitions, and using only a vertical prediction mode to generate a second set of intra predictions for the plurality of vertical sub-partitions.
  • the horizontal prediction mode is performed using a mode index smaller than 18.
  • the vertical prediction mode is performed using a mode index larger than 50.
  • the horizontal prediction mode is performed for each of the plurality of horizontal sub-partitions independently and in parallel.
  • the vertical prediction mode is performed for each of the plurality of vertical sub-partitions independently and in parallel.
  • the plurality of corresponding sub-partitions includes a last sub-partition
  • the method further comprises signaling a coefficients block flag (CBF) value for the last sub-partition.
  • the plurality of corresponding sub-partitions includes a last sub-partition
  • the method further comprises inferring a coefficients block flag (CBF) value for the last sub-partition at a decoder.
  • the coefficients block flag (CBF) value is always inferred as one.
  • the coefficients block flag (CBF) value is always inferred as zero.
  • a method of video coding comprises generating a respective intra prediction for each of a plurality of corresponding sub-partitions using only N modes out of M possible intra prediction modes for a luma component of an intra sub partition (ISP) coded block, wherein M and N are positive integers and N is less than M.
  • each of the plurality of corresponding sub-partitions has a width less than or equal to 2.
  • N is equal to one, such that only a single mode is allowed for the luma component.
  • the single mode is a planar mode.
  • the single mode is a DC mode.
  • the single mode is any one of following modes: a horizontal (HOR) prediction mode, a vertical (VER) prediction mode, or a diagonal (DIA) prediction mode.
  • the single mode is selected in response to a sub-partition orientation, wherein the horizontal (HOR) prediction mode is selected in response to the sub-partition orientation being horizontal, and wherein the vertical (VER) prediction mode is selected in response to the sub-partition orientation being vertical.
  • the single mode is selected in response to a sub-partition orientation, wherein the horizontal (HOR) prediction mode is selected in response to the sub-partition orientation being vertical, and wherein the vertical (VER) prediction mode is selected in response to the sub-partition orientation being horizontal.
  • N is equal to two, such that two modes are allowed for the luma component.
  • a first set of two modes are selected in response to a first sub-partition orientation, and a second set of two modes are selected in response to a second sub-partition orientation.
  • the first set of two modes comprises a planar mode and a horizontal (HOR) prediction mode
  • the first sub-partition orientation comprises horizontal sub-partitioning
  • the second set of two modes comprises a planar mode and a vertical (VER) prediction mode
  • the second sub-partition orientation comprises vertical sub-partitioning
  • each respective mode of the N modes is signaled using a corresponding binary codeword from a set of predetermined binary codewords.
  • the set of predetermined binary codewords is generated using at least one of a truncated binary (TB) binarization process, a fixed-length binarization process, a truncated Rice (TR) binarization process, a truncated unary binarization process, a k-th order Exp-Golomb binarization process, or a limited EGk binarization process.
  • a method of video coding comprises generating an intra prediction using only N modes out of M possible intra prediction modes for a chroma component of an intra sub partition (ISP) coded block, wherein M and N are positive integers and N is less than M.
  • N is equal to one, such that only a single mode is allowed for the chroma component.
  • the single mode is direct mode (DM), linear model (LM) mode, horizontal (HOR) prediction mode, or vertical (VER) prediction mode.
  • N is equal to two, such that two modes are allowed for the chroma component.
  • N is equal to four, such that four modes are allowed for the chroma component.
  • each respective mode of the N modes is signaled using a corresponding binary codeword from a set of predetermined binary codewords.
  • the set of predetermined binary codewords is generated using at least one of a truncated binary (TB) binarization process, a fixed-length binarization process, a truncated Rice (TR) binarization process, a truncated unary binarization process, a k-th order Exp-Golomb binarization process, or a limited EGk binarization process.
  • a method of video coding comprises generating a respective luma intra prediction for each of a plurality of corresponding sub-partitions of an entire intra sub partition (ISP) coded block for a luma component, and for a chroma component, generating a chroma intra prediction for the entire intra sub partition (ISP) coded block.
  • each of the plurality of corresponding sub-partitions has a width less than or equal to 2.
  • a method of video coding comprises generating a first prediction using an intra sub partition mode; generating a second prediction using an inter prediction mode; and combining the first and second predictions to generate a final prediction by applying a weighted averaging to the first prediction and the second prediction.
  • the second prediction is generated using at least one of a merge mode or an inter mode.
  • a method of video decoding comprises independently performing an intra prediction for a plurality of corresponding sub-partitions, wherein the intra prediction is performed using a plurality of reference samples for a current coding block.
  • no reconstructed sample from a first sub-partition of the plurality of corresponding sub-partitions is used to perform the intra prediction for any other sub-partition of the plurality of corresponding sub-partitions.
  • each of the plurality of corresponding sub-partitions has a width less than or equal to 2.
  • the plurality of corresponding sub-partitions are a plurality of vertical sub-partitions.
  • the method of video decoding further comprises generating a set of prediction samples for the plurality of vertical sub-partitions using a vertical prediction mode with a mode index larger than 50.
  • the method of video decoding further comprises performing the intra prediction for the plurality of vertical sub-partitions independently and in parallel.
  • the plurality of corresponding sub-partitions includes a last sub-partition
  • the method further comprises signaling a coefficients block flag (CBF) value for the last sub-partition.
  • the plurality of corresponding sub-partitions includes a last sub-partition
  • the method further comprises inferring a coefficients block flag (CBF) value for the last sub-partition at a decoder.
  • the coefficients block flag (CBF) value is always inferred as one.
  • the coefficients block flag (CBF) value is always inferred as zero.
  • multiple prediction samples for the intra prediction are generated in parallel for the plurality of sub-partitions.
  • a method of video decoding comprises performing an intra prediction for a plurality of corresponding sub-partitions using only N modes out of M intra prediction modes for a luma component of an intra sub partition (ISP) coded block, wherein M and N are positive integers and N is less than M.
  • each of the plurality of corresponding sub-partitions has a width less than or equal to 2.
  • N is equal to one, such that only a single mode is allowed for the luma component.
  • the single mode is a planar mode.
  • the single mode is a DC mode.
  • the single mode is any one of following modes: a horizontal (HOR) prediction mode, a vertical (VER) prediction mode, or a diagonal (DIA) prediction mode.
  • the method of video decoding further comprises selecting the single mode in response to a sub-partition orientation, wherein the horizontal (HOR) prediction mode is selected in response to the sub-partition orientation being horizontal, and wherein the vertical (VER) prediction mode is selected in response to the sub-partition orientation being vertical.
  • the method of video decoding further comprises selecting the single mode in response to a sub-partition orientation, wherein the horizontal (HOR) prediction mode is selected in response to the sub-partition orientation being vertical, and wherein the vertical (VER) prediction mode is selected in response to the sub-partition orientation being horizontal.
  • N is equal to two, such that two modes are allowed for the luma component.
  • a first set of two modes are selected in response to a first sub-partition orientation, and a second set of two modes are selected in response to a second sub-partition orientation.
  • the first set of two modes comprises a planar mode and a horizontal (HOR) prediction mode
  • the first sub-partition orientation comprises horizontal sub-partitioning
  • the second set of two modes comprises a planar mode and a vertical (VER) prediction mode
  • the second sub-partition orientation comprises vertical sub-partitioning
  • the method of video decoding further comprises signaling each mode of the N modes using a corresponding binary codeword from a set of predetermined binary codewords.
  • the set of predetermined binary codewords is generated using at least one of a truncated binary (TB) binarization process, a fixed-length binarization process, a truncated Rice (TR) binarization process, a truncated unary binarization process, a k-th order Exp-Golomb binarization process, and a limited EGk binarization process.
  • a method of video decoding comprises performing an intra prediction using only N modes out of M intra prediction modes for a chroma component of an intra sub partition (ISP) coded block, wherein M and N are positive integers and N is less than M.
  • N is equal to one, such that only a single mode is allowed for the chroma component.
  • the single mode is at least one of direct mode (DM), linear model (LM) mode, horizontal (HOR) prediction mode, and vertical (VER) prediction mode.
  • N is equal to two, such that two modes are allowed for the chroma component.
  • N is equal to four, such that four modes are allowed for the chroma component.
  • the method of video decoding further comprises signaling each mode of the N modes using a corresponding binary codeword from a set of predetermined binary codewords.
  • the set of predetermined binary codewords is generated using at least one of a truncated binary (TB) binarization process, a fixed-length binarization process, a truncated Rice (TR) binarization process, a truncated unary binarization process, a k-th order Exp-Golomb binarization process, and a limited EGk binarization process.
  • a method of video decoding comprises for a luma component, performing a luma intra prediction for a plurality of corresponding sub-partitions of an entire intra sub partition (ISP) coded block, and for a chroma component, performing a chroma intra prediction for the entire intra sub partition (ISP) coded block.
  • each of the plurality of corresponding sub-partitions has a width less than or equal to 2.
  • a method of video decoding comprises performing a first prediction using an intra sub partition mode; performing a second prediction using an inter prediction mode; and combining the first and second predictions to generate a final prediction by applying a weighted averaging to the first prediction and the second prediction.
  • the method of video decoding further comprises performing the second prediction using at least one of a merge mode and an inter mode.
  • Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol.
  • Computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave.
  • Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the implementations described in the present application.
  • a computer program product may include a computer-readable medium.
  • the above methods may be implemented using an apparatus that includes one or more circuitries, which include application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components.
  • the apparatus may use the circuitries in combination with the other hardware or software components for performing the above described methods.
  • Each module, sub-module, unit, or sub-unit disclosed above may be implemented at least partially using the one or more circuitries.

US17/393,367 2019-02-05 2021-08-03 Video coding using intra sub-partition coding mode Pending US20210368205A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/393,367 US20210368205A1 (en) 2019-02-05 2021-08-03 Video coding using intra sub-partition coding mode

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201962801214P 2019-02-05 2019-02-05
PCT/US2020/016888 WO2020163535A1 (en) 2019-02-05 2020-02-05 Video coding using intra sub-partition coding mode
US17/393,367 US20210368205A1 (en) 2019-02-05 2021-08-03 Video coding using intra sub-partition coding mode

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/016888 Continuation WO2020163535A1 (en) 2019-02-05 2020-02-05 Video coding using intra sub-partition coding mode

Publications (1)

Publication Number Publication Date
US20210368205A1 true US20210368205A1 (en) 2021-11-25

Family

ID=71947318

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/393,367 Pending US20210368205A1 (en) 2019-02-05 2021-08-03 Video coding using intra sub-partition coding mode
US17/393,364 Active 2040-11-20 US11936890B2 (en) 2019-02-05 2021-08-03 Video coding using intra sub-partition coding mode

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/393,364 Active 2040-11-20 US11936890B2 (en) 2019-02-05 2021-08-03 Video coding using intra sub-partition coding mode

Country Status (7)

Country Link
US (2) US20210368205A1 (ko)
EP (1) EP3922029A4 (ko)
JP (2) JP2022518612A (ko)
KR (3) KR20220021036A (ko)
CN (2) CN113348671A (ko)
MX (1) MX2021009355A (ko)
WO (1) WO2020163535A1 (ko)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023132622A1 (ko) * 2022-01-04 2023-07-13 LG Electronics Inc. Method and apparatus for intra prediction based on DIMD mode

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130136175A1 (en) * 2011-09-12 2013-05-30 Qualcomm Incorporated Non-square transform units and prediction units in video coding
US20170347103A1 (en) * 2016-05-25 2017-11-30 Arris Enterprises Llc Weighted angular prediction coding for intra coding
US20180302620A1 (en) * 2015-10-15 2018-10-18 Lg Electronics Inc. METHOD FOR ENCODING AND DECODING VIDEO SIGNAL, AND APPARATUS THEREFOR (As Amended)
US20190149828A1 (en) * 2016-05-02 2019-05-16 Industry-University Cooperation Foundation Hanyang University Image encoding/decoding method and apparatus using intra-screen prediction
US20200244980A1 (en) * 2019-01-30 2020-07-30 Tencent America LLC Method and apparatus for improved sub-block partitioning intra sub-partitions coding mode

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8976870B1 (en) * 2006-08-30 2015-03-10 Geo Semiconductor Inc. Block and mode reordering to facilitate parallel intra prediction and motion vector prediction
KR101432775B1 (ko) * 2008-09-08 2014-08-22 SK Telecom Co., Ltd. Method and apparatus for encoding/decoding video using arbitrary pixels within a subblock
EP3328081B1 (en) 2011-11-11 2019-10-16 GE Video Compression, LLC Effective prediction using partition coding
US9491457B2 (en) * 2012-09-28 2016-11-08 Qualcomm Incorporated Signaling of regions of interest and gradual decoding refresh in video coding
KR102379609B1 (ko) 2012-10-01 2022-03-28 GE Video Compression, LLC Scalable video coding using base-layer hints for enhancement-layer motion parameters
CN107113444A (zh) * 2014-11-04 2017-08-29 Samsung Electronics Co., Ltd. Method and apparatus for encoding/decoding video using intra prediction
EP3393126A4 (en) 2016-02-16 2019-04-17 Samsung Electronics Co., Ltd. INTRA PREDICTION METHOD FOR REDUCING INTRA PREDICTION ERRORS AND DEVICE THEREFOR
CN116634143A (zh) 2016-11-25 2023-08-22 KT Corp. Method for encoding and decoding video
CN117041559A (zh) * 2016-12-07 2023-11-10 KT Corp. Method for decoding or encoding video and device for storing video data
JP7187566B2 (ja) * 2018-02-09 2022-12-12 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Partition-based intra coding concept
TWI764015B (zh) * 2018-07-13 2022-05-11 Fraunhofer-Gesellschaft Partitioned intra coding techniques
AU2019358651A1 (en) * 2018-10-12 2021-04-15 Guangdong Oppo Mobile Telecommunications Corp. , Ltd. Method for encoding/decoding image signal and device for same
WO2020159198A1 (ko) * 2019-01-28 2020-08-06 XRIS Corp. Method for encoding/decoding video signal and apparatus therefor
CN113348671A (zh) * 2019-02-05 2021-09-03 Beijing Dajia Internet Information Technology Co., Ltd. Video coding using intra sub-partition coding mode
US20200252608A1 (en) * 2019-02-05 2020-08-06 Qualcomm Incorporated Sub-partition intra prediction
US11418811B2 (en) * 2019-03-12 2022-08-16 Apple Inc. Method for encoding/decoding image signal, and device therefor
US11284073B2 (en) * 2019-07-08 2022-03-22 Hyundai Motor Company Method and apparatus for intra prediction coding of video data involving matrix based intra-prediction

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hernández et al., "CE3: Line-based intra coding mode (Tests 2.1.1 and 2.1.2)," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting: Macao, CN, 3–12 Oct. 2018 (Hernández) *
Santiago De-Luxán-Hernández et al., "CE3: Intra Sub-Partitions Coding Mode (Tests 1.1.1 and 1.1.2)," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-M0102-v5, 13th Meeting: Marrakech, MA, 18 Jan. 2019, 9 pp. (cited in ISR) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210076028A1 (en) * 2018-03-30 2021-03-11 Lg Electronics Inc. Image/video coding method based on intra prediction, and device therefor
US11706405B2 (en) * 2018-03-30 2023-07-18 Lg Electronics Inc. Image/video coding method based on intra prediction involving parsing of MPM index, and apparatus thereof
US20210368193A1 (en) * 2019-02-05 2021-11-25 Beijing Dajia Internet Information Technology Co., Ltd. Video coding using intra sub-partition coding mode
US11936890B2 (en) * 2019-02-05 2024-03-19 Beijing Dajia Internet Information Technology Co., Ltd. Video coding using intra sub-partition coding mode
US20220166968A1 (en) * 2019-03-22 2022-05-26 Lg Electronics Inc. Intra prediction method and apparatus based on multi-reference line in image coding system
US11818344B2 (en) * 2019-07-08 2023-11-14 Lg Electronics Inc. Adaptive loop filter-based video or image coding

Also Published As

Publication number Publication date
EP3922029A1 (en) 2021-12-15
US11936890B2 (en) 2024-03-19
KR20220021036A (ko) 2022-02-21
US20210368193A1 (en) 2021-11-25
KR102517389B1 (ko) 2023-04-03
EP3922029A4 (en) 2022-08-24
JP2023090929A (ja) 2023-06-29
KR20230049758A (ko) 2023-04-13
MX2021009355A (es) 2021-09-14
JP2022518612A (ja) 2022-03-15
KR20210113259A (ko) 2021-09-15
CN113348671A (zh) 2021-09-03
CN113630607A (zh) 2021-11-09
WO2020163535A1 (en) 2020-08-13

Similar Documents

Publication Publication Date Title
US11265540B2 (en) Apparatus and method for applying artificial neural network to image encoding or decoding
US11936890B2 (en) Video coding using intra sub-partition coding mode
US20130272623A1 (en) Intra prediction method and encoding apparatus and decoding apparatus using same
EP3522538A1 (en) Image processing method and apparatus therefor
US11962770B2 (en) Methods and devices for intra sub-partition coding mode
US11991378B2 (en) Method and device for video coding using various transform techniques
WO2019007492A1 (en) INTRA MODE BYPASS LINE MEMORY HARMONIZATION DECODER SIDE WITH DEBLOCKING FILTER
KR20230058033A (ko) Method and apparatus for encoding or decoding video
CN114143548B (zh) Coding of transform coefficients in video coding
US20240187623A1 (en) Video Coding Using Intra Sub-Partition Coding Mode
US12003737B2 (en) Coding of transform coefficients in video coding
WO2023198105A1 (en) Region-based implicit intra mode derivation and prediction
WO2024022144A1 (en) Intra prediction based on multiple reference lines
WO2024016955A1 (en) Out-of-boundary check in video coding
WO2023241347A1 (en) Adaptive regions for decoder-side intra mode derivation and prediction
WO2023197998A1 (en) Extended block partition types for video coding
WO2023217235A1 (en) Prediction refinement with convolution model
WO2023217140A1 (en) Threshold of similarity for candidate list
WO2023198187A1 (en) Template-based intra mode derivation and prediction
WO2023208219A1 (en) Cross-component sample adaptive offset
WO2023154574A1 (en) Methods and devices for geometric partitioning mode with adaptive blending
WO2023158765A1 (en) Methods and devices for geometric partitioning mode split modes reordering with pre-defined modes order
KR20240051257A (ko) Method and apparatus for decoder-side intra mode derivation
WO2023250047A1 (en) Methods and devices for motion storage in geometric partitioning mode
WO2023081322A1 (en) Intra prediction modes signaling

Legal Events

Date Code Title Description
AS Assignment

Owner name: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHEN, YI-WEN;XIU, XIAOYU;WANG, XIANGLIN;AND OTHERS;SIGNING DATES FROM 20200128 TO 20200130;REEL/FRAME:057075/0780

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED