US20220295056A1 - Video signal processing method and device - Google Patents


Info

Publication number
US20220295056A1
Authority
US
United States
Prior art keywords
sample
chroma
neighboring
block
luma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/636,966
Other languages
English (en)
Inventor
Sung Won Lim
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
KT Corp
Original Assignee
KT Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by KT Corp filed Critical KT Corp
Assigned to KT CORPORATION (assignment of assignors interest; see document for details). Assignors: LIM, SUNG WON
Publication of US20220295056A1 publication Critical patent/US20220295056A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/117: Filters, e.g. for pre-processing or post-processing
    • H04N19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/167: Position within a video image, e.g. region of interest [ROI]
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/186: Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/593: Predictive coding involving spatial prediction techniques
    • H04N19/82: Details of filtering operations for video compression, involving filtering within a prediction loop

Definitions

  • the present disclosure relates to a method and a device for processing a video signal.
  • Demand for HD (High Definition) images and UHD (Ultra High Definition) images has increased in a variety of application fields.
  • As image data becomes high-resolution and high-quality, its volume increases relative to existing image data, so transmitting it over existing wired and wireless broadband circuits or storing it on existing storage media increases transmission and storage costs.
  • High efficiency image compression technologies may be utilized to resolve these problems which are generated as image data becomes high-resolution and high-quality.
  • an inter prediction technology which predicts a pixel value included in a current picture from a previous or subsequent picture of the current picture is one such image compression technology;
  • an intra prediction technology which predicts a pixel value included in a current picture by using pixel information within the current picture;
  • an entropy encoding technology which assigns a short code to a value with high appearance frequency and a long code to a value with low appearance frequency; and so on.
  • image data may be effectively compressed and transmitted or stored by using these image compression technologies.
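The entropy-encoding idea above (shorter codes for more frequent values) can be sketched with a minimal Huffman code-length builder; the symbol frequencies below are invented purely for illustration and are not from the disclosure:

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs):
    """Build Huffman code lengths: frequent symbols get shorter codes."""
    # Each heap entry: (total frequency, tiebreaker, {symbol: code length}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        # Merging two subtrees deepens every contained symbol by one bit.
        merged = {s: l + 1 for s, l in {**c1, **c2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

data = "aaaaaabbbc"  # 'a' is most frequent, 'c' least
lengths = huffman_code_lengths(Counter(data))
```

The most frequent symbol ends up with the shortest codeword, which is exactly the property the bullet above describes.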
  • a purpose of the present disclosure is to provide an intra prediction method and device in encoding/decoding a video signal.
  • a purpose of the present disclosure is to provide a method and a device of predicting a chroma component by using a luma component reconstructed sample in encoding/decoding a video signal.
  • a video signal decoding method includes determining whether a CCLM (Cross-component Linear Model) mode is applied to a chroma block; obtaining a filtered neighboring luma sample for a neighboring chroma sample adjacent to the chroma block when it is determined that the CCLM mode is applied to the chroma block; deriving a CCLM parameter by using the neighboring chroma sample and the filtered neighboring luma sample; and generating a prediction block for the chroma block by using the CCLM parameter.
  • a video signal encoding method includes determining whether a CCLM (Cross-component Linear Model) mode is applied to a chroma block; obtaining a filtered neighboring luma sample for a neighboring chroma sample adjacent to the chroma block when it is determined that the CCLM mode is applied to the chroma block; deriving a CCLM parameter by using the neighboring chroma sample and the filtered neighboring luma sample; and generating a prediction block for the chroma block by using the CCLM parameter.
  • the filtered neighboring luma sample may be generated by applying a downsampling filter to a co-located luma sample corresponding to the neighboring chroma sample and neighboring luma samples adjacent to the co-located luma sample.
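As an illustration of such downsampling, the sketch below applies a six-tap [1 2 1; 1 2 1]/8 filter (the shape used by CCLM in VVC for 4:2:0 content) to the co-located luma region, clamping out-of-range taps as a stand-in for boundary padding; the filter shape and the sample values are assumptions for illustration, not the claimed filter:

```python
def downsample_luma(luma, x, y):
    """Six-tap [1 2 1; 1 2 1]/8 downsampling of reconstructed luma at
    chroma position (x, y), assuming 4:2:0 (2x subsampling per axis)."""
    h, w = len(luma), len(luma[0])
    lx, ly = 2 * x, 2 * y  # co-located luma position
    def p(dx, dy):
        # Clamp to the block boundary: a crude stand-in for padding
        # unavailable samples from the boundary.
        r = min(max(ly + dy, 0), h - 1)
        c = min(max(lx + dx, 0), w - 1)
        return luma[r][c]
    total = (p(-1, 0) + 2 * p(0, 0) + p(1, 0) +
             p(-1, 1) + 2 * p(0, 1) + p(1, 1))
    return (total + 4) >> 3  # rounded division by 8

luma = [[10, 10, 20, 20],
        [10, 10, 20, 20],
        [30, 30, 40, 40],
        [30, 30, 40, 40]]
```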
  • when a neighboring sample is unavailable, a reconstructed sample positioned on a boundary in a luma block may be padded to the position of the unavailable sample.
  • a type of the downsampling filter may be determined based on a type of a current image.
  • a type of the downsampling filter may be determined based on a position of the neighboring chroma sample.
  • the neighboring chroma sample may be extracted by subsampling a plurality of neighboring chroma samples neighboring the chroma block.
  • a subsampling rate may be determined based on at least one of a size or a shape of the chroma block.
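A minimal sketch of such subsampling follows, using a hypothetical rate rule tied to the chroma block width; the actual rate rule of the disclosure is not reproduced here:

```python
def subsample_neighbors(neighbors, block_width):
    """Keep every k-th neighboring chroma sample; the rate grows with
    block size so larger blocks do not contribute proportionally more
    samples to the CCLM fit. The rate rule is illustrative only."""
    rate = max(1, block_width // 4)  # hypothetical size-based rate
    return neighbors[::rate]

samples = list(range(16))  # stand-in neighboring chroma samples
```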
  • encoding/decoding efficiency may be improved by predicting a chroma sample by using a luma reconstructed sample.
  • encoding/decoding efficiency of a CCLM mode may be improved by determining a downsampling filter type regardless of availability of a neighboring sample.
  • encoding/decoding efficiency of a CCLM mode may be improved by subsampling neighboring samples to derive a CCLM parameter.
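The CCLM prediction itself is a linear model of the form pred_C = α·rec_L′ + β. One common way to derive (α, β) from the neighboring (filtered luma, chroma) sample pairs is a least-squares fit, used below purely as an illustrative assumption about the derivation:

```python
def derive_cclm_params(luma_nb, chroma_nb):
    """Least-squares fit of chroma ≈ alpha * luma + beta over the
    neighboring sample pairs (illustrative derivation)."""
    n = len(luma_nb)
    sl, sc = sum(luma_nb), sum(chroma_nb)
    sll = sum(l * l for l in luma_nb)
    slc = sum(l * c for l, c in zip(luma_nb, chroma_nb))
    denom = n * sll - sl * sl
    if denom == 0:  # constant luma neighbors: fall back to mean chroma
        return 0.0, sc / n
    alpha = (n * slc - sl * sc) / denom
    beta = (sc - alpha * sl) / n
    return alpha, beta

def predict_chroma(filtered_luma, alpha, beta):
    """Apply the linear model to the filtered co-located luma samples."""
    return [[alpha * l + beta for l in row] for row in filtered_luma]

# Neighbors constructed to follow chroma = 0.5 * luma + 3 exactly.
luma_nb = [10, 20, 30, 40]
chroma_nb = [8, 13, 18, 23]
alpha, beta = derive_cclm_params(luma_nb, chroma_nb)
```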
  • FIG. 1 is a block diagram showing an image encoding device according to an embodiment of the present disclosure.
  • FIG. 2 is a block diagram showing an image decoding device according to an embodiment of the present disclosure.
  • FIG. 3 is a flow diagram showing an intra prediction method according to an embodiment of the present disclosure.
  • FIG. 4 illustrates a type of intra prediction modes.
  • FIG. 5 is a drawing for describing an example of deriving a prediction sample under a planar mode.
  • FIG. 6 illustrates a plurality of reference sample set candidates.
  • FIG. 7 illustrates a method of deriving a prediction sample under a DC mode.
  • FIG. 8 is a flow diagram showing a method of deriving a prediction sample of a chroma component according to an embodiment of the present disclosure.
  • FIG. 9 illustrates a downsampling filter type per chroma sample position when a current image is a HDR image.
  • FIG. 10 illustrates a downsampling filter type per chroma sample position when a current image is not a HDR image.
  • FIGS. 11 and 12 show an example in which a downsampling filter type is determined regardless of availability of neighboring samples adjacent to a luma block.
  • FIG. 13 shows an example in which a scope of reconstructed pixels used to derive a CCLM parameter is set differently according to a CCLM mode type.
  • FIG. 14 illustrates a downsampling filter type applied to a co-located luma sample of a top neighboring sample when a current image is not a HDR image.
  • FIG. 15 illustrates a downsampling filter type applied to a co-located luma sample of a top neighboring sample when a current image is a HDR image.
  • FIG. 16 shows an example to which a filter in a fixed type is applied according to a position of a top neighboring sample.
  • FIG. 17 illustrates a downsampling filter type applied to a co-located luma sample of a left neighboring sample.
  • a term such as first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms are used only to distinguish one component from other components. For example, without departing from the scope of the present disclosure, a first component may be referred to as a second component and, similarly, a second component may also be referred to as a first component.
  • the term "and/or" includes any combination of a plurality of related listed items or any one of a plurality of related listed items.
  • FIG. 1 is a block diagram showing an image encoding device according to an embodiment of the present disclosure.
  • an image encoding device 100 may include a picture partitioning unit 110 , prediction units 120 and 125 , a transform unit 130 , a quantization unit 135 , a rearrangement unit 160 , an entropy encoding unit 165 , a dequantization unit 140 , an inverse-transform unit 145 , a filter unit 150 , and a memory 155 .
  • each construction unit in FIG. 1 is shown independently to represent different characteristic functions in the image encoding device; this does not mean that each construction unit is constituted by separate hardware or a single software unit. That is, the construction units are enumerated separately for convenience of description: at least two construction units may be combined into one, or one construction unit may be partitioned into a plurality of construction units to perform a function. Both such integrated and separated embodiments are included in the scope of the present disclosure as long as they do not depart from its essence.
  • some components may be merely optional components for improving performance rather than necessary components performing an essential function of the present disclosure.
  • the present disclosure may be implemented by including only the construction units necessary for implementing its essence, excluding components used merely to improve performance; a structure including only the necessary components, excluding such optional components, is also included in the scope of the present disclosure.
  • a picture partitioning unit 110 may partition an input picture into at least one processing unit.
  • a processing unit may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU).
  • one picture may be partitioned into a combination of a plurality of coding units, prediction units and transform units and a picture may be encoded by selecting a combination of one coding unit, prediction unit and transform unit according to a predetermined standard (for example, cost function).
  • one picture may be partitioned into a plurality of coding units.
  • a recursive tree structure such as a quad tree structure may be used: with one image or the largest coding unit as a root, a coding unit partitioned into other coding units is split into as many child nodes as the number of partitioned coding units.
  • a coding unit which is no longer partitioned according to a certain restriction becomes a leaf node. In other words, when it is assumed that only square partitioning is possible for one coding unit, one coding unit may be partitioned into up to four other coding units.
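A minimal sketch of this recursive quad-tree partitioning follows, with a caller-supplied split decision standing in for the encoder's cost-based choice:

```python
def quadtree_split(x, y, size, min_size, should_split):
    """Recursively split a square coding unit into four children until
    a leaf is reached (no further split, or minimum size hit)."""
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]  # leaf coding unit
    half = size // 2
    leaves = []
    for dy in (0, half):          # four quadrants
        for dx in (0, half):
            leaves += quadtree_split(x + dx, y + dy, half,
                                     min_size, should_split)
    return leaves

# Split everything down to 8x8, starting from a 32x32 largest coding unit.
leaves = quadtree_split(0, 0, 32, 8, lambda x, y, s: True)
```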
  • a coding unit may be used as a unit for encoding or may be used as a unit for decoding.
  • a prediction unit may be partitioned with at least one square or rectangular shape, etc. in the same size in one coding unit or may be partitioned so that any one prediction unit of prediction units partitioned in one coding unit can have a shape and/or a size different from another prediction unit.
  • when performing intra prediction, if the coding unit is not the smallest coding unit, intra prediction may be performed without partitioning it into a plurality of N×N prediction units.
  • Prediction units 120 and 125 may include an inter prediction unit 120 performing inter prediction and an intra prediction unit 125 performing intra prediction. Whether to perform inter prediction or intra prediction for a prediction unit may be determined, and detailed information according to each prediction method (for example, an intra prediction mode, a motion vector, a reference picture, etc.) may be determined. In this connection, the processing unit in which prediction is performed may differ from the processing unit in which the prediction method and its details are determined. For example, a prediction method, a prediction mode, etc. may be determined per prediction unit while prediction is performed per transform unit. A residual value (a residual block) between a generated prediction block and an original block may be input to the transform unit 130 . In addition, prediction mode information used for prediction, motion vector information, etc. may be encoded in the entropy encoding unit 165 together with the residual value and transmitted to a decoding device.
  • an original block may be encoded as it is and transmitted to a decoding unit without generating a prediction block through prediction units 120 or 125 .
  • An inter prediction unit 120 may predict a prediction unit based on information on at least one picture of a previous picture or a subsequent picture of a current picture, or in some cases, may predict a prediction unit based on information on some encoded regions in a current picture.
  • An inter prediction unit 120 may include a reference picture interpolation unit, a motion prediction unit and a motion compensation unit.
  • a reference picture interpolation unit may receive reference picture information from a memory 155 and generate pixel information equal to or less than an integer pixel in a reference picture.
  • an 8-tap DCT-based interpolation filter having varying filter coefficients may be used to generate pixel information at or below integer-pixel precision in units of 1/4 pixel.
  • a 4-tap DCT-based interpolation filter having varying filter coefficients may be used to generate pixel information at or below integer-pixel precision in units of 1/8 pixel.
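As a hedged illustration of such DCT-based interpolation, the sketch below computes a half-pel sample with the 8-tap coefficients used for luma half-sample positions in HEVC ([-1, 4, -11, 40, 40, -11, 4, -1]/64); the coefficient set is an assumption borrowed from that standard, not taken from this disclosure:

```python
HALF_PEL_TAPS = [-1, 4, -11, 40, 40, -11, 4, -1]  # HEVC-style 8-tap, /64

def interpolate_half_pel(row, i):
    """Half-pel sample between row[i] and row[i+1] via the 8-tap filter.
    Out-of-range taps are clamped to the row ends (edge padding)."""
    acc = 0
    for k, tap in enumerate(HALF_PEL_TAPS):
        j = min(max(i + k - 3, 0), len(row) - 1)
        acc += tap * row[j]
    # Rounded division by 64, clipped to an 8-bit sample range.
    return min(max((acc + 32) >> 6, 0), 255)

flat = [100] * 8                      # constant area: value preserved
edge = [0, 0, 0, 0, 64, 64, 64, 64]   # step edge: value lands midway
```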
  • a motion prediction unit may perform motion prediction based on a reference picture interpolated by a reference picture interpolation unit.
  • various methods such as FBMA (Full search-based Block Matching Algorithm), TSS (Three Step Search), NTS (New Three-Step Search Algorithm), etc. may be used.
  • a motion vector may have a motion vector value in units of 1/2 or 1/4 pixel based on an interpolated pixel.
  • a motion prediction unit may predict a current prediction unit by varying a motion prediction method.
  • various methods such as a skip method, a merge method, an advanced motion vector prediction (AMVP) method, an intra block copy method, etc. may be used.
  • An intra prediction unit 125 may generate a prediction unit based on reference pixel information around a current block, which is pixel information in a current picture.
  • when a neighboring block of the current prediction unit is a block on which inter prediction was performed, and thus a reference pixel is a pixel reconstructed by inter prediction, the reference pixel included in the inter-predicted block may be replaced with reference pixel information of a surrounding block on which intra prediction was performed.
  • in other words, when a reference pixel is unavailable, the unavailable reference pixel information may be replaced with at least one of the available reference pixels.
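A minimal sketch of this substitution, replacing unavailable reference pixels (represented here as None) with the nearest available ones:

```python
def fill_reference_pixels(refs):
    """Replace unavailable (None) reference pixels with the nearest
    available pixel: forward pass, then a backward pass for any
    leading run of unavailable pixels."""
    out = list(refs)
    last = None
    for i, v in enumerate(out):        # propagate forward
        if v is None and last is not None:
            out[i] = last
        elif v is not None:
            last = v
    nxt = None
    for i in range(len(out) - 1, -1, -1):  # propagate backward
        if out[i] is None and nxt is not None:
            out[i] = nxt
        elif out[i] is not None:
            nxt = out[i]
    return out
```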
  • a prediction mode in intra prediction may have a directional prediction mode using reference pixel information according to a prediction direction and a non-directional mode not using directional information when performing prediction.
  • a mode for predicting luma information may be different from a mode for predicting chroma information, and the intra prediction mode information used to predict the luma information, or the predicted luma signal information, may be utilized to predict the chroma information.
  • intra prediction for a prediction unit may be performed based on a pixel at a left position of a prediction unit, a pixel at a top-left position and a pixel at a top position.
  • intra prediction may be performed by using a reference pixel based on a transform unit.
  • intra prediction using N×N partitioning may be used only for the smallest coding unit.
  • a prediction block may be generated after applying an adaptive intra smoothing (AIS) filter to a reference pixel according to a prediction mode.
  • when AIS filtering is applied, the type of AIS filter applied to the reference pixel may vary.
  • an intra prediction mode in a current prediction unit may be predicted from an intra prediction mode in a prediction unit around a current prediction unit.
  • if the intra prediction mode of the current prediction unit is the same as that of a surrounding prediction unit, the fact that the two prediction modes are the same may be transmitted using predetermined flag information; if the prediction mode of the current prediction unit differs from that of the surrounding prediction unit, the prediction mode information of the current block may be encoded by entropy encoding.
  • a residual block may be generated which includes information on a residual value, i.e., the difference between the prediction block generated in prediction units 120 and 125 and the original block of the prediction unit.
  • a generated residual block may be input to a transform unit 130 .
  • a transform unit 130 may transform a residual block, which includes residual value information between an original block and a prediction unit generated through prediction units 120 and 125 , by using a transform method such as DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), or KLT. Whether to apply DCT, DST, or KLT to transform the residual block may be determined based on intra prediction mode information of the prediction unit used to generate the residual block.
  • a quantization unit 135 may quantize values transformed into a frequency domain in a transform unit 130 .
  • a quantization coefficient may be changed according to a block or importance of an image.
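A minimal sketch of such quantization as uniform scalar quantization, where a larger quantization step (an illustrative parameter here, not the disclosure's exact parameter mapping) discards more precision:

```python
def quantize(coeffs, qstep):
    """Uniform scalar quantization: larger qstep -> coarser levels,
    smaller levels to entropy-code, more loss."""
    return [[round(c / qstep) for c in row] for row in coeffs]

def dequantize(levels, qstep):
    """Inverse operation used at the decoder (and in the encoder's
    reconstruction loop): scale the levels back up."""
    return [[l * qstep for l in row] for row in levels]

coeffs = [[100.0, -7.0], [3.0, 0.5]]  # stand-in transform coefficients
levels = quantize(coeffs, 10)
```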
  • a value calculated in a quantization unit 135 may be provided to a dequantization unit 140 and a rearrangement unit 160 .
  • a rearrangement unit 160 may perform rearrangement on coefficient values for a quantized residual value.
  • a rearrangement unit 160 may change a coefficient in a shape of a two-dimensional block into a shape of a one-dimensional vector through a coefficient scanning method. For example, a rearrangement unit 160 may scan from a DC coefficient to a coefficient in a high frequency domain by using a zig-zag scanning method and change it into a shape of a one-dimensional vector. According to a size of a transform unit and an intra prediction mode, instead of zig-zag scanning, vertical scanning where a coefficient in a shape of a two-dimensional block is scanned in a column direction or horizontal scanning where a coefficient in a shape of a two-dimensional block is scanned in a row direction may be used. In other words, which scanning method among zig-zag scanning, vertical directional scanning and horizontal directional scanning will be used may be determined according to a size of a transform unit and an intra prediction mode.
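The zig-zag coefficient scan described above can be sketched as follows, traversing anti-diagonals starting from the DC coefficient and alternating direction on each diagonal:

```python
def zigzag_scan(block):
    """Scan an n x n coefficient block along anti-diagonals, DC first,
    producing the one-dimensional vector described above."""
    n = len(block)
    out = []
    for s in range(2 * n - 1):  # s = r + c indexes the anti-diagonal
        coords = [(s - j, j) for j in range(s + 1)
                  if 0 <= s - j < n and j < n]
        if s % 2 == 1:
            coords.reverse()  # alternate direction each diagonal
        out.extend(block[r][c] for r, c in coords)
    return out

# Values placed so the scan should read 1..9 in order.
block = [[1, 2, 6],
         [3, 5, 7],
         [4, 8, 9]]
```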
  • An entropy encoding unit 165 may perform entropy encoding based on values calculated by a rearrangement unit 160 .
  • Entropy encoding may use various encoding methods such as exponential Golomb, CAVLC (Context-Adaptive Variable Length Coding), and CABAC (Context-Adaptive Binary Arithmetic Coding).
  • An entropy encoding unit 165 may encode a variety of information such as residual value coefficient information in a coding unit and block type information, prediction mode information, partitioning unit information, prediction unit information and transmission unit information, motion vector information, reference frame information, block interpolation information, filtering information, etc. from a rearrangement unit 160 and prediction units 120 and 125 .
  • An entropy encoding unit 165 may perform entropy encoding for a coefficient value in a coding unit which is input from a rearrangement unit 160 .
  • a dequantization unit 140 and an inverse transform unit 145 perform dequantization for values quantized in a quantization unit 135 and perform inverse transform on values transformed in a transform unit 130 .
  • a residual value generated by a dequantization unit 140 and an inverse transform unit 145 may be combined with a prediction unit predicted by a motion prediction unit, a motion compensation unit and an intra prediction unit included in prediction units 120 and 125 to generate a reconstructed block.
  • a filter unit 150 may include at least one of a deblocking filter, an offset correction unit and an adaptive loop filter (ALF).
  • a deblocking filter may remove block distortion which is generated by a boundary between blocks in a reconstructed picture.
  • whether a deblocking filter is applied to a current block may be determined based on pixels included in several rows or columns of the block.
  • a strong filter or a weak filter may be applied according to required deblocking filtering strength.
  • horizontal directional filtering and vertical directional filtering may be set to be processed in parallel.
  • An offset correction unit may correct, in units of pixels, the offset from the original image for an image on which deblocking was performed.
  • after dividing the pixels included in an image into a certain number of regions, the regions where an offset will be applied may be determined; a method of applying an offset to the corresponding region or a method of applying an offset in consideration of edge information of each pixel may be used.
  • Adaptive loop filtering (ALF) may be performed based on a value obtained by comparing a filtered reconstructed image with the original image. After the pixels included in the image are divided into predetermined groups, one filter to be applied to each group may be determined, and filtering may be performed differentially per group. Information on whether ALF will be applied may be transmitted per coding unit (CU) for a luma signal, and the shape and filter coefficients of the ALF filter to be applied may vary per block. Alternatively, an ALF filter of the same shape (fixed shape) may be applied regardless of the characteristics of the target block.
  • a memory 155 may store a reconstructed block or picture calculated through a filter unit 150 and a stored reconstructed block or picture may be provided to prediction units 120 and 125 when performing inter prediction.
  • FIG. 2 is a block diagram showing an image decoding device according to an embodiment of the present disclosure.
  • an image decoding device 200 may include an entropy decoding unit 210 , a rearrangement unit 215 , a dequantization unit 220 , an inverse transform unit 225 , prediction units 230 and 235 , a filter unit 240 , and a memory 245 .
  • an input bitstream may be decoded according to a procedure opposite to that of the image encoding device.
  • An entropy decoding unit 210 may perform entropy decoding according to a procedure opposite to a procedure in which entropy encoding is performed in an entropy encoding unit of an image encoding device. For example, in response to a method performed in an image encoding device, various methods such as Exponential Golomb, CAVLC (Context-Adaptive Variable Length Coding), CABAC (Context-Adaptive Binary Arithmetic Coding) may be applied.
  • An entropy decoding unit 210 may decode information related to intra prediction and inter prediction performed in an encoding device.
  • a rearrangement unit 215 may perform rearrangement on the bitstream entropy-decoded in the entropy decoding unit 210 , based on the rearrangement method used in the encoding unit. Coefficients represented in the form of a one-dimensional vector may be reconstructed and rearranged into coefficients in the form of a two-dimensional block.
  • a rearrangement unit 215 may receive information related to coefficient scanning performed in an encoding unit and perform rearrangement through a method in which scanning is inversely performed based on a scanning order performed in a corresponding encoding unit.
  • a dequantization unit 220 may perform dequantization based on a quantization parameter provided from an encoding device and a coefficient value of a rearranged block.
  • An inverse transform unit 225 may perform, on the quantization result produced in the image encoding device, the inverse of the transform performed in the transform unit, i.e., inverse DCT, inverse DST, or inverse KLT.
  • Inverse transform may be performed based on a transmission unit determined in an image encoding device.
  • In an inverse transform unit of an image decoding device, a transform technique (for example, DCT, DST or KLT) may be selectively performed according to a plurality of pieces of information such as a prediction method, a size of a current block, a prediction direction, etc.
  • Prediction units 230 and 235 may generate a prediction block based on information related to generation of a prediction block provided from an entropy decoding unit 210 and pre-decoded block or picture information provided from a memory 245 .
  • intra prediction for a prediction unit may be performed based on a pixel at a left position of a prediction unit, a pixel at a top-left position and a pixel at a top position, but when a size of a prediction unit is different from a size of a transform unit in performing intra prediction, intra prediction may be performed by using a reference pixel based on a transform unit.
  • intra prediction using N ⁇ N partitioning may be used only for the smallest coding unit.
  • Prediction units 230 and 235 may include a prediction unit determination unit, an inter prediction unit and an intra prediction unit.
  • a prediction unit determination unit may receive a variety of information such as prediction unit information, prediction mode information of an intra prediction method, motion prediction-related information of an inter prediction method, etc. which are input from an entropy decoding unit 210 , divide a prediction unit in a current coding unit and determine whether a prediction unit performs inter prediction or intra prediction.
  • An inter prediction unit 230 may perform inter prediction for a current prediction unit based on information included in at least one picture of a previous picture or a subsequent picture of a current picture including a current prediction unit by using information necessary for inter prediction in a current prediction unit provided from an image encoding device. Alternatively, inter prediction may be performed based on information on some regions which are pre-reconstructed in a current picture including a current prediction unit.
  • Whether a motion prediction method in a prediction unit included in a corresponding coding unit is a skip mode, a merge mode, an AMVP mode, or an intra block copy mode may be determined based on a coding unit.
  • An intra prediction unit 235 may generate a prediction block based on pixel information in a current picture.
  • intra prediction may be performed based on intra prediction mode information in a prediction unit provided from an image encoding device.
  • An intra prediction unit 235 may include an adaptive intra smoothing (AIS) filter, a reference pixel interpolation unit and a DC filter.
  • Whether an AIS filter is applied may be determined according to a prediction mode of a current prediction unit.
  • AIS filtering may be performed for a reference pixel of a current block by using AIS filter information and a prediction mode in a prediction unit provided from an image encoding device. When a prediction mode of a current block is a mode in which AIS filtering is not performed, an AIS filter may not be applied.
  • A reference pixel interpolation unit may interpolate a reference pixel to generate a reference pixel in a fractional-pel unit equal to or less than an integer value. When a prediction mode of a current prediction unit is a prediction mode which generates a prediction block without interpolating a reference pixel, a reference pixel may not be interpolated.
  • a DC filter may generate a prediction block through filtering when a prediction mode of a current block is a DC mode.
  • a reconstructed block or picture may be provided to a filter unit 240 .
  • a filter unit 240 may include a deblocking filter, an offset correction unit and ALF.
  • Information on whether a deblocking filter was applied to a corresponding block or picture and information on whether a strong filter or a weak filter was applied when a deblocking filter was applied may be provided from an image encoding device.
  • Information related to a deblocking filter provided from an image encoding device may be provided in a deblocking filter of an image decoding device and deblocking filtering for a corresponding block may be performed in an image decoding device.
  • An offset correction unit may perform offset correction on a reconstructed image based on a type of offset correction applied to an image in encoding, offset value information, etc.
  • ALF may be applied to a coding unit based on information on whether ALF is applied, ALF coefficient information, etc. provided from an encoding device. Such ALF information may be provided by being included in a specific parameter set.
  • a memory 245 may store a reconstructed picture or block for use as a reference picture or a reference block and provide a reconstructed picture to an output unit.
  • For convenience of description, the term 'coding unit' is used, but it may be a unit which performs decoding as well as encoding.
  • a current block represents a block to be encoded/decoded; it may represent a coding tree block (or a coding tree unit), a coding block (or a coding unit), a transform block (or a transform unit) or a prediction block (or a prediction unit), etc. according to an encoding/decoding step.
  • ‘a unit’ may represent a base unit for performing a specific encoding/decoding process and ‘a block’ may represent a pixel array in a predetermined size. Unless otherwise classified, ‘a block’ and ‘a unit’ may be used interchangeably. For example, in the after-described embodiments, it may be understood that a coding block and a coding unit are used interchangeably.
  • FIG. 3 is a flow diagram showing an intra prediction method according to an embodiment of the present disclosure.
  • an index of a reference sample line of a current block may be determined S 301 .
  • the index may specify one of a plurality of reference sample line candidates.
  • a plurality of reference sample line candidates may include an adjacent reference sample line adjacent to a current block and at least one non-adjacent reference sample line which is not adjacent to a current block.
  • an adjacent reference sample line composed of an adjacent row whose y-axis coordinate is smaller by 1 than an uppermost row of a current block and an adjacent column whose x-axis coordinate is smaller by 1 than a leftmost column of a current block may be used as a reference sample line candidate.
  • a first non-adjacent reference sample line including a non-adjacent row whose y-axis coordinate is smaller by 2 than an uppermost row of a current block and a non-adjacent column whose x-axis coordinate is smaller by 2 than a leftmost column of a current block may be used as a reference sample line candidate.
  • a second non-adjacent reference sample line including a non-adjacent row whose y-axis coordinate is smaller by 3 than an uppermost row of a current block and a non-adjacent column whose x-axis coordinate is smaller by 3 than a leftmost column of a current block may be used as a reference sample line candidate.
  • the index may indicate one of an adjacent reference sample line, a first non-adjacent reference sample line or a second non-adjacent reference sample line.
  • When an index is 0, it means an adjacent reference sample line is selected; when an index is 1, a first non-adjacent reference sample line is selected; and when an index is 2, a second non-adjacent reference sample line is selected.
  • An index specifying one of a plurality of reference sample line candidates may be signaled in a bitstream.
  • An index may be signaled for a luma component block and signaling of an index may be omitted for a chroma component block. When signaling of an index is omitted, the index may be considered 0 and intra prediction may be performed by using an adjacent reference sample line.
  • Reconstructed samples included in a selected reference sample line may be derived as reference samples.
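  • As a rough sketch, the positions making up the selected reference sample line (index 0 for the adjacent line, 1 or 2 for the non-adjacent lines) could be collected as below; the line lengths (2W top, 2H left) are an illustrative assumption.

```python
# Sketch: positions of reference samples on the reference sample line
# chosen by `line_idx` (0: adjacent, 1: first non-adjacent, 2: second).
# Coordinates are (y, x) relative to the top-left sample of the block.

def reference_line_positions(width, height, line_idx):
    off = -1 - line_idx  # row/column offset of the selected line
    top = [(off, x) for x in range(off, 2 * width)]    # top row
    left = [(y, off) for y in range(off, 2 * height)]  # left column
    return top, left
```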
  • an intra prediction mode of a current block may be determined S 302 .
  • FIG. 4 illustrates a type of intra prediction modes.
  • intra prediction modes include a nondirectional prediction mode (DC and Planar) and a directional prediction mode.
  • FIG. 4 illustrates an example in which 65 directional prediction modes are defined.
  • a flag representing whether an intra prediction mode of a current block is the same as a MPM may be signaled in a bitstream.
  • When a value of a MPM flag is 1, it represents that there is the same MPM as an intra prediction mode of a current block. When a value of a MPM flag is 0, it represents that there is no MPM the same as an intra prediction mode of a current block.
  • a flag representing whether an intra prediction mode of a current block is the same as a default intra prediction mode may be signaled.
  • a default intra prediction mode may be at least one of a DC, a Planar, a vertical directional prediction mode or a horizontal directional prediction mode.
  • A flag, intra_notplanar_flag, representing whether an intra prediction mode of a current block is a planar mode may be signaled. When a value of intra_notplanar_flag is 0, it represents that an intra prediction mode of a current block is planar; when the value is 1, it represents that an intra prediction mode of a current block is not planar.
  • an index specifying one of MPM candidates may be signaled.
  • An intra prediction mode of a current block may be set to be the same as a MPM indicated by a MPM index.
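  • The MPM signaling above can be summarized in a small sketch; the function name and the handling of the remainder mode are illustrative assumptions.

```python
# Sketch: when the MPM flag is 1, the intra mode equals the MPM
# candidate picked by the MPM index; otherwise a remainder mode
# (signaled among non-MPM modes) is used.

def decode_intra_mode(mpm_flag, mpm_idx, mpm_list, remainder_mode=None):
    if mpm_flag == 1:
        return mpm_list[mpm_idx]  # same as one of the MPM candidates
    return remainder_mode
```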
  • a prediction sample may be derived S 303 .
  • Under a directional prediction mode, a prediction sample may be derived by using a reference sample positioned on a line which follows an angle of the directional prediction mode.
  • Under a planar mode, a prediction sample may be derived by using a reference sample in a vertical direction of a prediction target sample and a reference sample in a horizontal direction.
  • FIG. 5 is a drawing for describing an example of deriving a prediction sample under a planar mode.
  • T represents a reference sample adjacent to a top-right corner of a current block and L represents a reference sample adjacent to a bottom-left corner of a current block.
  • For a prediction target sample, horizontal directional prediction sample P1 and vertical directional prediction sample P2 may be derived.
  • Horizontal directional prediction sample P 1 may be generated by performing linear interpolation for top-right reference sample T and reference sample H positioned on the same horizontal line as a prediction target sample.
  • Vertical directional prediction sample P 2 may be generated by performing linear interpolation for bottom-left reference sample L and reference sample V positioned on the same vertical line as a prediction target sample.
  • Equation 1 represents an example in which prediction sample P is derived by a weighted sum operation of horizontal directional prediction sample P1 and vertical directional prediction sample P2.
  • P = α×P1 + β×P2 [Equation 1]
  • α represents a weight applied to horizontal directional prediction sample P1 and β represents a weight applied to vertical directional prediction sample P2.
  • Weights α and β may be determined based on a size or a shape of a current block. Concretely, weights α and β may be determined by considering at least one of a width or a height of a current block. In an example, when a width and a height of a current block are the same, weights α and β may be set as the same value. When weights α and β are the same, a prediction sample may be derived as an average value of horizontal directional prediction sample P1 and vertical directional prediction sample P2. On the other hand, when a width and a height of a current block are different, weights α and β may be set differently.
  • In an example, when a width of a current block is greater than a height, weight α may be set as a value larger than weight β, and when a height of a current block is greater than a width, weight β may be set as a value larger than weight α. Alternatively, when a width of a current block is greater than a height, weight β may be set as a value larger than weight α, and when a height of a current block is greater than a width, weight α may be set as a value larger than weight β.
  • Alternatively, weights α and β may be derived from one of a plurality of weight set candidates. When weight set candidates each representing a combination of weights α and β are predefined, weights α and β may be selected to be the same as one of the weight set candidates.
  • An index indicating one of a plurality of weight set candidates may be signaled in a bitstream.
  • the index may be signaled at a block level.
  • an index may be signaled in a unit of a coding block or a transform block.
  • an index may be signaled at a level of a coding tree unit, a slice, a picture or a sequence.
  • Blocks included in an index transmission unit may determine weights α and β by referring to an index signaled at a higher level. In other words, for blocks included in an index transmission unit, weights α and β may be set to be the same.
  • In the example shown in FIG. 5, top-right reference sample T is used to derive horizontal directional prediction sample P1 and bottom-left reference sample L is used to derive vertical directional prediction sample P2.
  • Horizontal directional prediction sample P 1 may be derived by using a reference sample other than a top-right reference sample or vertical directional prediction sample P 2 may be derived by using a reference sample other than a bottom-left reference sample.
  • horizontal directional prediction sample P 1 and vertical directional prediction sample P 2 may be derived by configuring reference sample set candidates for a first reference sample used to derive horizontal directional prediction sample P 1 and a second reference sample used to derive vertical directional prediction sample P 2 and using one selected among a plurality of reference sample set candidates.
  • An index identifying one of a plurality of reference sample set candidates may be signaled in a bitstream.
  • In an example, the index may be signaled in a unit of a block, a sub-block or a sample, and based on the signaled index, a reference sample set candidate may be selected.
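  • A minimal sketch of the planar derivation above (P1 and P2 by linear interpolation, then a weighted sum per Equation 1); the interpolation weights and the default equal weights α = β = 0.5 for a square block are illustrative assumptions.

```python
# Sketch: planar prediction. `top` holds W top reference samples,
# `left` holds H left reference samples; T (top-right) and L
# (bottom-left) are the corner references used for interpolation.

def planar_predict(top, left, width, height, alpha=0.5, beta=0.5):
    T, L = top[width - 1], left[height - 1]
    pred = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # P1: interpolate between left reference H and top-right T
            p1 = ((width - 1 - x) * left[y] + (x + 1) * T) / width
            # P2: interpolate between top reference V and bottom-left L
            p2 = ((height - 1 - y) * top[x] + (y + 1) * L) / height
            pred[y][x] = alpha * p1 + beta * p2  # Equation 1
    return pred
```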
  • FIG. 6 illustrates a plurality of reference sample set candidates.
  • In FIG. 6, the notation (y, x) represents a combination of a y-coordinate and an x-coordinate of each sample. In an example, (2, 1) represents a sample whose y-coordinate is 2 and whose x-coordinate is 1.
  • A first reference sample set candidate may be configured with reference sample T1 adjacent to a top-right corner of a current block and reference sample L1 adjacent to a bottom-left corner of a current block.
  • T1 represents a reference sample at a (-1, W) coordinate and L1 represents a reference sample at a (H, -1) coordinate.
  • W and H represent a width and a height of a current block, respectively.
  • A second reference sample set candidate may be configured with reference sample T2 adjacent to the top of T1 and reference sample L2 adjacent to the left of L1. T2 represents a reference sample at a (-2, W) coordinate and L2 represents a reference sample at a (H, -2) coordinate.
  • A third reference sample set candidate may be configured with reference sample T3 adjacent to the top of T2 and reference sample L3 adjacent to the left of L2. T3 represents a reference sample at a (-3, W) coordinate and L3 represents a reference sample at a (H, -3) coordinate.
  • A fourth reference sample set candidate may be configured with reference sample T4 adjacent to the top of T3 and reference sample L4 adjacent to the left of L3. T4 represents a reference sample at a (-4, W) coordinate and L4 represents a reference sample at a (H, -4) coordinate.
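  • The four candidate pairs above reduce to a simple pattern, sketched here with the (y, x) coordinate convention of FIG. 6:

```python
# Sketch: the i-th reference sample set candidate pairs T_i at
# (-i, W) with L_i at (H, -i), for i = 1..4.

def reference_sample_set_candidates(W, H):
    return [((-i, W), (H, -i)) for i in range(1, 5)]
```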
  • a reference sample set candidate is not limited to a shown example.
  • In an example, a combination of a reference sample whose x-axis coordinate is W and a reference sample whose y-axis coordinate is H may be set as a reference sample set candidate. Alternatively, a reference sample whose x-axis coordinate is W/2 or (W/2)-1, or a reference sample whose y-axis coordinate is H/2 or (H/2)-1, may configure a reference sample set candidate.
  • a reference sample set may be adaptively selected.
  • In an example, a reference sample set configured with reference sample (-1, W) and reference sample (H, -1) may be used.
  • Alternatively, a reference sample set configured with reference sample (-1, W/2) and reference sample (H, -1) may be used.
  • Alternatively, a reference sample set configured with reference sample (-1, W) and reference sample (H/2, -1) may be used.
  • a prediction sample may be derived based on an average value of reference samples.
  • FIG. 7 is to describe a method of deriving a prediction sample under a DC mode.
  • An average value of reference samples adjacent to a current block may be calculated and an average value calculated for all samples in a current block may be set as a prediction value.
  • An average value may be derived based on top reference samples adjacent to the top of a current block and left reference samples adjacent to the left of a current block.
  • an average value may be derived by using only top reference samples or left reference samples.
  • In an example, when a current block is a square block, an average value may be derived by using top reference samples and left reference samples.
  • When a width of a current block is greater than a height, or when a ratio of a width and a height is equal to or greater than (or less than) a predefined value, an average value may be derived by using only top reference samples.
  • When a height of a current block is greater than a width, or when a ratio of a width and a height is equal to or greater than (or less than) a predefined value, an average value may be derived by using only left reference samples.
  • When calculating an average value, a specific reference sample may be excluded. In an example, only reference samples within a scope of k times a standard deviation from an average value may be used to calculate an average value, and other reference samples may be excluded from the calculation of an average value.
  • k is a natural number and may have a value of 1, 2, 3, 4, etc.
  • a value of k may be predefined in an encoder and a decoder.
  • a value of k may be determined based on at least one of a size or a shape of a block.
  • information representing a value of k may be signaled in a bitstream.
  • Whether a reference sample is used may be determined by setting an arbitrary threshold value instead of standard deviation σ.
  • In an example, a reference sample whose absolute difference from an average value is equal to or less than a threshold value may be set to be available when deriving an average value, and a reference sample whose absolute difference from an average value is greater than a threshold value may be set to be unavailable when deriving an average value.
  • a threshold value may be predefined in an encoder and a decoder.
  • a threshold value may be determined based on at least one of a size or a shape of a block.
  • information representing a threshold value may be signaled in a bitstream.
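  • The outlier-excluding average above can be sketched as below; falling back to the plain average when every sample would be excluded is an illustrative assumption.

```python
# Sketch: DC value with outlier exclusion. Samples farther than
# k standard deviations from the mean (or, alternatively, farther
# than a fixed threshold) are dropped before the final average.

from statistics import mean, pstdev

def dc_value(ref_samples, k=2):
    m, sd = mean(ref_samples), pstdev(ref_samples)
    kept = [s for s in ref_samples if abs(s - m) <= k * sd]
    return mean(kept) if kept else m
```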
  • Reference samples may be subsampled to reduce complexity for calculation of an average value and an average value may be calculated by using subsampled reference samples.
  • In an example, among top reference samples, top reference samples positioned at a coordinate of (-1, 2m) may be used to derive an average value, or top reference samples positioned at a coordinate of (-1, 2m+1) may be used to derive an average value.
  • Among left reference samples, left reference samples positioned at a coordinate of (2n, -1) may be used to derive an average value, or left reference samples positioned at a coordinate of (2n+1, -1) may be used to derive an average value.
  • Here, m is an integer from 0 to (W/2)-1 and n is an integer from 0 to (H/2)-1.
  • A scope of m and n may be determined according to a subsampling rate. A subsampling rate may be adaptively determined.
  • a reference sample may be selected at a fixed interval.
  • a value representing an interval between reference samples may be predefined in an encoder and a decoder.
  • an interval between reference samples may be adaptively determined based on at least one of a size or a shape of a current block.
  • an interval between reference samples may be determined based on index information specifying one of a plurality of candidates.
  • an average value may be derived based on one selected among a plurality of set candidates.
  • a first set candidate may include all top reference samples adjoining an upper boundary of a current block and all left reference samples adjoining a left boundary of a current block.
  • a second set candidate may include top reference samples at a position of (-1, 2m) among top reference samples of a current block and left reference samples at a position of (2n, -1) among left reference samples of a current block.
  • a third set candidate may include top reference samples at a position of (-1, 2m+1) among top reference samples of a current block and left reference samples at a position of (2n+1, -1) among left reference samples of a current block.
  • a number and type of set candidates are not limited to the above-described example. It is possible to define more or less set candidates than the above-described example.
  • An encoder may generate a prediction block per set candidate and determine an optimum set candidate by measuring a cost for each prediction block. And, an index specifying an optimum set candidate may be encoded and signaled in a bitstream.
  • an optimum set candidate may be determined based on at least one of a size or a shape of a current block.
  • In an example, one of a set candidate configured with reference samples before subsampling (e.g., at least one of left reference samples and top reference samples) and a set candidate configured with subsampled reference samples may be selected as an optimum set candidate.
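  • The three set candidates described above can be sketched as position lists; coordinates follow the (y, x) convention used above.

```python
# Sketch: set candidates for the DC average. `full` keeps every top
# and left reference position; `even`/`odd` keep the subsampled
# positions (-1, 2m)/(2n, -1) and (-1, 2m+1)/(2n+1, -1).

def dc_set_candidates(W, H):
    full = [(-1, x) for x in range(W)] + [(y, -1) for y in range(H)]
    even = [(-1, x) for x in range(0, W, 2)] + [(y, -1) for y in range(0, H, 2)]
    odd = [(-1, x) for x in range(1, W, 2)] + [(y, -1) for y in range(1, H, 2)]
    return full, even, odd
```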
  • An intra prediction mode of a chroma component may be determined based on an intra prediction mode of a luma component. Concretely, an intra prediction mode of a chroma component is determined by referring to an intra prediction mode of a luma component, but a method of determining an intra prediction mode of a chroma component may be different depending on a chroma mode.
  • a chroma mode may include at least one of a DC mode, a planar mode, a VER mode, a HOR mode or a DM mode.
  • Table 1 shows a method of deriving an intra prediction mode of a chroma component according to a chroma mode.
  • Index information for specifying a chroma mode may be signaled in a bitstream.
  • a chroma mode index indicating one of a DC mode, a planar mode, a VER mode, a HOR mode or a DM mode may be signaled in a bitstream.
  • When a chroma mode is a planar mode, an intra prediction mode of a chroma component may be set as a planar mode, except when an intra prediction mode of a luma component is 0 (planar).
  • When a chroma mode is a VER mode, an intra prediction mode of a chroma component may be set in a vertical direction, except when an intra prediction mode of a luma component is 50 (vertical direction).
  • When a chroma mode is a HOR mode, an intra prediction mode of a chroma component may be set in a horizontal direction, except when an intra prediction mode of a luma component is 18 (horizontal direction).
  • When a chroma mode is a DC mode, an intra prediction mode of a chroma component may be set as DC, except when an intra prediction mode of a luma component is 1 (DC).
  • When a chroma mode is a DM mode, an intra prediction mode of a chroma component may be set to be the same as an intra prediction mode of a luma component.
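  • The Table 1 derivation above can be sketched as follows. The index assignment follows the order in which the modes are described above, and the substitute mode used when a fixed chroma mode collides with the luma mode (66 here) is an assumption borrowed from common codec designs, not stated in the text.

```python
# Sketch of the Table 1 style derivation of a chroma intra mode.
PLANAR, DC, HOR, VER = 0, 1, 18, 50   # intra mode numbers
DM_IDX = 4                            # chroma mode index for DM

def derive_chroma_intra_mode(chroma_mode_idx, luma_mode):
    table = {0: PLANAR, 1: VER, 2: HOR, 3: DC}
    if chroma_mode_idx == DM_IDX:     # DM: reuse the luma mode
        return luma_mode
    mode = table[chroma_mode_idx]
    # collision with the luma mode: substitute mode 66 (assumption)
    return 66 if mode == luma_mode else mode
```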
  • A CCLM (cross-component linear model) mode may be additionally defined as a chroma mode.
  • Table 2 shows an example in which a CCLM mode is added as a new chroma mode.
  • a plurality of CCLM modes may be defined.
  • In an example, indices 4 to 6 indicate a first CCLM mode (LM mode), a second CCLM mode (LM-A (Above) mode) and a third CCLM mode (LM-L (Left) mode), respectively.
  • Based on a chroma mode index, whether a chroma mode is a CCLM mode may be determined.
  • the maximum length of a chroma mode index may be variably determined according to whether a CCLM mode is enabled.
  • When a CCLM mode is disabled, a chroma mode index may indicate one of 0 to 4 as illustrated in Table 1. When a CCLM mode is enabled, a chroma mode index may indicate one of 0 to 7 as illustrated in Table 2.
  • a flag representing whether a chroma mode is a CCLM mode may be signaled.
  • When a value of a flag, cclm_mode_flag, is 1, it represents that a chroma mode is a CCLM mode.
  • When a value of a flag, cclm_mode_flag, is 0, it represents that a chroma mode is not a CCLM mode. In this case, a chroma mode index specifying one of the remaining chroma modes may be signaled in a bitstream.
  • the maximum length of a chroma mode index may have a fixed value regardless of whether a CCLM mode is enabled.
  • When a value of a flag, cclm_mode_flag, is 1, an index for specifying one of a plurality of CCLM modes may be additionally signaled.
  • Based on an index, cclm_mode_idx, any one of an LM mode, an LM-A mode or an LM-L mode may be determined as a chroma mode.
  • Based on a size and/or a shape of a current block, or based on whether a CCLM is applied to a neighboring block, it may be determined whether a flag representing whether a CCLM is applied will be used, whether an index specifying one of a plurality of CCLM modes will be used, or whether a flag and an index representing whether a CCLM mode is applied will be encoded/decoded before a chroma mode index.
  • a prediction sample of a chroma component may be derived based on a reconstructed luma component sample. Accordingly, redundancy between a luma component sample and a chroma component sample may be removed by using a CCLM mode. Equation 2 shows an example in which a prediction sample of a chroma component is derived under a CCLM mode.
  • Pred_C[y, x] = α × Pred_L′[y, x] + β (y, x: coordinates in a block) [Equation 2]
  • Pred_C means a prediction sample of a chroma component, and Pred_L′ represents a reconstructed luma component sample.
  • α and β represent CCLM parameters. Concretely, α represents a weight and β represents an offset.
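  • Equation 2 amounts to a per-sample linear mapping, sketched below; deriving α and β themselves (from neighboring samples) is a separate step not shown here.

```python
# Sketch of Equation 2: Pred_C[y, x] = alpha * Pred_L'[y, x] + beta,
# applied to every sample of the (downsampled) reconstructed luma block.

def cclm_predict(recon_luma, alpha, beta):
    return [[alpha * v + beta for v in row] for row in recon_luma]
```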
  • FIG. 8 is a flow diagram showing a method of deriving a prediction sample of a chroma component according to an embodiment of the present disclosure.
  • First, whether a size of a luma image is the same as that of a chroma image may be determined S 801.
  • When a chroma subsampling format is 4:4:4, it may be determined that a size of a luma image is the same as that of a chroma image.
  • When a chroma subsampling format is 4:2:2 or 4:2:0, a size of a luma image is different from that of a chroma image, and reconstructed samples included in a luma image may be downsampled S 802.
  • a filtered luma block in the same size as a current chroma block may be obtained by applying a downsampling filter to a luma block corresponding to a current chroma block.
  • When a size of a luma image is the same as that of a chroma image, applying a downsampling filter to a luma block may be omitted.
  • a type of a downsampling filter may be determined based on at least one of a type of a current image, a CCLM mode type or a position of a sample. For different types of filters, at least one of a shape, the number of taps or a coefficient of a filter may be different.
  • A type of a current image represents whether a current picture is an HDR (high dynamic range) image.
  • a CCLM mode type represents one of a LM mode, a LM-A mode or a LM-L mode.
  • Information for determining a current image type may be signaled in a bitstream.
  • a flag representing whether a position of a chroma component sample relatively moves compared with a position of a co-located luma sample may be signaled in a bitstream.
  • When the flag is 1, it represents that a position of a chroma component sample is the same as that of a co-located luma sample, i.e., that a current image is an HDR image.
  • When the flag is 0, it represents that a position of a chroma component sample relatively moves down by 0.5 pixel compared with a position of a co-located luma sample, i.e., that a current image is not an HDR image.
  • a position of a co-located luma sample corresponding to a chroma sample may be determined.
  • a position of a co-located luma sample corresponding to a chroma sample at a position of (y, x) may be determined as (y*subHeightC, x*subWidthC).
  • variables subWidthC and subHeightC may be determined based on a chroma subsampling format. In an example, when a chroma subsampling format is 4:4:4, variables subWidthC and subHeightC may be set as 1.
  • When a chroma subsampling format is 4:2:2, variable subWidthC may be set as 2 and variable subHeightC may be set as 1.
  • When a chroma subsampling format is 4:2:0, variables subWidthC and subHeightC may be set as 2.
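  • The co-located luma position derivation above can be sketched directly from subWidthC/subHeightC:

```python
# Sketch: (y, x) chroma position -> co-located luma position
# (y * subHeightC, x * subWidthC), per chroma subsampling format.

SUBSAMPLING = {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:2:0": (2, 2)}

def colocated_luma(y, x, chroma_format):
    sub_w, sub_h = SUBSAMPLING[chroma_format]
    return (y * sub_h, x * sub_w)
```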
  • FIG. 9 illustrates a downsampling filter type per chroma sample position when a current image is an HDR image.
  • A represents a sample at a top-left position of a current chroma block.
  • B represents the remaining samples, excluding top-left sample A, among samples included in the uppermost row of a current chroma block.
  • C represents the remaining samples, excluding top-left sample A, among samples included in the leftmost column of a current chroma block.
  • D represents the remaining samples, excluding samples included in the uppermost row and samples included in the leftmost column of a current chroma block.
  • Variable AvailL represents whether left neighboring blocks of a luma block are available.
  • Variable AvailT represents whether top neighboring blocks of a luma block are available.
  • Variable AvailL and variable AvailT may be determined based on at least one of a CCLM mode type, whether a neighboring block is encoded by intra prediction, whether a luma block and a neighboring block are included in the same coding tree unit, or whether a neighboring block is beyond a boundary of a picture.
  • a cross-shaped downsampling filter may be applied to a luma sample corresponding to chroma sample D.
  • a downsampling filter may be applied to a co-located luma sample of chroma sample D, luma samples neighboring in a horizontal direction of the co-located luma sample and luma samples neighboring in a vertical direction of the co-located luma sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a filter coefficient applied to a neighboring luma sample may be 4:1.
  • a downsampling filter type for a luma sample corresponding to chroma sample C included in the leftmost column in a chroma block may be determined based on variable AvailL.
  • a cross-shaped filter may be applied when left neighboring samples neighboring a luma block are available.
  • a downsampling filter may be applied to a co-located luma sample of chroma sample C, luma samples neighboring in a horizontal direction of the co-located luma sample and luma samples neighboring in a vertical direction of the co-located luma sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a filter coefficient applied to a neighboring luma sample may be 4:1.
  • When left neighboring samples neighboring a luma block are unavailable, a vertical directional filter may be applied.
  • a downsampling filter may be applied to a co-located luma sample of chroma sample C and luma samples neighboring in a vertical direction of the co-located luma sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a filter coefficient applied to a neighboring luma sample may be 2:1.
  • a downsampling filter type for a luma sample corresponding to chroma sample B included in the uppermost row in a chroma block may be determined based on variable AvailT.
  • a cross-shaped filter may be applied when top neighboring samples neighboring a luma block are available.
  • a downsampling filter may be applied to a co-located luma sample of chroma sample B, luma samples neighboring in a horizontal direction of the co-located luma sample and luma samples neighboring in a vertical direction of the co-located luma sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a filter coefficient applied to a neighboring luma sample may be 4:1.
  • a horizontal directional filter may be applied.
  • a downsampling filter may be applied to a co-located luma sample of chroma sample C and luma samples neighboring in a horizontal direction of the co-located luma sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a filter coefficient applied to a neighboring luma sample may be 2:1.
  • a downsampling filter type for a luma sample corresponding to top-left chroma sample A in a chroma block may be determined based on variable AvailL and variable AvailT.
  • a cross-shaped filter may be applied when all of left neighboring samples and top neighboring samples neighboring a luma block are available.
  • a downsampling filter may be applied to a co-located luma sample of chroma sample A, luma samples neighboring in a horizontal direction of the co-located luma sample and luma samples neighboring in a vertical direction of the co-located luma sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a filter coefficient applied to a neighboring luma sample may be 4:1.
  • When left neighboring samples neighboring a luma block are available, but top neighboring samples are unavailable, a vertical directional filter may be applied.
  • When a vertical directional filter is applied, a downsampling filter may be applied to a co-located luma sample of chroma sample A and luma samples neighboring in a vertical direction of the co-located luma sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a filter coefficient applied to a neighboring luma sample may be 2:1.
  • When top neighboring samples neighboring a luma block are available, but left neighboring samples are unavailable, a horizontal directional filter may be applied.
  • When a horizontal directional filter is applied, a downsampling filter may be applied to a co-located luma sample of chroma sample A and luma samples neighboring in a horizontal direction of the co-located luma sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a filter coefficient applied to a neighboring luma sample may be 2:1.
  • a downsampling filter may not be applied to a co-located luma sample corresponding to chroma sample A.
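The per-position filters above can be sketched as integer filters whose weights follow the stated coefficient ratios (4:1 for the cross-shaped filter, 2:1 for the directional filters). The function names, rounding offsets and normalizing shifts below are illustrative assumptions, not taken from the specification.

```python
def cross_filter(c, left, right, top, bottom):
    # 5-tap cross shape: co-located sample weighted 4, each neighbor 1
    # (ratio 4:1); weights sum to 8, hence the assumed >> 3.
    return (4 * c + left + right + top + bottom + 4) >> 3

def vertical_filter(c, top, bottom):
    # 3-tap vertical directional filter: co-located sample weighted 2,
    # each vertical neighbor 1 (ratio 2:1).
    return (2 * c + top + bottom + 2) >> 2

def horizontal_filter(c, left, right):
    # 3-tap horizontal directional filter: co-located sample weighted 2,
    # each horizontal neighbor 1 (ratio 2:1).
    return (2 * c + left + right + 2) >> 2
```

A flat region passes through unchanged (all inputs 100 give 100), which is the usual sanity check for a normalized downsampling filter.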
  • FIG. 10 illustrates a downsampling filter type per chroma sample position when a current image is not a HDR image.
  • a 6-tap downsampling filter may be applied to a co-located luma sample corresponding to a sample (for example, B and D) included in residual columns excluding the leftmost column of a chroma block.
  • a downsampling filter may be applied to the co-located luma sample, a bottom neighboring sample at a bottom position of the co-located luma sample, and horizontal directional neighboring samples respectively neighboring the co-located luma sample and the bottom neighboring sample in a horizontal direction.
  • a ratio of a filter coefficient applied to a co-located luma sample and a bottom neighboring sample and a filter coefficient applied to horizontal directional neighboring samples may be 2:1.
  • a downsampling filter in a different shape may be applied according to variable AvailL.
  • a 6-tap downsampling filter may be applied to a co-located luma sample corresponding to chroma component sample A or C.
  • a 2-tap downsampling filter may be applied to a co-located luma sample corresponding to chroma component sample A or C.
  • a 2-tap downsampling filter may be applied to a co-located luma sample and a bottom neighboring sample at a bottom position of the co-located luma sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a bottom neighboring sample may be 1:1.
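The 6-tap and 2-tap filters used when the current image is not a HDR image can be sketched the same way; the weight ratios (2:1 and 1:1) come from the description above, while the rounding and shifts are assumptions.

```python
def six_tap_filter(c, cb, cl, cr, bl, br):
    # c: co-located luma sample, cb: its bottom neighbor;
    # cl/cr and bl/br: horizontal neighbors of c and cb respectively.
    # Weight ratio (c, cb) : horizontal neighbors = 2 : 1, sum 8.
    return (2 * c + 2 * cb + cl + cr + bl + br + 4) >> 3

def two_tap_filter(c, cb):
    # Co-located sample and its bottom neighbor, ratio 1 : 1.
    return (c + cb + 1) >> 1
```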
  • a downsampling filter type is determined based on at least one of variable AvailL representing whether left neighboring samples of a luma block are available or variable AvailT representing whether top neighboring samples of a luma block are available.
  • a downsampling filter may be determined independently from variable AvailL and variable AvailT. Concretely, regardless of variable AvailL and variable AvailT, a downsampling filter in a fixed shape may be applied.
  • FIGS. 11 and 12 show an example in which a downsampling filter type is determined regardless of availability of neighboring samples adjacent to a luma block.
  • FIG. 11 shows an application aspect of a downsampling filter when a current image is a HDR image
  • FIG. 12 shows an application aspect of a downsampling filter when a current image is not a HDR image.
  • a downsampling filter corresponding to each chroma sample may be determined without considering availability of neighboring samples adjacent to a luma block.
  • a downsampling filter type using neighboring samples adjacent to a luma block may be set to be unavailable.
  • a downsampling filter selected when variable AvailL and variable AvailT are false may be fixedly applied.
  • a cross-shaped filter may be applied to a co-located luma sample corresponding to chroma sample D in a chroma block.
  • a vertical directional filter may be applied to a co-located luma sample corresponding to chroma sample C in a chroma block.
  • a horizontal directional filter may be applied to a co-located luma sample corresponding to chroma sample B in a chroma block.
  • a 6-tap downsampling filter may be applied to a co-located luma sample corresponding to chroma sample D or B in a chroma block.
  • a 2-tap downsampling filter may be applied to a co-located luma sample corresponding to chroma sample A or C in a chroma block.
  • a downsampling filter type using a neighboring sample adjacent to a luma block may be fixedly applied to a luma block.
  • a downsampling filter selected when variable AvailL and variable AvailT are true may be fixedly applied.
  • pixels positioned on a boundary in a luma block may be padded to a position of an unavailable neighboring sample.
  • an unavailable neighboring sample may be replaced with pixels positioned on a boundary in a luma block.
  • When left neighboring samples adjacent to the left of a luma block are unavailable, reconstructed samples included in the leftmost column in a luma block may be padded to the left.
  • When top neighboring samples adjacent to the top of a luma block are unavailable, reconstructed samples included in the uppermost row in a luma block may be padded to the top.
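The padding rule above, replacing unavailable neighbor lines by replicating the block's own boundary samples, can be sketched as follows; the function name and data layout (a 2-D list per block) are illustrative assumptions.

```python
def padded_neighbors(block, left_col, top_row, avail_l, avail_t):
    # block: 2-D list of reconstructed samples inside the luma block.
    # left_col / top_row: the true neighboring lines, or None when absent.
    # When a side is unavailable, the block's boundary samples are
    # replicated outward, as described above.
    if not avail_l:
        left_col = [row[0] for row in block]   # pad leftmost column left
    if not avail_t:
        top_row = list(block[0])               # pad uppermost row up
    return left_col, top_row
```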
  • In FIG. 9 to FIG. 12, it was shown that a different downsampling filter type is determined per category after classifying each chroma sample into one of A to D.
  • classification of chroma samples is performed based on at least one of whether a chroma sample is included in the uppermost row or whether it is included in the leftmost column.
  • conditions for classification of chroma samples may be set differently.
  • When a size of a chroma block is 4×4, it may follow classification conditions shown in FIG. 9 to FIG. 12.
  • chroma samples may be classified based on whether it is included in 2 uppermost rows in a chroma block or whether it is included in 2 leftmost columns in a chroma block.
  • chroma samples included in a 2×2-sized top-left region in a chroma block may be classified as A.
  • residual chroma samples excluding chroma samples classified as A among chroma samples included in 2 uppermost rows in a chroma block may be classified as B.
  • Residual chroma samples excluding chroma samples classified as A among chroma samples included in 2 leftmost columns in a chroma block may be classified as C. Residual chroma samples in a chroma block may be classified as D.
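The classification into A to D can be sketched per sample position. The switch from a 1-sample border (FIG. 9 to FIG. 12, 4×4 blocks) to a 2-sample border for larger blocks follows the description above; the exact size condition and function name are illustrative assumptions.

```python
def classify_chroma_sample(x, y, width, height):
    # Border thickness: 1 line for a 4x4 chroma block, 2 lines for
    # larger blocks (an assumption consistent with the text).
    n = 1 if (width <= 4 and height <= 4) else 2
    in_top = y < n       # in the n uppermost rows
    in_left = x < n      # in the n leftmost columns
    if in_top and in_left:
        return 'A'       # top-left region
    if in_top:
        return 'B'       # remainder of the uppermost rows
    if in_left:
        return 'C'       # remainder of the leftmost columns
    return 'D'           # all residual samples
```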
  • CCLM parameters α and β may be derived based on reconstructed pixels around a chroma block and reconstructed pixels around a luma block (S803).
  • reconstructed pixels around a luma block may be downsampled.
  • CCLM parameters may be derived based on at least one of top neighboring samples adjacent to the top of a luma block and a chroma block or left neighboring samples adjacent to the left of a luma block and a chroma block.
  • When deriving CCLM parameters, whether top neighboring samples and left neighboring samples are used may be determined.
  • CCLM parameters may be derived based on top neighboring samples and left neighboring samples.
  • Under a LM-A mode, CCLM parameters may be derived based on only top neighboring samples.
  • Under a LM-L mode, CCLM parameters may be derived based on only left neighboring samples.
  • the number or a scope of neighboring reconstructed pixels may be determined based on at least one of a size or a shape of a current block, a type of a current image, a CCLM mode type or a chroma subsampling format.
  • FIG. 13 shows an example in which a scope of reconstructed pixels used to derive a CCLM parameter is set differently according to a CCLM mode type.
  • a CCLM parameter may be derived by using W top neighboring samples adjoining an upper boundary of a chroma block and H left neighboring samples adjoining a left boundary of a chroma block.
  • a CCLM parameter may be derived by using 2W top neighboring samples adjacent to the top of a chroma block.
  • CCLM parameters may be derived by using 2H left neighboring samples adjacent to the left of a chroma block.
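The per-mode reference scopes above (W top plus H left samples for LM, 2W top samples for LM-A, 2H left samples for LM-L) can be sketched as a small lookup; the function name is illustrative.

```python
def reference_sample_counts(mode, w, h):
    # Returns (top_count, left_count) of neighboring chroma samples
    # used to derive the CCLM parameters for each mode.
    if mode == 'LM':
        return w, h          # W top + H left samples
    if mode == 'LM-A':
        return 2 * w, 0      # 2W top samples only
    if mode == 'LM-L':
        return 0, 2 * h      # 2H left samples only
    raise ValueError(mode)
```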
  • CCLM parameters may be derived by using only subsampled neighboring samples.
  • Equation 3 represents a position combination of subsampled neighboring samples used to derive a CCLM parameter under a LM mode.
  • a first value in brackets represents a y-coordinate of a neighboring sample and a second value represents an x-coordinate of a neighboring sample.
  • Equation 4 represents a position combination of subsampled neighboring samples used to derive a CCLM parameter.
  • Equation 5 represents a position combination of subsampled neighboring samples used to derive a CCLM parameter.
  • a co-located luma sample corresponding to a neighboring chroma sample may be extracted from a luma image.
  • a luma sample to which a downsampling filter is applied may be derived.
  • a downsampling filter may be applied to a co-located luma pixel and neighboring samples neighboring the co-located luma pixel.
  • a type of a downsampling filter may be determined based on a type of a current image, a position of a co-located luma pixel, variable AvailL, variable AvailT or whether a current block adjoins a boundary of a coding tree unit. For different types of filters, at least one of a shape, the number of taps or a coefficient of a filter may be different.
  • Reconstructed samples included in up to N lines from an upper boundary of a luma block may be used to derive a filtered luma sample for a top neighboring sample neighboring the top of a chroma block.
  • the number of lines may be determined based on at least one of a chroma subsampling format, a CCLM mode type, an image type, a shape or a size of a current block, whether a current block adjoins a boundary of a coding tree unit or a type of a downsampling filter.
  • FIG. 14 illustrates a downsampling filter type applied to a collocated luma sample of a top neighboring sample when a current image is not a HDR image.
  • FIG. 15 illustrates a downsampling filter type applied to a collocated luma sample of a top neighboring sample when a current image is a HDR image.
  • variable AvailTL may be derived by applying an AND operator between variable AvailL and variable AvailT.
  • When both left neighboring samples and top neighboring samples are available, a value of variable AvailTL may be set as 1, and when at least one of left neighboring samples or top neighboring samples is unavailable, variable AvailTL may be set as 0.
  • a downsampling filter type applied to a co-located luma sample may be determined based on at least one of variable AvailL, variable AvailT, a position of a top neighboring sample and whether a current block adjoins an upper boundary of a coding tree unit.
  • a 6-tap downsampling filter may be applied to a co-located luma sample.
  • a downsampling filter may be applied to the co-located luma sample, a bottom neighboring sample at a bottom position of the co-located sample, and horizontal directional neighboring samples respectively neighboring the co-located sample and the bottom neighboring sample in a horizontal direction.
  • a ratio of a filter coefficient applied to a co-located sample and a bottom neighboring sample and a filter coefficient applied to horizontal directional neighboring samples may be 2:1.
  • a horizontal directional filter may be applied to a co-located luma sample.
  • a downsampling filter may be applied to a co-located luma sample and luma samples neighboring in a horizontal direction of the co-located luma sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a filter coefficient applied to a neighboring luma sample may be 2:1.
  • a 6-tap downsampling filter may be applied to a co-located luma sample.
  • a horizontal directional downsampling filter may be applied.
  • a 2-tap vertical directional filter may be applied to a co-located luma sample.
  • a 2-tap vertical directional filter may be applied to a co-located luma sample and a bottom neighboring sample at a bottom position of the co-located luma sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a bottom neighboring sample may be 1:1.
  • When top neighboring samples and left neighboring samples are unavailable and a current block adjoins an upper boundary of a coding tree unit, it may be set not to apply a downsampling filter to a co-located luma sample.
  • a cross-shaped filter may be applied to a co-located luma sample.
  • a downsampling filter may be applied to a co-located luma sample, luma samples neighboring in a horizontal direction of the co-located sample and luma samples neighboring in a vertical direction of the co-located luma sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a filter coefficient applied to a neighboring luma sample may be 4:1.
  • a horizontal directional filter may be applied to a co-located luma sample.
  • a downsampling filter may be applied to a co-located luma sample and luma samples neighboring in a horizontal direction of the co-located luma sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a filter coefficient applied to a neighboring luma sample may be 2:1.
  • a cross-shaped filter may be applied to a co-located luma sample.
  • a horizontal directional downsampling filter may be applied.
  • When a current coding block does not adjoin an upper boundary of a coding tree unit, but at least one of top neighboring samples and left neighboring samples are unavailable, it may be set not to apply a downsampling filter to a co-located luma sample.
  • When top neighboring samples and left neighboring samples are unavailable and a current block adjoins an upper boundary of a coding tree unit, it may be set not to apply a downsampling filter to a co-located luma sample.
  • a downsampling filter type applied to a co-located luma sample is determined based on an image type, variable AvailTL and whether a current block adjoins an upper boundary of a coding tree unit.
  • a downsampling filter type may be determined regardless of at least one of an image type, variable AvailTL or whether a current block adjoins an upper boundary of a coding tree unit.
  • FIG. 16 shows an example to which a filter in a fixed type is applied according to a position of a top neighboring sample.
  • a horizontal directional filter may be applied to a co-located luma sample.
  • a downsampling filter may not be applied to a co-located luma sample.
  • a first type of a downsampling filter may be applied to all top neighboring samples.
  • a second type of a downsampling filter may be applied to all top neighboring samples.
  • a first type downsampling filter may represent a 3-tap horizontal directional filter and a second type downsampling filter may represent a 6-tap filter.
  • When a neighboring sample adjacent to a luma block is unavailable, available reconstructed samples may be padded to a position of the unavailable neighboring sample.
  • When left neighboring samples adjacent to the left of a luma block are unavailable, reconstructed samples included in the leftmost column in a luma block may be padded to the left.
  • When top neighboring samples adjacent to the top of a luma block are unavailable, reconstructed samples included in the uppermost row in a luma block may be padded to the top.
  • a downsampling filter type may be determined without considering availability of neighboring samples.
  • reconstructed samples included in up to M lines from a left boundary of a luma block may be used.
  • the number of lines may be determined based on at least one of a chroma subsampling format, a CCLM mode type, an image type, a shape or a size of a current block, whether a current block adjoins a boundary of a coding tree unit or a type of a downsampling filter.
  • To apply a downsampling filter to a co-located luma sample at a left position of a luma block, reconstructed samples included in 3 columns around a left boundary of a luma block are used.
  • a downsampling filter may be applied to a co-located luma pixel and neighboring pixels neighboring the co-located luma pixel.
  • FIG. 17 illustrates a downsampling filter type applied to a co-located luma sample of a left neighboring sample.
  • a downsampling filter type applied to a co-located luma sample may be determined based on at least one of an image type, variable AvailL, variable AvailT and a position of a left neighboring sample.
  • a cross-shaped filter may be applied to a co-located luma sample.
  • a downsampling filter may be applied to a co-located luma sample, luma samples neighboring in a horizontal direction of the co-located sample and luma samples neighboring in a vertical direction of the co-located luma sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a filter coefficient applied to a neighboring luma sample may be 4:1.
  • a cross-shaped filter may be applied to a co-located luma sample regardless of availability of left neighboring reference samples and top neighboring reference samples.
  • a horizontal directional downsampling filter may be applied.
  • a downsampling filter may be applied to a co-located luma sample and luma samples neighboring in a horizontal direction of the co-located sample.
  • a ratio of a filter coefficient applied to a co-located luma sample and a filter coefficient applied to horizontal directional neighboring samples may be 2:1.
  • a 6-tap downsampling filter may be applied to a co-located luma sample.
  • a downsampling filter may be applied to the co-located luma sample, a bottom neighboring sample at a bottom position of the co-located sample, and horizontal directional neighboring samples respectively neighboring the co-located sample and the bottom neighboring sample in a horizontal direction.
  • a ratio of a filter coefficient applied to a co-located sample and a bottom neighboring sample and a filter coefficient applied to horizontal directional neighboring samples may be 2:1.
  • a downsampling filter type applied to a co-located luma sample is determined based on an image type and variable AvailTL.
  • a downsampling filter type may be determined regardless of at least one of an image type or variable AvailTL.
  • FIG. 17 shows an example to which a filter in a fixed type is applied according to a type of a current image.
  • a horizontal directional filter may be applied to a co-located luma sample.
  • a 6-tap downsampling filter may be applied to a co-located luma sample.
  • When a neighboring sample adjacent to a luma block is unavailable, available reconstructed samples may be padded to a position of the unavailable neighboring sample.
  • When left neighboring samples adjacent to the left of a luma block are unavailable, reconstructed samples included in the leftmost column in a luma block may be padded to the left.
  • When top neighboring samples adjacent to the top of a luma block are unavailable, reconstructed samples included in the uppermost row in a luma block may be padded to the top.
  • a downsampling filter type may be determined without considering availability of neighboring samples.
  • Information on a method of determining a downsampling filter may be signaled in a bitstream.
  • information specifying one of a first method in which a downsampling filter type is determined by considering at least one of an image type, variable AvailTL or whether a current block adjoins an upper boundary of a coding tree unit or a second method in which a downsampling filter type is determined regardless of at least one of the conditions may be signaled in a bitstream.
  • At least one of a first method or a second method may be selected based on at least one of a type of a current image, a size and/or a shape of a current block, whether a current block is adjacent to an upper boundary of a coding tree unit, a chroma subsampling format, a CCLM mode type, variable AvailL, variable AvailT or variable AvailTL.
  • a filter type may be determined by considering only whether a current image is a HDR image.
  • a current image is a HDR image
  • only a first type of a downsampling filter may be fixedly applied to a top neighboring sample or a left neighboring sample
  • only a second type of a downsampling filter may be fixedly applied to a top neighboring sample or a left neighboring sample.
  • the number of taps or a coefficient may be different.
  • padding may be pre-performed for N rows around an upper boundary of a luma block and/or M columns around a left boundary of a luma block.
  • all neighboring samples around a luma block may be set to be available, and accordingly, a filter type may be determined regardless of whether neighboring samples are available.
  • a position of subsampled neighboring samples may be set as a combination different from a described example.
  • By using Equation 6 instead of Equation 3, subsampled neighboring samples may be derived.
  • W and H represent a width and a height of a current chroma block, respectively.
  • a combination of variables a, b and c may be determined as one of the combinations shown in the following Table 3.
  • It may be effective to subsample neighboring samples by using Equation 6 with one combination from Table 3, instead of subsampling neighboring samples by using Equation 3, to properly keep an interval between neighboring samples to be subsampled.
  • When Equation 3 is used, an interval between 2 samples to be subsampled may become nonuniform when a width and a height of a current block differ.
  • When Equation 6 is used, an interval between 2 samples to be subsampled may be kept more uniform compared with a case in which Equation 3 is used.
  • variable combinations defined in Table 3 may be selected based on at least one of a size or a shape of a current block, a CCLM mode type, a type of a current image and a chroma subsampling format.
  • at least one of a width or a height of a current block may be set as variable c and a combination of (a, b) corresponding to determined variable c may be called.
  • When at least one of a width or a height of a current block is 4, a combination of (a, b) may be set as (3, 5); when at least one of a width or a height of a current block is 8, a combination of (a, b) may be set as (5, 11); when at least one of a width or a height of a current block is 16, a combination of (a, b) may be set as (11, 21); and when at least one of a width or a height of a current block is equal to or greater than 32, a combination of (a, b) may be set as (21, 43).
  • a combination of variables for a horizontal direction may be set to be different from a combination of variables for a vertical direction.
  • subsampling for top neighboring samples of a current block may be performed by calling a combination of (a, b) corresponding to set variable c after setting W, a width of a current block, as variable c.
  • subsampling for left neighboring samples of a current block may be performed by calling a combination of (a, b) corresponding to set variable c after setting H, a height of a current block, as variable c.
  • information for specifying at least one of variables a, b and c may be signaled in a bitstream.
  • information for specifying each of variables a, b and c may be signaled through a sequence parameter set or a picture parameter set.
  • index information specifying one of combinations of variables a, b and c may be signaled in a bitstream.
  • a value of variables a, b and c may be fixed in an encoder and a decoder.
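The example mapping from variable c (a block dimension) to a combination of (a, b) given above can be sketched directly; the function name is illustrative, and the combinations are the ones listed in the text.

```python
def ab_for_c(c):
    # Combination of (a, b) called for variable c, following the
    # example combinations described above (a subset of Table 3).
    if c == 4:
        return (3, 5)
    if c == 8:
        return (5, 11)
    if c == 16:
        return (11, 21)
    if c >= 32:
        return (21, 43)
    raise ValueError(c)
```

For top neighbors, c would be set to the block width W; for left neighbors, to the block height H, per the preceding bullets.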
  • the number of chroma samples to be subsampled may be increased or decreased compared with those described in Equations 3 to 5.
  • the number of chroma samples to be subsampled may be determined based on at least one of a size or a shape of a current block, a type of a current image, whether a current block adjoins a boundary of a coding tree unit, a CCLM mode type, a chroma subsampling format, variable AvailT, variable AvailL or variable AvailTL.
  • Equations 3 to 5 may be changed into the following Equations 7 to 9.
  • Equation 7 to Equation 9 represent a combination of subsampled neighboring samples under a LM mode, a LM-A mode and a LM-L mode, respectively.
  • the number of subsampled neighboring samples may be adjusted based on a size of a current block. In an example, when a size W×H of a current block is smaller than a threshold value, 4 neighboring samples may be selected. On the other hand, when W×H is equal to or greater than the threshold value, 8 neighboring samples may be selected.
  • a downsampling filter is applied to a co-located luma sample corresponding to a chroma sample to derive a CCLM parameter.
  • a CCLM parameter may be derived without applying a downsampling filter to a co-located luma sample.
  • When a downsampling filter is not applied, it represents that a value of a co-located luma sample is used as it is in deriving a CCLM parameter.
  • whether a downsampling filter is applied may be determined according to the number of neighboring samples. In an example, when 4 neighboring samples are selected, a downsampling filter may be applied to a co-located luma sample. On the other hand, when 8 neighboring samples are selected, a downsampling filter may not be applied to a co-located luma sample.
  • a downsampling filter type may be adaptively determined according to the number of neighboring samples. In an example, for a case in which 4 neighboring samples are selected and a case in which 8 neighboring samples are selected, at least one of a shape, the number of taps or a coefficient of a downsampling filter may be different.
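The size-dependent sample count and the filter on/off decision above can be sketched together. The threshold value used here (64) is an assumed example; the text only states that a W×H threshold separates the 4-sample and 8-sample cases.

```python
def num_subsampled_neighbors(w, h, threshold=64):
    # Fewer neighbors are selected for small blocks; threshold is an
    # illustrative assumption, not taken from the specification.
    return 4 if w * h < threshold else 8

def apply_downsampling(n_neighbors):
    # Per the example above: apply the downsampling filter when 4
    # neighbors are selected, use co-located samples as-is for 8.
    return n_neighbors == 4
```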
  • Specificity of a reconstructed pixel may be removed or alleviated by applying a low pass filter to a chroma image or a luma image.
  • a low pass filter may be applied before selecting neighboring samples around a chroma block.
  • block characteristics may be reflected better when a CCLM parameter is derived.
  • a low pass filter may be set to be applied only to a chroma image. This is because, for a reconstructed pixel in a luma image, specificity of a reconstructed pixel may already be removed or alleviated through a downsampling filter.
  • a low pass filter may be applied both to a chroma image and a luma image.
  • Information representing whether a low pass filter will be applied may be signaled in a bitstream.
  • the information may be signaled for each of a luma component and a chroma component.
  • CCLM parameters α and β may be derived by using neighboring samples neighboring a chroma block and neighboring samples neighboring a luma block.
  • luma component neighboring samples may be classified into 2 groups.
  • the classification may be based on a value of neighboring samples. In an example, when 4 neighboring samples are selected, 2 largest values among 4 neighboring samples may be classified into a first group and 2 smallest values may be classified into a second group.
  • Chroma component neighboring samples may be also classified into 2 groups.
  • the classification may be based on a classification result of luma components. In other words, if a neighboring luma sample is classified into an N-th group, a neighboring chroma sample corresponding to it may also be classified into the N-th group.
  • a sample average value per each group may be derived.
  • an average value Xb may be derived by averaging neighboring luma samples belonging to a first group and an average value Xa may be derived by averaging neighboring luma samples belonging to a second group.
  • an average value Yb may be derived by averaging neighboring chroma samples belonging to a first group and an average value Ya may be derived by averaging neighboring chroma samples belonging to a second group.
  • a CCLM parameter may be derived based on derived average values.
  • weight α and offset β may be derived based on the following Equations 10 and 11.
  • a chroma sample may be predicted by using a downsampled luma sample (S804).
  • a chroma prediction sample may be derived by adding offset β to a multiplication of a downsampled luma sample and weight α.
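The min-max style derivation described above, with the group averages Xa, Xb, Ya, Yb feeding the weight and offset, can be sketched in floating point. Equations 10 and 11 are not reproduced in this text, so the α and β formulas below are the natural two-point fit consistent with the description (real codecs use integer arithmetic with shifts); the function names are illustrative.

```python
def derive_cclm_params(luma_nb, chroma_nb):
    # First group: indices of the 2 largest luma neighbors; second
    # group: the 2 smallest. Chroma samples follow the luma grouping.
    order = sorted(range(len(luma_nb)), key=lambda i: luma_nb[i])
    lo, hi = order[:2], order[-2:]
    xa = sum(luma_nb[i] for i in lo) / 2.0     # average of second group
    xb = sum(luma_nb[i] for i in hi) / 2.0     # average of first group
    ya = sum(chroma_nb[i] for i in lo) / 2.0
    yb = sum(chroma_nb[i] for i in hi) / 2.0
    alpha = 0.0 if xb == xa else (yb - ya) / (xb - xa)
    beta = ya - alpha * xa
    return alpha, beta

def predict_chroma(luma_ds, alpha, beta):
    # Chroma prediction: downsampled luma scaled by alpha, plus beta.
    return alpha * luma_ds + beta
```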
  • Luma component neighboring samples and chroma component neighboring samples may be classified into 3 or more groups.
  • n of N luma neighboring samples may be classified into a first group
  • m luma neighboring samples may be classified into a second group
  • (N-n-m) luma neighboring samples may be classified into a third group.
  • the classification may be based on a value of neighboring luma samples.
  • Chroma component neighboring samples may be also classified into 3 or more groups by referring to a classification result of luma component neighboring samples.
  • a CCLM parameter may be derived by using only 2 groups of 3 or more groups.
  • a CCLM parameter may be derived by using a group including a neighboring sample with the largest value and a group including a neighboring sample with the smallest value.
  • chroma neighboring samples are classified by referring to a classification result of luma neighboring samples.
  • luma neighboring samples may be classified by referring to a classification result of chroma neighboring samples after classifying chroma neighboring samples.
  • neighboring luma samples and neighboring chroma samples may be independently classified into a plurality of groups.
  • each component (e.g., a unit, a module, etc.) configuring a block diagram in the above-described embodiments may be implemented as a hardware device or software, and a plurality of components may be combined and implemented as one hardware device or software.
  • the above-described embodiments may be recorded in a computer readable recording medium by being implemented in a form of program instructions which may be performed by a variety of computer components.
  • the computer readable recording medium may include program instructions, data files, data structures, etc.
  • Computer readable recording media include magnetic media such as a hard disk, a floppy disk and a magnetic tape, optical recording media such as CD-ROM and DVD, magneto-optical media such as a floptical disk, and hardware devices specially configured to store and perform program instructions, such as ROM, RAM, flash memory, etc.
  • the hardware device may be configured to operate as one or more software modules in order to perform processing according to the present disclosure and vice versa.
  • the present disclosure may be applied to an electronic device which may encode/decode an image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
US17/636,966 2019-08-28 2020-08-28 Video signal processing method and device Pending US20220295056A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR20190106040 2019-08-28
KR10-2019-0106040 2019-08-28
PCT/KR2020/011547 WO2021040458A1 (ko) 2019-08-28 2020-08-28 Video signal processing method and device

Publications (1)

Publication Number Publication Date
US20220295056A1 true US20220295056A1 (en) 2022-09-15

Family

ID=74685693

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/636,966 Pending US20220295056A1 (en) 2019-08-28 2020-08-28 Video signal processing method and device

Country Status (4)

Country Link
US (1) US20220295056A1 (zh)
KR (1) KR20210027173A (zh)
CN (1) CN114303369A (zh)
WO (1) WO2021040458A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220337847A1 (en) * 2019-12-30 2022-10-20 Beijing Dajia Internet Information Technology Co., Ltd. Cross component determination of chroma and luma components of video data
US20240015279A1 (en) * 2022-07-11 2024-01-11 Tencent America LLC Mixed-model cross-component prediction mode

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023197193A1 (zh) * 2022-04-12 2023-10-19 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Encoding/decoding method and apparatus, encoding device, decoding device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040076237A1 (en) * 2001-11-29 2004-04-22 Shinya Kadono Coding distortion removal method, moving picture coding method, moving picture decoding method, and apparatus for realizing the same, program
US20180220138A1 (en) * 2015-07-08 2018-08-02 Vid Scale, Inc. Enhanced chroma coding using cross plane filtering
US20200195970A1 (en) * 2017-04-28 2020-06-18 Sharp Kabushiki Kaisha Image decoding device and image encoding device
US20200382769A1 (en) * 2019-02-22 2020-12-03 Beijing Bytedance Network Technology Co., Ltd. Neighboring sample selection for intra prediction

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9288500B2 (en) * 2011-05-12 2016-03-15 Texas Instruments Incorporated Luma-based chroma intra-prediction for video coding
US10455249B2 (en) * 2015-03-20 2019-10-22 Qualcomm Incorporated Downsampling process for linear model prediction mode
US11025903B2 (en) * 2017-01-13 2021-06-01 Qualcomm Incorporated Coding video data using derived chroma mode
JP2021005741A (ja) * 2017-09-14 2021-01-14 シャープ株式会社 画像符号化装置及び画像復号装置
GB2567249A (en) * 2017-10-09 2019-04-10 Canon Kk New sample sets and new down-sampling schemes for linear component sample prediction
WO2019131349A1 (ja) * 2017-12-25 2019-07-04 シャープ株式会社 画像復号装置、画像符号化装置
WO2019135636A1 (ko) * 2018-01-05 2019-07-11 SK Telecom Co., Ltd. Method and apparatus for image encoding/decoding using correlation between YCbCr components
KR20190083956A (ko) * 2018-01-05 2019-07-15 SK Telecom Co., Ltd. Method and apparatus for image encoding/decoding using correlation between YCbCr components

Also Published As

Publication number Publication date
KR20210027173A (ko) 2021-03-10
CN114303369A (zh) 2022-04-08
WO2021040458A1 (ko) 2021-03-04

Similar Documents

Publication Publication Date Title
US11445177B2 (en) Method and apparatus for processing video signal
US11930161B2 (en) Method and apparatus for processing video signal
US20240121422A1 (en) Method and apparatus for processing video signal
US11805255B2 (en) Method and apparatus for processing video signal
US11438582B2 (en) Video signal processing method and device for performing intra-prediction for an encoding/decoding target block
US11743481B2 (en) Method and apparatus for processing video signal
US20230300324A1 (en) Method and device for processing video signal
US20230053392A1 (en) Method and apparatus for processing video signal
US11457218B2 (en) Method and apparatus for processing video signal
US11350086B2 (en) Method and apparatus for processing video signal
US20220295056A1 (en) Video signal processing method and device
US20230049912A1 (en) Method and apparatus for processing video signal
US20210092362A1 (en) Method and apparatus for processing video signal
US20220167012A1 (en) Video signal processing method and device involving modification of intra predicted sample
US11758150B2 (en) Method and apparatus for encoding/decoding a video signal, and a recording medium storing a bitstream
US20240129528A1 (en) Video signal encoding/decoding method and apparatus based on intra prediction, and recording medium storing bitstream

Legal Events

Date Code Title Description
AS Assignment

Owner name: KT CORPORATION, KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LIM, SUNG WON;REEL/FRAME:059205/0430

Effective date: 20220218

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER