WO2020184936A1 - Method and device for encoding or decoding a video signal (Procédé et dispositif de codage ou de décodage d'un signal vidéo) - Google Patents

Method and device for encoding or decoding a video signal

Info

Publication number
WO2020184936A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
information
intra prediction
mode
unit
Prior art date
Application number
PCT/KR2020/003266
Other languages
English (en)
Korean (ko)
Inventor
정병득
Original Assignee
정병득
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 정병득 filed Critical 정병득
Priority claimed from KR1020200028855A external-priority patent/KR20200107871A/ko
Publication of WO2020184936A1 publication Critical patent/WO2020184936A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N 19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N 19/176 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/186 Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N 19/593 Predictive coding involving spatial prediction techniques
    • H04N 19/82 Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop
    • H04N 19/86 Pre-processing or post-processing specially adapted for video compression, involving reduction of coding artifacts, e.g. of blockiness

Definitions

  • the present invention relates to a method and apparatus for encoding or decoding a video signal, and more particularly, to a method and apparatus for encoding or decoding a video signal for determining an intra prediction mode using various pieces of information.
  • Demand for high-resolution and high-quality images, such as high definition (HD) and ultra high definition (UHD) images, is increasing in various application fields.
  • As the resolution and quality of image data increase, the amount of data also increases relative to existing image data. Therefore, when image data is transmitted over an existing wired/wireless broadband line or stored on an existing storage medium, transmission and storage costs rise.
  • High-efficiency image compression techniques can be used to solve these problems that occur as image data becomes high-resolution and high-quality.
  • Instead of transmitting the information of the current block as it is, a method of predicting it from information of neighboring blocks can be used.
  • Such prediction methods include an inter prediction technique, which predicts pixel values in the current picture from a picture preceding or following it, and an intra prediction technique, which predicts pixel values using pixel information within the current picture itself.
  • There are also various technologies such as entropy encoding, which allocates short codes to values with a high frequency of occurrence and long codes to values with a low frequency of occurrence; using such image compression technologies, image data can be effectively compressed for transmission or storage.
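The principle of giving shorter codes to frequent values can be illustrated with a minimal Huffman coder. This is purely a didactic sketch of variable-length coding in general, not the entropy coder claimed in this application (which uses CABAC/CAVLC-style methods described later):

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Return {symbol: code length in bits} for a Huffman code built
    from the symbols' observed frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (frequency, tiebreak id, {symbol: depth so far}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in a.items()}
        merged.update({s: d + 1 for s, d in b.items()})
        heapq.heappush(heap, (fa + fb, uid, merged))
        uid += 1
    return heap[0][2]

# 'a' appears most often, 'd' least often.
lengths = huffman_code_lengths("aaaaaaabbbccd")
```

As expected, the most frequent symbol receives the shortest code and the rarest symbols the longest.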
  • The inter prediction method predicts pixel values of the current picture by referring to information of another picture, while intra prediction predicts pixel values using the correlation between pixels within the same picture. That is, when intra prediction is performed, the intra prediction mode of the current block is determined using pixel values spatially adjacent to the block to be coded, rather than by referring to a reference picture.
  • the problem to be solved by the present invention is to provide a video signal decoding method and apparatus for improving the coding efficiency of intra prediction by determining an intra prediction mode using various information.
  • Another problem to be solved by the present invention is to provide a video signal decoding method and apparatus for improving coding efficiency by using both intra prediction and inter prediction.
  • Another problem to be solved by the present invention is to provide a video signal decoding method and apparatus for improving coding efficiency by expanding and sharing probability information of each tile in a tile group.
  • A technical problem to be solved by the present invention is addressed by the steps of: dividing a current coding unit block into at least two sub-blocks; deriving intra prediction mode information for each sub-block; generating an intra prediction image for each sub-block using the intra prediction mode information; selectively applying filtering to the intra prediction image; reconstructing each sub-block using the filtered prediction image; and selectively performing deblocking filtering on the boundaries of the reconstructed sub-blocks.
  • The division into sub-blocks may be performed implicitly using at least one of division information of an upper coding unit, division information of a neighboring coding unit, and division information of a neighboring ISP mode.
  • The chroma (color difference) block may be implicitly divided into two sub-blocks.
  • The method may further include determining whether deblocking filtering is performed at the boundary of adjacent sub-blocks using information on whether a transform coefficient of each sub-block exists and information about the intra prediction mode.
  • Deriving the intra prediction mode information for each sub-block may include: deriving intra prediction mode availability information for the current block; and deriving intra prediction mode information for each sub-block using that availability information.
  • The intra prediction mode availability information may be derived using at least one of reference pixel position information indicating whether a reference pixel belongs to a picture boundary or another tile, coding unit size information, luma and chroma information, and transform block size information.
  • The step of obtaining an intra prediction image for each sub-block may be applied only when at least one of the width and height of the current block is greater than a predetermined value.
  • This may be performed when the horizontal or vertical size of the transform block of the current coding unit is greater than or equal to a predetermined value, the signal is a luma signal, and the intra prediction image is predicted from reference pixels adjacent to the current block.
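The claimed sub-block decoding flow can be sketched as the following skeleton. Every helper behaviour here is simplified and hypothetical (a fixed DC mode, a flat prediction, zero residual, no filtering); in the real method each step follows the signalled syntax and derivation rules described above:

```python
def decode_cu_with_subblocks(cu_height, cu_width, num_subblocks=2):
    """Sketch of the claimed flow: split a CU into sub-blocks, then
    derive a mode, predict, optionally filter, reconstruct, and
    (conceptually) deblock each sub-block in order."""
    # Step 1: divide the coding unit into at least two sub-blocks
    # (equal horizontal split, for illustration only).
    sub_h = cu_height // num_subblocks
    subblocks = [(i * sub_h, sub_h, cu_width) for i in range(num_subblocks)]
    reconstructed = []
    for y0, h, w in subblocks:
        # Step 2: derive intra prediction mode info (hypothetical: DC).
        mode = "DC"
        # Step 3: generate the intra prediction image for the sub-block.
        pred = [[128] * w for _ in range(h)]
        # Step 4: selectively filter the prediction image (skipped here).
        # Step 5: reconstruct using the (filtered) prediction; the
        # residual is taken to be zero in this sketch.
        recon = pred
        reconstructed.append((y0, mode, recon))
        # Step 6: deblocking across the sub-block boundary would run here.
    return reconstructed

parts = decode_cu_with_subblocks(8, 8)
```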
  • Another technical problem to be solved by the present invention is addressed by the steps of: obtaining a block vector for a current block; obtaining a prediction sample for the current block from an already-reconstructed reference buffer region in the current picture using the block vector; reconstructing the current block using the prediction sample; and periodically updating the reference buffer region.
  • The step of periodically updating the reference buffer region may be performed before the first coding tree unit of each row of the current picture is decoded, and may set the reference buffer memory to a predetermined value.
  • a video signal decoding method and apparatus thereof are provided to improve coding efficiency of intra prediction.
  • A video signal decoding method and apparatus are provided that improve coding efficiency by resetting the probability information of the tile group of the current block using probability information of at least one previously transmitted tile group.
  • FIG. 1 is a block diagram illustrating an apparatus for encoding a video signal according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating an apparatus for decoding a video signal according to an embodiment of the present invention.
  • FIG. 3 illustrates at least one tile group included in one picture according to an embodiment of the present invention.
  • FIG. 4 illustrates examples of a multi-type tree structure according to an embodiment of the present invention.
  • FIG. 5 shows an example of a multi-type tree structure according to an embodiment of the present invention.
  • FIG. 6 illustrates an intra prediction mode of a current coding block according to an embodiment of the present invention.
  • FIG. 7 is a diagram illustrating syntax for differently setting an intra prediction mode of a divided block according to an embodiment of the present invention.
  • FIG. 8 shows an example of implicit partitioning of a current coding unit according to an embodiment of the present invention.
  • FIG. 9 shows examples of asymmetric block division according to an embodiment of the present invention.
  • FIG. 10 is a diagram for explaining a method of deriving a reference pixel for a subblock according to an embodiment of the present invention.
  • FIG. 11 shows an example of a transform unit syntax according to an embodiment of the present invention.
  • FIG. 12 shows an example of a coding unit syntax according to an embodiment of the present invention.
  • FIG. 13 shows an example of a transform unit syntax according to another embodiment of the present invention.
  • FIG. 14 shows an example of a coding unit syntax according to another embodiment of the present invention.
  • The terms first and second are used to describe various components, members, parts, regions, and/or portions, but it is obvious that these components, members, parts, regions, and/or portions should not be limited by these terms. These terms are only used to distinguish one component, member, part, region, or portion from another. Accordingly, a first component, member, part, region, or portion described below may also be referred to as a second component, member, part, region, or portion without departing from the teachings of the present invention. Further, the term and/or includes any combination of a plurality of related items or any one of a plurality of related items.
  • FIG. 1 is a block diagram illustrating an image encoding apparatus according to an embodiment of the present invention.
  • the video signal encoding apparatus 100 includes a picture division unit 105, an inter prediction unit 110, an intra prediction unit 115, a transform unit 120, a quantization unit 125, A rearrangement unit 130, an entropy encoding unit 135, an inverse quantization unit 140, an inverse transform unit 145, a filter unit 150, and a memory 155 are included.
  • Each of the components shown in FIG. 1 is shown independently to represent a distinct characteristic function in the video encoding apparatus; this does not mean that each component is formed of separate hardware or a separate software unit. That is, the components are listed separately for convenience of explanation; at least two components may be combined into one component, or one component may be divided into a plurality of components that each perform a function. Embodiments in which these components are integrated, as well as embodiments in which they are separated, fall within the scope of the present invention as long as they do not depart from its essential aspects.
  • the picture dividing unit 105 may divide the input picture into at least one processing unit.
  • the processing unit may be a sub-picture, may be a tile, and may be a slice.
  • A sub-picture may be a part of the input picture that is encoded independently and transmitted.
  • the slice may have a rectangular shape as well as a square shape.
  • A slice or tile may consist of at least one coding tree unit (hereinafter referred to as “CTU”), and the CTU may be recursively divided according to a quad tree and a multi-type tree.
  • The CTU may begin to be divided according to the multi-type tree at a leaf node of the quad tree, and the multi-type tree may divide a node into a binary tree (hereinafter referred to as 'BT') or a ternary tree (hereinafter referred to as 'TT').
  • The leaf node of this division may be a coding unit (hereinafter referred to as 'CU'), and prediction and transformation may be performed in units of the CU.
  • a prediction unit may be expressed as a prediction block
  • an encoding or decoding unit may be expressed as a coding block or a decoding block.
  • The picture splitter 105 may divide one picture into a combination of a plurality of coding units, prediction units, and transformation units, and may encode the picture by selecting one combination of a coding unit, prediction unit, and transformation unit based on a predetermined criterion (e.g., a cost function).
  • one picture may be split into a plurality of coding units.
  • One picture may be divided into coding units using a recursive tree structure such as a quad tree or multi-type tree structure. With the largest coding unit as the root, a coding unit may be split into other coding units having as many child nodes as the number of split coding units. A coding unit that is no longer split through this process becomes a leaf node. For example, assuming that only square splitting is possible for one coding unit, one coding unit may be split into up to four coding units.
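The recursive square splitting just described can be sketched as follows. The split decision function stands in for whatever signalled or cost-based criterion the encoder actually uses (an assumption for illustration):

```python
def quadtree_leaves(x, y, size, min_size, split_decision):
    """Recursively split a square block into four equal sub-blocks
    whenever split_decision(x, y, size) is True and the minimum size
    has not been reached. Returns leaf blocks as (x, y, size) tuples."""
    if size <= min_size or not split_decision(x, y, size):
        return [(x, y, size)]  # leaf node: no further splitting
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_leaves(x + dx, y + dy, half,
                                      min_size, split_decision)
    return leaves

# Example: always split the 64x64 root, and split only its top-left
# 32x32 quadrant one level deeper (down to the 16x16 minimum).
leaves = quadtree_leaves(
    0, 0, 64, 16,
    lambda x, y, s: s == 64 or (x == 0 and y == 0 and s == 32))
```

This yields four 16x16 leaves in the top-left quadrant plus three 32x32 leaves, i.e. seven coding units from one 64x64 root.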
  • When partitioning, the coding unit, prediction unit, and/or transformation unit are not limited to symmetric partitioning; asymmetric partitioning is also possible, and not only four partitions but also two partitions are possible. However, these numbers are only exemplary, and the present invention is not limited thereto.
  • the prediction unit may include an inter prediction unit 110 that performs inter prediction and an intra prediction unit 115 that performs intra prediction.
  • A video signal is not encoded as it is; rather, an image is predicted using a specific region inside a picture that has already been encoded and decoded, and the residual value between the original image and the predicted image is encoded.
  • prediction mode information, motion vector information, etc. used for prediction may be encoded by the entropy encoder 135 together with a residual value and transmitted to the decoder.
  • the prediction block may not be generated through the prediction units 110 and 115, but the original block may be encoded as it is and transmitted to the decoder.
  • The prediction units 110 and 115 may determine whether to perform inter prediction or intra prediction on a prediction unit, and may determine specific information according to each prediction method, such as an inter prediction mode, a motion vector, and a reference picture. In this case, the processing unit in which prediction is performed and the unit for which the prediction method and its details are determined may differ. For example, although the prediction mode and method are determined per prediction unit, prediction itself may be performed per transformation unit.
  • Slices (and pictures) may be classified into three types: I slice (I picture), P slice (P picture), and B slice (B picture).
  • the I slice is a slice that is encoded and decoded only by intra prediction
  • the P slice is a slice that can be decoded using inter prediction using one motion vector and a reference picture index to predict the sample value of each prediction block.
  • the B slice is a slice that can be encoded or decoded using inter prediction using one or two motion vectors and indexes of reference pictures related to the motion vectors in order to predict the sample value of each block.
  • the prediction units 110 and 115 may perform prediction on a processing unit of a picture divided by the picture division unit 105 to generate a prediction block composed of predicted samples.
  • the picture processing unit in the prediction units 110 and 115 may be a coding unit.
  • The inter prediction unit 110 may predict a prediction unit based on information of at least one picture preceding or following the current picture, and in some cases may predict the prediction unit based on information of a partial region of the current picture that has already been encoded.
  • the inter prediction unit 110 may include a reference picture interpolation unit, a motion prediction unit, and a motion compensation unit.
  • the information of the one or more pictures used for prediction by the inter prediction unit 110 may be information on pictures that have already been encoded and decoded, or information on pictures that have been transformed and stored in an arbitrary method.
  • A picture modified and stored in an arbitrary manner may be a picture obtained by enlarging or reducing a picture that has undergone encoding and decoding, or a picture obtained by modifying the brightness of all pixel values in the picture or by modifying its color format.
  • both intra prediction and inter prediction may be used and encoded.
  • a third prediction block may be generated by averaging or a weighted average of the first prediction block performing intra prediction and the second prediction block performing motion compensation.
  • The third prediction block may be passed through a low-frequency or high-frequency filter.
  • Whether to use a filter may be determined according to the position of each pixel in the third prediction block, and the strength of the filter used may vary.
  • the filter can be applied only at the edge of the current block.
  • the filter can be applied only to the rest of the current block except for the edge.
  • whether or not to use a filter may vary according to the horizontal and/or vertical size of the third prediction block.
  • a filter may be applied only when the size of the third prediction block is greater than or equal to a predetermined value.
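The combined prediction described above (averaging an intra prediction block and a motion-compensated inter prediction block into a third prediction block) can be sketched as below. The fixed 0.5 weight gives the plain average mentioned in the text; per-pixel position-dependent weights, which the text also allows, would replace the single `intra_weight` value:

```python
def combine_predictions(intra_pred, inter_pred, intra_weight=0.5):
    """Blend an intra prediction block and an inter (motion-compensated)
    prediction block into a third prediction block by weighted average.
    intra_weight = 0.5 corresponds to a plain average."""
    h, w = len(intra_pred), len(intra_pred[0])
    return [[round(intra_weight * intra_pred[y][x]
                   + (1 - intra_weight) * inter_pred[y][x])
             for x in range(w)]
            for y in range(h)]

intra = [[100, 100],
         [100, 100]]
inter = [[60, 80],
         [120, 140]]
blended = combine_predictions(intra, inter)
```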
  • The reference picture interpolation unit may receive reference picture information from the memory 155 and generate sub-integer pixel information for the reference picture.
  • Sub-pixel information in units of 1/4 pixel may be generated using a DCT-based 8-tap interpolation filter whose coefficients differ per fractional position.
  • Sub-pixel information in units of 1/8 pixel may be generated using a DCT-based interpolation filter whose coefficients differ per fractional position.
  • The type of filter and the unit in which sub-pixel information is generated are not limited thereto; various interpolation filters may be used.
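An 8-tap DCT-based interpolation step can be sketched as follows. The coefficients used here are the well-known HEVC luma half-sample filter, chosen purely as an illustration; this application does not specify its coefficients in the passage above:

```python
# HEVC-style 8-tap DCT-IF half-sample coefficients (sum = 64).
HALF_PEL_TAPS = [-1, 4, -11, 40, 40, -11, 4, -1]

def half_pel_sample(row, i):
    """Interpolate the half-pixel position between row[i] and row[i+1]
    with an 8-tap filter, then round and normalise by 64."""
    acc = sum(tap * row[i - 3 + k] for k, tap in enumerate(HALF_PEL_TAPS))
    return (acc + 32) >> 6

# A flat signal must interpolate to the same value; a linear ramp
# interpolates to the midpoint (rounded).
flat = half_pel_sample([10] * 16, 7)
ramp = half_pel_sample(list(range(16)), 7)
```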
  • the motion prediction unit may perform motion prediction based on the reference picture interpolated by the reference picture interpolation unit.
  • Various methods can be used to calculate the motion vector.
  • the motion vector may have a motion vector value of an integer pixel unit or a 1/2 or 1/4 pixel unit based on the interpolated pixel.
  • The motion prediction unit may predict the prediction unit of the current block by varying the motion prediction method.
  • the motion prediction method may use various methods including a merge method, an advanced motion vector prediction (AMVP) method, and a skip method. In this way, information including an index of a reference picture selected by the inter prediction unit 110, a motion vector predictor (MVP), and a residual signal may be entropy-coded and transmitted to the decoder.
  • the intra prediction unit 115 may generate a prediction unit based on reference pixel information around a current block, which is pixel information in a current picture.
  • When a neighboring block of the prediction unit is a block on which inter prediction was performed, i.e., when a reference pixel belongs to a block coded by inter prediction, that reference pixel may be replaced with reference pixel information of a neighboring block coded by intra prediction. That is, when a reference pixel is not available (unavailable), it may be replaced with at least one of the available reference pixels.
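The substitution of unavailable reference pixels can be sketched as a simple padding pass. The nearest-available rule used here is one common convention (an assumption for illustration; the text above only requires that an unavailable pixel be replaced by some available one):

```python
def substitute_unavailable(ref_pixels, available):
    """Replace each unavailable reference pixel with the most recently
    seen available pixel; a leading run of unavailable pixels is filled
    from the first available pixel instead."""
    out = list(ref_pixels)
    last = None
    for i in range(len(out)):
        if available[i]:
            last = out[i]
        elif last is not None:
            out[i] = last
    # Fill any leading unavailable run from the first available pixel.
    first_avail = next((out[i] for i in range(len(out)) if available[i]), None)
    if first_avail is not None:
        for i in range(len(out)):
            if available[i]:
                break
            out[i] = first_avail
    return out

padded = substitute_unavailable([0, 0, 50, 60, 0],
                                [False, False, True, True, False])
```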
  • the intra-prediction unit 115 may use various methods including an intra-block copy method.
  • the intra prediction unit 115 may use the most probable intra prediction mode (MPM) obtained from neighboring blocks to encode the intra prediction mode. Even when the intra prediction unit 115 performs intra prediction, a processing unit in which prediction is performed and a processing unit in which a prediction method and specific content are determined may be different from each other.
  • the prediction modes of intra prediction may include 33 directional prediction modes and at least two or more non-directional modes.
  • the non-directional mode may include a DC prediction mode and a planar mode.
  • The number of 35 intra prediction modes is only exemplary; the present invention is not limited thereto, and intra prediction may be performed with more directional or non-directional modes to support various prediction methods.
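Of the non-directional modes, DC prediction is the simplest to sketch: the block is filled with the average of the reference pixels. This is a generic textbook form of DC intra prediction, shown for illustration:

```python
def dc_predict(top_refs, left_refs, h, w):
    """DC intra prediction: fill an h x w block with the rounded
    average of the top and left reference pixels; fall back to a
    mid-level value (128, for 8-bit samples) when none are available."""
    refs = list(top_refs) + list(left_refs)
    dc = (sum(refs) + len(refs) // 2) // len(refs) if refs else 128
    return [[dc] * w for _ in range(h)]

block = dc_predict(top_refs=[100, 102, 98, 100],
                   left_refs=[96, 100, 104, 100],
                   h=4, w=4)
```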
  • a prediction block may be generated after applying a filter to a reference pixel.
  • whether to apply the filter to the reference pixel may be determined according to the intra prediction mode and/or size of the current block.
  • the coding unit CU may be determined in various sizes and shapes.
  • the coding unit in the case of inter prediction, may have a size such as 2N x 2N, 2N x N, N x 2N, or N x N.
  • In the case of intra prediction, the coding unit may have a size such as 2N x 2N or N x N (N is an integer); intra prediction may be performed not only on square blocks but also on rectangular blocks. In this case, a coding unit of size N x N may be set to be applied only in a specific case.
  • an intra prediction unit having a size such as N x mN, mN x N, 2N x mN or mN x 2N (m is a fraction or integer) may be further defined and used.
  • a residual value (residual block or residual signal) between the prediction block generated by the intra prediction unit 115 and the original block may be input to the transform unit 120.
  • prediction mode information, interpolation filter information, etc. used for prediction may be encoded by the entropy encoder 135 together with a residual value and transmitted to a decoder.
  • neighboring blocks of the current block may be smoothed by using at least one filter regardless of positions of neighboring blocks of the current block.
  • The transform unit 120 may transform the residual block, which contains the residual value information between the original block and the prediction unit generated through the prediction units 110 and 115, using a transformation method such as the Discrete Cosine Transform (DCT), the Discrete Sine Transform (DST), or the Karhunen-Loève Transform (KLT). Whether DCT, DST, or KLT is applied to transform the residual block may be determined based on the intra prediction mode information of the prediction unit used to generate it.
  • The transform process may be skipped depending on the image characteristics of the block. If frequency transformation is not performed, a process of rearranging the positions of pixels within the block may be performed so that they can be coded more efficiently in the quantization and entropy coding stages; this can be seen as a preprocessing step for more efficient scanning.
  • The scaling factor may have a fixed constant value regardless of the position of the transform coefficient; for example, the scaling factor may have a value of 16.
  • the scaling factor is a predefined value and is shared equally by the encoder and the decoder.
  • the scaling factor may be generated by a user to an arbitrary value, which may be included in a bitstream and transmitted to a decoder.
  • the decoder may decode the scaling factor value generated by the user from the bitstream and use it for inverse quantization.
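The role of the flat scaling factor in inverse quantization can be sketched as below. The normalising shift of 4 is an assumption chosen so that a factor of 16 (the example value above) acts as "no per-position scaling"; the actual normalisation in the codec is not specified in this passage:

```python
SCALING_FACTOR = 16  # fixed for every coefficient position, as in the text

def dequantize(level, qstep, shift=4):
    """Inverse quantization: multiply the quantized level by the
    quantization step and the flat scaling factor, then normalise.
    With SCALING_FACTOR = 16 and shift = 4 the factor cancels exactly,
    so a flat scaling list behaves like uniform (unscaled) dequantization."""
    return (level * qstep * SCALING_FACTOR) >> shift

rec = dequantize(level=3, qstep=10)
```

A user-defined scaling factor, as described above, would simply replace `SCALING_FACTOR` with a value decoded from the bitstream.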
  • the quantization unit 125 may quantize residual values transformed by the transform unit 120 to generate a quantization coefficient.
  • the converted residual values may be values converted into a frequency domain.
  • The quantization coefficient may vary according to the transformation unit or the importance of the image, and the value calculated by the quantization unit 125 may be provided to the inverse quantization unit 140 and the rearrangement unit 130.
  • The quantization process may be skipped depending on the image characteristics of the block, in which case the transform coefficients may be output as they are. If neither transform nor quantization is performed by the transform unit 120 and the quantization unit 125, the residual block may be entropy-coded as it is. Thereafter, the entropy encoder 135 may perform arithmetic coding on the quantized transform coefficients using a probability distribution and output the result as a bitstream.
  • the probability distribution can be obtained from neighboring blocks of the current block.
  • the currently coded block (or picture) is used as a reference block (or picture) in coding the next block.
  • the reference block is a decoded block rather than an original block.
  • The encoded block is decoded through the same process as in the decoder and output as a reference block. That is, to obtain the same reference block as the decoder, the encoder decodes the coded block again and stores it in the reference picture buffer.
  • the reordering unit 130 may rearrange quantization coefficients provided from the quantization unit 125.
  • the reordering unit 130 may improve encoding efficiency in the entropy encoder 135 by rearranging the quantization coefficients.
  • the reordering unit 130 may rearrange quantized coefficients in the form of a two-dimensional block into a vector form of a one-dimensional shape through a coefficient scanning method.
  • Which coefficient scanning method is used may be determined according to the size of the transform unit and the intra prediction mode.
  • the coefficient scanning method may include a zig-zag scan, a vertical scan in which a two-dimensional block type coefficient is scanned in a column direction, and a horizontal scan in which a two-dimensional block shape coefficient is scanned in a row direction.
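The zig-zag scan mentioned above can be sketched as follows: coefficients are read along anti-diagonals, alternating direction, so that low-frequency (top-left) coefficients come first in the 1-D output:

```python
def zigzag_scan(block):
    """Scan a square 2-D coefficient block into a 1-D list in zig-zag
    order, alternating the traversal direction of each anti-diagonal."""
    n = len(block)
    out = []
    for d in range(2 * n - 1):
        cells = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        if d % 2 == 0:
            cells.reverse()  # even diagonals run bottom-left to top-right
        out += [block[i][j] for i, j in cells]
    return out

# The block below is labelled with its own zig-zag rank, so the scan
# should return 1..9 in order.
coeffs = [[1, 2, 6],
          [3, 5, 7],
          [4, 8, 9]]
order = zigzag_scan(coeffs)
```

A vertical scan would read columns top to bottom, and a horizontal scan rows left to right, in place of the diagonal traversal.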
  • the reordering unit 130 may increase entropy encoding efficiency in the entropy encoder 135 by changing the order of coefficient scanning based on probabilistic statistics of coefficients transmitted from the quantization unit.
  • the entropy encoding unit 135 may perform entropy encoding on quantization coefficients rearranged by the reordering unit 130.
  • Entropy coding may use various coding methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC).
  • the entropy encoder 135 may encode various information transmitted from the reordering unit 130 and the prediction units 110 and 115, such as quantization coefficient information, block type information, prediction mode information, division unit information, prediction unit information, and transmission unit information of a coding unit, as well as motion vector information, reference picture information, block interpolation information, and filtering information. In addition, in an embodiment, the entropy encoder 135 may apply a certain change to the transmitted parameter set or syntax, if necessary.
  • the inverse quantization unit 140 inverse quantizes the values quantized by the quantization unit 125, and the inverse transform unit 145 inversely transforms the inverse quantized values by the inverse quantization unit 140.
  • the residual values generated by the inverse quantization unit 140 and the inverse transform unit 145 may be combined with the prediction blocks predicted by the prediction units 110 and 115 to generate a reconstructed block.
  • the reconstructed image may be input to the filter unit 150.
  • the filter unit 150 may include a deblocking filter unit, an offset correction unit (Sample Adaptive Offset (SAO)), and an adaptive loop filter unit (ALF).
  • a deblocking filter is applied to the reconstructed image in the deblocking filter unit to reduce or remove blocking artifacts, after which the image is input to the offset correction unit to correct the offset.
  • the picture output from the offset correction unit may be input to the adaptive loop filter unit and pass through an adaptive loop filter (ALF) filter, and the picture passing through the filter may be transmitted to the memory 155.
  • the deblocking filter unit may remove distortion in a block generated at a boundary between blocks in a reconstructed picture.
  • it may be determined whether to apply the deblocking filter to the current block based on the pixels included in several columns or rows included in the block.
  • a strong filter or a weak filter may be applied according to the required deblocking filtering strength.
  • vertical filtering and horizontal filtering may be processed in parallel.
  • the offset correction unit may correct an offset from the original image in pixel units for the residual block to which the deblocking filter is applied.
  • offset correction may be applied by determining a region in which to perform offset correction and applying an offset to that region (band offset), or by applying an offset in consideration of edge information (edge offset).
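As an illustration, the band-offset variant can be sketched as follows. The 32-band classification and the four consecutive signalled offsets follow common SAO practice and are assumptions for this sketch, not details taken from the document:

```python
def band_offset(pixels, start_band, offsets, bit_depth=8):
    """Simplified SAO band offset: pixels are classified into 32 equal
    bands by value; the four consecutive bands starting at start_band
    receive the four signalled offsets, other pixels pass unchanged."""
    shift = bit_depth - 5                 # 32 bands -> band = value >> shift
    max_val = (1 << bit_depth) - 1
    out = []
    for p in pixels:
        band = p >> shift
        if start_band <= band < start_band + 4:
            # apply the offset for this band, clipped to the valid range
            p = min(max_val, max(0, p + offsets[band - start_band]))
        out.append(p)
    return out

# only the pixel whose band (40 >> 3 = 5) falls in bands 4..7 is corrected
print(band_offset([10, 40, 100, 250], start_band=4, offsets=[2, 2, 2, 2]))
# [10, 42, 100, 250]
```

Edge offset would instead classify each pixel by comparing it with its neighbours along a signalled direction; the band form shown here is the simpler of the two.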
  • the filter unit 150 may not apply filtering to the reconstructed block used for inter prediction.
  • the adaptive loop filter may be performed only when high efficiency results, based on a value obtained by comparing the filtered reconstructed image and the original image. After dividing the pixels included in the image into predetermined groups, one filter to be applied to the corresponding group may be determined, and filtering may be performed differentially for each group. Information on whether to apply the ALF may be transmitted for the luminance signal for each coding unit (CU), and the shape and filter coefficients of the ALF filter to be applied may vary according to each block. In addition, the same type (fixed type) of ALF filter may be applied regardless of the characteristics of the block to which it is applied.
  • the memory 155 may store the reconstructed block or picture calculated through the filter unit 150.
  • the reconstructed block or picture stored in the memory 155 may be provided to the inter prediction unit 110 or the intra prediction unit 115 that performs inter prediction.
  • the pixel values of the reconstructed blocks used in the intra prediction unit 115 may be data to which the deblocking filter unit, the offset correction unit, and the adaptive loop filter unit are not applied.
  • the image decoding apparatus 200 includes an entropy decoding unit 210, a rearrangement unit 215, an inverse quantization unit 220, an inverse transform unit 225, an inter prediction unit 230, an intra prediction unit 235, a filter unit 240, and a memory 245.
  • the input bitstream may be decoded in a reverse process of a procedure in which image information is processed by the encoding device.
  • the entropy decoder 210 may perform entropy decoding by implementing the same variable length coding (VLC) table as the VLC table used in the encoding device.
  • when CABAC is used in the encoding device, the entropy decoder 210 may correspondingly perform entropy decoding using CABAC.
  • the entropy decoding unit 210 provides information for generating a prediction block among the decoded information to the inter prediction unit 230 and the intra prediction unit 235, and the residual value on which entropy decoding has been performed by the entropy decoding unit may be input to the rearrangement unit 215.
  • the reordering unit 215 may rearrange the bitstream entropy-decoded by the entropy decoder 210 based on a method in which the image encoder rearranges it.
  • the reordering unit 215 may perform rearrangement by receiving information related to the coefficient scanning performed by the encoding apparatus and performing reverse scanning based on the scanning order used by the encoding apparatus.
  • the inverse quantization unit 220 may perform inverse quantization based on a quantization parameter provided by an encoding apparatus and a coefficient value of a rearranged block.
  • the inverse transform unit 225 may perform inverse DCT, inverse DST, or inverse KLT with respect to the DCT, DST, or KLT performed by the transform unit of the encoding apparatus with respect to the quantization result performed by the image encoding apparatus.
  • the inverse transformation may be performed based on a transmission unit or a division unit of an image determined by the encoding apparatus.
  • the transform unit of the encoding device may selectively perform DCT, DST, or KLT according to information such as the prediction method, the size of the current block, and the prediction direction, and the inverse transform unit 225 of the decoding device may determine an inverse transform method based on the transform information used by the transform unit of the encoding device and perform the inverse transform accordingly.
  • when inverse quantization is not performed, the quantized transform coefficient may be output as a transform coefficient as it is. In this case, if the position of the transform coefficient was changed for the efficiency of the scanning process in the encoder, it may be restored to its original position and then output.
  • the inverse transform unit 225 may output a difference block by performing inverse transform on the transform coefficient. If the inverse transform is not performed, the transform coefficient may be a difference block as it is.
  • the prediction units 230 and 235 may generate a prediction block based on information related to generation of a prediction block provided from the entropy decoder 210 and previously decoded block and/or picture information provided from the memory 245.
  • the reconstructed block may be generated using a prediction block generated by the prediction units 230 and 235 and a residual block provided by the inverse transform unit 225.
  • the detailed prediction method performed by the prediction units 230 and 235 may be the same as the prediction method performed by the prediction units 110 and 115 of the encoding apparatus.
  • the prediction units 230 and 235 may include a prediction unit determination unit (not shown), an inter prediction unit 230, and an intra prediction unit 235.
  • the prediction unit determining unit receives various information input from the entropy decoder 210, such as prediction unit information, prediction mode information of the intra prediction method, and motion prediction-related information of the inter prediction method, identifies the prediction unit in the current coding unit, and determines whether the prediction unit performs inter prediction or intra prediction.
  • the inter prediction unit 230 may perform inter prediction on the current prediction unit based on information included in at least one of a previous picture or a subsequent picture of the current picture containing the current prediction unit, using the information required for inter prediction of the current prediction unit provided from the video encoder.
  • a prediction block for the current block may be generated by selecting a reference picture for the current block and selecting a reference block having the same size as the current block.
  • information of neighboring blocks of the current picture may be used.
  • a prediction block for a current block may be generated based on information of neighboring blocks using a method such as a skip mode, a merge mode, and an advanced motion vector prediction (AMVP).
  • the prediction block may be generated in an integer or less sample unit, such as a 1/2 pixel sample unit and a 1/4 pixel sample unit.
  • the motion vector may also be expressed in units of integer pixels or less.
  • a luminance pixel may be expressed in 1/4 pixel units and a color difference pixel may be expressed in 1/8 pixel units.
  • Motion information including a motion vector and a reference picture index required for inter prediction of the current block may be derived in response to checking the skip flag and the merge flag received from the encoding apparatus.
  • the intra prediction unit 235 may generate a prediction block based on pixel information in the current picture.
  • intra prediction may be performed based on intra prediction mode information of the prediction unit provided from an image encoder.
  • the intra-prediction unit 235 may use various methods including an intra-block copy method.
  • the intra prediction unit 235 may use the most probable intra prediction mode (MPM) obtained from neighboring blocks to encode the intra prediction mode.
  • the most probable intra prediction mode may use an intra prediction mode of a spatial neighboring block of the current block.
  • the following method may be used to construct a list of the most likely intra prediction modes of the current block.
  • six most probable intra prediction modes may be used to encode the intra prediction modes of the current block.
  • the neighboring blocks used to obtain the most probable intra prediction mode may be a left upper block, an upper block, an upper right block, a left block, and a left lower block of the current block.
  • the five neighboring blocks may be scanned, and intra prediction modes of the neighboring blocks may be allocated to the most likely intra prediction mode list.
  • the planar mode and the DC mode may be allocated to the most likely intra prediction mode list.
  • an intra prediction mode that overlaps one already in the list is not allocated to the most probable intra prediction mode list.
  • the number and location of neighboring blocks for obtaining the most probable intra prediction mode, and a method of scanning the neighboring blocks are preset.
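The list construction described above (scan the five neighbouring blocks, then add planar and DC, skipping duplicates) can be sketched as follows. The specific scan order and the treatment of unavailable neighbours are assumptions for this sketch:

```python
PLANAR, DC = 0, 1  # non-directional modes, conventionally modes 0 and 1

def build_mpm_list(neighbor_modes, size=6):
    """Build a most-probable-mode list: take the intra modes of the five
    neighbours (above-left, above, above-right, left, below-left), then
    append planar and DC; duplicates and unavailable (None) entries are
    skipped, and the list is capped at `size` entries."""
    mpm = []
    for m in neighbor_modes + [PLANAR, DC]:
        if m is not None and m not in mpm:
            mpm.append(m)
        if len(mpm) == size:
            break
    return mpm

# neighbours: above-left=18, above=50, above-right unavailable,
# left=18 (duplicate), below-left=2
print(build_mpm_list([18, 50, None, 18, 2]))   # [18, 50, 2, 0, 1]
```

A real codec would also pad the list up to six entries with derived modes when too few distinct modes are found; that padding rule is omitted here.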
  • a processing unit in which prediction is performed by the intra prediction unit 235 and a processing unit in which a prediction method and specific content are determined may be different from each other.
  • a prediction mode may be determined in a prediction unit and prediction may also be performed in the prediction unit
  • alternatively, a prediction mode may be determined in a prediction unit while intra prediction is performed in a transformation unit.
  • the intra prediction unit 235 may include an AIS (Adaptive Intra Smoothing) filter unit, a reference pixel interpolation unit, and a DC filter unit.
  • the AIS filter unit is a part that performs filtering on a reference pixel of a current block, and may determine and apply a filter according to a prediction mode of a current prediction unit.
  • AIS filtering may be performed on a reference pixel of a current block by using the prediction mode and AIS filter information of the prediction unit provided by the image encoder.
  • if the prediction mode of the current block is a mode in which AIS filtering is not performed, the AIS filter unit may not be applied to the current block.
  • the reference pixel interpolation unit may generate reference pixels in sub-integer pixel units by interpolating the reference pixels.
  • if the prediction mode of the current prediction unit is a prediction mode in which a prediction block is generated without interpolating reference pixels, the reference pixels may not be interpolated.
  • the DC filter unit may generate a prediction block through filtering when the prediction mode of the current block is the DC mode.
  • the intra prediction unit 235 may smooth neighboring blocks of the current block by using at least one smoothing filter.
  • a third prediction block may be generated by averaging or weighting a first prediction block obtained by performing intra prediction and a second prediction block obtained by performing motion compensation.
  • the third prediction block may be passed through a low-frequency or high-frequency filter.
  • whether or not to use a filter may vary depending on the position of each pixel in the third prediction block, or the strength of the filter may vary.
  • the filter may be applied only to the edge of the current block.
  • the filter may be applied only to the rest of the current block except for the edge.
  • whether or not to use a filter may vary according to the horizontal and/or vertical size of the third prediction block, and the filter may be applied only when the size of the third prediction block is greater than or equal to a predetermined value T.
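The weighted combination of an intra-predicted block and a motion-compensated block into a third prediction block can be sketched as below. The 1:3 weights and the rounding offset are illustrative assumptions, not values from this document:

```python
def combine_predictions(intra_pred, inter_pred, w_intra=1, w_inter=3):
    """Form a third prediction block as a weighted average of an
    intra-predicted block and a motion-compensated block.
    Integer arithmetic with round-to-nearest, as codecs typically use."""
    total = w_intra + w_inter
    return [[(w_intra * a + w_inter * b + total // 2) // total
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(intra_pred, inter_pred)]

intra = [[100, 100], [100, 100]]
inter = [[80, 80], [120, 120]]
print(combine_predictions(intra, inter))   # [[85, 85], [115, 115]]
```

Position-dependent weighting (as the surrounding text allows) would make `w_intra`/`w_inter` functions of the pixel position instead of constants.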
  • the reconstructed block and/or picture may be provided to the filter unit 240.
  • the filter unit 240 may apply a deblocking filter unit, an offset correction unit (Sample Adaptive Offset), and/or an adaptive loop filter unit to the reconstructed block and/or picture.
  • the deblocking filter unit may receive information indicating whether a deblocking filter is applied to a corresponding block or picture from an image encoder, and information indicating whether a strong filter or a weak filter is applied when the deblocking filter is applied.
  • the deblocking filter unit may receive information related to a deblocking filter provided from an image encoder, and may perform deblocking filtering on a corresponding block by an image decoder.
  • the offset correction unit may perform offset correction on the reconstructed image based on the type of offset correction applied to the image during encoding and offset value information.
  • the adaptive loop filter unit may be applied as a coding unit based on information on whether to apply an adaptive loop filter provided from an encoder and information such as coefficient information of the adaptive loop filter. Information related to the adaptive loop filter may be provided by being included in a specific parameter set.
  • the memory 245 may store the reconstructed picture or block and use it as a reference picture or a reference block later, and may also provide the reconstructed picture to an output unit.
  • the bitstream input to the decoding apparatus may be input to the entropy decoder through a parsing step.
  • the entropy decoding unit may perform a parsing process.
  • 'coding' may be interpreted as encoding or decoding depending on the context, and 'information' may be understood to include values, parameters, coefficients, elements, flags, and the like.
  • 'Screen' or 'picture' generally refers to a unit representing one image in a specific time period; 'slice', 'frame', and the like are units constituting part of a picture in the coding of an actual video signal and, if necessary, may be used interchangeably with 'picture'.
  • 'Pixel', 'pixel', or 'pel' represents the smallest unit constituting one image. As a term indicating the value of a specific pixel, 'sample' may be used. Samples may be divided into luminance (luma) and chrominance (chroma) components, but a term encompassing both may generally be used. The chrominance component represents a difference between predetermined colors and is generally composed of Cb and Cr.
  • A 'unit' refers to a basic unit of image processing or a specific position of an image, such as the coding unit, prediction unit, and transformation unit described above, and in some cases may be used interchangeably with terms such as 'block' or 'area'. Further, a block may be used as a term indicating a set of samples or transform coefficients composed of M columns and N rows.
  • the size of the basic unit block to be partitioned may vary according to the resolution of the image.
  • a block unit having a large size may be selected for a high-resolution image, and a block unit having a small size may be selected for a low-resolution image.
  • the encoder may adaptively determine the size of the basic block according not only to the resolution of the image, but also to the characteristics of the image (spatial correlation, fast motion, captured frame rate, etc.) and the type of image (natural image, 2D/3D animation, screen content image including document or PPT images, depth map used in a 3D image, image captured by a VR camera, mixed VR/AR image, etc.), and this basic block size information may be included in the bitstream and transmitted to the decoder.
  • the decoder may parse the basic block size information and set the size of the processed basic unit block.
  • the size of the basic unit block may have a square shape such as 256x256, 128x128, 64x64, 32x32, 16x16, 8x8, or 4x4, or may have a rectangular shape such as 256x128, 128x256, 256x64, 64x256, ..., 128x64, 64x128, 128x32, ...
  • a picture can be divided into a tile group and a tile.
  • One tile group is a set of consecutive tiles in raster scan order.
  • One tile is a set of consecutive coding tree units (CTUs) in raster scan order.
  • FIG 3 illustrates at least one tile group included in one picture according to an embodiment of the present invention.
  • one picture 10 may include at least one tile group 11, 12, 13, and each tile group 11, 12, 13 includes at least one tile (11a, 11b, ..., 13e).
  • the picture 10 may include 18 CTUs in the horizontal direction and 12 CTUs in the vertical direction. Since each of the tiles 11a, ..., 13e is encoded independently of each other, each of the tiles can be encoded in parallel. However, since encoding information between different tiles cannot be used, encoding efficiency may be lowered.
  • the tile groups 11, 12, 13 may represent a set of consecutive tiles 11a, ..., 13e in raster scan order, but the scan order of the tiles is not limited thereto; in one embodiment, the tile groups 11, 12, and 13 may be formed by grouping consecutive tiles in an arbitrary scan order. In this case, information indicating the determined arbitrary scan order may be included in the bitstream and transmitted to the decoder.
  • the decoder may parse the transmitted scan order, check the tile scan order of the tile group, and decode it.
  • since the tiles 11a, ..., 13e are independently encoded, they have an advantage in parallel processing but a disadvantage of low encoding efficiency.
  • after the tiles 11a, ..., 13e are finally encoded, probability information for at least one syntax element (the probability context value for at least one syntax element in CABAC) and decoding start position information within the bitstream may be included in the bitstream and transmitted to the decoder.
  • the decoder may parse the transmitted probability information, update the probability information for the corresponding tiles 11a, ..., 13e, and then perform decoding on the tile. Probability information for syntax elements that has not been transmitted may be initialized or updated to a predetermined value. In other words, since CABAC probability information for each syntax element is transmitted in the bitstream, each of the tiles 11a, ..., 13e can be decoded in parallel.
  • the encoder and decoder may have syntax probability information for various situations in the form of a table.
  • the encoder may not include the syntax probability information in the bitstream, but may include only the index number of the table in the bitstream and transmit it to the decoder.
  • the decoder may reconfigure the syntax probability information by parsing the index number of the table included in the transmitted bitstream.
  • Each of the tiles 11a, ..., 13e in the tile groups 11, 12, and 13 may share and use probability information with each other.
  • all or arbitrary tiles in the tile groups 11, 12, 13 may update probability information using probability information for a tile group delivered from a bitstream.
  • all or arbitrary tiles in the tile group may perform CABAC decoding using the same probability information. Therefore, when one image includes both a region with smooth characteristics and a region with complex characteristics, regions with similar characteristics can constitute one tile group, and coding efficiency can be improved because the tiles constituting the tile group share the same probability information.
  • one coding tree unit may include three coding tree blocks (CTBs: Y, Cb, and Cr), and coding tree syntax associated with each CTB.
  • One CTU can be divided into several CUs (Coding Units).
  • One CU may include three Coding Blocks (CBs), a coding syntax associated with each CB, and an associated transform block (transform unit, hereinafter referred to as 'TU').
  • One TU may include three transform blocks (hereinafter referred to as 'TB') and transform syntax associated with each TB.
  • in a 4:2:0 color difference sampling format, when the size of the luminance CTB is 2^M x 2^N, the size of the color difference CTB may be 2^(M-1) x 2^(N-1). In another embodiment, in a 4:2:2 color difference sampling format, when the size of the luminance CTB is 2^M x 2^N, the size of the color difference CTB may be 2^(M-1) x 2^N or 2^M x 2^(N-1). In another embodiment, in a 4:4:4 color difference sampling format, when the size of the luminance CTB is 2^M x 2^N, the size of the color difference CTB may be 2^M x 2^N.
  • M and N may be the same value, or M and N may be different values.
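The relationship between luminance and color difference CTB sizes for each sampling format can be expressed directly. For 4:2:2 this sketch returns the width-halved variant; as the text notes, the height-halved variant 2^M x 2^(N-1) is equally possible:

```python
def chroma_ctb_size(luma_w, luma_h, sampling):
    """Chroma CTB size for a luma CTB of luma_w x luma_h samples under
    the three chroma sampling formats discussed in the text."""
    if sampling == "4:2:0":
        return luma_w // 2, luma_h // 2   # half width, half height
    if sampling == "4:2:2":
        return luma_w // 2, luma_h        # half width (one of two options)
    if sampling == "4:4:4":
        return luma_w, luma_h             # same size as luma
    raise ValueError("unknown sampling format: " + sampling)

print(chroma_ctb_size(128, 128, "4:2:0"))   # (64, 64)
print(chroma_ctb_size(128, 64, "4:2:2"))    # (64, 64)
```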
  • One CU may have a square or rectangular shape.
  • one CTU may serve as the top-level CU and may first be divided using a four-division (quadtree) tree structure.
  • each of the four-segmented tree leaf nodes divided into four may be further divided into two or three using a multi-type tree structure.
  • four types of division are used in this multi-type tree structure, as shown in FIG. 4.
  • FIG. 4 illustrates examples of a multi-type tree structure according to an embodiment of the present invention.
  • one multi-type tree leaf node may be one CU. If the size of the CU is not larger than the maximum transform size, it may not be partitioned any further, and the prediction and transform processes may be performed in units of the CU. In one embodiment, in a multi-type tree structure, the CU, PU, and TU may have the same block size. Referring to FIG. 4, if the horizontal or vertical size of the CB is larger than the maximum horizontal or vertical size of the TB, the CB may be implicitly divided in the horizontal or vertical direction until it is no larger than the maximum horizontal or vertical size of the TB.
  • FIG 5 shows an example of a multi-tree type structure according to an embodiment of the present invention.
  • a thick solid line indicates a case of being divided using a four-division tree structure
  • a thin solid line indicates a case of being divided using a multi-type tree structure.
  • Each of the four-segmented tree leaf nodes divided into four in a four-segmented tree structure may be recursively partitioned using a four-segmented tree structure, or may be partitioned using a multi-type tree structure.
  • the multi-type tree leaf nodes divided into two or three may be recursively divided using a multi-type tree structure.
  • division into a four-segmented tree structure in a multi-type tree leaf node is not allowed.
  • the blocks of the multi-type tree node may be divided using a four-division tree structure.
  • the four quadrilateral tree leaf nodes divided using the four-division tree structure may be recursively partitioned using the four-division tree structure, or may be partitioned using a multi-type tree structure. Splitting a multi-type tree leaf node into a quadrilateral tree structure can be applied only when the horizontal and vertical sizes of the block are equal to an arbitrary value (or are greater than or equal to, or less than or equal to, an arbitrary value).
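The split types discussed (the four-division tree plus the four multi-type tree splits) can be sketched as a function returning the sub-block sizes each split produces. The 1/4-1/2-1/4 ratio for ternary splits is a common convention assumed here, not a value taken from this document:

```python
def split_block(w, h, split):
    """Sub-block sizes produced by each split type:
    QT   = four-division (quadtree) split into four quadrants,
    BT_H = binary horizontal, BT_V = binary vertical,
    TT_H = ternary horizontal, TT_V = ternary vertical."""
    if split == "QT":
        return [(w // 2, h // 2)] * 4
    if split == "BT_H":
        return [(w, h // 2)] * 2
    if split == "BT_V":
        return [(w // 2, h)] * 2
    if split == "TT_H":
        return [(w, h // 4), (w, h // 2), (w, h // 4)]
    if split == "TT_V":
        return [(w // 4, h), (w // 2, h), (w // 4, h)]
    raise ValueError("unknown split type: " + split)

print(split_block(32, 32, "TT_V"))   # [(8, 32), (16, 32), (8, 32)]
print(split_block(64, 64, "QT"))     # [(32, 32), (32, 32), (32, 32), (32, 32)]
```

Recursive partitioning, as described in the surrounding text, is simply applying this function again to each returned sub-block, subject to the allowed-split rules.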
  • the division of the transform unit may depend on the division of the coding unit.
  • the division of the transform unit may be recursively performed in the lowest coding unit.
  • the lowest-order coding unit performs prediction, and transform and quantization processes may be performed by dividing the transform unit only on the error signal generated by the prediction.
  • an error signal for a plurality of coding units may constitute one block, and the block may be processed as one transform unit.
  • an error signal of two 16 x 8 blocks may be reconstructed into one 16 x 16 block.
  • a process of splitting and encoding or decoding a transform unit may be performed in units of reconstructed 16 x 16 blocks.
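Reconstructing the error signals of two 16 x 8 blocks into one 16 x 16 block, as in the example above, amounts to stacking the rows of the two blocks; a minimal sketch:

```python
def stack_error_blocks(top, bottom):
    """Stack the error signals of two W x H/2 blocks vertically into one
    W x H block so they can be processed as a single transform unit.
    Blocks are lists of rows; rows of `top` come first."""
    return top + bottom

# two 16 x 8 error blocks (8 rows of 16 samples each)
top = [[1] * 16 for _ in range(8)]
bottom = [[2] * 16 for _ in range(8)]
merged = stack_error_blocks(top, bottom)
print(len(merged), len(merged[0]))   # 16 16
```

The merged 16 x 16 block can then be split and transformed as one unit, as the text describes.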
  • a coding unit and a transform unit may independently perform decoding.
  • the syntax for the coding unit and the syntax for the transform unit are configured separately in a multi-type tree node. Accordingly, the coding unit and the transform unit are decoded independently without forming a top-down relationship with each other, and the split structure for the coding unit and the split structure for the transform unit in the highest multi-type tree node may be determined independently of each other.
  • the coding unit may have a square shape or a rectangular shape. If the coding unit has a square shape, DCT transform or DST transform may be performed using a transform matrix having the same horizontal and vertical length. If the coding unit has a rectangular shape, since the horizontal and vertical lengths are different, a transformation matrix may be predetermined according to the rectangular shape of the coding unit to perform DCT transformation or DST transformation.
  • when the coding unit has a rectangular shape, the coding unit may be divided into a plurality of square subunits, and DCT transformation or DST transformation may be performed for each subunit.
  • DCT transform or DST transform may be performed on a square coding unit of an upper node including a divided rectangular coding unit.
  • one transform unit may include a plurality of rectangular coding units.
  • a neighboring pixel adjacent to the current coding unit may be designated as a reference pixel. Thereafter, the most efficient prediction direction may be set, and a reference block may be generated by padding or interpolating a pixel from the reference pixel in a corresponding direction using the set prediction direction.
  • FIG. 6 illustrates an intra prediction mode of a current coding block according to an embodiment of the present invention.
  • each mode has a specific prediction direction, and a prediction block may be generated from a reference pixel using the prediction direction.
  • among the intra prediction direction modes, there are a planar mode (Intra_Planar), an average mode (DC; Intra_DC), and a mode for predicting a color difference signal from the reconstructed luminance signal (CCLM; cross-component linear model).
  • the encoding for the intra prediction direction mode is applied to a luminance signal and a color difference signal, respectively, and in the case of a luminance signal, the CCLM mode may be excluded.
  • a method of predicting an intra prediction mode for the current CU uses the most probable mode (MPM).
  • six MPM modes are used, and information (intra_luma_mpm_flag) indicating whether an intra prediction mode of the current CU is in the MPM list may be included in the bitstream.
  • the decoder may determine whether there is an intra prediction mode of the current CU in the MPM list through intra_luma_mpm_flag.
  • intra_luma_mpm_idx may be additionally parsed, and the intra prediction mode of the current CU may be derived using the intra_luma_mpm_idx-th intra prediction mode from the MPM list. If the intra prediction mode of the current coding unit is not in the MPM list, remaining information (intra_luma_mpm_remainder) for the intra prediction mode can be additionally parsed, and the intra prediction mode of the current CU can be derived from the parsed information.
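The parsing flow just described (intra_luma_mpm_flag, then either intra_luma_mpm_idx or intra_luma_mpm_remainder) can be sketched as follows. Mapping the remainder onto the non-MPM modes in increasing order is an assumption about the exact binarization; the 67-mode count comes from the surrounding text:

```python
def decode_intra_mode(mpm_flag, mpm_list, mpm_idx=None, remainder=None,
                      num_modes=67):
    """Derive the intra prediction mode of the current CU:
    if intra_luma_mpm_flag is set, take the mpm_idx-th MPM-list entry;
    otherwise map intra_luma_mpm_remainder onto the modes that are not
    in the MPM list, taken in increasing order."""
    if mpm_flag:
        return mpm_list[mpm_idx]
    non_mpm = [m for m in range(num_modes) if m not in mpm_list]
    return non_mpm[remainder]

mpms = [0, 1, 50, 18, 2, 34]                      # example 6-entry MPM list
print(decode_intra_mode(1, mpms, mpm_idx=2))      # 50
print(decode_intra_mode(0, mpms, remainder=0))    # 3
```

With remainder = 0, the derived mode is 3 because modes 0, 1, and 2 are already in the MPM list.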
  • the current block may be predictively coded using a mode without directionality.
  • An intra-prediction mode without directionality can be used in a smooth image region to improve encoding efficiency.
  • since the mode without directionality is commonly used, it may be placed earliest in order among the various intra prediction modes (for example, 67 modes).
  • the '0'th order is used for the planar mode, and the '1'th order is used for the average mode.
  • encoding of an intra prediction mode uses an MPM list constructed from intra prediction modes of neighboring blocks.
  • the encoder may store index information on whether to use the MPM list and which one in the MPM list to use in order to encode the intra prediction mode in a bitstream and transmit it to the decoder. Encoding an intra prediction mode using such an MPM list can increase encoding efficiency in encoding a directional mode.
  • the bitstream may include information indicating whether the current block is in a directional mode or not in a directional mode. If the current block is not in the directional mode, information indicating whether the current block is in a planar mode or an average mode may be further included in the bitstream.
  • if the intra prediction mode of the current block is a directional mode, whether or not to derive an MPM list may be selected.
  • whether to derive the MPM list may be determined using at least one of the following, each usable independently, or may be derived without using any of them: the location information of the reference pixel; the horizontal and vertical size information of the block of the current CU or a neighboring CU; the luminance signal; the color difference signal; whether the ISP mode is used; the prediction mode of the current CU; information about the intra prediction mode of a CU adjacent to the current CU; information about whether a CU adjacent to the current CU is in a directional mode; combined inter and intra prediction (CIIP) mode information; information on whether the reference pixel belongs to a picture boundary or another tile; intra block copy (IBC) mode information; intra and inter coding mode information; quantization parameters; and information on CTU boundaries.
  • index information on which one to use in the MPM list may be included in the bitstream.
  • prediction mode residual information (intra_luma_mpm_remainder) indicating the remaining prediction modes not included in the MPM list among the intra prediction modes may be included in the bitstream.
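The flag / index / remainder signalling described above can be sketched as follows. This is an illustrative model only — the function names and the fixed list are assumptions, not the syntax of any particular standard: a mode is sent either as an index into the MPM list or as its rank among the modes not in the list.

```python
def encode_intra_mode(mode, mpm_list):
    """Signal `mode` as (mpm_flag, payload): an index into the MPM list
    when possible, otherwise a remainder among the non-MPM modes."""
    if mode in mpm_list:
        return True, mpm_list.index(mode)           # intra_luma_mpm_idx
    # remainder: rank of `mode` among the modes not present in the MPM list
    remainder = sum(1 for m in range(mode) if m not in mpm_list)
    return False, remainder                         # intra_luma_mpm_remainder

def decode_intra_mode(mpm_flag, payload, mpm_list):
    """Recover the intra mode from the signalled flag and payload."""
    if mpm_flag:
        return mpm_list[payload]
    mode = payload
    for m in sorted(mpm_list):      # re-insert the skipped MPM positions
        if m <= mode:
            mode += 1
    return mode
```

Round-tripping every mode through this pair reproduces the original value, which is the property the remainder signalling relies on.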
  • the decoder may parse information on whether the intra prediction mode of the current block is a directional mode. If it is not a directional mode, information on whether the current block uses the planar mode or the average mode may be parsed. If the intra prediction mode of the current block is a directional mode, whether to derive the MPM list may be determined using at least one of: the location of the reference pixels; the horizontal and vertical sizes of the transform block of the current CU or a neighboring CU; whether the signal is luma or chroma; whether the ISP mode is used; the intra prediction mode of the current CU; the intra prediction modes of CUs adjacent to the current CU; whether the CUs adjacent to the current CU are in a directional mode; combined inter and intra prediction (CIIP) mode information; whether the reference pixels belong to a picture boundary or to another tile; intra block copy (IBC) mode information; intra/inter coding mode information; quantization parameters; and CTU boundary information.
  • alternatively, the MPM list may be derived using each piece of information individually.
  • the decoder may parse information indicating whether to derive an MPM list. If all neighboring blocks adjacent to the current block are in a non-directional mode, the information on whether to derive the MPM list is not parsed and the MPM list may not be derived. If the tile containing a neighboring block differs from the tile containing the current block, information from the other tile is not referenced, in order to remove the dependency between tiles; in that case the intra prediction mode of the neighboring block may be set to a predetermined directional or non-directional mode.
  • this may reduce parsing robustness on the decoder side. Therefore, when the intra prediction mode of the current block is a directional mode, the information on whether to derive the MPM list may be parsed regardless of the intra prediction modes of the neighboring blocks adjacent to the current block.
  • index information on an intra prediction mode used in the MPM list may be parsed to derive intra prediction information for a current block. If the MPM list is not used, prediction mode residual information (intra_luma_mpm_remainder) among intra prediction modes may be parsed to derive intra prediction information for the current block.
  • the prediction mode residual information may be binarized with a truncated binary code and decoded using CABAC.
  • the MPM list may be constructed excluding non-directional modes, that is, it may be composed only of directional modes.
  • the maximum value (maxVal) of the truncated binary code that the prediction mode residual information may have may be 59.
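As a sketch of that binarization: a truncated binary code with maxVal = 59 has n = maxVal + 1 = 60 symbols, k = ⌊log2 n⌋ = 5, and u = 2^(k+1) − n = 4, so the first 4 values get 5-bit codewords and the rest get 6 bits. This is an illustrative helper, not normative syntax:

```python
import math

def truncated_binary(value, max_val):
    """Truncated binary codeword (as a bit string) for value in [0, max_val]."""
    n = max_val + 1                      # number of symbols, e.g. 60 when max_val == 59
    k = int(math.floor(math.log2(n)))    # length of the shorter codewords
    u = (1 << (k + 1)) - n               # number of short codewords
    if value < u:
        return format(value, f'0{k}b')          # k bits
    return format(value + u, f'0{k + 1}b')      # k+1 bits, offset by u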
  • the MPM list may be configured based on intra prediction mode information of a neighboring block adjacent to the current block, information on whether the current block is in the ISP mode, and information indicating which reference pixel is used. .
  • a process for removing duplication of intra prediction modes within the MPM list may also be performed, which may increase the complexity of the encoder and decoder.
  • the default MPM list includes at least one prediction mode among Planar (0), DC (1), VER (50), HOR (18), VER−4 (46), and VER+4 (54).
  • the MPM list may be constructed using only intra prediction mode information of a neighboring block adjacent to the current block.
  • the MPM list may include one or more intra prediction modes among A, B, Planar (0), DC (1), VER (50), and HOR (18).
  • A may be the intra prediction mode of the block adjacent to the left of the current block,
  • B may be the intra prediction mode of the block adjacent to the top of the current block.
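A minimal sketch of such a construction, assuming a six-entry list, the default modes listed above, and a simple candidate order (the order itself is an assumption); duplicates are removed as candidates are appended:

```python
def build_mpm_list(a, b, size=6):
    """Build an MPM list from the left (a) and above (b) neighbour modes,
    padded with default modes and de-duplicated."""
    PLANAR, DC, HOR, VER = 0, 1, 18, 50
    candidates = [a, b, PLANAR, DC, VER, HOR, VER - 4, VER + 4]
    mpm = []
    for m in candidates:
        if m is not None and m not in mpm:   # duplicate-removal step
            mpm.append(m)
        if len(mpm) == size:
            break
    return mpm
```

The duplicate check is the step noted above as adding encoder/decoder complexity.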
  • the order of the prediction modes in the MPM list may differ from picture to picture.
  • At least two or more blocks may share and use one MPM list.
  • the MPM list may use an MPM list derived from any one block among several blocks.
  • an MPM list derived from a block to be encoded or decoded first among at least two or more blocks may be used as the MPM list of the next block.
  • an MPM list may be configured in a block for an upper node of at least two or more blocks, and the lower blocks may share and use the MPM list configured in the upper block.
  • the MPM list derived in units of coding blocks may be shared and used by sub-blocks.
  • the MPM list for the first subblock and the MPM list for the second subblock may be identical to each other.
  • the intra prediction modes may first be grouped.
  • for example, the intra prediction modes may be grouped into M (for example, 10) groups.
  • information (intra_luma_group_index) indicating whether the intra prediction mode of the current coding unit is included in the corresponding group may be included in the bitstream.
  • each of the M groups may be divided into N (for example, 5) subgroups, and information (intra_luma_subgroup_index) indicating whether the intra prediction mode of the current coding unit is included in the corresponding subgroup may be included in the bitstream.
  • the remaining information (intra_luma_remainder_in_subgroup) identifying the intra prediction mode within the subgroup may be included in the bitstream.
  • the decoder may parse intra_luma_group_index, intra_luma_subgroup_index, and intra_luma_remainder_in_subgroup to derive intra prediction mode information for the current coding block.
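The group / subgroup / remainder decomposition can be sketched with uniform grouping. The sizes below (groups of 10 modes, subgroups of 2) are chosen for exact divisibility and are illustrative only — not the document's example values:

```python
def split_mode(mode, group_size=10, sub_size=2):
    """Decompose an intra mode into (intra_luma_group_index,
    intra_luma_subgroup_index, intra_luma_remainder_in_subgroup)."""
    g, rest = divmod(mode, group_size)
    s, r = divmod(rest, sub_size)
    return g, s, r

def join_mode(g, s, r, group_size=10, sub_size=2):
    """Inverse of split_mode: recover the intra mode from the three indices."""
    return g * group_size + s * sub_size + r
```

For example, mode 37 splits into group 3, subgroup 3, remainder 1, and joining those indices recovers 37.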
  • the grouping of intra prediction modes used to signal the intra prediction mode of the current coding block may be performed at equal intervals or non-uniformly. In one embodiment, the groups may be designated unevenly, for example 5 modes in the first group, 10 in the second group, 7 in the third group, and so on. Whether the grouping is uniform or non-uniform, and, if non-uniform, the number of modes in each group, may be derived using information such as the intra prediction modes included in the MPM list and the location of the reference pixels.
  • when the current coding unit block is located at the top of a picture or at the top of a tile, reference pixels may exist only to the left of the block.
  • in that case, the intra prediction mode of the current coding unit may be encoded using only intra prediction modes with horizontal directionality, and intra prediction modes with vertical directionality may be implicitly excluded.
  • the intra prediction mode information included in the initial MPM list may then be limited to the intra prediction mode of the left coding unit block, Planar (0), DC (1), and intra prediction modes with horizontal directionality. Accordingly, the available intra prediction modes may be selected by checking whether the current coding block is located at a picture boundary or a tile boundary.
  • intra prediction mode availability information may indicate which intra prediction modes can be used for the current coding unit block. That is, the number of available intra prediction modes may vary for each coding unit block. For example, when the size of the current coding unit block is greater than or equal to an arbitrary value, only Planar (0), DC (1), VER (50), HOR (18), VER−4 (46), and VER+4 (54) may be used.
  • the intra prediction mode availability information may be set for each intra prediction mode. For example, the availability information for the Planar (0) mode may be set to '1' and the availability information for the DC (1) mode to '0'; in this case, the DC mode is not available for the current block, while the Planar mode is available.
  • the intra prediction mode availability information may be derived using at least one of: the location of the reference pixels; the horizontal and vertical sizes of the transform block of the current CU or a neighboring CU; whether the signal is luma or chroma; whether the ISP mode is used; the intra prediction mode of the current CU; the intra prediction modes of CUs adjacent to the current CU; combined inter and intra prediction (CIIP) mode information; whether the reference pixels belong to a picture boundary or to another tile; intra block copy (IBC) mode information; intra/inter coding mode information; quantization parameters; and CTU boundary information.
  • the intra prediction mode availability information may be included in the bitstream and transmitted to the decoder.
  • the decoder may parse the intra prediction mode availability information, set the intra prediction modes available for the current coding unit block, and use that information to decode the intra prediction mode efficiently in terms of bit rate.
  • the process of generating an intra prediction sample for the current CU may be performed as follows.
  • Step 1: construct the reference pixels.
  • in Step 1, it may be determined, for each neighboring pixel adjacent to the current coding unit, whether it can be used as a reference pixel (reference pixel availability).
  • if a neighboring pixel cannot be referenced, its reference pixel availability becomes false and it cannot be used as a reference pixel. If the reference pixel availability of all neighboring pixels adjacent to the current coding unit is false, all reference pixels may be set to a predetermined value, after which all availabilities may be changed to true. If only some neighboring pixels have a reference pixel availability of false, each such pixel value may be filled using a pixel adjacent to it whose availability is true, and its availability then changed to true.
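The substitution rule above can be sketched as follows, with `None` marking an unavailable sample and 512 standing in for the "randomly determined value" (both representations are illustrative assumptions):

```python
def fill_reference_samples(samples, default=512):
    """Substitute unavailable reference samples (None).
    If none are available, fill all with `default`; otherwise copy
    each missing sample from the nearest previously available one."""
    if all(s is None for s in samples):
        return [default] * len(samples)
    out = list(samples)
    # find the first available sample and back-fill the leading gap
    first = next(i for i, s in enumerate(out) if s is not None)
    for i in range(first):
        out[i] = out[first]
    # forward-fill the remaining gaps from the previous available sample
    for i in range(first + 1, len(out)):
        if out[i] is None:
            out[i] = out[i - 1]
    return out
```

After this pass every reference sample holds a value, so the availability flags can all be treated as true.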
  • filtering may be performed on the reference pixel.
  • whether to perform the filtering may be derived using at least one of: the location of the reference pixels; the horizontal and vertical sizes of the transform block of the current CU or a neighboring CU; whether the signal is luma or chroma; whether the ISP mode is used; the intra prediction mode of the current CU; the intra prediction modes of CUs adjacent to the current CU; combined inter and intra prediction (CIIP) mode information; whether the reference pixels belong to a picture boundary or to another tile; intra block copy (IBC) mode information; intra/inter coding mode information; quantization parameters; and CTU boundary information.
  • the filtering may be performed when the horizontal or vertical size of the transform block of the current coding unit is greater than or equal to a predetermined value, the signal is a luma signal, and the prediction is an intra prediction from reference pixels adjacent to the current block.
  • an intra prediction sample may be generated according to each intra prediction mode.
  • in the planar mode, a prediction block is generated using the average of a vertical prediction sample block and a horizontal prediction sample block.
  • the vertical prediction sample block may be generated through the lower left sample of the current block and the upper samples of the current block
  • the horizontal prediction sample block may be generated through the upper right sample of the current block and the left samples of the current block.
  • the prediction sample may be generated by applying a weight between the vertical prediction sample and the horizontal prediction sample based on the horizontal and/or vertical size of the current block.
  • when the horizontal size of the current block is larger than its vertical size, the prediction samples may be generated by giving the vertical prediction sample block a higher (or lower) weight than the horizontal prediction sample block. For example, when the current block is wider than it is tall, a weighted average may be computed by assigning a weight of '3' to the vertical prediction sample and a weight of '1' to the horizontal prediction sample.
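The 3:1 weighting example can be sketched per sample; the rounding offset and the symmetric treatment of tall blocks are assumptions for illustration:

```python
def weighted_planar_sample(v_pred, h_pred, width, height):
    """Combine one vertical and one horizontal prediction sample with a
    weight chosen from the block aspect ratio (illustrative 3:1 weighting)."""
    if width > height:        # wide block: trust the vertical predictor more
        wv, wh = 3, 1
    elif width < height:      # tall block: trust the horizontal predictor more
        wv, wh = 1, 3
    else:                     # square block: plain average
        wv, wh = 1, 1
    # integer weighted average with rounding
    return (wv * v_pred + wh * h_pred + (wv + wh) // 2) // (wv + wh)
```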
  • when the intra prediction mode of the current block is the DC mode, the prediction block may be generated using the average value of the reference pixels.
  • when the horizontal size of the current block is larger than its vertical size, the average value may be calculated using only the upper samples of the current block.
  • when the horizontal size of the current block is smaller than its vertical size, the average value may be calculated using only the left samples of the current block. If the horizontal and vertical sizes of the block are equal, the average value may be calculated using both the upper and left samples of the current block.
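A sketch of this side selection (the rounding offset is an assumption); the longer reference side determines which samples enter the average:

```python
def dc_prediction(top, left):
    """DC prediction value: average over the longer reference side only,
    or over both sides when the block is square."""
    if len(top) > len(left):        # width > height: top samples only
        ref = top
    elif len(top) < len(left):      # height > width: left samples only
        ref = left
    else:                           # square: both sides
        ref = top + left
    return (sum(ref) + len(ref) // 2) // len(ref)   # rounded integer mean
```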
  • when the intra prediction mode of the current block is the planar mode or the DC mode and the left or upper samples of the current block belong to another tile (or slice), those samples cannot be used as reference samples, so samples at unavailable locations may not be used to generate prediction samples. For example, if the upper samples of the current block belong to another tile (slice) and the current intra prediction mode is the planar mode, the prediction samples for the current block may be generated using only the horizontal prediction samples, without generating vertical prediction samples.
  • in the DC mode under the same condition, the average value may be calculated using only the left samples of the current block, regardless of the horizontal or vertical size of the current block, and the prediction samples for the current block may be generated from that average value.
  • a prediction sample may be generated through a weight-based interpolation method.
  • the type of interpolation filter used for the prediction samples of the current block may be derived using information such as the intra prediction mode, the location of the reference pixels, ISP mode information, whether the signal is luma or chroma, the transform block size, and the size of the current coding block.
  • a 4-tap Cubic or 4-tap Gaussian filter may be used as the interpolation filter.
  • an intra prediction mode and a location-based prediction sample filtering process may be performed.
  • the strength of the filtering, or whether to perform it, may be derived using at least one of: the location of the reference pixels; the horizontal and vertical sizes of the transform block of the current CU or a neighboring CU; whether the signal is luma or chroma; whether the ISP mode is used; the intra prediction mode of the current CU; the intra prediction modes of CUs adjacent to the current CU; combined inter and intra prediction (CIIP) mode information; whether the reference pixels belong to a picture boundary or to another tile; intra block copy (IBC) mode information; intra/inter coding mode information; quantization parameters; and CTU boundary information.
  • filtering may be performed on an intra prediction sample of the current coding unit.
  • the arbitrary value may be a multiple of 2.
  • filtering of the intra prediction samples of the current coding unit may be performed when the current coding unit carries a luma signal and the intra prediction samples are predicted from reference pixels adjacent to the current block.
  • one CU block may be divided into two or more sub-blocks to be encoded.
  • the sub-blocks may be sequentially encoded, and the current sub-block may be encoded with reference to a reconstructed pixel of a previous sub-block.
  • the intra prediction mode of the current block may be shared by the divided sub-blocks.
  • the intra prediction modes of the divided sub-blocks of the current block may differ from one another, but have values close to each other.
  • for example, the intra prediction modes of the sub-blocks may differ from each other by ±1 or ±2.
  • for the second partitioned block, only the difference from the intra prediction mode of the first partitioned block may be transmitted, which may improve encoding efficiency.
  • the intra prediction mode for the third partitioned block may be transmitted to the decoder as a difference value from the second partitioned block.
  • the difference value may be encoded using the syntax described with reference to FIG. 7 and output as a bitstream.
  • FIG. 7 is a diagram illustrating syntax for differently setting an intra prediction mode of a divided block according to an embodiment of the present invention.
  • the ISP differential direction flag (isp_diff_direction_flag) may be set to '1' for the + direction and to '0' for the − direction.
  • when the first ISP difference value flag is '1', the difference value of the intra prediction mode may be '1' or more; when it is '0', the difference value may be '0'.
  • when the second ISP difference value flag (isp_diff_greater1_flag) is '1', the difference value of the intra prediction mode may be '2'; when it is '0', the difference value may be '1'.
  • the decoder may obtain the intra prediction mode of the second divided block by adding the decoded difference value to the intra prediction mode of the first divided block.
  • like the second partitioned block, the third and fourth partitioned blocks may use the intra prediction mode of the first partitioned block as a prediction value. That is, the intra prediction mode of the fourth divided block may be obtained by adding, to the intra prediction mode of the first divided block, the difference value between the intra prediction modes of the first and fourth divided blocks.
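The flag-based differential coding can be modelled as below. The first-flag name (`isp_diff_greater0_flag`) and the ±2 difference range are assumptions consistent with, but not stated verbatim in, the description:

```python
def encode_isp_diff(diff):
    """Encode a signed intra-mode difference in [-2, 2] as flags:
    (greater0, direction, greater1) — flag names are illustrative."""
    greater0 = 1 if diff != 0 else 0        # assumed isp_diff_greater0_flag
    if not greater0:
        return (0, None, None)              # difference is 0: no more flags
    direction = 1 if diff > 0 else 0        # isp_diff_direction_flag: '1' = +
    greater1 = 1 if abs(diff) == 2 else 0   # isp_diff_greater1_flag
    return (greater0, direction, greater1)

def decode_isp_mode(first_block_mode, flags):
    """Recover a sub-block's mode from the first block's mode and the flags."""
    greater0, direction, greater1 = flags
    if not greater0:
        return first_block_mode
    diff = 2 if greater1 else 1
    return first_block_mode + (diff if direction else -diff)
```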
  • the multi-type tree structure can provide high flexibility in partitioning blocks.
  • FIG. 8 shows a form in which a current coding unit is implicitly divided according to an embodiment of the present invention.
  • the ISP division mode information indicating the division type of the ISP mode may be set implicitly using the division information of the upper coding unit. In this case, the ISP division mode information need not be transmitted to the decoder.
  • the centrally located coding unit may be vertically divided into two or more sub-blocks.
  • information about the vertical direction may be implicitly set.
  • the division of the sub-block may be implicitly performed in the horizontal direction.
  • the ISP mode segmentation information may be set to an implicit segmentation mode using at least one of segmentation information of an upper coding unit, segmentation information of a neighboring coding unit, and segmentation information of a neighboring ISP mode.
  • Whether to use the ISP mode may be determined according to the horizontal or vertical size of the coding unit. In an embodiment, when the horizontal or vertical size of the coding unit is greater than or equal to an arbitrary size, the ISP mode may not be used. Further, when the horizontal or vertical size of the coding unit is less than or equal to an arbitrary size, it is possible to set not to use the ISP mode.
  • when the coding unit block is encoded in the ISP mode, the luma signal may be divided into two or more parts and encoded.
  • the color difference signal can be encoded without being divided.
  • the color difference signal may be divided and encoded in the same form as the luminance signal.
  • in the case of the 4:2:0 video format, the chroma signal has one quarter the size of the luma signal. If the current coding unit block has an arbitrary size and the luma signal is divided into four sub-blocks and encoded, the corresponding chroma signal may be divided into two sub-blocks and encoded. In one embodiment, when the size of the coding unit block is 8x16 (luma: 8x16, chroma: 4x8), the luma signal may be divided into four 8x4 blocks in the horizontal direction and encoded, and the corresponding chroma signal may be divided into two 4x4 blocks and encoded.
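The 8x16 example can be checked with a small helper. It assumes 4:2:0 sampling (chroma is half the luma size in each dimension) and simply halves the chroma split count; both the function and the halving rule are illustrative:

```python
def isp_partitions(luma_w, luma_h, luma_splits, direction="horizontal"):
    """Sub-block sizes for an ISP split of a luma block and the matching
    4:2:0 chroma block; the chroma split count is halved so its sub-blocks
    do not become too small."""
    chroma_w, chroma_h = luma_w // 2, luma_h // 2   # 4:2:0 subsampling
    chroma_splits = max(luma_splits // 2, 1)
    if direction == "horizontal":   # horizontal cuts: sub-blocks keep full width
        luma = [(luma_w, luma_h // luma_splits)] * luma_splits
        chroma = [(chroma_w, chroma_h // chroma_splits)] * chroma_splits
    else:                           # vertical cuts: sub-blocks keep full height
        luma = [(luma_w // luma_splits, luma_h)] * luma_splits
        chroma = [(chroma_w // chroma_splits, chroma_h)] * chroma_splits
    return luma, chroma
```

For an 8x16 coding unit this reproduces the four 8x4 luma blocks and two 4x4 chroma blocks of the embodiment above.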
  • division direction information of the luminance signal and the color difference signal may be shared with each other.
  • the number of divisions of the color difference signal may vary according to the sub-sampling format of the color difference signal.
  • the ISP split mode of a coding unit to which the ISP mode is applied may have a high probability of being in the vertical direction.
  • in that case, the intra prediction mode of each sub-block divided in the ISP mode is also likely to be in the vertical direction. Accordingly, the division information of the upper coding unit may be used to derive the intra prediction modes of the sub-blocks divided in the ISP mode.
  • the intra prediction mode of each sub-block divided in the ISP mode may be derived using at least one of the division information of the upper coding unit, the division information of neighboring coding units, the division information of neighboring ISP-mode blocks, and the intra prediction modes of neighboring blocks.
  • each partitioned block may be partitioned in the form of an asymmetric block partition (ABP).
  • FIG. 9 shows an example of asymmetric block division according to an embodiment of the present invention.
  • in the ISP mode, one CU block may be divided into up to four sub-blocks, and because there is a dependency between the sub-blocks, throughput is reduced during encoding and decoding.
  • the ISP mode may have a high probability of being selected mainly at object boundaries. Therefore, when the ISP mode is applied to the current coding unit block, one coding unit block may be divided into two sub-blocks, as shown in FIG. 9, using an asymmetric block partition (hereinafter, ABP).
  • one coding unit block may be divided into two sub-blocks using either an asymmetric block partition (ABP) type or a symmetric block partition (SBP) type.
  • the corresponding split mode information may be included in the bitstream and transmitted to the decoder.
  • the decoder may parse the corresponding information to determine which subblock type is to be divided, and may set the intra prediction mode of each divided subblock to have different values.
  • at least one of the divided sub-blocks may have no transform coefficients (i.e., its coded block flag (CBF) is '0').
  • when the CBF of the first sub-block is '1', the CBF of the second sub-block may be '0'. Only the CBF of the first sub-block is included in the bitstream; the CBF of the second sub-block is not included and may be derived from the CBF of the first sub-block.
  • the sub-block ISP mode can be deactivated only when the current block is smaller than a certain size.
  • the ISP mode can be applied with the dependency removed.
  • a method of removing the dependency between sub-blocks of the current block may be performed in the prediction step.
  • a reference pixel used for prediction of each subblock uses a pixel adjacent to the subblock. Therefore, in order to predict the current sub-block, the previous sub-block must have been encoded or decoded. Accordingly, encoding and decoding dependence may exist between the current subblock and the previous subblock.
  • a pixel adjacent to a coding block including each sub-block may be used as a reference pixel used to derive a prediction sample for each sub-block.
  • since the current sub-block uses previously encoded and decoded blocks, rather than the previous sub-block, as reference pixels, there is no dependency between the sub-blocks.
  • Whether to perform the method of removing the dependency may be selectively applied according to the size condition of the coding block. For example, the above method can be applied only when the size of the coding block is less than or equal to an arbitrary size (eg, 8x8).
  • FIG. 10 is a diagram for explaining a method of deriving a reference pixel for a subblock according to an embodiment of the present invention.
  • a current coding block 20 may construct reference pixels 21 from adjacent pixels, and all sub-blocks (#1, #2, #3, #4) may share the reference pixels 21.
  • the complexity can be reduced.
  • since the reference pixels 21 used by each sub-block (#1, #2, #3, #4) are identical, no blocking artifacts occur between the sub-blocks (#1, #2, #3, #4). Therefore, there is no need to apply a deblocking filter at the boundaries of the sub-blocks (#1, #2, #3, #4).
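A sketch of this shared-reference construction: every sub-block reads the same pixel positions around the coding block, so no sub-block waits on another's reconstruction. The 2× extension of the top row and left column is an assumption for illustration:

```python
def subblock_references(block_x, block_y, width, height):
    """Reference pixel coordinates shared by every ISP sub-block:
    the row above and the column left of the *coding block*, so no
    sub-block depends on another sub-block's reconstruction."""
    top = [(block_x + dx, block_y - 1) for dx in range(2 * width)]
    left = [(block_x - 1, block_y + dy) for dy in range(2 * height)]
    corner = [(block_x - 1, block_y - 1)]
    return corner + top + left
```

Because the returned list does not depend on which sub-block is being predicted, all sub-blocks can be processed in parallel.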
  • a reference pixel may be configured only with pixels adjacent to the last sub-block among pixels adjacent to the coding block.
  • a blocking phenomenon may occur between each sub-block. Accordingly, deblocking filtering may be performed at the boundary of each sub-block, and the intensity of filtering may be determined according to characteristics of the sub-blocks or reference pixels.
  • interpolation may be used to increase prediction efficiency.
  • the weight applied to a reference pixel may be determined by information such as the intra prediction mode of the current block, the signal type (whether it is a luma signal), and the location of the reference pixel. For the last sub-block (#4), among the pixels adjacent to the coding block, a pixel adjacent to the last sub-block may be given a different (for example, higher) weight than the other pixels during interpolation before the prediction samples are generated.
  • the prediction efficiency may be increased by the interpolation method, so that a blocking phenomenon between the sub-blocks may hardly exist. Accordingly, at the boundary of each sub-block, the intensity of filtering such as deblocking may be set weakly or strongly.
  • since each sub-block would otherwise depend on adjacent sub-blocks, cases may occur in which a reference pixel cannot be derived through those sub-blocks.
  • in such cases, the adjacent pixels may not be used,
  • or the adjacent pixels belonging to the other sub-blocks may be replaced with the nearest available pixels and used as reference pixels.
  • when all pixels to the left of the current sub-block belong to other sub-blocks, the left pixels of the current sub-block may not be used, and the left pixels of the current coding block may be substituted as reference pixels.
  • likewise, when the upper pixels of the current sub-block belong to other sub-blocks, they may be replaced with pixels of the current coding block to construct the reference pixels.
  • to improve coding efficiency, not only a smoothing filter on the reference pixels during intra prediction but also an interpolation method when generating the prediction samples may be used.
  • this interpolation method has a disadvantage of increasing computational complexity.
  • for the luma signal, a 4-tap Cubic or 4-tap Gaussian filter may be selectively applied according to the intra prediction mode, the reference pixel position, whether the block is an ISP block, and the block size, and bi-linear filtering may be performed for the chroma signal.
  • when the intra prediction mode for the luma signal is vertical or horizontal, the interpolation may be skipped or a 4-tap Cubic filter may be applied.
  • in the ISP mode, since prediction is performed by dividing one coding block into several sub-blocks and the prediction efficiency is high, the interpolation may be skipped.
  • whether the interpolation method is applied may be changed only when the block size is greater than (or less than) a predetermined size.
  • the interpolation method when the size of the sub-block (or coding block) is 8x8, 8x4, or 4x8 or less, the interpolation method is not applied, or a low-complexity bi-linear filter applied to the color difference signal may be applied to the luminance signal. In another embodiment, the interpolation method may be applied only when the size of the sub-block (or coding block) is 8x8, 8x4, or 4x8 or less.
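One way to express such a selection rule is shown below. All thresholds (64 and 1024 samples) and the mapping to filter names are assumptions for illustration, not the document's normative rule:

```python
def select_interpolation_filter(is_luma, mode, width, height, is_isp):
    """Pick an interpolation filter for fractional reference positions.
    Thresholds and filter choices are illustrative, not normative."""
    HOR, VER = 18, 50
    if not is_luma:
        return "bilinear"               # low-complexity filter for chroma
    if mode in (HOR, VER):
        return None                     # integer directions: no interpolation
    if is_isp or width * height <= 64:  # ISP or small blocks: cheap filter
        return "bilinear"
    # larger luma blocks: smoothing vs. sharp filter chosen by block area
    return "gauss4" if width * height >= 1024 else "cubic4"
```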
  • each sub-block may share an intra prediction mode.
  • the intra prediction modes of all sub-blocks may have the same value.
  • the intra prediction modes of each sub-block may be different.
  • deblocking filtering may not be performed at the boundary of each sub-block. However, since a transform coefficient other than '0' may exist in the fourth sub-block, deblocking filtering may be performed at the boundary between the third sub-block and the fourth sub-block. Alternatively, even if there are no transform coefficients in the first sub-block, the second sub-block, and the third sub-block, if the intra prediction modes are different from each other, deblocking filtering may be performed at the boundary between each sub-block.
  • this may be applied only when the block size (width and height) is equal to an arbitrary value, greater than an arbitrary value, or less than an arbitrary value.
  • screen content may include images of electronic documents on a computer, such as Word or PowerPoint documents, virtual reality images, animation images, remote connection images, and screen mirroring images. Such screen content may repeatedly contain the same or similar patterns within a picture or between pictures. In an embodiment, the portion most similar to the current block may be found in an already reconstructed region of the current picture, and a block vector representing the displacement between that portion and the current block may be included in the bitstream and transmitted to the decoder.
  • a prediction sample for the current block may be obtained, using the block vector, from a reference buffer region of the current picture that has already been reconstructed.
  • the current block may be reconstructed using the prediction sample, and in this case, the reference buffer region may be periodically updated.
  • the periodic update of the reference buffer area may be performed before the first coding tree unit of each row of the current picture is decoded.
  • the intra block copy mode may be signaled by being integrated with a general inter-picture encoding mode.
  • the reference picture may be located in a region that has already been encoded within the picture currently being encoded. If all already-encoded areas of the current picture are used as the reference picture, coding efficiency improves because the referable area is wider, but a large amount of memory is used; it is therefore desirable to use only a reference picture memory of a preset size.
  • the reference picture in the intra block copy mode may be continuously and periodically updated until encoding of the current picture is completed.
  • the process of updating the reference picture may be performed after coding of the current block, the current coding unit, the current coding tree unit, the current tile, and the current tile group is completed.
  • the reference picture update process may be performed before the first coding tree unit of each row of the current picture is encoded. Alternatively, the reference picture update process may be performed after the last CTU of each row of the current picture is encoded.
  • the reference buffer memory may be set to a specific value or an invalid value. Alternatively, in the reference picture update process, the contents of the reference buffer memory may be initialized or disabled.
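The periodic initialization of the reference buffer described above can be sketched as follows; the 2-D list layout and the sentinel marking samples as unavailable are assumptions for illustration.

```python
INVALID = -1  # illustrative sentinel marking buffer samples as unavailable

def reset_ibc_buffer(buf):
    """Initialize/disable the IBC reference buffer contents in place, e.g.
    before the first coding tree unit of a CTU row is processed."""
    for row in buf:
        for i in range(len(row)):
            row[i] = INVALID

buf = [[7, 7], [7, 7]]
reset_ibc_buffer(buf)
```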
  • the reference picture in the intra block copy mode may be continuously and periodically updated until decoding of the current picture is completed.
  • This reference picture update process may be performed after decoding of the current block, the current coding unit, the current coding tree unit, the current tile, and the current tile group is completed.
  • the reference picture update process may be performed before the first coding tree unit of each row of the current picture is decoded.
  • the reference picture update process may be performed after the last CTU of each row of the current picture is decoded.
  • a tile group (or slice) is composed of a plurality of tiles, and the horizontal position of each tile is different, but the vertical position may be the same.
  • a tile group may be designated for each row in one picture.
  • a tile group may be designated by N rows in one picture.
  • the current tile group is not included in the reference picture in the intra block copy mode, and only other tile groups may be included in the reference picture in the IBC mode. Accordingly, in the case of the intra block copy mode in the decoder, memory management and implementation may be facilitated because only memory corresponding to a previously defined tile group size is required.
  • the intra block copy mode configures a region already encoded and decoded in the current picture as a reference region, finds the block most similar to the current block in the reference region, expresses its position information as a vector, and delivers information on the vector to the decoder.
  • the decoder may construct a prediction block for the current block by fetching it from the reference region using the vector information.
  • when an error occurs, the reference region may not be reconstructed normally.
  • the current block encoded in the intra block copy mode may not be normally decoded, and this error may propagate to the next block.
  • the next block may generate a prediction block without referring to the intra block copy block in order to prevent propagation of the error.
  • the block encoded in the intra block copy mode may not be used as a reference pixel of the current block.
  • the block encoded in the intra block copy mode may be configured through an intra prediction method.
  • for a block in the intra block copy mode, additional information (an intra prediction mode) may be signaled so that a prediction block can also be generated through the intra prediction method. If no error occurs, the decoder may perform decoding in the intra block copy mode and may not use the additionally transmitted intra prediction mode.
  • when an error occurs, the decoder may not perform decoding in the intra block copy mode, but may construct the current block by performing intra prediction through the additionally transmitted intra prediction mode.
  • since the error information is information used in the intra block copy mode, it may not be used for the current block configured through intra prediction.
  • the intra prediction mode is not transmitted through the bitstream; instead, an intra prediction mode derived through a neighboring block is used, or DC, horizontal, vertical, planar, or another directional mode may be arbitrarily selected and used to implicitly generate a prediction block.
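The implicit fallback described above can be sketched as follows, assuming integer mode indices (with DC = 1, as in recent VVC drafts) and that an unavailable neighbor is reported as None; all names are illustrative.

```python
DC = 1  # DC mode index as in recent VVC drafts (assumption)

def implicit_intra_mode(left_mode, above_mode):
    """Derive an intra prediction mode without signaling it in the bitstream:
    reuse a neighboring block's mode when one is available, otherwise fall
    back to an arbitrarily chosen default (DC here)."""
    if left_mode is not None:
        return left_mode
    if above_mode is not None:
        return above_mode
    return DC
```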
  • the intra block copy mode may use a region similar to the current block in the reference region as the prediction block for the current block. This is the same concept as inter prediction, and its prediction efficiency is higher than that of intra prediction, so there may be little or no error signal at a desired image quality.
  • an error signal of a block encoded in the intra block copy mode may not exist under certain conditions, and a target image quality may be achieved without sending an error signal.
  • the decoder may implicitly set that error information does not exist in the luminance signal and the color difference signal for the block encoded in the intra block copy mode, and may not parse information related to the error signal.
  • the “7.3.7.5 Coding unit syntax” part in JVET-N1001-v2 may be modified as shown in FIG. 10.
  • the JVET-N1001-v2 document can be found at http://phenix.it-sudparis.eu/jvet/doc_end_user/documents/14_Geneva/wg11/JVET-N1001-v2.zip.
  • FIG. 11 is an example of a coding unit syntax according to an embodiment of the present invention.
  • cu_cbf information is parsed only when the current block is not encoded in the intra block copy mode; in the case of the intra block copy mode, cu_cbf is implicitly set to '0', so that information related to an error signal is not parsed or decoded.
  • in the case of the intra block copy mode, cu_cbf may be implicitly set to '1' so that information related to an error signal may be parsed or decoded.
  • one of tu_cbf_luma, tu_cbf_cb, and tu_cbf_cr may be implicitly set to '0' to prevent an error signal related to a luminance or color difference signal from being parsed or decoded.
  • tu_cbf_luma, tu_cbf_cb, and tu_cbf_cr information may be parsed only when it is not encoded in the intra block copy mode. Meanwhile, in the case of the intra block copy mode, one of tu_cbf_luma, tu_cbf_cb, and tu_cbf_cr may be implicitly set to '0', so that information related to an error signal is not parsed or decoded.
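The conditional parsing described in these embodiments can be sketched as follows; the mode label, the bit-reader callback, and the inferred value are illustrative assumptions rather than the normative syntax of the cited drafts.

```python
MODE_IBC = "ibc"  # illustrative mode label, not normative syntax

def parse_cu_cbf(read_bit, pred_mode, infer_value=0):
    """Parse cu_cbf only when the CU is not coded in intra block copy mode;
    otherwise infer it without reading from the bitstream (the embodiments
    above describe inferring '0' so the residual is skipped, or '1' so the
    residual is still parsed)."""
    if pred_mode == MODE_IBC:
        return infer_value
    return read_bit()

bits = iter([1])
cbf_inter = parse_cu_cbf(lambda: next(bits), "inter")  # read from the stream
cbf_ibc = parse_cu_cbf(lambda: next(bits), MODE_IBC)   # inferred, no bit read
```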
  • whether to implicitly skip parsing or decoding of information related to an error signal in the intra block copy mode may be derived using at least one of: horizontal and vertical size information of a transform block of the current coding unit or a neighboring coding unit, the luminance signal, the color difference signal, intra prediction mode information of an adjacent coding unit, information on whether the current coding unit or an adjacent coding unit uses a directional mode, a quantization parameter, and information on a coding tree unit boundary. Alternatively, it may be derived independently using one of the above pieces of information.
  • the method of not implicitly parsing or decoding information related to an error signal in the intra block copy mode may be equally applied to the triangle merge mode, the CIIP mode, and the CCLM mode.
  • the triangle merge mode and the CIIP mode may be performed only when the horizontal x vertical size of the current block is 64 or more.
  • the triangle merge mode and the CIIP mode may be used mainly in monotonous areas such as a background part or an object interior. In a monotonous area, the error signal may be relatively small compared to an object boundary. Therefore, for a block encoded in the triangle merge mode, parsing or decoding of information related to an error signal may be implicitly skipped.
  • cu_cbf information is parsed only when the current block is not encoded in the triangle merge mode or the CIIP mode; in the case of the triangle merge mode or the CIIP mode, cu_cbf is implicitly set to '0', so that information related to an error signal is not parsed or decoded.
  • tu_cbf_luma, tu_cbf_cb, and tu_cbf_cr information may be parsed only when it is not encoded in the triangle merge mode or the CIIP mode.
  • in the case of the triangle merge mode or the CIIP mode, one of tu_cbf_luma, tu_cbf_cb, and tu_cbf_cr may be implicitly set to '0', so that information related to an error signal is not parsed or decoded.
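A minimal sketch of the size restriction and of skipping the residual flags for such blocks (function names are illustrative):

```python
def triangle_or_ciip_allowed(width, height):
    """Per the embodiment above, triangle merge and CIIP may be performed
    only when width x height of the current block is 64 or more."""
    return width * height >= 64

def parse_tu_cbf(is_triangle_or_ciip, read_bit):
    """Skip parsing a tu_cbf_* flag for triangle-merge/CIIP blocks and infer
    '0' instead, so no residual (error signal) information is decoded."""
    if is_triangle_or_ciip:
        return 0
    return read_bit()
```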
  • the CCLM mode is a method of predicting a color difference signal with reference to a luminance signal, thereby increasing prediction efficiency for a color difference signal.
  • for the color difference signal in the CCLM mode, whether to implicitly skip parsing or decoding of information related to an error signal may be derived using at least one of: whether the ISP mode is used, intra prediction mode information of the current CU and adjacent CUs, information on whether the current CU and adjacent CUs use a directional mode, whether the reference pixel belongs to a picture boundary or another tile, information on the intra- and inter-picture encoding modes, a quantization parameter, and information on a CTU boundary. Alternatively, it may be derived independently using one of the above pieces of information.
  • This may be applied only when the size of the block (the width and height) is equal to an arbitrary value, greater than an arbitrary value, or less than an arbitrary value.
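The luma-to-chroma linear model underlying CCLM can be sketched as follows. This is a simplified floating-point least-squares fit over neighboring reconstructed sample pairs, ignoring chroma subsampling and the integer parameter derivation of any particular standard; all names are illustrative.

```python
def cclm_params(neigh_luma, neigh_chroma):
    """Fit chroma ~ a * luma + b over reconstructed neighboring sample pairs."""
    n = len(neigh_luma)
    mean_l = sum(neigh_luma) / n
    mean_c = sum(neigh_chroma) / n
    var = sum((l - mean_l) ** 2 for l in neigh_luma)
    cov = sum((l - mean_l) * (c - mean_c)
              for l, c in zip(neigh_luma, neigh_chroma))
    a = cov / var if var else 0.0
    b = mean_c - a * mean_l
    return a, b

def cclm_predict(luma_block, a, b):
    """Predict each chroma sample from the collocated reconstructed luma."""
    return [[a * s + b for s in row] for row in luma_block]

# Neighbors that follow chroma = 0.5 * luma + 10 exactly.
a, b = cclm_params([20, 40, 60, 80], [20, 30, 40, 50])
pred = cclm_predict([[20, 40]], a, b)
```

A real codec derives the parameters with integer arithmetic from subsampled neighbor pairs; the least-squares fit here only illustrates the linear luma-to-chroma model that makes chroma prediction efficient.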

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a video signal decoding method and an associated apparatus. An image decoding method according to an embodiment of the present invention comprises the steps of: dividing a current coding block into at least two sub-blocks; deriving intra prediction mode information for each of the sub-blocks; generating an intra prediction image for each of the sub-blocks using the intra prediction mode information; selectively applying filtering to the intra prediction image; reconstructing each of the sub-blocks using the intra prediction image to which the filtering has been applied; and selectively deblocking-filtering the boundary between the reconstructed sub-blocks.
PCT/KR2020/003266 2019-03-08 2020-03-09 Procédé et dispositif de codage ou de décodage d'un signal vidéo WO2020184936A1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
KR10-2019-0026857 2019-03-08
KR20190026857 2019-03-08
KR10-2019-0054254 2019-05-09
KR20190054254 2019-05-09
KR1020200028855A KR20200107871A (ko) 2019-03-08 2020-03-09 Method and apparatus for encoding or decoding a video signal
KR10-2020-0028855 2020-03-09

Publications (1)

Publication Number Publication Date
WO2020184936A1 true WO2020184936A1 (fr) 2020-09-17

Family

ID=72427469

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/003266 WO2020184936A1 (fr) 2019-03-08 2020-03-09 Procédé et dispositif de codage ou de décodage d'un signal vidéo

Country Status (1)

Country Link
WO (1) WO2020184936A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116134813A (zh) * 2021-09-15 2023-05-16 腾讯美国有限责任公司 通过使用块矢量传播ibc块的帧内预测模式信息

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160130869A * 2011-06-23 2016-11-14 JVC Kenwood Corporation Image encoding device, image encoding method and image encoding program, and image decoding device, image decoding method and image decoding program
KR20170020928A * 2014-07-07 2017-02-24 HFI Innovation Inc. Methods of intra block copy search and compensation range
KR101844698B1 * 2014-05-23 2018-04-02 Huawei Technologies Co., Ltd. Method and apparatus for pre-prediction filtering for use in block-prediction techniques
KR20180107097A * 2016-02-16 2018-10-01 Samsung Electronics Co., Ltd. Video decoding method and device, and video encoding method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BROSS, BENJAMIN ET AL.: "Versatile Video Coding (Draft 4)", JVET-M1001-v5, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting, Marrakech, 27 February 2019 (2019-02-27), pages 1-274, XP030254441 *


Similar Documents

Publication Publication Date Title
WO2018212578A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2018066959A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2018212577A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2018174593A1 (fr) Procédé de filtrage en boucle selon une norme de classification de pixels adaptative
WO2016200100A1 (fr) Procédé et appareil de codage ou de décodage d'image au moyen d'une syntaxe de signalisation pour une prédiction de poids adaptatif
WO2018056703A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2018106047A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2018008904A2 (fr) Procédé et appareil de traitement de signal vidéo
WO2018044088A1 (fr) Procédé et dispositif de traitement d'un signal vidéo
WO2018044087A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2018236028A1 (fr) Procédé de traitement d'image basé sur un mode d'intra-prédiction et appareil associé
WO2020162737A1 (fr) Procédé de traitement du signal vidéo et dispositif utilisant une transformée secondaire
WO2011021839A2 (fr) Procédé et appareil de codage vidéo et procédé et appareil de décodage vidéo
WO2018044089A1 (fr) Procédé et dispositif pour traiter un signal vidéo
WO2018008905A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2020009419A1 (fr) Procédé et dispositif de codage vidéo utilisant un candidat de fusion
WO2019235891A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2013154366A1 (fr) Procédé de transformation faisant appel à des informations de bloc, et appareil utilisant ce procédé
WO2021071183A1 (fr) Procédé et dispositif permettant de réaliser une transformation inverse sur des coefficients de transformée d'un bloc courant
WO2019078664A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2016159610A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2016048092A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2013109123A1 (fr) Procédé et dispositif de codage vidéo permettant d'améliorer la vitesse de traitement de prédiction intra, et procédé et dispositif de décodage vidéo
WO2016064123A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2020242145A1 (fr) Procédé et appareil de codage vidéo utilisant un ensemble adaptatif de paramètres

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20769601

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20769601

Country of ref document: EP

Kind code of ref document: A1