KR101743665B1 - Method and apparatus for processing a video signal based on intra prediction


Info

Publication number
KR101743665B1
Authority
KR
South Korea
Prior art keywords
current block
distribution type
mode
unit
coefficient
Application number
KR1020160012978A
Other languages
Korean (ko)
Inventor
이영렬
김가람
김남욱
Original Assignee
세종대학교산학협력단
Application filed by 세종대학교산학협력단
Priority to KR1020160012978A
Application granted granted Critical
Publication of KR101743665B1

Classifications

    • H04N19/593: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H04N19/176: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
    • H04N19/18: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a set of transform coefficients

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

According to an aspect of the present invention, there is provided a video signal processing method that obtains transform coefficients of a current block from a bitstream according to a predetermined scan order, determines an intra prediction mode of the current block based on the transform coefficients, and performs intra prediction on the current block using the intra prediction mode and neighboring samples adjacent to the current block.

Description

BACKGROUND OF THE INVENTION 1. Field of the Invention [0001] The present invention relates to an intra prediction based video signal processing method and apparatus.

The present invention relates to a video signal processing method and apparatus.

Recently, demand for high-resolution and high-quality images, such as high definition (HD) and ultra high definition (UHD) images, has been increasing in various applications. As image data becomes higher in resolution and quality, the amount of data increases relative to existing image data. Therefore, when image data is transmitted over a medium such as a wired or wireless broadband line, or stored in an existing storage medium, transmission and storage costs increase. High-efficiency image compression techniques can be used to solve these problems arising as image data becomes high-resolution and high-quality.

Image compression techniques include various techniques such as an inter prediction technique that predicts pixel values included in the current picture from a previous or subsequent picture, an intra prediction technique that predicts pixel values included in the current picture using pixel information within the current picture, and an entropy encoding technique that assigns short codes to values with a high frequency of occurrence and long codes to values with a low frequency of occurrence. Image data can be effectively compressed and transmitted or stored using such image compression techniques.

On the other hand, demand for high-resolution images is increasing, and demand for stereoscopic image content as a new image service is also increasing. Video compression techniques are being discussed to effectively provide high resolution and ultra-high resolution stereoscopic content.

It is an object of the present invention to provide a method and apparatus for fast intra prediction coding in encoding / decoding a video signal.

It is an object of the present invention to provide a method and apparatus for deriving an intra prediction mode based on a transform coefficient in encoding / decoding a video signal.

A method for decoding a video signal according to the present invention includes obtaining transform coefficients of a current block from a bitstream according to a predetermined scan order, determining an intra prediction mode of the current block based on the transform coefficients, and performing intra prediction on the current block using the intra prediction mode and a neighboring sample adjacent to the current block.

In the video signal decoding method according to the present invention, the step of determining the intra prediction mode may include determining a coefficient distribution type of the current block based on the transform coefficients and determining the intra prediction mode of the current block corresponding to the coefficient distribution type.

In the video signal decoding method according to the present invention, the coefficient distribution type may indicate a distribution map of non-zero transform coefficients or an edge direction in the current block.

In the video signal decoding method according to the present invention, the coefficient distribution type may be one of a first distribution type indicating that non-zero transform coefficients are concentrated in the upper-left region of the current block, a second distribution type indicating that non-zero transform coefficients are concentrated in the upper region of the current block, a third distribution type indicating that non-zero transform coefficients are concentrated in the left region of the current block, a fourth distribution type indicating that the transform coefficients are distributed symmetrically with respect to a diagonal line having a predetermined angle, or a fifth distribution type indicating that none of the first to fourth distribution types applies.

In the video signal decoding method according to the present invention, the coefficient distribution type may be determined through comparison between the magnitude of the transform coefficient of the current block and a predetermined threshold value.

In the video signal decoding method according to the present invention, the threshold value may be a fixed constant value preset in the image decoding apparatus or a constant value variably determined based on the transform coefficient of the current block.

In the video signal decoding method according to the present invention, the coefficient distribution type may be determined based on a comparison between the sum of the transform coefficients located in the top row of the current block and the sum of the transform coefficients located in the left column.

A video signal encoding method according to the present invention may determine candidate modes of a current block based on a sum of absolute transformed differences (SATD), re-determine the candidate modes based on a coefficient distribution type of the current block, and determine the intra prediction mode of the current block based on the number of re-determined candidate modes.

In the video signal encoding method according to the present invention, the SATD may be calculated for each of N intra prediction modes defined in the image encoding apparatus.

In the video signal encoding method according to the present invention, three intra prediction modes may be determined as the candidate modes in ascending order of the calculated SATD values.

In the video signal encoding method according to the present invention, when the coefficient distribution type of the current block is a first distribution type, indicating that non-zero transform coefficients are concentrated in the upper-left region of the current block, the candidate mode of the current block may be re-determined as the DC mode.

In the video signal encoding method according to the present invention, when the coefficient distribution type of the current block is a second distribution type, indicating that non-zero transform coefficients are concentrated in the upper region of the current block, the candidate mode corresponding to index 0 may be re-determined as the vertical mode.

In the video signal encoding method according to the present invention, when the number of re-determined candidate modes is one, the re-determined candidate mode may be set as the intra prediction mode of the current block.

In the video signal encoding method according to the present invention, when there are a plurality of re-determined candidate modes, the intra prediction mode of the current block may be determined based on a flag (coded_block_flag) related to the current block.

According to the present invention, it is possible to perform fast intra prediction encoding / decoding based on the transform coefficients.

According to the present invention, the intra prediction mode can be efficiently derived based on the distribution of the transform coefficients.

1 is a block diagram illustrating an image encoding apparatus according to an embodiment of the present invention.
2 is a block diagram illustrating an image decoding apparatus according to an embodiment of the present invention.
FIG. 3 illustrates a method of encoding an intra prediction mode based on a distribution of transform coefficients, according to an embodiment of the present invention.
FIG. 4 illustrates a process of performing intra prediction using a transform coefficient-based intra prediction mode according to an embodiment of the present invention.
FIG. 5 shows kinds of coefficient distribution types according to an embodiment to which the present invention is applied.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the invention is not intended to be limited to the particular embodiments, but includes all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. Like reference numerals are used for like elements in describing each drawing.

The terms first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, a second component may also be referred to as a first component. The term "and/or" includes any combination of a plurality of related listed items or any one of the plurality of related listed items.

It is to be understood that when an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. On the other hand, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that there are no intervening elements present.

The terminology used in this application is used only to describe specific embodiments and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In the present application, terms such as "comprises" or "having" are intended to specify the presence of features, numbers, steps, operations, elements, components, or combinations thereof described in the specification, and do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. Hereinafter, the same reference numerals will be used for the same constituent elements in the drawings, and redundant explanations for the same constituent elements will be omitted.

1 is a block diagram illustrating an image encoding apparatus according to an embodiment of the present invention.

Referring to FIG. 1, the image encoding apparatus 100 may include a picture division unit 110, prediction units 120 and 125, a transform unit 130, a quantization unit 135, a reordering unit 160, an entropy encoding unit 165, an inverse quantization unit 140, an inverse transform unit 145, a filter unit 150, and a memory 155.

Each of the components shown in FIG. 1 is shown independently to represent different characteristic functions in the image encoding apparatus, and this does not mean that each component is composed of separate hardware or a single software unit. That is, the components are listed as separate components for convenience of explanation; at least two of the components may be combined into one component, or one component may be divided into a plurality of components, each performing a function. Embodiments in which the components are integrated and embodiments in which they are separated are also included within the scope of the present invention, as long as they do not depart from the essence of the present invention.

In addition, some components may not be essential components that perform essential functions of the present invention, but optional components that merely improve performance. The present invention can be implemented with only the components essential for realizing the essence of the present invention, excluding the components used merely for performance improvement, and a structure including only the essential components, excluding the optional components used for performance improvement, is also included in the scope of the present invention.

The picture division unit 110 may divide an input picture into at least one processing unit. Here, the processing unit may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU). The picture division unit 110 may divide one picture into a plurality of combinations of coding units, prediction units, and transform units, and may encode the picture by selecting one combination of a coding unit, a prediction unit, and a transform unit.

For example, one picture may be divided into a plurality of coding units. A recursive tree structure, such as a quad tree structure, may be used to divide a picture into coding units. A coding unit that is divided into other coding units, with one picture or a largest coding unit as a root, may be divided with as many child nodes as the number of divided coding units. A coding unit that is no longer divided according to certain constraints becomes a leaf node. That is, when it is assumed that only square division is possible for one coding unit, one coding unit may be divided into up to four other coding units.

Hereinafter, in the embodiment of the present invention, a coding unit may be used as a unit for performing coding, or may be used as a unit for performing decoding.

A prediction unit may be obtained by dividing one coding unit into at least one square or rectangle of the same size, or may be divided such that one of the prediction units divided within one coding unit has a shape and/or size different from another prediction unit.

When a prediction unit on which intra prediction is performed is generated based on a coding unit and the coding unit is not a minimum coding unit, intra prediction may be performed without dividing the coding unit into a plurality of NxN prediction units.

The prediction units 120 and 125 may include an inter prediction unit 120 that performs inter prediction and an intra prediction unit 125 that performs intra prediction. Whether to use inter prediction or intra prediction for a prediction unit may be determined, and concrete information (e.g., an intra prediction mode, a motion vector, a reference picture, etc.) according to each prediction method may be determined. Here, the processing unit in which prediction is performed may be different from the processing unit in which the prediction method and its details are determined. For example, the prediction method and prediction mode may be determined in a prediction unit, and the prediction itself may be performed in a transform unit. A residual value (residual block) between the generated prediction block and the original block may be input to the transform unit 130. In addition, prediction mode information, motion vector information, and the like used for prediction may be encoded by the entropy encoding unit 165 together with the residual value and transmitted to the decoder. When a particular encoding mode is used, it is also possible to encode the original block as it is and transmit it to the decoder without generating a prediction block through the prediction units 120 and 125.

The inter prediction unit 120 may predict a prediction unit based on information of at least one of a previous picture or a subsequent picture of the current picture, and in some cases, may predict a prediction unit based on information of a partially encoded region in the current picture. The inter prediction unit 120 may include a reference picture interpolation unit, a motion prediction unit, and a motion compensation unit.

The reference picture interpolation unit may receive reference picture information from the memory 155 and generate pixel information of less than an integer pixel from the reference picture. In the case of luminance pixels, a DCT-based interpolation filter with varying filter coefficients may be used to generate pixel information of less than an integer pixel in units of 1/4 pixel. In the case of chrominance signals, a DCT-based 4-tap interpolation filter with varying filter coefficients may be used to generate pixel information of less than an integer pixel in units of 1/8 pixel.

The motion prediction unit may perform motion prediction based on the reference picture interpolated by the reference picture interpolation unit. Various methods, such as a full search-based block matching algorithm (FBMA), a three-step search (TSS), and a new three-step search algorithm (NTS), can be used to calculate a motion vector. The motion vector may have a motion vector value in units of 1/2 or 1/4 pixel based on the interpolated pixels. The motion prediction unit can predict the current prediction unit using different motion prediction methods. Various methods, such as a skip method, a merge method, an AMVP (Advanced Motion Vector Prediction) method, and an intra block copy method, can be used as the motion prediction method.

The intra prediction unit 125 may generate a prediction unit based on reference pixel information around the current block, which is pixel information in the current picture. When a neighboring block of the current prediction unit is a block on which inter prediction has been performed, so that the reference pixels are pixels on which inter prediction has been performed, the reference pixels included in the block on which inter prediction has been performed may be replaced with reference pixel information of a neighboring block on which intra prediction has been performed. That is, when a reference pixel is not available, the unavailable reference pixel information may be replaced with at least one of the available reference pixels.

In intra prediction, the prediction modes may include directional prediction modes, in which reference pixel information is used according to a prediction direction, and non-directional modes, in which directional information is not used for prediction. The number of directional prediction modes may be 33, as defined in the HEVC standard, or more, and may be extended to a number within the range of, for example, 60 to 70. The mode for predicting luminance information may be different from the mode for predicting chrominance information, and the intra prediction mode information used for predicting the luminance information or the predicted luminance signal information may be utilized to predict the chrominance information.

When intra prediction is performed, if the size of the prediction unit is the same as the size of the transform unit, intra prediction may be performed on the prediction unit based on the pixels on the left side, the pixel on the upper-left side, and the pixels on the upper side of the prediction unit. However, when intra prediction is performed and the size of the prediction unit differs from the size of the transform unit, intra prediction may be performed using reference pixels based on the transform unit. Also, intra prediction using NxN division may be used only for the minimum coding unit.

In the intra prediction method, a prediction block may be generated after applying an adaptive intra smoothing (AIS) filter to the reference pixels according to the prediction mode. The type of AIS filter applied to the reference pixels may vary. To perform the intra prediction method, the intra prediction mode of the current prediction unit may be predicted from the intra prediction modes of prediction units existing around the current prediction unit. When the prediction mode of the current prediction unit is predicted using mode information predicted from a neighboring prediction unit, if the intra prediction mode of the current prediction unit is the same as the intra prediction mode of the neighboring prediction unit, information indicating that the two prediction modes are the same may be transmitted using predetermined flag information; if the prediction mode of the current prediction unit is different from that of the neighboring prediction unit, the prediction mode information of the current block may be encoded by performing entropy encoding.

In addition, a residual block including residual information, which is the difference between the prediction unit generated by the prediction units 120 and 125 and the original block of the prediction unit, may be generated. The generated residual block may be input to the transform unit 130.

The transform unit 130 may transform the residual block, which includes the residual information between the original block and the prediction unit generated through the prediction units 120 and 125, using a transform method such as a discrete cosine transform (DCT), a discrete sine transform (DST), or a KLT. Whether to apply the DCT, the DST, or the KLT to transform the residual block may be determined based on the intra prediction mode information of the prediction unit used to generate the residual block.

The quantization unit 135 may quantize the values transformed into the frequency domain by the transform unit 130. The quantization coefficients may vary depending on the block or the importance of the image. The values calculated by the quantization unit 135 may be provided to the inverse quantization unit 140 and the reordering unit 160.

The reordering unit 160 can reorder the coefficient values with respect to the quantized residual values.

The reordering unit 160 may change the coefficients in the form of a two-dimensional block into the form of a one-dimensional vector through a coefficient scanning method. For example, the reordering unit 160 may scan from the DC coefficient to coefficients in the high-frequency region using a zig-zag scan method and change them into the form of a one-dimensional vector. Instead of the zig-zag scan, a vertical scan that scans the two-dimensional block-type coefficients in the column direction or a horizontal scan that scans the two-dimensional block-type coefficients in the row direction may be used depending on the size of the transform unit and the intra prediction mode. That is, which scanning method among the zig-zag scan, the vertical scan, and the horizontal scan is used may be determined according to the size of the transform unit and the intra prediction mode.
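
For illustration, the sketch below flattens a two-dimensional block of quantized coefficients into a one-dimensional vector using a zig-zag, vertical, or horizontal scan. It is a minimal example of the scans named above, not the encoder's actual reordering routine, and the function names are assumptions made for this sketch.

```python
import numpy as np

def zigzag_order(size):
    """(row, col) visiting order of a zig-zag scan over a size x size block."""
    order = []
    for s in range(2 * size - 1):                      # anti-diagonals from the DC position
        diag = [(s - c, c) for c in range(size) if 0 <= s - c < size]
        order.extend(diag if s % 2 else diag[::-1])    # alternate the scan direction
    return order

def scan_coefficients(block, scan="zigzag"):
    """Flatten a square 2-D coefficient block into a 1-D vector."""
    if scan == "vertical":                             # scan in the column direction
        return block.T.flatten()
    if scan == "horizontal":                           # scan in the row direction
        return block.flatten()
    return np.array([block[r, c] for r, c in zigzag_order(block.shape[0])])
```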

The entropy encoding unit 165 may perform entropy encoding based on the values calculated by the reordering unit 160. For entropy encoding, various encoding methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC) may be used.

The entropy encoding unit 165 may receive, from the reordering unit 160 and the prediction units 120 and 125, various information such as residual coefficient information and block type information of the coding unit, prediction mode information, division unit information, prediction unit information, transmission unit information, motion vector information, reference frame information, block interpolation information, and filtering information, and may encode this information.

The entropy encoding unit 165 can entropy-encode the coefficient values of the coding unit input from the reordering unit 160.

The inverse quantization unit 140 and the inverse transform unit 145 inverse-quantize the values quantized by the quantization unit 135 and inverse-transform the values transformed by the transform unit 130. The residual values generated by the inverse quantization unit 140 and the inverse transform unit 145 may be combined with the prediction unit predicted through the motion estimation unit, the motion compensation unit, and the intra prediction unit included in the prediction units 120 and 125 to generate a reconstructed block.

The filter unit 150 may include at least one of a deblocking filter, an offset correction unit, and an adaptive loop filter (ALF).

The deblocking filter can remove block distortion caused by boundaries between blocks in the reconstructed picture. To determine whether to perform deblocking, whether to apply the deblocking filter to the current block may be determined based on the pixels included in several columns or rows of the block. When the deblocking filter is applied to a block, a strong filter or a weak filter may be applied according to the required deblocking filtering strength. Also, in applying the deblocking filter, horizontal filtering and vertical filtering may be processed in parallel when vertical filtering and horizontal filtering are performed.

The offset correction unit may correct the offset between the deblocked image and the original image in units of pixels. In order to perform offset correction for a specific picture, a method of dividing the pixels included in the image into a predetermined number of regions, determining a region to which an offset is to be applied, and applying the offset to that region, or a method of applying an offset in consideration of the edge information of each pixel, may be used.

Adaptive loop filtering (ALF) may be performed based on a comparison between the filtered reconstructed image and the original image. After dividing the pixels included in the image into predetermined groups, one filter to be applied to each group may be determined, and filtering may be performed differently for each group. Information on whether to apply the ALF may be transmitted for each coding unit (CU), and the shape and filter coefficients of the ALF filter to be applied may vary for each block. Also, an ALF filter of the same form (fixed form) may be applied regardless of the characteristics of the target block.

The memory 155 may store the reconstructed block or picture calculated through the filter unit 150, and the stored reconstructed block or picture may be provided to the prediction units 120 and 125 when inter prediction is performed.

2 is a block diagram illustrating an image decoding apparatus according to an embodiment of the present invention.

Referring to FIG. 2, the image decoder 200 may include an entropy decoding unit 210, a reordering unit 215, an inverse quantization unit 220, an inverse transform unit 225, prediction units 230 and 235, a filter unit 240, and a memory 245.

When an image bitstream is input from the image encoder, the input bitstream may be decoded in a procedure opposite to that of the image encoder.

The entropy decoding unit 210 can perform entropy decoding in a procedure opposite to that in which entropy encoding is performed in the entropy encoding unit of the image encoder. For example, various methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC) may be applied in accordance with the method performed by the image encoder.

The entropy decoding unit 210 may decode information related to intra prediction and inter prediction performed in the encoder.

The reordering unit 215 may reorder the bitstream entropy-decoded by the entropy decoding unit 210 based on the reordering method used in the encoder. The coefficients expressed in the form of a one-dimensional vector may be reconstructed and rearranged into coefficients in the form of a two-dimensional block. The reordering unit 215 may receive information related to the coefficient scanning performed by the encoder and perform reordering through a reverse scanning method based on the scanning order performed by the encoder.

The inverse quantization unit 220 can perform inverse quantization based on the quantization parameters provided by the encoder and the coefficient values of the re-arranged blocks.

The inverse transform unit 225 may perform an inverse DCT, an inverse DST, or an inverse KLT on the result of the quantization performed by the image encoder, corresponding to the DCT, DST, or KLT performed by the transform unit. The inverse transform may be performed based on the transmission unit determined by the image encoder. The inverse transform unit 225 of the image decoder may selectively perform a transform technique (e.g., DCT, DST, KLT) according to a plurality of pieces of information such as the prediction method, the size of the current block, and the prediction direction.

The prediction units 230 and 235 can generate a prediction block based on the prediction block generation related information provided by the entropy decoding unit 210 and the previously decoded block or picture information provided from the memory 245.

As described above, when intra prediction is performed in the same manner as in the image encoder, if the size of the prediction unit is the same as the size of the transform unit, intra prediction is performed on the prediction unit based on the pixels on the left side, the pixel on the upper-left side, and the pixels on the upper side of the prediction unit. However, if the size of the prediction unit differs from the size of the transform unit when intra prediction is performed, intra prediction is performed using reference pixels based on the transform unit. Also, intra prediction using NxN division may be used only for the minimum coding unit.

The prediction units 230 and 235 may include a prediction unit determination unit, an inter prediction unit, and an intra prediction unit. The prediction unit determination unit may receive various information, such as prediction unit information input from the entropy decoding unit 210, prediction mode information of the intra prediction method, and motion prediction related information of the inter prediction method, identify the prediction unit in the current coding unit, and determine whether the prediction unit performs inter prediction or intra prediction. The inter prediction unit 230 may perform inter prediction on the current prediction unit based on information included in at least one of the previous picture or the subsequent picture of the current picture including the current prediction unit, using the information necessary for inter prediction of the current prediction unit provided by the image encoder. Alternatively, inter prediction may be performed based on information of a partial region previously reconstructed in the current picture including the current prediction unit.

In order to perform inter prediction, it may be determined, based on the coding unit, whether the motion prediction method of the prediction unit included in the corresponding coding unit is a skip mode, a merge mode, an AMVP mode, or an intra block copy mode.

The intra prediction unit 235 can generate a prediction block based on pixel information in the current picture. When the prediction unit is a prediction unit on which intra prediction has been performed, intra prediction may be performed based on the intra prediction mode information of the prediction unit provided by the image encoder. The intra prediction unit 235 may include an adaptive intra smoothing (AIS) filter, a reference pixel interpolation unit, and a DC filter. The AIS filter performs filtering on the reference pixels of the current block, and whether to apply the filter may be determined according to the prediction mode of the current prediction unit. AIS filtering may be performed on the reference pixels of the current block using the prediction mode of the prediction unit and the AIS filter information provided by the image encoder. When the prediction mode of the current block is a mode in which AIS filtering is not performed, the AIS filter may not be applied.

When the prediction mode of the prediction unit is a prediction mode that performs intra prediction based on pixel values obtained by interpolating the reference pixels, the reference pixel interpolation unit may interpolate the reference pixels to generate reference pixels in units of less than an integer pixel. When the prediction mode of the current prediction unit is a prediction mode that generates a prediction block without interpolating the reference pixels, the reference pixels may not be interpolated. The DC filter may generate a prediction block through filtering when the prediction mode of the current block is the DC mode.

The restored block or picture may be provided to the filter unit 240. The filter unit 240 may include a deblocking filter, an offset correction unit, and an ALF.

Information on whether a deblocking filter has been applied to the corresponding block or picture, and, if the deblocking filter has been applied, information on whether a strong filter or a weak filter was applied, may be provided by the image encoder. The deblocking filter of the image decoder may receive the deblocking filter related information provided by the image encoder, and the image decoder may perform deblocking filtering on the corresponding block.

The offset correction unit may perform offset correction on the reconstructed image based on the type of offset correction applied to the image, offset information, and the like during encoding.

The ALF can be applied to an encoding unit on the basis of ALF application information and ALF coefficient information provided from an encoder. Such ALF information may be provided in a specific parameter set.

The memory 245 may store the reconstructed picture or block to be used as a reference picture or a reference block, and may also provide the reconstructed picture to the output unit.

As described above, in the embodiments of the present invention, the term coding unit is used as an encoding unit for convenience of explanation, but it may also be a unit that performs decoding as well as encoding.

FIG. 3 illustrates a method of encoding an intra prediction mode based on a distribution of transform coefficients, according to an embodiment of the present invention.

Hereinafter, for convenience of description, the present embodiment will be described based on the 35 intra prediction modes defined in HEVC. However, even when more than 35 intra prediction modes (i.e., extended intra prediction modes) are used, the present embodiment can be applied in the same or a similar manner.

Referring to FIG. 3, a candidate mode of a current block may be determined based on a sum of absolute transformed difference (SATD) (S300).

The SATD is calculated from the transformed residual block, and the residual block means the difference between the original block and the prediction block. More specifically, a prediction block may be generated for each of the 35 intra prediction modes predefined in the image encoding apparatus, and a residual block may be generated for each of the 35 intra prediction modes by subtracting the prediction block from the original block. The SATD can be calculated by transforming the generated residual block. Three intra prediction modes may be selected in ascending order of the calculated SATD values and determined as the candidate modes of the current block. Indexes 0, 1, and 2 may be assigned to the three intra prediction modes in order of increasing SATD value.
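
To make the candidate selection concrete, below is a minimal Python sketch. It assumes a hypothetical predict_block(original, mode) helper that returns the prediction block for a given intra prediction mode, and it uses a Hadamard transform as a typical SATD transform; neither of these specifics is stated in this document.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def satd(residual):
    """Sum of absolute transformed differences of a square residual block."""
    h = hadamard(residual.shape[0])
    return int(np.abs(h @ residual @ h.T).sum())

def select_candidate_modes(original, predict_block, num_modes=35, num_candidates=3):
    """Return the modes with the smallest SATD; index 0 holds the smallest value."""
    costs = []
    for mode in range(num_modes):
        residual = original.astype(np.int64) - predict_block(original, mode)
        costs.append((satd(residual), mode))
    costs.sort()                                       # ascending SATD
    return [mode for _, mode in costs[:num_candidates]]
```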

Referring to FIG. 3, the candidate mode may be re-determined based on the coefficient distribution type of the current block (S310).

The coefficient distribution type of the present invention may indicate the position / bias of the non-zero transform coefficients, the symmetry of the transform coefficients, or the edge direction in the current block.

The coefficient distribution type may include a first distribution type indicating that non-zero transform coefficients are concentrated in the upper-left region of the current block, a second distribution type indicating that non-zero transform coefficients are concentrated in the upper region of the current block, a third distribution type indicating that non-zero transform coefficients are concentrated in the left region of the current block, a fourth distribution type indicating that the transform coefficients are distributed symmetrically with respect to a diagonal line having a predetermined angle, and a fifth distribution type indicating that none of the first to fourth distribution types applies. At least one of the first to fifth distribution types may be used as a candidate for the coefficient distribution type of the current block, which will be described in detail with reference to FIG. 5.

If the coefficient distribution type of the current block is the first distribution type, the candidate mode of the current block can be re-determined as the DC mode. For example, when the candidate mode corresponding to index 0 among the candidate modes determined in S300 is not the DC mode, it can be replaced with the DC mode. The candidate modes corresponding to indexes 1 and 2 among the candidate modes determined in S300 may be removed from the candidate modes. When the coefficient distribution type of the current block is the first distribution type, the number of candidate modes can be reduced to one through the above-described re-determination process.

If the coefficient distribution type of the current block is the fourth distribution type, the candidate mode of the current block can be re-determined as the diagonal mode. For example, if the candidate mode corresponding to index 0 among the candidate modes determined in S300 is not the diagonal mode, it may be replaced with the diagonal mode. The candidate modes corresponding to indexes 1 and 2 among the candidate modes determined in S300 may be removed from the candidate modes. When the coefficient distribution type of the current block is the fourth distribution type, the number of candidate modes can be reduced to one through the above-described re-determination process.

When the coefficient distribution type of the current block is the second distribution type, the candidate mode corresponding to index 0 among the candidate modes determined in S300 is re-determined as the vertical mode, index 1 is assigned to the candidate mode that previously corresponded to index 0, and index 2 is assigned to the candidate mode that previously corresponded to index 1. The candidate mode that previously corresponded to index 2 can be removed from the candidate modes. When the coefficient distribution type of the current block is the second distribution type, the number of candidate modes can be maintained at three through the above-described re-determination process.

When the coefficient distribution type of the current block is the third distribution type, the candidate mode corresponding to index 0 among the candidate modes determined in S300 is re-determined as the horizontal mode, index 1 is assigned to the candidate mode that previously corresponded to index 0, and index 2 is assigned to the candidate mode that previously corresponded to index 1. The candidate mode that previously corresponded to index 2 can be removed from the candidate modes. When the coefficient distribution type of the current block is the third distribution type, the number of candidate modes can be maintained at three through the above-described re-determination process.

If the coefficient distribution type of the current block is the fifth distribution type, it can be checked whether the planar mode exists among the candidate modes determined in S300. As a result, when the planar mode exists, the planar mode is assigned to the position of index 0, and the candidate mode that was at the position of index 0 can be assigned to the position of index 1. At this time, the candidate mode corresponding to index 2 may be removed. When the coefficient distribution type of the current block is the fifth distribution type, the number of candidate modes can be reduced to two through the above-described re-determination process.
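
The re-determination rules of step S310 can be summarized in a small sketch. It assumes the distribution types are encoded as integers 1 to 5 and uses HEVC mode numbers (planar 0, DC 1, horizontal 10, vertical 26), with mode 34 taken as the diagonal mode purely for illustration.

```python
PLANAR, DC, HOR, VER, DIAG = 0, 1, 10, 26, 34           # HEVC mode numbers; DIAG is illustrative

def redetermine_candidates(candidates, dist_type):
    """candidates: three modes ordered by SATD (index 0 = smallest); dist_type: 1..5."""
    if dist_type == 1:                                   # first type: single DC candidate
        return [DC]
    if dist_type == 4:                                   # fourth type: single diagonal candidate
        return [DIAG]
    if dist_type == 2:                                   # second type: vertical mode at index 0
        return [VER, candidates[0], candidates[1]]       # old index-2 candidate is dropped
    if dist_type == 3:                                   # third type: horizontal mode at index 0
        return [HOR, candidates[0], candidates[1]]       # old index-2 candidate is dropped
    if PLANAR in candidates:                             # fifth type: promote planar if present
        others = [m for m in candidates if m != PLANAR]
        return [PLANAR, others[0]]                       # two candidates remain
    return candidates
```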

Referring to FIG. 3, it can be determined whether the number of candidate modes re-determined in S310 is one (S320).

If the number of redetermined candidate modes is 1, the redetermined candidate mode can be set to the intra prediction mode of the current block (S330).

If the coefficient distribution type of the current block is the first distribution type (or the fourth distribution type), the current block has one candidate mode, and the DC mode (or diagonal mode) can be set to the intra prediction mode.

On the other hand, if the number of re-determined candidate modes is not one, it can be checked whether there is a candidate mode for which the flag (coded_block_flag) of the current block is 0 among the plurality of candidate modes (S340).

Here, coded_block_flag is a syntax element indicating whether or not a non-zero transform coefficient exists in the current block. That is, if the value of coded_block_flag is 1, at least one non-zero transform coefficient exists in the current block, and if it is 0, the current block includes only zero transform coefficients.

The transform coefficients of the current block are calculated for each of the plurality of candidate modes, and whether the value of coded_block_flag is 0 or 1 can be checked based on the calculated transform coefficients.

If it is determined in step S340 that there is a candidate mode in which the value of coded_block_flag is 0, the candidate mode may be set to the intra prediction mode of the current block in step S350.

On the other hand, when there is no candidate mode in which the value of coded_block_flag is 0, the intra prediction mode of the current block can be determined from the plurality of candidate modes re-determined in S310 based on rate-distortion optimization (RDO).

That is, the most efficient candidate mode among the plurality of candidate modes re-determined in S310 can be set as the intra prediction mode of the current block, and the index of that candidate mode can be encoded.
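
A compact sketch of the decision flow of steps S320 to S350, with the RDO fallback, is shown below. It assumes hypothetical helpers cbf(block, mode), returning the coded_block_flag that coding the block with that mode would produce, and rd_cost(block, mode), returning its rate-distortion cost; these names are not taken from the document.

```python
def decide_intra_mode(block, candidates, cbf, rd_cost):
    """Pick the final intra prediction mode from the re-determined candidate modes."""
    if len(candidates) == 1:                             # S320 / S330: only one candidate left
        return candidates[0]
    for mode in candidates:                              # S340 / S350: find coded_block_flag == 0
        if cbf(block, mode) == 0:                        # all transform coefficients would be zero
            return mode
    return min(candidates, key=lambda m: rd_cost(block, m))   # otherwise fall back to RDO
```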

FIG. 4 illustrates a process of performing intra prediction using a transform coefficient-based intra prediction mode according to an embodiment of the present invention.

Referring to FIG. 4, transform coefficients of a current block may be obtained from a bitstream (S400).

Specifically, the transform coefficients of a two-dimensional array can be obtained according to a predetermined scan order. Examples of scan orders are the z-scan, diagonal scan, vertical scan, and horizontal scan. Here, the scan order can be determined based on encoded scan type information. The scan type information may be encoded for each of the luminance component (luma component) and the chrominance component (chroma component) of the current block. Alternatively, only the scan type information for the luminance component may be encoded, and the scan type information for the chrominance component may be derived based on the scan type information for the luminance component. Alternatively, the scan type information may be derived based on table information predefined in the image decoding apparatus according to the size of the block (e.g., CU, PU, or TU). Here, the predefined table information can define the mapping relationship between the size of the block and the scan type information.
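
As an illustration of step S400, the sketch below places coefficients parsed in a scan order back into a two-dimensional block. Only an up-right diagonal scan is shown as an example; the actual order would follow the signalled scan type, and the helper names are assumptions of this sketch.

```python
import numpy as np

def diagonal_scan_positions(size):
    """(row, col) visiting order of an up-right diagonal scan over a size x size block."""
    positions = []
    for s in range(2 * size - 1):                        # anti-diagonals, DC position first
        for col in range(size):
            row = s - col
            if 0 <= row < size:
                positions.append((row, col))
    return positions

def coefficients_to_block(coeff_list, size):
    """Place coefficients parsed in scan order back into a 2-D block."""
    block = np.zeros((size, size), dtype=np.int32)
    for value, (row, col) in zip(coeff_list, diagonal_scan_positions(size)):
        block[row, col] = value
    return block
```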

Referring to FIG. 4, the coefficient distribution type of the current block can be determined based on the transform coefficients obtained in S400 (S410).

The coefficient distribution type of the present invention may indicate the position / bias of the non-zero transform coefficients, the symmetry of the transform coefficients, or the edge direction in the current block.

1. Types of coefficient distribution type

The coefficient distribution type may include a first distribution type indicating that non-zero transform coefficients are concentrated in the upper-left region of the current block, a second distribution type indicating that non-zero transform coefficients are concentrated in the upper region of the current block, a third distribution type indicating that non-zero transform coefficients are concentrated in the left region of the current block, a fourth distribution type indicating that the transform coefficients are distributed symmetrically with respect to a diagonal line having a predetermined angle, and a fifth distribution type indicating that none of the first to fourth distribution types applies. At least one of the first to fifth distribution types may be used as a candidate for the coefficient distribution type of the current block, which will be described in detail with reference to FIG. 5.

2. Method of determining coefficient distribution type

The coefficient distribution type may be determined through a comparison between the magnitude (or absolute value) of the transform coefficients of the current block and a predetermined threshold value. For example, a region having transform coefficients larger than the threshold value may mean that the distribution of non-zero transform coefficients is high, and conversely, a region having transform coefficients smaller than the threshold value may mean that the distribution of non-zero transform coefficients is low. The threshold value may be a fixed constant preset in the image decoding apparatus, or may be a constant value that is variably determined based on the transform coefficients of the current block. For example, the threshold value may be determined as the average value of the transform coefficients belonging to the current block. Through this comparison process, it can be checked whether the non-zero transform coefficients have a high distribution in the upper-left region of the current block, or a high distribution in the upper or left region.

The comparison with the threshold value may be performed in units of samples (or pixels) of the current block. For example, all or some of the samples belonging to the current block can be compared with the threshold value. Alternatively, the threshold value may be compared with a sample at a specific position in the current block. Here, the sample at the specific position may include at least one of four samples located at each corner of the current block. Alternatively, the sample at the specific position may include at least one of a sample located in the upper left region of the current block, a sample located in the lower left region, a sample located in the upper right region, or a sample located in the lower right region.

On the other hand, the comparison with the threshold value may be performed in units of sub-blocks belonging to the current block. Here, a sub-block may be defined as a square or rectangular block of a predetermined size (for example, NxN or NxM), i.e., a group of samples composed of one or more samples. For comparison with the threshold value, only samples at specific positions in the sub-block (e.g., a corner sample, a center sample, etc.) may be selectively used, or the average value of the transform coefficients of the sub-block may be used. However, the present invention is not limited to this, and it goes without saying that the maximum value, the minimum value, or the mode value of the transform coefficients of the sub-block may be used.
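
As one way to picture the threshold comparison, the sketch below checks per quadrant, one possible sub-block partition chosen here only for illustration, whether any transform coefficient exceeds a threshold that defaults to the block's mean absolute coefficient (one of the variably determined thresholds mentioned above). The region names are assumptions of this sketch.

```python
import numpy as np

def active_regions(coeffs, threshold=None):
    """Report which quadrants of the block contain coefficients above the threshold."""
    if threshold is None:
        threshold = np.abs(coeffs).mean()                # variably determined threshold
    h, w = coeffs.shape
    quadrants = {
        "top_left": coeffs[: h // 2, : w // 2],
        "top_right": coeffs[: h // 2, w // 2:],
        "bottom_left": coeffs[h // 2:, : w // 2],
        "bottom_right": coeffs[h // 2:, w // 2:],
    }
    return {name: bool((np.abs(q) > threshold).any()) for name, q in quadrants.items()}
```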

Meanwhile, as will be described later with reference to FIG. 5, the coefficient distribution type may be determined through a comparison between the sum of the transform coefficients located in the top row of the current block and the sum of the transform coefficients located in the left column.

Referring to FIG. 4, the intra prediction mode of the current block may be determined based on the coefficient distribution type determined in S410 (S420).

For example, when the coefficient distribution type of the current block is determined to be the first distribution type, this means a flat image with no edge in the block, and the intra prediction mode of the current block can be determined as the DC mode.

When the coefficient distribution type of the current block is determined to be the second distribution type, this means that the edge in the vertical direction exists in the block, and the intra prediction mode of the current block can be determined as a mode having the vertical direction. When the value of the vertical mode is assumed to be N, the mode having the vertical direction may include at least one of the intra prediction modes having a value within the range of (N-n) to (N + n).

If the coefficient distribution type of the current block is determined to be the third distribution type, this means that the edge in the horizontal direction exists in the block, and the intra prediction mode of the current block can be determined as a mode having the horizontal direction. If the value of the horizontal mode is assumed to be M, the mode having the horizontal direction may include at least one of the intra prediction modes having a value within the range of (M-n) to (M + n).

If the coefficient distribution type of the current block is determined to be the fourth distribution type, this means that an edge in the diagonal direction exists in the block, and the intra prediction mode of the current block can be determined as a mode having the diagonal direction (for example, the diagonal mode). When the value of the diagonal mode is assumed to be X, the modes having the diagonal direction may include at least one of the intra prediction modes having a value within the range of X to (X+m) or within the range of (X-m) to X.

Where N, M and X are arbitrary constant values equal to or greater than 2, and n and m can mean any constant value equal to or greater than zero. Alternatively, n and m may denote an offset to specify a range of modes belonging to a similar group.

If the coefficient distribution type of the current block is determined to be the fifth distribution type, the intra prediction mode of the current block may be determined to be a pre-defined intra prediction mode (for example, a planar mode or a DC mode) in the image decoding apparatus.
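
The mapping from coefficient distribution type to candidate intra prediction modes described above could be sketched as follows, assuming HEVC mode numbers (planar 0, DC 1, horizontal 10, vertical 26, and mode 34 taken as the diagonal mode) and an illustrative offset n; the fifth type falls back to the planar mode here purely as an example of a pre-defined mode.

```python
def modes_for_distribution_type(dist_type, n=1):
    """Candidate intra prediction modes for a coefficient distribution type (1..5)."""
    PLANAR, DC, HOR, VER, DIAG = 0, 1, 10, 26, 34
    if dist_type == 1:                                   # flat block, no edge
        return [DC]
    if dist_type == 2:                                   # vertical edge
        return list(range(VER - n, VER + n + 1))         # modes around the vertical mode
    if dist_type == 3:                                   # horizontal edge
        return list(range(HOR - n, HOR + n + 1))         # modes around the horizontal mode
    if dist_type == 4:                                   # 45-degree diagonal edge
        return list(range(DIAG - n, DIAG + 1))           # modes around the diagonal mode
    return [PLANAR]                                      # fifth type: a pre-defined mode
```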

The number of intra prediction modes determined according to the coefficient distribution type may be one, two, three, or more. When there are a plurality of intra prediction modes corresponding to the coefficient distribution type, these intra prediction modes can be regarded as candidate modes for the intra prediction mode of the current block, and one of these candidate modes can be determined as the intra prediction mode of the current block. For this purpose, index information specifying one of the candidate modes may be signaled.

The number of intra prediction modes corresponding to the coefficient distribution type of the current block may vary depending on the coefficient distribution type. For example, when the coefficient distribution type of the current block is the first distribution type, the number of intra prediction modes corresponding to the first distribution type may be one. On the other hand, when the coefficient distribution type of the current block is the second or third distribution type, the number of corresponding intra prediction modes may be three. However, this is only an example of the number of intra prediction modes corresponding to the coefficient distribution type and does not limit the number itself. Therefore, even in the case of the first distribution type, the number of intra prediction modes corresponding to the first distribution type can be two or more, and similarly, even when the coefficient distribution type of the current block is the second or third distribution type, the number of intra prediction modes may be two, or four or more.

Referring to FIG. 4, intra prediction may be performed on the current block based on the intra prediction mode determined in S420 (S430).

Specifically, a prediction sample of the current block can be obtained using the intra prediction mode of the current block and neighboring samples adjacent to the current block. The neighboring samples refer to reconstruction samples of at least one neighboring block adjacent to the left, top, top-right, bottom-left, or top-left of the current block, and the positions of the neighboring samples may be specified by the intra prediction mode of the current block.

A residual sample of the current block may be obtained by selectively performing at least one of inverse quantization and inverse transform on the transform coefficients obtained in S400. A transform type such as DCT, DST, or KLT can be used for the inverse transform, and any one of these transform types may be selectively used. The residual sample may be added to the prediction sample to derive a reconstruction sample of the current block.
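
For step S430, the following sketch shows only the simplest case, the DC mode: the prediction block is filled with the average of the top and left neighboring samples, and the residual obtained from the (inverse-quantized, inverse-transformed) coefficients is added to obtain the reconstruction. inverse_transform is a hypothetical stand-in for the inverse DCT/DST/KLT, and an 8-bit sample range is assumed.

```python
import numpy as np

def intra_dc_reconstruct(top_samples, left_samples, coeffs, inverse_transform):
    """Reconstruct a block predicted in DC mode from its neighbors and transform coefficients."""
    size = len(top_samples)
    dc = int(round((np.sum(top_samples) + np.sum(left_samples)) / (2 * size)))
    prediction = np.full((size, size), dc, dtype=np.int32)
    residual = inverse_transform(coeffs)                 # residual samples of the block
    return np.clip(prediction + residual, 0, 255)        # reconstruction samples (8-bit range)
```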

FIG. 5 shows kinds of coefficient distribution types according to an embodiment to which the present invention is applied.

The distribution of the transform coefficients varies according to the edge direction in the block, and the coefficient distribution type of the current block can be defined according to the distribution of the transform coefficients in the block. In FIG. 5, the dark regions indicate that the value of the transform coefficient is not 0, and the bright regions indicate that the value of the transform coefficient is 0.

As shown in FIG. 5 (a), when the current block is a flat image without edges, the non-zero transform coefficients among the transform coefficients of the current block are mainly distributed in the upper-left region of the current block. A coefficient distribution type having a distribution as shown in FIG. 5 (a) is defined as the first distribution type. FIG. 5 (a) shows that only the upper-left sample of the current block has a non-zero transform coefficient, but this is only an example. Even if at least one of the right sample, the bottom sample, or the bottom-right sample adjacent to the upper-left sample of the current block has a non-zero transform coefficient, it can be regarded as the first distribution type.

Referring to FIG. 5 (b), when the current block is an image having edges in the vertical direction, non-zero transform coefficients of the current block are mainly distributed in the upper region of the current block. A coefficient distribution type having the distribution as shown in FIG. 5 (b) is defined as a second distribution type.

Whether the coefficient distribution type of the current block is the second distribution type can be determined by comparing the sum of the transform coefficients located in the upper row of the current block with the sum of the transform coefficients located in the left column. Specifically, when the sum of the transform coefficients located in the upper row of the current block is larger than the sum of the transform coefficients located in the left column, the coefficient distribution type of the current block can be determined to be the second distribution type. Here, the upper row may mean the first row located at the top of the current block, or it may mean a plurality of rows located at the top. Likewise, the left column may mean the first column located at the leftmost side of the current block, or it may mean a plurality of columns located on the left side.

Referring to FIG. 5 (c), when the current block is an image having edges in the horizontal direction, non-zero transform coefficients among the transform coefficients of the current block are mainly distributed in the left region of the current block. A coefficient distribution type having the distribution as shown in FIG. 5 (c) is defined as the third distribution type.

In contrast to the case of FIG. 5 (b), when the sum of the transform coefficients located in the left column of the current block is larger than the sum of the transform coefficients located in the upper row, the coefficient distribution type of the current block can be determined to be the third distribution type. Here, the upper row may mean the first row located at the top of the current block, or it may mean a plurality of rows located at the top. Likewise, the left column may mean the first column located at the leftmost side of the current block, or it may mean a plurality of columns located on the left side.
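
The row and column comparison described in the two preceding paragraphs can be sketched as follows; the use of absolute values and the parameter n (the number of rows and columns compared) are assumptions, since the text speaks only of comparing coefficient sums.

    import numpy as np

    def row_column_distribution_type(coeffs, n=1):
        """Sketch: compare the sum of the upper row(s) against the sum of the
        left column(s) to decide between the second and third distribution types."""
        top_sum = np.abs(coeffs[:n, :]).sum()   # upper n rows
        left_sum = np.abs(coeffs[:, :n]).sum()  # left n columns
        if top_sum > left_sum:
            return "second"                     # vertical edges
        if left_sum > top_sum:
            return "third"                      # horizontal edges
        return None                             # undecided by this comparison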

Referring to FIG. 5 (d), when the current block is an image having an edge at an angle of 45 degrees, the transform coefficients of the current block are distributed symmetrically with respect to the 45-degree diagonal. A coefficient distribution type having the distribution as shown in FIG. 5 (d) is defined as the fourth distribution type.

Also, there may be cases where the current block has both vertical and horizontal edges, or cases where the transform coefficients are not symmetrically distributed. Since such cases do not correspond to any of the first to fourth distribution types described above, a fifth distribution type may be further defined to represent the coefficient distribution type of such a block.
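
As a final illustrative sketch, the fourth distribution type can be tested as (approximate) symmetry of the coefficient block about its 45-degree diagonal, with every remaining block falling back to the fifth distribution type; the tolerance parameter is an assumption not taken from the text.

    import numpy as np

    def is_fourth_distribution_type(coeffs, tol=0):
        """Sketch: symmetric about the 45-degree diagonal, i.e. equal to its
        transpose within an assumed tolerance."""
        return np.abs(coeffs - coeffs.T).sum() <= tol

    def classify_remaining(coeffs):
        """Blocks matching none of the first to fourth types fall back to the
        fifth distribution type."""
        return "fourth" if is_fourth_distribution_type(coeffs) else "fifth"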

Claims (13)

Obtaining a transform coefficient of a current block from a bit stream according to a predetermined scan order;
Determining an intra prediction mode of the current block based on the transform coefficient; And
And performing intra prediction on the current block using the intra prediction mode and neighbor samples adjacent to the current block,
Wherein the step of determining the intra prediction mode comprises:
Determining a coefficient distribution type of the current block based on a comparison between a transform coefficient of the current block and a predetermined threshold value; And
Determining an intra prediction mode of the current block corresponding to the coefficient distribution type,
Wherein the threshold value is variably determined based on a transform coefficient of the current block.
delete
The method according to claim 1,
Wherein the coefficient distribution type indicates a distribution map of non-zero transform coefficients or an edge direction in the current block.
The method according to claim 1,
Wherein the coefficient distribution type is one of a first distribution type indicating that non-zero transform coefficients are distributed in the upper left region of the current block, a second distribution type indicating that non-zero transform coefficients are distributed in the upper region of the current block, a third distribution type indicating that non-zero transform coefficients are distributed in the left region of the current block, a fourth distribution type indicating that the transform coefficients are distributed symmetrically with respect to a diagonal line having a predetermined angle, and a fifth distribution type indicating that the current block does not correspond to any of the first to fourth distribution types.
delete
delete
delete
Determining a candidate mode of a current block based on a sum of absolute transformed differences (SATD);
Re-determining the candidate mode based on the coefficient distribution type of the current block; And
Determining an intra prediction mode of the current block based on the number of the re-determined candidate modes,
Wherein the intra prediction mode of the current block is determined based on a flag (coded_block_flag) related to the current block when the number of the re-determined candidate modes is plural,
Wherein the flag (coded_block_flag) indicates whether or not a non-zero transform coefficient exists in the current block.
9. The method of claim 8,
The SATD is calculated for each of N intra prediction modes predefined in the image encoding apparatus,
Wherein three intra prediction modes are determined as the candidate modes in ascending order of the calculated SATD values.
10. The method of claim 8,
Wherein the candidate mode of the current block is re-determined as a DC mode when the coefficient distribution type of the current block is a first distribution type indicating that non-zero transform coefficients are distributed in the upper left region of the current block.
11. The method of claim 8,
Wherein the candidate mode corresponding to index 0 among the candidate modes of the current block is re-determined as a vertical mode when the coefficient distribution type of the current block is a second distribution type indicating that non-zero transform coefficients are distributed in the upper region of the current block.
12. The method of claim 8,
And when the number of the re-determined candidate modes is 1, the re-determined candidate mode is set as the intra prediction mode of the current block.
delete
KR1020160012978A 2016-02-02 2016-02-02 Method and apparatus for processing a video signal based on intra prediction KR101743665B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160012978A KR101743665B1 (en) 2016-02-02 2016-02-02 Method and apparatus for processing a video signal based on intra prediction

Publications (1)

Publication Number Publication Date
KR101743665B1 true KR101743665B1 (en) 2017-06-05

Family

ID=59222955

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160012978A KR101743665B1 (en) 2016-02-02 2016-02-02 Method and apparatus for processing a video signal based on intra prediction

Country Status (1)

Country Link
KR (1) KR101743665B1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015027116A (en) * 2014-11-07 2015-02-05 株式会社Nttドコモ Image prediction coding method, image prediction coding device, image prediction coding program, image prediction decoding method, image prediction decoding device, and image prediction decoding program

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019022568A1 (en) * 2017-07-28 2019-01-31 한국전자통신연구원 Image processing method, and image encoding/decoding method and device which use same
US11044471B2 (en) 2017-07-28 2021-06-22 Electronics And Telecommunications Research Institute Image processing method, and image encoding/decoding method and device which use same
US11463690B2 (en) 2017-07-28 2022-10-04 Electronics And Telecommunications Research Institute Image processing method, and image encoding/decoding method and device which use same
US11838502B2 (en) 2017-07-28 2023-12-05 Electronics And Telecommunications Research Institute Image processing method, and image encoding/decoding method and device which use same
WO2020130514A1 (en) * 2018-12-17 2020-06-25 엘지전자 주식회사 Method and device for deciding transform coefficient scan order on basis of high frequency zeroing
US11838512B2 (en) 2018-12-17 2023-12-05 Lg Electronics Inc. Method of determining transform coefficient scan order based on high frequency zeroing and apparatus thereof
KR20210030890A (en) * 2019-09-10 2021-03-18 삼성전자주식회사 Image decoding apparatus and method using tool set, and image encoding apparatus and method
KR102492522B1 (en) 2019-09-10 2023-01-27 삼성전자주식회사 Image decoding apparatus and method using tool set, and image encoding apparatus and method
KR102673736B1 (en) 2019-09-10 2024-06-11 삼성전자주식회사 Image decoding apparatus and method using tool set, and image encoding apparatus and method

Similar Documents

Publication Publication Date Title
KR102540995B1 (en) Intra prediction method of chrominance block using luminance sample, and apparatus using same
KR101549910B1 (en) Adaptive transform method based on in-screen rediction and apparatus using the method
KR102061201B1 (en) Methods of transformation based on block information and appratuses using the same
KR20170026276A (en) Method and apparatus for processing a video signal
KR20170116850A (en) Method and apparatus for processing a video signal based on intra prediction
KR20170108367A (en) Method and apparatus for processing a video signal based on intra prediction
KR101974952B1 (en) Methods of coding intra prediction mode using two candidate intra prediction modes and apparatuses using the same
KR20160118945A (en) Method and apparatus for processing a video signal
KR20160037111A (en) Method and apparatus for processing a video signal
KR20160088243A (en) Method and apparatus for processing a video signal
KR20180033030A (en) Method and apparatus for processing a video signal based on adaptive block patitioning
KR20160093565A (en) Method and apparatus for processing a video signal
KR20160093564A (en) Method and apparatus for processing a video signal
KR101743665B1 (en) Method and apparatus for processing a video signal based on intra prediction
KR20220019731A (en) A video encoding/decoding method and apparatus
KR101802375B1 (en) Methods of derivation of temporal motion vector predictor and appratuses using the same
KR20190140820A (en) A method and an apparatus for processing a video signal based on reference between components
KR20210153547A (en) method and apparatus for encoding/decoding a VIDEO SIGNAL, and a recording medium storing a bitstream
KR20140004825A (en) Method and apparatus of syntax element binarization for entropy coding and entropy decoding
KR20210118768A (en) Method of processing a video and device therefor
KR20230092798A (en) Method and apparatus for encoding/decoding image
KR20190023294A (en) Method and apparatus for encoding/decoding a video signal

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant