KR102055451B1 - Apparatus and method for decoding image - Google Patents

Apparatus and method for decoding image

Info

Publication number
KR102055451B1
KR102055451B1 (application KR1020110065220A)
Authority
KR
South Korea
Prior art keywords
block
prediction
unit
mode
pixel
Prior art date
Application number
KR1020110065220A
Other languages
Korean (ko)
Other versions
KR20130063044A (en)
Inventor
임종근
Original Assignee
엠앤케이홀딩스 주식회사 (M&K Holdings Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엠앤케이홀딩스 주식회사 (M&K Holdings Co., Ltd.)
Priority to KR1020110065220A
Publication of KR20130063044A
Application granted
Publication of KR102055451B1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/103: Selection of coding mode or of prediction mode
    • H04N 19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/124: Quantisation
    • H04N 19/13: Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N 19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/182: Adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N 19/51: Motion estimation or motion compensation

Abstract

An image decoding apparatus and method are disclosed. The image decoding apparatus according to the present invention reconstructs the image in coding units and, when pixel values located at the boundary of a neighboring block are unavailable while generating a prediction block, generates the prediction block by linearly interpolating the available pixels. In addition, when the information of a block to be decoded is not available, the pixel values of pixels lying on the line from the upper left to the lower right of the block are obtained by linear interpolation from the pixel values of pixels located at the boundaries of the adjacent blocks, and the pixel values of the remaining pixels are then determined from these.


Description

Apparatus and method for decoding image

The present invention relates to an apparatus and method for decoding an image, and more particularly, to an apparatus and method for decoding an image encoded in coding units preset by an image encoding apparatus.

Regarding the compression and reconstruction of images, continuous efforts have been made, from MPEG to H.264, to improve the compression ratio of the image while reducing the complexity of the system. In particular, as video compression technology is combined with communication technology, there is a growing demand for technology capable of reducing the amount of data while still reconstructing video close to the original. To meet these demands, more advanced image compression technology is being studied, and a new video compression standard has recently been under discussion under the name HEVC.

SUMMARY OF THE INVENTION

The present invention has been made in an effort to provide an image decoding apparatus and method capable of accurately decoding an image while reducing the complexity of the overall apparatus.

In order to achieve the above technical objective, an image decoding apparatus according to the present invention includes: an entropy decoding unit for decoding a bitstream received from an image encoding apparatus to restore information for generating a prediction block, residual values expressed in one-dimensional vector form, and a quantization parameter; a rearranging unit for rearranging the residual values to restore coefficients in two-dimensional block form; an inverse quantizer for inversely quantizing the coefficients in two-dimensional block form based on the quantization parameter; an inverse transform unit for generating a residual block by inversely transforming the coefficients inversely quantized by the inverse quantizer; an intra prediction unit for decoding the intra prediction mode of a current block to be decoded based on the information for generating the prediction block, and for generating a prediction block according to the intra prediction mode; and an adder for reconstructing an original block based on the residual block and the prediction block. When the information for generating the prediction block is not available, the intra prediction unit linearly interpolates the pixel values of pixels located at the boundaries of blocks adjacent to the current block to determine the pixel values of pixels lying on the straight line connecting the upper left corner of the current block to its lower right corner, and then determines the pixel values of the remaining pixels of the current block by linearly interpolating between the pixel values of the pixels on that straight line and the pixel values of the boundary pixels of the adjacent blocks.
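
As an illustration only, a minimal sketch of this interpolation scheme, assuming an n x n block whose top and left boundary pixels are available (the diagonal anchors and weights are assumptions, not the patent's exact arithmetic):

    import numpy as np

    def conceal_block(top_row, left_col, n):
        # top_row/left_col: the n reconstructed boundary pixels directly
        # above and to the left of the block (n >= 2).
        block = np.zeros((n, n), dtype=np.float64)
        # 1) Fill the upper-left -> lower-right diagonal by linearly
        #    interpolating between anchors formed from the boundary pixels.
        ul = (top_row[0] + left_col[0]) / 2.0
        lr = (top_row[-1] + left_col[-1]) / 2.0
        for i in range(n):
            t = i / (n - 1)
            block[i, i] = (1 - t) * ul + t * lr
        # 2) Interpolate every remaining pixel between the nearest boundary
        #    pixel and the diagonal pixel in its row/column.
        for i in range(n):
            for j in range(n):
                if j > i:    # above the diagonal
                    t = (i + 1) / (j + 1)
                    block[i, j] = (1 - t) * top_row[j] + t * block[j, j]
                elif i > j:  # below the diagonal
                    t = (j + 1) / (i + 1)
                    block[i, j] = (1 - t) * left_col[i] + t * block[i, i]
        return np.rint(block).astype(np.int32)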

According to the image decoding apparatus and method according to the present invention, it is possible to accurately decode the image while reducing the complexity of the overall apparatus.

1 is a block diagram showing the configuration of a preferred embodiment of an image decoding apparatus according to the present invention;
2 is a diagram illustrating a configuration of a prediction unit of a preferred embodiment of an image decoding apparatus according to the present invention;
3 is a diagram illustrating an inter-screen prediction unit of a preferred embodiment of an image decoding apparatus according to the present invention;
4 is a flowchart illustrating a process of performing motion compensation based on a prediction unit in the motion compensation unit 320 according to an exemplary embodiment of the image decoding apparatus according to the present invention;
5 is a diagram illustrating a configuration of an intra prediction unit of a preferred embodiment of an image decoding apparatus according to the present invention;
6 shows examples of ALF 140;
7 is a diagram illustrating a configuration of a preferred embodiment of an image encoding apparatus 700;
8 is a diagram illustrating a process of dividing a maximum coding unit into at least one coding unit;
9 is a diagram showing the detailed configuration of the prediction unit 710;
10 is a diagram illustrating a method of generating a prediction unit by the prediction unit generator 910;
11 is a diagram illustrating a detailed configuration of an inter prediction unit;
12 illustrates spatial merging candidate blocks in a prediction unit merging method;
13 is a diagram illustrating a method for obtaining a temporal merging candidate block in a prediction unit merging method;
14 illustrates a spatial AMVP candidate block;
15 illustrates a temporal AMVP candidate block;
16 illustrates a prediction mode;
17 is a flowchart illustrating a process of performing an intra prediction method performed by an image encoding apparatus;
18 is a diagram illustrating a reference pixel of a current prediction unit using an AIS filter and a type of AIS filter applied in a DC mode;
19 illustrates a method of generating a prediction block from a reference pixel according to an intra prediction mode;
20 illustrates a method of generating a prediction block when the prediction mode is the DC mode;
21 is a diagram illustrating a configuration of an intra prediction unit of a video encoding apparatus;
22 is a diagram illustrating a transformation unit partitioning method;
23 is a view showing a method of performing a transformation in a transform unit having a size of 4x4;
24 illustrates a method of applying a deblocking filter;
25 illustrates a method of using ALF;
26 is a diagram illustrating a method of generating a prediction block for a lost block by the prediction unit 130 of the image decoding apparatus 100 according to the present invention;
27 is a diagram illustrating a method of interpolating pixel values located at the boundaries of reconstructed blocks required by the prediction unit 130 of the image decoding apparatus 100 according to the present invention when coding units belonging to the same macroblock belong to different slices; and
28 is a diagram illustrating a method of determining the boundary pixel values of blocks located above a block to be decoded when the pixel values of some blocks P and B located above the block to be decoded are not available.

Hereinafter, exemplary embodiments of an image decoding apparatus and method according to the present invention will be described in detail with reference to the accompanying drawings. In the following description, the same reference numerals are used for the same elements in each drawing, and duplicate descriptions of the same elements are omitted.

1 is a block diagram showing the configuration of a preferred embodiment of an image decoding apparatus according to the present invention.

Referring to FIG. 1, the image decoding apparatus 100 according to the present invention includes an entropy decoding unit 110, a reordering unit 115, an inverse quantization unit 120, an inverse transform unit 125, a prediction unit 130, a deblocking filter 135, an adaptive loop filter (ALF) 140, and a memory 145.

When image data encoded by an image encoding apparatus, or a bitstream encoded and transmitted by an image encoding apparatus, is input, the image decoding apparatus 100 can decode the image by performing a procedure that is the inverse of that of the image encoding apparatus.

The entropy decoder 110 performs entropy decoding by a process that is the inverse of that of the entropy encoding unit in the image encoding apparatus. For example, the variable length coding (VLC) table used for entropy encoding in the image encoding apparatus is implemented as the same VLC table in the entropy decoder and used for entropy decoding. The entropy decoder 110 may also receive, from the image encoder, additional information required for entropy decoding. Among the information decoded by the entropy decoder 110, the information for generating a prediction block is provided to the predictor 130, and the residual values are provided to the reordering unit 115. In addition, like the image encoder, the entropy decoder 110 may change the code word assignment table using a counter or a direct swapping method, and perform entropy decoding based on the changed code word assignment table.
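
As a toy illustration of the counter/direct-swapping adaptation (the text does not spell out the update rule; this is one common form, with shorter code words at lower table indices):

    class AdaptiveVlcTable:
        def __init__(self, symbols):
            self.symbols = list(symbols)      # index ~ code word length rank
            self.counts = [0] * len(symbols)

        def on_decoded(self, idx):
            # Direct swapping: promote a symbol one slot toward index 0
            # whenever its count exceeds that of its predecessor.
            self.counts[idx] += 1
            if idx > 0 and self.counts[idx] > self.counts[idx - 1]:
                self.symbols[idx - 1], self.symbols[idx] = (
                    self.symbols[idx], self.symbols[idx - 1])
                self.counts[idx - 1], self.counts[idx] = (
                    self.counts[idx], self.counts[idx - 1])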

The reordering unit 115 may reorder the bitstream entropy-decoded by the entropy decoding unit 110 based on the method by which the image encoder rearranged it. In doing so, the reordering unit 115 restores the coefficients expressed in one-dimensional vector form to two-dimensional block form. The reordering unit 115 may receive information related to the coefficient scanning performed by the image encoder and rearrange the coefficients by scanning in the reverse of the order performed by the image encoder.
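
For illustration, restoring a one-dimensional coefficient vector to a 4x4 block, assuming a zig-zag scan (the actual scan order is whatever the encoder signals):

    import numpy as np

    ZIGZAG_4x4 = [(0,0),(0,1),(1,0),(2,0),(1,1),(0,2),(0,3),(1,2),
                  (2,1),(3,0),(3,1),(2,2),(1,3),(2,3),(3,2),(3,3)]

    def inverse_scan(coeffs_1d):
        # Place each scanned coefficient back at its 2-D block position.
        block = np.zeros((4, 4), dtype=np.int32)
        for value, (r, c) in zip(coeffs_1d, ZIGZAG_4x4):
            block[r, c] = value
        return block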

The inverse quantization unit 120 performs inverse quantization based on quantization parameters provided from the image encoding apparatus and coefficient values of blocks rearranged by the reordering unit 115.
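
As a schematic illustration of the rescaling (an HEVC-style formulation chosen for concreteness; the patent does not fix these constants, and scaling matrices are omitted):

    LEVEL_SCALE = [40, 45, 51, 57, 64, 72]   # per-(QP mod 6) scale factors

    def dequantize(level, qp, bit_depth=8, log2_block_size=2):
        # The scale doubles every 6 QP steps; the shift renormalizes the
        # result for the given bit depth and block size.
        shift = bit_depth + log2_block_size - 5
        scaled = level * (LEVEL_SCALE[qp % 6] << (qp // 6))
        return (scaled + (1 << (shift - 1))) >> shift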

The inverse transform unit 125 performs inverse DCT and inverse DST, corresponding to the discrete cosine transform (DCT) and discrete sine transform (DST) performed by the transform unit of the image encoder, on the quantization result produced by the quantization unit of the image encoder. The inverse transform may be performed based on the coding unit (CU) determined by the image encoding apparatus, or based on a transform unit (TU). In the transform unit of the image encoding apparatus, DCT and DST may be selectively performed according to several pieces of information, such as the prediction method, the size of the current block, and the prediction direction, and the inverse transform unit 125 of the image decoding apparatus 100 may perform the inverse transform based on the transform information used by the transform unit of the image encoding apparatus. For example, in the case of a 4x4 (pixel) block, the inverse transform unit 125 may perform the inverse transform by selectively using inverse DCT or inverse DST according to the intra prediction mode of the block, as described in Table 1. The intra prediction mode information of the block may be provided by the prediction unit 130. In addition, the inverse transform unit 125 may perform the inverse transform in division units based on the division unit information provided by the image encoding apparatus.

Table 1

Mode                  Vertical transform    Horizontal transform
0 (VER to VER+8)      DST                   DCT
1 (HOR to HOR+8)      DCT                   DST
2 (DC)                DCT                   DCT
3 (VER+8 to VER-1)    DST                   DST
4 (HOR+8 to HOR)      DST                   DST

Referring to Table 1, DCT or DST may be selectively used as the transform according to the direction of intra prediction. When the intra prediction mode lies between VER and VER+8, DST is used for the vertical transform and DCT for the horizontal transform. That is, the transform method is adapted according to the mode.
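
A small sketch of the mode-dependent selection of Table 1 (the mode groups are written as strings purely for illustration; the actual mapping from mode number to group follows the table):

    def select_inverse_transforms(mode_group):
        # Returns the (vertical, horizontal) inverse transforms for a 4x4
        # intra block according to Table 1.
        table = {
            "VER..VER+8":   ("DST", "DCT"),
            "HOR..HOR+8":   ("DCT", "DST"),
            "DC":           ("DCT", "DCT"),
            "VER+8..VER-1": ("DST", "DST"),
            "HOR+8..HOR":   ("DST", "DST"),
        }
        return table[mode_group]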

The prediction unit 130 generates a prediction block based on prediction block generation related information provided from the entropy decoding unit 110 and previously decoded blocks or picture information read from the memory 145. 2 is a diagram illustrating a configuration of a prediction unit of a preferred embodiment of an image decoding apparatus according to the present invention.

Referring to FIG. 2, the prediction unit 130 includes a prediction unit determiner 210, an inter prediction unit 220, and an intra prediction unit 230.

The prediction unit determiner 210 receives various information from the entropy decoder 110 such as prediction unit information, prediction mode information of the intra prediction method, and motion prediction related information of the inter prediction method. Also, the prediction unit determiner 210 distinguishes a prediction unit from the current coding unit, and determines whether the prediction unit performs inter prediction or intra prediction.

The inter prediction unit 220 performs inter prediction on the current prediction unit using the information necessary for inter prediction provided by the image encoding apparatus, based on information contained in at least one of a previous picture or a subsequent picture of the current picture that includes the current prediction unit.

3 is a diagram illustrating an inter prediction unit of a preferred embodiment of an image decoding apparatus according to the present invention.

Referring to FIG. 3, the inter picture predictor 220 includes a reference picture interpolator 310 and a motion compensator 320.

When the motion vector provided by the image encoding apparatus has 1/2-pixel or 1/4-pixel precision, the reference picture interpolator 310 generates sub-integer pixel information based on the reference picture read from the memory 145. For luminance pixels, a DCT-based 8-tap interpolation filter whose coefficients differ by position may be used to generate sub-integer pixel information in units of 1/4 pixel. For chrominance signals, a DCT-based interpolation filter whose coefficients differ by position may be used to generate sub-integer pixel information in units of 1/8 pixel. The pixel positions and corresponding filter coefficients used to generate the sub-integer pixel information may be the same as those used by the reference picture interpolation unit of the image encoding apparatus.

The motion compensator 320 generates a prediction block by performing motion compensation on the current block based on motion prediction related information such as motion vector information and reference picture information provided from the image encoding apparatus.

4 is a flowchart illustrating a process of performing motion compensation based on a prediction unit in the motion compensation unit 320 according to an exemplary embodiment of the image decoding apparatus according to the present invention.

Referring to FIG. 4, the motion compensator 320 determines, based on the division unit, whether the prediction unit included in the division unit is in skip mode (S400). If the prediction unit is in skip mode, the motion compensator 320 performs a merge skip (S410).

A prediction unit included in one division unit may have the skip mode. The skip mode is a mode in which the residual values between the prediction block and the original block are not transmitted. When a prediction unit included in one division unit has the skip mode, the motion compensator 320 may perform a merge skip to generate the prediction block.

Merge skip is a method that uses the motion prediction related information of one of the neighboring merge skip candidate blocks as the motion prediction related information of the current prediction unit. That is, when the prediction unit is a merge skip block, the motion prediction related information, such as the motion vector and reference picture index, of the merge skip candidate block indicated by the merge index of the current prediction unit may be used as the motion prediction related information of the current prediction unit.

Meanwhile, when the prediction unit included in the division unit is not in the merge skip mode, the motion compensator 320 determines whether the prediction unit is a prediction unit merge block (S420). If the current prediction unit is not predicted using the merge skip mode, it may be either a prediction unit merge block (PU Merge Block) or an AMVP block (Advanced Motion Vector Prediction Block). When the prediction unit is a prediction unit merge block, the motion compensator 320 generates a prediction block by motion compensation of the prediction unit merge block (S430).

The motion compensator 320 may generate the prediction block based on the motion prediction related information of one of five prediction unit merge candidate blocks: four spatial merge candidate blocks located in blocks neighboring the current block and one temporal merge candidate block located in a reference picture. Accordingly, the image decoding apparatus 100 receives the candidate block information used to generate the prediction unit merge block in the image encoding apparatus, and can generate the same prediction block as the encoder by using the motion prediction related information, such as the motion vector and reference picture information, of that candidate block. The prediction block generated by the motion compensator 320 is combined with the residual block provided by the inverse transform unit 125 to generate a reconstruction block.

If it is determined in step S420 that the prediction unit is not a prediction unit merge block, the motion compensator 320 generates a prediction block by motion compensation of an AMVP block (S440). When the current prediction unit is not a prediction unit merge block, the current block is an AMVP block, and the AMVP prediction method may be performed on the prediction unit. When the prediction unit is an AMVP block, the image decoding apparatus 100 receives from the image encoding apparatus information indicating which AMVP candidate block was used, chosen from two spatial AMVP candidate blocks around the current prediction unit and a temporal AMVP candidate block included in another picture, together with the motion vector difference information between the used AMVP candidate block and the current block, and generates the prediction block from this information. The generated prediction block is combined with the residual block provided by the inverse transform unit 125 to generate a reconstruction block.
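
The decision flow of FIG. 4 (S400 to S440) can be summarized as follows (merge_skip, merge_motion_compensation, and amvp_motion_compensation are hypothetical helpers standing in for the three branches):

    def motion_compensate(pu, residual_block):
        if pu.is_skip:                            # S400: skip mode?
            return merge_skip(pu)                 # S410: no residual is sent
        if pu.is_merge:                           # S420: PU merge block?
            pred = merge_motion_compensation(pu)  # S430
        else:                                     # otherwise an AMVP block
            pred = amvp_motion_compensation(pu)   # S440: MVP + signalled MVD
        return pred + residual_block              # combine to reconstruct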

Meanwhile, the intra prediction unit 230 generates a prediction block based on pixel information in the current picture. If the prediction unit is a prediction unit that has performed intra prediction, the intra prediction unit 230 performs intra prediction based on intra prediction mode information of the prediction unit provided by the image encoding apparatus.

5 is a diagram illustrating a configuration of an intra prediction unit of a preferred embodiment of an image decoding apparatus according to the present invention.

Referring to FIG. 5, the intra prediction unit 500 includes an AIS filter 510, a reference pixel interpolator 520, and a DC filter 530.

The AIS filter 510 determines, according to the prediction mode of the current prediction unit, whether to apply filtering to the reference pixels of the current block, and filters them accordingly. AIS filtering may thus be performed on the reference pixels of the current block using the prediction mode of the prediction unit and the AIS filter information provided by the image encoding apparatus. If the prediction mode of the current block is a mode that does not perform AIS filtering, the AIS filter is not applied.

When the prediction mode of the prediction unit is one that performs intra prediction based on interpolated reference pixel values, the reference pixel interpolator 520 interpolates the reference pixels to generate reference pixels at sub-integer positions. If the prediction mode of the current prediction unit is a mode that generates the prediction block without interpolating the reference pixels, the reference pixels are not interpolated. The DC filter 530 generates the prediction block through filtering when the prediction mode of the current block is the DC mode.

The reconstructed block may be generated by combining the prediction block generated by the predictor 130 with the residual block provided by the inverse transform unit 125, and the reconstructed block or picture may be provided to the deblocking filter 135. Information about whether a deblocking filter was applied to the corresponding block or picture and, if so, whether a strong or a weak filter was applied, may be received from the image encoding apparatus. The deblocking filter 135 of the image decoding apparatus 100 performs deblocking filtering on the block based on the deblocking filter related information provided by the image encoding apparatus. As in the image encoding apparatus, the deblocking filter 135 may first perform vertical deblocking filtering and then horizontal deblocking filtering; at portions where vertical and horizontal deblocking filtering overlap, whichever of the two was not previously performed may be performed. This deblocking filtering process enables parallel processing of deblocking filtering.

The ALF 140 performs additional adaptive loop filtering based on a value obtained by comparing the reconstructed image filtered through the deblocking filter 135 with the original image. The ALF 140 may apply ALF to a coding unit based on ALF-related information, such as ALF application information, ALF size information, and ALF coefficient information, provided by the image encoding apparatus for each coding unit. As in the image encoding apparatus, the ALF 140 may use a diamond-shaped filter of one of three sizes: 5-tap, 7-tap, or 9-tap. 6 shows examples of the ALF 140. Referring to FIG. 6, the ALF 140 may use a 5-tap filter 2000, a 7-tap filter 2010, or a 9-tap filter 2020. The coefficient values of each filter may be used symmetrically. The ALF-related information (filter coefficient information, ALF on/off information, and filter type information) may be included in each slice header and transmitted. For chrominance signals, the filter may be applied on a per-picture basis, and a rectangular ALF may be applied.

The reconstructed picture or block is stored in the memory 145, and the reconstructed picture or block stored in the memory 145 may be used as a reference picture or a reference block, and the reconstructed picture is output through an output terminal.

FIG. 7 is a diagram illustrating a configuration of a preferred embodiment of the image encoding apparatus 700.

Referring to FIG. 7, the image encoding apparatus 700 may include a picture splitter 705, a predictor 710, a transformer 715, a quantizer 720, a reordering unit 725, an entropy encoder 730, an inverse quantizer 735, an inverse transformer 740, a deblocking filter 745, an adaptive loop filter (ALF) 750, a memory 755, and a controller 760.

The picture dividing unit 705 divides the input picture into at least one coding unit. A coding unit (CU), the division unit of a picture, is the unit in which encoding is performed in the image encoding apparatus 700. Coding units may have sizes of 64x64, 32x32, 16x16, and 8x8 (pixels), and may be hierarchically divided with depth information based on a quad tree structure. The coding unit of the largest size is defined as the largest coding unit (LCU), and the coding unit of the smallest size as the smallest coding unit (SCU). Information related to the largest coding unit (LCU) and the smallest coding unit (SCU) may be included in a sequence parameter set (SPS) and transmitted to an external device, such as an image decoding apparatus according to the present invention or a storage device. The largest coding unit may be hierarchically divided into smaller coding units, with the splits chosen at low cost based on a cost function that evaluates coding efficiency.

Whether a coding unit is split may be represented by flag information such as a split flag. In addition, coding units may be classified into skip coding units (Skip CUs) and non-skip coding units (Non-Skip CUs) according to the motion prediction method of the prediction units defined for the coding unit. In a skip coding unit, a prediction unit performs motion prediction using the merge skip method; in a non-skip coding unit (Non-Skip CU), a prediction unit may perform intra prediction or inter prediction.

8 is a diagram illustrating a process of dividing a maximum coding unit into at least one coding unit.

Referring to FIG. 8, whether one coding unit is split may be expressed by depth information and a split flag. One coding unit may be divided into a plurality of small coding units based on size information, depth information, and split flag information of the LCU. The size information of the largest coding unit, the split depth information, and whether to split the current coding unit may be included in a sequence parameter set (SPS) on the bitstream and transmitted to the image decoding apparatus according to the present invention.

In FIG. 8, step S800 illustrates a case in which a maximum coding unit (LCU) has a size of 64 × 64 pixels and a split depth of 0 in a split tree. In operation S800, the flag indicating whether to split the right block is set to 1, and the maximum coding unit (LCU) may be divided into four coding units having a square size of 32 × 32 pixels. In operation S800, since the flag indicating whether to split the left block is 0, the maximum coding unit (LCU) is encoded in one coding unit without being divided.

In operation S810, the maximum coding unit has a size of 64x64 pixels and a split depth of 1 in the split tree. A split depth of 1 indicates that the largest coding unit has been split once; when the size of the maximum coding unit is 64x64 (pixels), the coding unit size at split depth 1 is 32x32 (pixels). As in step S800, the 32x32 (pixel) coding unit on the right in step S810, whose split flag is 1, may be split into four 16x16 (pixel) coding units of identical size, while the 32x32 (pixel) coding unit whose split flag is 0 is encoded as one coding unit without being split. In the same manner, one block may be successively divided according to the split depth information, down to at least a coding unit having a size of 8x8 (pixels).

Step S820 illustrates the coding unit when the maximum coding unit size is 64x64 (pixels) and the split depth is 4. When the split depth is 4, the size of the coding unit is 8x8 (pixels), which is the smallest coding unit. Since the minimum coding unit can no longer be split into smaller CUs, it carries no split flag. In the above description, an 8x8 (pixel) size is used as the minimum coding unit for convenience, but in some cases a coding unit larger than 8x8 (pixels) may be the minimum coding unit.
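
A minimal sketch of this recursive split-flag parsing (bs.read_flag and decode_cu are hypothetical helpers standing in for real bitstream and CU decoding routines):

    def parse_coding_tree(bs, x, y, size, min_cu=8):
        # The minimum coding unit carries no split flag; otherwise one flag
        # decides whether this node splits into four square sub-CUs.
        if size == min_cu or not bs.read_flag():
            decode_cu(bs, x, y, size)
            return
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                parse_coding_tree(bs, x + dx, y + dy, half, min_cu)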

The picture dividing unit 705 generates a plurality of coding unit combinations for one picture. For example, one 64x64 (pixel) size coding unit may be divided into a prediction unit (PU) and a transform unit (TU) having various combinations.

The controller 760 calculates a division cost according to the division of the coding unit divided by the picture division unit 705 based on a predetermined cost function. If the picture division unit 705 is configured to calculate the division cost according to the division of the block, the control unit 760 does not calculate the division cost separately.

The picture divider 705 described above is shown as an independent component for convenience of description, but it may be included in other components of the image encoding apparatus 700. For example, the controller 760 or the predictor 710 may include the function of a block divider that divides one block into a plurality of coding units. More generally, although the merging and division of the various components included in the image encoding apparatus 700 are not separately described here, one component may be divided into a plurality of components, or a plurality of components may be implemented as one component.

Hereinafter, an encoding process performed by the image encoding apparatus 700 based on one 64x64 (pixel) size coding unit divided by the picture splitter 705 will be described.

The prediction unit 710 generates a prediction block based on the coding unit provided from the picture dividing unit 705. The prediction unit 710 divides one coding unit into at least one prediction unit (PU) for performing intra prediction or inter prediction. In this case, the prediction unit 710 may include an inter prediction unit and an intra prediction unit.

9 is a diagram illustrating a detailed configuration of the prediction unit 710.

Referring to FIG. 9, the prediction unit 710 includes a prediction unit generator 910, an inter prediction unit 920, an intra prediction unit 930, and a prediction mode determination unit 940.

The prediction unit generator 910 divides the coding unit provided by the picture splitter 705 to generate prediction units. 10 is a diagram illustrating a method of generating a prediction unit by the prediction unit generator 910. Referring to FIG. 10, one coding unit 1000 may be divided into prediction units of different sizes depending on whether intra prediction or inter prediction is performed. When performing intra prediction, one coding unit may be divided into prediction units 1010 having sizes of 2Nx2N and NxN (pixels). When performing inter prediction, one coding unit may be divided into prediction units having sizes of 2Nx2N, 2NxN, Nx2N, and NxN (pixels). In this case, the use of an NxN (pixel) prediction unit may be restricted to the minimum coding unit, the coding unit of the smallest size, in order to avoid duplicating prediction cost calculations. The size of a prediction unit may be 32x32, 16x16, 8x8, or 4x4 (pixels). One coding unit may be split into at least one prediction unit. The inter prediction unit 920 splits one coding unit into a plurality of prediction units based on information of at least one of a previous picture or a subsequent picture of the current picture.

11 is a diagram illustrating a detailed configuration of an inter prediction unit.

Referring to FIG. 11, the inter picture predictor 920 includes a reference picture interpolator 1110, a motion predictor 1120, and a motion compensator 1130.

The reference picture interpolator 1110 generates pixel information of an integer pixel or less in the reference picture based on the reference picture information provided from the memory 755. In the case of luminance pixels, a DCT based 8-tap interpolation filter having different filter coefficients may be used to generate pixel information of integer pixels or less in units of 1/4 pixels. In addition, in the case of the chrominance pixel, a DCT-based interpolation filter having different filter coefficients may be used to generate pixel information of an integer pixel or less in units of 1/8 pixels.

Table 2 below shows an example of filter coefficients, by sub-pixel position, for generating sub-integer pixel information for luminance pixels, and Table 3 shows an example of filter coefficients, by sub-pixel position, for generating sub-integer pixel information for chrominance pixels.

Table 2

Position   Filter coefficients
1/4        {-1, 4, -10, 57, 19, -7, 3, -1}
2/4        {-1, 4, -11, 40, 40, -11, 4, -1}
3/4        {-1, 3, -7, 19, 57, -10, 4, -1}

Table 3

Position   Filter coefficients
1/8        {-3, 60, 8, -1}
2/8        {-4, 54, 16, -2}
3/8        {-5, 46, 27, -4}
4/8        {-4, 36, 36, -4}
5/8        {-4, 27, 46, -5}
6/8        {-2, 16, 54, -4}
7/8        {-1, 8, 60, -3}
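
As an illustration, applying the half-pel (2/4-position) luma filter of Table 2 to one row of integer samples (border padding and bit-depth handling are simplified assumptions):

    import numpy as np

    LUMA_HALF = [-1, 4, -11, 40, 40, -11, 4, -1]   # 2/4-position taps, sum 64

    def interp_halfpel_row(row, x):
        # Interpolate the half-pel sample between integer pixels x and x+1;
        # the row is assumed already padded so positions x-3 .. x+4 exist.
        taps = row[x - 3 : x + 5]
        acc = int(np.dot(taps, LUMA_HALF))
        return int(np.clip((acc + 32) >> 6, 0, 255))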

The motion predictor 1120 performs motion prediction based on the reference picture interpolated by the reference picture interpolator 1110. Various methods, such as a full search-based block matching algorithm (FBMA), a three-step search (TSS), and a new three-step search algorithm (NTS), may be used to calculate the motion vector. The motion vector may have a value in units of 1/2 or 1/4 pixel. The motion predictor 1120 may also predict the current prediction unit using one of several motion prediction methods: the merge skip method, the prediction unit merge (PU merge) method, and the advanced motion vector prediction (AMVP) method.

The merge skip method generates a prediction block by receiving the motion prediction related information of the current block from one of the neighboring blocks. The merge skip method generates the prediction block in the same way as the prediction unit merge method described below, but unlike the prediction unit merge method, it does not transmit the residual values between the original block and the prediction block to the image decoding apparatus. The merge skip method may use, as the motion prediction related information of the current block, the motion-related information, such as the motion vector and reference picture index, of the block indicated by the merge index of the current prediction unit, whether that block neighbors the current prediction unit or is included in another picture. For the detailed prediction block generation of the merge skip method, refer to the description of the prediction unit merging method below.

In the prediction unit merge method, one prediction unit may be provided with motion prediction related information from one of five prediction unit merge candidate blocks. When the motion vector related information of the current prediction unit is the same as the motion vector related information of at least one of the prediction unit merge candidate blocks, the current prediction unit may be predicted using the prediction unit merge method.

The prediction unit merging method may generate a prediction block for the current prediction unit by using motion prediction related information (motion vector, reference picture index, etc.) of five candidate blocks. In the prediction unit merging method, the merging candidate block may include four spatial merging candidate blocks spatially located in the same picture as the current prediction unit and one temporal merging candidate block located in a picture different from the current block.

12 illustrates spatial merging candidate blocks in the prediction unit merging method.

Referring to FIG. 12, in the prediction unit merging method, the spatial merging candidate blocks are the left block 1200 of the current prediction unit, the upper block 1210 of the current prediction unit, the upper right block 1220 sharing one corner point with the current prediction unit, and the lower left block 1230 sharing one corner point with the current prediction unit. If, among the four spatial merging candidate blocks, there is a candidate block having the same motion prediction related information as the current block, the motion prediction related information of that candidate block, such as its reference picture index and motion vector, may be used as the motion prediction related information of the current prediction unit.

13 is a diagram illustrating a method for obtaining a temporal merging candidate block in a prediction unit merging method.

Referring to FIG. 13, to obtain a temporal merging candidate block, a block of the reference picture existing at the same location (hereinafter, a co-located block) as a 4x4 block 1305, 1310, or 1315 positioned at the center of the current prediction unit is determined as the temporal merging candidate block in the prediction unit merging method. When the motion prediction related information of the current prediction unit and that of the temporal merging candidate block are the same, the motion prediction related information of the temporal merging candidate block may be used as the motion prediction related information of the current prediction unit. In FIG. 13, 1300 illustrates the case in which a 32x32 size is used as the prediction unit, 1310 the case in which a 32x16 size is used, and 1320 the case in which a 16x16 size is used.

In the prediction unit merging method, the 4x4 co-located block of the reference picture located closest to the current picture may be the temporal merging candidate block. If the temporal merging candidate block was bidirectionally predicted and therefore has two motion vectors, the motion vector that passes through the current prediction unit is used; if the two motion vectors point in the same direction, the shorter motion vector may be used as the motion vector of the current prediction unit. The motion vector of the temporal merging candidate block may be scaled according to the distance between the picture containing the current prediction unit and the picture containing the co-located block.
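
A simplified sketch of the distance-based scaling (real codecs use fixed-point arithmetic with clipping, omitted here; td is assumed nonzero):

    def scale_temporal_mv(mv, poc_cur, poc_ref, poc_col, poc_col_ref):
        # tb: distance the scaled MV should span (current picture to its
        #     reference); td: distance spanned by the co-located block's MV.
        tb = poc_cur - poc_ref
        td = poc_col - poc_col_ref
        scale = tb / td
        return (round(mv[0] * scale), round(mv[1] * scale))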

The AMVP method likewise receives motion prediction related information from AMVP candidate blocks in the vicinity of the current block. The prediction unit merging method and the AMVP method are alike in receiving motion prediction information from blocks neighboring the current block; however, unlike the prediction unit merging method, the AMVP method generates motion vector difference information between the AMVP candidate block and the current prediction unit and uses it as motion prediction related information.

14 illustrates a spatial AMVP candidate block.

Referring to FIG. 14, two spatial AMVP candidate blocks may be selected around the current prediction unit. One spatial AMVP candidate block is selected, in the direction of the arrow, from the available blocks among the lower left block 1400 sharing one corner point with the current prediction unit and the left block 1410 of the current prediction unit. The other spatial AMVP candidate block is selected, in the direction indicated by the arrow, from the available blocks among the upper right block 1420 sharing one corner point with the current prediction unit, the upper block 1430 of the current prediction unit, and the upper left block 1440 sharing one corner point with the current prediction unit. An inter prediction block that uses the same motion prediction direction as the current prediction unit and has the same reference index may serve as an available spatial AMVP candidate block referred to by the current prediction unit.

15 illustrates a temporal AMVP candidate block.

Referring to FIG. 15, to obtain a temporal AMVP candidate block, a co-located block 1500 is found in the reference picture based on the current prediction unit. The block 1510 located at the lower right of the co-located block 1500 may be selected as the temporal AMVP candidate block. In the AMVP method, the motion prediction related information of one of the two spatial AMVP candidate blocks and the one temporal AMVP candidate block described above is used as the motion vector related information of the current prediction unit, and additionally the difference between the motion vector value of the current prediction unit and the motion vector value of the selected AMVP candidate block may be used as motion prediction related information of the current block.
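
In code, AMVP reconstruction at the decoder reduces to adding the signalled difference to the chosen predictor (a sketch with hypothetical inputs):

    def reconstruct_amvp_mv(mvp_candidates, candidate_idx, mvd):
        # candidate_idx selects among the spatial/temporal AMVP candidates;
        # mvd is the motion vector difference parsed from the bitstream.
        mvp = mvp_candidates[candidate_idx]
        return (mvp[0] + mvd[0], mvp[1] + mvd[1])   # MV = MVP + MVD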

The motion predictor 1120 determines the optimal prediction method for the current prediction unit from among the motion prediction methods described above: the merge skip method, the prediction unit merge (PU merge) method, and the advanced motion vector prediction (AMVP) method. For convenience of description, the motion predictor 1120 is described as determining the prediction method of the current prediction unit, but this determination may also be made in other components, such as the controller 760, the prediction mode determiner 940, or the motion compensator 1130.

The motion compensator 1130 generates a prediction block based on the motion prediction related information, such as the motion vector and reference picture information, calculated by the motion predictor 1120. The motion prediction block generated by the motion compensator 1130 may be classified as a merge skip block, a prediction unit merge block, or an advanced motion vector prediction block, according to the motion prediction method.

The intra prediction unit 930 may predict the prediction unit based on the pixel information in the current picture.

16 is a diagram illustrating a prediction mode.

Referring to FIG. 16, the prediction modes may have mode numbers from mode 0 to mode 33. There are 34 modes in total: 32 directional modes and two non-directional modes, the DC prediction mode and the planar mode. FIG. 16 shows the planar mode, one of the non-directional prediction modes. To obtain the value of a particular pixel 1650 in this mode, one interpolation is performed in the horizontal direction and one in the vertical direction, and the average of the two results is used as the pixel value. The value of the pixel 1650 may be interpolated horizontally using the pixel value of the reference pixel 1655 located to the left of the current pixel and the pixel value of the right pixel 1665, to which the pixel value of the reference pixel 1660 has been copied. Likewise, the value of the pixel 1650 may be interpolated vertically using the pixel value of the reference pixel 1670 located above the current pixel and the pixel value of the lower pixel 1680, to which the pixel value of the reference pixel 1675 has been copied. In the planar mode, the pixel values of the prediction unit may be calculated as the average of the values interpolated in the vertical and horizontal directions in this manner. For a prediction block using the planar mode, residual values with respect to the original block may not be transmitted to the decoder. The reference pixels for predicting the current prediction unit, namely the upper right pixels 1600, the upper pixels 1610, the upper left pixel 1620, the left pixels 1630, and the lower left pixels 1640 of the current block, may serve as reference pixels for intra prediction.
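
A sketch of the planar computation described above, assuming top and left reference arrays of n samples each and HEVC-style weighting (the patent's exact arithmetic may differ):

    import numpy as np

    def planar_predict(top, left, n):
        # Each pixel is the average of a horizontal interpolation (left
        # reference vs. a copied top-right sample) and a vertical one
        # (top reference vs. a copied bottom-left sample).
        top_right = top[n - 1]      # stands in for the copied right column
        bottom_left = left[n - 1]   # stands in for the copied bottom row
        pred = np.zeros((n, n), dtype=np.int32)
        for y in range(n):
            for x in range(n):
                horiz = (n - 1 - x) * left[y] + (x + 1) * top_right
                vert = (n - 1 - y) * top[x] + (y + 1) * bottom_left
                pred[y, x] = (horiz + vert + n) // (2 * n)
        return pred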

Meanwhile, the number of prediction modes used may vary depending on the size of the prediction unit. Table 4 below describes the number of modes according to the size of the prediction unit in the luminance information and the color difference information.

Table 4

Component   PU size   Number of intra modes
Luma        4         17
Luma        8         34
Luma        16        34
Luma        32        34
Luma        64        3
Chroma      All PUs   4

Referring to Table 4, for intra prediction of luminance information, a 4x4 prediction unit may have 17 prediction modes, from mode 0 to mode 16, while 8x8, 16x16, and 32x32 prediction units may have 34 prediction modes. A 64x64 prediction unit may have three prediction modes. Four modes may be used for intra prediction of chrominance information: a vertical mode (mode 0), a horizontal mode (mode 1), a DC mode (mode 2), and a linear prediction mode (Linear Model Mode, LM mode). The linear prediction mode predicts chrominance information from luminance information using a predetermined equation.

To transmit the prediction mode information of the current prediction unit, the prediction mode of the current prediction unit is predicted using the prediction modes of the upper block and the left block of the current prediction unit, and the predicted prediction mode is compared with the actual prediction mode of the current block to decide what to transmit. If neither the upper nor the left prediction mode of the current prediction unit is available, the DC prediction mode is inserted into the first prediction mode candidate list. For example, when the upper block and the left block of the current prediction unit do not exist, or when they use inter prediction, the DC prediction mode may be inserted into the first prediction mode candidate list.

If a prediction mode is available in only one of the upper block and the left block, or if the prediction modes of the upper block and the left block are the same, that one prediction mode is inserted into the first prediction mode candidate list. If prediction modes are available in both the upper block and the left block, the prediction mode with the smaller mode number is inserted into the first prediction mode candidate list and the prediction mode with the larger mode number into the second prediction mode candidate list. For example, when the prediction mode of the upper block of the current prediction unit is mode 1 (vertical prediction mode) and the prediction mode of the left block is mode 2 (DC prediction mode), mode 1 is inserted into the first prediction mode candidate list and mode 2 into the second prediction mode candidate list.

If the prediction mode of the current prediction unit is the same as the prediction mode inserted into the first prediction mode candidate list, a flag indicating this is transmitted to the image decoder. By transmitting flag information indicating that the prediction mode of the current prediction unit is the same as a prediction mode present in the first prediction mode candidate list, the intra prediction mode of the current prediction unit can be transmitted using fewer bits than directly encoding its prediction mode number. Similarly, when the prediction mode of the current prediction unit is the same as the prediction mode inserted into the second prediction mode candidate list, a flag indicating this may be transmitted.

When the prediction mode of the current prediction unit is present in neither the first nor the second prediction mode candidate list, its prediction mode information is transmitted by assigning it a code word from a table in which the prediction modes present in the candidate lists have been removed, provided those modes are smaller than the intra prediction mode of the current prediction unit. For example, when the prediction mode of the current prediction unit is mode 4, the intra prediction mode of the upper block is mode 0, and the prediction mode of the left block is mode 1, modes 0 and 1 are deleted from the table matching prediction modes to code words, and the remaining modes move up into the vacated positions: mode 2 is assigned the position of the former mode 0, mode 3 that of mode 1, and mode 4 that of mode 2. The code word previously assigned to mode 2 is thus transmitted to signal mode 4, reducing the number of code word bits needed to transmit the prediction mode information of the current prediction unit.
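
For illustration, the candidate-list signalling above can be sketched as follows (the flag labels and the fixed table of 34 modes are illustrative, not the patent's binarization):

    def code_intra_mode(cur_mode, cand1, cand2=None):
        # Sketch of what would be signalled for the intra prediction mode.
        if cur_mode == cand1:
            return ("matches_first_candidate_list",)
        if cand2 is not None and cur_mode == cand2:
            return ("matches_second_candidate_list",)
        # Candidate modes are removed from the mode-to-codeword table and the
        # remaining modes shift up, so the transmitted index shrinks: with
        # cur_mode=4 and candidates {0, 1}, the index sent is 2.
        remaining = [m for m in range(34) if m not in (cand1, cand2)]
        return ("remainder", remaining.index(cur_mode))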

17 is a flowchart illustrating a process of performing an intra prediction method performed by an image encoding apparatus.

Referring to FIG. 17, an adaptive intra smoothing (AIS) filter is applied to a reference pixel (S1700). At this time, whether to apply the AIS filter may vary depending on the prediction mode. Table 5 below describes whether the AIS filter is applied according to the intra prediction mode and the AIS filter applied when the prediction mode is the DC mode (mode 2).

Mode information   4×4   8×8   16×16   32×32   64×64
Mode 1              0     0      0       0       0
Mode 2              0     0      0       0       0
Mode 3              A     B      C       0       0
Mode 4              1     1      1       1       1
Mode 5              0     1      1       1       1
Mode 6              0     1      1       1       0
Mode 7              1     1      1       1       0
Mode 8              0     1      1       1       0
Mode 9              0     1      1       1       0
Mode 10             1     1      1       1       0
Mode 11             0     0      1       1       0
Mode 12             0     0      1       1       0
Mode 13             0     0      1       1       0
Mode 14             0     0      1       1       0
Mode 15             0     0      1       1       0
Mode 16             0     0      1       1       0
Mode 17             0     0      1       1       0
Mode 18             0     0      1       1       0
Mode 19             0     0      0       1       0
Mode 20             0     0      0       1       0
Mode 21             0     0      0       1       0
Mode 22             0     0      0       1       0
Mode 23             0     0      0       1       0
Mode 24             0     0      0       1       0
Mode 25             0     0      0       1       0
Mode 26             0     0      0       1       0
Mode 27             0     0      0       1       0
Mode 28             0     0      0       1       0
Mode 29             0     0      0       0       0
Mode 30             0     0      0       0       0
Mode 31             0     0      0       0       0
Mode 32             0     0      0       0       0
Mode 33             0     0      0       0       0
Mode 34             0     0      0       0       0

Referring to Table 5, the 3-tap AIS filter of [1, 2, 1] may be applied according to the prediction mode. In Table 5, an entry of 1 indicates that the AIS filter is applied and an entry of 0 indicates that it is not, and in the DC mode the newly defined AIS filters described later may be used.

FIG. 18 is a diagram illustrating the reference pixels of a current prediction unit to which an AIS filter is applied, and the types of AIS filter applied in the DC mode.

Referring to FIG. 18, when the size of the current prediction unit is 8x8, the pixels 1800 above and above-right, the pixel 1810 above-left, and the pixels 1820 to the left and below-left of the current prediction unit may serve as reference pixels for intra prediction of the current prediction unit, and an AIS filter may be applied to these reference pixels. At the bottom of FIG. 18, the new AIS filter A, filter B, and filter C used for AIS filtering when generating a prediction block in the DC mode are shown. If the prediction mode is not the DC mode, the 3-tap AIS filter of [1, 2, 1] may be applied instead of filter A, filter B, or filter C.

As can be seen from Table 5, if the prediction block is generated in mode 1 or mode 2, the AIS filter is not used. If the prediction block of an 8x8 prediction unit is generated in mode 3 (the DC mode), the reference pixels may be AIS-filtered using filter B. If the prediction block is generated in mode 4, the reference pixels are AIS-filtered using the 3-tap AIS filter of [1, 2, 1].
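As a concrete illustration of the [1, 2, 1] filter, the sketch below smooths a row of reference pixels; leaving the two end pixels unfiltered and rounding with an offset of 2 are assumptions of this sketch, not details given in the text.

    def ais_filter_121(ref):
        """Apply the 3-tap [1, 2, 1] smoothing filter to a list of
        reference pixels; end pixels are left unfiltered (assumption)."""
        out = list(ref)
        for i in range(1, len(ref) - 1):
            out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) >> 2  # rounded /4
        return out

    print(ais_filter_121([10, 10, 40, 40, 40]))  # -> [10, 18, 33, 40, 40]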

Next, a prediction block is generated (S1810). The prediction block may be generated from the reference pixels according to the distance between each reference pixel and the pixel to be predicted, using reference pixel interpolation. FIG. 19 illustrates a method of generating a prediction block from reference pixels according to the intra prediction mode. Referring to FIG. 19, when the prediction direction follows the arrow, the value of the first pixel 1930, located at the lower right of the current block, may be filled with the value of the first reference pixel 1910, located toward the right in the row of reference pixels at the top of the current block. The value of the second pixel 1900 included in the current prediction unit may be calculated by interpolating the pixel values of the first reference pixel 1910 and the second reference pixel 1920 in units of 1/8 pixel. For example, since the second pixel 1900 is positioned two rows above the first pixel 1930, it does not simply take the same value as the first reference pixel 1910; instead, the first reference pixel 1910 and the second reference pixel 1920 are interpolated in units of 1/8 pixel, and the second pixel 1900 may take the pixel value obtained by interpolating the pixel values of the first reference pixel 1910 and the second reference pixel 1920 at a ratio of 6 to 2.
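The 1/8-pixel interpolation can be sketched as follows; the rounding offset is an assumption of this sketch, while the 6-to-2 weighting follows the example above.

    def interp_eighth_pel(ref_a, ref_b, frac):
        """Interpolate between two reference pixels in 1/8-pixel units.
        frac is the distance from ref_a in eighths (0..8); frac = 2
        gives the 6-to-2 weighting mentioned for the second pixel 1900."""
        return (ref_a * (8 - frac) + ref_b * frac + 4) >> 3  # rounded /8

    # 6:2 split between the first and second reference pixels
    print(interp_eighth_pel(100, 60, 2))  # -> (100*6 + 60*2 + 4) >> 3 = 90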

Meanwhile, when the prediction mode is the DC mode, filtering may be performed using a 2-tap filter or a 3-tap filter (S1820). In the DC mode, the difference between the reference pixel values and the predicted pixel values inside the current prediction unit can be large, so a filter may be applied across the reference pixels and some pixels of the current prediction unit.

FIG. 20 illustrates a method of generating a prediction block when the prediction mode is the DC mode.

Referring to FIG. 20, when the prediction block is generated in the DC mode, the upper prediction pixels 2000, the left prediction pixels 2010, and the upper-left prediction pixel 2020 of the prediction block may be filtered together with the reference pixels of the current prediction unit using the 2-tap filter 2030 or the 3-tap filter 2040. The 2-tap filter 2030 and the 3-tap filter 2040 may also be referred to as DC filters. FIG. 20 discloses a 2-tap filter 2030 and a 3-tap filter 2040 that may be used when the prediction mode is the DC mode. The 2-tap filter 2030 is used for the upper prediction pixels 2000 and the left prediction pixels 2010 of the current prediction block, and the 3-tap filter 2040 may be used for the upper-left prediction pixel 2020 of the current block.
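A sketch of this DC-mode boundary filtering is given below. The actual tap weights appear only in FIG. 20, so the [3, 1] 2-tap and [2, 1, 1] 3-tap weights used here are assumptions for illustration.

    def dc_boundary_filter(pred, top_ref, left_ref):
        """Filter the top row, left column, and top-left corner of a DC
        prediction block against the reference pixels. Tap weights are
        assumptions, not taken from the patent."""
        n = len(pred)
        dc = pred[0][0]  # in DC mode every prediction pixel holds the DC value
        # 3-tap filter on the upper-left prediction pixel
        pred[0][0] = (2 * dc + top_ref[0] + left_ref[0] + 2) >> 2
        # 2-tap filter on the rest of the top row and left column
        for x in range(1, n):
            pred[0][x] = (3 * dc + top_ref[x] + 2) >> 2
        for y in range(1, n):
            pred[y][0] = (3 * dc + left_ref[y] + 2) >> 2
        return pred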

FIG. 21 is a diagram illustrating the configuration of the intra prediction unit of a video encoding apparatus.

Referring to FIG. 21, the intra prediction unit 930 includes an AIS filter 2110, a reference pixel interpolator 2120, and a DC filter 2130.

The AIS filter 2110 performs AIS filtering on the reference pixels of the current prediction unit. Depending on the prediction mode, the reference pixels may be AIS-filtered using either the 3-tap AIS filter of [1, 2, 1] or, when generating a prediction block in the DC mode, filter A, filter B, or filter C shown in FIG. 18. The reference pixel interpolator 2120 may interpolate the reference pixels in units of 1/8 pixel to predict the current block according to the prediction mode. The DC filter 2130 performs filtering when the prediction mode of the current prediction unit is the DC mode, using the 2-tap filter or the 3-tap filter shown in FIG. 20.

The prediction mode determiner 940 selects, using a predetermined cost function, the prediction block with the smallest cost from among the prediction blocks generated by the inter prediction unit 920 and the intra prediction unit 930. For example, the prediction mode determiner 940 may select the prediction unit with the minimum rate-distortion cost (RD cost) from among the prediction units generated by the prediction unit generator 910 based on a coding unit, and determine it as the prediction unit of the current division unit. In addition, the prediction unit 900 may generate an optimal prediction unit based on the coding unit provided from the picture dividing unit 705. Since one maximum coding unit may be divided in various ways, a plurality of coding unit combinations may be input from the picture dividing unit 705 to the predictor 710 even for a single block. Accordingly, the predictor 710 may generate an optimal prediction unit for each of the various coding unit combinations. The various coding unit combinations generated by the picture splitter 705 and the predictor 710, together with information on the corresponding prediction unit combinations, are input to the controller 760.

The prediction unit 710 generates a prediction block. The generated prediction block is used to produce a residual block, which consists of the residual values, that is, the differences between the prediction block and the original block. The generated residual block is input to the converter 715. In addition, information related to the generation of the prediction blocks, such as prediction mode information, prediction unit division information, and motion vector information generated by the prediction unit 710, is provided to the entropy coding unit 730 to be entropy-coded.

The transform unit 715 may transform the residual block, which contains the residual information between the original block and the prediction block generated by the prediction unit 710, using a discrete cosine transform (DCT) or a discrete sine transform (DST). The DST may be used selectively according to the prediction mode when transforming a 4x4 block coded with intra prediction; prediction blocks that are not 4x4 in size may be transformed using the DCT. If the current block is coded in the skip mode among the inter prediction modes, no residual value is transmitted to the image decoder, and thus the residual of the corresponding block need not be transformed.

The transform unit 715 may determine a transform unit (TU) based on the split coding unit and the prediction unit. In the following description, it is assumed that the transform unit is split within one coding unit.

FIG. 22 is a diagram illustrating a transform unit division method.

Referring to FIG. 22, whether a transform unit is divided may be expressed using predetermined flag information. For example, one coding unit may be divided into a plurality of transform units, as illustrated on the right side of FIG. 22, using the quadtree structure shown on the left side of FIG. 22. Here, the largest transform unit (LTU) may be 32x32 in size and the smallest transform unit (STU) may be 4x4 in size. In addition, one prediction unit may be used as a transform unit as it is, or one prediction unit may be divided into four transform units.
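The flag-driven quadtree division can be sketched as follows; the exact flag semantics and traversal order are assumptions of this sketch, with only the LTU/STU size bounds taken from the text.

    def parse_tu_tree(split_flags, size, largest=32, smallest=4):
        """Quadtree transform-unit splitting driven by flag bits.
        split_flags is an iterable of 0/1 flags; returns the TU sizes
        in depth-first order."""
        flags = iter(split_flags)
        def recurse(s):
            if s > largest:                 # forced split above the LTU size
                return [t for _ in range(4) for t in recurse(s // 2)]
            if s > smallest and next(flags) == 1:
                return [t for _ in range(4) for t in recurse(s // 2)]
            return [s]
        return recurse(size)

    # One 32x32 coding unit where only the first quadrant splits again:
    print(parse_tu_tree([1, 1, 0, 0, 0, 0, 0, 0, 0], 32))
    # -> [8, 8, 8, 8, 16, 16, 16]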

In addition, the converter 715 may generate the possible transform unit combinations for the current coding unit, calculate the cost of each combination with a predetermined cost function, and select the optimal transform unit. The transform-unit-related cost for one block may be delivered to the controller 760. Assuming a 64x64 maximum coding unit block, the controller 760 estimates the coding unit cost of each coding unit split combination for the 64x64 block, the prediction unit cost of the optimal prediction unit for each such split combination, and the transform unit cost required to transmit that optimal prediction unit, and determines the optimal division unit, prediction unit, and transform unit for the one 64x64 block by taking these costs into account.

FIG. 23 is a diagram illustrating a method of performing a transform on a transform unit having a size of 4x4.

Referring to FIG. 23, in the vertical mode, the DST may be applied as the vertical transform and the DCT as the horizontal transform to convert the pixel values of the current block into the frequency domain. In this way, the DCT or DST may be applied adaptively to a 4x4 block according to the intra prediction mode. Details of the adaptive transform selection according to the prediction mode are as described in Table 2.
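A sketch of the mode-dependent transform choice is shown below, using mode 1 as the vertical mode per the earlier description. SciPy's DCT-II/DST-II here stand in for whatever transform kernels the codec actually defines, which is an assumption of this sketch.

    import numpy as np
    from scipy.fftpack import dct, dst  # DCT-II / DST-II stand-ins

    def transform_4x4(block, intra_mode):
        """Mode-dependent transform for a 4x4 intra residual: DST
        vertically and DCT horizontally in the vertical mode, DCT in
        both directions otherwise."""
        VERTICAL_MODE = 1  # mode 1 is the vertical mode per the text
        if intra_mode == VERTICAL_MODE:
            tmp = dst(block, type=2, axis=0, norm='ortho')   # vertical DST
            return dct(tmp, type=2, axis=1, norm='ortho')    # horizontal DCT
        return dct(dct(block, type=2, axis=0, norm='ortho'),
                   type=2, axis=1, norm='ortho')

    residual = np.arange(16, dtype=float).reshape(4, 4)
    print(transform_4x4(residual, 1).round(2))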

The quantization unit 720 quantizes the values transformed into the frequency domain by the transformer 715. The quantization coefficient may change according to the block or the importance of the image. The values calculated by the quantization unit 720 are provided to the inverse quantization unit 735 and the reordering unit 725.

The reordering unit 725 rearranges the coefficient values to increase the efficiency of entropy encoding in the entropy encoder 730. The reordering unit 725 may change the two-dimensional block-form coefficients into a one-dimensional vector through a coefficient scanning method. In addition, the reordering unit 725 may increase the entropy coding efficiency of the entropy encoder 730 by changing the order of coefficient scanning based on probabilistic statistics of the coefficients provided from the quantizer 720.
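The 2-D-to-1-D coefficient scan can be sketched as below. The text does not fix a scan pattern (it may even be adapted to coefficient statistics), so the zig-zag order here is only one illustrative choice.

    def zigzag_scan(block):
        """Reorder a 2-D block of quantized coefficients into a 1-D
        vector along anti-diagonals (zig-zag order)."""
        n = len(block)
        order = sorted(((y, x) for y in range(n) for x in range(n)),
                       key=lambda p: (p[0] + p[1],            # diagonal index
                                      p[0] if (p[0] + p[1]) % 2 else p[1]))
        return [block[y][x] for y, x in order]

    blk = [[9, 8, 1, 0], [7, 3, 0, 0], [2, 0, 0, 0], [0, 0, 0, 0]]
    print(zigzag_scan(blk))  # -> [9, 8, 7, 2, 3, 1, 0, ...]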

The entropy encoder 730 performs entropy encoding based on the values calculated by the reordering unit 725. The entropy encoding may use a method such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), or Context-Adaptive Binary Arithmetic Coding (CABAC). The entropy encoder 730 may encode various kinds of information received from the reordering unit 725 and the prediction unit 710, such as residual coefficient information, block type information, prediction mode information, division unit information, prediction unit information, transform unit information, motion vector information, reference frame information, block interpolation information, and filtering information.

The entropy encoder 730 may store a table for performing entropy encoding, such as a variable length coding (VLC) table, and perform entropy encoding using the stored table. In performing entropy encoding, the codeword assigned to a code number may be adapted using a counter method or a direct swapping method. For example, for the top few code numbers to which short codewords are assigned, a counter may accumulate the number of occurrences of each code number, and the mapping between code numbers and codewords may be adaptively reordered so that the most frequently occurring code number is assigned the shortest codeword. When the count recorded in the counter reaches a predetermined threshold, the recorded counts may be halved and counting resumed. For code numbers in the table that are not counted, the direct swapping method may be used: when information corresponding to a code number occurs, that code number is swapped with the one immediately above it, reducing the number of bits allocated to it.
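The counter and direct-swapping adaptation can be sketched as follows; the class layout, the number of counted entries, and the threshold value are illustrative assumptions, not values from the patent.

    class AdaptiveVlcTable:
        """Counter-based and direct-swapping adaptation of a VLC table.
        A smaller position in 'order' means a shorter codeword."""
        def __init__(self, num_codes, counted=4, threshold=64):
            self.order = list(range(num_codes))  # position -> code number
            self.counts = [0] * num_codes        # per code number
            self.counted = counted               # top entries use the counter
            self.threshold = threshold

        def index_of(self, code):
            return self.order.index(code)

        def update(self, code):
            pos = self.index_of(code)
            if pos < self.counted:
                # Counter method: reorder the counted entries by frequency
                # so the most frequent code gets the shortest codeword.
                self.counts[code] += 1
                head = sorted(self.order[:self.counted],
                              key=lambda c: -self.counts[c])
                self.order[:self.counted] = head
                if self.counts[code] >= self.threshold:
                    # Halve all counts and resume counting.
                    self.counts = [c // 2 for c in self.counts]
            elif pos > 0:
                # Direct swapping: move the code one place up.
                self.order[pos - 1], self.order[pos] = (self.order[pos],
                                                        self.order[pos - 1])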

The inverse quantizer 735 inverse quantizes the values quantized by the quantizer 720, and the inverse transformer 740 inversely transforms the values converted by the transformer 715. The residual value generated by the inverse quantizer 735 and the inverse transformer 740 is combined with the prediction block predicted by the predictor 710 to generate a reconstructed block.

The deblocking filter 745 may remove block distortion caused by boundaries between blocks in the reconstructed picture. To decide whether to perform deblocking, whether the deblocking filter is applied to the current block may be determined based on the pixels included in several columns or rows of the block. When the deblocking filter is applied to the block, a strong filter or a weak filter may be used according to the required deblocking filtering strength. In addition, in applying the deblocking filter 745, vertical filtering and horizontal filtering may be processed in parallel.

FIG. 24 is a diagram illustrating a method of applying the deblocking filter.

Referring to FIG. 24, the controller 760 determines whether to apply the deblocking filter based on predetermined columns and rows (S2400). To determine whether to apply the deblocking filter to a block boundary, the decision may be based on six pixel values taken from the third and sixth columns and the third and sixth rows (the column and row symbols appear only as images in the original), using Equation 1 below.

[Equation 1 appears only as an image in the original publication.]

If Equation 1 is satisfied, that is, if the expression on its left side is smaller than the quantity on its right side, the deblocking filter may be applied to the current block.
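Since Equation 1 survives only as an image in this text, the sketch below assumes the familiar second-difference activity test used in comparable codecs; the pixel labels p2..p0/q0..q2 and the threshold beta are assumptions of that reading, not values from the patent.

    def deblock_decision(p, q, beta):
        """Assumed boundary decision: filter only when the local
        second-difference activity on both sides of the boundary
        (p2..p0 | q0..q2) is below the threshold beta."""
        p2, p1, p0 = p
        q0, q1, q2 = q
        d = abs(p2 - 2 * p1 + p0) + abs(q2 - 2 * q1 + q0)
        return d < beta

    print(deblock_decision((60, 61, 62), (70, 70, 71), beta=8))  # -> True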

Next, the controller 760 may determine the strength of the filtering to be applied to each column and each row (S2410). In applying the deblocking filter 745, deblocking filtering may be performed by applying at least one of a strong filter and a weak filter. Equation 2 below determines whether strong filtering or weak filtering is used.

[Equation 2 appears only as an image in the original publication.]

If Equation 2 is satisfied, the strong filter is applied to perform strong deblocking filtering; if Equation 2 is not satisfied, the weak filter is applied to perform weak deblocking filtering. Which deblocking filter to apply may be decided for each of columns 1 through 8 and rows 1 through 8 of the block (the column and row symbols appear only as images in the original).

The steps S2400 and S2410 may be performed by the deblocking filter 745 instead of the controller 760.

Next, vertical deblocking filtering and horizontal deblocking filtering are performed by the deblocking filter 745 (S2420). Horizontal deblocking filtering is applied to the rows and vertical deblocking filtering to the columns at the block boundary (the pixel ranges appear only as images in the original). When deblocking is performed based on the 8 pixels located at each block boundary, there are 16 pixels on which both horizontal and vertical deblocking filtering would act; so that horizontal and vertical deblocking filtering can be processed in parallel, only one of the two filterings is performed on those pixels. That is, vertical deblocking filtering and horizontal deblocking filtering are both performed on the block, but the filtering applied to the overlapping portion is only one of the vertical deblocking and the horizontal deblocking.

Next, the deblocking filter 745 performs deblocking filtering on the portion where the vertical deblocking filtering and the horizontal deblocking filtering overlap (S2430). In the overlapping region (identified only by an image in the original), vertical or horizontal deblocking filtering is not performed in step S2420. Through this deblocking filtering process for such a block, parallel processing becomes possible when the deblocking filtering is performed.

The ALF (adaptive loop filter) 750 filters the block that has passed through the deblocking filter 745, performing filtering based on a comparison of the reconstructed image with the original image. The ALF 750 may calculate one Laplacian activity value per 4x4 block, so that one ALF is applied to all pixels included in the 4x4 block. Information on whether the ALF 750 is applied to the luminance signal may be transmitted for each coding unit (CU), and the size and coefficients of the applied ALF may vary from block to block. The ALF 750 may be a diamond-shaped filter with 5, 7, or 9 taps. The filtering-related information of the ALF 750 (filter coefficient information, ALF on/off information, and filter type information) may be included in each slice header of the bitstream and transmitted to the image decoder according to the present invention. For the chrominance signal, the ALF may be applied on a per-picture basis, and a rectangular ALF may be applied.

FIG. 25 is a diagram illustrating a method of using the ALF.

Referring to FIG. 25, one Laplacian activity value is calculated per block based on a Laplacian-based activity metric (S2500). One Laplacian activity value may be calculated per 4x4 block by averaging the Laplacian activity values of the pixels included in the 4x4 block. The Laplacian activity value of a pixel may be calculated through Equation 3 below.

[Equation 3 appears only as an image in the original publication.]

Next, directional information is calculated based on the pixels included in the 4x4 block (S2510). The directional information may be calculated based on Equations 4 and 5 below.

[Equations 4 and 5 appear only as images in the original publication.]

Equation 4 gives the vertical activity value and the horizontal activity value of a 4x4 block. In practice, these values may be obtained in the course of classifying the pixels into a plurality of groups based on Equation 3, without being computed separately. Equation 5 calculates the directional information from a predetermined threshold and the vertical and horizontal activity values of the block calculated by Equation 4. As shown in Equation 5, the directional information may have a value of 0, 1, or 2.
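Since Equations 3 to 5 appear only as images, the sketch below models steps S2500 and S2510 on the common ALF formulation: a per-pixel Laplacian averaged over the 4x4 block, plus a thresholded comparison of vertical and horizontal activity. The Laplacian form and the threshold t are assumptions, not values from the patent.

    import numpy as np

    def block_activity(r):
        """One Laplacian activity value and one direction (0, 1, or 2)
        for a 4x4 block of reconstructed pixels r, padded to 6x6 so
        every pixel has four neighbours."""
        v = np.abs(2 * r[1:-1, 1:-1] - r[:-2, 1:-1] - r[2:, 1:-1])   # vertical
        h = np.abs(2 * r[1:-1, 1:-1] - r[1:-1, :-2] - r[1:-1, 2:])   # horizontal
        activity = (v + h).mean()            # one activity value per block
        va, ha, t = v.sum(), h.sum(), 2.0    # t: assumed direction threshold
        if va > t * ha:
            direction = 1                    # strong vertical structure
        elif ha > t * va:
            direction = 2                    # strong horizontal structure
        else:
            direction = 0
        return activity, direction

    blk = np.tile(np.arange(6.0), (6, 1))    # horizontal ramp
    print(block_activity(blk))               # -> (0.0, 0): a ramp has zero Laplacian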

Next, an activity metric for the block is calculated based on the Laplacian activity metric and the directional information (S2520). In this step, a new activity metric is calculated by combining the value calculated per 4x4 block using the Laplacian activity metric in step S2500 with the directional information calculated in step S2510. The new per-block activity metric may be calculated based on Equation 6 below.

[Equation 6 appears only as an image in the original publication.]

Based on the calculated activity metric, one ALF may be applied in units of 4 × 4 blocks to filter the blocks.

Next, the coefficients of the ALF filter are calculated (S2530). The ALF filter coefficients may be calculated based on Equation 7 below.

[Equation 7 appears only as an image in the original publication.]

Here, the first quantity in Equation 7 (shown only as an image in the original) denotes the cross-correlation matrix of the original pixels and the reconstructed pixels, the second denotes the auto-correlation matrix of the reconstructed pixels, the third is the number of filter coefficients, and the fourth is the vector of filter coefficients; one set of filtering coefficients may be calculated for each block classification.

[Equation 8 appears only as an image in the original publication.]

In the process of calculating the filtering coefficients of the ALF, some of the block classes previously separated into a plurality of classes may be merged into a single class.
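Equation 7 also survives only as an image, but the surrounding description (cross-correlation of original and reconstruction, auto-correlation of the reconstruction) matches the usual Wiener-Hopf solution c = R^-1 p, which the sketch below assumes.

    import numpy as np

    def alf_coefficients(p_cross, r_auto):
        """Solve for the ALF filter coefficient vector c from the
        cross-correlation vector p_cross (original vs. reconstruction)
        and the auto-correlation matrix r_auto of the reconstruction;
        the Wiener-Hopf form is an assumed reading of Equation 7."""
        return np.linalg.solve(r_auto, p_cross)

    # Toy 3-coefficient example with a well-conditioned autocorrelation
    R = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
    p = np.array([1.0, 2.0, 1.0])
    print(alf_coefficients(p, R))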

Next, the coefficients of the ALF are encoded (S2540). The coefficients of the first ALF may be encoded by exponential Golomb coding. When a plurality of ALFs are used in one coding unit, the coefficients of the remaining ALFs may be encoded directly or predictively with an exponential Golomb code, depending on the number of bits required to transmit the coefficients.

Next, an ALF control map is determined (S2550). Whether to apply the ALF may be determined based on Equation 9 below.

[Equation 9 appears only as an image in the original publication.]

When the upper expression of Equation 9 is satisfied, the ALF may be applied; otherwise, the ALF is not applied.

The ALF may be applied per coding unit (CU), and the optimal application of the ALF may be determined based on the depth information of the quadtree structure that defines the coding units.

The optimal depth in the quadtree structure may be determined by comparing RD cost values calculated based on Equation 10 below.

[Equation 10 appears only as an image in the original publication.]

In Equation 10, the first term (shown only as an image in the original) denotes the number of bits used to transmit the ALF application information in the coding units of one slice. In addition, the number of taps of the ALF filter to be applied to the block may be determined based on the RD cost calculated by Equation 10.

In the ALF method described with reference to FIG. 25, each step is described as being performed by the controller 760, but it may instead be performed by the ALF 750 or by another component.

The reconstructed block or picture calculated through the deblocking filter 745 and the ALF 750 is stored in the memory 755, and the stored reconstructed block or picture is provided to the predictor 710 when performing inter prediction.

The controller 760 controls each component of the image encoding apparatus 700. Instead of having the picture division unit 705, the prediction unit 710, and the conversion unit 715 each determine an optimal division unit, prediction unit, and transform unit independently, the controller 760 may calculate the optimal coding unit, prediction unit, and transform unit for the current block based on a predetermined cost function. For example, the controller 760 may calculate, based on predetermined cost functions, the division cost for the division units provided from the picture division unit 705, the prediction cost for the prediction units provided from the prediction unit 710, and the transform cost for the transform units provided from the conversion unit 715, and thereby determine the division unit, prediction unit, and transform unit for one block. The cost functions used to calculate the optimal division unit, prediction unit, and transform unit may differ from one another. In addition, the controller 760 may be connected to the quantization unit 720, the reordering unit 725, and the entropy encoding unit 730 to control the encoding of the image encoder 700.

The image decoding apparatus 100 according to the present invention described with reference to FIGS. 1 to 5 reconstructs the residual block in transform units (TU) in the inverse transform unit 125, and generates the prediction block in prediction units (PU) in the prediction unit 130. However, since the image encoding apparatus performs encoding in coding units (CU), it is preferable that the image decoding apparatus 100 also performs decoding in the same coding units.

In this case, the prediction unit 130 of the image decoding apparatus 100 according to the present invention generates a prediction block in CU units based on the CU information and the PU information received from the image encoding apparatus 700, and the inverse transformer 125 generates a residual block in CU units based on the CU information and the TU information received from the image encoding apparatus 700. That is, the prediction unit 130 of the image decoding apparatus 100 generates the prediction block corresponding to the PU of each CU based on the CU tree information and the PU tree information received from the image encoding apparatus 700, and the inverse transformer 125 of the image decoding apparatus 100 generates the residual block corresponding to the TU of each CU based on the CU tree information and the TU tree information received from the image encoding apparatus 700. The prediction block and the residual block generated in CU units are then added to each other to generate a reconstruction block. In this case, when the CU is coded in the intra prediction mode, the prediction unit 130 decodes the intra prediction mode and generates a prediction block in PU units, and when the CU is coded in the inter prediction mode, the prediction unit 130 decodes the motion information and then generates the prediction block.

If reconstruction is not performed in CU units, additional PU information is required when generating the residual block based on the TU tree information, which increases the complexity of the entire apparatus. In addition, when the size of the PU is 2NxN and the size of the TU is 2Nx2N, a size mismatch exists between the two, so it is desirable that the prediction unit 130 generate the prediction block and the inverse transform unit 125 generate the residual block in CU units.

On the other hand, when the image decoding apparatus 100 according to the present invention performs intra prediction, the information about the block to be decoded that is needed to generate its prediction block may have been lost. In this case, the predictor 130 of the image decoding apparatus 100 according to the present invention may generate the prediction block based on the pixel values of the reconstructed blocks located at the upper left, upper, and left sides of the block to be decoded, among the reconstructed blocks already restored and stored in the memory 145.

FIG. 26 is a diagram illustrating a method of generating a prediction block by the prediction unit 130 of the image decoding apparatus 100 according to the present invention.

Referring to FIG. 26, the predictor 130 of the image decoding apparatus 100 according to the present invention determines the pixel value of the pixel 2640 located at the lower right of the block to be decoded based on the first pixel 2610, located at the lower right among the pixels of the reconstructed block at the upper left of the block to be decoded, the second pixel 2620, located at the lower right among the pixels of the reconstructed block above, and/or the third pixel 2630, located at the lower right among the pixels of the reconstructed block on the left. The pixel value of the pixel 2640 located at the lower right of the block to be decoded may be determined by Equation 11 below.

[Equation 11 appears only as an image in the original publication.]

Here, a is the pixel 2610 located at the lower right of the reconstructed block at the upper left of the block to be decoded, b is the pixel 2620 located at the lower right of the reconstructed block above the block to be decoded, c is the pixel 2630 located at the lower right of the reconstructed block to the left of the block to be decoded, d is the pixel 2640 located at the lower right of the block to be decoded, Δx is the difference between the pixel values of a and b, Δy is the difference between the pixel values of a and c, n is the number of pixels located between a and b, and m is the number of pixels located between a and c.

In addition, the prediction unit 130 of the image decoding apparatus 100 according to the present invention determines the pixel values of the diagonal pixels, which are the pixels located on the line running from the upper left toward the lower right of the block to be decoded, by linearly interpolating the pixel a and the pixel d, and determines the pixel values of the remaining pixels based on the pixel values of the diagonal pixels.
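A sketch of this concealment step is given below. Because Equation 11 appears only as an image, the form d = a + Δx + Δy (extending a by its horizontal and vertical differences) is one plausible reading and is an assumption, as is the integer interpolation along the diagonal; the counts n and m from the text would plausibly scale these differences in the real equation and are omitted here.

    def conceal_corner_and_diagonal(a, b, c, size):
        """Assumed reading of Equation 11: extend pixel a by its
        horizontal difference dx = b - a and vertical difference
        dy = c - a to estimate the lower-right pixel d, then fill the
        diagonal from a to d by linear interpolation."""
        dx, dy = b - a, c - a
        d = a + dx + dy                  # assumed form of Equation 11
        # Diagonal pixels on the straight line from a to d; the other
        # pixels of the block would then be filled from these values.
        diag = [a + (d - a) * (i + 1) // (size + 1) for i in range(size)]
        return d, diag

    print(conceal_corner_and_diagonal(a=50, b=70, c=40, size=4))
    # -> (60, [52, 54, 56, 58])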

Meanwhile, the image encoding apparatus 700 illustrated in FIG. 7 may slice a picture in CU units rather than in macroblock units. In that case, CUs belonging to the same macroblock may belong to different slices, and some of the CUs located at slice boundaries can use the pixel values of only some of the reconstructed blocks located at their upper left, upper, left, lower left, and upper right sides. Therefore, the prediction unit 130 of the image decoding apparatus 100 according to the present invention may generate the prediction block corresponding to the block to be decoded by interpolating the pixels located at the boundaries of the remaining required reconstructed blocks based on the pixel values of the available reconstructed blocks.

FIG. 27 is a diagram illustrating a method in which the prediction unit 130 of the image decoding apparatus 100 according to the present invention interpolates the pixel values located at the boundaries of the required reconstructed blocks when CUs belonging to the same macroblock belong to different slices.

Referring to FIG. 27, the block with reference numeral 2710 among the blocks to be decoded can use the reconstructed blocks located at its left and upper right, and the block with reference numeral 2720 can use the reconstructed block located at its upper right. Therefore, when the prediction unit 130 of the image decoding apparatus 100 according to the present invention generates the prediction block for the block with reference numeral 2710, the pixel values of the pixels at the boundary of the unavailable upper reconstructed block may be obtained by linear interpolation between the pixel 2712 located at the upper right of the reconstructed block on the left of the block and the pixel 2714 located at the lower left of the reconstructed block on the upper right. In addition, when the prediction unit 130 of the image decoding apparatus 100 according to the present invention generates the prediction block for the block with reference numeral 2720, the pixel value of the pixel 2722 located at the lower left of the reconstructed block on the upper right of the block is copied to the pixels located at the boundaries of the upper and left blocks, and then used to generate the prediction block for the block with reference numeral 2720.

There may also be cases where some of the pixels located at the upper, left, lower-left, and upper-right boundaries of the block to be reconstructed are not available. FIG. 28 illustrates a method of determining the boundary pixel values of the blocks located above the block to be decoded when the pixel values of some blocks (P and B blocks) located above it cannot be used. Referring to FIG. 28, the reconstructed block on the left among the reconstructed blocks located above the block 2810 to be decoded corresponds to an I picture, so the value of the pixel 2812 located at its lower right can be used. Since the block located at the upper right of the block 2810 to be decoded also corresponds to an I picture, the value of the pixel 2814 located at its lower left can be used. Therefore, the prediction unit 130 of the image decoding apparatus 100 according to the present invention linearly interpolates the pixel values of the two pixels 2812 and 2814 to determine the pixel values of the pixels located at the lower boundary of the blocks above, and generates the prediction block for the block with reference numeral 2810.

In the above description, terms such as 'first' and 'second' are used to describe various components, but the components should not be limited by these terms; such terms are used only to distinguish one component from another. For example, a 'first component' may be referred to as a 'second component' without departing from the scope of the present invention, and similarly the 'second component' may be referred to as the 'first component'. In addition, the term 'and/or' is used to mean a combination of a plurality of related items or any one of a plurality of related items.

In addition, the components shown in each drawing are shown independently to represent distinct characteristic functions in the image decoding apparatus or the image encoding apparatus; this does not mean that each component consists of separate hardware or a single software unit. That is, the components are listed separately for convenience of description, and at least two components may be combined into one component, or one component may be divided into a plurality of components that together perform the same functions. Embodiments in which these components are integrated and embodiments in which they are separated are both included in the scope of the present invention without departing from its spirit.

In addition, some components may not be essential for performing the essential functions of the present invention but may be optional components for improving performance. The present invention can be implemented with only the components essential for realizing its essence, excluding the components used merely to improve performance, and a structure that includes only these essential components, excluding the optional performance-improving components, is also included in the scope of the present invention.

Meanwhile, when a component is said to be 'connected' or 'coupled' to another component, it may be directly connected or coupled to the other component, but other components may also be present in between. In contrast, when a component is said to be 'directly connected' or 'directly coupled' to another component, it should be understood that no other component exists in between.

Also, the terminology used herein is for the purpose of describing particular example embodiments only and is not intended to limit the present invention. The singular forms include the plural forms unless the context clearly indicates otherwise. Furthermore, the terms 'comprise', 'have', and 'include' in this specification designate the presence of the stated features, numbers, steps, operations, components, parts, or combinations thereof, and should be understood not to preclude the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.

The invention can also be embodied as computer-readable code on a computer-readable recording medium. The computer-readable recording medium includes all kinds of recording devices in which data readable by a computer system is stored. Examples of computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage, and the medium may also be implemented in the form of a carrier wave (for example, transmission over the Internet). The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.

Although the preferred embodiments of the present invention have been shown and described above, the present invention is not limited to the specific embodiments described, and various modifications can be made by those skilled in the art to which the present invention belongs without departing from the gist of the present invention as claimed in the claims; such modifications fall within the scope of the claims.

Claims (1)

An entropy decoder configured to decode the bit stream received from the image encoding apparatus and to generate information for generating a prediction block, a residual value expressed in the form of a one-dimensional vector, and a quantization parameter;
A rearranging unit for rearranging the residual values and restoring the coefficients in a two-dimensional block form;
An inverse quantization unit which inversely quantizes the coefficient of the two-dimensional block form based on the quantization parameter;
An inverse transform unit for generating a residual block by inversely transforming the coefficient of the two-dimensional block form inversely quantized by the inverse quantization unit;
An intra prediction unit for decoding an intra prediction mode of a current block to be decoded based on the information for generating the prediction block, and generating a prediction block according to the intra prediction mode; And
And an adder for reconstructing an original block based on the residual block and the prediction block.
When the information for generating the prediction block is lost, the intra prediction unit generates the pixel located at the lower right of the current block by the following equation,
[The equation appears only as an image in the original publication.]

Here, a is the pixel located at the lower right of the reconstructed block adjacent to the upper left of the current block, b is the pixel located at the lower right of the reconstructed block located above the current block, c is the pixel located at the lower right of the reconstructed block located on the left of the current block, d is the pixel located at the lower right of the current block, Δx is the difference between the pixel values of a and b, Δy is the difference between the pixel values of a and c, n is the number of pixels located between a and b, and m is the number of pixels located between a and c,
and the intra prediction unit linearly interpolates the pixel value of the pixel located at the lower right of the reconstructed block adjacent to the upper left of the current block and the pixel value of the pixel located at the lower right of the current block to determine the pixel values of the diagonal pixels, which are the pixels located on the straight line connecting the upper left of the current block to its lower right, and determines the pixel values of the remaining pixels of the current block based on the pixel values of the diagonal pixels.
KR1020110065220A 2011-07-01 2011-07-01 Apparatus and method for decoding image KR102055451B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110065220A KR102055451B1 (en) 2011-07-01 2011-07-01 Apparatus and method for decoding image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020110065220A KR102055451B1 (en) 2011-07-01 2011-07-01 Apparatus and method for decoding image

Related Child Applications (1)

Application Number Title Priority Date Filing Date
KR1020180027711A Division KR101978090B1 (en) 2018-03-08 2018-03-08 Apparatus and method for decoding an image

Publications (2)

Publication Number Publication Date
KR20130063044A KR20130063044A (en) 2013-06-14
KR102055451B1 true KR102055451B1 (en) 2019-12-13

Family

ID=48860498

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110065220A KR102055451B1 (en) 2011-07-01 2011-07-01 Apparatus and method for decoding image

Country Status (1)

Country Link
KR (1) KR102055451B1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101529650B1 (en) * 2013-07-02 2015-06-19 성균관대학교산학협력단 Selective transform method and apparatus, inverse transform method and apparatus for video coding
KR20180058224A (en) * 2015-10-22 2018-05-31 엘지전자 주식회사 Modeling-based image decoding method and apparatus in video coding system
CN116866563A (en) * 2018-03-21 2023-10-10 Lx 半导体科技有限公司 Image encoding/decoding method, storage medium, and image data transmission method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2003248913A1 (en) * 2003-01-10 2004-08-10 Thomson Licensing S.A. Defining interpolation filters for error concealment in a coded image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yongbing Lin, "Simplified Planar Intra Prediction", version 3, JCTVC-E289, 19 March 2011.*
Dong-Hyung Kim et al., "A Low-Complexity Spatial Error Concealment Technique for H.264 Intra Frames," Journal of the Korean Institute of Communications Sciences, vol. 31, no. 5C, 31 May 2006.*

Also Published As

Publication number Publication date
KR20130063044A (en) 2013-06-14

Similar Documents

Publication Publication Date Title
KR102410032B1 (en) Method and apparatus for processing a video signal
KR102416257B1 (en) Method and apparatus for processing a video signal
KR102383104B1 (en) Method and apparatus for processing a video signal
KR102424419B1 (en) Method and apparatus for processing a video signal
KR102383105B1 (en) Method and apparatus for processing a video signal
KR102555352B1 (en) Intra prediction method and encoding apparatus and decoding apparatus using same
KR102350988B1 (en) Intra-prediction method, and encoder and decoder using same
KR102383106B1 (en) Method and apparatus for processing a video signal
WO2017190288A1 (en) Intra-picture prediction using non-adjacent reference lines of sample values
WO2020227405A1 (en) Clipping prediction samples in matrix intra prediction mode
KR20180126382A (en) Method and apparatus for processing a video signal
KR102424420B1 (en) Method and apparatus for processing a video signal
KR102435000B1 (en) Method and apparatus for processing a video signal
KR102539354B1 (en) Method for processing image based on intra prediction mode and apparatus therefor
KR20180126384A (en) Method and apparatus for processing a video signal
KR20200028860A (en) Method and apparatus for encoding/decoding an image using intra prediction
KR20180031615A (en) Method and apparatus for processing a video signal
WO2012138032A1 (en) Method for encoding and decoding image information
KR20220003124A (en) Video coding method and apparatus using adaptive parameter set
KR102431287B1 (en) Method and apparatus for encoding/decoding a video signal
KR102055451B1 (en) Apparatus and method for decoding image
KR102124089B1 (en) Apparatus and method for decoding an image
US11343496B2 (en) Method and apparatus for encoding/decoding an image based on in-loop filter
KR102103100B1 (en) Apparatus and method for decoding an image
KR20210153547A (en) method and apparatus for encoding/decoding a VIDEO SIGNAL, and a recording medium storing a bitstream

Legal Events

Date Code Title Description
N231 Notification of change of applicant
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application
A107 Divisional application of patent
J201 Request for trial against refusal decision
J301 Trial decision

Free format text: TRIAL NUMBER: 2018101001086; TRIAL DECISION FOR APPEAL AGAINST DECISION TO DECLINE REFUSAL REQUESTED 20180309

Effective date: 20190626

S901 Examination by remand of revocation
E902 Notification of reason for refusal
GRNO Decision to grant (after opposition)