KR102055451B1 - Apparatus and method for decoding image - Google Patents
Apparatus and method for decoding image
- Publication number
- KR102055451B1 (application number KR1020110065220A)
- Authority
- KR
- South Korea
- Prior art keywords
- block
- prediction
- unit
- mode
- pixel
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
Abstract
An image decoding apparatus and method are disclosed. The image decoding apparatus according to the present invention performs reconstruction in units of coding units and, when a pixel value located at the boundary of a neighboring block cannot be used while generating a prediction block, generates the prediction block by linearly interpolating the available pixels. In addition, when the information of a block to be decoded is not available, the pixel values of the pixels lying on the line from the upper left to the lower right of that block are determined by linearly interpolating the pixel values of pixels located at the boundaries of the adjacent blocks, and the pixel values of the remaining pixels are then determined from them.
Description
The present invention relates to an apparatus and a method for decoding an image, and more particularly, to an apparatus and a method for decoding an image that has been encoded in coding units set in advance by an image encoding apparatus.
In the field of image compression and reconstruction, efforts have continuously been made, from MPEG through H.264, to improve the compression ratio of images and to reduce system complexity. In particular, as video compression technology has been combined with communication technology, there is a growing demand for techniques that reduce the amount of data while still allowing the original video to be restored faithfully. To meet these demands, more advanced image compression technologies are being studied, and a new image compression standard has recently been discussed under the name HEVC.
SUMMARY OF THE INVENTION The present invention has been made in an effort to provide an image decoding apparatus and method capable of accurately decoding an image while reducing the complexity of the overall apparatus.
In order to achieve the above object, an image decoding apparatus according to the present invention includes: an entropy decoding unit which decodes a bitstream received from an image encoding apparatus and reconstructs information for generating a prediction block, residual values expressed in one-dimensional vector form, and a quantization parameter; a rearranging unit which rearranges the residual values and reconstructs coefficients in two-dimensional block form; an inverse quantization unit which inversely quantizes the coefficients in two-dimensional block form based on the quantization parameter; an inverse transform unit which generates a residual block by inversely transforming the coefficients inversely quantized by the inverse quantization unit; an intra prediction unit which decodes the intra prediction mode of a current block to be decoded based on the information for generating the prediction block and generates a prediction block according to that intra prediction mode; and an adder which reconstructs an original block based on the residual block and the prediction block. When the information for generating the prediction block is not available, the intra prediction unit linearly interpolates pixel values of pixels located at the boundaries of blocks adjacent to the current block to determine the pixel values of the pixels lying on the straight line connecting the upper left to the lower right of the current block, and then determines the pixel values of the remaining pixels of the current block by linearly interpolating between the pixel values on that straight line and the pixel values of the pixels located at the boundaries of the adjacent blocks.
According to the image decoding apparatus and method according to the present invention, it is possible to accurately decode the image while reducing the complexity of the overall apparatus.
1 is a block diagram showing the configuration of a preferred embodiment of an image decoding apparatus according to the present invention;
2 is a diagram illustrating a configuration of a prediction unit of a preferred embodiment of an image decoding apparatus according to the present invention;
3 is a diagram illustrating an inter-screen prediction unit of a preferred embodiment of an image decoding apparatus according to the present invention;
4 is a flowchart illustrating a process of performing motion compensation based on a prediction unit in the motion compensation unit 320 according to an exemplary embodiment of the image decoding apparatus according to the present invention;
5 is a diagram illustrating a configuration of an intra prediction unit of a preferred embodiment of an image decoding apparatus according to the present invention;
6 shows examples of ALF 140;
7 is a diagram illustrating a configuration of a preferred embodiment of an image encoding apparatus 700;
8 is a diagram illustrating a process of dividing a maximum coding unit into at least one coding unit;
9 is a diagram showing the detailed configuration of the
10 is a diagram illustrating a method of generating a prediction unit by the
11 is a diagram illustrating a detailed configuration of an inter prediction unit;
12 illustrates spatial merging candidate blocks in a prediction unit merging method;
13 is a diagram illustrating a method for obtaining a temporal merging candidate block in a prediction unit merging method;
14 illustrates a spatial AMVP candidate block;
15 illustrates a temporal AMVP candidate block;
16 illustrates a prediction mode;
17 is a flowchart illustrating a process of performing an intra prediction method performed by an image encoding apparatus;
18 is a diagram illustrating a reference pixel of a current prediction unit using an AIS filter and a type of AIS filter applied in a DC mode;
19 illustrates a method of generating a prediction block from a reference pixel according to an intra prediction mode;
20 illustrates a method of generating a prediction block when the prediction mode is the DC mode;
21 is a diagram illustrating a configuration of an intra prediction unit of a video encoding apparatus;
22 is a diagram illustrating a transformation unit partitioning method;
23 is a view showing a method of performing a transformation in a transform unit having a size of 4x4;
24 illustrates a method of applying a deblocking filter;
25 illustrates a method of using ALF;
FIG. 26 is a diagram illustrating a method of generating a prediction block by the
FIG. 27 illustrates a method of interpolating pixel values located at boundaries of restoration blocks required by the
FIG. 28 illustrates a method of determining boundary pixel values of a block located above a block to be decoded when the pixel values of some blocks P and B located above the block to be decoded are not used.
Hereinafter, exemplary embodiments of an image decoding apparatus and method according to the present invention will be described in detail with reference to the accompanying drawings. In the following description, the same reference numerals are used for the same elements in each drawing, and duplicate descriptions of the same elements are omitted.
1 is a block diagram showing the configuration of a preferred embodiment of an image decoding apparatus according to the present invention.
Referring to FIG. 1, the
When the image data encoded by the image encoding apparatus or the image bitstream encoded and transmitted by the image encoding apparatus is input, the
The
The
The
The
Table 1: (mode) | (vertical transform) | (horizontal transform)
Referring to Table 1, DCT or DST may be selectively used as the transform method according to the direction of intra prediction. When the intra prediction mode is a mode located between VER and VER + 8, the transform may be performed using DST for the vertical transform and DCT for the horizontal transform. That is, the transform method can be adaptively changed according to the mode.
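As an illustration of this mode-adaptive selection, the sketch below picks the vertical and horizontal transforms from the intra prediction mode. It encodes only the single rule stated above; the numeric value of VER and the default branch are assumptions, since the contents of Table 1 are not reproduced here.

```python
# Hedged sketch of mode-adaptive transform selection (Table 1 not reproduced).
VER = 1  # assumed mode number of the vertical prediction mode

def select_transforms(intra_mode: int):
    """Return (vertical_transform, horizontal_transform) for an intra prediction mode."""
    if VER <= intra_mode <= VER + 8:
        # Modes between VER and VER + 8: DST vertically, DCT horizontally.
        return ("DST", "DCT")
    # Default for all other modes; the full Table 1 may specify other combinations.
    return ("DCT", "DCT")

print(select_transforms(VER + 3))  # ('DST', 'DCT')
```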
The
Referring to FIG. 2, the
The
The
3 is a diagram illustrating an inter prediction unit of a preferred embodiment of an image decoding apparatus according to the present invention.
Referring to FIG. 3, the
When the motion vector provided from the image encoding apparatus has a 1/2 pixel unit or a 1/4 pixel unit motion vector, the
The motion compensator 320 generates a prediction block by performing motion compensation on the current block based on motion prediction related information such as motion vector information and reference picture information provided from the image encoding apparatus.
4 is a flowchart illustrating a process of performing motion compensation based on a prediction unit in the motion compensation unit 320 according to an exemplary embodiment of the image decoding apparatus according to the present invention.
Referring to FIG. 4, the motion compensator 320 determines, on a partition-unit basis, whether the prediction unit included in the partition unit is in skip mode (S400). If it is in skip mode, the motion compensator 320 performs a merge skip (S410).
A prediction unit included in one partition unit may be in skip mode. The skip mode is a mode in which the residual values between the prediction block and the original block are not transmitted. When the prediction unit included in one partition unit is in skip mode, the motion compensator 320 may generate the prediction block by performing a merge skip.
A merge skip uses the motion prediction related information of one of the neighboring merge skip candidate blocks as the motion prediction related information of the current prediction unit. That is, when the prediction unit is a merge skip block, the motion prediction related information, such as the motion vector and the reference picture index, of the merge skip candidate block indicated by the merge index of the current prediction unit may be used as the motion prediction related information of the current prediction unit.
Meanwhile, when the prediction unit included in the partition unit is not in merge skip mode, the motion compensator 320 determines whether the prediction unit is a prediction unit merge block (S420). If the current prediction unit is not predicted using the merge skip mode, it may be either a prediction unit merge block (PU merge block) or an AMVP block (advanced motion vector prediction block). When the prediction unit is a prediction unit merge block, the motion compensator 320 generates a prediction block by motion-compensating the prediction unit merge block (S430).
The motion compensation unit 320 may generate the prediction block based on the motion prediction related information of one prediction unit merge candidate block among five prediction unit merge candidate blocks: four spatial merge candidate blocks located in blocks neighboring the current block and one temporal merge candidate block located in a reference picture. Accordingly, the
If it is determined in step S420 that the prediction unit is not the prediction unit merge block, the motion compensation unit 320 generates a prediction block by motion compensation of the AMVP block (S440). When the current prediction unit is not the prediction unit merge block, the current block becomes an AMVP block, and the AMVP prediction method may be performed on the prediction unit. In the
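The decision flow of steps S400 to S440 can be summarized by the short sketch below; the PU fields and the returned labels are hypothetical names used only to illustrate the branching, not actual syntax elements.

```python
from dataclasses import dataclass

@dataclass
class PU:                       # hypothetical container for the decoded PU syntax
    is_skip: bool
    is_pu_merge: bool

def motion_compensate(pu: PU) -> str:
    """Decision flow of FIG. 4 (S400-S440): name the method that builds the prediction block."""
    if pu.is_skip:              # S400: skip mode, no residual is coded
        return "merge_skip"     # S410
    if pu.is_pu_merge:          # S420: motion info copied from one of five merge candidates
        return "pu_merge"       # S430
    return "amvp"               # S440: MV = predictor + transmitted motion vector difference

print(motion_compensate(PU(is_skip=False, is_pu_merge=True)))  # pu_merge
```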
Meanwhile, the
5 is a diagram illustrating a configuration of an intra prediction unit of a preferred embodiment of an image decoding apparatus according to the present invention.
Referring to FIG. 5, the intra prediction unit 500 includes an AIS filter 510, a reference pixel interpolator 520, and a DC filter 530.
The AIS filter 510 determines, according to the prediction mode of the current prediction unit, whether to apply filtering to the reference pixels of the current block, and filters them accordingly. AIS filtering may be performed on the reference pixels of the current block by using the prediction mode of the prediction unit and the AIS filter information provided by the image encoding apparatus. If the prediction mode of the current block is a mode in which AIS filtering is not performed, the AIS filter is not applied.
When the prediction mode of the prediction unit is a mode that performs intra prediction based on interpolated reference pixel values, the reference pixel interpolator 520 interpolates the reference pixels to generate reference pixels at fractional (sub-integer) positions. If the prediction mode of the current prediction unit is a mode that generates the prediction block without interpolating the reference pixels, the reference pixels are not interpolated. The DC filter 530 generates the prediction block through filtering when the prediction mode of the current block is the DC mode.
The reconstructed block may be generated by combining the prediction block generated by the
The
The reconstructed picture or block is stored in the
FIG. 7 is a diagram illustrating a configuration of a preferred embodiment of the image encoding apparatus 700.
Referring to FIG. 7, the image encoding apparatus 700 may include a
The
The split coding unit may be represented by flag information such as a split flag. In addition, coding units may be classified into skip coding units (Skip CUs) and non-skip coding units (Non-Skip CUs) according to the method of performing motion prediction for the prediction units defined in the coding unit. In the case of a skip coding unit, a prediction unit in the coding unit performs motion prediction using the merge skip method, and in the case of a non-skip coding unit (Non-Skip CU), intra prediction or inter prediction may be performed.
8 is a diagram illustrating a process of dividing a maximum coding unit into at least one coding unit.
Referring to FIG. 8, whether one coding unit is split may be expressed by depth information and a split flag. One coding unit may be divided into a plurality of small coding units based on size information, depth information, and split flag information of the LCU. The size information of the largest coding unit, the split depth information, and whether to split the current coding unit may be included in a sequence parameter set (SPS) on the bitstream and transmitted to the image decoding apparatus according to the present invention.
In FIG. 8, step S800 illustrates a case in which a maximum coding unit (LCU) has a size of 64 × 64 pixels and a split depth of 0 in a split tree. In operation S800, the flag indicating whether to split the right block is set to 1, and the maximum coding unit (LCU) may be divided into four coding units having a square size of 32 × 32 pixels. In operation S800, since the flag indicating whether to split the left block is 0, the maximum coding unit (LCU) is encoded in one coding unit without being divided.
In operation S810, the maximum coding unit has a size of 64 × 64 pixels and a split depth of 1 in the split tree. That is, the
Step S820 illustrates a coding unit when the maximum coding unit size is 64x64 pixels and the split depth is 4. When the split depth is 4, the size of the coding unit is 8x8 pixels, which is the smallest coding unit. Since the minimum coding unit can no longer be split into smaller CUs, no split flag is signalled for it. In the above description, an 8x8 pixel size is used as the minimum coding unit for convenience, but in some cases a coding unit larger than 8x8 pixels may also be the minimum coding unit.
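A minimal sketch of this quad-tree parsing is shown below. It assumes a 64x64 maximum coding unit and an 8x8 minimum coding unit, as in the example above, and read_split_flag stands in for the actual bitstream parsing.

```python
# Sketch of the quad-tree CU splitting of FIG. 8: split flags are read down to
# the minimum CU size, below which no flag is signalled.
MAX_CU, MIN_CU = 64, 8

def parse_cu(x, y, size, read_split_flag, out):
    """Collect the leaf coding units (x, y, size) of one LCU."""
    if size > MIN_CU and read_split_flag():      # no split flag at the minimum CU size
        half = size // 2
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            parse_cu(x + dx, y + dy, half, read_split_flag, out)
    else:
        out.append((x, y, size))
    return out

# Example: split the LCU once, then keep every 32x32 CU unsplit.
flags = iter([1, 0, 0, 0, 0])
print(parse_cu(0, 0, MAX_CU, lambda: next(flags), []))
```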
The
The
The above-described
Hereinafter, an encoding process performed by the image encoding apparatus 700 based on one 64x64 (pixel) size coding unit divided by the
The
9 is a diagram illustrating a detailed configuration of the
Referring to FIG. 9, the
The
11 is a diagram illustrating a detailed configuration of an inter prediction unit.
Referring to FIG. 11, the
The
Table 2 below shows an example of filter coefficients, according to pixel position, for generating sub-integer pixel information for luminance pixels, and Table 3 below shows an example of filter coefficients, according to pixel position, for generating sub-integer pixel information for chrominance pixels.
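Because the coefficient tables themselves are not reproduced above, the sketch below illustrates the idea using the HEVC-style 8-tap half-pel luminance filter as a stand-in for Table 2; the coefficients, rounding, and clipping are assumptions made only for illustration.

```python
# Illustrative sub-pel interpolation for luminance at a half-pel position.
HALF_PEL_LUMA = [-1, 4, -11, 40, 40, -11, 4, -1]     # assumed example coefficients (sum = 64)

def interp_half_pel(samples, i):
    """Half-pel value between integer samples i and i+1 (no border handling)."""
    taps = samples[i - 3:i + 5]
    acc = sum(c * s for c, s in zip(HALF_PEL_LUMA, taps))
    return min(255, max(0, (acc + 32) >> 6))          # normalise by the tap sum and clip to 8 bits

row = [10, 12, 20, 40, 80, 120, 140, 150, 152, 153]
print(interp_half_pel(row, 4))                        # value between row[4] and row[5]
```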
The
The merge skip method is a method of generating a prediction block by receiving motion prediction related information of the current block from one of the neighboring blocks. The merge skip method generates a prediction block in the same way as the prediction unit merging method described below; unlike the prediction unit merging method, however, the merge skip method does not transmit the residual values between the original block and the prediction block to the image decoding apparatus. In the merge skip method, motion related information, such as the motion vector and reference picture index, of the block around the current prediction unit, or of a block included in another picture, indicated by the merge index of the current prediction unit may be used as the motion prediction related information of the current block. For the detailed prediction block generation of the merge skip method, refer to the description of the prediction unit merging method below.
In the prediction unit merge method, one prediction unit may be provided with motion prediction related information from five prediction unit merge candidate blocks. When the motion vector related information of the current prediction unit is the same as the motion vector related information of at least one of the prediction unit merge candidate blocks, the current prediction unit may be predicted using the prediction unit merge method.
The prediction unit merging method may generate a prediction block for the current prediction unit by using motion prediction related information (motion vector, reference picture index, etc.) of five candidate blocks. In the prediction unit merging method, the merging candidate block may include four spatial merging candidate blocks spatially located in the same picture as the current prediction unit and one temporal merging candidate block located in a picture different from the current block.
12 illustrates spatial merging candidate blocks in the prediction unit merging method.
Referring to FIG. 12, in the prediction unit merging method, the spatial merging candidate block is the upper
13 is a diagram illustrating a method for obtaining a temporal merging candidate block in a prediction unit merging method.
Referring to FIG. 13, in order to obtain a temporal merging candidate block, the 4x4
In the prediction unit merging method, a 4x4 colocated block of the reference picture located closest to the current picture may be the temporal merging candidate block. If the temporal merging candidate block uses bidirectional prediction and therefore has two motion vectors, the motion vector that passes through the current prediction unit is used; if the two motion vectors point in the same direction, the shorter motion vector may be used as the motion vector of the current prediction unit. The motion vector of the temporal merging candidate block may be scaled according to the distance between the picture including the current prediction unit and the picture including the colocated block.
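The distance-based scaling mentioned in the last sentence can be sketched as follows; using picture order count (POC) differences as the distances and integer division for the scaling are simplifying assumptions.

```python
# Minimal sketch of temporal-candidate motion vector scaling by picture distance.
def scale_temporal_mv(mv, cur_poc, cur_ref_poc, col_poc, col_ref_poc):
    """Scale (mvx, mvy) of the colocated block to the current prediction unit's distance."""
    td = col_poc - col_ref_poc        # distance used by the colocated block
    tb = cur_poc - cur_ref_poc        # distance of the current prediction unit
    if td == 0:
        return mv
    return (mv[0] * tb // td, mv[1] * tb // td)

print(scale_temporal_mv((8, -4), cur_poc=10, cur_ref_poc=8, col_poc=12, col_ref_poc=8))
# -> (4, -2): the current picture is half as far from its reference as the colocated picture
```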
The AMVP method also receives motion prediction related information from AMVP candidate blocks in the vicinity of the current block. The prediction unit merging method and the AMVP method are the same in that both receive motion prediction information from blocks neighboring the current block; unlike the prediction unit merging method, however, the AMVP method generates motion vector difference information between the AMVP candidate block and the current prediction unit and uses it, together with the predictor, as motion prediction related information.
14 illustrates a spatial AMVP candidate block.
Referring to FIG. 14, two spatial AMVP candidate blocks may be selected around the current prediction unit. One AMVP spatial candidate block is selected by sequentially checking the availability of candidate blocks among the lower
15 illustrates a temporal AMVP candidate block.
Referring to FIG. 15, in order to generate a temporal AMVP candidate block, a
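A hedged sketch of the AMVP idea at the decoder side is given below: up to two distinct candidates are kept (spatial candidates first, then the temporal candidate), and the motion vector is rebuilt as the chosen predictor plus the transmitted difference. The pruning shown is a simplification of the candidate selection described above.

```python
def build_amvp_list(left_cand, top_cand, temporal_cand, max_cands=2):
    """Keep up to two distinct available candidates, spatial candidates first."""
    out = []
    for mv in (left_cand, top_cand, temporal_cand):
        if mv is not None and mv not in out:
            out.append(mv)
        if len(out) == max_cands:
            break
    return out

def reconstruct_mv(candidates, mvp_idx, mvd):
    """Motion vector = selected predictor + transmitted motion vector difference."""
    px, py = candidates[mvp_idx]
    return (px + mvd[0], py + mvd[1])

cands = build_amvp_list((3, 1), (3, 1), (5, -2))
print(cands, reconstruct_mv(cands, mvp_idx=1, mvd=(-1, 0)))   # [(3, 1), (5, -2)] (4, -2)
```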
The
The
The
16 is a diagram illustrating a prediction mode.
Referring to FIG. 16, the prediction mode may have a mode number from
Meanwhile, the number of prediction modes used may vary depending on the size of the prediction unit. Table 4 below describes the number of modes according to the size of the prediction unit in the luminance information and the color difference information.
Referring to Table 4, in order to intra-predict luminance information, a 4x4 size prediction unit may have 17 prediction modes from
In order to transmit the prediction mode information of the current prediction unit, the prediction mode of the current prediction unit is predicted using the prediction modes of the upper block and the left block of the current prediction unit, and the prediction mode information of the current prediction unit may be transmitted by comparing the predicted prediction mode with the actual prediction mode of the current block. If neither the upper-block prediction mode nor the left-block prediction mode of the current prediction unit is available, the DC prediction mode is inserted into the first prediction mode candidate list. For example, when the upper block and the left block of the current prediction unit do not exist, or when the upper block and the left block of the current prediction unit use inter prediction, the DC prediction mode may be inserted into the first prediction mode candidate list.
If a prediction mode is available in only one of the upper block and the left block, or if the prediction modes of the upper block and the left block are the same, that one prediction mode is inserted into the first prediction mode candidate list. If prediction modes are available in both the upper block and the left block, the prediction mode with the smaller mode number is inserted into the first prediction mode candidate list and the prediction mode with the larger mode number is inserted into the second prediction mode candidate list. For example, when the prediction mode of the upper block of the current prediction unit is mode 1 (vertical prediction mode) and the prediction mode of the left block is mode 2 (DC prediction mode),
If the prediction mode of the current prediction unit is the same as the prediction mode inserted into the first prediction mode candidate list, a flag indicating that the current prediction mode is the same as the prediction mode inserted into the first prediction mode candidate list is transmitted to the image decoder. By transmitting flag information indicating that the prediction mode of the current prediction unit is the same as a prediction mode present in the first prediction mode candidate list, the intra prediction mode of the current prediction unit may be transmitted using fewer bits than directly encoding its prediction mode number. Similarly, when the prediction mode of the current prediction unit is the same as the prediction mode inserted into the second prediction mode candidate list, a flag indicating that the current prediction mode is the same as that prediction mode may be transmitted.
When the prediction mode of the current prediction unit is not present in the first prediction mode candidate list or the second prediction mode candidate list, a code word is assigned to the prediction mode of the current prediction unit by excluding the prediction modes present in the first and second prediction mode candidate lists from the table that maps prediction modes to code words; prediction modes in the candidate lists that are smaller than the intra prediction mode of the current prediction unit are taken into account when determining its code word. For example, when the prediction mode of the current prediction unit is
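The candidate-list construction and code word remapping described above can be sketched as follows. DC_MODE = 2 follows the mode numbering used in the example, and the remaining-mode index computation is one plausible reading of the remapping rule.

```python
DC_MODE = 2

def build_candidates(top_mode, left_mode):
    """First (and optionally second) prediction mode candidate from the neighbouring blocks."""
    modes = [m for m in (top_mode, left_mode) if m is not None]
    if not modes:
        return [DC_MODE]                      # neither neighbouring mode is available
    if len(modes) == 1 or modes[0] == modes[1]:
        return [modes[0]]
    return [min(modes), max(modes)]           # smaller mode number goes into the first list

def encode_mode(cur_mode, candidates):
    if cur_mode in candidates:
        return ("flag", candidates.index(cur_mode))
    # Remaining-mode index: each candidate smaller than cur_mode frees one code word.
    rem = cur_mode - sum(1 for c in candidates if c < cur_mode)
    return ("rem", rem)

cands = build_candidates(top_mode=1, left_mode=2)
print(cands, encode_mode(4, cands))           # [1, 2] ('rem', 2)
```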
17 is a flowchart illustrating a process of performing an intra prediction method performed by an image encoding apparatus.
Referring to FIG. 17, an adaptive intra smoothing (AIS) filter is applied to a reference pixel (S1700). At this time, whether to apply the AIS filter may vary depending on the prediction mode. Table 5 below describes whether the AIS filter is applied according to the intra prediction mode and the AIS filter applied when the prediction mode is the DC mode (mode 2).
Referring to Table 5, the 3-tap AIS filter [1, 2, 1] may be applied according to the prediction mode; in the table, 1 indicates that the AIS filter is applied and 0 indicates that it is not, and in the DC mode a newly defined AIS filter, described later, may be used.
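A minimal sketch of the [1, 2, 1] AIS smoothing applied to a line of reference pixels is shown below; leaving the end pixels unfiltered and the rounding offset are simplifying assumptions.

```python
def ais_filter_121(ref):
    """Apply the 3-tap [1, 2, 1] smoothing filter to a line of reference pixels."""
    out = list(ref)
    for i in range(1, len(ref) - 1):
        out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) >> 2
    return out

print(ais_filter_121([10, 10, 40, 40, 40]))   # [10, 18, 33, 40, 40]
```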
FIG. 18 is a diagram illustrating a reference pixel of a current prediction unit using an AIS filter and a type of AIS filter applied in a DC mode.
Referring to FIG. 18, when the size of the current prediction unit is 8x8, the
As can be seen from Table 5, if the prediction block is generated in
Next, a prediction block is generated (S1810). The prediction block may be predicted and generated according to the distance between the reference pixel and the pixel to be predicted, using the reference pixel interpolation method. FIG. 19 illustrates a method of generating a prediction block from a reference pixel according to an intra prediction mode. Referring to FIG. 19, when the prediction direction is the same as the arrow direction, the value of the first pixel 1930, which is a pixel located at the lower right of the current block, is the
Meanwhile, when the prediction mode is the DC mode, filtering may be performed using a 2-tap filter or a 3-tap filter (S1820). When the prediction mode is the DC mode, since the difference between the reference pixel values and the intra-predicted pixel values of the current prediction unit can be large, a filter may be used to filter some of the reference pixels and some of the pixels of the current prediction unit.
20 illustrates a method of generating a prediction block when the prediction mode is the DC mode.
Referring to FIG. 20, when the prediction unit generates the prediction block using the DC mode, the
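The DC-mode generation with boundary filtering can be sketched as below: the block is filled with the average of the reference pixels, and the first row and column are then blended with the adjacent reference pixels. The 2-tap and 3-tap weights are illustrative assumptions, since the exact filters appear only in the figures.

```python
def dc_predict(top_ref, left_ref, n):
    """DC prediction with (assumed) 3-tap corner and 2-tap edge boundary filtering."""
    dc = (sum(top_ref[:n]) + sum(left_ref[:n]) + n) // (2 * n)
    pred = [[dc] * n for _ in range(n)]
    pred[0][0] = (top_ref[0] + 2 * dc + left_ref[0] + 2) >> 2     # 3-tap at the corner pixel
    for i in range(1, n):
        pred[0][i] = (top_ref[i] + 3 * dc + 2) >> 2               # 2-tap along the first row
        pred[i][0] = (left_ref[i] + 3 * dc + 2) >> 2              # 2-tap along the first column
    return pred

for row in dc_predict(top_ref=[100, 102, 104, 106], left_ref=[98, 96, 94, 92], n=4):
    print(row)
```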
21 is a diagram illustrating a configuration of an intra prediction unit of a video encoding apparatus.
Referring to FIG. 21, the
The
The
The
The
The
22 is a diagram illustrating a transform unit division method.
Referring to FIG. 22, whether a transform unit is divided may be expressed using predetermined flag information. For example, one coding unit may be divided into a plurality of transform units, as illustrated on the right side of FIG. 22, using the quadtree structure shown on the left side of FIG. 22. At this time, the largest transform unit (Largest Transform Unit, LTU) may have a size of 32x32 and the smallest transform unit (Smallest Transform Unit, SCU) may have a size of 4x4. In addition, as a transform unit, one prediction unit may be used as it is, or one prediction unit may be divided into four transform units.
In addition, the
23 is a diagram illustrating a method of performing a transform in a transform unit having a size of 4x4.
Referring to FIG. 23, in the vertical mode, DST may be applied for the vertical transform and DCT may be applied for the horizontal transform to convert the pixel values of the current block into the frequency domain. DCT or DST may be adaptively applied to the 4x4 block according to the intra prediction mode. Details of the adaptive transform method according to the prediction mode are as described with Table 1.
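A sketch of this separable 4x4 transform (DST along the vertical direction, DCT along the horizontal direction, as in the vertical intra mode) follows. Floating-point orthonormal DCT-II and DST-I matrices are used for simplicity; the actual codec would use fixed-point integer approximations (for example a DST-VII-style matrix), so the matrices here are assumptions for illustration.

```python
import numpy as np

N = 4
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
DCT = np.sqrt(2 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
DCT[0, :] /= np.sqrt(2)                                    # orthonormal DCT-II
DST = np.sqrt(2 / (N + 1)) * np.sin(np.pi * (k + 1) * (n + 1) / (N + 1))   # orthonormal DST-I

residual = np.arange(16, dtype=float).reshape(N, N)        # toy 4x4 residual block
coeff = DST @ residual @ DCT.T                             # DST on columns, DCT on rows
print(np.round(coeff, 2))
```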
The
The
The
The
The
The
24 is a diagram illustrating a method of applying a deblocking filter.
Referring to FIG. 24, the
If the
Next, the
If
The steps S2400 and S2410 may be performed by the
Next, vertical deblocking filtering and horizontal deblocking filtering are performed by the deblocking filter 745 (S2420).
When deblocking is performed based on the 8 pixels located at the boundary of each block, the 16 pixels to which both horizontal deblocking filtering and vertical deblocking filtering would be applied must be handled so that horizontal and vertical deblocking filtering can be processed in parallel; for this purpose, only one of the vertical deblocking filtering or the horizontal deblocking filtering may be applied to those pixels. That is, vertical deblocking filtering and horizontal deblocking filtering are both performed on the block, but the filtering applied to the overlapping portion may be only one of the vertical deblocking filtering and the horizontal deblocking filtering. Next, the
The
25 is a diagram illustrating a method of using ALF.
Referring to FIG. 25, one Laplacian Activity value is calculated in units of blocks based on a Laplacian based activity metric (S2500). In this case, one Laplacian activity value may be calculated per 4x4 block by averaging the Laplacian activity values of the pixels included in the 4x4 block based on the Laplacian activity metric. The Laplacian activity value of the pixel may be calculated through
Next, the directional information is calculated based on the pixels included in the 4x4 block size (S2510). Directional information may be calculated based on
Next, an activity metric for a block is calculated based on the Laplacian activity metric and the directional information (S2520). In this process, a new activity metric is calculated using the value calculated using the Laplacian activity metric in step S2500 based on the 4x4 block size plus the directional information calculated in step S2510. A new activity metric in blocks can be calculated based on
Based on the calculated activity metric, one ALF may be applied in units of 4 × 4 blocks to filter the blocks.
Next, the coefficient of the ALF filter is calculated (S2530). The ALF filter coefficients may be calculated based on
In that equation, the respective symbols denote the cross-correlation matrix of the original pixels and the reconstructed pixels, the auto-correlation matrix of the reconstructed pixels, the number of filter coefficients, and the filter coefficients; one set of filter coefficients may be calculated for each block classification.
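Read as a standard Wiener (least-squares) filter design, the coefficient calculation amounts to solving R c = p with the auto-correlation R and cross-correlation p defined above. The sketch below shows that reading; the exact normative equation appears only in the original figures.

```python
import numpy as np

def alf_coefficients(recon_windows, originals):
    """recon_windows: (num_samples, num_taps) reconstructed neighbourhoods;
    originals: (num_samples,) collocated original pixels."""
    X = np.asarray(recon_windows, dtype=float)
    y = np.asarray(originals, dtype=float)
    R = X.T @ X               # auto-correlation matrix of the reconstructed pixels
    p = X.T @ y               # cross-correlation with the original pixels
    return np.linalg.solve(R, p)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([0.1, 0.2, 0.4, 0.2, 0.1]) + 0.01 * rng.normal(size=200)
print(np.round(alf_coefficients(X, y), 3))    # close to the true taps
```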
In the process of calculating the ALF filter coefficients, some of the block classifications that were previously divided into a plurality of classes may be merged into a single classification.
Next, the coefficient of the ALF is encoded (S2540). The coefficient of the first ALF may be encoded by an exponential Golomb coding method. When there are a plurality of ALFs used in one coding unit, the coefficients of the remaining ALFs may be directly or predictively encoded by an exponential Golomb code according to the number of bits for transmitting the coefficients.
Next, an ALF control map is determined (S2550). Whether to apply the ALF may be determined based on
When the equation of the upper part of
The application of ALF may be performed in a coding unit (CU), but the optimal application of ALF may be determined based on depth information on a quad tree structure that determines a coding unit.
The optimum depth on the quad tree structure may be determined by comparing RD cost values calculated based on
In equation (10), one term denotes the number of bits used to transmit the ALF application information for the coding units of one slice. In addition, the number of taps of the ALF filter to be applied to the block may be determined based on the calculated RD cost. In the method of using the ALF described with reference to FIG. 25, each step is performed by the
The reconstructed block or picture calculated through the
The
The
In this case, the
If the restoration is not performed in units of CUs, additional PU information is required when generating the residual block based on the TU tree information, thereby increasing the complexity of the entire apparatus. In addition, when the size of the PU and the size of the TU are 2N × N and 2N × 2N, respectively, a size mismatch exists between the two, so that the
On the other hand, when the
FIG. 26 is a diagram illustrating a method of generating a prediction block by the
Referring to FIG. 26, the
Here, a is a pixel located at the lower right of the reconstruction block adjacent to the upper left of the current block, b is a pixel located at the lower right of the reconstruction block located above the current block, c is a pixel located at the lower right of the reconstruction block located on the left side of the current block, d is a pixel located at the lower right of the current block, Δx is the difference between the pixel values of a and b, Δy is the difference between the pixel values of a and c, n is the number of pixels located between a and b, and m is the number of pixels located between a and c.
In addition, the
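A heavily hedged sketch of this fallback prediction is given below. The equations themselves appear only as figures, so everything here is one plausible reading: the lower-right pixel d is extrapolated from the corner pixel a and the boundary pixels b (above) and c (left), the upper-left to lower-right diagonal is linearly interpolated between a and d, and the remaining pixels are interpolated between a boundary pixel and the diagonal. In particular, d = a + Δx + Δy and the interpolation weights are assumptions.

```python
def fallback_prediction(a, top_ref, left_ref, n):
    """Assumed reading of the FIG. 26 fallback: diagonal first, then the remaining pixels."""
    b, c = top_ref[n - 1], left_ref[n - 1]        # bottom-right pixels of the above/left blocks
    d = a + (b - a) + (c - a)                     # assumed extrapolation of the lower-right pixel
    diag = [(a * (n - 1 - i) + d * (i + 1)) // n for i in range(n)]   # pixels on the a -> d diagonal
    pred = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            if x == y:
                pred[y][x] = diag[x]
            elif x > y:                           # right of the diagonal: blend top_ref and diagonal
                w = (y + 1) / (x + 1)
                pred[y][x] = round(top_ref[x] * (1 - w) + diag[x] * w)
            else:                                 # below the diagonal: blend left_ref and diagonal
                w = (x + 1) / (y + 1)
                pred[y][x] = round(left_ref[y] * (1 - w) + diag[y] * w)
    return pred

for row in fallback_prediction(a=100, top_ref=[104, 108, 112, 116], left_ref=[96, 92, 88, 84], n=4):
    print(row)
```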
Meanwhile, the image encoding apparatus 700 illustrated in FIG. 7 may slice a picture on a CU basis rather than on a macroblock basis. In this case, a situation may occur in which CUs belonging to the same macroblock belong to different slices. In such a case, some of the CUs located at slice boundaries cannot use the pixel values of some of the reconstruction blocks located at their upper left, upper, left, lower left, and upper right sides. Therefore, the
FIG. 27 illustrates a method of interpolating pixel values located at boundaries of restoration blocks required by the
Referring to FIG. 27, a block having a
Also, there may be a case where some of the pixels located at the boundary of the upper side, left side, lower left side, and upper right side of the block to be restored are not available. FIG. 28 illustrates a method of determining boundary pixel values of a block located above a block to be decoded when the pixel values of some blocks P and B located above the block to be decoded are not used. Referring to FIG. 28, the reconstructed block located on the left side of the reconstructed blocks located above the
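The substitution of unavailable boundary pixels illustrated in FIGS. 27 and 28 can be sketched as below: positions belonging to unavailable neighbouring blocks are filled by linear interpolation between the nearest available pixels, or copied from the single available side. The mid-level default of 128 and the exact interpolation are assumptions, since the normative rule is described only through the figures.

```python
def fill_unavailable(ref):
    """ref: boundary pixel values with None at unavailable positions."""
    out = list(ref)
    known = [i for i, v in enumerate(out) if v is not None]
    if not known:
        return [128] * len(out)                            # assumed mid-level default
    for i, v in enumerate(out):
        if v is not None:
            continue
        left = max((k for k in known if k < i), default=None)
        right = min((k for k in known if k > i), default=None)
        if left is None:
            out[i] = out[right]                            # copy from the only available side
        elif right is None:
            out[i] = out[left]
        else:                                              # linear interpolation between the two
            t = (i - left) / (right - left)
            out[i] = round(out[left] * (1 - t) + out[right] * t)
    return out

print(fill_unavailable([20, None, None, 80, None]))        # [20, 40, 60, 80, 80]
```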
In the above description, terms such as 'first' and 'second' are used to describe various components, but each component should not be limited by these terms. That is, terms such as 'first' and 'second' are used for the purpose of distinguishing one component from another component. For example, a 'first component' may be referred to as a 'second component' without departing from the scope of the present invention, and similarly, the 'second component' may also be called 'first component'. Can be. In addition, the term 'and / or' is used to mean a combination of a plurality of related items or any item of a plurality of related items.
In addition, each component shown in each drawing is shown independently to represent a different characteristic function of the image decoding apparatus or the image encoding apparatus; this does not mean that each component is made of separate hardware or of a single software unit. That is, the components are listed separately for convenience of description, and at least two components may be combined into one component, or one component may be divided into a plurality of components that perform the same function. Embodiments in which these components are integrated and embodiments in which they are separated are both included in the scope of the present invention, as long as they do not depart from the spirit of the present invention.
In addition, some of the components may not be essential components that perform essential functions of the present invention, but may be optional components used merely to improve performance. The present invention can be implemented using only the components essential to its substance, excluding the components used only for performance improvement, and a structure that includes only these essential components, excluding the optional performance-improving components, is also included in the scope of the present invention.
On the other hand, when a component is said to be 'connected' or 'connected' to another component, it may be directly connected or connected to the other component, but other components may be present in the middle. Should be understood. On the other hand, when a component is said to be 'directly connected' or 'directly connected' to another component, it should be understood that no other component exists in the middle.
Also, the terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting of the present invention. And the singular expression includes the plural expression unless the context clearly indicates otherwise. Furthermore, the terms 'comprise', 'have', and 'include' in this specification are intended to designate that the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification are present, It should be understood that it does not exclude in advance the possibility of the presence or addition of one or more other features or numbers, steps, operations, components, parts or combinations thereof.
The invention can also be embodied as computer readable code on a computer readable recording medium. The computer-readable recording medium includes all kinds of recording devices in which data that can be read by a computer system is stored. Examples of computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage, and the like, and may also be implemented in the form of a carrier wave (for example, transmission over the Internet). Include. The computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
Although preferred embodiments of the present invention have been shown and described above, the present invention is not limited to these specific embodiments; various modifications can be made by those skilled in the art to which the present invention belongs without departing from the gist of the invention as claimed in the claims, and such modifications fall within the scope of the claims.
Claims (1)
A rearranging unit for rearranging the residual values and restoring the coefficients in a two-dimensional block form;
An inverse quantization unit which inversely quantizes the coefficient of the two-dimensional block form based on the quantization parameter;
An inverse transform unit for generating a residual block by inversely transforming the coefficient of the two-dimensional block form inversely quantized by the inverse quantization unit;
An intra prediction unit for decoding an intra prediction mode of a current block to be decoded based on the information for generating the prediction block, and generating a prediction block according to the intra prediction mode; And
And an adder for reconstructing an original block based on the residual block and the prediction block.
When the information for generating the prediction block is lost, the intra prediction unit generates a pixel located below the current block by the following equation,
Here, a is a pixel located at the lower right of the reconstruction block adjacent to the upper left of the current block, b is a pixel located at the lower right of the reconstruction block located above the current block, c is a pixel located at the lower right of the reconstruction block located on the left side of the current block, d is a pixel located at the lower right of the current block, Δx is the difference between the pixel values of a and b, Δy is the difference between the pixel values of a and c, n is the number of pixels located between a and b, and m is the number of pixels located between a and c,
and linearly interpolates the pixel value of the pixel located at the lower right of the reconstruction block adjacent to the upper left of the current block and the pixel value of the pixel located at the lower right of the current block to determine the pixel values of the diagonal pixels, which are the pixels located on the straight line connecting the upper left to the lower right of the current block, and determines the pixel values of the remaining pixels of the current block based on the pixel values of the diagonal pixels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020110065220A KR102055451B1 (en) | 2011-07-01 | 2011-07-01 | Apparatus and method for decoding image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020110065220A KR102055451B1 (en) | 2011-07-01 | 2011-07-01 | Apparatus and method for decoding image |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020180027711A Division KR101978090B1 (en) | 2018-03-08 | 2018-03-08 | Apparatus and method for decoding an image |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20130063044A KR20130063044A (en) | 2013-06-14 |
KR102055451B1 true KR102055451B1 (en) | 2019-12-13 |
Family
ID=48860498
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020110065220A KR102055451B1 (en) | 2011-07-01 | 2011-07-01 | Apparatus and method for decoding image |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR102055451B1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101529650B1 (en) * | 2013-07-02 | 2015-06-19 | 성균관대학교산학협력단 | Selective transform method and apparatus, inverse transform method and apparatus for video coding |
KR20180058224A (en) * | 2015-10-22 | 2018-05-31 | 엘지전자 주식회사 | Modeling-based image decoding method and apparatus in video coding system |
CN116866563A (en) * | 2018-03-21 | 2023-10-10 | Lx 半导体科技有限公司 | Image encoding/decoding method, storage medium, and image data transmission method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2003248913A1 (en) * | 2003-01-10 | 2004-08-10 | Thomson Licensing S.A. | Defining interpolation filters for error concealment in a coded image |
-
2011
- 2011-07-01 KR KR1020110065220A patent/KR102055451B1/en active IP Right Grant
Non-Patent Citations (2)
Title |
---|
Yongbing Lin, "Simplified Planar Intra Prediction", version 3, JCTVC-E289, 19 March 2011.* |
Dong-Hyung Kim et al., "A Low-Complexity Spatial Error Concealment Technique for H.264 Intra Frames," The Journal of Korean Institute of Communications and Information Sciences, 31.5C, 31 May 2006.*
Also Published As
Publication number | Publication date |
---|---|
KR20130063044A (en) | 2013-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102410032B1 (en) | Method and apparatus for processing a video signal | |
KR102416257B1 (en) | Method and apparatus for processing a video signal | |
KR102383104B1 (en) | Method and apparatus for processing a video signal | |
KR102424419B1 (en) | Method and apparatus for processing a video signal | |
KR102383105B1 (en) | Method and apparatus for processing a video signal | |
KR102555352B1 (en) | Intra prediction method and encoding apparatus and decoding apparatus using same | |
KR102350988B1 (en) | Intra-prediction method, and encoder and decoder using same | |
KR102383106B1 (en) | Method and apparatus for processing a video signal | |
WO2017190288A1 (en) | Intra-picture prediction using non-adjacent reference lines of sample values | |
WO2020227405A1 (en) | Clipping prediction samples in matrix intra prediction mode | |
KR20180126382A (en) | Method and apparatus for processing a video signal | |
KR102424420B1 (en) | Method and apparatus for processing a video signal | |
KR102435000B1 (en) | Method and apparatus for processing a video signal | |
KR102539354B1 (en) | Method for processing image based on intra prediction mode and apparatus therefor | |
KR20180126384A (en) | Method and apparatus for processing a video signal | |
KR20200028860A (en) | Method and apparatus for encoding/decoding an image using intra prediction | |
KR20180031615A (en) | Method and apparatus for processing a video signal | |
WO2012138032A1 (en) | Method for encoding and decoding image information | |
KR20220003124A (en) | Video coding method and apparatus using adaptive parameter set | |
KR102431287B1 (en) | Method and apparatus for encoding/decoding a video signal | |
KR102055451B1 (en) | Apparatus and method for decoding image | |
KR102124089B1 (en) | Apparatus and method for decoding an image | |
US11343496B2 (en) | Method and apparatus for encoding/decoding an image based on in-loop filter | |
KR102103100B1 (en) | Apparatus and method for decoding an image | |
KR20210153547A (en) | method and apparatus for encoding/decoding a VIDEO SIGNAL, and a recording medium storing a bitstream |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
N231 | Notification of change of applicant | ||
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E601 | Decision to refuse application | ||
A107 | Divisional application of patent | ||
J201 | Request for trial against refusal decision | ||
J301 | Trial decision | Free format text: TRIAL NUMBER: 2018101001086; TRIAL DECISION FOR APPEAL AGAINST DECISION TO DECLINE REFUSAL REQUESTED 20180309 Effective date: 20190626 |
S901 | Examination by remand of revocation | ||
E902 | Notification of reason for refusal | ||
GRNO | Decision to grant (after opposition) |