KR20140089488A - Method and apparatus for encoding video, and method and apparatus for decoding video - Google Patents
- Publication number
- KR20140089488A
- Authority
- KR
- South Korea
- Prior art keywords
- unit
- neighboring pixels
- encoding
- depth
- current block
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Description
BACKGROUND OF THE INVENTION
In image compression methods such as MPEG-1, MPEG-2, MPEG-4, and H.264/MPEG-4 Advanced Video Coding (AVC), a picture is divided into macroblocks in order to encode an image. Each macroblock is encoded in all coding modes available for inter prediction and intra prediction, an encoding mode is then selected according to the bit rate required for encoding the macroblock and the degree of distortion between the original macroblock and the decoded macroblock, and the macroblock is encoded in the selected mode.
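The mode decision described above is a rate-distortion trade-off. A minimal sketch of Lagrangian mode selection, assuming a generic cost J = D + λ·R; the mode names and cost numbers below are illustrative, not values from the patent:

```python
def select_mode(candidates, lam):
    """Pick the coding mode minimizing the Lagrangian cost J = D + lam * R.

    candidates: list of (mode_name, distortion, rate) tuples.
    Returns the best (mode_name, cost) pair.
    """
    best_mode, best_cost = None, float("inf")
    for mode, distortion, rate in candidates:
        cost = distortion + lam * rate
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode, best_cost

# Illustrative numbers only: inter costs more bits here but distorts less.
modes = [("intra_16x16", 120.0, 40), ("inter_16x16", 80.0, 70), ("skip", 200.0, 2)]
```

As λ grows, bit rate is penalized more heavily, so the selected mode shifts toward cheaper-to-signal candidates.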
[0002] The development and dissemination of hardware capable of playing back and storing high-resolution or high-definition video content has increased the need for video codecs that effectively encode or decode such content. According to existing video codecs, video is encoded according to a limited prediction mode based on a macroblock of a predetermined size.
SUMMARY OF THE INVENTION The present invention has been made in view of the above problems, and it is an object of the present invention to determine whether to filter neighboring pixels used as reference pixels in intra prediction of a chrominance component block independently of a luminance component block.
According to an aspect of the present invention, there is provided a video decoding method including: obtaining size information and intra prediction mode information of a current block of a chrominance component from a bitstream; determining, based on the size information and the intra prediction mode information, neighboring pixels to be used for intra prediction of the current block from among the reconstructed neighboring pixels of the current block and filtered neighboring pixels obtained by filtering the reconstructed neighboring pixels; and performing intra prediction on the current block according to the intra prediction mode information using the determined neighboring pixels.
A video decoding apparatus according to an exemplary embodiment includes: a neighboring pixel filtering unit that filters neighboring pixels previously reconstructed around a current block of a chrominance component to generate filtered neighboring pixels; a reference pixel determination unit that obtains size information and intra prediction mode information of the current block of the chrominance component from a bitstream and determines, based on the size information and the intra prediction mode information, neighboring pixels to be used for intra prediction of the current block from among the reconstructed neighboring pixels and the filtered neighboring pixels; and an intra prediction unit that performs intra prediction on the current block according to the intra prediction mode information using the determined neighboring pixels.
According to an embodiment of the present invention, there is provided a video encoding method including: filtering neighboring pixels of a current block of a chrominance component to be encoded to obtain filtered neighboring pixels; determining neighboring pixels to be used for intra prediction of the current block from among the filtered neighboring pixels and the original neighboring pixels, based on the size of the current block and the intra prediction mode to be performed; and performing intra prediction on the current block using the determined neighboring pixels.
According to embodiments of the present invention, the filtered neighboring pixels are used more often in intra prediction of a chrominance component block than in intra prediction of a luminance component block, thereby improving the prediction efficiency of intra prediction for the chrominance component.
FIG. 1 shows a block diagram of a video encoding apparatus based on coding units of a tree structure according to an embodiment of the present invention.
FIG. 2 shows a block diagram of a video decoding apparatus based on coding units of a tree structure according to an embodiment of the present invention.
FIG. 3 illustrates a concept of coding units according to an embodiment of the present invention.
FIG. 4 is a block diagram of an image encoding unit based on coding units according to an embodiment of the present invention.
FIG. 5 is a block diagram of an image decoding unit based on coding units according to an embodiment of the present invention.
FIG. 6 illustrates depth-wise coding units and partitions according to an exemplary embodiment of the present invention.
FIG. 7 shows a relationship between a coding unit and conversion units according to an embodiment of the present invention.
FIG. 8 illustrates depth-wise encoding information, in accordance with an embodiment of the present invention.
FIG. 9 shows depth-wise coding units according to an embodiment of the present invention.
FIGS. 10, 11, and 12 show the relationship between coding units, prediction units, and conversion units according to an embodiment of the present invention.
FIG. 13 shows the relationship between a coding unit, a prediction unit, and a conversion unit according to the encoding mode information in Table 1.
FIG. 14 is a block diagram showing a configuration of an intra prediction apparatus according to an embodiment of the present invention.
FIG. 15 illustrates intra prediction modes according to an embodiment.
FIG. 16 specifically shows the prediction angles of the directional intra prediction modes shown in FIG. 15.
FIG. 17 is a reference diagram for explaining a case where an intra prediction value is obtained through linear interpolation for a directional intra prediction mode.
FIG. 18 is a diagram illustrating a current block and the neighboring pixels used in intra prediction according to an embodiment of the present invention.
FIG. 19 is a reference diagram for explaining a process of filtering neighboring pixels according to an embodiment of the present invention.
FIG. 20 shows the neighboring pixels to be filtered.
FIG. 21 is a flowchart of a video encoding method according to an embodiment.
FIG. 22 is a flowchart of a video decoding method according to an embodiment.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 shows a block diagram of a video encoding apparatus according to an embodiment of the present invention.
The video encoding apparatus 100 according to an embodiment includes a maximum coding unit splitter, a coding unit determiner 120, and an output unit.
The maximum coding unit splitter may partition a current picture of an image based on a maximum coding unit, which is a coding unit of the maximum size for the current picture.
An encoding unit according to an embodiment may be characterized by a maximum size and a depth. The depth indicates the number of times the coding unit is spatially divided from the maximum coding unit. As the depth increases, the depth coding unit can be divided from the maximum coding unit to the minimum coding unit. The depth of the maximum encoding unit is the highest depth and the minimum encoding unit can be defined as the least significant encoding unit. As the depth of the maximum encoding unit increases, the size of the depth-dependent encoding unit decreases, so that the encoding unit of the higher depth may include a plurality of lower-depth encoding units.
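The depth relationship above can be illustrated numerically. A minimal sketch, assuming square coding units whose side length halves at each depth increase (the function name is ours, not the patent's):

```python
def coding_unit_size(max_size, depth):
    """Width/height of a square coding unit at the given depth.

    Each depth increase splits the coding unit in half along both
    dimensions, so the side length is max_size / 2**depth.
    """
    size = max_size >> depth
    if size < 1:
        raise ValueError("depth exceeds the divisible range of max_size")
    return size
```

For a 64x64 maximum coding unit, depths 0 through 4 give sides 64, 32, 16, 8, and 4, matching the hierarchy of higher-depth units containing lower-depth units.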
As described above, according to the maximum size of an encoding unit, the image data of the current picture is divided into a maximum encoding unit, and each maximum encoding unit may include encoding units divided by depth. Since the maximum encoding unit according to an embodiment is divided by depth, image data of a spatial domain included in the maximum encoding unit can be hierarchically classified according to depth.
The maximum depth for limiting the total number of times the height and width of the maximum encoding unit can be hierarchically divided and the maximum size of the encoding unit may be preset.
The coding unit determiner 120 encodes at least one divided region obtained by dividing the region of the maximum coding unit for each depth, and determines the depth at which a final encoding result is to be output for each of the at least one divided region.
The image data in the maximum encoding unit is encoded based on the depth encoding unit according to at least one depth below the maximum depth, and the encoding results based on the respective depth encoding units are compared. As a result of the comparison of the encoding error of the depth-dependent encoding unit, the depth with the smallest encoding error can be selected. At least one coding depth may be determined for each maximum coding unit.
As the depth of the maximum coding unit increases, the coding unit is hierarchically divided, and the number of coding units increases. In addition, even for coding units of the same depth included in one maximum coding unit, the coding error of the data of each coding unit is measured and it is determined whether to divide the coding unit into lower depths. Therefore, since data included in one maximum coding unit may have different coding errors depending on position, the coding depth may also be determined differently depending on position. Accordingly, one or more coding depths may be set for one maximum coding unit, and the data of the maximum coding unit may be partitioned according to the coding units of the one or more coding depths.
Therefore, the encoding unit determiner 120 according to the embodiment can determine encoding units according to the tree structure included in the current maximum encoding unit. The 'encoding units according to the tree structure' according to an exemplary embodiment includes encoding units of depth determined by the encoding depth, among all depth encoding units included in the current maximum encoding unit. The coding unit of coding depth can be hierarchically determined in depth in the same coding area within the maximum coding unit, and independently determined in other areas. Similarly, the coding depth for the current area can be determined independently of the coding depth for the other area.
The maximum depth according to one embodiment is an index related to the number of divisions from the maximum coding unit to the minimum coding unit. The first maximum depth according to an exemplary embodiment may indicate the total number of divisions from the maximum coding unit to the minimum coding unit. The second maximum depth according to an exemplary embodiment may represent the total number of depth levels from the maximum coding unit to the minimum coding unit. For example, when the depth of the maximum coding unit is 0, the depth of a coding unit obtained by dividing the maximum coding unit once may be set to 1, and the depth of a coding unit divided twice may be set to 2. In this case, if the coding unit divided four times from the maximum coding unit is the minimum coding unit, depth levels 0, 1, 2, 3, and 4 exist, so the first maximum depth may be set to 4 and the second maximum depth may be set to 5.
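The worked example above can be checked mechanically. A small sketch with helper names of our own choosing:

```python
def first_max_depth(max_size, min_size):
    """Total number of divisions from the maximum to the minimum coding unit."""
    divisions = 0
    size = max_size
    while size > min_size:
        size //= 2          # each division halves the side length
        divisions += 1
    return divisions

def second_max_depth(max_size, min_size):
    """Total number of depth levels: divisions plus the starting level."""
    return first_max_depth(max_size, min_size) + 1
```

With a 64x64 maximum coding unit and a 4x4 minimum coding unit, this reproduces the values in the text: four divisions, five depth levels.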
The predictive encoding and frequency conversion of the maximum encoding unit can be performed. Likewise, predictive coding and frequency conversion are performed on the basis of the depth coding unit for each maximum coding unit and for each depth below the maximum depth.
Since the number of coding units per depth is increased every time the maximum coding unit is divided by the depth, the coding including the predictive coding and the frequency conversion should be performed for every depth coding unit as the depth increases. For convenience of explanation, predictive coding and frequency conversion will be described based on a coding unit of current depth among at least one maximum coding unit.
The
For example, the
For predictive coding of the maximum coding unit, predictive coding may be performed based on a coding unit of the coding depth according to an embodiment, i.e., a coding unit that is not further divided. Hereinafter, the coding unit that is no longer divided and becomes the basis of predictive coding will be referred to as a 'prediction unit'. A partition into which the prediction unit is divided may include a data unit obtained by dividing at least one of the height and the width of the prediction unit.
For example, if a coding unit of size 2Nx2N (where N is a positive integer) is no longer divided, it becomes a prediction unit of size 2Nx2N, and the size of a partition may be 2Nx2N, 2NxN, Nx2N, NxN, and the like. The partition type according to an embodiment is not limited to symmetric partitions in which the height or width of the prediction unit is divided in a symmetric ratio, and may also include partitions divided in an asymmetric ratio such as 1:n or n:1, partitions divided into geometric forms, arbitrary-shaped partitions, and the like.
The prediction mode of the prediction unit may be at least one of an intra mode, an inter mode, and a skip mode. For example, intra mode and inter mode can be performed for partitions of 2Nx2N, 2NxN, Nx2N, NxN sizes. In addition, the skip mode can be performed only for a partition of 2Nx2N size. Encoding is performed independently for each prediction unit within an encoding unit, and a prediction mode having the smallest encoding error can be selected.
In addition, the
For frequency conversion of a coding unit, frequency conversion may be performed based on a data unit having a size smaller than or equal to the coding unit. For example, a data unit for frequency conversion may include a data unit for intra mode and a data unit for inter mode.
Hereinafter, the data unit on which the frequency conversion is based may be referred to as a 'conversion unit'. In a similar manner to the encoding unit, the conversion unit in the encoding unit is also recursively divided into smaller-sized conversion units, and the residual data of the encoding unit can be divided according to the conversion unit according to the tree structure according to the conversion depth.
For a conversion unit according to one embodiment, a conversion depth indicating the number of times the height and width of the coding unit are divided to reach the conversion unit can be set. For example, if the size of the conversion unit of a current coding unit of size 2Nx2N is 2Nx2N, the conversion depth is 0; if the conversion unit size is NxN, the conversion depth is 1; and if the conversion unit size is N/2xN/2, the conversion depth is 2. That is, conversion units according to a tree structure can be set according to the conversion depth.
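The conversion-depth examples above follow a simple halving rule; a hedged sketch (the function name is illustrative):

```python
def conversion_depth(cu_size, tu_size):
    """Number of halvings of the coding unit's side to reach the conversion unit.

    cu_size=2N, tu_size=2N  -> depth 0
    cu_size=2N, tu_size=N   -> depth 1
    cu_size=2N, tu_size=N/2 -> depth 2
    """
    depth = 0
    size = cu_size
    while size > tu_size:
        size //= 2
        depth += 1
    if size != tu_size:
        raise ValueError("tu_size is not a power-of-two fraction of cu_size")
    return depth
```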
The coding information according to the coding depth needs not only the coding depth but also prediction related information and frequency conversion related information. Therefore, the coding
A method of determining a coding unit and a partition according to a tree structure of a maximum coding unit according to an embodiment will be described later in detail with reference to FIGS.
The encoding
The
The encoded image data may be a result of encoding residual data of the image.
The information on the depth-dependent coding mode may include coding depth information, partition type information of a prediction unit, prediction mode information, size information of a conversion unit, and the like.
The coding depth information can be defined using depth division information indicating whether or not coding is performed at the lower depth coding unit without coding at the current depth. If the current depth of the current encoding unit is the encoding depth, the current encoding unit is encoded in the current depth encoding unit, so that the division information of the current depth can be defined so as not to be further divided into lower depths. On the other hand, if the current depth of the current encoding unit is not the encoding depth, the encoding using the lower depth encoding unit should be tried. Therefore, the division information of the current depth may be defined to be divided into the lower depth encoding units.
If the current depth is not the encoding depth, encoding is performed on the encoding unit divided into lower-depth encoding units. Since there are one or more lower-level coding units in the current-depth coding unit, the coding is repeatedly performed for each lower-level coding unit so that recursive coding can be performed for each coding unit of the same depth.
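The recursive use of division information described in the two paragraphs above can be sketched as a decoder-side quadtree walk. This is a hedged illustration of the mechanism only; the flag order, helper names, and default sizes are our assumptions, not the patent's bitstream syntax:

```python
def parse_coding_tree(split_flags, depth=0, max_depth=4, pos=(0, 0), size=64, out=None):
    """Recursively consume division (split) flags and collect leaf coding units.

    split_flags: iterator yielding 0 (coded at current depth) or 1 (split).
    Returns a list of (x, y, size, depth) tuples, one per leaf coding unit.
    """
    if out is None:
        out = []
    # At the maximum depth no split flag is signaled: the unit cannot divide.
    split = 0 if depth == max_depth else next(split_flags)
    if split == 0:
        out.append((pos[0], pos[1], size, depth))
        return out
    half = size // 2
    for dy in (0, half):            # four lower-depth coding units
        for dx in (0, half):
            parse_coding_tree(split_flags, depth + 1, max_depth,
                              (pos[0] + dx, pos[1] + dy), half, out)
    return out
```

For example, the flag sequence 1, 0, 0, 0, 0 splits a 64x64 unit once and codes all four 32x32 children at depth 1.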
Since the coding units of the tree structure are determined in one maximum coding unit and information on at least one coding mode is determined for each coding unit of coding depth, information on at least one coding mode is determined for one maximum coding unit . Since the data of the maximum encoding unit is hierarchically divided according to the depth and the depth of encoding may be different for each position, information on the encoding depth and the encoding mode may be set for the data.
Accordingly, the
The minimum unit according to an exemplary embodiment is a square data unit obtained by dividing the minimum coding unit of the lowest coding depth into four, and may be a square data unit of the maximum size that can be included in all of the coding units, prediction units, and conversion units included in the maximum coding unit.
For example, the encoding information output through the
According to the simplest embodiment of the
Therefore, the
Therefore, if an image having a very high image resolution or a very large data amount is encoded in units of existing macroblocks, the number of macroblocks per picture becomes excessively large. This increases the amount of compression information generated for each macroblock, so that the burden of transmission of compressed information increases and the data compression efficiency tends to decrease. Therefore, the video encoding apparatus according to an embodiment can increase the maximum size of the encoding unit in consideration of the image size, and adjust the encoding unit in consideration of the image characteristic, so that the image compression efficiency can be increased.
FIG. 2 shows a block diagram of a video decoding apparatus according to an embodiment of the present invention.
The
The receiving unit 205 receives and parses the bitstream of the encoded video. The image data and encoding
Also, the image data and encoding
Information on the coding depth and the coding mode may be set for one or more pieces of coding depth information per coding unit, and the information on the coding mode for each coding depth may include partition type information of the coding unit, prediction mode information, size information of the conversion unit, and the like. In addition, depth-wise division information may be extracted as the coding depth information.
The encoding depth and encoding mode information extracted by the image data and encoding
The encoding information for the encoding depth and the encoding mode according to the embodiment may be allocated for a predetermined data unit among the encoding unit, the prediction unit and the minimum unit. Therefore, the image data and the encoding
The image
The image
In addition, the image
The image
In other words, the encoding information set for the predetermined unit of data among the encoding unit, the prediction unit and the minimum unit is observed, and the data units holding the encoding information including the same division information are collected, and the image
The
Accordingly, even if an image with a high resolution or an excessively large amount of data is used, the information on the optimal encoding mode transmitted from the encoding end is used, and the image data is efficiently encoded according to the encoding unit size and encoding mode, Can be decoded and restored.
Hereinafter, a method of determining encoding units, prediction units, and conversion units according to a tree structure according to an embodiment of the present invention will be described with reference to FIG. 3 to FIG.
FIG. 3 shows the concept of a hierarchical coding unit.
As an example, the size of a coding unit is expressed as width x height, and the coding units may include sizes 32x32, 16x16, and 8x8 starting from a coding unit of size 64x64. A coding unit of size 64x64 can be divided into partitions of size 64x64, 64x32, 32x64, and 32x32; a coding unit of size 32x32 into partitions of size 32x32, 32x16, 16x32, and 16x16; a coding unit of size 16x16 into partitions of size 16x16, 16x8, 8x16, and 8x8; and a coding unit of size 8x8 into partitions of size 8x8, 8x4, 4x8, and 4x4.
With respect to the
When the resolution or the amount of data is large, it is preferable that the maximum size of the coding unit be relatively large, both to improve coding efficiency and to accurately reflect image characteristics. Therefore, for video data of higher resolution, the maximum size of the coding unit may be selected to be relatively large, such as 64x64.
Since the maximum depth of the
Since the maximum depth of the
FIG. 4 is a block diagram of an image encoding unit based on coding units according to an embodiment of the present invention.
The
The data output from the
The
In particular, the
FIG. 5 is a block diagram of an image decoding unit based on coding units according to an embodiment of the present invention.
The
The
The data in the spatial domain that has passed through the
In order to decode the image data in the image
The
In particular, the
FIG. 6 illustrates a depth-based encoding unit and a partition according to an exemplary embodiment of the present invention.
The
The
That is, the
Prediction units and partitions of coding units are arranged along the horizontal axis for each depth. That is, if the
Likewise, the prediction unit of the
Likewise, the prediction unit of the
Finally, the prediction unit of the
The encoding
The number of depth-wise coding units needed to cover data of the same range and size increases as the depth increases. For example, four coding units of depth 2 are required to cover the data included in one coding unit of depth 1. Therefore, to compare the encoding results of the same data for each depth, the data should be encoded using one coding unit of depth 1 and four coding units of depth 2, respectively.
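The counting argument above generalizes: each depth step quadruples the number of coding units covering the same area. A one-line sketch (names are ours):

```python
def coding_units_needed(shallower_depth, deeper_depth):
    """How many coding units of the deeper depth cover one coding unit of the
    shallower depth. Each depth step splits 1 unit into 4."""
    return 4 ** (deeper_depth - shallower_depth)
```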
For each depth-of-field coding, encoding is performed for each prediction unit of the depth-dependent coding unit along the horizontal axis of the
FIG. 7 shows a relationship between an encoding unit and a conversion unit according to an embodiment of the present invention.
The
For example, in the
In addition, the data of the 64x64 coding unit 710 may be encoded by performing frequency conversion with each of the conversion units of size 64x64 or smaller, i.e., 32x32, 16x16, 8x8, and 4x4, and then the conversion unit having the smallest error with respect to the original may be selected.
FIG. 8 illustrates depth-specific encoding information, in accordance with an embodiment of the present invention.
The
The partition type information 800 represents information on the type of partition in which the prediction unit of the current encoding unit is divided, as a data unit for predictive encoding of the current encoding unit. For example, the current encoding unit CU_0 of size 2Nx2N may be any one of a
The prediction mode information 810 indicates a prediction mode of each partition. For example, it is determined whether the partition indicated by the information 800 relating to the partition type is predictive-encoded in one of the
In addition, the information 820 on the conversion unit size indicates whether to perform frequency conversion on the basis of which conversion unit the current encoding unit is performed. For example, the conversion unit may be one of a first
The video data and encoding
FIG. 9 shows a depth encoding unit according to an embodiment of the present invention.
Partition information may be used to indicate changes in depth. The division information indicates whether the current-depth encoding unit is divided into lower-depth encoding units.
The
For each partition type, predictive encoding must be repeatedly performed on the one partition of size 2N_0x2N_0, the two partitions of size 2N_0xN_0, the two partitions of size N_0x2N_0, and the four partitions of size N_0xN_0. For the partitions of size 2N_0x2N_0, N_0x2N_0, 2N_0xN_0, and N_0xN_0, predictive coding can be performed in the intra mode and the inter mode. Predictive coding in the skip mode can be performed only on the partition of size 2N_0x2N_0.
If the encoding error caused by one of the
If the coding error by the
A
If the encoding error by the
If the maximum depth is d, depth-wise coding units are set up to depth d-1, and division information can be set up to depth d-2. That is, when encoding is performed from depth d-2 to depth d-1, predictive encoding of the coding unit of depth d-1 is performed based on the partition of size 2N_(d-1)x2N_(d-1), the two partitions of size 2N_(d-1)xN_(d-1), the two partitions of size N_(d-1)x2N_(d-1), and the four partitions of size N_(d-1)xN_(d-1), and the partition type in which the minimum coding error occurs can be found.
Even if the coding error by the
The
In this way, the minimum coding error of each of the
The video data and encoding
FIGS. 10, 11 and 12 show the relationship between an encoding unit, a prediction unit, and a frequency conversion unit according to an embodiment of the present invention.
The coding unit 1010 is coding units for coding depth determined by the
When the depth of the maximum encoding unit is 0, the depth of the
Some
The image data of a
Accordingly, encoding is recursively performed for each coding unit of the hierarchical structure in each maximum coding unit, and an optimal coding unit is determined, so that coding units according to a recursive tree structure can be configured. The encoding information may include division information, partition type information, prediction mode information, and conversion unit size information. Table 1 below shows an example that can be set by the video encoding and decoding apparatuses.
Table 1
Division information 0 (encoding on a coding unit of size 2Nx2N of the current depth d):
- Prediction mode: Intra, Inter, Skip (2Nx2N only)
- Partition type (symmetric partition type): 2Nx2N, 2NxN, Nx2N, NxN
- Partition type (asymmetric partition type): 2NxnU, 2NxnD, nLx2N, nRx2N
- Conversion unit size: 2Nx2N if the conversion unit division information is 0; NxN (symmetric partition type) or N/2xN/2 (asymmetric partition type) if it is 1
Division information 1: repeatedly encode each of the four lower-depth coding units of depth d+1.
The
The division information indicates whether the current coding unit is divided into lower-depth coding units. If the division information of the current depth d is 0, the depth at which the current coding unit is no longer divided is the coding depth, so partition type information, prediction mode, and conversion unit size information can be defined for the coding depth. If further division is required according to the division information, encoding should be performed independently for each of the four divided lower-depth coding units.
The prediction mode may be represented by one of an intra mode, an inter mode, and a skip mode. Intra mode and inter mode can be defined in all partition types, and skip mode can be defined only in partition type 2Nx2N.
The partition type information indicates the symmetric partition types 2Nx2N, 2NxN, Nx2N, and NxN, in which the height or width of the prediction unit is divided in a symmetric ratio, and the asymmetric partition types 2NxnU, 2NxnD, nLx2N, and nRx2N, in which it is divided in an asymmetric ratio. The asymmetric partition types 2NxnU and 2NxnD divide the height in ratios of 1:3 and 3:1, respectively, and the asymmetric partition types nLx2N and nRx2N divide the width in ratios of 1:3 and 3:1, respectively.
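The partition geometry just described can be enumerated explicitly. A hedged sketch for a 2Nx2N coding unit; the dictionary layout and function name are our illustration, and the 1:3 / 3:1 splits follow the ratios stated above:

```python
def partition_sizes(n):
    """(width, height) of each partition of a 2Nx2N coding unit, by type.

    Symmetric types divide height/width in half; asymmetric types divide
    one dimension in a 1:3 or 3:1 ratio (n is assumed divisible by 2).
    """
    two_n = 2 * n
    half = n // 2  # the short side of an asymmetric split: (2N)/4
    return {
        # symmetric partition types
        "2Nx2N": [(two_n, two_n)],
        "2NxN": [(two_n, n)] * 2,
        "Nx2N": [(n, two_n)] * 2,
        "NxN": [(n, n)] * 4,
        # asymmetric partition types (1:3 and 3:1 splits)
        "2NxnU": [(two_n, half), (two_n, two_n - half)],
        "2NxnD": [(two_n, two_n - half), (two_n, half)],
        "nLx2N": [(half, two_n), (two_n - half, two_n)],
        "nRx2N": [(two_n - half, two_n), (half, two_n)],
    }
```

Every partition type tiles the full 2Nx2N area, which makes the 1:3 ratio easy to verify: for a 32x32 unit, 2NxnU gives a 32x8 top partition and a 32x24 bottom partition.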
The conversion unit size can be set to two kinds of sizes in the intra mode and two kinds of sizes in the inter mode. That is, if the conversion unit division information is 0, the size of the conversion unit is set to 2Nx2N, the size of the current coding unit. If the conversion unit division information is 1, a conversion unit obtained by dividing the current coding unit can be set. In this case, if the partition type of the current coding unit of size 2Nx2N is a symmetric partition type, the size of the conversion unit may be set to NxN, and if it is an asymmetric partition type, to N/2xN/2.
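The rule above maps directly to a small lookup; a sketch (function and parameter names are ours):

```python
def conversion_unit_size(n, tu_split, symmetric):
    """Side length of the conversion unit for a 2Nx2N coding unit.

    tu_split == 0 -> 2Nx2N (no division).
    tu_split == 1 -> NxN for a symmetric partition type,
                     N/2 x N/2 for an asymmetric one (per Table 1).
    """
    if tu_split == 0:
        return 2 * n
    return n if symmetric else n // 2
```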
The encoding information of the encoding units according to the tree structure according to an exemplary embodiment may be allocated to at least one of encoding units, prediction units, and minimum unit units of the encoding depth. The coding unit of the coding depth may include one or more prediction units and minimum units having the same coding information.
Therefore, if encoding information held in adjacent data units is checked, it can be confirmed whether or not the encoded information is included in the encoding unit of the same encoding depth. In addition, since the encoding unit of the encoding depth can be identified by using the encoding information held by the data unit, the distribution of encoding depths within the maximum encoding unit can be inferred.
Therefore, in this case, when the current encoding unit is predicted with reference to neighboring data units, the encoding information of data units in the depth-wise encoding units adjacent to the current encoding unit can be directly referenced and used.
In another embodiment, when the current encoding unit is prediction-encoded with reference to neighboring encoding units, data adjacent to the current encoding unit within the depth-wise encoding units may be searched using the encoding information of the adjacent encoding units, and the neighboring encoding units found in this way may be referred to.
Fig. 13 shows the relationship between the encoding unit, the prediction unit, and the conversion unit according to the encoding mode information in Table 1.
The
If the partition type information is set to one of the symmetric partition types 2Nx2N, 2NxN, Nx2N, and NxN, the size of the conversion unit is set to 2Nx2N when the conversion unit division information is 0, and to NxN when the conversion unit division information is 1.
When the partition type information is set to one of the asymmetric partition types 2NxnU, 2NxnD, nLx2N, and nRx2N, the size of the conversion unit is set to 2Nx2N when the conversion unit division information is 0, and to N/2xN/2 when the conversion unit division information is 1.
The
FIG. 14 is a block diagram showing a configuration of an intra prediction apparatus according to an embodiment.
Referring to FIG. 14, an intra prediction apparatus according to an embodiment includes a neighboring pixel filtering unit, a reference pixel determination unit, and an intra prediction performing unit.
The neighboring pixel filtering unit filters the neighboring pixels reconstructed before the current block to be intra predicted, and generates filtered neighboring pixels.
The reference pixel determination unit determines the neighboring pixels to be used as reference pixels for intra prediction of the current block, from among the original reconstructed neighboring pixels and the filtered neighboring pixels, based on the size of the current block and the intra prediction mode.
According to one embodiment, a YCbCr (or YUV) color format composed of luminance and chrominance components is used. The reason a color format composed of luminance and chrominance components is used is that the human eye is more sensitive to the luminance component than to the chrominance components, so a relatively larger bandwidth can be allocated to the luminance component than to the chrominance components in order to encode the video efficiently. Depending on the relative resolutions of the luminance component video and the chrominance component video, a 4:4:4 color format, a 4:2:2 color format, or a 4:2:0 color format may be used according to an embodiment. In the 4:4:4 color format, the luminance component video and the chrominance component video have the same resolution. In the 4:2:2 color format, the chrominance signal has half the resolution of the luminance signal in the horizontal direction. In the 4:2:0 color format, the chrominance signal has half the resolution of the luminance signal in both the horizontal and vertical directions.
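The resolution relationship between the three color formats above can be illustrated with a small sketch; the function name and the format strings are illustrative:

```python
def chroma_resolution(luma_w, luma_h, color_format):
    """Chrominance plane resolution for the color formats described above.

    4:4:4 keeps full resolution; 4:2:2 halves the horizontal resolution;
    4:2:0 halves both the horizontal and vertical resolution."""
    if color_format == "4:4:4":
        return (luma_w, luma_h)
    if color_format == "4:2:2":
        return (luma_w // 2, luma_h)
    if color_format == "4:2:0":
        return (luma_w // 2, luma_h // 2)
    raise ValueError("unsupported color format")
```

For a 1920x1080 luminance plane, 4:2:0 yields 960x540 chrominance planes, which is why chrominance blocks are typically smaller than the co-located luminance blocks.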
The reference pixel determination unit according to an embodiment may determine the neighboring pixels to be used as reference pixels in intra prediction of the luminance component block according to the following pseudo code, based on the size of the luminance component block and the intra prediction mode.
{
Diff = min(abs(prediction_mode - horizontal_mode), abs(prediction_mode - vertical_mode));
If Diff > Thres_val, then use filtered reference pixel;
else use original reference pixel;
If (prediction_mode == DC_mode) use original reference pixel;
Thres_val = {10, // 4x4 block
7, // 8x8 block
1, // 16x16 block
0 // 32x32 block}
}
The intra prediction mode index is a value assigned to each intra prediction mode, as shown in FIG. 15 described later. For example, an intra prediction mode index of 0 indicates the planar mode, 1 indicates the DC mode, 10 indicates the horizontal mode, and 26 indicates the vertical mode. Analyzing the pseudo code, Diff is the smaller of the absolute difference between the index of the current intra prediction mode and the index of the horizontal mode, abs(prediction_mode - horizontal_mode), and the absolute difference between the index of the current intra prediction mode and the index of the vertical mode, abs(prediction_mode - vertical_mode). Diff is then compared with a predetermined threshold value Thres_val determined based on the size of the luminance component block: when Diff is greater than the threshold value, the filtered neighboring pixels are used as reference pixels for the luminance component block, and when Diff is less than or equal to the threshold value, the original neighboring pixels are used as reference pixels. In the DC mode, the original neighboring pixels are always used. The reference pixel determination unit determines the reference pixels of the luminance component block in this manner.
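The selection rule described by the pseudo code can be sketched as follows; the mode indices 0, 1, 10, and 26 are the planar, DC, horizontal, and vertical indices described above, and the helper names are illustrative:

```python
PLANAR, DC, HORIZONTAL, VERTICAL = 0, 1, 10, 26  # mode indices from FIG. 15

# Thresholds per luminance block size, taken from the pseudo code above.
LUMA_THRES = {4: 10, 8: 7, 16: 1, 32: 0}

def use_filtered_reference(prediction_mode, block_size, thres=LUMA_THRES):
    """Return True when the filtered neighboring pixels should be used
    as reference pixels, following the pseudo code above."""
    if prediction_mode == DC:
        return False  # DC mode always uses the original neighboring pixels
    # Distance of the current mode from the nearest of the horizontal
    # and vertical modes.
    diff = min(abs(prediction_mode - HORIZONTAL),
               abs(prediction_mode - VERTICAL))
    return diff > thres[block_size]
```

This reproduces the Table 2 example given below: for a 32x32 luminance block in mode 2, Diff = min(8, 24) = 8 exceeds the threshold 0, so the filtered neighboring pixels are selected.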
The neighboring pixels used as reference pixels in intra prediction of the luminance component block, according to the size of the luminance component block and the type of intra prediction mode under the above pseudo code, may be determined based on Table 2 below. In Table 2, the reference index of a prediction mode using the original neighboring pixels is set to 0, and the reference index of a prediction mode using the once-filtered neighboring pixels is set to 1. That is, for a reference index of 0 the original neighboring pixels are used for the luminance component block, and for a reference index of 1 the filtered neighboring pixels are used. For example, when intra prediction with a prediction mode index of 2 is performed on a luminance block of size 32x32, the reference index in Table 2 has a value of 1, indicating that the filtered neighboring pixels, rather than the original neighboring pixels, are used as reference pixels.
Table 2 is only one example, and whether to use the filtered neighboring pixels according to various block sizes and intra prediction modes can be set in a different manner.
The
In addition, the reference pixel determination unit may determine the neighboring pixels to be used as reference pixels in intra prediction of the chrominance component block, based on the size of the chrominance component block and the intra prediction mode.
It is assumed that 36 intra prediction modes are available for the block of chrominance components, as shown in FIG. 15 to be described later. If the intra prediction mode index indicating the 36 intra prediction modes is referred to as prediction_mode, the reference pixel determination unit can determine the neighboring pixels to be used for intra prediction of the chrominance component block, from among the reconstructed neighboring pixels and the filtered neighboring pixels, based on the size of the chrominance component block and the intra prediction mode, according to the following pseudo code.
{
Diff = min(abs(prediction_mode - horizontal_mode), abs(prediction_mode - vertical_mode));
If Diff > Thres_val, then use filtered reference pixel;
else use original reference pixel;
If (prediction_mode == DC_mode) use original reference pixel;
Thres_val = {6, // 4x4 block
1, // 8x8 block
0, // 16x16 block
0 // 32x32 block}
}
Analyzing the pseudo code, since the threshold value Thres_val is smaller than that used for the luminance component, the filtered neighboring pixels are used as reference pixels in intra prediction more often in the case of the chrominance components.
The neighboring pixels used as reference pixels in intra prediction of the chrominance component block, according to the size of the chrominance component block and the type of intra prediction mode under the above pseudo code, can be determined based on Table 3 below. In Table 3, the reference index of a prediction mode using the original neighboring pixels is set to 0, and the reference index of a prediction mode using the once-filtered neighboring pixels is set to 1. In other words, for a reference index of 0 the original neighboring pixels are used for the chrominance component block, and for a reference index of 1 the filtered neighboring pixels are used.
Comparing Table 2 and Table 3, the reference index is 1 in more cases for the chrominance component, so intra prediction using the filtered neighboring pixels is performed more often.
In addition, the reference pixel determination unit according to another embodiment may use neighboring pixels that have been filtered twice as reference pixels in intra prediction of the chrominance component block.
Specifically, the reference pixel determination unit may determine the neighboring pixels to be used as reference pixels according to the following pseudo code.
{
Diff = min(abs(prediction_mode - horizontal_mode), abs(prediction_mode - vertical_mode));
If Diff > Thres_val, then use filtered reference pixel;
else use original reference pixel;
If (prediction_mode == DC_mode) use original reference pixel;
Thres_val = {6, // 4x4 block
2, // 8x8 block
1, // 16x16 block
0 // 32x32 block}
If ((use filtered reference pixel) && (block size >= 16x16))
use twice filtered reference pixel
}
According to the pseudo code, when it is determined to use the filtered neighboring pixels for a chrominance component block of a predetermined size or larger, for example 16x16 or larger, the neighboring pixels filtered twice are set as the reference pixels for intra prediction. The neighboring pixels used as reference pixels in intra prediction of the chrominance component block, according to the size of the chrominance component block and the type of intra prediction mode under the pseudo code, can be determined based on Table 4 below. In Table 4, the reference index of a prediction mode using the original neighboring pixels is 0, the reference index of a prediction mode using the once-filtered neighboring pixels is 1, and the reference index of a prediction mode using the twice-filtered neighboring pixels is 2. That is, for a reference index of 0 the original neighboring pixels are used for the chrominance component block, for a reference index of 1 the once-filtered neighboring pixels are used, and for a reference index of 2 the twice-filtered neighboring pixels are used.
As illustrated in Table 4, when it is determined to use the filtered neighboring pixels for a chrominance component block of size 16x16 or larger, the neighboring pixels filtered twice are used as reference pixels.
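The three-way selection for chrominance blocks (original, once-filtered, or twice-filtered neighboring pixels) can be sketched as follows, using the thresholds from the pseudo code above; the function and constant names are illustrative:

```python
DC, HORIZONTAL, VERTICAL = 1, 10, 26  # mode indices from FIG. 15

# Thresholds per chrominance block size, from the pseudo code above.
CHROMA_THRES = {4: 6, 8: 2, 16: 1, 32: 0}

def chroma_reference_index(prediction_mode, block_size):
    """Return the reference index described in Table 4: 0 for the original
    neighboring pixels, 1 for once-filtered, 2 for twice-filtered."""
    if prediction_mode == DC:
        return 0  # DC mode always uses the original neighboring pixels
    diff = min(abs(prediction_mode - HORIZONTAL),
               abs(prediction_mode - VERTICAL))
    if diff <= CHROMA_THRES[block_size]:
        return 0  # close to horizontal/vertical: keep original pixels
    # Filtered pixels are selected; large blocks use the twice-filtered set.
    return 2 if block_size >= 16 else 1
```

Under this rule, a 32x32 chrominance block in mode 2 uses the twice-filtered neighboring pixels, while a 4x4 block in the same mode uses the once-filtered ones.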
As described above, according to the embodiments of the present invention, the filtered neighboring pixels are used more often in intra prediction of chrominance component blocks than in intra prediction of luminance component blocks, so the prediction efficiency for the chrominance components can be improved.
Hereinafter, an intra prediction method according to an embodiment will be described in detail.
FIG. 15 illustrates intra prediction modes according to an embodiment.
Referring to FIG. 15, according to an embodiment, a larger number of intra prediction modes can be used than in conventional H.264/AVC. According to one embodiment, a total of 35 intra prediction modes may be used for blocks of the luminance component. FIG. 15 shows the prediction mode index allocated to each intra prediction mode. The 0th intra prediction mode is the planar mode, the 1st intra prediction mode is the DC mode, and the 2nd to 34th intra prediction modes are intra prediction modes having directionality, as shown in FIG. 15. For blocks of the chrominance components, an Intra_FromLuma mode that reuses the intra prediction mode of the luminance component may be added to the 35 intra prediction modes. The prediction mode index of the Intra_FromLuma mode is assigned a value of 36.
The planar mode is a prediction mode in which each pixel of the current block is predicted using the value obtained by linearly interpolating the left neighboring pixel in the same row as the current pixel with the upper-right neighboring pixel, together with the value obtained by linearly interpolating the upper neighboring pixel in the same column as the current pixel with the lower-left neighboring pixel. The DC mode is an intra prediction mode in which the average value of the neighboring pixels of the current block is used as the prediction value.
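A minimal sketch of the DC mode described above; the integer rounding convention is an assumption for illustration, since the text only specifies that the average of the neighboring pixels is used:

```python
def dc_prediction(top_pixels, left_pixels):
    """DC mode: the average of the neighboring pixels above and to the left
    of the current block is used as the prediction value for every pixel
    of the block (rounded integer average, an assumed convention)."""
    neighbors = list(top_pixels) + list(left_pixels)
    return (sum(neighbors) + len(neighbors) // 2) // len(neighbors)
```

Every pixel of the current block then receives this single value as its prediction.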
The 33 directional intra prediction modes, from mode 2 to mode 34, are intra prediction modes in which prediction values are generated by copying the neighboring pixels determined along a line having the corresponding direction, as shown in FIG. 16.
FIG. 16 specifically shows the prediction angles of the directional intra prediction modes shown in FIG. 15. In FIG. 16, each arrow represents the direction along which reference pixels are matched according to the corresponding intra prediction mode.
When the line having a direction according to the intra prediction mode passes exactly through a neighboring pixel of the pixel to be intra predicted in the current block, that neighboring pixel can be used as the reference pixel. However, the line may instead pass between neighboring pixels; in this case, the reference value is generated by linearly interpolating the neighboring pixels according to the position at which the line crosses between them.
FIG. 17 is a reference diagram for explaining the case where the prediction value is obtained through linear interpolation when the line having a direction according to the intra prediction mode passes between neighboring pixels.
Referring to FIG. 17, when x is the point at which the line having a direction according to the intra prediction mode crosses between two neighboring pixels, the prediction value is obtained by linearly interpolating the two neighboring pixels on either side of x, weighted according to their distances from x.
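The linear interpolation at a fractional crossing point can be sketched as follows; expressing the position in 1/32-pixel units is an assumption for illustration, not fixed by the description above:

```python
def interpolate_reference(ref, pos_32nds):
    """Linearly interpolate between the two reference pixels on either side
    of a fractional crossing point.

    ref is the 1-D array of neighboring reference pixels; pos_32nds is the
    crossing position in 1/32-pixel units (an assumed granularity). The two
    surrounding pixels are weighted by the fractional distance, with
    rounding before the final shift."""
    i, frac = divmod(pos_32nds, 32)          # integer pixel index and fraction
    return ((32 - frac) * ref[i] + frac * ref[i + 1] + 16) >> 5
```

For a crossing point exactly halfway between two pixels (frac = 16), this returns their rounded average; when frac = 0 the pixel value is copied unchanged, matching the exactly-matched case described above.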
FIG. 18 is a diagram illustrating a current block and the neighboring pixels used in intra prediction according to an embodiment of the present invention.
Referring to FIG. 18, the neighboring pixels adjacent to the upper side and the left side of the current block are used as reference pixels for intra prediction of the current block.
FIG. 19 is a reference diagram for explaining a process of filtering neighboring pixels according to an embodiment of the present invention, and FIG. 20 shows the neighboring pixels to be filtered.
Referring to FIG. 19, if the 2*nTbs neighboring pixels adjacent to the upper side and the left side of the current block of size nTbs x nTbs are denoted ContextOrg[n] (n being an integer from 0 to 2*nTbs-1), the neighboring pixel filtering unit generates the filtered neighboring pixels ContextFiltered1[n] by computing a weighted average between the neighboring pixels, as in Equation 1 below.
[Equation 1]
ContextFiltered1[n] = (ContextOrg[n-1] + 2*ContextOrg[n] + ContextOrg[n+1]) / 4
Referring to Equation (1), the filtered neighboring pixel ContextFiltered1[n] is generated by applying a weighted average to the original neighboring pixel ContextOrg[n] and the neighboring pixels ContextOrg[n-1] and ContextOrg[n+1] located on either side of it.
Similarly, the neighboring pixel filtering unit may generate twice-filtered neighboring pixels ContextFiltered2[n] by filtering the once-filtered neighboring pixels ContextFiltered1[n] again, as in Equation 2 below.
[Equation 2]
ContextFiltered2[n] = (ContextFiltered1[n-1] + 2*ContextFiltered1[n] + ContextFiltered1[n+1]) / 4
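Equations 1 and 2 can be sketched as follows; the handling of the two endpoint pixels, which lack a neighbor on one side, is an assumption (they are left unfiltered here), and the function names are illustrative:

```python
def filter_neighbors(context_org):
    """Apply the [1, 2, 1]/4 weighted-average filter of Equation 1 to the
    neighboring pixel array ContextOrg[0..2*nTbs-1]. Endpoint pixels are
    kept unchanged (assumed behavior; not specified by the text above)."""
    out = list(context_org)
    for n in range(1, len(context_org) - 1):
        out[n] = (context_org[n - 1] + 2 * context_org[n]
                  + context_org[n + 1]) // 4
    return out

def filter_neighbors_twice(context_org):
    """Equation 2: filter the once-filtered neighboring pixels again."""
    return filter_neighbors(filter_neighbors(context_org))
```

Each pass spreads an isolated pixel value over its neighbors, so the twice-filtered pixels are smoother than the once-filtered ones, which is why they are reserved for the larger chrominance blocks described earlier.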
The
FIG. 21 is a flowchart of a video encoding method according to an embodiment.
Referring to FIG. 21, the neighboring pixels of the current block of chrominance components to be encoded are first filtered to obtain filtered neighboring pixels.
In step 2120, the reference pixel determination unit determines the neighboring pixels to be used for intra prediction of the current block from among the filtered neighboring pixels and the original neighboring pixels, based on the size of the current block and the intra prediction mode to be performed.
In the final step, intra prediction is performed on the current block using the determined neighboring pixels.
FIG. 22 is a flowchart of a video decoding method according to an embodiment.
Referring to FIG. 22, the size information and intra prediction mode information of the current block of chrominance components are first obtained from the bitstream.
In the following steps, the neighboring pixels to be used for intra prediction of the current block are determined, based on the size information and the intra prediction mode information, from among the reconstructed neighboring pixels of the current block and the filtered neighboring pixels obtained by filtering the reconstructed neighboring pixels, and intra prediction is performed on the current block according to the intra prediction mode information using the determined neighboring pixels.
The image encoding and decoding methods according to the present invention can also be implemented as computer-readable code on a computer-readable recording medium. A computer-readable recording medium includes all kinds of recording devices in which data that can be read by a computer system is stored. Examples of the computer-readable recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage. The computer-readable recording medium may also be distributed over networked computer systems so that the computer-readable code can be stored and executed in a distributed manner.
The present invention has been described with reference to the preferred embodiments. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the disclosed embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present invention is defined by the appended claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.
Claims (15)
Obtaining size information and intraprediction mode information of a current block of a chrominance component from a bitstream;
Determining, based on the size information and the intra prediction mode information, the neighboring pixels to be used for intra prediction of the current block, from among the reconstructed neighboring pixels of the current block and the filtered neighboring pixels obtained by filtering the reconstructed neighboring pixels; and
And performing intra prediction on the current block according to the intra prediction mode information using the determined neighboring pixels.
The step of determining the surrounding pixels
Determines whether the filtered neighboring pixels are used for intra prediction of the current block of the chrominance component according to a determination scheme that is independent of the determination scheme used to decide whether filtered neighboring pixels are used in intra prediction of the blocks of the luminance component constituting the video.
The step of determining the surrounding pixels
Determines, based on the size information and the intra prediction mode information, one group of neighboring pixels from among the reconstructed neighboring pixels of the current block, the neighboring pixels obtained by filtering the reconstructed neighboring pixels once, and the neighboring pixels obtained by filtering the reconstructed neighboring pixels twice.
The step of determining the surrounding pixels
Determining a difference value between the intra prediction mode index of the current block and the horizontal intra prediction mode index, and a difference value between the intra prediction mode index of the current block and the vertical intra prediction mode index; and
Comparing the determined difference value with a predetermined threshold value determined based on the size information, to determine the neighboring pixels to be used for intra prediction of the current block from among the reconstructed neighboring pixels of the current block and the filtered neighboring pixels obtained by filtering the reconstructed neighboring pixels.
Wherein the current block is a prediction unit, or a partition of a prediction unit, included in an encoding unit obtained by hierarchically dividing a maximum encoding unit of the current picture based on depth. Video decoding method.
A neighboring pixel filtering unit for filtering neighboring pixels reconstructed before the current block of the chrominance component to generate filtered neighboring pixels;
Obtaining the size information and the intra-prediction mode information of the current block of the chrominance component from the bitstream, filtering the reconstructed neighboring pixels and the reconstructed neighboring pixels of the current block based on the size information and the intra- A reference pixel determination unit for determining a neighboring pixel used for intra prediction of the current block among the filtered neighboring pixels; And
And an intra prediction unit performing intra prediction on the current block according to the intra prediction mode information using the determined neighboring pixels.
The reference pixel determination unit
Determines whether the filtered neighboring pixels are used for intra prediction of the current block of the chrominance component according to a determination scheme that is independent of the determination scheme used to decide whether filtered neighboring pixels are used in intra prediction of the blocks of the luminance component constituting the video.
The reference pixel determination unit
Determines, based on the size information and the intra prediction mode information, one group of neighboring pixels from among the reconstructed neighboring pixels of the current block, the neighboring pixels obtained by filtering the reconstructed neighboring pixels once, and the neighboring pixels obtained by filtering the reconstructed neighboring pixels twice.
The reference pixel determination unit
Determines a difference value between the intra prediction mode index of the current block and the horizontal intra prediction mode index, and a difference value between the intra prediction mode index of the current block and the vertical intra prediction mode index, and compares the determined difference value with a predetermined threshold value determined based on the size information, to determine the neighboring pixels to be used for intra prediction of the current block from among the reconstructed neighboring pixels of the current block and the filtered neighboring pixels obtained by filtering the reconstructed neighboring pixels.
Wherein the current block is a prediction unit, or a partition of a prediction unit, included in an encoding unit obtained by hierarchically dividing a maximum encoding unit of the current picture based on depth. Video decoding apparatus.
Filtering neighboring pixels of a current block of a chrominance component to be encoded to obtain filtered neighboring pixels;
Determining neighboring pixels to be used for intraprediction of the current block among the filtered neighboring pixels and original neighboring pixels based on the size of the current block and an intra prediction mode to be performed; And
And performing intra prediction on the current block using the determined neighboring pixels.
The step of determining the surrounding pixels
Determines whether the filtered neighboring pixels are used for intra prediction of the current block of the chrominance component according to a determination scheme that is independent of the determination scheme used to decide whether filtered neighboring pixels are used in intra prediction of the blocks of the luminance component constituting the video.
The step of determining the surrounding pixels
Determines, based on the size of the current block and the intra prediction mode, one group of neighboring pixels from among the original neighboring pixels of the current block, the neighboring pixels obtained by filtering the original neighboring pixels once, and the neighboring pixels obtained by filtering the original neighboring pixels twice.
The step of determining the surrounding pixels
Determining a difference value between the intra prediction mode index of the current block and the horizontal intra prediction mode index, and a difference value between the intra prediction mode index of the current block and the vertical intra prediction mode index; and
Comparing the determined difference value with a predetermined threshold value determined based on the size of the current block, to determine the neighboring pixels to be used for intra prediction of the current block from among the original neighboring pixels of the current block and the filtered neighboring pixels.
Wherein the current block is a prediction unit, or a partition of a prediction unit, included in an encoding unit obtained by hierarchically dividing a maximum encoding unit of the current picture based on depth. Video encoding method.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361748819P | 2013-01-04 | 2013-01-04 | |
US61/748,819 | 2013-01-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20140089488A true KR20140089488A (en) | 2014-07-15 |
Family
ID=51062345
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020140001509A KR20140089488A (en) | 2013-01-04 | 2014-01-06 | Method and apparatus for encoding video, and method and apparatus for decoding video |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR20140089488A (en) |
WO (1) | WO2014107073A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018052224A1 (en) * | 2016-09-13 | 2018-03-22 | 한국전자통신연구원 | Video encoding/decoding method and device, and recording medium having bitstream stored therein |
WO2019203487A1 (en) * | 2018-04-19 | 2019-10-24 | 엘지전자 주식회사 | Method and apparatus for encoding image on basis of intra prediction |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3477951B1 (en) * | 2016-06-24 | 2021-12-08 | KT Corporation | Adaptive reference sample filtering for intra prediction using distant pixel lines |
CN108429910B (en) * | 2017-02-15 | 2021-09-10 | 扬智科技股份有限公司 | Image compression method |
GB2587982B (en) | 2018-06-08 | 2023-01-04 | Kt Corp | Method and apparatus for processing video signal |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8224076B2 (en) * | 2006-11-29 | 2012-07-17 | Panasonic Corporation | Image processing method and image processing apparatus |
KR101601840B1 (en) * | 2009-02-23 | 2016-03-22 | 에스케이 텔레콤주식회사 | Video Encoding/Decoding Method and Apparatus Using Channel Correlation and Computer Readable Recording Medium Therefor |
KR101510108B1 (en) * | 2009-08-17 | 2015-04-10 | 삼성전자주식회사 | Method and apparatus for encoding video, and method and apparatus for decoding video |
EP2635030A4 (en) * | 2010-10-26 | 2016-07-13 | Humax Co Ltd | Adaptive intra-prediction encoding and decoding method |
KR20120140181A (en) * | 2011-06-20 | 2012-12-28 | 한국전자통신연구원 | Method and apparatus for encoding and decoding using filtering for prediction block boundary |
-
2014
- 2014-01-06 KR KR1020140001509A patent/KR20140089488A/en not_active Application Discontinuation
- 2014-01-06 WO PCT/KR2014/000108 patent/WO2014107073A1/en active Application Filing
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018052224A1 (en) * | 2016-09-13 | 2018-03-22 | 한국전자통신연구원 | Video encoding/decoding method and device, and recording medium having bitstream stored therein |
US11128862B2 (en) | 2016-09-13 | 2021-09-21 | Research and Telecommunications Research Insitute | Video encoding/decoding method and device, and recording medium having bitstream stored therein |
US11805247B2 (en) | 2016-09-13 | 2023-10-31 | Electronics And Telecommunications Research Institute | Video encoding/decoding method and device, and recording medium having bitstream stored therein |
WO2019203487A1 (en) * | 2018-04-19 | 2019-10-24 | 엘지전자 주식회사 | Method and apparatus for encoding image on basis of intra prediction |
Also Published As
Publication number | Publication date |
---|---|
WO2014107073A1 (en) | 2014-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102003047B1 (en) | Method and apparatus for video encoding with in-loop filtering based on tree-structured data unit, method and apparatus for video decoding with the same | |
KR101600063B1 (en) | Method and apparatus for video intra prediction encoding, and method and apparatus for video intra prediction decoding | |
KR101957945B1 (en) | Method and apparatus for video encoding with deblocking filtering based on tree-structured data unit, and method and apparatus for video decoding with the same | |
KR101995551B1 (en) | Method and apparatus for decoding image | |
KR102169608B1 (en) | Method and apparatus for encoding and decoding video to enhance intra prediction process speed | |
KR20150021822A (en) | Method and apparatus for determining intra prediction mode | |
KR20140113854A (en) | Method and apparatus for encoding and decoding video | |
KR20140089488A (en) | Method and apparatus for encoding video, and method and apparatus for decoding video | |
KR20180090971A (en) | Method and apparatus for image encoding | |
KR101607613B1 (en) | Method and apparatus for image encoding, and method and apparatus for image decoding | |
KR20150045980A (en) | Method and apparatus for image encoding, and method and apparatus for image decoding | |
KR101607614B1 (en) | Method and apparatus for image encoding, and method and apparatus for image decoding | |
KR101607611B1 (en) | Method and apparatus for image encoding, and method and apparatus for image decoding | |
KR101606683B1 (en) | Method and apparatus for image encoding, and method and apparatus for image decoding | |
KR101606853B1 (en) | Method and apparatus for image encoding, and method and apparatus for image decoding | |
KR101886259B1 (en) | Method and apparatus for image encoding, and computer-readable medium including encoded bitstream | |
KR20170026423A (en) | Method and apparatus for image decoding | |
KR20140140001A (en) | Method and apparatus for encoding video, and method and apparatus for decoding video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WITN | Withdrawal due to no request for examination |