KR20130050863A - Method and apparatus for video encoding with prediction and compensation using reference picture list, method and apparatus for video decoding with prediction and compensation using reference picture list
- Publication number
- KR20130050863A
- Authority
- KR
- South Korea
- Prior art keywords
- information
- reference list
- unit
- encoding
- picture
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
Abstract
Disclosed is a video prediction encoding method that, for an LC reference list combining the L0 reference list, which is list information of reference pictures for predictive encoding of an image having a B slice type, with one or more reference pictures included in the L1 reference list, sets for each picture an LC default number of reference pictures allocated to the LC reference list, and predictively encodes the image using an LC reference list that includes one or more reference pictures from among the reference pictures of the L0 reference list and the L1 reference list, based on the LC default number.
Description
The present invention relates to video encoding and decoding involving video prediction and compensation.
Background of the Invention: As hardware capable of playing back and storing high-resolution or high-definition video content is developed and disseminated, the need for a video codec that effectively encodes or decodes high-resolution or high-definition video content is increasing. According to a conventional video codec, video is encoded by a limited encoding method based on a macroblock of a predetermined size.
A video codec reduces the amount of data by using prediction techniques that exploit the high temporal or spatial correlation between the images of a video. According to the prediction technique, in order to predict a current image using neighboring images, image information is recorded using the temporal or spatial distance between the images, a prediction error, and the like.
A video prediction encoding method according to an embodiment of the present invention includes: setting, for each picture, LC default number information indicating a basic valid number of reference pictures allocated to an LC reference list, the LC reference list combining the L0 reference list, which is list information of reference pictures for predictive encoding of an image having a B slice type, with one or more reference pictures included in the L1 reference list; determining the LC reference list including at least one reference picture from among the reference pictures included in the L0 reference list and the L1 reference list, based on the LC default number information; and predictively encoding the image of the B slice type using the determined LC reference list.
The setting of the LC default number information for each picture according to an embodiment may include setting, for each slice, LC active number change confirmation information indicating whether the valid number of reference pictures allocated to the LC reference list is arbitrarily changed, based on reference list active number change confirmation information indicating whether the valid number of reference pictures is arbitrarily changed for each slice, and LC active number information indicating the current valid number of reference pictures after the arbitrary change.
The setting of the LC default number information for each picture according to an embodiment may include setting, for each slice, LC change related information including information on a method of changing the reference pictures or the reference order of the LC reference list.
According to an embodiment, the video prediction encoding method may omit transmission of LC combination confirmation information indicating whether the LC reference list is configured using one or more reference pictures of the L0 reference list and the L1 reference list.
According to an embodiment, the video predictive encoding method includes: transmitting the LC default number information along with parameters for a current picture; Transmitting the LC active number information together with parameters for a current slice; And transmitting the LC change related information together with parameters for the current slice.
A video prediction decoding method according to an embodiment of the present invention includes: reading, for each picture, LC default number information indicating a basic valid number of reference pictures allocated to an LC reference list, the LC reference list combining the L0 reference list, which is list information of reference pictures for predictive decoding of an image having a B slice type, with at least one reference picture included in the L1 reference list; determining the LC reference list including at least one reference picture from among the reference pictures included in the L0 reference list and the L1 reference list, based on the LC default number information; and predictively decoding the image of the B slice type using the determined LC reference list.
The reading of the LC default number information for each picture according to an embodiment may include: reading, for each slice, LC active number change confirmation information indicating whether the valid number of reference pictures allocated to the LC reference list is arbitrarily changed, based on reference list active number change confirmation information indicating whether the valid number of reference pictures is arbitrarily changed for each slice; and reading LC active number information indicating the current valid number of reference pictures of the LC reference list after the arbitrary change, based on the read LC active number change confirmation information.
According to an embodiment, the reading of the LC default number information for each picture may include reading, for each slice, LC change related information including information about a method of changing the reference pictures or the reference order of the LC reference list.
The determining of the LC reference list according to an embodiment may determine the LC reference list without reading LC combination confirmation information indicating whether to construct the LC reference list using one or more reference pictures of the L0 reference list and the L1 reference list.
According to an embodiment, a video predictive decoding method includes: extracting, from a received video stream, the LC default number information along with parameters for a current picture; Extracting the LC active number information along with parameters for a current slice; And extracting the LC change related information together with the parameters for the current slice.
According to an embodiment of the present invention, a video prediction encoding apparatus includes: an LC related information setting unit for setting, for each picture, LC default number information indicating a basic valid number of reference pictures allocated to an LC reference list, the LC reference list combining the L0 reference list, which is list information of reference pictures for predictive encoding of an image having a B slice type, with one or more reference pictures included in the L1 reference list; and a prediction encoding unit for determining the LC reference list including at least one reference picture from among the reference pictures included in the L0 reference list and the L1 reference list based on the LC default number information, and predictively encoding the image of the B slice type using the determined LC reference list.
According to an embodiment of the present invention, a video prediction decoding apparatus includes: an LC related information reading unit for reading, for each picture, LC default number information indicating a basic valid number of reference pictures allocated to an LC reference list, the LC reference list combining the L0 reference list, which is list information of reference pictures for predictive decoding of an image having a B slice type, with one or more reference pictures included in the L1 reference list; and a prediction decoding unit for determining the LC reference list including at least one reference picture from among the reference pictures included in the L0 reference list and the L1 reference list based on the LC default number information, and predictively decoding the image of the B slice type using the determined LC reference list.
The present invention includes a computer-readable recording medium having recorded thereon a program for implementing the video prediction encoding method according to each embodiment. The present invention also includes a computer-readable recording medium having recorded thereon a program for implementing the video prediction decoding method according to each embodiment.
1A and 1B illustrate block diagrams of a video predictor encoding apparatus and a video encoder according to an embodiment.
2 is a block diagram of a video prediction decoding apparatus and a video decoder according to an embodiment.
3 shows a display order and a coding order of a picture sequence of a video.
FIG. 4 illustrates a relationship between L0, L1, and LC reference lists of pictures that are B slices in the picture sequence of FIG. 3.
5 shows syntax of LC related information set for each slice.
6 is a diagram illustrating syntax of reference list default number information according to an embodiment.
7 illustrates LC reference lists set according to LC default number information according to an embodiment.
8 and 9 illustrate syntaxes of reference list active number related information according to various embodiments.
10 is a diagram illustrating syntax of reference list change information, according to an exemplary embodiment.
11 is a flowchart of a video prediction encoding method, according to an embodiment.
12 is a flowchart of a video predictive decoding method according to an embodiment.
13 is a block diagram of a video encoding apparatus involving video prediction based on coding units having a tree structure, according to an embodiment of the present invention.
14 is a block diagram of a video decoding apparatus involving video prediction based on coding units having a tree structure, according to an embodiment of the present invention.
15 illustrates a concept of coding units, according to an embodiment of the present invention.
16 is a block diagram of an image encoder based on coding units, according to an embodiment of the present invention.
17 is a block diagram of an image decoder based on coding units, according to an embodiment of the present invention.
FIG. 18 illustrates deeper coding units according to depths, and partitions, according to an embodiment of the present invention.
19 illustrates a relationship between a coding unit and transformation units, according to an embodiment of the present invention.
20 illustrates encoding information according to depths, according to an embodiment of the present invention.
21 is a diagram of deeper coding units according to depths, according to an embodiment of the present invention.
22, 23, and 24 illustrate a relationship between a coding unit, a prediction unit, and a transformation unit, according to an embodiment of the present invention.
FIG. 25 illustrates a relationship between coding units, prediction units, and transformation units, according to encoding mode information of Table 1.
26 is a flowchart of a video encoding method based on coding units having a tree structure, according to an embodiment of the present invention.
27 is a flowchart of a video decoding method based on coding units having a tree structure, according to an embodiment of the present invention.
Hereinafter, with reference to FIGS. 1A to 12, a video prediction method and apparatus capable of bi-prediction using a reference list, a video prediction encoding method and apparatus, and a video prediction decoding method and apparatus are disclosed.
1A and 1B show block diagrams of a video prediction encoding apparatus and a video encoder according to an embodiment.
The video predictor encoding
The video predictor encoding
The video predictor encoding
The video prediction encoding
Prediction encoding includes inter prediction, which predicts a current image using temporally preceding and following images, and intra prediction, which predicts a current image using spatially neighboring samples. Accordingly, a temporally neighboring image is used as the reference image in inter prediction, and a spatially neighboring region is used as the reference in intra prediction, so that the current image can be predicted. The current image and the reference image may be image data units including pictures, frames, fields, slices, and the like.
The video
The reference list may include an L0 reference list, an L1 reference list, and an LC reference list. For example, the reference list for forward prediction of an image having a P slice type may include an L0 reference list for
In addition, the reference list for bi-prediction of a B slice may further include an LC reference list. The LC reference list may include one or more reference pictures from among the reference pictures of the L0 reference list and the reference pictures of the L1 reference list.
The L0 reference list, L1 reference list, and LC reference list may each include an index indicating one or more reference images and reference order information. The basic valid number of reference pictures allocated to the reference list may be limited in advance. However, the number or reference order of reference images may be changed for each image as necessary. Accordingly, the video
The
The
For intra prediction, an index indicating a reference region among neighboring regions adjacent to the current region in the same image as the current image may be determined as reference information.
The
The
In addition, the LC list may include both an index for a reference picture for forward prediction and an index for a reference picture for backward prediction.
The
The
For example, the
The
According to an embodiment, the LC related
The effective number of reference pictures allocated to the reference list means the number of pictures valid for reference. The default number of reference lists indicates the basic valid number of reference pictures allocated to the reference list. The LC related
The
The
The LC related
The active number of reference images of the reference list according to an embodiment indicates the current valid number of reference images when the effective number of reference images for the current image is arbitrarily changed.
According to an embodiment, the LC-related
According to an embodiment, the LC related
The LC related
The reference list change related information according to an embodiment indicates information about a method of changing reference images or changing reference sequences assigned to the reference list.
According to an embodiment, the LC related
According to an embodiment, the LC related
The video
The video
The video
For example, the video
The
The video
The video
The coding unit according to an embodiment may include coding units having a tree structure according to an embodiment, as well as a block of a fixedly determined form. According to an embodiment, coding units having a tree structure, prediction units, and partitions according to the tree structure will be described below with reference to FIGS. 13 to 27.
The video
The
Accordingly, compression encoding of the
The
2 is a block diagram of a video prediction decoding apparatus and a video decoder according to an embodiment.
The video
The video
The video
The video
The LC related
The video
The LC related
According to an embodiment, the LC related
According to an embodiment, the LC related
According to an embodiment, the LC related
The LC related
The
The LC related
For example, the LC related
According to an embodiment, the LC related
The LC-related
The LC related
The LC related
The
The
Since the video
The video
The coding unit according to an embodiment may include not only a macroblock of a fixed form but also coding units having a tree structure according to an embodiment. Coding units having a tree structure, prediction units, and partitions according to the tree structure are described below with reference to FIGS. 13 to 27.
When the video
The video
3 shows a display order and a coding order of a picture sequence of a video.
The indices of
FIG. 4 illustrates a relationship between L0, L1, and LC reference lists of pictures having a B slice type in the picture sequence of FIG. 3.
A picture having a B slice type may refer to up to four pictures, and the L0 and L1 reference lists may include up to two reference pictures, respectively.
In the table 40, the POC column lists the indices of the current pictures in decoding order. The L0, L1, and LC columns of the table 40 list the indices of the reference pictures allocated to the L0, L1, and LC reference lists for each current picture, respectively.
In principle, among the pictures coded before the current picture, pictures whose indices precede (follow) that of the current picture may be allocated as reference pictures to the L0 reference list (L1 reference list). In addition, the reference order may be determined so that, among the pictures whose indices precede (follow) the current picture, pictures closer to the current picture are referenced first.
Taking
Exceptionally, when fewer than two of the previously coded pictures have indices preceding (following) that of the current picture, a previously coded picture whose index follows (precedes) the current picture and whose display order is closest to the current picture may be adopted as a reference picture of the L0 reference list (L1 reference list).
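The ordering rule described above can be illustrated with a short sketch (Python; the function and variable names are hypothetical, and the derivation is simplified relative to the full reference-picture handling of the embodiment):

def build_default_l0_l1(current_poc, decoded_pocs, max_refs=2):
    """Illustrative construction of default L0/L1 reference lists.

    decoded_pocs: display-order indices (POCs) of already-decoded pictures.
    L0 prefers pictures preceding the current picture, closest first;
    L1 prefers pictures following the current picture, closest first.
    When fewer than max_refs candidates exist on the preferred side,
    the closest pictures from the other side fill the remaining slots.
    """
    before = sorted((p for p in decoded_pocs if p < current_poc),
                    key=lambda p: current_poc - p)   # closest first
    after = sorted((p for p in decoded_pocs if p > current_poc),
                   key=lambda p: p - current_poc)    # closest first
    l0 = (before + after)[:max_refs]   # preceding side first, then fallback
    l1 = (after + before)[:max_refs]   # following side first, then fallback
    return l0, l1

# Example in the spirit of FIGS. 3 and 4: pictures 0 and 8 decoded, current POC 4.
print(build_default_l0_l1(4, [0, 8]))   # ([0, 8], [8, 0])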
Taking
Taking
The LC reference list may consist of a combination of the reference pictures of the L0 reference list and the reference pictures of the L1 reference list. Therefore, the relationship among the number of reference pictures (N_L0) of the L0 reference list, the number of reference pictures (N_L1) of the L1 reference list, and the number of reference pictures (N_LC) of the LC reference list may be determined as shown in relation (A) below.
Relation (A): 0 ≤ N_LC ≤ N_L0 + N_L1
That is, the number N_LC of reference images of the LC reference list may be equal to or greater than zero, and may be less than or equal to the sum of the number N_L0 of reference images of the L0 reference list and the number N_L1 of reference images of the L1 reference list.
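A minimal check of relation (A) follows; the helper name is hypothetical and the snippet is illustrative, not disclosed syntax:

def valid_lc_sizes(n_l0, n_l1):
    """Relation (A): the LC reference list may hold between 0 and N_L0 + N_L1
    reference pictures; duplicates shared by L0 and L1 are not removed here."""
    return range(0, n_l0 + n_l1 + 1)

# With up to two reference pictures in each of L0 and L1 (FIG. 4):
assert list(valid_lc_sizes(2, 2)) == [0, 1, 2, 3, 4]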
Referring to the table 40 as an example, the LC reference list may include up to four reference pictures, but the
Accordingly, the reference pictures of the LC reference list and their number may be determined depending on the state of the reference pictures allocated to the L0 reference list and the L1 reference list.
The video
5 shows syntax of LC related information set for each slice.
The slice header (slice_header ()) 50 is set for each slice, and each
The LC related
For example, if it is determined that the LC reference list is formed by combining the reference images of the L0 / L1 reference list based on the LC
When the LC reference list is changed according to the LC
Thus, in the case of FIG. 5, if LC related
6 is a diagram illustrating syntax of reference list default number information according to an embodiment.
PPS information pic_parameter_set_rbsp() 60 according to an embodiment may include L0 default number information (num_ref_idx_l0_default_active_minus1, 61), L1 default number information (num_ref_idx_l1_default_active_minus1, 63), and LC default number information (num_ref_idx_lc_default_active).
Therefore, the basic valid number for the LC reference list, as well as for the L0 reference list and the L1 reference list, can be set for every picture. In addition, for each
In addition, although the LC combination confirmation information (ref_pic_list_combination_flag, 53) is set and transmitted/read for each slice in FIG. 5, in FIG. 6 it may be implied rather than signaled. Therefore, as the LC default number information is set for each picture, transmission of separate LC combination confirmation information for each slice may be omitted.
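As an illustration of how such per-picture defaults might be read, the following sketch assumes a hypothetical bitstream reader (BitReader.ue) standing in for Exp-Golomb parsing; the element semantics follow the description of FIGS. 6 and 7, and the treatment of the LC element is an assumption rather than the disclosed syntax:

class BitReader:
    """Hypothetical reader stub; a real decoder would parse ue(v) Exp-Golomb codes."""
    def __init__(self, values):
        self._values = iter(values)

    def ue(self):
        # Return the next already-decoded unsigned value.
        return next(self._values)

def parse_pps_default_numbers(reader):
    """Sketch of reading the per-picture default reference counts of FIG. 6.

    The '_minus1' convention means a coded value of k signals k + 1 entries.
    The LC element is assumed to be coded directly, with 0 meaning 'derive the
    LC size automatically from the L0/L1 state' as described for FIG. 7.
    """
    return {
        'num_ref_idx_l0_default_active': reader.ue() + 1,
        'num_ref_idx_l1_default_active': reader.ue() + 1,
        'num_ref_idx_lc_default_active': reader.ue(),   # 0 = automatic
    }

print(parse_pps_default_numbers(BitReader([1, 1, 2])))
# {'num_ref_idx_l0_default_active': 2, 'num_ref_idx_l1_default_active': 2,
#  'num_ref_idx_lc_default_active': 2}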
7 illustrates LC reference lists set according to LC default number information according to an embodiment.
The LC default number information num_ref_idx_lc_default_active according to an embodiment may indicate a basic valid number of reference pictures allocated to the LC reference list.
The
Similar to the
For example, the first table 70 shows the reference pictures of the L0, L1, and LC reference lists when the LC default number information (num_ref_idx_lc_default_active) is set to 0. According to an embodiment, when the LC default number information (num_ref_idx_lc_default_active) is set to 0, the basic valid number of reference pictures for the LC reference list is not set separately, and the basic valid number of the LC reference list is determined automatically according to the states of the L0 reference list and the L1 reference list. Therefore, in the first table 70, the LC reference list for each current picture may include a combination of all the reference pictures of the L0 reference list and the L1 reference list.
For example, the second table 75 shows the reference pictures of the L0, L1, and LC reference lists when the LC default number information (num_ref_idx_lc_default_active) is set to 2. When the LC default number information (num_ref_idx_lc_default_active) is set to a value other than 0, the basic valid number of reference pictures for the LC reference list may be determined as the value of the LC default number information. Therefore, since the basic valid number of reference pictures for the LC reference list in the second table 75 is determined to be two, the LC reference list for the current picture may include only the two reference pictures closest to the current picture from among all the reference pictures of the L0 reference list and the L1 reference list.
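A sketch of the FIG. 7 behaviour follows; the interleaving of L0 and L1 entries and the removal of duplicates are assumptions made for illustration, since the text above does not fix the exact combination order:

def build_lc_list(l0, l1, lc_default_active, current_poc):
    """Sketch of FIG. 7: lc_default_active == 0 combines every L0/L1 entry;
    lc_default_active == k > 0 keeps only the k pictures closest to the
    current picture. Interleaving and duplicate removal are assumptions."""
    combined = []
    for pair in zip(l0, l1):
        combined.extend(pair)
    combined += l0[len(l1):] + l1[len(l0):]          # leftover entries
    seen, lc = set(), []
    for poc in combined:
        if poc not in seen:                          # drop duplicates
            seen.add(poc)
            lc.append(poc)
    if lc_default_active == 0:
        return lc                                    # automatic: full combination
    lc.sort(key=lambda poc: abs(poc - current_poc))  # closest pictures first
    return lc[:lc_default_active]

# Current picture POC 4 with L0 = [0, 8] and L1 = [8, 0]:
print(build_lc_list([0, 8], [8, 0], 0, 4))   # [0, 8]
print(build_lc_list([0, 8], [8, 0], 2, 4))   # [0, 8] (both equally close here)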
Also, with reference to FIG. 6, an embodiment has been described in which the default number information for each of the L0/L1/LC reference lists is included in the PPS information 60; however, other embodiments are not limited thereto.
Accordingly, the default number information for the L0 / L1 / LC reference list according to another embodiment may be transmitted / read along with sequence parameters, adaptation parameters or arbitrary parameters. The default number information for the L0 / L1 / LC reference list according to another embodiment may be set for each slice to be transmitted / read with various parameters for each slice, or set for each sequence for various parameters for each sequence. Can be transmitted / read together.
8 and 9 illustrate syntaxes of reference list active number related information according to various embodiments.
Referring to FIGS. 8 and 9, the slice headers slice_header() 80 and 90 include L0 active number related information, L1 active number related information, and LC active number related information, respectively.
Specifically, starting from the
First, the reference list active number change
When it is determined that the valid number of reference pictures of the reference list is arbitrarily changed based on the reference list active number
If it is determined that the effective number of reference images of the reference list has been arbitrarily changed, and the current slice is a B slice type, the L1 active number information (num_ref_idx_l1_active_minus1, 85) may be read. The current valid number of reference pictures allocated to the L1 reference list may be determined based on the L1
In addition, separately from the L0 active number related information and the L1 active number related information, when the current slice is a B slice type, the LC active number related information may be read. First, LC active number change confirmation information (num_ref_idx_lc_active_override_flag) 87 may be read. Based on the LC active number
Based on the LC active number
Next, referring to the
In addition, when it is determined that the valid number of reference pictures of the reference list is arbitrarily changed and the current slice is a B slice type, the L1 active number information (num_ref_idx_l1_active_minus1, 95) and the LC active number information (num_ref_idx_lc_active_minus1, 97) may be read together. The current valid number of reference pictures allocated to the L1 reference list is determined based on the L1
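A simplified sketch of the FIG. 9 branch is shown below; `coded` stands in for a hypothetical syntax reader yielding already-decoded values, and the element names are those listed above:

def parse_active_numbers(coded, slice_type, defaults):
    """Sketch of the FIG. 9 slice-header branch (element names from FIGS. 8-9).

    `coded` is an iterator of already-decoded syntax values, a hypothetical
    stand-in for an Exp-Golomb bit reader. The override flag signals that the
    valid reference counts are arbitrarily changed for this slice; otherwise
    the per-picture defaults apply.
    """
    active = dict(defaults)                 # start from the per-picture defaults
    override = next(coded)                  # num_ref_idx_active_override_flag
    if override:
        active['l0'] = next(coded) + 1      # num_ref_idx_l0_active_minus1
        if slice_type == 'B':
            active['l1'] = next(coded) + 1  # num_ref_idx_l1_active_minus1
            active['lc'] = next(coded) + 1  # num_ref_idx_lc_active_minus1
    return active

defaults = {'l0': 2, 'l1': 2, 'lc': 2}
print(parse_active_numbers(iter([1, 0, 0, 1]), 'B', defaults))
# {'l0': 1, 'l1': 1, 'lc': 2}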
Therefore, in the case of FIGS. 8 and 9, active number related information about the L0 / L1 / LC reference list may be set for each slice. In addition, in the embodiment described in FIG. 5, the LC
The LC
10 is a diagram illustrating syntax of reference list change information, according to an exemplary embodiment.
In FIG. 10, the slice header slice_header() 150 includes reference list change related information ref_pic_list_modification() 151.
The reference list change
First, when the current slice is a P or B slice type, the L0 modification confirmation information (ref_pic_list_modification_flag_l0, 153) may be read. Based on the L0
When it is determined that the reference picture of the L0 reference list is changed based on the L0
Next, when the current slice is a B slice type, the L1 modification confirmation information (ref_pic_list_modification_flag_l1, 161) may be read. Based on the L1
The reference picture number difference information (abs_diff_pic_num_minus1, 157, 165) indicates a difference value between the video number of the reference picture to be allocated to the current index of the current reference list and the predicted value of the reference picture number. The long term reference picture number information (long_term_pic_num, 159, 167) indicates the number of the long term picture to be allocated to the current index of the current reference list. The long term image may be reference frames or reference fields, and the long term reference
Therefore, based on the reference image
When the current slice is a B slice type, LC change confirmation information (ref_pic_list_modification_flag_lc) 169 may be read. When it is determined that the LC reference list is changed according to the LC
Accordingly, among the reference pictures of the L0 reference list or the L1 reference list, the reference picture to be moved and allocated to the current index of the current LC reference list is determined again, so that a reference picture may be replaced with another reference picture or the reference order of the reference pictures may be changed.
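The reordering effect just described can be sketched as follows; the command representation is simplified (the actual syntax identifies pictures through abs_diff_pic_num_minus1 or long_term_pic_num rather than naming them directly), so this is an assumption-laden illustration rather than the disclosed parsing process:

def apply_lc_modification(lc_list, commands):
    """Simplified sketch of the reordering effect of ref_pic_list_modification()
    on the LC list: each command names a reference picture to be placed at the
    next index, and the remaining pictures shift back. The decoding of
    abs_diff_pic_num_minus1 / long_term_pic_num into a picture is omitted."""
    modified = list(lc_list)
    for idx, picture in enumerate(commands):
        modified.remove(picture)        # take the picture out of its old slot
        modified.insert(idx, picture)   # place it at the current index
    return modified

# Move picture 8 to the front of an LC list [0, 8, 16]:
print(apply_lc_modification([0, 8, 16], [8]))   # [8, 0, 16]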
Accordingly, in the case of FIG. 10, change related information of a reference picture with respect to the L0 / L1 / LC reference list may be set for each slice. In addition, in the embodiment described in FIG. 5, the LC change related
In addition, various embodiments have been described above in which the active number related information and the change related information for the L0/L1/LC reference lists are included in the slice header; however, other embodiments are not limited thereto.
Accordingly, the active number related information and the change related information for the L0 / L1 / LC reference list according to another embodiment may be transmitted / read along with the adaptation parameters or any parameters. Active number related information and change related information for the L0 / L1 / LC reference list according to another embodiment may be set for each sequence and transmitted / read together with various parameters for each sequence.
11 is a flowchart of a video prediction encoding method, according to an embodiment.
In
In
In
According to an embodiment, the video predictive encoding method may omit setting of the LC combination confirmation information indicating whether the LC reference list is configured using one or more reference images of the L0 reference list and the L1 reference list.
In addition to the LC default number information for each picture set in
According to one embodiment, for each slice, based on the reference list active number change confirmation information, the LC active number change confirmation information and the LC active number information may be set. Further, for each slice, LC active number information may be set together with at least one of L0 active number information and L1 active number information based on the reference list active number change confirmation information.
According to an embodiment, LC change related information may be set for each slice. In addition, for each slice, along with at least one of the L0 change related information and the L1 change related information, the LC change related information may be set for each slice.
According to an embodiment, the LC default number information set for the current picture may be transmitted together with the parameters for the current picture. In addition, according to an embodiment, the LC active number information set for the current slice may be transmitted together with the parameters for the current slice. According to an embodiment, the LC change related information set for the current picture may be transmitted together with the parameters for the current slice.
12 is a flowchart of a video predictive decoding method according to an embodiment.
In
In
In
According to an embodiment, the video prediction decoding method may determine the LC reference list without reading LC combination confirmation information indicating whether to construct the LC reference list using one or more reference pictures of the L0 reference list and the L1 reference list.
In addition to the LC default number information read in
According to one embodiment, for each slice, the reference list active number change confirmation information is read, and based on the reference list active number change confirmation information, the LC active number information is combined with at least one of the L0 active number information and the L1 active number information. Can be read.
In addition to the LC default number information read in
Hereinafter, a video encoding method and apparatus for performing prediction encoding on a prediction unit and a partition based on coding units having a tree structure, and a video decoding method and apparatus for performing prediction decoding will be described with reference to FIGS. 13 to 27.
13 is a block diagram of a video encoding apparatus involving video prediction based on coding units having a tree structure, according to an embodiment of the present invention.
According to an embodiment, the
The maximum coding
The coding unit according to an embodiment may be characterized by a maximum size and a depth. The depth indicates the number of times the coding unit is spatially split from the maximum coding unit; as the depth deepens, coding units according to depths may be split from the maximum coding unit down to the minimum coding unit. The depth of the maximum coding unit is the uppermost depth, and the minimum coding unit may be defined as the lowermost coding unit. Since the size of the coding unit according to each depth decreases as the depth of the maximum coding unit deepens, a coding unit of an upper depth may include coding units of a plurality of lower depths.
As described above, the image data of the current picture may be divided into maximum coding units according to the maximum size of the coding unit, and each maximum coding unit may include coding units divided by depths. Since the maximum coding unit is divided according to depths, image data of a spatial domain included in the maximum coding unit may be hierarchically classified according to depths.
The maximum depth and the maximum size of the coding unit that limit the total number of times of hierarchically dividing the height and the width of the maximum coding unit may be preset.
The encoding
Image data in the largest coding unit is encoded based on coding units according to depths according to at least one depth less than or equal to the maximum depth, and encoding results based on the coding units for each depth are compared. As a result of comparing the encoding error of the coding units according to depths, a depth having the smallest encoding error may be selected. At least one coding depth may be determined for each maximum coding unit.
As the depth of the maximum coding unit deepens, coding units are split hierarchically and the number of coding units increases. In addition, even for coding units of the same depth included in one maximum coding unit, an encoding error is measured for the data of each coding unit, and whether to split to a lower depth is determined. Therefore, even for data included in one maximum coding unit, the encoding error according to depths differs depending on the position, and thus the coded depth may be determined differently according to the position. Accordingly, one or more coded depths may be set for one maximum coding unit, and the data of the maximum coding unit may be partitioned according to the coding units of the one or more coded depths.
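The per-position depth decision described above can be sketched as a recursive comparison of encoding costs; the callbacks and the cost model below are hypothetical stand-ins for the rate-distortion measurements an actual encoder would use:

def choose_coded_depths(block, depth, max_depth, encode_cost, split):
    """Sketch of the recursive coded-depth decision: encode the block at the
    current depth, recursively encode its four sub-blocks, and keep whichever
    alternative yields the smaller total error. `encode_cost` and `split` are
    hypothetical callbacks (e.g. an R-D cost and a quad-split of the samples).
    Returns (total_cost, list of (region, coded_depth))."""
    cost_here = encode_cost(block, depth)
    if depth == max_depth:
        return cost_here, [(block, depth)]
    cost_below, regions_below = 0, []
    for sub in split(block):                      # four lower-depth coding units
        sub_cost, sub_regions = choose_coded_depths(sub, depth + 1, max_depth,
                                                    encode_cost, split)
        cost_below += sub_cost
        regions_below += sub_regions
    if cost_here <= cost_below:
        return cost_here, [(block, depth)]        # keep the current depth
    return cost_below, regions_below              # split into lower depths

# Toy run: a made-up cost table that favours depth 1 over depths 0 and 2.
costs = {0: 100, 1: 20, 2: 30}
total, regions = choose_coded_depths('root', 0, 2,
                                     lambda b, d: costs[d], lambda b: [b] * 4)
print(total, regions)   # 80 [('root', 1), ('root', 1), ('root', 1), ('root', 1)]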
Therefore, the
The maximum depth according to an embodiment is an index related to the number of splits from the maximum coding unit to the minimum coding unit. The first maximum depth according to an embodiment may indicate the total number of splits from the maximum coding unit to the minimum coding unit, and the second maximum depth according to an embodiment may indicate the total number of depth levels from the maximum coding unit to the minimum coding unit. For example, when the depth of the maximum coding unit is 0, the depth of a coding unit split once from the maximum coding unit may be set to 1, and the depth of a coding unit split twice may be set to 2. In this case, if a coding unit split four times from the maximum coding unit is the minimum coding unit, depth levels 0, 1, 2, 3, and 4 exist, so that the first maximum depth may be set to 4 and the second maximum depth may be set to 5.
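A short sketch of the two conventions, with hypothetical function and parameter names:

def maximum_depths(max_cu_size, min_cu_size):
    """Sketch of the two 'maximum depth' conventions described above: the first
    counts the splits from the maximum to the minimum coding unit, the second
    counts the depth levels (splits + 1)."""
    splits, size = 0, max_cu_size
    while size > min_cu_size:
        size //= 2          # each split halves the height and width
        splits += 1
    return splits, splits + 1

# 64x64 maximum coding unit split four times down to 4x4:
print(maximum_depths(64, 4))   # (4, 5)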
Prediction encoding and transformation of the largest coding unit may be performed. Similarly, prediction encoding and transformation are performed based on depth-wise coding units for each maximum coding unit and for each depth below the maximum depth.
Since the number of coding units for each depth increases each time the maximum coding unit is divided for each depth, encoding including prediction encoding and transformation should be performed on all the coding units for each depth generated as the depth deepens. For convenience of explanation, prediction encoding and transformation will be described based on coding units of a current depth among at least one maximum coding unit.
The
For example, the
For prediction encoding of the maximum coding unit, prediction encoding may be performed based on a coding unit of a coded depth, that is, a coding unit that is no longer split, according to an embodiment. Hereinafter, the coding unit that is no longer split and on which prediction encoding is based is referred to as a 'prediction unit'. A partition obtained by splitting the prediction unit may include the prediction unit itself and a data unit obtained by splitting at least one of the height and the width of the prediction unit. A partition is a data unit obtained by splitting the prediction unit of a coding unit, and the prediction unit may be a partition having the same size as the coding unit.
For example, if a coding unit of size 2Nx2N (where N is a positive integer) is no longer split, it becomes a prediction unit of size 2Nx2N, and the size of a partition may be 2Nx2N, 2NxN, Nx2N, NxN, and the like. According to an embodiment, the partition type may selectively include not only symmetric partitions obtained by splitting the height or width of the prediction unit in a symmetric ratio, but also partitions split in an asymmetric ratio such as 1:n or n:1, partitions split into geometric forms, partitions of arbitrary shapes, and the like.
The prediction mode of the prediction unit may be at least one of an intra mode, an inter mode, and a skip mode. For example, the intra mode and the inter mode may be performed on partitions of size 2Nx2N, 2NxN, Nx2N, and NxN, and the skip mode may be performed only on a partition of size 2Nx2N. Encoding may be performed independently for each prediction unit within the coding unit, and the prediction mode having the smallest encoding error may be selected.
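As a small illustration of this constraint (hypothetical helper; sizes given for a 2Nx2N coding unit):

def allowed_prediction_modes(partition, coding_unit_size):
    """Sketch of the mode/partition constraint above: intra and inter may be
    tried on 2Nx2N, 2NxN, Nx2N, and NxN partitions, while skip is tried only
    on the 2Nx2N partition."""
    two_n = coding_unit_size
    n = coding_unit_size // 2
    sizes = {'2Nx2N': (two_n, two_n), '2NxN': (two_n, n),
             'Nx2N': (n, two_n), 'NxN': (n, n)}
    modes = ['intra', 'inter']
    if partition == '2Nx2N':
        modes.append('skip')
    return sizes[partition], modes

print(allowed_prediction_modes('2NxN', 32))   # ((32, 16), ['intra', 'inter'])
print(allowed_prediction_modes('2Nx2N', 32))  # ((32, 32), ['intra', 'inter', 'skip'])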
In addition, the
The transformation unit within a coding unit may also be recursively split into smaller transformation units in a manner similar to the coding units according to the tree structure, so that residual data of the coding unit may be partitioned according to transformation units along the tree structure.
For a transformation unit according to an embodiment, a transformation depth indicating the number of splits from the height and width of the coding unit down to the transformation unit may be set. For example, for a current coding unit of size 2Nx2N, the transformation depth may be 0 if the size of the transformation unit is 2Nx2N, 1 if the size of the transformation unit is NxN, and 2 if the size of the transformation unit is N/2xN/2. That is, transformation units according to a tree structure may be set according to transformation depths.
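A minimal sketch of this convention (hypothetical function name; sizes in luma samples):

def transformation_depth(coding_unit_size, transform_unit_size):
    """Sketch of the transformation-depth convention above: depth 0 when the
    transformation unit equals the 2Nx2N coding unit, 1 for NxN, 2 for N/2xN/2,
    i.e. the number of halvings from the coding-unit size."""
    depth, size = 0, coding_unit_size
    while size > transform_unit_size:
        size //= 2
        depth += 1
    return depth

assert transformation_depth(64, 64) == 0   # 2Nx2N   -> depth 0
assert transformation_depth(64, 32) == 1   # NxN     -> depth 1
assert transformation_depth(64, 16) == 2   # N/2xN/2 -> depth 2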
The coding information according to the coding depth needs not only the coding depth but also prediction related information and conversion related information. Therefore, the coding
A method of determining a coding unit, a prediction unit / partition, and a transformation unit according to a tree structure of a maximum coding unit according to an embodiment will be described in detail with reference to FIGS. 9 to 19.
The encoding
The
The encoded image data may be a result of encoding residual data of the image.
The information on the depth-dependent coding mode may include coding depth information, partition type information of a prediction unit, prediction mode information, size information of a conversion unit, and the like.
The coded depth information may be defined using depth-specific segmentation information indicating whether to encode to a coding unit of a lower depth without encoding to the current depth. If the current depth of the current coding unit is a coding depth, since the current coding unit is encoded in a coding unit of the current depth, split information of the current depth may be defined so that it is no longer divided into lower depths. On the contrary, if the current depth of the current coding unit is not the coding depth, encoding should be attempted using the coding unit of the lower depth, and thus split information of the current depth may be defined to be divided into coding units of the lower depth.
If the current depth is not the coded depth, encoding is performed on the coding unit divided into the coding units of the lower depth. Since at least one coding unit of a lower depth exists in the coding unit of the current depth, encoding may be repeatedly performed for each coding unit of each lower depth, and recursive coding may be performed for each coding unit of the same depth.
Since the coding units of the tree structure are determined in one maximum coding unit and information on at least one coding mode is determined for each coding unit of coding depth, information on at least one coding mode is determined for one maximum coding unit . Since the data of the maximum encoding unit is hierarchically divided according to the depth and the depth of encoding may be different for each position, information on the encoding depth and the encoding mode may be set for the data.
Accordingly, the
The minimum unit according to an embodiment is a square data unit whose size is obtained by splitting the minimum coding unit of the lowermost coded depth into four. The minimum unit according to an embodiment may be a square data unit of the maximum size that can be included in all of the coding units, prediction units, partition units, and transformation units included in the maximum coding unit.
For example, the encoding information output through the
Information on the maximum size of a coding unit defined for each picture, slice or GOP, and information on the maximum depth can be inserted into a header, a sequence parameter set, or a picture parameter set of a bitstream.
Information on the maximum size of the conversion unit allowed for the current video and information on the minimum size of the conversion unit can also be output through a header, a sequence parameter set, or a picture parameter set or the like of the bit stream. The
According to an embodiment of the simplest form of the
Therefore, the
Therefore, if an image having a very high image resolution or a very large data amount is encoded in units of existing macroblocks, the number of macroblocks per picture becomes excessively large. This increases the amount of compression information generated for each macroblock, so that the burden of transmission of compressed information increases and the data compression efficiency tends to decrease. Therefore, the video encoding apparatus according to an embodiment can increase the maximum size of the encoding unit in consideration of the image size, and adjust the encoding unit in consideration of the image characteristic, so that the image compression efficiency can be increased.
The
The
The
The
The
The
Reference list related information for bi-prediction according to an embodiment may be encoded for each slice including the current partition, for each sequence, or for each picture.
14 is a block diagram of a video decoding apparatus involving video prediction based on coding units having a tree structure, according to an embodiment of the present invention.
A
Definition of various terms such as a coding unit, a depth, a prediction unit, a transformation unit, and information about various encoding modes for a decoding operation of the
The receiving
Also, the image data and encoding
The information about the coded depth and the encoding mode for each maximum coding unit may be set for one or more pieces of coded depth information, and the information about the encoding mode for each coded depth may include partition type information of the coding unit, prediction mode information, transformation unit size information, and the like. In addition, split information according to depths may be extracted as the coded depth information.
The information about the coded depth and the encoding mode according to the maximum coding units extracted by the image data and the
The encoding information for the encoding depth and the encoding mode according to the embodiment may be allocated for a predetermined data unit among the encoding unit, the prediction unit and the minimum unit. Therefore, the image data and the encoding
The
The image
In addition, the image
The image
In other words, the encoding information set for the predetermined unit of data among the encoding unit, the prediction unit and the minimum unit is observed, and the data units holding the encoding information including the same division information are collected, and the image
In addition, the
The image data and
Also, the image data and
The image data and
The
The
The
Accordingly, the
As a result, the
Therefore, even for an image of high resolution or an excessively large amount of data, the image data can be efficiently decoded and restored according to the size of the coding unit and the encoding mode adaptively determined for the characteristics of the image, by using the information about the optimum encoding mode transmitted from the encoding end.
15 illustrates a concept of coding units, according to an embodiment of the present invention.
In the example of coding units, the size of a coding unit is expressed as width x height, and the coding unit sizes may include 64x64, 32x32, 16x16, and 8x8. A coding unit of size 64x64 may be split into partitions of size 64x64, 64x32, 32x64, and 32x32; a coding unit of size 32x32 into partitions of size 32x32, 32x16, 16x32, and 16x16; a coding unit of size 16x16 into partitions of size 16x16, 16x8, 8x16, and 8x8; and a coding unit of size 8x8 into partitions of size 8x8, 8x4, 4x8, and 4x4.
With respect to the
When the resolution is high or the amount of data is large, it is preferable that the maximum size of the coding unit be relatively large so as to not only improve coding efficiency but also accurately reflect the characteristics of the image. Accordingly, the
Since the maximum depth of the
Since the maximum depth of the
16 is a block diagram of an image encoding unit based on an encoding unit according to an embodiment of the present invention.
The
The data output from the
In order to be applied to the
In particular, the
The
17 is a block diagram of an image decoder based on coding units, according to an embodiment of the present invention.
The
For the image data of the spatial domain, the
Data in the spatial domain that has passed through the
In order to decode the image data in the
In order to be applied to the
In particular, the
The
18 is a diagram of deeper coding units according to depths, and partitions, according to an embodiment of the present invention.
The
The
That is, the
Prediction units and partitions of coding units are arranged along the horizontal axis for each depth. That is, if the
Likewise, the prediction unit of the
Similarly, the prediction unit of the
Likewise, the prediction unit of the
Finally, the coding unit 650 of size 4x4 having a depth of 4 is the minimum coding unit and the coding unit of the lowest depth, and the corresponding prediction unit may also be set only as the partition 650 having a size of 4x4.
The
The number of deeper coding units according to depths required to include data of the same range and size increases as the depth deepens. For example, four coding units of depth 2 are required to cover the data included in one coding unit of depth 1.
For each depth coding, encoding may be performed for each prediction unit of a coding unit according to depths along a horizontal axis of the
19 illustrates a relationship between a coding unit and transformation units, according to an embodiment of the present invention.
The
For example, in the
In addition, the data of the coding unit 710 of size 64x64 may be transformed using each of the transformation units of size 32x32, 16x16, 8x8, and 4x4, which are 64x64 or smaller, and then the transformation unit having the smallest error with respect to the original may be selected.
20 illustrates encoding information according to depths, according to an embodiment of the present invention.
The
The information about the partition type 800 is a data unit for prediction encoding of the current coding unit and indicates information about a partition type in which the prediction unit of the current coding unit is divided. For example, the current encoding unit CU_0 of size 2Nx2N may be any one of a
The prediction mode information 810 indicates the prediction mode of each partition. For example, through the information 810 about the prediction mode, whether the partition indicated by the information 800 about the partition type is prediction-encoded in one of an intra mode, an inter mode, and a skip mode may be set.
In addition, the information 820 on the conversion unit size indicates whether to perform conversion based on which conversion unit the current encoding unit is to be converted. For example, the transform unit may be one of a first intra
The video data and encoding
21 is a diagram of deeper coding units according to depths, according to an embodiment of the present invention.
Split information may be used to indicate a change of depth. The split information indicates whether a coding unit of the current depth is split into coding units of a lower depth.
The
For each partition type, predictive coding must be performed repeatedly for one 2N_0x2N_0 partition, two 2N_0xN_0 partitions, two N_0x2N_0 partitions, and four N_0xN_0 partitions. Prediction encoding may be performed in intra mode and inter mode on partitions having a size 2N_0x2N_0, a size N_0x2N_0 and a size 2N_0xN_0, and a size N_0xN_0. The skip mode may be performed only for predictive encoding on partitions having a size of 2N_0x2N_0.
If the encoding error caused by one of the
If the coding error by the
The
If the coding error by the
If the maximum depth is d, the depth-based coding unit is set up to the depth d-1, and the division information can be set up to the depth d-2. That is, when encoding is performed from the depth d-2 to the depth d-1 to the depth d-1, the prediction encoding of the
Among the partition types, prediction encoding may be repeatedly performed for one partition of size 2N_(d-1)x2N_(d-1), two partitions of size 2N_(d-1)xN_(d-1), two partitions of size N_(d-1)x2N_(d-1), and four partitions of size N_(d-1)xN_(d-1), so that a partition type generating the minimum encoding error may be searched for.
Even if the encoding error of the
The
In this way, the depth with the smallest error can be determined by comparing the minimum coding errors for all depths of
The video data and encoding
22, 23, and 24 illustrate a relationship between a coding unit, a prediction unit, and a transformation unit, according to an embodiment of the present invention.
The coding units 1010 are coding units according to coding depths determined by the
If the depth-based coding units 1010 have a depth of 0, the
Some
The image data of a
Thus, encoding is recursively performed on each coding unit of the hierarchical structure in each region of each maximum coding unit, and as an optimal coding unit is determined, coding units according to a recursive tree structure may be configured. The encoding information may include split information about a coding unit, partition type information, prediction mode information, and transformation unit size information. Table 1 below shows an example that can be set in the
Table 1
Split information 0 (encoding on a coding unit of size 2Nx2N of the current depth d):
- Prediction mode: Intra / Inter / Skip (2Nx2N only)
- Partition type:
- (Symmetrical partition type) 2Nx2N, 2NxN, Nx2N, NxN
- (Asymmetric partition type) 2NxnU, 2NxnD, nLx2N, nRx2N
- Size of transformation unit:
- Transformation unit split information 0: 2Nx2N
- Transformation unit split information 1: NxN (symmetrical partition type), N/2xN/2 (asymmetric partition type)
Split information 1: Repeatedly encode each of the four coding units of the lower depth d+1.
The
The split information indicates whether the current coding unit is split into coding units of a lower depth. If the split information of the current depth d is 0, the current coding unit is no longer split into lower coding units, so the current depth is the coded depth, and partition type information, prediction mode information, and transformation unit size information may be defined for the coded depth. If the coding unit is to be further split according to the split information, encoding should be performed independently on each of the four split coding units of the lower depth.
The prediction mode may be represented by one of an intra mode, an inter mode, and a skip mode. Intra mode and inter mode can be defined in all partition types, and skip mode can be defined only in partition type 2Nx2N.
The partition type information indicates symmetric partition types 2Nx2N, 2NxN, Nx2N, and NxN, in which the height or width of the prediction unit is split in a symmetric ratio, and asymmetric partition types 2NxnU, 2NxnD, nLx2N, and nRx2N, in which it is split in an asymmetric ratio. The asymmetric partition types 2NxnU and 2NxnD split the height at ratios of 1:3 and 3:1, respectively, and the asymmetric partition types nLx2N and nRx2N split the width at ratios of 1:3 and 3:1, respectively.
The transformation unit size may be set to two kinds of sizes in the intra mode and two kinds of sizes in the inter mode. That is, if the transformation unit split information is 0, the size of the transformation unit is set to 2Nx2N, the size of the current coding unit. If the transformation unit split information is 1, a transformation unit obtained by splitting the current coding unit may be set: if the partition type of the current coding unit of size 2Nx2N is a symmetric partition type, the size of the transformation unit may be set to NxN, and if it is an asymmetric partition type, to N/2xN/2.
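A sketch of this rule (hypothetical function; sizes in luma samples for a coding unit of size 2Nx2N):

def transform_unit_size(cu_size, tu_split_info, partition_type):
    """Sketch of the rule above (see Table 1): with transformation-unit split
    information 0 the transformation unit keeps the 2Nx2N coding-unit size;
    with split information 1 it becomes NxN for symmetric partition types and
    N/2xN/2 for asymmetric partition types."""
    symmetric = {'2Nx2N', '2NxN', 'Nx2N', 'NxN'}
    if tu_split_info == 0:
        return cu_size                      # 2Nx2N
    if partition_type in symmetric:
        return cu_size // 2                 # NxN
    return cu_size // 4                     # N/2xN/2 (2NxnU, 2NxnD, nLx2N, nRx2N)

assert transform_unit_size(64, 0, '2NxnU') == 64
assert transform_unit_size(64, 1, '2NxN') == 32
assert transform_unit_size(64, 1, 'nLx2N') == 16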
Encoding information of coding units having a tree structure according to an embodiment may be allocated to at least one of a coding unit, a prediction unit, and a minimum unit unit of a coding depth. The coding unit of the coding depth may include one or more prediction units and minimum units having the same coding information.
Therefore, if the encoding information held by each adjacent data unit is checked, it may be determined whether the adjacent data units are included in the coding unit having the same coding depth. In addition, since the encoding unit of the encoding depth can be identified by using the encoding information held by the data unit, the distribution of encoding depths within the maximum encoding unit can be inferred.
Therefore, in this case, when the current encoding unit is predicted with reference to the neighboring data unit, the encoding information of the data unit in the depth encoding unit adjacent to the current encoding unit can be directly referenced and used.
In another embodiment, when prediction encoding is performed on the current coding unit with reference to neighboring coding units, data adjacent to the current coding unit within the deeper coding units may be searched for by using the encoding information of the adjacent deeper coding units, and the neighboring coding units may thereby be referred to.
FIG. 25 illustrates a relationship between coding units, prediction units, and transformation units, according to encoding mode information of Table 1. FIG.
The
The TU size flag is a kind of conversion index, and the size of the conversion unit corresponding to the conversion index can be changed according to the prediction unit type or partition type of the coding unit.
For example, when the partition type information is set to one of the symmetric
When the partition type information is set to one of the asymmetric
26 is a flowchart of a video encoding method based on coding units having a tree structure, according to an embodiment of the present invention.
In operation 1210, the current image is split into at least one maximum coding unit. In addition, a maximum depth indicating the total number of possible divisions may be set in advance.
In operation 1220, for each of at least one split region obtained by splitting the region of the maximum coding unit according to depths, a depth at which a final encoding result is to be output is determined, and coding units according to a tree structure are determined.
The maximum coding unit is spatially split whenever the depth deepens, and is split into coding units of lower depths. Each coding unit may be split into coding units of a lower depth while being spatially split independently of other neighboring coding units. Encoding must be repeatedly performed for each coding unit according to depths.
Among the coding units of the hierarchical structure obtained by splitting the current image, prediction and compensation operations may be performed on the prediction units and partitions included in each coding unit. For bi-prediction of a B-slice-type partition, the L0/L1/LC reference lists may be used to determine reference pictures.
The prediction error of the current partition may be determined by prediction-encoding the current partition with reference to the reference picture for unidirectional prediction indicated by the L0/L1/LC reference lists according to the reference order. In addition, to perform motion compensation on the prediction error of the current partition, the prediction region of the current partition may be restored by referring to the reference pictures indicated by the L0/L1/LC reference lists according to the reference order. Coding units whose coded depths are determined in this manner form the coding units having a tree structure.
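As a rough, non-normative sketch of how an LC reference list might be assembled from the L0 and L1 lists and truncated to the LC default number, consider the following Python example; the alternating visiting order and the removal of duplicate reference pictures are assumptions for illustration, since the text only requires that the LC list contain one or more reference pictures taken from L0 and L1.

```python
def build_lc_reference_list(l0, l1, lc_default_number):
    """Illustrative sketch: combine reference pictures from the L0 and L1
    lists into an LC list and truncate it to the LC default number.
    The alternating order and duplicate removal are assumptions, not a
    rule stated by the disclosure."""
    lc = []
    for i in range(max(len(l0), len(l1))):
        for source in (l0, l1):
            if i < len(source) and source[i] not in lc:
                lc.append(source[i])
    return lc[:lc_default_number]

# Example with picture-order-count values standing in for reference pictures
l0 = [8, 4, 2]    # reference pictures preceding the current picture
l1 = [12, 16, 4]  # reference pictures following the current picture
print(build_lc_reference_list(l0, l1, lc_default_number=4))  # -> [8, 12, 4, 16]
```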
In operation 1230, image data that is the final encoding result of the at least one split region of each maximum coding unit, and information about the coded depth and the encoding mode, are output. The information about the encoding mode may include coded depth information or split information, partition type information of a prediction unit, prediction mode information, and the like.
That is, for each largest coding unit, a quantized transform coefficient of a prediction error generated by bi-prediction and unidirectional prediction may be output for each coding unit having a tree structure.
Information about the coded depth and the encoding mode constituting the coding units having a tree structure according to an embodiment may be encoded and output. Reference information, including an index indicating the reference picture determined by bi-prediction or unidirectional prediction and motion information indicating a reference block, may be output together with the quantized transform coefficients of the prediction error and the prediction mode information.
As prediction mode information about bi-prediction of a B slice type according to an embodiment, reference list related information may be encoded and output. For example, L0/L1/LC default number information, L0/L1/LC active number related information, L0/L1/LC change related information, and the like may be encoded and output as prediction mode information. Reference list related information for bi-prediction according to an embodiment may be encoded for each slice including the current partition, for each sequence, or for each picture.
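For illustration only, the reference-list related information described above might be grouped into per-picture and per-slice containers such as the following; every field name here is an assumption, chosen to mirror the text's statement that default numbers are set per picture while active numbers and change-related information may be set per slice, per picture, or per sequence.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PictureReferenceListParams:
    """Per-picture reference list information (names are illustrative).
    Default numbers give the basic valid number of reference pictures
    for each of the L0, L1, and LC lists."""
    l0_default_number: int = 1
    l1_default_number: int = 1
    lc_default_number: int = 2

@dataclass
class SliceReferenceListParams:
    """Per-slice reference list information (names are illustrative).
    When the active-number change flag is set, the active numbers
    override the per-picture defaults; change-related information
    describes reordering of reference pictures within each list."""
    active_number_change_flag: bool = False
    l0_active_number: Optional[int] = None
    l1_active_number: Optional[int] = None
    lc_active_number: Optional[int] = None
    lc_change_info: List[int] = field(default_factory=list)  # e.g. reordered indices
```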
The encoded information about the encoding mode may be transmitted to the decoding end together with the encoded image data.
FIG. 27 is a flowchart of a video decoding method based on coding units having a tree structure, according to an embodiment of the present invention.
In operation 1310, a bitstream of the encoded video is received and parsed.
In operation 1320, image data of the current picture allocated to the largest coding unit having the maximum size, and information about a coded depth and an encoding mode for each largest coding unit are extracted from the parsed bitstream. The coded depth of each largest coding unit is a depth selected to have the smallest coding error for each largest coding unit in the encoding process of the current picture. In encoding by the largest coding unit, image data is encoded based on at least one data unit obtained by hierarchically dividing the maximum coding unit by depth.
According to the information about the coded depth and the encoding mode according to an embodiment, the maximum coding unit may be split into coding units having a tree structure, each of which is a coding unit of a coded depth. Accordingly, the efficiency of encoding and decoding of an image can be improved by determining the coded depth of each coding unit and then decoding the corresponding image data.
As information about the encoding mode according to an embodiment, reference information and prediction mode information for prediction decoding may be extracted. As the reference information for prediction decoding, an index indicating a reference picture, motion information, and the like may be extracted.
As prediction mode information for bi-prediction and unidirectional prediction of a B slice type image, reference list related information including L0/L1/LC default number information, L0/L1/LC active number related information, and L0/L1/LC change related information may be extracted. According to an exemplary embodiment, the L0/L1/LC default number information may be read for each picture. According to an exemplary embodiment, the L0/L1/LC active number related information and the L0/L1/LC change related information may be read per slice, per picture, or per sequence.
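A minimal sketch of the reading rule just described, with all parameter names assumed for illustration, is given below: the per-picture LC default number applies unless the per-slice change confirmation information indicates an arbitrary change, in which case the per-slice LC active number is used instead.

```python
def effective_lc_active_number(lc_default_number,
                               active_number_change_flag,
                               lc_active_number):
    """Illustrative sketch: the LC default number is read once per picture;
    only when the per-slice change confirmation flag indicates an arbitrary
    change is the per-slice LC active number read and used instead.
    All parameter names are assumptions for illustration."""
    if active_number_change_flag and lc_active_number is not None:
        return lc_active_number
    return lc_default_number

# The per-picture default (e.g. 4) applies unless the slice overrides it (e.g. 2)
print(effective_lc_active_number(4, False, None))  # -> 4
print(effective_lc_active_number(4, True, 2))      # -> 2
```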
In operation 1330, the image data of each maximum coding unit is decoded based on the information about the coded depth and the encoding mode of that maximum coding unit. While the current coding unit is decoded based on this information, a prediction unit or partition is determined based on the partition type information, a prediction mode is determined for each partition based on the prediction mode information, and prediction decoding may be performed for each partition.
For motion compensation on a partition of a B slice type, for which bi-prediction is possible, a reference list including reference pictures and a reference order may be determined. By referring to the reference pictures indicated by the L0/L1/LC reference lists and the corresponding reference order, a restored region may be generated by performing motion compensation on the prediction error of a partition.
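The following Python sketch illustrates, under simplifying assumptions (integer-pel motion, no sub-pel interpolation, pictures as plain 2-D sample arrays), how a prediction region of a partition could be fetched through an LC reference list; none of the names are taken from the disclosure.

```python
def motion_compensate(reference_pictures, lc_list, ref_idx, mv, x, y, w, h):
    """Illustrative sketch: fetch the prediction region of a partition from the
    reference picture selected through the LC reference list.
    `reference_pictures` maps a picture identifier to a 2-D list of samples,
    `lc_list` holds picture identifiers in reference order, and `mv` is an
    integer-pel motion vector; these are simplifying assumptions."""
    ref = reference_pictures[lc_list[ref_idx]]
    dx, dy = mv
    return [row[x + dx : x + dx + w] for row in ref[y + dy : y + dy + h]]

# Toy usage: one 16x16 reference picture identified by the value 8 in the LC list
ref_pics = {8: [[sample for sample in range(16)] for _ in range(16)]}
block = motion_compensate(ref_pics, lc_list=[8], ref_idx=0, mv=(2, 1), x=0, y=0, w=4, h=4)
```

The restored region of the partition would then be obtained by adding the decoded prediction error to this prediction region, as described above.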
As decoding is performed for each maximum coding unit, the image data of the spatial region may be reconstructed for each coding unit, and a picture and a video, which is a sequence of pictures, may be reconstructed. The reconstructed video can be played back by a playback apparatus, stored in a storage medium, or transmitted over a network.
The above-described embodiments of the present invention can be written as a program executable on a computer and can be implemented in a general-purpose digital computer that operates the program by using a computer-readable recording medium. The computer-readable recording medium includes storage media such as magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.) and optical recording media (e.g., CD-ROMs, DVDs, etc.).
The present invention has been described above with a focus on preferred embodiments thereof. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the disclosed embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present invention is defined by the appended claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.
Claims (24)
Setting, for each picture, LC default number information indicating a basic valid number of reference pictures assigned to an LC reference list, for the LC reference list including at least one reference picture from among the reference pictures included in an L0 reference list and an L1 reference list, which are list information of reference pictures for predictive encoding of an image having a B slice type;
Determining the LC reference list including at least one reference picture from among the reference pictures included in the L0 reference list and the L1 reference list, based on the LC default number information; and
Prediction-encoding the image of the B slice type by using the determined LC reference list.
Setting, for each slice, LC active number change confirmation information indicating whether the valid number of reference pictures assigned to the LC reference list is arbitrarily changed, based on reference list active number change confirmation information indicating whether the valid number of reference pictures of any reference list is changed, and LC active number information indicating the current valid number of reference pictures after the arbitrary change.
And setting, for each slice, LC change related information including information on a method of changing a reference picture or a reference order of the LC reference list.
Transmitting LC combination confirmation information indicating whether to construct the LC reference list by using one or more reference pictures of the L0 reference list and the L1 reference list.
Setting, for each picture, the LC default number information together with at least one of L0 default number information indicating a basic valid number of reference pictures assigned to the L0 reference list and L1 default number information indicating a basic valid number of reference pictures assigned to the L1 reference list.
Setting, for each slice, based on reference list active number change confirmation information indicating whether the valid number of reference pictures is arbitrarily changed, LC active number information indicating the current valid number of reference pictures of the LC reference list after the arbitrary change, together with at least one of L0 active number information indicating the current valid number of reference pictures of the L0 reference list after the arbitrary change and L1 active number information indicating the current valid number of reference pictures of the L1 reference list after the arbitrary change.
Setting, for each slice, the LC change related information together with at least one of L0 change related information including information about a method of changing the reference pictures or reference order of the L0 reference list and L1 change related information including information about a method of changing the reference pictures or reference order of the L1 reference list.
And transmitting the LC default number information together with the parameters for the current picture.
And transmitting the LC active number information together with parameters for a current slice.
And transmitting the LC change related information together with parameters for a current slice.
Reading, for each picture, LC default number information indicating a basic valid number of reference pictures assigned to an LC reference list, for the LC reference list including at least one reference picture from among the reference pictures included in an L0 reference list and an L1 reference list, which are list information of reference pictures for predictive decoding of an image having a B slice type;
Determining the LC reference list including at least one reference picture from among the reference pictures included in the L0 reference list and the L1 reference list, based on the LC default number information; and
Prediction-decoding the image of the B slice type by using the determined LC reference list.
Reading, for each slice, LC active number change confirmation information indicating whether the valid number of reference pictures assigned to the LC reference list is arbitrarily changed, based on reference list active number change confirmation information indicating whether the valid number of reference pictures of any reference list is changed; and
Reading LC active number information indicating the current valid number of reference pictures of the LC reference list after the arbitrary change, based on the read LC active number change confirmation information.
Reading, for each slice, LC change related information including information about a method of changing the reference pictures or the reference order of the LC reference list.
A video prediction decoding method, characterized in that the LC reference list can be determined without reading LC combination confirmation information indicating whether to construct the LC reference list by using one or more reference pictures of the L0 reference list and the L1 reference list.
Reading, for each picture, the LC default number information together with at least one of L0 default number information indicating a basic valid number of reference pictures assigned to the L0 reference list and L1 default number information indicating a basic valid number of reference pictures assigned to the L1 reference list.
Reading, for each slice, reference list active number change confirmation information indicating whether the valid number of reference pictures is arbitrarily changed; and
Reading, based on the read reference list active number change confirmation information, LC active number information indicating the current valid number of reference pictures of the LC reference list after the arbitrary change, together with at least one of L0 active number information indicating the current valid number of reference pictures of the L0 reference list after the arbitrary change and L1 active number information indicating the current valid number of reference pictures of the L1 reference list after the arbitrary change.
Reading, for each slice, the LC change related information together with at least one of L0 change related information including information about a method of changing the reference pictures or reference order of the L0 reference list and L1 change related information including information about a method of changing the reference pictures or reference order of the L1 reference list.
Extracting, from the received video stream, the LC default number information along with parameters for a current picture; and
And reading out the extracted LC default number information.
Extracting, from the received video stream, the LC active number information along with parameters for the current slice; and
And decoding the extracted LC active number information.
Extracting, from the received video stream, the LC change related information along with parameters for the current slice; and
And reading out the extracted LC change related information.
An LC-related information setting unit which sets, for each picture, LC default number information indicating a basic valid number of reference pictures assigned to an LC reference list, for the LC reference list including at least one reference picture from among the reference pictures included in an L0 reference list and an L1 reference list, which are list information of reference pictures for predictive encoding of an image having a B slice type; and
A predictive encoding unit which determines, based on the LC default number information, the LC reference list including one or more reference pictures from among the reference pictures included in the L0 reference list and the L1 reference list, and prediction-encodes the image of the B slice type by using the determined LC reference list.
An LC-related information reading unit which reads, for each picture, LC default number information indicating a basic valid number of reference pictures assigned to an LC reference list, for the LC reference list including at least one reference picture from among the reference pictures included in an L0 reference list and an L1 reference list, which are list information of reference pictures for predictive decoding of an image having a B slice type; and
A predictive decoding unit which determines, based on the LC default number information, the LC reference list including one or more reference pictures from among the reference pictures included in the L0 reference list and the L1 reference list, and prediction-decodes the image of the B slice type by using the determined LC reference list.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/672,311 US20130114710A1 (en) | 2011-11-08 | 2012-11-08 | Method and apparatus for encoding video by prediction using reference picture list, and method and apparatus for decoding video by performing compensation using reference picture list |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161557053P | 2011-11-08 | 2011-11-08 | |
US61/557,053 | 2011-11-08 | ||
US201161564066P | 2011-11-28 | 2011-11-28 | |
US61/564,066 | 2011-11-28 | ||
US201261587327P | 2012-01-17 | 2012-01-17 | |
US61/587,327 | 2012-01-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20130050863A true KR20130050863A (en) | 2013-05-16 |
Family
ID=48661113
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020120037555A KR20130050863A (en) | 2011-11-08 | 2012-04-10 | Method and apparatus for video encoding with prediction and compensation using reference picture list, method and apparatus for video decoding with prediction and compensation using reference picture list |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20130050863A (en) |
- 2012-04-10: KR KR1020120037555A patent/KR20130050863A/en, status: not_active (Application Discontinuation)
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20150050560A (en) * | 2012-09-28 | 2015-05-08 | 엘지전자 주식회사 | Video decoding method and apparatus using the same |
US12069283B2 (en) | 2012-09-28 | 2024-08-20 | Lg Electronics Inc. | Video decoding method and apparatus using the same |
US9560369B2 (en) | 2012-09-28 | 2017-01-31 | Lg Electronics Inc. | Video decoding method and apparatus using the same |
US11259038B2 (en) | 2012-09-28 | 2022-02-22 | Lg Electronics Inc. | Video decoding method and apparatus using the same |
US10390032B2 (en) | 2012-09-28 | 2019-08-20 | Lg Electronics Inc. | Video decoding method and apparatus using the same |
US10375400B2 (en) | 2014-01-02 | 2019-08-06 | Electronics And Telecommunications Research Institute | Method for decoding image and apparatus using same |
US10326997B2 (en) | 2014-01-02 | 2019-06-18 | Electronics And Telecommunications Research Institute | Method for decoding image and apparatus using same |
US10291920B2 (en) | 2014-01-02 | 2019-05-14 | Electronics And Telecommunications Research Institute | Method for decoding image and apparatus using same |
US10397584B2 (en) | 2014-01-02 | 2019-08-27 | Electronics And Telecommunications Research Institute | Method for decoding image and apparatus using same |
US9967571B2 (en) | 2014-01-02 | 2018-05-08 | Electronics And Telecommunications Research Institute | Method for decoding image and apparatus using same |
WO2015102271A1 (en) * | 2014-01-02 | 2015-07-09 | 한국전자통신연구원 | Method for decoding image and apparatus using same |
CN115134593A (en) * | 2015-06-05 | 2022-09-30 | 杜比实验室特许公司 | Image encoding and decoding method for performing inter prediction, bit stream storage method |
CN115134594A (en) * | 2015-06-05 | 2022-09-30 | 杜比实验室特许公司 | Image encoding and decoding method for performing inter prediction, bit stream storage method |
US12088788B2 (en) | 2015-06-05 | 2024-09-10 | Dolby Laboratories Licensing Corporation | Method and device for encoding and decoding intra-frame prediction |
CN113508590A (en) * | 2019-02-28 | 2021-10-15 | 三星电子株式会社 | Apparatus for encoding and decoding image and method for encoding and decoding image thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102041886B1 (en) | Method and apparatus for video encoding with inter prediction using collocated picture, method and apparatus for video decoding with inter prediction using collocated picture | |
KR102072733B1 (en) | Method and apparatus for video encoding based on coding units according tree structure, method and apparatus for video decoding based on coding units according tree structure | |
KR102003047B1 (en) | Method and apparatus for video encoding with in-loop filtering based on tree-structured data unit, method and apparatus for video decoding with the same | |
CN110999304B (en) | Image processing method and image encoding/decoding method, and apparatus using the same | |
KR101639334B1 (en) | Method and apparatus for encoding and decoding motion vector | |
KR102111768B1 (en) | Method and apparatus for encoding video, and method and apparatus for decoding video with changing scan order according to hierarchical coding unit | |
KR102070431B1 (en) | Method and apparatus for encoding video with restricting bi-directional prediction and block merging, method and apparatus for decoding video | |
KR102169608B1 (en) | Method and apparatus for encoding and decoding video to enhance intra prediction process speed | |
KR102179383B1 (en) | Method and apparatus for determining merge mode | |
KR20130004548A (en) | Method and apparatus for video encoding with intra prediction by unification of availability check, method and apparatus for video decoding with intra prediction by unification of availability check | |
KR20120104128A (en) | Method and apparatus for encoding and decoding image | |
KR20110083369A (en) | Method and apparatus for video encoding using deblocking filtering, and method and apparatus for video decoding using the same | |
KR20130001708A (en) | Method and apparatus for encoding and decoding motion information | |
KR102169610B1 (en) | Method and apparatus for determining intra prediction mode | |
KR102088383B1 (en) | Method and apparatus for encoding and decoding video | |
KR101465977B1 (en) | Method and apparatus for encoding/decoding video for parallel processing | |
KR20120080548A (en) | Method and apparatus for prediction using bi- and uni-directional prediction, method and apparatus for video encoding/decoding with prediction and compensation using bi- and uni-directional prediction | |
KR20130086009A (en) | Method and apparatus for encoding/decoding video using unified syntax for parallel processing | |
KR20130105214A (en) | Method and apparatus for scalable video encoding, method and apparatus for scalable video decoding | |
KR20130050863A (en) | Method and apparatus for video encoding with prediction and compensation using reference picture list, method and apparatus for video decoding with prediction and compensation using reference picture list | |
KR102219909B1 (en) | Method and apparatus for decoding multi-layer video, and method and apparatus for encoding multi-layer video | |
KR102057195B1 (en) | Method and apparatus for scalable video encoding based on coding units of tree structure, method and apparatus for scalable video decoding based on coding units of tree structure | |
KR20140004591A (en) | Method and apparatus for generating 3d video data stream, and method and appratus for reproducing 3d video data stream |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WITN | Withdrawal due to no request for examination |