WO2016117930A1 - Inter-layer video decoding method and apparatus therefor, and inter-layer video encoding method and apparatus therefor - Google Patents
Inter-layer video decoding method and apparatus therefor, and inter-layer video encoding method and apparatus therefor
- Publication number
- WO2016117930A1 (PCT/KR2016/000597)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- image
- layer
- prediction
- sample
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/31—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the temporal domain
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/187—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Definitions
- the present invention relates to an interlayer video encoding method and an interlayer video decoding method.
- the present invention relates to an interlayer video encoding method and an interlayer video decoding method that perform interlayer prediction by determining a reference block having motion information.
- a video codec is needed for efficiently encoding or decoding high-resolution or high-definition video content.
- video is encoded according to a limited encoding method based on coding units having a tree structure.
- Image data in the spatial domain is transformed into coefficients in the frequency domain using frequency transformation.
- the video codec divides an image into blocks having a predetermined size for fast computation of the frequency transformation, performs a DCT transformation for each block, and encodes frequency coefficients in units of blocks. Compared to the image data of the spatial domain, the coefficients of the frequency domain are easily compressed. In particular, since the image pixel values of the spatial domain are expressed as prediction errors through inter prediction or intra prediction of the video codec, much data may be transformed to zero when the frequency transformation is performed on the prediction errors.
- the video codec reduces the amount of data by replacing continuously and repeatedly generated data with data of a smaller size.
- the multilayer video codec encodes and decodes a first layer video and one or more second layer videos.
- the amount of data of the first layer video and the second layer video may be reduced by removing temporal / spatial redundancy of the first layer video and the second layer video and redundancy between layers.
- a block of the second layer image indicated by the disparity vector of the current block included in the first layer image is determined, and motion information is obtained from a block including a sample adjacent to the block of the second layer image.
- An interlayer video decoding method includes: obtaining a disparity vector of a current block included in a first layer image; Determining a block of a second layer image corresponding to the current block by using the obtained disparity vector; Determining a reference block that includes a sample that abuts a boundary of the block; Obtaining a motion vector of the reference block; And determining a motion vector of the current block with respect to the first layer image by using the obtained motion vector.
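- A minimal, self-contained sketch of the decoding steps listed above. The grid-based motion field and the 16-sample storage unit are illustrative assumptions; only the positions follow the text (the reference block is the block containing the sample that abuts the lower-right boundary of the corresponding second layer block).

```python
from dataclasses import dataclass

@dataclass
class Block:
    x: int
    y: int
    width: int
    height: int

def interlayer_motion_vector(current: Block, dv, motion_field, grid=16):
    corr_x, corr_y = current.x + dv[0], current.y + dv[1]                  # corresponding second layer block
    sample_x, sample_y = corr_x + current.width, corr_y + current.height  # lower-right adjacent sample
    return motion_field[(sample_y // grid, sample_x // grid)]             # motion vector of the reference block

# Usage: a motion field keyed by (row, col) of 16x16 storage blocks.
field = {(1, 1): (4, -2)}
print(interlayer_motion_vector(Block(0, 0, 16, 16), (8, 8), field))  # (4, -2)
```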
- the sample may be a sample in contact with the lower right side of the block in the second layer image.
- the current coding unit may be one of at least one coding unit determined in the first layer image by using split information about a coding unit obtained from a bitstream, and the current block may be one of at least one prediction unit determined from the current coding unit.
- the reference block including the sample may be a block including the sample among the plurality of divided blocks when the second layer image is divided into a plurality of blocks having a predetermined size.
- when the first sample, which is the sample in contact with the boundary of the block of the second layer image, is outside the boundary of the second layer image, the method may include determining the reference block from a block including a second sample adjacent to the inside of the boundary of the second layer image.
- the disparity vector may be a vector having 1/4 sample accuracy.
- the first layer image may be a first view image, and the second layer image may be a second view image.
- the reference image may be an image having a time different from that of the first layer image.
- An interlayer video encoding method includes: obtaining a disparity vector of a current block included in a first layer image; Determining a block of a second layer image corresponding to the current block by using the obtained disparity vector; Determining a reference block that includes a sample that abuts a boundary of the block; Obtaining a motion vector of the reference block; Determining a motion vector of a current block with respect to the first layer image by using the obtained motion vector; Determining a prediction block of the current block by using the determined motion vector; And encoding a residual block related to the current block by using the prediction block of the current block.
- the sample may be a sample contacting a lower right side of a block in the second layer image.
- the reference block including the sample may be a block including the sample among the plurality of divided blocks when the second layer image is divided into a plurality of blocks having a predetermined size.
- when the sample in contact with a boundary of the block of the second layer image is outside the boundary of the second layer image, the method may include determining, as the reference block, a block including a second sample adjacent to the inside of the boundary of the second layer image.
- the disparity vector may be a vector having 1/4 sample accuracy.
- An interlayer video decoding apparatus may include a disparity vector obtaining unit configured to obtain a disparity vector pointing from a current block of a first layer image to a corresponding block of a decoded second layer image; and a decoder configured to determine, by using the obtained disparity vector, a reference block including a sample in contact with a boundary of the corresponding block of the second layer image, obtain a motion vector of the reference block, and obtain a prediction block of the current block of the first layer image by using the obtained motion vector.
- An interlayer video encoding apparatus may include a disparity vector obtaining unit configured to obtain a disparity vector pointing from a current block of a first layer image to a corresponding block of a decoded second layer image; and an encoder configured to determine, by using the obtained disparity vector, a reference block including a sample in contact with a boundary of the corresponding block of the second layer image, obtain a motion vector of the reference block, obtain a prediction block of the current block by using the obtained motion vector, and encode the first layer image including the current block by using the prediction block of the current block.
- a computer-readable recording medium having recorded thereon a program for implementing a method according to various embodiments may be included.
- determining a block of another layer image corresponding to the current block by using the disparity vector of the current block, determining a reference block including a sample bordering a boundary of that block, and obtaining a motion vector of the reference block.
- FIG. 1A is a block diagram of an interlayer video encoding apparatus, according to various embodiments.
- FIG. 1B is a flowchart of an interlayer video encoding method, according to various embodiments.
- FIG. 2A is a block diagram of an interlayer video decoding apparatus, according to various embodiments.
- FIG. 2B is a flowchart of an interlayer video decoding method, according to various embodiments.
- FIG. 3A is a diagram illustrating an interlayer prediction structure, according to various embodiments.
- FIG. 3B is a diagram illustrating a multilayer video, according to various embodiments.
- FIG. 3C is a diagram illustrating NAL units including encoded data of a multilayer video, according to various embodiments.
- FIG. 4A is a diagram for describing a disparity vector for interlayer prediction, according to various embodiments.
- FIG. 4B is a diagram for describing a spatial neighboring block candidate for predicting a disparity vector, according to various embodiments.
- FIG. 4C is a diagram illustrating a temporal neighboring block candidate for predicting a disparity vector, according to various embodiments.
- FIG. 5 is a diagram for describing a process of determining, by an interlayer video decoding apparatus, a sample included in a reference block by using a disparity vector to determine a reference block for referring to motion information according to various embodiments.
- FIG. 6 is a diagram for describing a process of determining, by an interlayer video decoding apparatus, a sample included in a reference block by using a disparity vector to determine a reference block for referring to motion information, according to various embodiments.
- FIG. 7 is a diagram for describing a process of performing prediction by determining a reference block according to various prediction methods by an interlayer video decoding apparatus according to various embodiments.
- FIG. 8 is a block diagram of a video encoding apparatus based on coding units according to a tree structure, according to an embodiment.
- FIG. 9 is a block diagram of a video decoding apparatus based on coding units according to a tree structure, according to an embodiment.
- FIG. 10 illustrates a concept of coding units, according to various embodiments.
- FIG. 11 is a block diagram of an image encoder based on coding units, according to various embodiments.
- FIG. 12 is a block diagram of an image decoder based on coding units, according to various embodiments.
- FIG. 13 is a diagram illustrating coding units and partitions, according to various embodiments.
- FIG. 14 illustrates a relationship between a coding unit and transformation units, according to various embodiments.
- FIG. 15 illustrates encoding information, according to an embodiment.
- FIG. 16 is a diagram of coding units, according to various embodiments.
- FIGS. 17, 18, and 19 illustrate a relationship between coding units, prediction units, and transformation units, according to various embodiments.
- FIG. 20 illustrates a relationship between a coding unit, a prediction unit, and a transformation unit, according to encoding mode information of Table 1.
- FIG. 21 illustrates a physical structure of a disc in which a program is stored, according to various embodiments.
- FIG. 22 shows a disc drive for recording and reading a program by using the disc.
- FIG. 23 shows an overall structure of a content supply system for providing a content distribution service.
- FIGS. 24 and 25 illustrate an external structure and an internal structure of a mobile phone to which a video encoding method and a video decoding method according to various embodiments are applied.
- FIG. 26 illustrates a digital broadcasting system employing a communication system.
- FIG. 27 is a diagram illustrating a network structure of a cloud computing system using a video encoding apparatus and a video decoding apparatus, according to various embodiments.
- the 'image' may be a still image of the video or a video, that is, the video itself.
- a 'sample' means data that is allocated to a sampling position of an image and is to be processed.
- the pixels in the spatial domain image may be samples.
- the "current block” may mean a block of an image to be encoded or decoded.
- 'Neighboring Block' represents at least one coded or decoded block neighboring the current block.
- the neighboring block may be located at the top of the current block, at the top right of the current block, at the left of the current block, or at the top left of the current block. It may also include temporally neighboring blocks as well as spatially neighboring blocks.
- the temporally neighboring blocks may include a co-located block located at the same position as the current block in a reference picture, or neighboring blocks of the co-located block.
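- Illustrative coordinates of the spatial neighbouring positions named above, for a current block whose top-left sample is (x, y) and whose size is w x h; the exact sample used within each neighbouring position is an assumption, only the relative locations follow the text.

```python
def spatial_neighbors(x, y, w, h):
    return {
        "left":      (x - 1, y + h - 1),
        "top":       (x + w - 1, y - 1),
        "top_right": (x + w, y - 1),
        "top_left":  (x - 1, y - 1),
    }

print(spatial_neighbors(16, 16, 8, 8))
```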
- FIG. 1A is a block diagram of an interlayer video encoding apparatus, according to various embodiments.
- the interlayer video encoding apparatus 10 includes an encoder 12 and a bitstream generator 18.
- the encoder 12 may include a first layer encoder 14 and a second layer encoder 16.
- the second layer encoder 16 may include a disparity vector obtainer 17.
- the interlayer video encoding apparatus 10 classifies and encodes a plurality of image sequences for each layer according to a scalable video coding scheme, and may output a separate stream including the data encoded for each layer.
- the first layer encoder 14 may encode first layer images and output a first layer stream including encoded data of the first layer images.
- the second layer encoder 16 may encode the second layer images and output a second layer stream including encoded data of the second layer images.
- the interlayer video encoding apparatus 10 may represent and encode the first layer stream and the second layer stream as one bitstream through a multiplexer.
- the interlayer video encoding apparatus 10 may encode the first layer image sequence and the second layer image sequence into different layers.
- low resolution images may be encoded as first layer images, and high resolution images may be encoded as second layer images.
- An encoding result of the first layer images may be output as a first layer stream, and an encoding result of the second layer images may be output as a second layer stream.
- a multiview video may be encoded according to a scalable video coding scheme.
- Left view images may be encoded as first layer images
- right view images may be encoded as second layer images.
- the center view images, the left view images, and the right view images are respectively encoded; among them, the center view images may be encoded as the first layer images, the left view images as the second layer images, and the right view images as the third layer images.
- the center view color image, the center view depth image, the left view color image, the left view depth image, the right view color image, and the right view depth image may be encoded as a first layer image, a second layer image, a third layer image, a fourth layer image, a fifth layer image, and a sixth layer image, respectively.
- alternatively, the center view color image, the center view depth image, the left view depth image, the left view color image, the right view depth image, and the right view color image may be encoded as the first layer image, the second layer image, the third layer image, the fourth layer image, the fifth layer image, and the sixth layer image, respectively.
- a scalable video coding scheme may be performed according to temporal hierarchical prediction based on temporal scalability.
- a first layer stream including encoding information generated by encoding images of a base frame rate may be output.
- Temporal levels may be classified according to frame rates, and each temporal layer may be encoded into each layer.
- the second layer stream including the encoding information of the high frame rate may be output by further encoding the images of the higher frame rate with reference to the images of the base frame rate.
- the texture image may be encoded as the first layer images, and the depth image may be encoded as the second layer images.
- An encoding result of the first layer images may be output as a first layer stream, and second layer images may be encoded and output as a second layer stream with reference to the first layer image.
- scalable video coding may be performed on the first layer and the plurality of enhancement layers (second layer, third layer, ..., K-th layer).
- the first layer images through the K-th layer images may be encoded. Accordingly, the encoding results of the first layer images may be output to the first layer stream, and the encoding results of the first, second, ..., K-th layer images may be output to the first, second, ..., K-th layer streams, respectively.
- the interlayer video encoding apparatus 10 may perform inter prediction to predict a current image by referring to images in a single layer.
- through inter prediction, a motion vector included in motion information between the current image and a reference image, and a residual component between the current image and the reference image, may be generated; in addition, through interlayer prediction, the current image may be predicted from a corresponding area of the first layer (base layer).
- the motion information may include information about a motion vector, a reference picture index, and a prediction direction.
- a position difference component between the current image and a reference image of another layer and a residual component between the current image and a reference image of another layer may be generated.
- a position difference component between the current image and the reference image of another layer may be expressed as a disparity vector.
- the interlayer video encoding apparatus 10 may predict the prediction information of the second layer images by referring to the prediction information of the first layer images, or may perform inter-layer prediction to generate the prediction image.
- the prediction information may include information indicating a motion vector, a disparity vector, a reference picture index, and a prediction direction.
- the interlayer video encoding apparatus 10 may derive a disparity vector between the current image and a reference image of another layer, generate a prediction image by using the reference image of the other layer according to the derived disparity vector, and generate a residual component that is a difference component between the prediction image and the current image.
- alternatively, a motion vector may be derived from an image of a layer different from that of the current image, and a prediction image may be generated by using a reference image that is in the same layer as the current image and is similar to the current image according to the derived motion vector.
- a residual component that is a difference component between the prediction image and the current image may be generated.
- according to a multilayer prediction structure, interlayer prediction between a first layer image and a third layer image, and interlayer prediction between a second layer image and a third layer image, may be performed.
- the interlayer prediction structure will be described later with reference to FIG. 3A.
- the interlayer video encoding apparatus 10 encodes each block of each image of the video for each layer.
- the type of block may be square or rectangular, and may be any geometric shape. It is not limited to data units of a certain size.
- the block may be a maximum coding unit, a coding unit, a prediction unit, a transformation unit, or the like among coding units having a tree structure.
- the maximum coding unit including the coding units of the tree structure may be variously called a coding tree unit, a coding block tree, a block tree, a root block tree, a coding tree, a coding root, or a tree trunk.
- a video encoding and decoding method based on coding units having a tree structure will be described later with reference to FIGS. 8 to 20.
- Inter prediction and inter layer prediction may be performed based on a data unit of a coding unit, a prediction unit, or a transformation unit.
- the first layer encoder 14 may generate symbol data by performing source coding operations including inter prediction or intra prediction on the first layer images.
- the symbol data represents the value of each encoding parameter and the sample value of the residual.
- the first layer encoder 14 may generate symbol data by performing inter prediction or intra prediction, transformation, and quantization on samples of data units of the first layer images, and may generate a first layer stream by performing entropy encoding on the symbol data.
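- A toy, self-contained sketch of the per-block operations named above (prediction error, transform, quantization, symbol data). The identity "transform" and the fixed quantization step are placeholders, not the codec's actual transform or quantizer.

```python
import numpy as np

def block_symbols(original, prediction, qstep=4):
    residual = original.astype(np.int32) - prediction.astype(np.int32)
    coefficients = residual                      # stands in for a DCT-like transform
    levels = np.round(coefficients / qstep).astype(np.int32)
    return levels                                # these symbols would be entropy encoded

orig = np.full((4, 4), 120)
pred = np.full((4, 4), 100)
print(block_symbols(orig, pred))                 # all entries are round(20 / 4) = 5
```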
- the second layer encoder 16 may encode second layer images based on coding units having a tree structure.
- the second layer encoder 16 may generate symbol data by performing inter/intra prediction, transformation, and quantization on samples of a coding unit of a second layer image, and may generate a second layer stream by performing entropy encoding on the symbol data.
- the second layer encoder 16 may perform interlayer prediction that predicts the second layer image by using prediction information of the first layer image.
- to encode a second layer original image of the second layer image sequence through the interlayer prediction structure, the second layer encoder 16 may determine prediction information of the second layer current image by using prediction information of the first layer reconstructed image corresponding to the second layer current image, generate a second layer prediction image based on the determined prediction information, and encode a prediction error between the second layer original image and the second layer prediction image.
- the second layer encoder 16 may determine the block of the first layer image to be referred to by the block of the second layer image by performing interlayer prediction on the second layer image for each coding unit or prediction unit. For example, a reconstruction block of the first layer image positioned corresponding to the position of the current block in the second layer image may be determined. The second layer encoder 16 may determine a second layer prediction block by using a first layer reconstruction block corresponding to the second layer block. In this case, the second layer encoder 16 may determine the second layer prediction block by using the first layer reconstruction block located at the same point as the second layer block.
- alternatively, the second layer encoder 16 may determine the second layer prediction block by using the first layer reconstruction block positioned at a point corresponding to the disparity information of the second layer block.
- the disparity information may include information regarding a disparity vector, a reference view picture index, a reference picture index, and a prediction direction.
- the second layer encoder 16 may use the second layer prediction block determined by using the first layer reconstruction block according to the interlayer prediction structure as a reference image for interlayer prediction of the second layer original block.
- the second layer encoder 16 may transform, quantize, and entropy-encode a residual component according to the interlayer prediction, that is, an error between the sample value of the second layer prediction block determined by using the first layer reconstructed image and the sample value of the second layer original block.
- the first layer image to be encoded may be a first view video
- the second layer image may be a second view video. Since the multi-view images are acquired at the same time, the similarity is very high for each image at each viewpoint.
- the multi-view image may have a disparity due to different angles of shot, lighting, or characteristics of an imaging tool (camera, lens, etc.).
- the disparity is represented as a disparity vector
- by performing disparity-compensated prediction, which uses a disparity vector to find and encode an area most similar to the block currently being encoded in an image of another viewpoint, the coding efficiency can be increased.
- the first layer image to be encoded may be a texture image
- the second layer image may be a depth image. Since the texture-depth images are acquired at the same time, the similarity of the prediction technique for each image is very high.
- the coding efficiency can be increased by performing motion-compensated prediction or disparity-compensated prediction, which finds a block of the texture image co-located with a block included in the depth image and encodes the depth image by using motion information or disparity information of the texture image.
- the second layer encoder 16 may perform encoding on the current block of the second layer. For example, the second layer encoder 16 may determine at least one coding unit in the second layer image, and perform encoding on at least one coding unit. In this case, a coding unit in which current encoding is performed among at least one coding unit is called a current coding unit. In addition, the current block may be one of at least one prediction unit determined from the current coding unit.
- the disparity vector acquirer 17 may obtain a disparity vector of the current block.
- the disparity vector may be a vector having 1/4 sample accuracy.
- a vector with 1/4 sample accuracy means that the minimum unit of the vector value is a 1/4 sample value. For example, when the height and width of one sample are 1, the minimum unit of the value for each component of the disparity vector may be 1/4. However, a vector having 1/4 sample accuracy may be a non-integer value.
- the vector value may be modified such that the value of each vector component is an integer. For example, when the vector value is (0.25, 0.75), the modified vector component values may be obtained by multiplying by 4 (scaling), expressed as (1, 3), and stored in a buffer (not shown).
- the second layer encoder 16 may determine a block of the first layer image corresponding to the current block by using the obtained disparity vector.
- the second layer encoder 16 may determine (descale) the value of the vector component before the modification from the value of the modified vector component, and may determine the block of the first layer image corresponding to the current block by using the value of the vector component before the modification.
- the value of the integer vector component may be determined from the modified vector component value. For example, if the value of the modified vector component is (1, 3), it is possible to determine (0, 0) as the value of the vector component by dividing by 4 (inverse scaling: taking only the quotient from the division result and discarding the remainder).
- alternatively, when the value of the modified vector component is (1, 3), rounding may be taken into account when dividing by 4 in the descaling (2 is added to each component, and then only the quotient of the division is taken and the remainder is discarded), so that (0, 1) may be determined as the value of the vector component.
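- A sketch of the quarter-sample handling described above: the vector is stored after scaling by 4, and descaling recovers integer components either by simple truncation or with rounding (shown for the non-negative values used in the example).

```python
def scale(dv_x, dv_y):
    """(0.25, 0.75) -> (1, 3): multiply each component by 4 before storing."""
    return int(dv_x * 4), int(dv_y * 4)

def descale_truncate(sx, sy):
    """(1, 3) -> (0, 0): keep only the quotient of dividing by 4."""
    return sx // 4, sy // 4

def descale_round(sx, sy):
    """(1, 3) -> (0, 1): add 2 to each component, then keep only the quotient."""
    return (sx + 2) // 4, (sy + 2) // 4

print(scale(0.25, 0.75), descale_truncate(1, 3), descale_round(1, 3))
```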
- the second layer encoder 16 may determine a predetermined position with respect to the block of the first layer image by adding the disparity vector components to the components indicating a predetermined position of the current block, and may determine the block of the first layer image based on the determined position.
- the second layer encoder 16 may determine a reference block including a sample in contact with the block of the first layer image.
- the reference block refers to a block used for referring to prediction information.
- the reference block may be a block used for referring to motion information in the process of inter-layer prediction of the current block.
- the second layer encoder 16 may determine a reference block including a sample in contact with an edge of the block of the first layer image.
- the second layer encoder 16 may determine a reference block including a sample in contact with the lower right side of the block of the first layer image.
- the second layer encoder 16 may determine at least one block from the first layer image based on a block unit for storing motion information.
- the block including the sample may be one of the determined at least one block.
- the first layer image may be divided into a plurality of blocks having a predetermined size
- the reference block including the sample may be a block including the sample among the plurality of blocks.
- when the first sample is outside the boundary of the second layer image, the second layer encoder 16 may determine a second sample adjacent to the inside of the boundary of the second layer image, and may determine the block including the second sample as the reference block.
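- A sketch of the boundary handling described above: if the first sample falls outside the picture, a second sample adjacent to the inside of the picture boundary is used instead (picture dimensions are illustrative parameters).

```python
def clamp_to_picture(sample_x, sample_y, pic_width, pic_height):
    x = min(max(sample_x, 0), pic_width - 1)
    y = min(max(sample_y, 0), pic_height - 1)
    return x, y

print(clamp_to_picture(1920, 1088, 1920, 1080))  # (1919, 1079)
```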
- the second layer encoder 16 may obtain a motion vector of the reference block.
- the second layer encoder 16 may acquire the prediction direction information and the reference direction index from the reference block together with the motion vector of the reference block.
- the second layer encoder 16 may determine the motion vector of the current block with respect to the second layer image by using the obtained motion vector.
- the second layer encoder 16 may determine the prediction direction information and the reference direction index of the current block by using the obtained prediction direction information and the reference direction index.
- the second layer encoder 16 may determine a block in the reference image using the motion vector of the current block, and determine the prediction block of the current block using the determined block in the reference image.
- the reference image may mean an image of the same layer as the second layer image and an image having a time different from that of the second layer image.
- the first layer image may be a first view image
- the second layer image may be a second view image.
- the second layer encoder 16 may determine the residual block of the current block by using the prediction block of the current block.
- the second layer encoder 16 may determine a residual block indicating a difference between the original sample value of the current block and the sample value of the prediction block.
- the second layer encoder 16 may perform transformation on the residual block and entropy-encode the transformed residual block.
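- A minimal sketch of forming the residual block described above: the difference between the original samples and the prediction samples (the transform and entropy coding steps are not shown).

```python
def residual_block(original, prediction):
    return [[o - p for o, p in zip(o_row, p_row)]
            for o_row, p_row in zip(original, prediction)]

print(residual_block([[120, 121], [119, 118]], [[117, 119], [120, 116]]))
# [[3, 2], [-1, 2]]
```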
- the bitstream generator 18 may generate a bitstream including the encoded video and interlayer prediction information determined in relation to the interlayer prediction. For example, the bitstream generator 18 may generate a bitstream including the entropy-encoded residual block of the current block. That is, the bitstream generator 18 may include information about the entropy-encoded residual as the encoded video. The generated bitstream may be transmitted to the decoding apparatus. So far, a method in which the second layer encoder 16 performs block-based interlayer motion prediction has been described. However, the present invention is not limited thereto, and the second layer encoder 16 may perform interlayer motion prediction on a subblock basis.
- the second layer encoder 16 may determine at least one subblock from the current block of the second layer image, determine a candidate subblock of the first layer image by using the determined subblock, and determine the motion vector of (the subblock of) the current block by using the motion information of the block including the sample in contact with the candidate subblock.
- the interlayer video encoding apparatus 10 may transform, quantize, and entropy-encode a residual component according to the interlayer prediction, that is, an error between the sample value of the second layer prediction block determined by using the first layer reconstructed image and the sample value of the second layer original block. In addition, an error between items of prediction information may also be entropy-encoded.
- the interlayer video encoding apparatus 10 may encode the current layer image sequence by referring to the first layer reconstructed images through the interlayer prediction structure.
- the interlayer video encoding apparatus 10 may encode a second layer image sequence according to a single layer prediction structure without referring to other layer samples. Therefore, care should be taken not to limit the interpretation that the interlayer video encoding apparatus 10 performs only inter prediction of the interlayer prediction structure in order to encode the second layer image sequence.
- in the above description, the first layer image is an image that is encoded first and whose motion information is referred to.
- however, the second layer image, the third layer image, ..., and the K-th layer image should not be limited to images that refer to the motion information of the first layer image.
- for example, the second layer image may be a previously encoded image whose motion information is referred to, and the first layer image may be the currently encoded image that refers to that motion information.
- FIG. 1B is a flowchart of an interlayer video encoding method, according to various embodiments.
- the interlayer video encoding apparatus 10 may obtain a disparity vector of a current block included in a first layer image.
- the disparity vector of the current block may be determined by obtaining information about the disparity vector of the current block from the bitstream. Alternatively, the disparity vector of the current block may be derived from the disparity vector regarding the neighboring block of the current block.
- the interlayer video encoding apparatus 10 may determine a block of a second layer image corresponding to the current block by using the obtained disparity vector. The interlayer video encoding apparatus 10 may determine a block of the second layer image indicated by the obtained disparity vector from the current block.
- the interlayer video encoding apparatus 10 may determine a reference block including a sample in contact with a boundary of the block. The interlayer video encoding apparatus 10 may determine a reference block including a sample contacting an edge of the block.
- the interlayer video encoding apparatus 10 may obtain a motion vector of the reference block.
- the interlayer video encoding apparatus 10 may obtain motion information including a motion vector.
- the motion information may include information about a motion vector, a reference picture index, and a prediction direction.
- the interlayer video encoding apparatus 10 may determine a motion vector of the current block with respect to the first layer image by using the obtained motion vector.
- the interlayer video encoding apparatus 10 may determine motion information of the current block for the first layer image by using motion information including a motion vector.
- the interlayer video encoding apparatus 10 may determine a prediction block of the current block for the first layer image by using the determined motion vector of the current block.
- the interlayer video encoding apparatus 10 may determine the prediction block of the current block with respect to the first layer image by using motion information of the current block including the motion vector of the current block.
- the interlayer video encoding apparatus 10 may determine at least one of the L0 prediction list and the L1 prediction list by using the prediction direction information, determine, by using the reference image index, an image of another time belonging to the same layer as the first layer image in the at least one prediction list, determine a block corresponding to the current block from the image belonging to the same layer as the first layer image by using the motion vector, and determine the sample value of the prediction block of the current block by using the sample value of the determined block.
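- A simplified, self-contained sketch of the step above: pick the reference picture via the prediction direction and reference image index, then read the block that the motion vector points to. The dictionary-based reference lists and the tiny "picture" are illustrative assumptions.

```python
def predict_block(x, y, w, h, mv, ref_idx, direction, lists):
    ref_picture = lists[direction][ref_idx]        # same layer, different time
    px, py = x + mv[0], y + mv[1]                  # position indicated by the motion vector
    return [row[px:px + w] for row in ref_picture[py:py + h]]

# Usage with a tiny 4x4 "reference picture" in list L0.
ref = [[10, 11, 12, 13],
       [20, 21, 22, 23],
       [30, 31, 32, 33],
       [40, 41, 42, 43]]
print(predict_block(0, 0, 2, 2, (1, 1), 0, "L0", {"L0": [ref]}))  # [[21, 22], [31, 32]]
```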
- the interlayer video encoding apparatus 10 may determine a residual block of the current block by using the prediction block of the current block and encode the same.
- the interlayer video encoding apparatus 10 may generate a bitstream including the residual block.
- the interlayer video encoding apparatus 10 may include a central processor (not shown) that collectively controls the first layer encoder 14, the second layer encoder 16, and the bitstream generator 18.
- alternatively, the first layer encoder 14, the second layer encoder 16, and the bitstream generator 18 may be operated by their own processors (not shown), and as the processors (not shown) operate mutually organically, the interlayer video encoding apparatus 10 may operate as a whole.
- alternatively, the first layer encoder 14, the second layer encoder 16, and the bitstream generator 18 may be controlled under the control of an external processor (not shown) of the interlayer video encoding apparatus 10.
- the interlayer video encoding apparatus 10 may include one or more data storage units (not shown) that store input and output data of the first layer encoder 14, the second layer encoder 16, and the bitstream generator 18.
- the interlayer video encoding apparatus 10 may include a memory controller (not shown) that manages data input and output of the data storage unit (not shown).
- the interlayer video encoding apparatus 10 may perform a video encoding operation including transformation by operating in conjunction with an internal video encoding processor or an external video encoding processor to output a video encoding result.
- the internal video encoding processor of the interlayer video encoding apparatus 10 may implement a video encoding operation as a separate processor.
- the inter-layer video encoding apparatus 10, the central computing unit, or the graphics processing unit may include a video encoding processing module to implement a basic video encoding operation.
- FIG. 2A is a block diagram of an interlayer video decoding apparatus, according to various embodiments.
- the interlayer video decoding apparatus 20 may include a decoder 22.
- the decoder 22 may include a first layer decoder 24 and a second layer decoder 26.
- the second layer decoder 26 may include a disparity vector obtainer 27.
- the interlayer video decoding apparatus 20 may receive a bitstream of an encoded video.
- the interlayer video decoding apparatus 20 receives a bitstream of a video encoded for each layer.
- the interlayer video decoding apparatus 20 may receive bitstreams for each layer according to a scalable encoding method.
- the number of layers of the bitstreams received by the interlayer video decoding apparatus 20 is not limited.
- hereinafter, an embodiment in which the first layer decoder 24 of the interlayer video decoding apparatus 20 receives and decodes the first layer stream and the second layer decoder 26 receives and decodes the second layer stream will be described in detail.
- the interlayer video decoding apparatus 20 may receive a stream in which image sequences having different resolutions are encoded in different layers.
- the low resolution image sequence may be reconstructed by decoding the first layer stream, and the high resolution image sequence may be reconstructed by decoding the second layer stream.
- a multiview video may be decoded according to a scalable video coding scheme.
- left view images may be reconstructed by decoding the first layer stream.
- Right-view images may be reconstructed by further decoding the second layer stream in addition to the first layer stream.
- the center view images may be reconstructed by decoding the first layer stream.
- Left view images may be reconstructed by further decoding a second layer stream in addition to the first layer stream.
- Right-view images may be reconstructed by further decoding the third layer stream in addition to the first layer stream.
- the texture image may be reconstructed by decoding the first layer stream.
- the depth image may be reconstructed by further decoding the second layer stream by using the reconstructed texture image.
- a scalable video coding scheme based on temporal scalability may be performed. Images of the base frame rate may be reconstructed by decoding the first layer stream. The high frame rate images may be reconstructed by further decoding the second layer stream in addition to the first layer stream.
- first layer images may be reconstructed from the first layer stream, and second layer images may be further reconstructed by further decoding the second layer stream with reference to the first layer reconstructed images.
- the K-th layer images may be further reconstructed by further decoding the K-th layer stream with reference to the second layer reconstruction image.
- the interlayer video decoding apparatus 20 obtains encoded data of the first layer images and the second layer images from the first layer stream and the second layer stream, and may further obtain a motion vector generated by inter prediction and prediction information generated by interlayer prediction.
- the interlayer video decoding apparatus 20 may decode inter-predicted data for each layer and may decode inter-layer predicted data among a plurality of layers. Reconstruction via motion compensation and interlayer video decoding may be performed based on a coding unit or a prediction unit.
- images may be reconstructed by performing motion compensation for the current image with reference to reconstructed images predicted through inter prediction of the same layer.
- the motion compensation refers to an operation of reconstructing a reconstructed image of the current image by synthesizing a reference image determined using the motion vector of the current image and a residual component of the current image.
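- A sketch of the motion-compensation reconstruction described above: the reference (prediction) samples and the residual samples are combined; clipping to the 8-bit sample range is an assumption for illustration.

```python
def reconstruct(prediction, residual, max_value=255):
    return [[min(max(p + r, 0), max_value) for p, r in zip(p_row, r_row)]
            for p_row, r_row in zip(prediction, residual)]

print(reconstruct([[117, 119]], [[3, 2]]))  # [[120, 121]]
```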
- the interlayer video decoding apparatus 20 may perform interlayer video decoding by referring to prediction information of the first layer images in order to decode a second layer image predicted through interlayer prediction.
- interlayer video decoding refers to an operation of determining the prediction information of the current image by reconstructing it from the prediction information of a reference block of another layer.
- the interlayer video decoding apparatus 20 may perform interlayer video decoding for reconstructing third layer images predicted using the second layer images.
- the interlayer prediction structure will be described later with reference to FIG. 3A.
- the second layer decoder 26 may decode the second layer stream without referring to the first layer image sequence. Therefore, the interpretation should not be limited such that the second layer decoder 26 performs only interlayer prediction in order to decode the second layer image sequence.
- the interlayer video decoding apparatus 20 decodes each block of each image of the video.
- the block may be a maximum coding unit, a coding unit, a prediction unit, a transformation unit, or the like among coding units having a tree structure.
- the first layer decoder 24 may decode the first layer image by using encoding symbols of the parsed first layer image.
- the first layer decoder 24 may perform decoding based on coding units having a tree structure for each maximum coding unit of the first layer stream.
- the first layer decoder 24 may perform entropy decoding for each largest coding unit to obtain encoded information and encoded data.
- the first layer decoder 24 may reconstruct the residual component by performing inverse quantization and inverse transformation on the encoded data obtained from the stream.
- the first layer decoder 24 may directly receive a bitstream of quantized transform coefficients. As a result of performing inverse quantization and inverse transformation on the quantized transform coefficients, the residual components of the images may be reconstructed.
- the first layer decoder 24 may determine the predicted image through motion compensation between the same layer images, and reconstruct the first layer images by combining the predicted image and the residual component.
- the second layer decoder 26 may generate a second layer prediction image by using samples of the first layer reconstruction image.
- the second layer decoder 26 may decode the second layer stream to obtain a prediction error according to interlayer prediction.
- the second layer decoder 26 may generate the second layer reconstruction image by combining the prediction error with the second layer prediction image.
- the second layer decoder 26 may determine the second layer prediction image by using the first layer reconstructed image decoded by the first layer decoder 24.
- the second layer decoder 26 may determine a block of the first layer image to be referred to by the coding unit or the prediction unit of the second layer image, according to the interlayer prediction structure. For example, a reconstruction block of the first layer image positioned corresponding to the position of the current block in the second layer image may be determined.
- the second layer decoder 26 may determine the second layer prediction block by using the first layer reconstruction block corresponding to the second layer block.
- the second layer decoder 26 may determine the second layer prediction block by using the first layer reconstruction block co-located at the same point as the second layer block.
- the second layer decoder 26 may use the second layer prediction block determined by using the first layer reconstruction block according to the interlayer prediction structure as a reference image for interlayer prediction of the second layer original block. In this case, the second layer decoder 26 may reconstruct the second layer block by synthesizing the sample value of the second layer prediction block determined using the first layer reconstructed image and the residual component according to the interlayer prediction.
- the encoded first layer image may be a first view image
- the second layer image may be a second view image
- the encoded first layer image may be a texture image
- the second layer image may be a depth image
- the similarity is very high for each image at each viewpoint. Therefore, by using the disparity vector, disparity compensation is performed to find and decode a region most similar to the block to be currently decoded in an image of another view, thereby improving decoding efficiency.
- the interlayer video decoding apparatus 20 may obtain a disparity vector for interlayer prediction through a bitstream or predict it from other encoding information.
- the disparity vector may be predicted from the disparity vector of neighboring blocks of the currently reconstructed block.
- the interlayer video decoding apparatus 20 may determine the base disparity vector as the disparity vector if it does not predict the disparity vector from the disparity vector of the neighboring block.
- the second layer decoder 26 may perform decoding on the current block of the second layer image. For example, the second layer decoder 26 may determine at least one coding unit in the second layer image, and may decode at least one coding unit. In this case, a coding unit in which current encoding is performed among at least one coding unit is called a current coding unit. In addition, the current block may be one of at least one prediction unit determined from the current coding unit.
- the disparity vector acquirer 27 may obtain a disparity vector of the current block.
- the second layer decoder 26 may determine a block of the first layer image corresponding to the current block by using the obtained disparity vector.
- the second layer decoder 26 may determine a block of the first layer image indicated by the disparity vector from the current block.
- the second layer decoder 26 may determine a reference block including a sample in contact with the block of the first layer image.
- the reference block refers to a block used for referring to prediction information in the prediction process.
- the reference block may be a block used for referring to motion information in the process of inter-layer prediction of the current block.
- the second layer decoder 26 may determine a reference block including a sample in contact with an edge of the block of the first layer image.
- the second layer decoder 26 may determine a reference block including a sample in contact with the lower right side of the block of the first layer image.
- the second layer decoder 26 may determine at least one reference block from the first layer image based on a block unit for storing motion information.
- the reference block including the sample may be one of the determined at least one reference block.
- the first layer image may be divided into a plurality of blocks having a predetermined size, and the reference block including the sample may be a block including the sample among the plurality of blocks.
- when the first sample is outside the boundary of the second layer image, the second layer decoder 26 may determine a second sample adjacent to the inside of the boundary of the second layer image, and may determine the block including the second sample as the reference block.
- the second layer decoder 26 may acquire motion information of the reference block. For example, the second layer decoder 26 may obtain a motion vector of the reference block. In addition, the second layer decoder 26 may acquire information of a prediction direction and a reference direction index of the reference block.
- the second layer decoder 26 may determine the motion information of the current block with respect to the second layer image by using the obtained motion information. For example, the second layer decoder 26 may determine the motion vector of the current block with respect to the second layer image by using the obtained motion vector. In addition, the second layer decoder 26 may determine the information of the prediction direction and the reference direction index of the current block by using the acquired information of the prediction direction and the reference direction index.
- the second layer decoder 26 may determine a block in the reference picture using the motion information of the current block, and determine the prediction block of the current block using the determined block in the reference picture. For example, the second layer decoder 26 determines a reference picture using information of a prediction direction of the current block and a reference direction index, and determines a block in the reference picture using the motion vector of the current block. The sample value of the block may be used to determine the sample value of the prediction block with respect to the current block.
- the reference image may be an image of the same layer as the second layer image but of a different time from the second layer image.
- the second layer decoder 26 may reconstruct the current block using the residual block of the current block obtained from the bitstream and the prediction block of the current block.
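- the flow described above (obtaining a disparity vector, locating the reference block in the first layer image, reusing its motion information, and then performing motion compensation and residual addition) can be outlined with the short sketch below; this is an illustrative sketch only, and the data structures, the choice of the lower-right contact sample, and the 8x8 motion-information grid are assumptions rather than the normative process:

```python
# A minimal, hypothetical sketch of the flow above: derive the current block's motion
# information from a first layer reference block. The data layout and the 8x8
# motion-information storage grid are illustrative assumptions.

MOTION_GRID = 8  # assumed size of the block unit that stores motion information

def motion_unit(x, y):
    """Top-left corner of the motion-storage unit that contains sample (x, y)."""
    return (x // MOTION_GRID) * MOTION_GRID, (y // MOTION_GRID) * MOTION_GRID

# Motion information of the first layer image, keyed by motion-storage unit.
first_layer_motion = {
    (40, 24): {"mv": (3, -1), "pred_dir": "L0", "ref_idx": 0},
}

# Current block of the second layer image: position, size, and its disparity vector.
current = {"x": 32, "y": 16, "w": 8, "h": 8, "dv": (4, 2)}

# 1. Block of the first layer image pointed to by the disparity vector.
ref_x = current["x"] + current["dv"][0]
ref_y = current["y"] + current["dv"][1]

# 2. Sample touching the lower-right corner of that block (one possible choice).
sample = (ref_x + current["w"], ref_y + current["h"])

# 3. Reference block = motion-storage unit containing that sample.
ref_block = motion_unit(*sample)

# 4. Reuse its motion vector, prediction direction and reference index for the current
#    block; motion compensation within the second layer and residual addition follow.
info = first_layer_motion[ref_block]
print("reference block:", ref_block, "-> motion info reused for the current block:", info)
```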
- the interlayer video decoding apparatus 20 has been described on the assumption that it is determined to perform interlayer prediction.
- hereinafter, the process in which the interlayer video decoding apparatus 20 determines a motion vector candidate, generates a merge candidate list including a merge candidate associated with the motion vector candidate, and decides to perform interlayer prediction on the current block using the merge candidate list will be described in detail.
- the interlayer video decoding apparatus 20 determines various motion vector candidates by predicting various motion vectors to perform interlayer prediction on the current block.
- the interlayer video decoding apparatus 20 may determine at least one motion vector predicted from at least one spatial candidate block as at least one motion vector candidate. In addition, the interlayer video decoding apparatus 20 may determine at least one motion vector predicted from at least one temporal candidate block as at least one motion vector candidate.
- the interlayer video decoding apparatus 20 may determine a motion vector candidate (hereinafter, referred to as an inter-view motion prediction candidate) for inter-view motion prediction.
- the interlayer video decoding apparatus 20 may determine a motion vector candidate (hereinafter, referred to as a shifted inter-view motion prediction candidate) for the shifted inter-view motion prediction.
- the interlayer video decoding apparatus 20 may determine a disparity vector of a current block of a second layer image (eg, an image related to a second view) and may determine a block of a first layer image (eg, an image related to a first view) by using the determined disparity vector.
- here, determining the block of the first layer image means determining a sample in the first layer image corresponding to a sample at a specific position in the current block, and determining the block including the corresponding sample.
- the motion vector candidate may mean a motion vector for a block including the corresponding sample.
- the sample in the block may be a sample located at the center of the block.
- for the shifted inter-view motion prediction candidate, the interlayer video decoding apparatus 20 determines the disparity vector of the current block of the second layer image (the image related to the second view), determines a block of the first layer image (the image related to the first view) by using the determined disparity vector, determines a sample in contact with the determined block, and may determine a motion vector of the block including the determined sample as the motion vector candidate.
- the interlayer video decoding apparatus 20 may determine an inter-view motion prediction candidate, for example at a first view, by using a reference view index and a disparity vector derived from a neighboring block of the current block.
- the reference view index is an index indicating an image of a view to be referred to among a plurality of views, and it is assumed here that the reference view index indicates a first layer image (eg, an image relating to the first view).
- the interlayer video decoding apparatus 20 may determine a sample in a block included in the first layer image, obtain a motion vector of the block including the determined sample, and determine it as the motion vector candidate of the current block. In addition, the interlayer video decoding apparatus 20 may determine the prediction direction and the reference image index by using the prediction direction information and the reference image index of the block including the sample, and may determine them together with the motion vector candidate.
- in detail, the interlayer video decoding apparatus 20 obtains information of a prediction direction, obtains a reference picture index indicating a reference image in at least one of the prediction lists (the L0 prediction list and the L1 prediction list) of the block including the sample according to the obtained prediction direction, and may determine the prediction direction and the reference picture index of the current block, together with the motion vector candidate, using the obtained prediction direction and reference picture index.
- the prediction direction information is information indicating at least one prediction direction of the L1 prediction list and the L0 prediction list.
- the prediction direction information may include L0 prediction direction information indicating that the L0 prediction list is available and L1 prediction direction information indicating that the L1 prediction list is available. That is, determining the prediction direction means determining which of the L0 prediction list and the L1 prediction list to use for prediction.
- the reference picture index may include an index indicating a picture to be referred to among the pictures included in the L0 prediction list and an index indicating a picture to be referred to among the pictures included in the L1 prediction list.
- the interlayer video decoding apparatus 20 generates a merge candidate list when determining the motion vector candidate.
- the interlayer video decoding apparatus 20 generates a merge candidate list including various merge candidates such as a spatial merge candidate, a temporal merge candidate, an inter-view motion prediction merge candidate, an inter-view disparity prediction merge candidate, and a motion parameter inheritance merge candidate.
- a motion vector candidate, a prediction direction, and a reference picture index that may be used for interlayer prediction may be determined for the merge candidate.
- the merge candidate may be an indicator indicating a motion vector prediction technique, and more specifically, the merge candidate may mean a block used in the motion vector prediction technique.
- the interlayer video decoding apparatus 20 determines whether each merge candidate is available according to the priority of each merge candidate.
- the merge candidate available means that at least one prediction direction associated with the merge candidate is determined.
- the interlayer video decoding apparatus 20 adds the available merge candidates to the merge candidate list.
- the interlayer video decoding apparatus 20 determines whether a temporal merging candidate is available, and adds the temporal merging candidate to the merging candidate list if the temporal merging candidate is available.
- the interlayer video decoding apparatus 20 may determine whether an inter-view motion prediction merging candidate, which is a next priority, is available according to the priority of the merge candidate.
- the interlayer video decoding apparatus 20 adds the inter-view motion prediction merging candidate to the merge candidate list. If the motion parameter inheritance merge candidate is available, the interlayer video decoding apparatus 20 adds the motion parameter inheritance merge candidate to the merge candidate list.
- the interlayer video decoding apparatus 20 adds available merge candidates according to the priorities among the merge candidates, and when the number of merge candidates currently added is larger than the number of merge candidates determined in relation to the merge candidate list, does not add further merge candidates to the merge candidate list.
- the interlayer video decoding apparatus 20 obtains a merge index.
- the merge index may be obtained from the bitstream.
- the merge index refers to an index indicating one of merge candidates added to the merge candidate list.
- the interlayer video decoding apparatus 20 determines one merge candidate from the merge candidate list by using the merge index.
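- a minimal sketch of the list construction and selection described above is given below; the candidate names, their priorities, and the maximum list size of five are illustrative assumptions, not the normative derivation:

```python
# Illustrative sketch (not the normative process) of building a merge candidate list by
# priority and picking one candidate with a merge index.

MAX_MERGE_CANDIDATES = 5  # assumed upper bound on the merge candidate list size

def build_merge_list(candidates_by_priority):
    """candidates_by_priority: list of (name, motion_info_or_None) in priority order."""
    merge_list = []
    for name, motion_info in candidates_by_priority:
        if motion_info is None:          # candidate not available (no prediction direction)
            continue
        if len(merge_list) >= MAX_MERGE_CANDIDATES:
            break                        # do not add more candidates than allowed
        merge_list.append((name, motion_info))
    return merge_list

candidates = [
    ("inter_view_motion_prediction", {"mv": (2, 0), "ref_idx": 0, "dir": "L0"}),
    ("spatial_A1",                    {"mv": (1, 1), "ref_idx": 0, "dir": "L0"}),
    ("temporal_col",                  None),                      # unavailable
    ("shifted_inter_view",            {"mv": (2, 1), "ref_idx": 1, "dir": "L1"}),
    ("motion_parameter_inheritance",  {"mv": (0, 0), "ref_idx": 0, "dir": "L0"}),
]

merge_list = build_merge_list(candidates)
merge_index = 2                           # would be parsed from the bitstream
name, motion_info = merge_list[merge_index]
print("selected merge candidate:", name, motion_info)
```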
- if the merge candidate determined using the merge index is an inter-view motion prediction merge candidate, the interlayer video decoding apparatus 20 may perform motion compensation using the motion vector candidate, prediction direction information, and reference picture index determined through inter-view motion prediction. If the merge candidate determined using the merge index is a shifted inter-view motion prediction merge candidate, the interlayer video decoding apparatus 20 may perform motion compensation using the shifted inter-view motion vector candidate, prediction direction information, and reference picture index associated with the shifted inter-view motion prediction merge candidate.
- that is, when the merge candidate determined using the merge index is one of the inter-view motion prediction merge candidate and the shifted inter-view motion prediction merge candidate, the interlayer video decoding apparatus 20 generates a prediction sample value for the current block by performing motion compensation on the current block using the corresponding motion vector candidate.
- the interlayer video decoding apparatus 20 determines the reference picture indicated by the reference picture index from the prediction list indicated by the prediction direction information by using the motion vector candidate and the prediction direction information, and determines the reference block for the current block in the reference picture.
- the interlayer video decoding apparatus 20 generates a predicted sample value for the current block by using the sample value of the determined reference block.
- in detail, the interlayer video decoding apparatus 20 may determine at least one of the L0 prediction list and the L1 prediction list using the prediction direction information, and may determine the reference image among the images included in the determined at least one prediction list using the reference image index.
- the interlayer video decoding apparatus 20 may determine a block in the reference image from the current block by using the motion vector included in the determined motion vector candidate, and may determine the sample value of the prediction block for the current block using the sample value of the block in the reference image.
- the interlayer video decoding apparatus 20 may determine an inter-view motion prediction candidate based on a subblock. Similarly, the interlayer video decoding apparatus 20 may determine the shifted inter-view motion prediction candidate based on the subblock.
- the subblock-based inter-view motion prediction candidate is determined by determining at least one subblock from the current block of the second layer image, determining a candidate subblock of the first layer image by using the determined subblock, and using motion information of a block that includes a sample inside the candidate subblock as the motion vector candidate for (a subblock of) the current block. The subblock-based shifted inter-view motion prediction candidate is a motion vector candidate determined for (a subblock of) the current block using motion information of a block that includes a sample adjacent to the candidate subblock.
- the subblock-based inter-view motion prediction candidate and the subblock-based shifted inter-view motion prediction candidate may minimize prediction error by predicting a motion vector using a subblock smaller than or equal to the size of the current block.
- when the interlayer video decoding apparatus 20 determines that depth-based block partition prediction is not performed on the current block included in the first layer image and the first layer image is determined as a texture image, a subblock-based inter-view motion prediction candidate may be determined. If not, the interlayer video decoding apparatus 20 may determine an inter-view motion prediction candidate. In addition, when the current image is not a depth image, the interlayer video decoding apparatus 20 may determine a shifted inter-view motion prediction candidate.
- the interlayer video decoding apparatus 20 may determine an inter-view motion prediction candidate by determining a motion vector using the position, width, and height of the current block, a reference view index for the current block, and a disparity vector for the current block.
- the reference image index and the prediction direction may be determined in relation to the inter-view motion prediction candidate together with the motion vector.
- the interlayer video decoding apparatus 20 may determine a block of the second layer image by using a disparity vector of the current block.
- the interlayer video decoding apparatus 20 may determine an inter-view motion prediction candidate by determining a sample in a block of the second layer image.
- the interlayer video decoding apparatus 20 may determine a reference block including the sample in the block. In this case, when the determined reference block is not encoded in the intra prediction mode, the prediction direction, the reference picture index, and the motion vector of the reference block are obtained from the reference block, the reference picture index in the reference picture list for the reference block is obtained, the reference picture index may be determined as i, and the motion vector of the reference block may be determined as the inter-view motion prediction candidate.
- the interlayer video decoding apparatus 20 may determine an inter-view motion prediction candidate by determining a sample adjacent to a block of the shifted second layer image.
- the interlayer video decoding apparatus 20 may determine a reference block including samples adjacent to the block. After determining the reference block, the process of determining the shifted inter-view motion prediction candidate is the same as the process of determining the inter-view motion prediction candidate, and thus a detailed description thereof will be omitted.
- the interlayer video decoding apparatus 20 determines a block of the first layer image by using a disparity vector of a current block included in the second layer image.
- the interlayer video decoding apparatus 20 may determine at least one subblock from a block of the first layer image.
- the interlayer video decoding apparatus 20 may determine at least one subblock from the current block.
- the number of the at least one subblock determined from the current block and the relative positions of the at least one subblock in the current block may be the same as the number of the at least one subblock determined from the block of the first layer image and the relative positions of the at least one subblock in that block.
- the interlayer video decoding apparatus 20 may determine an inter-view motion prediction candidate for each subblock in the current block.
- the process of determining the inter-view motion prediction candidate for each subblock in the current block is similar to the process of determining the inter-view motion prediction candidate for the current block, differing only in the size and position of the block, and thus a detailed description thereof is omitted. If there is a subblock among the subblocks in the current block for which the inter-view motion prediction candidate cannot be determined, the inter-view motion prediction candidate of that subblock may be determined using the inter-view motion prediction candidate of another subblock.
- the interlayer video decoding apparatus 20 performs motion compensation on each subblock by using the inter-view motion prediction candidate of the subblock.
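- the subblock-based derivation described above can be sketched as follows; the 8x8 subblock size, the lookup table of first layer motion vectors, and the fallback rule are illustrative assumptions:

```python
# Hypothetical sketch of subblock-based inter-view motion prediction: split the current
# block into subblocks, derive one motion vector per subblock from the first layer image,
# and fall back to another subblock's vector where none can be derived.

SUB = 8  # assumed subblock size

def subblock_candidates(block_x, block_y, width, height, dv, first_layer_mv):
    """Return one motion vector candidate per subblock of the current block."""
    candidates = {}
    fallback = None
    for sy in range(0, height, SUB):
        for sx in range(0, width, SUB):
            # Corresponding subblock in the first layer image (same relative position).
            ref = (block_x + sx + dv[0], block_y + sy + dv[1])
            mv = first_layer_mv.get(ref)          # None if that block has no motion info
            if mv is not None:
                fallback = mv                     # remember the latest usable vector
            candidates[(sx, sy)] = mv
    # Fill subblocks without a candidate using a vector from another subblock.
    return {pos: (mv if mv is not None else fallback) for pos, mv in candidates.items()}

first_layer_mv = {(20, 10): (1, 0), (28, 10): (1, -1), (20, 18): (2, 0)}  # (x, y) -> mv
print(subblock_candidates(block_x=16, block_y=8, width=16, height=16,
                          dv=(4, 2), first_layer_mv=first_layer_mv))
```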
- in the above description, it has been assumed that the first layer image is the base layer image and the second layer image is the enhancement layer image.
- the base layer refers to a layer that can be restored using only its own layer
- the enhancement layer refers to a layer that can be restored using information of another layer.
- in the above description, the first layer image is the image encoded first and the image whose motion information is referred to; however, the second layer image, the third layer image, and the K-th layer image should not be limitedly interpreted as images referring to the motion information of the first layer image.
- that is, the second layer image may be the previously encoded image whose motion information is referred to, and the first layer image may be the currently encoded image that refers to the motion information.
- FIG. 2B is a flowchart of an interlayer video decoding method, according to various embodiments.
- the interlayer video decoding apparatus 20 may obtain a disparity vector of a current block included in a first layer image.
- the interlayer video decoding apparatus 20 may determine a block of a second layer image corresponding to the current block by using the obtained disparity vector.
- the interlayer video decoding apparatus 20 may determine a reference block including a sample in contact with the boundary of the block.
- the interlayer video decoding apparatus 20 may determine a reference block including a sample contacting an edge of the block.
- the interlayer video decoding apparatus 20 may obtain a motion vector of the reference block.
- the interlayer video decoding apparatus 20 may acquire a prediction direction and a reference image index of the reference block together with the motion vector of the reference block.
- the interlayer video decoding apparatus 20 may determine a motion vector of a current block with respect to a first layer image by using the obtained motion vector.
- the interlayer video decoding apparatus 20 may determine the prediction direction and the reference image index of the current block by using the obtained prediction direction and the reference image index.
- the interlayer video decoding apparatus 20 may determine the prediction block of the current block by using the determined motion vector of the current block. In detail, the interlayer video decoding apparatus 20 determines a reference picture using the prediction direction and the reference picture index of the current block, determines a block in the reference picture using the motion vector of the current block, and may determine the sample value of the prediction block for the current block using the sample value of the determined block.
- the interlayer video decoding apparatus 20 may obtain a residual block of the current block from the bitstream, and the interlayer video decoding apparatus 20 may use the residual block of the current block and the prediction block of the current block. You can restore the current block. In detail, the interlayer video decoding apparatus 20 may reconstruct the current block by adding the sample value of the residual block of the current block to the sample value of the prediction block of the current block.
- hereinafter, an interlayer prediction structure that may be performed in the interlayer video encoding apparatus 10 according to various embodiments will be described with reference to FIG. 3A.
- FIG. 3A illustrates an interlayer prediction structure, according to various embodiments.
- the interlayer video encoding apparatus 10 may predictively encode base view images, left view images, and right view images according to the reproduction order 30 of the multiview video prediction structure illustrated in FIG. 3A.
- images of the same view are arranged in the horizontal direction. Therefore, left view images labeled 'Left' are arranged in a row in the horizontal direction, base view images labeled 'Center' are arranged in a row in the horizontal direction, and right view images labeled 'Right' are arranged in a row in the horizontal direction.
- the base view images may be center view images, in contrast to left / right view images.
- images having the same POC order are arranged in the vertical direction.
- the POC order of an image indicates a reproduction order of images constituting the video.
- 'POC X' displayed in the multi-view video prediction structure 30 indicates a relative reproduction order of the images located in the corresponding column.
- that is, the left view images labeled 'Left' are arranged in the horizontal direction according to the POC order (reproduction order), the base view images labeled 'Center' are arranged in the horizontal direction according to the POC order (reproduction order), and the right view images labeled 'Right' are arranged in the horizontal direction according to the POC order (reproduction order).
- both the left view image and the right view image located in the same column as the base view image are images having different viewpoints but having the same POC order (playing order).
- Each GOP includes images between successive anchor pictures and one anchor picture.
- An anchor picture is a random access point.
- when a video is played back from a random position, that is, when the playback position is arbitrarily selected from among the images arranged according to the reproduction order of the video (the POC order), the anchor picture whose POC order is nearest to the playback position is played.
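- as a simple illustration of this random access rule (with made-up POC values), the anchor picture nearest to the requested playback position can be selected as follows:

```python
# Small illustrative sketch: on random access, playback starts from the anchor picture
# whose POC is nearest to the requested position. The POC values are made-up examples.

anchor_pocs = [0, 8, 16, 24, 32]   # POCs of anchor pictures (one per GOP boundary)

def nearest_anchor(requested_poc):
    return min(anchor_pocs, key=lambda poc: abs(poc - requested_poc))

print(nearest_anchor(13))  # -> 16, the anchor picture closest to the requested position
```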
- Base view images include base view anchor pictures 31, 32, 33, 34, and 35
- left view images include left view anchor pictures 131, 132, 133, 134, and 135
- the right view images include right-view anchor pictures 231, 232, 233, 234, and 235.
- Multi-view images may be played back in GOP order and predicted (restored).
- images included in GOP 0 may be reproduced, and then images included in GOP 1 may be reproduced. That is, images included in each GOP may be reproduced in the order of GOP 0, GOP 1, GOP 2, and GOP 3.
- after the images included in GOP 0 are predicted (restored), the images included in GOP 1 may be predicted (restored). That is, images included in each GOP may be predicted (restored) in the order of GOP 0, GOP 1, GOP 2, and GOP 3.
- both inter-view prediction (inter layer prediction) and inter prediction are performed on the images.
- the image at which the arrow starts is a reference image
- the image at which the arrow ends is an image predicted using the reference image.
- the prediction result of the base view images may be encoded and output in the form of a base view image stream, and the prediction result of the additional view images may be encoded and output in the form of a layer bitstream.
- the prediction encoding result of the left view images may be output as the first layer bitstream, and the prediction encoding result of the right view images may be output as the second layer bitstream.
- B-picture type pictures are predicted with reference to an I-picture type anchor picture that precedes them in POC order and an I-picture type anchor picture that follows them in POC order.
- b-picture type pictures are predicted by referring to an I-picture type anchor picture that precedes them in POC order and a B-picture type picture that follows them, or by referring to a B-picture type picture that precedes them in POC order and an I-picture type anchor picture that follows them.
- inter-view prediction (inter layer prediction) referring to different view images and inter prediction referring to the same view images are performed, respectively.
- for the left view anchor pictures 131, 132, 133, 134, and 135, inter-view prediction (inter layer prediction) may be performed with reference to the base view anchor pictures 31, 32, 33, 34, and 35 having the same POC order.
- for the right view anchor pictures 231, 232, 233, 234, and 235, inter-view prediction may be performed with reference to the base view images 31, 32, 33, 34, and 35 or the left view anchor pictures 131, 132, 133, 134, and 135 having the same POC order.
- for the remaining images other than the anchor pictures 131, 132, 133, 134, 135, 231, 232, 233, 234, and 235 among the left view images and the right view images, inter-view prediction (inter layer prediction) referring to other view images having the same POC may also be performed.
- the remaining images other than the anchor pictures 131, 132, 133, 134, 135, 231, 232, 233, 234, and 235 among the left view images and the right view images are predicted with reference to the same view images.
- however, the left view images and the right view images may not be predicted with reference to an anchor picture whose reproduction order precedes the additional view images of the same view. That is, for inter prediction of the current left view image, left view images other than a left view anchor picture whose reproduction order precedes the current left view image may be referenced. Similarly, for inter prediction of the current right view image, right view images other than a right view anchor picture whose reproduction order precedes the current right view image may be referenced.
- in addition, for inter prediction of the current left view image, it is preferable that a left view image belonging to a previous GOP that precedes the current GOP to which the current left view image belongs is not referenced, and that prediction is performed with reference to a left view image that belongs to the current GOP but is reconstructed before the current left view image. The same applies to the right view image.
- the interlayer video decoding apparatus 20 may reconstruct base view images, left view images, and right view images according to the reproduction order 30 of the multiview video prediction structure illustrated in FIG. 3A.
- the left view images may be reconstructed through inter-view disparity compensation referring to the base view images and inter motion compensation referring to the left view images.
- the right view images may be reconstructed through inter-view disparity compensation referring to the base view images and the left view images and inter motion compensation referring to the right view images.
- Reference images must be reconstructed first for disparity compensation and motion compensation of left view images and right view images.
- the left view images may be reconstructed through inter motion compensation referring to the reconstructed left view reference image.
- the right view images may be reconstructed through inter motion compensation referring to the reconstructed right view reference image.
- also, for motion compensation of the current left view image, it is preferable that a left view image belonging to a previous GOP that precedes the current GOP to which the current left view image belongs is not referenced, and that only a left view image that belongs to the current GOP but is reconstructed before the current left view image is referred to. The same applies to the right view image.
- FIG. 3B is a diagram illustrating a multilayer video, according to various embodiments.
- the interlayer video encoding apparatus 10 may output a scalable bitstream by encoding multilayer image sequences having various spatial resolutions, various qualities, various frame rates, and different viewpoints. That is, the interlayer video encoding apparatus 10 may generate and output a scalable video bitstream by encoding an input image according to various scalability types. Scalability includes temporal, spatial, image quality, and multi-view scalability, and combinations of such scalabilities. These scalabilities can be classified according to type, and scalabilities can be distinguished by a dimension identifier within each type.
- scalability has scalability types such as temporal, spatial, image quality and multi-point scalability.
- each type may be divided by a scalability dimension identifier. For example, different scalabilities of the same type may have different dimension identifiers, and a higher dimension may be assigned to a higher level of scalability of that type.
- a bitstream is called scalable if it can be separated from the bitstream into valid substreams.
- the spatially scalable bitstream includes substreams of various resolutions.
- the scalability dimension is used to distinguish different scalability from the same scalability type.
- the scalability dimension may be represented by a scalability dimension identifier.
- the spatially scalable bitstream may be divided into substreams having different resolutions such as QVGA, VGA, WVGA, and the like.
- layers with different resolutions can be distinguished using dimensional identifiers.
- for example, the QVGA substream may have 0 as the spatial scalability dimension identifier value, the VGA substream may have 1 as the spatial scalability dimension identifier value, and the WVGA substream may have 2 as the spatial scalability dimension identifier value.
- a temporally scalable bitstream includes substreams having various frame rates.
- a temporally scalable bitstream may be divided into substreams having a frame rate of 7.5 Hz, a frame rate of 15 Hz, a frame rate of 30 Hz, and a frame rate of 60 Hz.
- Image quality scalable bitstreams can be divided into substreams having different qualities according to the Coarse-Grained Scalability (CGS) method, the Medium-Grained Scalability (MGS) method, and the Fine-Grained Scalability (FGS) method.
- Temporal scalability may also be divided into different dimensions according to different frame rates
- image quality scalability may also be divided into different dimensions according to different methods.
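- the relationship between scalability types, dimension identifiers, and substream extraction described above can be illustrated with the following sketch; the concrete type names and identifier values are examples, not normative assignments:

```python
# Illustrative sketch of labelling substreams with a scalability type and a dimension
# identifier, and of extracting a valid substream. All values are examples.

substreams = [
    {"type": "spatial",  "dimension_id": 0, "description": "QVGA"},
    {"type": "spatial",  "dimension_id": 1, "description": "VGA"},
    {"type": "spatial",  "dimension_id": 2, "description": "WVGA"},
    {"type": "temporal", "dimension_id": 0, "description": "7.5 Hz"},
    {"type": "temporal", "dimension_id": 1, "description": "15 Hz"},
    {"type": "temporal", "dimension_id": 2, "description": "30 Hz"},
    {"type": "temporal", "dimension_id": 3, "description": "60 Hz"},
    {"type": "quality",  "dimension_id": 0, "description": "CGS base quality"},
]

def extract(streams, scal_type, max_dimension):
    """Keep only substreams of the given type up to max_dimension; other types stay."""
    return [s for s in streams
            if s["type"] != scal_type or s["dimension_id"] <= max_dimension]

for s in extract(substreams, "temporal", 1):   # e.g. keep temporal layers up to 15 Hz
    print(s["type"], s["dimension_id"], s["description"])
```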
- a multiview scalable bitstream includes substreams of different views within one bitstream.
- a bitstream includes a left image and a right image.
- the scalable bitstream may include substreams related to encoded data of a multiview image and of a depth map. View scalability may also be divided into different dimensions according to each view.
- the scalable video bitstream may include substreams in which image sequences of multiple layers including different images are encoded so as to have at least one of temporal, spatial, image quality, and multi-view scalability.
- the image sequence 3010 of the first layer, the image sequence 3020 of the second layer, and the image sequence 3030 of the n-th (n is an integer) layer may be image sequences that differ in at least one of resolution, image quality, and viewpoint.
- an image sequence of one layer among the image sequence 3010 of the first layer, the image sequence 3020 of the second layer, and the image sequence 3030 of the nth (n is an integer) layer may be an image sequence of the base layer.
- the image sequences of the other layers may be image sequences of the enhancement layer.
- the image sequence 3010 of the first layer may include images of a first viewpoint
- the image sequence 3020 of the second layer may include images of a second viewpoint
- the image sequence 3030 of the n-th layer may include images of an n-th viewpoint.
- the image sequence 3010 of the first layer is a left view image of the base layer
- the image sequence 3020 of the second layer is a right view image of the base layer
- the image sequence 3030 of the n-th layer may be a right view image.
- the present invention is not limited to the above example, and the image sequences 3010, 3020, and 3030 having different scalable extension types may be image sequences having different image attributes.
- FIG. 3C is a diagram illustrating NAL units including encoded data of a multilayer video, according to various embodiments.
- the bitstream generator 18 outputs network abstraction layer (NAL) units including encoded multilayer video data and additional information.
- the video parameter set (hereinafter referred to as "VPS") includes information applied to the multilayer image sequences 3120, 3130, and 3140 included in the multilayer video.
- the NAL unit including the information about the VPS is called a VPS NAL unit 3110.
- the VPS NAL unit 3110 includes common syntax elements shared by the multilayer image sequences 3120, 3130, and 3140, information about operation points to prevent unnecessary information from being transmitted, and essential information about the operation points needed during the session negotiation phase, such as a profile or a level.
- the VPS NAL unit 3110 according to an embodiment includes scalability information related to a scalability identifier for implementing scalability in multilayer video.
- the scalability information is information for determining scalability applied to the multilayer image sequences 3120, 3130, and 3140 included in the multilayer video.
- in particular, the scalability information includes information about the scalability type and the scalability dimension applied to the multilayer image sequences 3120, 3130, and 3140 included in the multilayer video.
- scalability information may be directly obtained from a value of a layer identifier included in a NAL unit header.
- the layer identifier is an identifier for distinguishing a plurality of layers included in the VPS.
- the VPS may signal a layer identifier for each layer through a VPS extension.
- the layer identifier for each layer of the VPS may be included in the VPS NAL unit and signaled. For example, layer identifiers of NAL units belonging to a specific layer of the VPS may be included in the VPS NAL unit.
- in addition, the layer identifier of a NAL unit belonging to the VPS may be signaled through the VPS extension. Accordingly, in the encoding/decoding method according to various embodiments, scalability information about the layer of NAL units belonging to a corresponding VPS may be obtained by using the VPS and the layer identifier values of the corresponding NAL units.
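- as an illustration of obtaining scalability information from the layer identifier of a NAL unit, the sketch below assumes an HEVC-style two-byte NAL unit header; the header layout and the sample bytes are assumptions for illustration only:

```python
# Illustrative sketch of reading a layer identifier from a NAL unit header, assuming an
# HEVC-style two-byte header (forbidden bit, 6-bit type, 6-bit layer id, 3-bit temporal id).
# This is not the normative parsing process of the document.

def parse_nal_header(header: bytes):
    b0, b1 = header[0], header[1]
    return {
        "nal_unit_type": (b0 >> 1) & 0x3F,
        "nuh_layer_id":  ((b0 & 0x01) << 5) | (b1 >> 3),
        "temporal_id":   (b1 & 0x07) - 1,
    }

# The layer identifier can then be used to look up the scalability information
# (e.g. the view the layer belongs to) signalled through the VPS extension.
print(parse_nal_header(bytes([0x40, 0x09])))   # made-up example header bytes
```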
- FIG. 4A is a diagram for describing a disparity vector for interlayer prediction, according to various embodiments.
- referring to FIG. 4A, the interlayer video decoding apparatus 20 may perform inter-layer prediction that determines the first layer reference block 1403 of the first layer image 1402 corresponding to the current block 1401 of the second layer current picture 1400 by using a disparity vector DV, and disparity compensation may be performed using the first layer reference block 1403.
- in addition, for inter motion compensation, the interlayer video decoding apparatus 20 may obtain the reference motion vector MVref of the first layer reference block 1403 indicated by the disparity vector DV of the second layer current block 1401, and may predict the motion vector MVcur of the current block 1401 using the obtained reference motion vector MVref.
- in this case, the interlayer video decoding apparatus 20 may perform motion compensation between the second layer images Tn-1 and Tn using the predicted motion vector MVcur.
- the disparity vector may be transmitted from the encoding apparatus to the decoding apparatus through the bitstream as separate information, or may be predicted based on a depth image or neighboring blocks of the current block. That is, the predicted disparity vector may be a neighboring blocks disparity vector (NBDV) or a depth oriented NBDV (DoNBDV).
- the NBDV means the disparity vector of the current block predicted using disparity vectors obtained from neighboring blocks.
- a depth block corresponding to the current block may be determined using NBDV.
- the representative depth value among the depth values included in the determined depth block is determined, and the representative depth value is converted into a disparity vector using a camera parameter.
- DoNBDV means a disparity vector predicted using the converted disparity vector.
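- a hypothetical sketch of the DoNBDV derivation is shown below; the choice of the maximum of the four corner depth samples as the representative depth value and the linear depth-to-disparity conversion (including its scale, offset, and shift) are assumptions commonly used in 3D video coding, not values taken from this document:

```python
# Hypothetical sketch of refining a disparity vector using a depth block (DoNBDV-style).
# The corner-maximum rule and the conversion constants are illustrative assumptions.

def depth_to_disparity(depth, scale=128, offset=256, shift=8):
    # Linear conversion derived from camera parameters (the values are illustrative).
    return (depth * scale + offset) >> shift

def refine_to_donbdv(depth_block):
    # depth_block: block of the reference-view depth image located using the NBDV.
    h, w = len(depth_block), len(depth_block[0])
    corners = [depth_block[0][0], depth_block[0][w - 1],
               depth_block[h - 1][0], depth_block[h - 1][w - 1]]
    representative_depth = max(corners)   # representative depth value of the block
    return depth_to_disparity(representative_depth)

depth_block = [[90, 92],
               [95, 130]]
print("DoNBDV horizontal disparity:", refine_to_donbdv(depth_block))
```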
- FIG. 4B is a diagram for describing spatial neighboring block candidates for predicting a disparity vector, according to various embodiments.
- referring to FIG. 4B, in order to predict the disparity vector of the current block 1500 in the current picture 4000, the interlayer video decoding apparatus 20 may search spatial neighboring block candidates in a predetermined search order (eg, z-scan or raster scan).
- the neighboring block candidates searched here may be blocks spatially neighboring the current block 1500.
- the spatially neighboring blocks may be coding units or prediction units.
- for example, the interlayer video decoding apparatus 20 may use, as spatial neighboring block candidates for obtaining the disparity vector, a neighboring block A0 1510 located at the lower left of the current block 1500, a neighboring block A1 1520 located at the left of the current block 1500, a neighboring block B0 1530 located at the upper right of the current block 1500, a neighboring block B1 1540 located at the upper end of the current block 1500, and a neighboring block B2 1550 located at the upper left of the current block 1500. In order to obtain the disparity vector, the neighboring blocks at the predetermined positions may be searched in the order of A1 1520, B1 1540, B0 1530, A0 1510, and B2 1550.
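- the spatial search described above can be sketched as follows; the example disparity vectors are made up, and only the search order A1, B1, B0, A0, B2 is taken from the description:

```python
# Illustrative sketch of the spatial disparity-vector search: scan the neighboring blocks
# in the order A1, B1, B0, A0, B2 and take the first available disparity vector.

SEARCH_ORDER = ["A1", "B1", "B0", "A0", "B2"]

def predict_disparity(neighbors):
    """neighbors maps a position name to its disparity vector, or None if it has none."""
    for name in SEARCH_ORDER:
        dv = neighbors.get(name)
        if dv is not None:
            return name, dv
    return None, None   # no spatial candidate found; fall back to other methods

neighbors = {"A1": None, "B1": None, "B0": (5, 0), "A0": (4, 1), "B2": None}
print(predict_disparity(neighbors))   # -> ('B0', (5, 0)), the first available candidate
```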
- FIG. 4C is a diagram for describing a temporal neighboring block candidate for predicting a disparity vector, according to various embodiments.
- referring to FIG. 4C, for inter prediction of the current block 1500 included in the current picture 4000, the interlayer video decoding apparatus 20 may include, in the temporal neighboring block candidates, at least one of a co-located block Col 1560 included in the reference image 4100 and blocks around the co-located block Col 1560.
- for example, the lower right block BR 1570 of the co-located block Col 1560 may be included in the temporal prediction candidates.
- a block used for determining a temporal prediction candidate may be a coding unit or a prediction unit.
- FIG. 5 is a diagram for describing a process of determining, by an interlayer video decoding apparatus, a sample included in a reference block by using a disparity vector to determine a reference block for referring to motion information according to various embodiments.
- when the interlayer video decoding apparatus 20 determines to perform interlayer prediction on the current block 5010 of the view 1 image 5000, it may determine the disparity vector 5020 of the current block 5010. For example, the interlayer video decoding apparatus 20 may derive the disparity vector 5020 of the current block 5010 from neighboring blocks, or may obtain the disparity vector 5020 of the current block 5010 from the bitstream.
- the interlayer video decoding apparatus 20 may determine a block 5110 of the viewpoint 0 image 5100 corresponding to the current block 5010 using the disparity vector 5020.
- the viewpoint 0 image 5100 may mean an image of another viewpoint at the same time zone as the viewpoint 1 image 5000.
- to determine a reference block, the interlayer video decoding apparatus 20 may determine the pos in sample 5200 located inside the block 5110 of the viewpoint 0 image 5100, and a block including the determined sample 5200 may be determined as the reference block.
- the reference block including the determined sample 5200 may be the same as the block 5110.
- the interlayer video decoding apparatus 20 may determine a block 5201 including motion information as a reference block without determining the block 5110 as a reference block.
- the interlayer video decoding apparatus 20 may determine units having a predetermined size for storing motion information from the viewpoint 0 image 5100 (hereinafter, referred to as a motion information unit).
- the interlayer video decoding apparatus 20 may determine, as a reference block, a motion information unit including a sample among the determined motion information units.
- the interlayer video decoding apparatus 20 may determine one or more prediction units.
- the interlayer video decoding apparatus 20 may determine a prediction unit including a sample among the determined prediction units as a reference block.
- the interlayer video decoding apparatus 20 may obtain the motion vector of the reference block, determine the motion vector of the current block 5010 using the motion vector of the reference block, and perform motion compensation on the current block using the motion vector.
- the interlayer video decoding apparatus 20 may determine the motion vector of the reference block as the inter-view motion prediction candidate.
- the interlayer video decoding apparatus 20 may determine the prediction direction and the reference image index of the current block together with the motion vector.
- the interlayer video decoding apparatus 20 determines one merge candidate from among a plurality of merge candidates including a merge candidate (the inter-view motion prediction merge candidate) associated with the inter-view motion prediction candidate, and may perform prediction on the current block using the motion information related to the determined merge candidate.
- when the inter-view motion prediction merge candidate is determined, prediction on the current block may be performed using the motion information related to the inter-view motion prediction merge candidate (the inter-view motion prediction candidate and the prediction direction and reference image index related thereto).
- the interlayer video decoding apparatus 20 may determine the motion information of the current block by using the motion information of the inter-view motion prediction candidate, and perform motion compensation on the current block by using the motion information of the current block.
- alternatively, to determine a reference block, the interlayer video decoding apparatus 20 may determine one sample among the samples 5301, 5302, 5303, 5304, 5305, 5306, 5307, and 5308 that are in contact with the block 5110 of the viewpoint 0 image 5100, and a block including the determined sample (hereinafter referred to as a reference sample) may be determined as the reference block.
- for example, the interlayer video decoding apparatus 20 may determine, as the reference block, the motion information unit including the reference sample among the motion information units of the view 0 image 5100, or may determine, as the reference block, the prediction unit including the reference sample among the prediction units of the view 0 image 5100.
- the interlayer video decoding apparatus 20 may obtain the motion vector of the reference block, determine the motion vector of the current block 5010 using the motion vector of the reference block, and perform motion compensation using the motion vector.
- the interlayer video decoding apparatus 20 may determine the motion vector of the reference block as the shifted inter-view motion prediction candidate.
- the interlayer video decoding apparatus 20 may determine the prediction direction and the reference image index of the current block together with the motion vector.
- the interlayer video decoding apparatus 20 determines one merge candidate from among a plurality of merge candidates including a merge candidate (the shifted inter-view motion prediction merge candidate) associated with the shifted inter-view motion prediction candidate, and may perform prediction on the current block using the motion information related to the determined merge candidate.
- when the interlayer video decoding apparatus 20 determines the shifted inter-view motion prediction merge candidate from among the plurality of merge candidates, it may perform prediction on the current block using the motion information of the shifted inter-view motion prediction candidate.
- that is, the interlayer video decoding apparatus 20 may determine the motion information of the current block using the motion information of the shifted inter-view motion prediction candidate, and perform motion compensation on the current block using the motion information of the current block.
- one sample among the samples 5301, 5302, 5303, 5304, 5305, 5306, 5307, and 5308 in contact with the block 5110 of the viewpoint 0 image 5100 may be predetermined.
- for example, the interlayer video decoding apparatus 20 may determine in advance the pos 1 sample 5301 among the samples 5301, 5302, 5303, 5304, 5305, 5306, 5307, and 5308 that are in contact with the block 5110 of the viewpoint 0 image 5100, determine the block 5202 including the pos 1 sample 5301 as the reference block, and determine the motion vector of the reference block as the shifted inter-view motion prediction candidate.
- the interlayer video decoding apparatus 20 may determine the prediction direction and the reference image index of the current block by using the prediction direction and the reference image index of the reference block together with the motion vector.
- the locations of the samples 5301 to 5308 may be predetermined based on the block 5110.
- for example, the interlayer video decoding apparatus 20 may determine the pos 1 sample 5301 in contact with the lower right side of the block 5110 to derive the lower right block of the block 5110 as the reference block, and a block including the pos 1 sample 5301 may be determined as the reference block.
- the interlayer video decoding apparatus 20 may determine the pos 2 sample 5302 contacting the right side of the block 5110 to derive the right block of the block 5110 as the reference block, and the block including the pos 2 sample 5302 may be determined as the reference block.
- the interlayer video decoding apparatus 20 may determine the pos 3 sample 5303 contacting the upper right side of the block 5110 to derive the upper right block of the block 5110 as the reference block, and the block including the pos 3 sample 5303 may be determined as the reference block.
- the interlayer video decoding apparatus 20 may determine the pos 4 sample 5304 adjacent to the upper side of the block 5110 to derive the upper block of the block 5110 as the reference block, and a block including the pos 4 sample 5304 may be determined as the reference block.
- the interlayer video decoding apparatus 20 may determine the pos 5 sample 5305 in contact with the upper left side of the block 5110 to derive the upper left block of the block 5110 as the reference block, and a block including the pos 5 sample 5305 may be determined as the reference block.
- the interlayer video decoding apparatus 20 may determine the pos 6 sample 5306 adjacent to the left side of the block 5110 to derive the left block as the reference block, and may determine the block including the pos 6 sample 5306 as the reference block.
- the interlayer video decoding apparatus 20 may determine the pos 7 sample 5307 in contact with the lower left side of the block 5110 to derive the lower left block as the reference block, and may determine the block including the pos 7 sample 5307 as the reference block.
- the interlayer video decoding apparatus 20 may determine the pos 8 sample 5308 adjacent to the bottom of the block 5110 to derive the lower block as the reference block, and may determine the block including the pos 8 sample 5308 as the reference block.
- the positions of the pos 2 sample 5302, the pos 4 sample 5304, the pos 6 sample 5306, and the pos 8 sample 5308 may be predetermined to be in line with the pos in sample 5300 located inside the block 5110.
- the position of the pos in sample 5300 located inside the block 5110 may be the center of the block 5110, but is not limited thereto and may be another point within the block; in this case, the positions of the pos 2 sample 5302, the pos 4 sample 5304, the pos 6 sample 5306, and the pos 8 sample 5308 may be changed according to the position of the pos in sample 5300.
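- the eight sample positions pos 1 to pos 8 around the block can be illustrated with the following sketch, assuming the pos in sample is located at the block center; the coordinate convention (samples just outside the block boundary) is an assumption for illustration:

```python
# Illustrative sketch of the eight samples touching the outside of the first layer block,
# with pos2/pos4/pos6/pos8 in line with the interior pos_in sample (assumed block centre).

def boundary_samples(x, y, w, h):
    cx, cy = x + w // 2, y + h // 2          # pos_in sample, assumed at the block centre
    return {
        "pos1": (x + w, y + h),              # touches the lower-right corner
        "pos2": (x + w, cy),                 # right side, in line with pos_in
        "pos3": (x + w, y - 1),              # upper-right corner
        "pos4": (cx, y - 1),                 # top side, in line with pos_in
        "pos5": (x - 1, y - 1),              # upper-left corner
        "pos6": (x - 1, cy),                 # left side, in line with pos_in
        "pos7": (x - 1, y + h),              # lower-left corner
        "pos8": (cx, y + h),                 # bottom side, in line with pos_in
    }

print(boundary_samples(x=32, y=16, w=16, h=16))
```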
- the interlayer video decoding apparatus 20 may obtain motion information of the reference block and determine the motion information of the current block by using the obtained motion information.
- the interlayer video decoding apparatus 20 may determine a block of a reference image (not shown) at a different time from the viewpoint 1 image 5000, and determine a prediction block of the current block by using the block of the reference image.
- the interlayer video decoding apparatus 20 may obtain a residual block of the current block from the bitstream, and reconstruct the current block using the prediction block of the current block and the residual block of the current block.
- to determine the reference block, the interlayer video decoding apparatus 20 may determine one sample among the samples 5301, 5302, 5303, 5304, 5305, 5306, 5307, and 5308 adjacent to the block 5110 of the viewpoint 0 image 5100.
- for example, the interlayer video decoding apparatus 20 may determine the motion vectors related to the blocks including the samples 5301, 5302, 5303, 5304, 5305, 5306, 5307, and 5308 according to a predetermined scan order, and may determine one of the samples 5301, 5302, 5303, 5304, 5305, 5306, 5307, and 5308 based on the availability of the determined motion vectors.
- alternatively, the interlayer video decoding apparatus 20 may obtain, from the bitstream, information indicating one of the samples, and may determine one of the samples 5301, 5302, 5303, 5304, 5305, 5306, 5307, and 5308 using the obtained information.
- alternatively, the interlayer video decoding apparatus 20 may compare the motion information of the blocks including the samples 5301, 5302, 5303, 5304, 5305, 5306, 5307, and 5308 according to a predetermined scan order, and may determine a sample included in the block having the optimal motion information.
- alternatively, the interlayer video decoding apparatus 20 may determine one of the samples 5301, 5302, 5303, 5304, 5305, 5306, 5307, and 5308 based on the positional relationship between the current block 5010 and the block 5110.
- for example, when the position of the block 5110 in the viewpoint 0 image 5100 is at the lower right of the position of the current block 5010 in the viewpoint 1 image 5000, the interlayer video decoding apparatus 20 may determine the sample 5301 located at the lower right side of the block 5110.
- the process of determining a reference block of an image of a different viewpoint has been described above, but the present invention is not limited thereto; in the process of determining a reference block of an image of a different time within the same viewpoint using a motion vector, an internal sample of a block co-located with the current block 5010 or a sample contacting the block at the same position as the current block 5010 may be determined, and a block including the determined sample may be determined as the reference block.
- the process in which the interlayer video decoding apparatus 20 determines one of the samples 5301, 5302, 5303, 5304, 5305, 5306, 5307, and 5308 has been described, but the present invention is not limited to the samples 5301, 5302, 5303, 5304, 5305, 5306, 5307, and 5308; other samples in contact with the block 5110 may be used, and those skilled in the art can readily understand that only some of the samples 5301, 5302, 5303, 5304, 5305, 5306, 5307, and 5308 may be used.
- FIG. 6 is a diagram for describing a process of determining, by an interlayer video decoding apparatus, a sample included in a reference block by using a disparity vector to determine a reference block for referring to motion information, according to various embodiments.
- the interlayer video decoding apparatus 20 determines the shifted inter-view motion prediction candidate.
- the interlayer video decoding apparatus 20 may determine the disparity vector 6020 of the current block 6010 in the view 1 image 6000 using the disparity vector of the neighboring block.
- the interlayer video decoding apparatus 20 may determine the block 6110 of the viewpoint 0 image 6100 indicated by the disparity vector by using the disparity vector 6020 of the current block 6010.
- to determine a reference block, the interlayer video decoding apparatus 20 does not determine the pos 9 sample 6200 separated from the block 6110, but may determine the pos 1 sample 6201 directly in contact with the block 6110.
- the interlayer video decoding apparatus 20 may determine the position of the pos 1 sample 6201 using the following equations (1) and (2).
- xRefFull may indicate the x-coordinate position of the pos 1 sample 6201 and yRefFull may indicate the y-coordinate position of the pos 1 sample.
- xPb may indicate the x-coordinate position of the current block (its upper-left pixel), and yPb may indicate the y-coordinate position of the current block (its upper-left pixel).
- nPbW may indicate the width of the current block
- nPbH may indicate the height of the current block.
- DVx may mean an x component regarding a disparity vector having 1/4 pixel accuracy (ie, fractional pixel accuracy)
- DVy may mean a y component regarding a disparity vector having 1/4 pixel accuracy.
- since DVx and DVy are components of a disparity vector with 1/4 pixel accuracy, they may have fractional values, but it is assumed here that they are expressed in integer form by being multiplied by four. In this case, xRefFull and yRefFull may indicate the position of an integer pixel.
- the interlayer video decoding apparatus 20 may additionally determine the position of a sample to determine the reference block.
- the interlayer video decoding apparatus 20 may additionally determine the position of the sample by using the position of the pos 1 sample.
- the interlayer video decoding apparatus 20 may determine the position (xRef, yRef) of the sample by the following equations (3) and (4).
- PicWidthInSamplesL means the width of the whole image based on luma samples
- PicHeightInSamplesL means the height of the whole image based on luma samples
- N may mean the log2 value of the size of the block unit in which motion information is stored.
- N may be 3.
- the Clip3(x, y, z) function may be a function that outputs x when z < x, outputs y when z > y, and outputs z otherwise.
- the interlayer video decoding apparatus 20 may obtain motion information of the block 6202 including samples located at (xRef, yRef) and determine the motion information of the current block using the obtained motion information.
- a block including a sample located at (xRef, yRef) may be a prediction unit including a sample among prediction units determined from the viewpoint 0 image 6100.
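- since equations (1) to (4) themselves are not reproduced in this text, the sketch below is only a plausible reconstruction based on the variable definitions above and on common 3D-HEVC-style derivations; the rounding of the quarter-pel disparity vector and the alignment to the 2^N motion-information grid are assumptions:

```python
# Plausible reconstruction (an assumption, not the document's own equations) of how the
# reference sample position could be derived from the variables defined above.

def clip3(x, y, z):
    # Outputs x when z < x, y when z > y, and z otherwise, as described above.
    return x if z < x else (y if z > y else z)

def reference_sample(xPb, yPb, nPbW, nPbH, DVx, DVy,
                     PicWidthInSamplesL, PicHeightInSamplesL, N=3):
    # (1), (2): sample just outside the lower-right corner of the block pointed to by the
    # disparity vector (DVx, DVy are quarter-pel values stored as integers).
    xRefFull = xPb + nPbW + ((DVx + 2) >> 2)
    yRefFull = yPb + nPbH + ((DVy + 2) >> 2)

    # (3), (4): clip to the picture and align to the 2^N motion-information storage grid.
    xRef = clip3(0, PicWidthInSamplesL - 1, (xRefFull >> N) << N)
    yRef = clip3(0, PicHeightInSamplesL - 1, (yRefFull >> N) << N)
    return xRef, yRef

print(reference_sample(xPb=64, yPb=32, nPbW=16, nPbH=16, DVx=18, DVy=-6,
                       PicWidthInSamplesL=1920, PicHeightInSamplesL=1080))
```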
- FIG. 7 is a diagram for describing a process of determining, by an interlayer video decoding apparatus 20, a reference block according to various prediction methods, according to various embodiments.
- referring to FIG. 7, the interlayer video decoding apparatus 20 may determine one of the samples 7015, 7016, 7017, 7018, and 7019 included in the neighboring blocks of the block 7011, determine a block including each determined sample as a reference block, obtain motion information of the reference block, and determine motion information of the block 7031 currently to be decoded using the motion information of the reference block.
- in addition, the interlayer video decoding apparatus 20 may derive the motion information of the block 7031 to be currently decoded using the motion information of a neighboring block; to decode it using the derived motion information, a block 7021 included in an image 7020 of the same viewpoint as the first image 7010 but of a time different from that of the first image 7010 may be determined, and one of the internal pos in (COL) sample 7022 of the block or the pos 1 (BR) sample 7203 in contact with the block may be determined.
- the interlayer video decoding apparatus 20 may determine a block including one sample, and determine the motion information of the block 7071 using motion information of the block including one sample.
- the interlayer video decoding apparatus 20 may fetch the motion information stored in the memory from the memory in the process of obtaining the motion information of the block including one sample.
- the second image 7100 may have a viewpoint different from that of the first image 7010, may be an image having a different time from that of the first image 7010, and may be an image having the same time as the image 7020.
- the interlayer video decoding apparatus 20 determines the disparity vector 7102 of the current block 7101 in the second image 7100, and may determine the block 7021 included in the image 7020 by using the disparity vector 7102 of the current block 7101.
- the interlayer video decoding apparatus 20 may determine a sample 7024 away from the block 7021, and may determine a block 7031 including the sample 7024 as a reference block.
- the interlayer video decoding apparatus 20 may obtain motion information of the block 7031, and determine motion information of the current block 7101 in the second image 7100 using the motion information of the block 7030.
- since the sample 7024 is different from the sample 7203 previously used when decoding the first image, the block 7031 from which the motion information is obtained may also be different from the block used when decoding the first image. Therefore, the motion information of the block 7030 previously fetched from the memory cannot be reused, and the motion information of the block 7031 must be additionally fetched.
- when the interlayer video decoding apparatus 20 fetches motion information twice in this way, memory complexity may increase. However, when it is determined that the same sample 7003 is used both when decoding the first image 7010 and when decoding the second image 7100, the block 7030 including the same sample 7003 is determined in the same manner, and therefore the motion information of the same block 7031 is used. In that case, the interlayer video decoding apparatus 20 may determine the motion information of the current block 7101 of the second image 7100 using the motion information already fetched when decoding the first image 7010. That is, since the interlayer video decoding apparatus 20 does not additionally fetch motion information, memory complexity may be reduced.
- alternatively, the interlayer video decoding apparatus 20 may determine the sample 7023 instead of the sample 7024, and determine the block 7030, which is closer to the block 7021, as the reference block instead of the block 7031 containing the sample 7024. Because the motion information of a reference block close to the block 7021 is more likely to resemble the motion information of the current block than the motion information of the block 7031 farther from the block 7021, determining the motion information of the current block in this way may improve the efficiency of encoding and decoding.
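- The memory-access argument above can be made concrete with a small sketch: if the sample position used when decoding the first image and the one used when decoding the second image align to the same motion-storage block, the second access can be served from already-fetched data. The class, names, and cache layer below are illustrative assumptions for this sketch, not part of the described apparatus.

```python
class MotionInfoMemory:
    """Illustrative stand-in for the decoded-picture motion-information memory."""

    def __init__(self, stored_motion):
        self.stored_motion = stored_motion   # {(x_aligned, y_aligned): motion info}
        self.fetch_count = 0
        self.cache = {}

    def fetch(self, x, y, n=3):
        # Align the sample position to the 2^N motion-storage grid first.
        key = ((x >> n) << n, (y >> n) << n)
        if key not in self.cache:
            self.fetch_count += 1            # one real memory access per aligned block
            self.cache[key] = self.stored_motion.get(key)
        return self.cache[key]

# If both layers use a sample position falling in the same 8x8 block,
# the second request is a cache hit and no additional fetch occurs.
memory = MotionInfoMemory({(8, 16): "mv_of_block"})
memory.fetch(10, 17)        # fetch while decoding the first image
memory.fetch(12, 20)        # same aligned block -> reused
print(memory.fetch_count)   # 1
```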
- in the interlayer video encoding apparatus 10 according to various embodiments, blocks into which video data is divided are split into coding units having a tree structure, and as described above, coding units, prediction units, and transformation units are sometimes used for inter-layer prediction or inter prediction.
- hereinafter, a video encoding method and apparatus and a video decoding method and apparatus based on coding units and transformation units having a tree structure according to various embodiments will be described with reference to FIGS. 8 to 20.
- the encoding/decoding process for the first layer images and the encoding/decoding process for the second layer images are performed separately. That is, when inter-layer prediction is performed on a multilayer video, the encoding/decoding results of the single-layer videos may be cross-referenced, but a separate encoding/decoding process is performed for each single-layer video.
- the video encoding process and the video decoding process based on coding units having a tree structure described below with reference to FIGS. 8 to 20 are video encoding and decoding processes for single-layer video, and thus inter prediction and motion compensation are described in detail. However, as described above with reference to FIGS. 1A through 7, inter-layer prediction and compensation between base view images and second layer images are performed to encode/decode a video stream.
- the encoder 12 may perform video encoding for each single layer video.
- the interlayer video encoding apparatus 10 may include as many video encoding apparatuses 100 of FIG. 8 as the number of layers of the multilayer video, and each video encoding apparatus 100 may be controlled to encode the single-layer video allocated to it.
- the interlayer video encoding apparatus 10 may perform inter-view prediction using encoding results of separate single views of each video encoding apparatus 100. Accordingly, the encoder 12 of the interlayer video encoding apparatus 10 may generate a base view video stream and a second layer video stream that contain encoding results for each layer.
- in order for the decoder 22 of the interlayer video decoding apparatus 20 according to various embodiments to decode a multilayer video based on coding units having a tree structure, the received first layer video stream and second layer video stream are decoded for each layer.
- to this end, the decoder 22 may include as many video decoding apparatuses 200 of FIG. 9 as the number of layers of the multilayer video, and each video decoding apparatus 200 performs decoding of the single-layer video allocated to it.
- the interlayer video decoding apparatus 20 may perform interlayer compensation by using a decoding result of a separate single layer of each video decoding apparatus 200. Accordingly, the decoder 22 of the interlayer video decoding apparatus 20 may generate first layer images and second layer images reconstructed for each layer.
- FIG. 8 is a block diagram of a video encoding apparatus 100 based on coding units having a tree structure, according to an embodiment.
- the video encoding apparatus 100 including video prediction based on coding units having a tree structure includes a coding unit determiner 120 and an output unit 130.
- the video encoding apparatus 100 that includes video prediction based on coding units having a tree structure is abbreviated as “video encoding apparatus 100”.
- the coding unit determiner 120 may partition the current picture based on a maximum coding unit that is a coding unit having a maximum size for the current picture of the image. If the current picture is larger than the maximum coding unit, image data of the current picture may be split into at least one maximum coding unit.
- the maximum coding unit may be a data unit having a size of 32x32, 64x64, 128x128, 256x256, or the like, and may be a square data unit whose horizontal and vertical sizes are powers of two.
- Coding units may be characterized by a maximum size and depth.
- the depth indicates the number of times the coding unit is spatially divided from the maximum coding unit, and as the depth increases, the coding unit for each depth may be split from the maximum coding unit to the minimum coding unit.
- the depth of the largest coding unit is the highest depth and the minimum coding unit may be defined as the lowest coding unit.
- as the depth increases from the maximum coding unit, the size of the coding unit for each depth decreases; thus, a coding unit of a higher depth may include a plurality of coding units of lower depths.
- the image data of the current picture may be divided into maximum coding units according to the maximum size of the coding unit, and each maximum coding unit may include coding units divided by depths. Since the maximum coding unit is divided according to depths according to various embodiments, image data of a spatial domain included in the maximum coding unit may be hierarchically classified according to depth.
- the maximum depth and the maximum size of the coding unit that limit the total number of times of hierarchically dividing the height and the width of the maximum coding unit may be preset.
- the coding unit determiner 120 encodes at least one split region obtained by splitting the region of the maximum coding unit for each depth, and determines, for each of the at least one split region, the depth at which the final encoding result is to be output. That is, the coding unit determiner 120 encodes the image data in deeper coding units for each maximum coding unit of the current picture, selects the depth at which the smallest encoding error occurs, and determines it as the final depth. The determined final depth and the image data for each maximum coding unit are output to the output unit 130.
- Image data in the largest coding unit is encoded based on coding units according to depths according to at least one depth less than or equal to the maximum depth, and encoding results based on the coding units for each depth are compared. As a result of comparing the encoding error of the coding units according to depths, a depth having the smallest encoding error may be selected. At least one final depth may be determined for each maximum coding unit.
- as the depth increases, the coding unit is split hierarchically and the number of coding units increases.
- a coding error of each data is measured, and whether or not division into a lower depth is determined. Therefore, even in the data included in one largest coding unit, since the encoding error for each depth is different according to the position, the final depth may be differently determined according to the position. Accordingly, one or more final depths may be set for one maximum coding unit, and data of the maximum coding unit may be partitioned according to coding units of one or more final depths.
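- The per-depth decision described above can be sketched as a recursive comparison: encode the region as a single coding unit of the current depth, encode it split into four lower-depth coding units, and keep whichever yields the smaller error. The cost function below is a placeholder assumption for the rate-distortion measurement performed by the coding unit determiner.

```python
def split_into_four(block):
    # Split a 2D block (list of rows) into its four equal quadrants.
    h, w = len(block) // 2, len(block[0]) // 2
    return [[row[:w] for row in block[:h]], [row[w:] for row in block[:h]],
            [row[:w] for row in block[h:]], [row[w:] for row in block[h:]]]

def best_depth(block, depth, max_depth, encode_cost):
    """Return (min_error, decision_tree) for one region.

    encode_cost(block, depth) stands in for the error measured when the region
    is coded as a single coding unit of this depth; the real measurement is a
    rate-distortion cost computed by the coding unit determiner."""
    cost_here = encode_cost(block, depth)
    if depth == max_depth or len(block) <= 1:
        return cost_here, {"split": False, "depth": depth}

    # Error when the region is instead coded as four lower-depth coding units.
    children = [best_depth(sub, depth + 1, max_depth, encode_cost)
                for sub in split_into_four(block)]
    cost_split = sum(cost for cost, _ in children)

    if cost_here <= cost_split:
        return cost_here, {"split": False, "depth": depth}
    return cost_split, {"split": True, "children": [d for _, d in children]}

# Toy usage: sum of absolute sample values as a stand-in cost, on an 8x8 block.
block = [[0] * 8 for _ in range(8)]
cost = lambda b, d: sum(abs(s) for row in b for s in row)
print(best_depth(block, 0, 3, cost))
```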
- the coding unit determiner 120 may determine coding units having a tree structure included in the current maximum coding unit.
- the coding units according to a tree structure according to various embodiments include coding units having a depth determined as a final depth among all deeper coding units included in the current maximum coding unit.
- the coding unit of the final depth may be determined hierarchically according to the depth in the same region within the maximum coding unit, and may be independently determined for the other regions.
- the final depth for the current area can be determined independently of the final depth for the other area.
- the maximum depth according to various embodiments is an index related to the number of divisions from the maximum coding unit to the minimum coding unit.
- the first maximum depth according to various embodiments may indicate the total number of divisions from the maximum coding unit to the minimum coding unit.
- the second maximum depth according to various embodiments may indicate the total number of depth levels from the maximum coding unit to the minimum coding unit. For example, when the depth of the maximum coding unit is 0, the depth of the coding unit obtained by splitting the maximum coding unit once may be set to 1, and the depth of the coding unit split twice may be set to 2. In this case, if the coding unit split four times from the maximum coding unit is the minimum coding unit, depth levels 0, 1, 2, 3, and 4 exist, so the first maximum depth may be set to 4 and the second maximum depth may be set to 5.
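- As a numeric illustration of the two depth conventions above, the following sketch assumes a 64x64 maximum coding unit split down to a 4x4 minimum coding unit (these sizes are assumptions chosen to match the four-split example) and computes the per-depth sizes together with both maximum-depth values.

```python
def depth_levels(max_cu_size=64, min_cu_size=4):
    # Sizes per depth: each split halves the height and the width.
    sizes, size = [], max_cu_size
    while size >= min_cu_size:
        sizes.append(size)
        size //= 2
    num_splits = len(sizes) - 1       # first maximum depth: number of splits
    num_depth_levels = len(sizes)     # second maximum depth: number of depth levels
    return sizes, num_splits, num_depth_levels

print(depth_levels())  # ([64, 32, 16, 8, 4], 4, 5)
```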
- Predictive encoding and transformation of the largest coding unit may be performed. Similarly, prediction encoding and transformation are performed based on depth-wise coding units for each maximum coding unit and for each depth less than or equal to the maximum depth.
- encoding including prediction encoding and transformation should be performed on all the coding units for each depth generated as the depth deepens.
- the prediction encoding and the transformation will be described based on the coding unit of the current depth among at least one maximum coding unit.
- the video encoding apparatus 100 may variously select a size or shape of a data unit for encoding image data.
- the encoding of the image data is performed through prediction encoding, transforming, entropy encoding, and the like.
- the same data unit may be used in every step, or the data unit may be changed in steps.
- the video encoding apparatus 100 may select not only a coding unit for encoding the image data, but also a data unit different from the coding unit in order to perform predictive encoding of the image data in the coding unit.
- prediction encoding may be performed based on coding units of a final depth, that is, coding units that are no longer split, according to various embodiments.
- a prediction unit for prediction is determined from the coding unit that is no longer split.
- the prediction unit may include the coding unit itself and partitions obtained by splitting at least one of the height and the width of the coding unit.
- the partition is a data unit in which a coding unit is divided, and may be a partition having the same size as the coding unit.
- the partition mode according to various embodiments may selectively include not only symmetric partitions obtained by splitting the height or width of the prediction unit in a symmetric ratio, but also partitions split in an asymmetric ratio such as 1:n or n:1, partitions split into geometric forms, partitions of arbitrary shapes, and the like.
- the prediction mode of the prediction unit may be at least one of an intra mode, an inter mode, and a skip mode.
- the intra mode and the inter mode may be performed on partitions having sizes of 2N ⁇ 2N, 2N ⁇ N, N ⁇ 2N, and N ⁇ N.
- the skip mode may be performed only for partitions having a size of 2N ⁇ 2N.
- the encoding may be performed independently for each prediction unit within the coding unit to select a prediction mode having the smallest encoding error.
- the video encoding apparatus 100 may perform conversion of image data of a coding unit based on not only a coding unit for encoding image data, but also a data unit different from the coding unit.
- the transformation may be performed based on a transformation unit having a size smaller than or equal to the coding unit.
- the transformation unit may include a data unit for intra mode and a transformation unit for inter mode.
- similarly to the coding unit having a tree structure according to various embodiments, the transformation unit in the coding unit may also be recursively split into smaller transformation units, so that the residual data of the coding unit may be partitioned according to transformation units having a tree structure according to transformation depths.
- a transformation depth indicating the number of times the height and the width of the coding unit are split to reach the transformation unit may be set. For example, if the size of the transformation unit of a current coding unit of size 2Nx2N is 2Nx2N, the transformation depth may be 0; if the size of the transformation unit is NxN, the transformation depth may be 1; and if the size of the transformation unit is N/2xN/2, the transformation depth may be 2. That is, transformation units having a tree structure may also be set according to transformation depths.
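- The relationship just described amounts to halving the side of the coding unit once per transformation depth level; the sketch below simply assumes that rule for a 2Nx2N coding unit.

```python
def transform_unit_size(coding_unit_size, transform_depth):
    # transform depth 0 -> 2Nx2N, 1 -> NxN, 2 -> N/2 x N/2, ...
    return coding_unit_size >> transform_depth

# For a 2Nx2N coding unit of size 32:
print([transform_unit_size(32, d) for d in (0, 1, 2)])  # [32, 16, 8]
```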
- the split information for each depth requires not only depth but also prediction related information and transformation related information. Accordingly, the coding unit determiner 120 may determine not only the depth that generates the minimum coding error, but also a partition mode in which the prediction unit is divided into partitions, a prediction mode for each prediction unit, and a size of a transformation unit for transformation.
- a method of determining a coding unit, a prediction unit / partition, and a transformation unit according to a tree structure of a maximum coding unit according to various embodiments will be described in detail with reference to FIGS. 9 to 19.
- the coding unit determiner 120 may measure a coding error of coding units according to depths using a Lagrangian Multiplier-based rate-distortion optimization technique.
- the output unit 130 outputs the image data and the split information according to depths of the maximum coding unit, which are encoded based on at least one depth determined by the coding unit determiner 120, in a bitstream form.
- the encoded image data may be a result of encoding residual data of the image.
- the split information for each depth may include depth information, partition mode information of a prediction unit, prediction mode information, split information of a transformation unit, and the like.
- the final depth information may be defined using depth-specific segmentation information indicating whether to encode in a coding unit of a lower depth rather than encoding the current depth. If the current depth of the current coding unit is a depth, since the current coding unit is encoded in a coding unit of the current depth, split information of the current depth may be defined so that it is no longer divided into lower depths. On the contrary, if the current depth of the current coding unit is not the depth, encoding should be attempted using the coding unit of the lower depth, and thus split information of the current depth may be defined to be divided into coding units of the lower depth.
- encoding is performed on the coding unit divided into the coding units of the lower depth. Since at least one coding unit of a lower depth exists in the coding unit of the current depth, encoding may be repeatedly performed for each coding unit of each lower depth, and recursive coding may be performed for each coding unit of the same depth.
- coding units having a tree structure are determined in one largest coding unit and at least one split information should be determined for each coding unit of a depth, at least one split information may be determined for one maximum coding unit.
- in addition, since the data of the maximum coding unit is hierarchically partitioned according to depths, the depth may differ depending on the position, and thus depth and split information may be set for the data.
- the output unit 130 may allocate encoding information about a corresponding depth and an encoding mode to at least one of a coding unit, a prediction unit, and a minimum unit included in the maximum coding unit.
- a minimum unit is a square data unit of a size obtained by dividing a minimum coding unit, which is a lowest depth, into four segments.
- the minimum unit may be a square data unit having a maximum size that may be included in all coding units, prediction units, partition units, and transformation units included in the maximum coding unit.
- the encoding information output through the output unit 130 may be classified into encoding information according to depth coding units and encoding information according to prediction units.
- the encoding information for each coding unit according to depth may include prediction mode information and partition size information.
- the encoding information transmitted for each prediction unit may include information about an estimation direction of the inter mode, information about a reference image index of the inter mode, information about a motion vector, information about a chroma component of the intra mode, information about an interpolation method of the intra mode, and the like.
- Information about the maximum size and information about the maximum depth of the coding unit defined for each picture, slice, or GOP may be inserted into a header, a sequence parameter set, or a picture parameter set of the bitstream.
- the information on the maximum size of the transform unit and the minimum size of the transform unit allowed for the current video may also be output through a header, a sequence parameter set, a picture parameter set, or the like of the bitstream.
- the output unit 130 may encode and output reference information, prediction information, slice type information, and the like related to prediction.
- a coding unit according to depths is a coding unit having a size in which a height and a width of a coding unit of one layer higher depth are divided by half. That is, if the size of the coding unit of the current depth is 2Nx2N, the size of the coding unit of the lower depth is NxN.
- the current coding unit having a size of 2N ⁇ 2N may include up to four lower depth coding units having a size of N ⁇ N.
- the video encoding apparatus 100 may configure coding units having a tree structure by determining, for each maximum coding unit, a coding unit of an optimal shape and size based on the size and maximum depth of the maximum coding unit determined in consideration of the characteristics of the current picture. In addition, since each maximum coding unit may be encoded using various prediction modes and transformation methods, an optimal encoding mode may be determined in consideration of the image characteristics of coding units of various image sizes.
- the video encoding apparatus may increase the maximum size of the coding unit in consideration of the size of the image and adjust the coding unit in consideration of the image characteristic, thereby increasing image compression efficiency.
- the interlayer video encoding apparatus 10 described above with reference to FIG. 1A may include as many video encoding apparatuses 100 as the number of layers for encoding single layer images for each layer of a multilayer video.
- the first layer encoder 12 may include one video encoding apparatus 100
- the second layer encoder 16 may include as many video encoding apparatuses 100 as the number of second layers.
- the coding unit determiner 120 determines, for each maximum coding unit, a prediction unit for inter-image prediction for each coding unit having a tree structure, and may perform inter-image prediction for each prediction unit.
- the coding unit determiner 120 determines coding units and prediction units having a tree structure for each maximum coding unit, and may perform inter prediction for each prediction unit.
- the video encoding apparatus 100 may encode the luminance difference in order to compensate for the luminance difference between the first layer image and the second layer image. However, whether to perform luminance compensation may be determined according to the encoding mode of the coding unit. For example, luminance compensation may be performed only for prediction units having a size of 2Nx2N.
- FIG. 9 is a block diagram of a video decoding apparatus 200 based on coding units having a tree structure, according to various embodiments.
- a video decoding apparatus 200 including video prediction based on coding units having a tree structure includes a receiver 210, an image data and encoding information extractor 220, and an image data decoder 230.
- the video decoding apparatus 200 that includes video prediction based on coding units having a tree structure is abbreviated as “video decoding apparatus 200”.
- the receiver 210 receives and parses a bitstream of an encoded video.
- the image data and encoding information extractor 220 extracts image data encoded for each coding unit from the parsed bitstream according to coding units having a tree structure for each maximum coding unit, and outputs the encoded image data to the image data decoder 230.
- the image data and encoding information extractor 220 may extract information about a maximum size of a coding unit of the current picture from a header, a sequence parameter set, or a picture parameter set for the current picture.
- the image data and encoding information extractor 220 extracts the final depth and the split information of the coding units having a tree structure for each maximum coding unit from the parsed bitstream.
- the extracted final depth and split information are output to the image data decoder 230. That is, the image data of the bit string may be divided into maximum coding units so that the image data decoder 230 may decode the image data for each maximum coding unit.
- the depth and split information for each largest coding unit may be set for one or more depth information, and the split information for each depth may include partition mode information, prediction mode information, split information of a transform unit, and the like, of a corresponding coding unit. .
- depth-specific segmentation information may be extracted.
- the depth and split information for each maximum coding unit extracted by the image data and encoding information extractor 220 are depth and split information determined, as in the video encoding apparatus 100 according to various embodiments, by repeatedly performing encoding for each deeper coding unit for each maximum coding unit so as to generate a minimum encoding error. Therefore, the video decoding apparatus 200 may reconstruct an image by decoding data according to the encoding method that generates the minimum encoding error.
- the image data and encoding information extractor 220 may extract the depth and split information for each predetermined data unit. If the depth and split information of the corresponding maximum coding unit are recorded for each predetermined data unit, predetermined data units having the same depth and split information may be inferred as data units included in the same maximum coding unit.
- the image data decoder 230 reconstructs the current picture by decoding the image data of each maximum coding unit based on the depth and the split information for each maximum coding unit. That is, the image data decoder 230 may decode the encoded image data based on the read partition mode, prediction mode, and transformation unit for each coding unit among the coding units having a tree structure included in the maximum coding unit.
- the decoding process may include a prediction process including intra prediction and motion compensation, and an inverse transform process.
- the image data decoder 230 may perform intra prediction or motion compensation according to each partition and prediction mode for each coding unit, based on the partition mode information and the prediction mode information of the prediction unit of the coding unit according to depths.
- the image data decoder 230 may read transform unit information having a tree structure for each coding unit, and perform inverse transform based on the transformation unit for each coding unit, for inverse transformation for each largest coding unit. Through inverse transformation, the pixel value of the spatial region of the coding unit may be restored.
- the image data decoder 230 may determine the depth of the current maximum coding unit by using the split information for each depth. If the split information indicates that the split information is no longer divided at the current depth, the current depth is the depth. Therefore, the image data decoder 230 may decode the coding unit of the current depth using the partition mode, the prediction mode, and the transformation unit size information of the prediction unit, for the image data of the current maximum coding unit.
- that is, the image data decoder 230 may gather data units having encoding information containing the same split information and regard them as one data unit to be decoded in the same encoding mode.
- the decoding of the current coding unit may be performed by obtaining information about an encoding mode for each coding unit determined in this way.
- when the interlayer video decoding apparatus 20 described above with reference to FIG. 2A decodes the received first layer image stream and second layer image stream to reconstruct the first layer images and the second layer images, it may include as many video decoding apparatuses 200 as the number of viewpoints.
- the image data decoder 230 of the video decoding apparatus 200 may split the samples of the first layer images, extracted from the first layer image stream by the extractor 220, into coding units having a tree structure of the maximum coding unit. The image data decoder 230 may reconstruct the first layer images by performing motion compensation, for each prediction unit for inter-image prediction, on each coding unit according to the tree structure of the samples of the first layer images.
- likewise, the image data decoder 230 of the video decoding apparatus 200 may split the samples of the second layer images, extracted from the second layer image stream by the extractor 220, into coding units having a tree structure of the maximum coding unit. The image data decoder 230 may reconstruct the second layer images by performing motion compensation, for each prediction unit for inter-image prediction, on each coding unit of the samples of the second layer images.
- the extractor 220 may obtain information related to the luminance error from the bitstream in order to compensate for the luminance difference between the first layer image and the second layer image. However, whether to perform luminance compensation may be determined according to the encoding mode of the coding unit. For example, luminance compensation may be performed only for prediction units having a size of 2Nx2N.
- the video decoding apparatus 200 may obtain information about a coding unit that generates a minimum coding error by recursively encoding each maximum coding unit in the encoding process, and use the same to decode the current picture. That is, decoding of encoded image data of coding units having a tree structure determined as an optimal coding unit for each maximum coding unit can be performed.
- the image data may be efficiently decoded and restored according to the size and encoding mode of the coding unit adaptively determined according to the characteristics of the image, using the optimal split information transmitted from the encoding end.
- FIG. 10 illustrates a concept of coding units, according to various embodiments.
- a size of a coding unit may be expressed by a width x height, and may include 32x32, 16x16, and 8x8 from a coding unit having a size of 64x64.
- A coding unit of size 64x64 may be split into partitions of size 64x64, 64x32, 32x64, and 32x32; a coding unit of size 32x32 into partitions of size 32x32, 32x16, 16x32, and 16x16; a coding unit of size 16x16 into partitions of size 16x16, 16x8, 8x16, and 8x8; and a coding unit of size 8x8 into partitions of size 8x8, 8x4, 4x8, and 4x4.
- For the video data 310, the resolution is set to 1920x1080, the maximum size of the coding unit is 64, and the maximum depth is 2.
- For the video data 320, the resolution is set to 1920x1080, the maximum size of the coding unit is 64, and the maximum depth is 3.
- For the video data 330, the resolution is set to 352x288, the maximum size of the coding unit is 16, and the maximum depth is 1.
- the maximum depth illustrated in FIG. 10 represents the total number of divisions from the maximum coding unit to the minimum coding unit.
- when the resolution is high or the amount of data is large, it is advantageous for the maximum size of the coding unit to be relatively large, not only to improve the coding efficiency but also to accurately reflect the characteristics of the image. Accordingly, the video data 310 and 320, which have a higher resolution than the video data 330, may have a maximum coding unit size of 64.
- since the maximum depth of the video data 310 is 2, the coding units 315 of the video data 310 may include a maximum coding unit having a long-axis size of 64 as well as coding units having long-axis sizes of 32 and 16, as the depth is deepened by two layers by splitting the maximum coding unit twice.
- since the maximum depth of the video data 330 is 1, the coding units 335 of the video data 330 may include a maximum coding unit having a long-axis size of 16 as well as coding units having a long-axis size of 8, as the depth is deepened by one layer by splitting the maximum coding unit once.
- since the maximum depth of the video data 320 is 3, the coding units 325 of the video data 320 may include a maximum coding unit having a long-axis size of 64 as well as coding units having long-axis sizes of 32, 16, and 8, as the depth is deepened by three layers by splitting the maximum coding unit three times. As the depth increases, the ability to express detailed information may improve.
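- The three configurations above follow the same rule: starting from the long-axis size of the maximum coding unit, each increase in depth halves the size. The small sketch below illustrates that rule for the three video data configurations.

```python
def long_axis_sizes(max_size, max_depth):
    # Long-axis sizes of the coding units obtained after splitting
    # the maximum coding unit up to max_depth times.
    return [max_size >> d for d in range(max_depth + 1)]

print(long_axis_sizes(64, 2))  # [64, 32, 16]     (video data 310)
print(long_axis_sizes(64, 3))  # [64, 32, 16, 8]  (video data 320)
print(long_axis_sizes(16, 1))  # [16, 8]          (video data 330)
```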
- FIG. 11 is a block diagram of an image encoder 400 based on coding units, according to various embodiments.
- the image encoder 400 performs the operations carried out by the picture encoder 120 of the video encoding apparatus 100 to encode image data. That is, the intra prediction unit 420 performs intra prediction on coding units of the intra mode in the current image 405, and the inter prediction unit 415 performs inter prediction on each prediction unit of coding units of the inter mode using the current image 405 and the reference image obtained from the reconstructed picture buffer 410.
- the current image 405 may be divided into maximum coding units and then sequentially encoded. In this case, encoding may be performed on coding units having a tree structure into which each maximum coding unit is split.
- Residual data is generated by subtracting the prediction data for the coding unit of each mode, output from the intra prediction unit 420 or the inter prediction unit 415, from the data of the coding unit being encoded in the current image 405, and the residual data is output as transform coefficients quantized for each transformation unit through the transform unit 425 and the quantization unit 430.
- the quantized transform coefficients are reconstructed into residue data in the spatial domain through the inverse quantizer 445 and the inverse transformer 450.
- The residual data of the reconstructed spatial domain is added to the prediction data of the coding unit of each mode output from the intra predictor 420 or the inter predictor 415, so that the data of the spatial domain for the coding unit of the current image 405 is reconstructed.
- the reconstructed spatial region data is generated as a reconstructed image through the deblocking unit 455 and the SAO performing unit 460.
- the generated reconstructed image is stored in the reconstructed picture buffer 410.
- the reconstructed images stored in the reconstructed picture buffer 410 may be used as reference images for inter prediction of another image.
- the transform coefficients quantized by the transformer 425 and the quantizer 430 may be output as the bitstream 440 through the entropy encoder 435.
- the inter predictor 415, the intra predictor 420, and the transformer 425 may each perform operations based on each coding unit among the coding units having a tree structure for each maximum coding unit.
- the intra prediction unit 420 and the inter prediction unit 415 determine the partition mode and the prediction mode of each coding unit among the coding units having a tree structure in consideration of the maximum size and the maximum depth of the current maximum coding unit.
- the transform unit 425 may determine whether to split the transform unit according to the quad tree in each coding unit among the coding units having the tree structure.
- FIG. 12 is a block diagram of an image decoder 500 based on coding units, according to various embodiments.
- the entropy decoding unit 515 parses the encoded image data to be decoded from the bitstream 505 and encoding information necessary for decoding.
- the encoded image data is quantized transform coefficients, and the inverse quantizer 520 and the inverse transformer 525 reconstruct residual data from the quantized transform coefficients.
- the intra prediction unit 540 performs intra prediction for each prediction unit with respect to the coding unit of the intra mode.
- the inter prediction unit 535 performs inter prediction for each prediction unit with respect to coding units of the inter mode in the current image, using the reference image obtained from the reconstructed picture buffer 530.
- by adding the prediction data and the residual data of the coding unit of each mode, the data of the spatial domain for the coding unit of the current image 405 is reconstructed.
- the reconstructed data of the spatial domain may be output as a reconstructed image 560 via the deblocking unit 545 and the SAO performing unit 550.
- the reconstructed images stored in the reconstructed picture buffer 530 may be output as reference images.
- step-by-step operations after the entropy decoder 515 of the image decoder 500 may be performed. To this end, the entropy decoder 515, the inverse quantizer 520, the inverse transformer 525, the intra prediction unit 540, the inter prediction unit 535, the deblocking unit 545, and the SAO performer 550 may perform operations based on each coding unit among the coding units having a tree structure for each maximum coding unit.
- in particular, the intra predictor 540 and the inter predictor 535 determine a partition mode and a prediction mode for each coding unit among the coding units having a tree structure, and the inverse transformer 525 may determine whether to split the transformation unit according to a quad-tree structure for each coding unit.
- the encoding operation of FIG. 11 and the decoding operation of FIG. 12 describe the video stream encoding operation and decoding operation in a single layer, respectively. Therefore, if the encoder 12 of FIG. 1A encodes a video stream of two or more layers, it may include an image encoder 400 for each layer. Similarly, if the decoder 26 of FIG. 2A decodes a video stream of two or more layers, it may include an image decoder 500 for each layer.
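- The stage order described for the image decoder 500 can be summarized in a short sketch. The function and field names below are placeholders standing in for the modules named above (entropy decoder 515, inverse quantizer 520, inverse transformer 525, predictors 540/535, deblocking unit 545, SAO performer 550); the flow mirrors the description rather than any particular implementation.

```python
from dataclasses import dataclass

@dataclass
class CodingInfo:
    prediction_mode: str  # "intra" or "inter"

# Trivial stand-ins so the sketch runs; the real modules are the decoder blocks named above.
def entropy_decode(bitstream):        return bitstream["coeffs"], CodingInfo(bitstream["mode"])
def inverse_quantize(coeffs):         return [c * 2 for c in coeffs]   # placeholder scaling
def inverse_transform(coeffs):        return coeffs                    # identity placeholder
def intra_predict(info):              return [100, 100, 100, 100]
def inter_predict(info, ref_buffer):  return ref_buffer[-1]
def deblock(samples):                 return samples
def sample_adaptive_offset(samples):  return samples

def decode_coding_unit(bitstream, reconstructed_picture_buffer):
    # 1) Entropy decoding yields quantized coefficients and the encoding information.
    coeffs, info = entropy_decode(bitstream)
    # 2) Inverse quantization and inverse transform reconstruct the residual data.
    residual = inverse_transform(inverse_quantize(coeffs))
    # 3) Prediction: intra prediction for intra-mode coding units, inter prediction
    #    using a reference image from the reconstructed picture buffer otherwise.
    if info.prediction_mode == "intra":
        prediction = intra_predict(info)
    else:
        prediction = inter_predict(info, reconstructed_picture_buffer)
    # 4) Reconstruction (prediction + residual), then in-loop deblocking and SAO.
    reconstructed = [p + r for p, r in zip(prediction, residual)]
    return sample_adaptive_offset(deblock(reconstructed))

print(decode_coding_unit({"coeffs": [1, 2, 3, 4], "mode": "intra"}, [[90, 90, 90, 90]]))
```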
- FIG. 13 is a diagram illustrating deeper coding units according to depths, and partitions, according to various embodiments.
- the video encoding apparatus 100 according to various embodiments and the video decoding apparatus 200 according to various embodiments use hierarchical coding units to consider image characteristics.
- the maximum height, width, and maximum depth of the coding unit may be adaptively determined according to the characteristics of the image, and may be variously set according to a user's request. According to the maximum size of the preset coding unit, the size of the coding unit for each depth may be determined.
- the hierarchical structure 600 of a coding unit illustrates a case in which a maximum height and a width of a coding unit are 64 and a maximum depth is three.
- the maximum depth indicates the total number of divisions from the maximum coding unit to the minimum coding unit. Since the depth deepens along the vertical axis of the hierarchical structure 600 of the coding unit according to various embodiments, the height and the width of the coding unit for each depth are divided.
- along the horizontal axis of the hierarchical structure 600 of the coding unit, the prediction units and partitions on which prediction encoding of each deeper coding unit is based are illustrated.
- the coding unit 610 has a depth of 0 as the largest coding unit of the hierarchical structure 600 of the coding unit, and the size, ie, the height and width, of the coding unit is 64x64.
- a depth deeper along the vertical axis includes a coding unit 620 of depth 1 having a size of 32x32, a coding unit 630 of depth 2 having a size of 16x16, and a coding unit 640 of depth 3 having a size of 8x8.
- a coding unit 640 of depth 3 having a size of 8 ⁇ 8 is a minimum coding unit.
- Prediction units and partitions of the coding units are arranged along the horizontal axis for each depth. That is, if the coding unit 610 of size 64x64 having a depth of 0 is a prediction unit, the prediction unit may be split into a partition 610 of size 64x64, partitions 612 of size 64x32, partitions 614 of size 32x64, and partitions 616 of size 32x32, which are included in the coding unit 610 of size 64x64.
- likewise, the prediction unit of the coding unit 620 of size 32x32 having a depth of 1 may be split into a partition 620 of size 32x32, partitions 622 of size 32x16, partitions 624 of size 16x32, and partitions 626 of size 16x16, which are included in the coding unit 620 of size 32x32.
- likewise, the prediction unit of the coding unit 630 of size 16x16 having a depth of 2 may be split into a partition 630 of size 16x16, partitions 632 of size 16x8, partitions 634 of size 8x16, and partitions 636 of size 8x8, which are included in the coding unit 630 of size 16x16.
- likewise, the prediction unit of the coding unit 640 of size 8x8 having a depth of 3 may be split into a partition 640 of size 8x8, partitions 642 of size 8x4, partitions 644 of size 4x8, and partitions 646 of size 4x4, which are included in the coding unit 640 of size 8x8.
- in order to determine the depth of the maximum coding unit 610, the coding unit determiner 120 of the video encoding apparatus 100 must perform encoding for each coding unit of each depth included in the maximum coding unit 610.
- the number of deeper coding units according to depths for including data having the same range and size increases as the depth increases. For example, four coding units of depth 2 are required for data included in one coding unit of depth 1. Therefore, in order to compare the encoding results of the same data for each depth, each of the coding units having one depth 1 and four coding units having four depths 2 should be encoded.
- encoding may be performed for each prediction unit of a coding unit according to depths along a horizontal axis of the hierarchical structure 600 of the coding unit, and a representative coding error, which is the smallest coding error at a corresponding depth, may be selected. .
- also, as the depth deepens along the vertical axis of the hierarchical structure 600 of the coding unit, encoding may be performed for each depth, and the minimum coding error may be searched for by comparing the representative coding errors for each depth.
- the depth and partition in which the minimum coding error occurs in the maximum coding unit 610 may be selected as the depth and partition mode of the maximum coding unit 610.
- FIG. 14 illustrates a relationship between a coding unit and transformation units, according to various embodiments.
- the video encoding apparatus 100 encodes or decodes an image in coding units having a size smaller than or equal to the maximum coding unit for each maximum coding unit.
- the size of a transformation unit for transformation in the encoding process may be selected based on a data unit that is not larger than each coding unit.
- for example, if the size of the current coding unit 710 is 64x64, the transformation may be performed by selecting the transformation unit 720 of size 32x32.
- the data of the 64x64 coding unit 710 is transformed into 32x32, 16x16, 8x8, and 4x4 transform units of 64x64 size or less, and then encoded, and the transform unit having the least error with the original is selected. Can be.
- 15 illustrates encoding information, according to various embodiments.
- the output unit 130 of the video encoding apparatus 100 may encode and transmit, as split information for each coding unit of each depth, information 800 about a partition mode, information 810 about a prediction mode, and information 820 about a transformation unit size.
- the information 800 about the partition mode indicates the shape of the partition into which the prediction unit of the current coding unit is split, as a data unit for prediction encoding of the current coding unit.
- for example, the current coding unit CU_0 of size 2Nx2N may be split into and used as any one of a partition 802 of size 2Nx2N, a partition 804 of size 2NxN, a partition 806 of size Nx2N, and a partition 808 of size NxN.
- in this case, the information 800 about the partition mode of the current coding unit is set to indicate one of the partition 802 of size 2Nx2N, the partition 804 of size 2NxN, the partition 806 of size Nx2N, and the partition 808 of size NxN.
- Information 810 about the prediction mode indicates the prediction mode of each partition. For example, through the information 810 about the prediction mode, it may be set whether prediction encoding of the partition indicated by the information 800 about the partition mode is performed in one of the intra mode 812, the inter mode 814, and the skip mode 816.
- the information about the transform unit size 820 indicates whether to transform the current coding unit based on the transform unit.
- for example, the transformation unit may be one of a first intra transformation unit size 822, a second intra transformation unit size 824, a first inter transformation unit size 826, and a second inter transformation unit size 828.
- the image data and encoding information extractor 220 of the video decoding apparatus 200 may extract the information 800 about the partition mode, the information 810 about the prediction mode, and the information 820 about the transformation unit size for each deeper coding unit and use them for decoding.
- 16 is a diagram of deeper coding units according to depths, according to various embodiments.
- Segmentation information may be used to indicate a change in depth.
- the split information indicates whether a coding unit of a current depth is split into coding units of a lower depth.
- the prediction unit 910 for predictive encoding of the coding unit 900 having depth 0 and 2N_0x2N_0 size includes a partition mode 912 of 2N_0x2N_0 size, a partition mode 914 of 2N_0xN_0 size, a partition mode 916 of N_0x2N_0 size, and N_0xN_0 May include a partition mode 918 of size.
- although only the partition modes 912, 914, 916, and 918 in which the prediction unit is split in symmetric ratios are illustrated, as described above, the partition mode is not limited thereto and may include asymmetric partitions, arbitrary partitions, geometric partitions, and the like.
- for each partition mode, prediction encoding must be repeatedly performed for one partition of size 2N_0x2N_0, two partitions of size 2N_0xN_0, two partitions of size N_0x2N_0, and four partitions of size N_0xN_0.
- for partitions of size 2N_0x2N_0, size N_0x2N_0, size 2N_0xN_0, and size N_0xN_0, prediction encoding may be performed in the intra mode and the inter mode.
- the skip mode may be performed only for prediction encoding on partitions having a size of 2N_0x2N_0.
- the depth 0 is changed to 1 and the coding unit is split (920), and encoding is repeatedly performed on the coding units 930 of depth 2 and the partition mode of size N_0xN_0 to search for a minimum encoding error.
- the depth 1 is changed to the depth 2 and the coding unit is split (950), and encoding is repeatedly performed on the coding units 960 of depth 2 and size N_2xN_2 to search for a minimum encoding error.
- when the maximum depth is d, depth-based coding units may be set up to depth d-1, and split information may be set up to depth d-2. That is, when encoding is performed up to depth d-1 after the coding unit is split at depth d-2 (950), the prediction unit 990 for prediction encoding of the coding unit 980 of depth d-1 and size 2N_(d-1)x2N_(d-1) may include a partition mode 992 of size 2N_(d-1)x2N_(d-1), a partition mode 994 of size 2N_(d-1)xN_(d-1), a partition mode 996 of size N_(d-1)x2N_(d-1), and a partition mode 998 of size N_(d-1)xN_(d-1).
- among the partition modes, prediction encoding may be repeatedly performed for one partition of size 2N_(d-1)x2N_(d-1), two partitions of size 2N_(d-1)xN_(d-1), two partitions of size N_(d-1)x2N_(d-1), and four partitions of size N_(d-1)xN_(d-1), and the partition mode in which a minimum encoding error occurs may be searched for.
- since the maximum depth is d, the coding unit CU_(d-1) of depth d-1 no longer undergoes a splitting process into a lower depth; the depth of the current maximum coding unit 900 may be determined as the depth d-1, and the partition mode may be determined as N_(d-1)xN_(d-1).
- split information is not set for the coding unit 952 having the depth d-1.
- the data unit 999 may be referred to as a 'minimum unit' for the current maximum coding unit.
- the minimum unit may be a square data unit having a size obtained by dividing a minimum coding unit, which is a lowest depth, into four divisions.
- in this way, the video encoding apparatus 100 may compare the encoding errors for each depth of the coding unit 900, select the depth at which the smallest encoding error occurs, determine that depth as the final depth, and set the corresponding partition mode and prediction mode as the encoding mode of that depth.
- depths with the smallest error can be determined by comparing the minimum coding errors for all depths of depths 0, 1, ..., d-1, and d.
- the depth, the partition mode of the prediction unit, and the prediction mode may be encoded and transmitted as split information.
- since the coding unit must be split from depth 0 down to the final depth, only the split information of the final depth is set to '0', and the split information for each depth other than the final depth should be set to '1'.
- the image data and encoding information extractor 220 of the video decoding apparatus 200 may extract the information about the depth and the prediction unit of the coding unit 900 and use it to decode the coding unit 912.
- the video decoding apparatus 200 may grasp the depth of which the split information is '0' as the depth by using the split information for each depth, and may use the split information for the corresponding depth for decoding.
- 17, 18, and 19 illustrate a relationship between coding units, prediction units, and transformation units, according to various embodiments.
- the coding units 1010 are deeper coding units determined by the video encoding apparatus 100 according to various embodiments with respect to the maximum coding unit.
- the prediction unit 1060 is partitions of prediction units of each deeper coding unit among the coding units 1010, and the transform unit 1070 is transform units of each deeper coding unit.
- Among the deeper coding units 1010, the maximum coding unit has a depth of 0, the coding units 1012 and 1054 have a depth of 1, the coding units 1014, 1016, 1018, 1028, 1050, and 1052 have a depth of 2, the coding units 1020, 1022, 1024, 1026, 1030, 1032, and 1048 have a depth of 3, and the coding units 1040, 1042, 1044, and 1046 have a depth of 4.
- partitions 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 of the prediction units 1060 are obtained by splitting coding units. That is, partitions 1014, 1022, 1050, and 1054 are 2NxN partition modes, partitions 1016, 1048, and 1052 are Nx2N partition modes, and partitions 1032 are NxN partition modes. Prediction units and partitions of the coding units 1010 according to depths are smaller than or equal to each coding unit.
- the image data of the part 1052 of the transformation units 1070 is transformed or inversely transformed into a data unit having a smaller size than the coding unit.
- the transformation units 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 are data units having different sizes or shapes when compared to corresponding prediction units and partitions among the prediction units 1060. That is, the video encoding apparatus 100 according to various embodiments and the video decoding apparatus 200 according to an embodiment may be intra prediction / motion estimation / motion compensation operations and transform / inverse transform operations for the same coding unit. Each can be performed on a separate data unit.
- coding is performed recursively for each coding unit having a hierarchical structure for each largest coding unit to determine an optimal coding unit.
- coding units having a recursive tree structure may be configured.
- the encoding information may include split information about the coding unit, partition mode information, prediction mode information, and transformation unit size information. Table 1 below shows an example that can be set in the video encoding apparatus 100 according to various embodiments and the video decoding apparatus 200 according to various embodiments.
- the output unit 130 of the video encoding apparatus 100 according to various embodiments outputs encoding information about coding units having a tree structure, and the encoding information extractor 220 of the video decoding apparatus 200 according to various embodiments may extract encoding information about coding units having a tree structure from the received bitstream.
- the split information indicates whether the current coding unit is split into coding units of a lower depth. If the split information of the current depth d is 0, the current depth is a depth at which the current coding unit is no longer split into lower coding units, so partition mode information, prediction mode information, and transformation unit size information may be defined for that depth. If the coding unit is to be further split according to the split information, encoding should be performed independently for each of the four split coding units of the lower depth.
- the prediction mode may be represented by one of an intra mode, an inter mode, and a skip mode.
- Intra mode and inter mode can be defined in all partition modes, and skip mode can only be defined in partition mode 2Nx2N.
- the partition mode information indicates symmetric partition modes 2Nx2N, 2NxN, Nx2N, and NxN, in which the height or width of the prediction unit is divided by symmetrical ratios, and asymmetric partition modes 2NxnU, 2NxnD, nLx2N, nRx2N, divided by asymmetrical ratios.
- the asymmetric partition modes 2NxnU and 2NxnD are obtained by splitting the height at ratios of 1:3 and 3:1, respectively, and the asymmetric partition modes nLx2N and nRx2N are obtained by splitting the width at ratios of 1:3 and 3:1, respectively.
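- The partition dimensions implied by these mode names can be written out directly. The sketch below assumes a 2Nx2N coding unit of side `size` and returns the (width, height) of each partition, with the 1:3 and 3:1 asymmetric splits expressed as integer quarters of the side.

```python
def partition_dimensions(size):
    """Return {mode: list of (width, height) partitions} for a coding unit of
    size x size (i.e. 2Nx2N). Asymmetric modes split one side at 1:3 or 3:1."""
    n, q = size // 2, size // 4
    return {
        "2Nx2N": [(size, size)],
        "2NxN":  [(size, n)] * 2,
        "Nx2N":  [(n, size)] * 2,
        "NxN":   [(n, n)] * 4,
        "2NxnU": [(size, q), (size, size - q)],   # height split 1:3 (upper part smaller)
        "2NxnD": [(size, size - q), (size, q)],   # height split 3:1 (lower part smaller)
        "nLx2N": [(q, size), (size - q, size)],   # width split 1:3 (left part smaller)
        "nRx2N": [(size - q, size), (q, size)],   # width split 3:1 (right part smaller)
    }

print(partition_dimensions(32)["2NxnU"])  # [(32, 8), (32, 24)]
```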
- the transformation unit size may be set to two kinds of sizes in the intra mode and two kinds of sizes in the inter mode. That is, if the transformation unit split information is 0, the size of the transformation unit is set to 2Nx2N, the size of the current coding unit. If the transformation unit split information is 1, a transformation unit of a size obtained by splitting the current coding unit may be set. In addition, if the partition mode of the current coding unit of size 2Nx2N is a symmetric partition mode, the size of the transformation unit may be set to NxN, and if it is an asymmetric partition mode, to N/2xN/2.
- Encoding information of coding units having a tree structure may be allocated to at least one of the coding unit, the prediction unit, and the minimum unit of a depth.
- the coding unit of the depth may include at least one prediction unit and at least one minimum unit having the same encoding information.
- therefore, if the encoding information held by each adjacent data unit is checked, it may be determined whether the data units are included in a coding unit having the same depth.
- in addition, since the coding unit of the corresponding depth may be identified using the encoding information held by the data unit, the distribution of depths within the maximum coding unit may be inferred.
- the encoding information of the data unit in the depth-specific coding unit adjacent to the current coding unit may be directly referenced and used.
- in another embodiment, when prediction encoding is performed by referring to neighboring coding units, data adjacent to the current coding unit within the deeper coding units may be searched for using the encoding information of the adjacent deeper coding units, and the neighboring coding units may thereby be referred to.
- FIG. 20 illustrates a relationship between a coding unit, a prediction unit, and a transformation unit, according to encoding mode information of Table 1.
- the maximum coding unit 1300 includes coding units 1302, 1304, 1306, 1312, 1314, 1316, and 1318 of depths. Since the coding unit 1318 is a coding unit of a final depth, its split information may be set to 0.
- the partition mode information of the coding unit 1318 of size 2Nx2N may be set to one of the partition modes 2Nx2N 1322, 2NxN 1324, Nx2N 1326, NxN 1328, 2NxnU 1332, 2NxnD 1334, nLx2N 1336, and nRx2N 1338.
- the transform unit split information (TU size flag) is a type of transform index, and a size of a transform unit corresponding to the transform index may be changed according to a prediction unit type or a partition mode of the coding unit.
- if the partition mode information is set to one of the symmetric partition modes 2Nx2N 1322, 2NxN 1324, Nx2N 1326, and NxN 1328, a transformation unit 1342 of size 2Nx2N is set when the transformation unit split information (TU size flag) is 0, and a transformation unit 1344 of size NxN may be set when the transformation unit split information is 1.
- if the partition mode information is set to one of the asymmetric partition modes 2NxnU 1332, 2NxnD 1334, nLx2N 1336, and nRx2N 1338, a transformation unit 1352 of size 2Nx2N is set when the transformation unit split information (TU size flag) is 0, and a transformation unit 1354 of size N/2xN/2 may be set when the transformation unit split information is 1.
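- The two cases above reduce to a small rule: with TU size flag 0 the transformation unit equals the coding unit; with flag 1 it is NxN for symmetric partition modes and N/2xN/2 for asymmetric ones. The sketch below assumes the coding unit side is divisible accordingly.

```python
SYMMETRIC_MODES = {"2Nx2N", "2NxN", "Nx2N", "NxN"}
ASYMMETRIC_MODES = {"2NxnU", "2NxnD", "nLx2N", "nRx2N"}

def transform_unit_side(cu_side, partition_mode, tu_size_flag):
    # TU size flag 0: the transformation unit has the coding unit size (2Nx2N).
    if tu_size_flag == 0:
        return cu_side
    # TU size flag 1: NxN for symmetric modes, N/2xN/2 for asymmetric modes.
    if partition_mode in SYMMETRIC_MODES:
        return cu_side // 2
    if partition_mode in ASYMMETRIC_MODES:
        return cu_side // 4
    raise ValueError("unknown partition mode")

print(transform_unit_side(32, "2NxN", 1))   # 16 (NxN)
print(transform_unit_side(32, "nLx2N", 1))  # 8  (N/2 x N/2)
```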
- the transformation unit split information (TU size flag) described above with reference to FIG. 19 is a flag having a value of 0 or 1, but the transformation unit split information according to various embodiments is not limited to a 1-bit flag and may increase as 0, 1, 2, 3, and so on according to a setting, so that the transformation unit may be split hierarchically.
- the transformation unit split information may be used as an example of a transformation index; in this case, if the transformation unit split information is used together with the maximum size and the minimum size of the transformation unit, the size of the transformation unit actually used may be expressed.
- the video encoding apparatus 100 may encode maximum transform unit size information, minimum transform unit size information, and maximum transform unit split information.
- the encoded maximum transform unit size information, minimum transform unit size information, and maximum transform unit split information may be inserted into the SPS.
- the video decoding apparatus 200 may use the maximum transformation unit size information, the minimum transformation unit size information, and the maximum transformation unit split information for video decoding.
- for example, if the maximum transformation unit split information is defined as 'MaxTransformSizeIndex', the minimum transformation unit size as 'MinTransformSize', and the transformation unit size when the transformation unit split information is 0 as 'RootTuSize', the minimum transformation unit size 'CurrMinTuSize' possible in the current coding unit may be defined as in relation (1) below.
- 'RootTuSize', which is the transformation unit size when the transformation unit split information is 0, may indicate the maximum transformation unit size that can be adopted in the system. That is, according to relation (1), 'RootTuSize/(2^MaxTransformSizeIndex)' is the transformation unit size obtained by splitting 'RootTuSize' the number of times corresponding to the maximum transformation unit split information, and 'MinTransformSize' is the minimum transformation unit size; a smaller value among these may be the minimum transformation unit size 'CurrMinTuSize' possible in the current coding unit.
- 'RootTuSize' may vary depending on the prediction mode.
- For example, when the current prediction mode is an inter mode, 'RootTuSize' may be determined according to relation (2) below, where 'MaxTransformSize' represents the maximum transform unit size and 'PUSize' represents the current prediction unit size.
- RootTuSize = min(MaxTransformSize, PUSize) ......... (2)
- That is, when the current prediction mode is an inter mode, 'RootTuSize', the transform unit size when the transform unit split information is 0, may be set to the smaller of the maximum transform unit size and the current prediction unit size.
- When the current prediction mode is an intra mode, 'RootTuSize' may be determined according to relation (3) below, where 'PartitionSize' represents the size of the current partition unit.
- RootTuSize = min(MaxTransformSize, PartitionSize) ......... (3)
- That is, when the current prediction mode is an intra mode, the transform unit size 'RootTuSize' when the transform unit split information is 0 may be set to the smaller of the maximum transform unit size and the current partition unit size.
- However, the current maximum transform unit size 'RootTuSize', which varies with the prediction mode of the partition unit according to various embodiments, is only an example, and the factor that determines the current maximum transform unit size is not limited thereto.
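- The two relations can be combined into one small helper; the sketch below reflects the usual reading that relation (2) applies to inter prediction and relation (3) to intra prediction, and is an illustration rather than a normative definition.

```python
# Sketch only: relations (2) and (3) combined into one hypothetical helper.
def root_tu_size(prediction_mode: str, max_transform_size: int,
                 pu_size: int = 0, partition_size: int = 0) -> int:
    if prediction_mode == "inter":
        return min(max_transform_size, pu_size)         # relation (2)
    if prediction_mode == "intra":
        return min(max_transform_size, partition_size)  # relation (3)
    raise ValueError(f"unsupported prediction mode: {prediction_mode}")

assert root_tu_size("inter", 32, pu_size=64) == 32
assert root_tu_size("intra", 32, partition_size=16) == 16
```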
- According to the video encoding method based on coding units of a tree structure, image data of the spatial domain is encoded for each coding unit of the tree structure; according to the video decoding method based on coding units of a tree structure, decoding is performed for each largest coding unit, and image data of the spatial domain may be reconstructed, so that a picture, and a video that is a sequence of pictures, may be reconstructed.
- the reconstructed video can be played back by a playback device, stored in a storage medium, or transmitted over a network.
- The above-described embodiments can be written as computer-executable programs and can be implemented in a general-purpose digital computer that runs the programs from a computer-readable recording medium.
- The computer-readable recording medium may include a storage medium such as a magnetic storage medium (e.g., a ROM, a floppy disk, a hard disk, etc.) or an optically readable medium (e.g., a CD-ROM, a DVD, etc.).
- the interlayer video encoding method and / or video encoding method described above with reference to FIGS. 1A through 20 are collectively referred to as a video encoding method.
- the inter-layer video decoding method and / or video decoding method described above with reference to FIGS. 1A to 20 are referred to as a video decoding method.
- The video encoding apparatus including the interlayer video encoding apparatus 10, the video encoding apparatus 100, or the image encoding unit 400 described above with reference to FIGS. 1A to 20 is collectively referred to as a “video encoding apparatus”.
- the video decoding apparatus including the interlayer video decoding apparatus 20, the video decoding apparatus 200, or the image decoding unit 500 described above with reference to FIGS. 1A to 20 is referred to as a “video decoding apparatus”.
- the disk 26000 described above as a storage medium may be a hard drive, a CD-ROM disk, a Blu-ray disk, or a DVD disk.
- the disk 26000 is composed of a plurality of concentric tracks tr, and the tracks are divided into a predetermined number of sectors Se in the circumferential direction.
- A program for implementing the quantization parameter determination method, the video encoding method, and the video decoding method according to the above-described various embodiments may be allocated to and stored in a specific area of the disc 26000.
- a computer system achieved using a storage medium storing a program for implementing the above-described video encoding method and video decoding method will be described below with reference to FIG. 22.
- the computer system 26700 may store a program for implementing at least one of a video encoding method and a video decoding method on the disc 26000 using the disc drive 26800.
- the program may be read from the disk 26000 by the disk drive 26800, and the program may be transferred to the computer system 26700.
- A program for implementing at least one of the video encoding method and the video decoding method may also be stored in a memory card, a ROM cassette, or a solid state drive (SSD).
- FIG. 23 illustrates an overall structure of a content supply system 11000 for providing a content distribution service.
- The service area of the communication system is divided into cells of a predetermined size, and wireless base stations 11700, 11800, 11900, and 12000 are installed in the respective cells.
- the content supply system 11000 includes a plurality of independent devices.
- Independent devices such as a computer 12100, a personal digital assistant (PDA) 12200, a video camera 12300, and a mobile phone 12500 are connected to the Internet 11100 via an Internet service provider 11200, the communication network 11400, and the wireless base stations 11700, 11800, 11900, and 12000.
- the content supply system 11000 is not limited to the structure shown in FIG. 24, and devices may be selectively connected.
- the independent devices may be directly connected to the communication network 11400 without passing through the wireless base stations 11700, 11800, 11900, and 12000.
- The video camera 12300 is an imaging device, such as a digital video camera, capable of capturing video images.
- The mobile phone 12500 may adopt at least one communication scheme among various protocols such as Personal Digital Communications (PDC), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (W-CDMA), Global System for Mobile Communications (GSM), and Personal Handyphone System (PHS).
- the video camera 12300 may be connected to the streaming server 11300 through the wireless base station 11900 and the communication network 11400.
- The streaming server 11300 may stream content transmitted by a user using the video camera 12300 through real-time broadcasting.
- Content received from the video camera 12300 may be encoded by the video camera 12300 or the streaming server 11300.
- Video data captured by the video camera 12300 may be transmitted to the streaming server 11300 via the computer 12100.
- Video data captured by the camera 12600 may also be transmitted to the streaming server 11300 via the computer 12100.
- the camera 12600 is an imaging device capable of capturing both still and video images, like a digital camera.
- Video data received from the camera 12600 may be encoded by the camera 12600 or the computer 12100.
- Software for video encoding and decoding may be stored in a computer readable recording medium such as a CD-ROM disk, a floppy disk, a hard disk drive, an SSD, or a memory card that the computer 12100 may access.
- video data may be received from the mobile phone 12500.
- the video data may be encoded by a large scale integrated circuit (LSI) system installed in the video camera 12300, the mobile phone 12500, or the camera 12600.
- In the content supply system 11000, content recorded by a user using the video camera 12300, the camera 12600, the mobile phone 12500, or another imaging device is encoded and transmitted to the streaming server 11300.
- the streaming server 11300 may stream and transmit content data to other clients who have requested the content data.
- the clients are devices capable of decoding the encoded content data, and may be, for example, a computer 12100, a PDA 12200, a video camera 12300, or a mobile phone 12500.
- the content supply system 11000 allows clients to receive and play encoded content data.
- The content supply system 11000 enables clients to receive encoded content data and to decode and reproduce it in real time, thereby enabling personal broadcasting.
- the video encoding apparatus and the video decoding apparatus may be applied to encoding and decoding operations of independent devices included in the content supply system 11000.
- the mobile phone 12500 is not limited in functionality and may be a smart phone that can change or expand a substantial portion of its functions through an application program.
- The mobile phone 12500 includes a built-in antenna 12510 for exchanging RF signals with the wireless base station 12000, and a display screen 12520, such as an LCD (Liquid Crystal Display) or OLED (Organic Light Emitting Diodes) screen, for displaying images captured by the camera 12530 or images received via the antenna 12510 and decoded.
- The mobile phone 12500 includes an operation panel 12540 including a control button and a touch panel. When the display screen 12520 is a touch screen, the operation panel 12540 further includes a touch sensing panel of the display screen 12520.
- The mobile phone 12500 includes a speaker 12580 or another type of audio output unit for outputting voice and sound, and a microphone 12550 or another type of audio input unit for inputting voice and sound.
- The mobile phone 12500 further includes a camera 12530, such as a CCD camera, for capturing video and still images.
- The mobile phone 12500 also includes a storage medium 12570 for storing encoded or decoded data, such as video or still images captured by the camera 12530, received by e-mail, or obtained in another form, and a slot 12560 into which the storage medium 12570 is inserted.
- The storage medium 12570 may be a flash memory, for example, an SD card or an electrically erasable and programmable read-only memory (EEPROM) embedded in a plastic case.
- FIG. 25 illustrates an internal structure of the mobile phone 12500.
- The power supply circuit 12700, the operation input controller 12640, the image encoder 12720, the camera interface 12630, the LCD controller 12620, the image decoder 12690, the multiplexer/demultiplexer 12680, the recorder/reader 12670, the modulator/demodulator 12660, and the sound processor 12650 are connected to the central controller 12710 through the synchronization bus 12730.
- The power supply circuit 12700 supplies power to each part of the mobile phone 12500 from a battery pack, thereby setting the mobile phone 12500 to an operating mode.
- the central controller 12710 includes a CPU, a read only memory (ROM), and a random access memory (RAM).
- A digital signal is generated in the mobile phone 12500 under the control of the central controller 12710; for example, a digital sound signal is generated in the sound processor 12650, the image encoder 12720 may generate a digital image signal, and text data of a message may be generated through the operation panel 12540 and the operation input controller 12640.
- When the digital signal is delivered to the modulator/demodulator 12660, the modulator/demodulator 12660 modulates a frequency band of the digital signal, and the communication circuit 12610 performs digital-to-analog conversion and frequency conversion on the band-modulated digital signal.
- the transmission signal output from the communication circuit 12610 may be transmitted to the voice communication base station or the radio base station 12000 through the antenna 12510.
- the sound signal acquired by the microphone 12550 is converted into a digital sound signal by the sound processor 12650 under the control of the central controller 12710.
- the generated digital sound signal may be converted into a transmission signal through the modulation / demodulation unit 12660 and the communication circuit 12610 and transmitted through the antenna 12510.
- The text data of the message is input using the operation panel 12540, and the text data is transmitted to the central controller 12710 through the operation input controller 12640.
- the text data is converted into a transmission signal through the modulator / demodulator 12660 and the communication circuit 12610, and transmitted to the radio base station 12000 through the antenna 12510.
- The image data captured by the camera 12530 is provided to the image encoder 12720 through the camera interface 12630.
- The image data captured by the camera 12530 may also be displayed directly on the display screen 12520 through the camera interface 12630 and the LCD controller 12620.
- the structure of the image encoder 12720 may correspond to the structure of the video encoding apparatus described above.
- The image encoder 12720 encodes the image data provided from the camera 12530 according to the above-described video encoding method into compressed, encoded image data, and outputs the encoded image data to the multiplexer/demultiplexer 12680.
- During recording by the camera 12530, a sound signal obtained by the microphone 12550 of the mobile phone 12500 is also converted into digital sound data through the sound processor 12650, and the digital sound data may be delivered to the multiplexer/demultiplexer 12680.
- the multiplexer / demultiplexer 12680 multiplexes the encoded image data provided from the image encoder 12720 together with the acoustic data provided from the sound processor 12650.
- the multiplexed data may be converted into a transmission signal through the modulation / demodulation unit 12660 and the communication circuit 12610 and transmitted through the antenna 12510.
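- The transmit path just described (encode, multiplex with the sound data, modulate, transmit) can be pictured as a small pipeline; everything in the sketch below, including the function names and byte layout, is a hypothetical illustration rather than an interface of the apparatus.

```python
# Hypothetical sketch of the transmit path described above. The stub functions stand in
# for the image encoder 12720, the sound processor 12650, the multiplexer 12680, and the
# modulator/demodulator 12660 / communication circuit 12610.
def encode_video(frames: bytes) -> bytes:
    return b"VID" + frames                      # video encoding method described above

def digitize_sound(audio: bytes) -> bytes:
    return b"AUD" + audio                       # microphone signal -> digital sound data

def multiplex(video: bytes, audio: bytes) -> bytes:
    return len(video).to_bytes(4, "big") + video + audio

def modulate(data: bytes) -> bytes:
    return data                                 # band modulation / DAC abstracted away

def transmission_signal(frames: bytes, audio: bytes) -> bytes:
    """Encode, multiplex with the sound data, and modulate for transmission via the antenna."""
    return modulate(multiplex(encode_video(frames), digitize_sound(audio)))

signal = transmission_signal(b"frame-data", b"audio-data")
```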
- A signal received through the antenna 12510 is converted into a digital signal through frequency recovery and analog-to-digital conversion (ADC).
- the modulator / demodulator 12660 demodulates the frequency band of the digital signal.
- The band-demodulated digital signal is transmitted to the image decoder 12690, the sound processor 12650, or the LCD controller 12620 according to the type of the signal.
- When the mobile phone 12500 is in the call mode, the mobile phone 12500 amplifies a signal received through the antenna 12510 and generates a digital sound signal through frequency conversion and analog-to-digital conversion.
- The received digital sound signal is converted into an analog sound signal through the modulator/demodulator 12660 and the sound processor 12650 under the control of the central controller 12710, and the analog sound signal is output through the speaker 12580.
- A signal received from the radio base station 12000 via the antenna 12510 is output as multiplexed data as a result of the processing of the modulator/demodulator 12660, and the multiplexed data is transmitted to the multiplexer/demultiplexer 12680.
- the multiplexer / demultiplexer 12680 demultiplexes the multiplexed data to separate the encoded video data stream and the encoded audio data stream.
- the encoded video data stream is provided to the video decoder 12690, and the encoded audio data stream is provided to the sound processor 12650.
- the structure of the image decoder 12690 may correspond to the structure of the video decoding apparatus described above.
- The image decoder 12690 decodes the encoded video data according to the above-described video decoding method to generate reconstructed video data, and provides the reconstructed video data to the display screen 12520 via the LCD controller 12620.
- Accordingly, video data of a video file accessed from an Internet website can be displayed on the display screen 12520.
- At the same time, the sound processor 12650 may convert the audio data into an analog sound signal and provide the analog sound signal to the speaker 12580. Accordingly, audio data contained in a video file accessed from an Internet website can also be reproduced through the speaker 12580.
- The mobile phone 12500 or another type of communication terminal may be a transmitting/receiving terminal including both the video encoding apparatus and the video decoding apparatus, a transmitting terminal including only the video encoding apparatus described above, or a receiving terminal including only the video decoding apparatus.
- FIG. 26 illustrates a digital broadcasting system employing a communication system, according to various embodiments.
- the digital broadcasting system according to various embodiments of FIG. 26 may receive a digital broadcast transmitted through a satellite or terrestrial network using a video encoding apparatus and a video decoding apparatus.
- the broadcast station 12890 transmits the video data stream to the communication satellite or the broadcast satellite 12900 through radio waves.
- The broadcast satellite 12900 transmits a broadcast signal, and the broadcast signal is received via the household antenna 12860 by a satellite broadcast receiver.
- The encoded video stream may be decoded and reproduced by the TV receiver 12810, the set-top box 12870, or another device.
- The playback device 12830 may read and decode an encoded video stream recorded on a storage medium 12820 such as a disc or a memory card.
- The reconstructed video signal may thus be reproduced, for example, on the monitor 12840.
- the video decoding apparatus may also be mounted in the set top box 12870 connected to the antenna 12860 for satellite / terrestrial broadcasting or the cable antenna 12850 for cable TV reception. Output data of the set-top box 12870 may also be reproduced by the TV monitor 12880.
- a video decoding apparatus may be mounted on the TV receiver 12810 itself instead of the set top box 12870.
- An automobile 12920 having an appropriate antenna 12910 may receive a signal transmitted from the satellite 12900 or the wireless base station 11700.
- the decoded video may be played on the display screen of the car navigation system 12930 mounted on the car 12920.
- the video signal may be encoded by the video encoding apparatus and recorded and stored in a storage medium.
- the video signal may be stored in the DVD disk 12960 by the DVD recorder, or the video signal may be stored in the hard disk by the hard disk recorder 12950.
- As another example, the video signal may be stored in the SD card 12970. If the hard disk recorder 12950 includes a video decoding apparatus according to various embodiments, a video signal recorded on the DVD disc 12960, the SD card 12970, or another type of storage medium may be reproduced on the monitor 12880.
- The vehicle navigation system 12930 may not include the camera 12530, the camera interface 12630, and the image encoder 12720 of FIG. 25.
- Likewise, the computer 12100 and the TV receiver 12810 may not include the camera 12530, the camera interface 12630, and the image encoder 12720 of FIG. 25.
- FIG. 27 illustrates a network structure of a cloud computing system using a video encoding apparatus and a video decoding apparatus, according to various embodiments.
- the cloud computing system may include a cloud computing server 14100, a user DB 14100, a computing resource 14200, and a user terminal.
- the cloud computing system provides an on demand outsourcing service of computing resources through an information communication network such as the Internet at the request of a user terminal.
- Service providers integrate the computing resources of data centers located at physically different locations using virtualization technology and provide users with the services they need.
- The service user does not install computing resources such as applications, storage, an operating system, and security software in the user's own terminal; instead, the user may select and use desired services, at any desired time, from among the services in a virtual space created through virtualization technology.
- a user terminal of a specific service user accesses the cloud computing server 14100 through an information communication network including the Internet and a mobile communication network.
- the user terminals may be provided with a cloud computing service, particularly a video playback service, from the cloud computing server 14100.
- The user terminal may be any electronic device capable of accessing the Internet, such as a desktop PC 14300, a smart TV 14400, a smartphone 14500, a notebook computer 14600, a portable multimedia player (PMP) 14700, or a tablet PC 14800.
- the cloud computing server 14100 may integrate and provide a plurality of computing resources 14200 distributed in a cloud network to a user terminal.
- the plurality of computing resources 14200 include various data services and may include data uploaded from a user terminal.
- The cloud computing server 14100 integrates video databases distributed in various places using virtualization technology and provides the service required by a user terminal.
- The user DB 14100 stores user information of users subscribed to the cloud computing service.
- the user information may include login information and personal credit information such as an address and a name.
- the user information may include an index of the video.
- the index may include a list of videos that have been played, a list of videos being played, and a stop time of the videos being played.
- Information about a video stored in the user DB 14100 may be shared among user devices.
- the playback history of the predetermined video service is stored in the user DB 14100.
- the cloud computing server 14100 searches for and plays a predetermined video service with reference to the user DB 14100.
- When the smartphone 14500 receives a video data stream through the cloud computing server 14100, the operation of decoding the video data stream and reproducing the video is similar to the operation of the mobile phone 12500 described above with reference to FIG. 24.
- The cloud computing server 14100 may refer to a playback history of a predetermined video service stored in the user DB 14100. For example, the cloud computing server 14100 receives, from a user terminal, a playback request for a video stored in the user DB 14100. If the video was being played before, the cloud computing server 14100 may use a different streaming method depending on whether the video is played from the beginning or from the previous stop point, according to the user terminal's selection. For example, when the user terminal requests playback from the beginning, the cloud computing server 14100 streams the video to the user terminal from the first frame. On the other hand, if the terminal requests to continue playing from the previous stop point, the cloud computing server 14100 streams the video to the user terminal from the frame at the stop point.
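- A minimal sketch of this resume behaviour is given below; the user DB lookup, the key structure, and the frame-index return value are assumptions made for illustration only.

```python
# Minimal sketch of the resume logic described above (hypothetical names and structures).
def starting_frame(video_id: str, user_id: str, resume_requested: bool,
                   user_db: dict) -> int:
    """Frame from which the cloud computing server starts streaming the video."""
    history = user_db.get((user_id, video_id))       # playback history, if any
    if resume_requested and history is not None:
        return history["stop_frame"]                 # continue from the previous stop point
    return 0                                         # otherwise stream from the first frame

user_db = {("alice", "movie-42"): {"stop_frame": 1337}}
assert starting_frame("movie-42", "alice", True, user_db) == 1337
assert starting_frame("movie-42", "alice", False, user_db) == 0
```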
- the user terminal may include the video decoding apparatus described above with reference to FIGS. 1A through 20.
- the user terminal may include the video encoding apparatus described above with reference to FIGS. 1A through 20.
- the user terminal may include both the video encoding apparatus and the video decoding apparatus described above with reference to FIGS. 1A through 20.
- Various embodiments in which the video encoding method, the video decoding method, the video encoding apparatus, and the video decoding apparatus described above with reference to FIGS. 1A through 20 are utilized have been described above with reference to FIGS. 21 through 27. However, embodiments in which the video encoding method and the video decoding method described above with reference to FIGS. 1A through 20 are stored in a storage medium, or in which the video encoding apparatus and the video decoding apparatus are implemented in a device, are not limited to the embodiments illustrated in FIGS. 21 through 27.
Claims (15)
- An inter-layer video decoding method comprising: obtaining a disparity vector of a current block included in a first layer image; determining a block of a second layer image corresponding to the current block by using the obtained disparity vector; determining a reference block including a sample adjacent to a boundary of the block; obtaining a motion vector of the reference block; and determining a motion vector of the current block of the first layer image by using the obtained motion vector.
- The inter-layer video decoding method of claim 1, wherein the sample is a sample adjacent to the lower right of the block in the second layer image.
- The inter-layer video decoding method of claim 1, wherein a current coding unit is one of at least one coding unit determined in the first layer image by using split information about coding units obtained from a bitstream, and the current block is one of at least one prediction unit determined from the current coding unit.
- The inter-layer video decoding method of claim 1, wherein, when the second layer image is split into a plurality of blocks having a predetermined size, the reference block including the sample is the block including the sample from among the plurality of split blocks.
- The inter-layer video decoding method of claim 1, wherein a first sample is the sample adjacent to the boundary of the block of the second layer image, and the determining of the reference block including the sample adjacent to the boundary of the block by using the obtained disparity vector comprises: when the first sample is outside a boundary of the second layer image, determining a second sample adjacent to the inside of the boundary of the second layer image; and determining, as the reference block, a block including the determined second sample adjacent to the inside of the boundary of the second layer image.
- The inter-layer video decoding method of claim 1, wherein the disparity vector is a vector having quarter-sample accuracy, and the determining of the reference block including the sample adjacent to a boundary of the corresponding block of the second layer image by using the obtained disparity vector comprises: performing a rounding operation on the disparity vector to generate a disparity vector of integer sample accuracy; and determining the reference block including the sample adjacent to the boundary of the corresponding block by using the disparity vector of integer sample accuracy and the position and size (width and height) of the corresponding block.
- The inter-layer video decoding method of claim 1, wherein the first layer image is a first-view image and the second layer image is a second-view image, the method further comprising determining a prediction block of the current block by using a block in a reference image of the first view indicated by the obtained motion vector, wherein the reference image is an image of a time different from that of the first layer image.
- An inter-layer video encoding method comprising: obtaining a disparity vector of a current block included in a first layer image; determining a block of a second layer image corresponding to the current block by using the obtained disparity vector; determining a reference block including a sample adjacent to a boundary of the block; obtaining a motion vector of the reference block; determining a motion vector of the current block of the first layer image by using the obtained motion vector; determining a prediction block of the current block by using the determined motion vector; and encoding a residual block of the current block by using the prediction block of the current block.
- The inter-layer video encoding method of claim 8, wherein the sample is a sample adjacent to the lower right of the block in the second layer image.
- The inter-layer video encoding method of claim 8, wherein, when the second layer image is split into a plurality of blocks having a predetermined size, the reference block including the sample is the block including the sample from among the plurality of split blocks.
- The inter-layer video encoding method of claim 8, wherein a first sample is the sample adjacent to a boundary of the block of the second layer image, and the determining of the reference block including the sample adjacent to the boundary of the corresponding block of the second layer image by using the obtained disparity vector comprises: when the first sample is outside a boundary of the second layer image, determining a second sample adjacent to the inside of the boundary of the second layer image; and determining, as the reference block, a block including the determined second sample adjacent to the inside of the boundary of the second layer image.
- The inter-layer video encoding method of claim 8, wherein the disparity vector is a vector having quarter-sample accuracy, and the determining of the reference block including the sample adjacent to a boundary of the corresponding block of the second layer image by using the obtained disparity vector comprises: performing a rounding operation on the disparity vector to generate a disparity vector of integer sample accuracy; and determining the reference block including the sample adjacent to the boundary of the corresponding block by using the disparity vector of integer sample accuracy and the position and size (width and height) of the corresponding block.
- An inter-layer video decoding apparatus comprising: a disparity vector obtainer configured to obtain a disparity vector indicating a corresponding block of a decoded second layer image from a current block of a first layer image; and a decoder configured to determine a reference block including a sample adjacent to a boundary of the corresponding block of the second layer image by using the obtained disparity vector, to obtain a motion vector of the reference block, and to obtain a prediction block of the current block of the first layer image by using the obtained motion vector.
- An inter-layer video encoding apparatus comprising: a disparity vector obtainer configured to obtain a disparity vector indicating a corresponding block of an encoded second layer image from a current block of a first layer image; and an encoder configured to determine a reference block including a sample adjacent to a boundary of the corresponding block of the second layer image by using the obtained disparity vector, to obtain a motion vector of the reference block, to obtain a prediction block of the current block by using the obtained motion vector, and to encode the first layer image including the current block by using the prediction block of the current block.
- A computer-readable recording medium having recorded thereon a program for implementing the method of any one of claims 1 to 14.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/545,432 US10820007B2 (en) | 2015-01-21 | 2016-01-20 | Method and apparatus for decoding inter-layer video, and method and apparatus for encoding inter-layer video |
KR1020177019874A KR102149827B1 (ko) | 2015-01-21 | 2016-01-20 | 인터 레이어 비디오 복호화 방법 및 그 장치 및 인터 레이어 비디오 부호화 방법 및 그 장치 |
EP16740403.7A EP3247114A4 (en) | 2015-01-21 | 2016-01-20 | Method and apparatus for decoding inter-layer video, and method and apparatus for encoding inter-layer video |
CN201680017423.1A CN107409214B (zh) | 2015-01-21 | 2016-01-20 | 用于对层间视频进行解码的方法和设备以及用于对层间视频进行编码的方法和设备 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562106154P | 2015-01-21 | 2015-01-21 | |
US62/106,154 | 2015-01-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016117930A1 (ko) | 2016-07-28 |
Family
ID=56417391
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2016/000597 WO2016117930A1 (ko) | 2015-01-21 | 2016-01-20 | 인터 레이어 비디오 복호화 방법 및 그 장치 및 인터 레이어 비디오 부호화 방법 및 그 장치 |
Country Status (5)
Country | Link |
---|---|
US (1) | US10820007B2 (ko) |
EP (1) | EP3247114A4 (ko) |
KR (1) | KR102149827B1 (ko) |
CN (1) | CN107409214B (ko) |
WO (1) | WO2016117930A1 (ko) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106060561B (zh) | 2010-04-13 | 2019-06-28 | Ge视频压缩有限责任公司 | 解码器、重建数组的方法、编码器、编码方法及数据流 |
CN105959703B (zh) | 2010-04-13 | 2019-06-04 | Ge视频压缩有限责任公司 | 解码器、编码器、生成数据流的方法及解码数据流的方法 |
CN106231332B (zh) | 2010-04-13 | 2020-04-14 | Ge视频压缩有限责任公司 | 解码器、解码方法、编码器以及编码方法 |
EP3490257B1 (en) | 2010-04-13 | 2024-01-10 | GE Video Compression, LLC | Sample region merging |
KR102584349B1 (ko) * | 2016-03-28 | 2023-10-04 | 로즈데일 다이나믹스 엘엘씨 | 인터 예측 모드 기반 영상 처리 방법 및 이를 위한 장치 |
MX2018014493A (es) * | 2016-05-25 | 2019-08-12 | Arris Entpr Llc | Particionamiento binario, ternario, cuaternario para jvet. |
CN110178371A (zh) * | 2017-01-16 | 2019-08-27 | 世宗大学校产学协力团 | 影像编码/解码方法及装置 |
US10785494B2 (en) * | 2017-10-11 | 2020-09-22 | Qualcomm Incorporated | Low-complexity design for FRUC |
EP3806471A4 (en) * | 2018-06-01 | 2022-06-08 | Sharp Kabushiki Kaisha | PICTURE DECODING DEVICE AND PICTURE CODING DEVICE |
CN116781896A (zh) * | 2018-06-27 | 2023-09-19 | Lg电子株式会社 | 对视频信号进行编解码的方法和发送方法 |
CN113424536B (zh) | 2018-11-30 | 2024-01-30 | 腾讯美国有限责任公司 | 用于视频编解码的方法和装置 |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4762938B2 (ja) * | 2007-03-06 | 2011-08-31 | 三菱電機株式会社 | データ埋め込み装置、データ抽出装置、データ埋め込み方法およびデータ抽出方法 |
US8917775B2 (en) | 2007-05-02 | 2014-12-23 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding and decoding multi-view video data |
KR101431546B1 (ko) | 2007-05-02 | 2014-08-22 | 삼성전자주식회사 | 다시점 동영상의 부호화 및 복호화 방법과 그 장치 |
US20090116558A1 (en) * | 2007-10-15 | 2009-05-07 | Nokia Corporation | Motion skip and single-loop encoding for multi-view video content |
KR101520619B1 (ko) * | 2008-02-20 | 2015-05-18 | 삼성전자주식회사 | 스테레오 동기화를 위한 스테레오스코픽 영상의 시점 결정방법 및 장치 |
CN102055963A (zh) * | 2008-06-13 | 2011-05-11 | 华为技术有限公司 | 一种视频编码、解码方法及编码、解码装置 |
WO2010043773A1 (en) * | 2008-10-17 | 2010-04-22 | Nokia Corporation | Sharing of motion vector in 3d video coding |
JP5429034B2 (ja) * | 2009-06-29 | 2014-02-26 | ソニー株式会社 | 立体画像データ送信装置、立体画像データ送信方法、立体画像データ受信装置および立体画像データ受信方法 |
JP5747559B2 (ja) * | 2011-03-01 | 2015-07-15 | 富士通株式会社 | 動画像復号方法、動画像符号化方法、動画像復号装置、及び動画像復号プログラム |
CN105187840A (zh) | 2011-05-31 | 2015-12-23 | Jvc建伍株式会社 | 动图像解码装置、动图像解码方法、接收装置及接收方法 |
KR20130037161A (ko) * | 2011-10-05 | 2013-04-15 | 한국전자통신연구원 | 스케일러블 비디오 코딩을 위한 향상된 계층간 움직임 정보 예측 방법 및 그 장치 |
US10075728B2 (en) * | 2012-10-01 | 2018-09-11 | Inria Institut National De Recherche En Informatique Et En Automatique | Method and device for motion information prediction refinement |
US9357214B2 (en) | 2012-12-07 | 2016-05-31 | Qualcomm Incorporated | Advanced merge/skip mode and advanced motion vector prediction (AMVP) mode for 3D video |
US9948939B2 (en) * | 2012-12-07 | 2018-04-17 | Qualcomm Incorporated | Advanced residual prediction in scalable and multi-view video coding |
US20140218473A1 (en) * | 2013-01-07 | 2014-08-07 | Nokia Corporation | Method and apparatus for video coding and decoding |
US10194146B2 (en) * | 2013-03-26 | 2019-01-29 | Qualcomm Incorporated | Device and method for scalable coding of video information |
US9609347B2 (en) * | 2013-04-04 | 2017-03-28 | Qualcomm Incorporated | Advanced merge mode for three-dimensional (3D) video coding |
US20140301463A1 (en) * | 2013-04-05 | 2014-10-09 | Nokia Corporation | Method and apparatus for video coding and decoding |
EP2932716A4 (en) * | 2013-04-10 | 2016-07-06 | Mediatek Inc | METHOD AND DEVICE FOR SELECTION OF INTERCONNECTION CANDIDATES FOR THREE-DIMENSIONAL VIDEO-CORDING |
US9930363B2 (en) * | 2013-04-12 | 2018-03-27 | Nokia Technologies Oy | Harmonized inter-view and view synthesis prediction for 3D video coding |
EP3013049A4 (en) * | 2013-06-18 | 2017-02-22 | Sharp Kabushiki Kaisha | Illumination compensation device, lm predict device, image decoding device, image coding device |
JP6545672B2 (ja) * | 2013-10-18 | 2019-07-17 | エルジー エレクトロニクス インコーポレイティド | マルチビュービデオコーディングにおいて、ビュー合成予測方法及びこれを利用したマージ候補リスト構成方法 |
KR102227279B1 (ko) * | 2013-10-24 | 2021-03-12 | 한국전자통신연구원 | 비디오 부호화/복호화 방법 및 장치 |
JP6469588B2 (ja) * | 2013-12-19 | 2019-02-13 | シャープ株式会社 | 残差予測装置、画像復号装置、画像符号化装置、残差予測方法、画像復号方法、および画像符号化方法 |
WO2015100710A1 (en) * | 2014-01-02 | 2015-07-09 | Mediatek Singapore Pte. Ltd. | Existence of inter-view reference picture and availability of 3dvc coding tools |
US10554967B2 (en) * | 2014-03-21 | 2020-02-04 | Futurewei Technologies, Inc. | Illumination compensation (IC) refinement based on positional pairings among pixels |
TWI566576B (zh) * | 2014-06-03 | 2017-01-11 | 宏碁股份有限公司 | 立體影像合成方法及裝置 |
- 2016-01-20 KR KR1020177019874A patent/KR102149827B1/ko active IP Right Grant
- 2016-01-20 US US15/545,432 patent/US10820007B2/en active Active
- 2016-01-20 WO PCT/KR2016/000597 patent/WO2016117930A1/ko active Application Filing
- 2016-01-20 CN CN201680017423.1A patent/CN107409214B/zh active Active
- 2016-01-20 EP EP16740403.7A patent/EP3247114A4/en not_active Ceased
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20080036910A (ko) * | 2006-10-24 | 2008-04-29 | 엘지전자 주식회사 | 비디오 신호 디코딩 방법 및 장치 |
US20120177125A1 (en) * | 2011-01-12 | 2012-07-12 | Toshiyasu Sugio | Moving picture coding method and moving picture decoding method |
JP2012253460A (ja) * | 2011-05-31 | 2012-12-20 | Jvc Kenwood Corp | 動画像復号装置、動画像復号方法及び動画像復号プログラム |
KR20140122195A (ko) * | 2013-04-05 | 2014-10-17 | 삼성전자주식회사 | 인터 레이어 복호화 및 부호화 방법 및 장치를 위한 인터 예측 후보 결정 방법 |
Non-Patent Citations (2)
Title |
---|
G. TECH. ET AL.: "3D-HEVC Draft Text 3", JCT3V DOCUMENT : JCT3V-G1001-V1, 17 January 2014 (2014-01-17), pages 1 - 89, XP055402729, Retrieved from the Internet <URL:http://phenix.int-evry.fr/jc13v/doc_end_user/current_document.php?id=1882> * |
See also references of EP3247114A4 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109005412A (zh) * | 2017-06-06 | 2018-12-14 | 北京三星通信技术研究有限公司 | 运动矢量获取的方法及设备 |
CN109005412B (zh) * | 2017-06-06 | 2022-06-07 | 北京三星通信技术研究有限公司 | 运动矢量获取的方法及设备 |
Also Published As
Publication number | Publication date |
---|---|
KR20170100564A (ko) | 2017-09-04 |
US20180007379A1 (en) | 2018-01-04 |
KR102149827B1 (ko) | 2020-08-31 |
CN107409214B (zh) | 2021-02-02 |
EP3247114A1 (en) | 2017-11-22 |
US10820007B2 (en) | 2020-10-27 |
EP3247114A4 (en) | 2018-01-17 |
CN107409214A (zh) | 2017-11-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16740403; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 20177019874; Country of ref document: KR; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 15545432; Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |
| REEP | Request for entry into the european phase | Ref document number: 2016740403; Country of ref document: EP |