KR20150033576A - A method and an apparatus for encoding/decoding a multi-layer video signal - Google Patents

A method and an apparatus for encoding/decoding a multi-layer video signal Download PDF

Info

Publication number
KR20150033576A
Authority
KR
South Korea
Prior art keywords
layer
picture
block
prediction
inter
Prior art date
Application number
KR20140127397A
Other languages
Korean (ko)
Inventor
이배근
김주영
Original Assignee
주식회사 케이티
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 케이티
Publication of KR20150033576A

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/187Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/33Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Abstract

According to the present invention, a method for decoding a multi-layer video signal determines a corresponding picture of a reference layer used for inter-layer prediction of a current picture in a current layer, generates an inter-layer reference picture by up-sampling the determined corresponding picture, specifies a reference block in the inter-layer reference picture used for inter-layer prediction of a current block based on spatial offset information, and performs inter-layer prediction of the current block by using the specified reference block.

Description

METHOD AND APPARATUS FOR ENCODING/DECODING A MULTI-LAYER VIDEO SIGNAL

The present invention relates to a multi-layer video signal encoding / decoding method and apparatus.

Recently, demand for high-resolution and high-quality images, such as high definition (HD) and ultra high definition (UHD) images, is increasing in various applications. As image data becomes high-resolution and high-quality, the amount of data increases relative to existing image data. Therefore, when image data is transmitted over a medium such as a wired/wireless broadband line or stored using an existing storage medium, transmission and storage costs increase. High-efficiency image compression techniques can be used to solve these problems as image data becomes high-resolution and high-quality.

Image compression techniques include an inter-picture prediction technique for predicting a pixel value included in a current picture from a previous or subsequent picture of the current picture, an intra-picture prediction technique for predicting a pixel value included in a current picture using pixel information within the current picture, and an entropy encoding technique in which a short code is assigned to a value with a high frequency of appearance and a long code is assigned to a value with a low frequency of appearance. Image data can be effectively compressed and transmitted or stored using such image compression techniques.

Meanwhile, along with the increasing demand for high-resolution images, demand for stereoscopic image content as a new image service is also increasing. Video compression techniques for effectively providing high-resolution and ultra-high-resolution stereoscopic content are being discussed.

An object of the present invention is to provide a method and apparatus for determining a corresponding picture of a reference layer used for inter-layer prediction of a current picture in encoding/decoding a multi-layer video signal.

It is an object of the present invention to provide a method and apparatus for up-sampling a corresponding picture of a reference layer in encoding / decoding a multi-layer video signal.

An object of the present invention is to provide a method and apparatus for specifying a reference block of a reference layer used for inter-layer prediction of a current block in encoding/decoding a multi-layer video signal.

An object of the present invention is to provide a method and apparatus for constructing a reference picture list using an interlayer reference picture in encoding / decoding a multi-layer video signal.

An object of the present invention is to provide a method and apparatus for effectively deriving texture information of a current layer through inter-layer prediction in encoding / decoding a multi-layer video signal.

A method and apparatus for decoding a multi-layer video signal according to the present invention determine a corresponding picture of a reference layer used for inter-layer prediction of a current picture in a current layer, up-sample the determined corresponding picture to generate an inter-layer reference picture, specify a reference block in the inter-layer reference picture used for inter-layer prediction of a current block based on spatial offset information, and perform inter-layer prediction of the current block using the specified reference block.
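
The four steps above can be pictured with a minimal, self-contained Python sketch. The dictionary-based picture representation, the nearest-neighbour up-sampling, and all function names are illustrative assumptions; the patent itself describes filtered up-sampling and signalled offset syntax.

```python
def find_corresponding_picture(ref_layer_pics, poc):
    # Step 1: the corresponding picture shares the current picture's POC.
    return next(p for p in ref_layer_pics if p["poc"] == poc)

def upsample(pic, out_w, out_h):
    # Step 2: up-sample to the current layer's resolution
    # (nearest-neighbour here; the text describes filtered up-sampling).
    in_h, in_w = len(pic["samples"]), len(pic["samples"][0])
    return [[pic["samples"][y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)] for y in range(out_h)]

def reference_block(ilrp, x0, y0, size, off_x, off_y):
    # Steps 3 and 4: shift the collocated position (x0, y0) by the spatial
    # offset and copy the block as the inter-layer prediction.
    return [row[x0 + off_x : x0 + off_x + size]
            for row in ilrp[y0 + off_y : y0 + off_y + size]]

# Usage: an 8x8 current block predicted from a 2x up-sampled 4x4 picture.
ref_layer = [{"poc": 0, "samples": [[4 * i + j for j in range(4)] for i in range(4)]}]
ilrp = upsample(find_corresponding_picture(ref_layer, 0), 8, 8)
prediction = reference_block(ilrp, 0, 0, 8, 0, 0)
```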

In the multi-layer video signal decoding method and apparatus according to the present invention, the specification of the reference block is performed based on an inter-layer prediction restriction flag.

In the multi-layer video signal decoding method and apparatus according to the present invention, the inter-layer prediction restriction flag indicates whether a constraint that the reference block specified based on the spatial offset information is used in inter-layer prediction of the current block is applied.

In the method and apparatus for decoding a multi-layer video signal according to the present invention, the reference block is specified as a block at a position shifted by an offset according to the spatial offset information from a collocated block belonging to the inter-layer reference picture.

In the method and apparatus for decoding a multi-layer video signal according to the present invention, the spatial offset information includes at least one of a segment offset and a block offset.

A method and apparatus for encoding a multi-layer video signal according to the present invention determine a corresponding picture of a reference layer used for inter-layer prediction of a current picture in a current layer, up-sample the determined corresponding picture to generate an inter-layer reference picture, specify a reference block in the inter-layer reference picture used for inter-layer prediction of a current block based on spatial offset information, and perform inter-layer prediction of the current block using the specified reference block.

In the method and apparatus for encoding a multi-layer video signal according to the present invention, the specification of the reference block is performed based on an inter-layer prediction restriction flag.

In the multi-layer video signal encoding method and apparatus according to the present invention, the inter-layer prediction restriction flag indicates whether a constraint that the reference block specified based on the spatial offset information is used in inter-layer prediction of the current block is applied.

In the method and apparatus for encoding a multi-layer video signal according to the present invention, the reference block is specified as a block at a position shifted by an offset according to the spatial offset information from a collocated block belonging to the inter-layer reference picture.

In the method and apparatus for encoding a multi-layer video signal according to the present invention, the spatial offset information includes at least one of a segment offset and a block offset.

According to the present invention, the corresponding picture of the reference layer used for inter-layer prediction of the current picture in the current layer can be effectively determined.

According to the present invention, a picture of a reference layer can be effectively upsampled.

According to the present invention, it is possible to effectively construct a reference picture list including an interlayer reference picture.

According to the present invention, it is possible to effectively determine a reference block of a reference layer used for inter-layer prediction of a current block.

According to the present invention, texture information of a current layer can be effectively derived through inter-layer prediction.

FIG. 1 is a block diagram schematically illustrating an encoding apparatus according to an embodiment of the present invention.
FIG. 2 is a block diagram schematically illustrating a decoding apparatus according to an embodiment of the present invention.
FIG. 3 is a flowchart illustrating a process of performing inter prediction on a current layer using a corresponding picture of a reference layer, according to an embodiment to which the present invention is applied.
FIG. 4 shows a method of determining a corresponding picture of a reference layer based on a reference active flag according to an embodiment of the present invention.
FIG. 5 shows a syntax table of a reference active flag according to an embodiment to which the present invention is applied.
FIG. 6 illustrates a method of obtaining inter-layer reference information for a current picture according to an embodiment of the present invention.
FIG. 7 illustrates a syntax table of inter-layer reference information according to an embodiment of the present invention.
FIG. 8 is a flowchart illustrating a method of up-sampling a corresponding picture of a reference layer to which the present invention is applied.
FIG. 9 illustrates a method of specifying a short-term reference picture stored in a decoded picture buffer according to an embodiment of the present invention.
FIG. 10 illustrates a method of specifying a long-term reference picture according to an embodiment of the present invention.
FIG. 11 illustrates a method of constructing a reference picture list using a short-term reference picture and a long-term reference picture according to an embodiment of the present invention.
FIG. 12 illustrates a process of performing inter-layer prediction of a current block based on spatial offset information according to an embodiment of the present invention.
FIGS. 13 to 15 show syntax tables of spatial offset information according to an embodiment to which the present invention is applied.
FIG. 16 illustrates a method of specifying a reference block used for inter-layer prediction of a current block using spatial offset information according to an embodiment of the present invention.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. Terms and words used in the present specification and claims should not be construed as limited to their ordinary or dictionary meanings; rather, based on the principle that an inventor may appropriately define the concepts of terms in order to best describe the invention, they should be interpreted as having meanings and concepts consistent with the technical idea of the present invention. Therefore, the embodiments described in this specification and the configurations shown in the drawings are merely the most preferred embodiments of the present invention and do not represent all of the technical ideas of the present invention, and it should be understood that various equivalents and modifications that can replace them may exist.

When an element is referred to herein as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In addition, the description of "including" a specific component in this specification does not exclude components other than that component, and means that additional components may be included within the scope of the embodiments or the technical idea of the present invention.

The terms first, second, etc. may be used to describe various components, but the components are not limited by these terms. The terms are used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, a second component may be referred to as a first component.

In addition, the components shown in the embodiments of the present invention are shown independently to represent different characteristic functions, which does not mean that each component is composed of a separate hardware or software unit. That is, the components are listed separately for convenience of explanation; at least two components may be combined into one component, or one component may be divided into a plurality of components, each of which performs a function. Integrated and separated embodiments of each component are also included in the scope of the present invention without departing from the essence of the present invention.

In addition, some components may not be essential components that perform essential functions of the present invention, but may be optional components that merely improve performance. The present invention may be implemented with only the components essential to realizing the essence of the present invention, excluding the components used merely for performance improvement, and a structure including only the essential components, excluding the optional components used for performance improvement, is also included in the scope of the present invention.

Encoding and decoding of video that supports a plurality of layers (multi-layers) in a bitstream is referred to as scalable video coding. Since there is a strong correlation between the layers, performing prediction using this correlation can remove redundant elements of data and improve the coding performance of an image. Hereinafter, predicting the current layer using information of another layer is referred to as inter-layer prediction.

The plurality of layers may have different resolutions, where the resolution may refer to at least one of spatial resolution, temporal resolution, and image quality. Resampling such as up-sampling or down-sampling of a layer may be performed to adjust the resolution in the inter-layer prediction.

1 is a block diagram schematically illustrating an encoding apparatus according to an embodiment of the present invention.

The encoding apparatus 100 according to the present invention includes an encoding unit 100a for an upper layer and an encoding unit 100b for a lower layer.

The upper layer may be referred to as a current layer or an enhancement layer, and the lower layer may be referred to as a base layer or a reference layer having a resolution lower than that of the upper layer. The upper layer and the lower layer may differ in spatial resolution, in temporal resolution according to the frame rate, and in image quality according to the color format or the quantization step size. Up-sampling or down-sampling of a layer may be performed when a resolution change is required for inter-layer prediction.

The encoding unit 100a of the upper layer includes a partitioning unit 110, a prediction unit 120, a transform unit 130, a quantization unit 140, a reordering unit 150, an entropy encoding unit 160, an inverse quantization unit 170, an inverse transform unit 180, a filter unit 190, and a memory 195.

The lower layer encoding unit 100b includes a partitioning unit 111, a predicting unit 125, a transforming unit 131, a quantizing unit 141, a reordering unit 151, an entropy coding unit 161, an inverse quantization unit 171, an inverse transform unit 181, a filter unit 191, and a memory 196.

The encoding units may be implemented by the image encoding methods described in the embodiments of the present invention, but operations of some components may be omitted to lower the complexity of the encoding apparatus or to enable fast real-time encoding. For example, when the prediction unit performs intra-picture prediction, rather than selecting an optimal intra-picture coding method by testing all intra-picture prediction modes, a method of selecting one of a limited number of intra-picture prediction modes as the final intra-picture prediction mode may be used for real-time encoding. As another example, the shapes of the prediction blocks used for intra-picture prediction or inter-picture prediction may be restricted.

The unit of a block processed by the encoding apparatus may be a coding unit for performing coding, a prediction unit for performing prediction, or a transform unit for performing transformation. These units may be expressed as CU (Coding Unit), PU (Prediction Unit), and TU (Transform Unit), respectively.

The partitioning units 110 and 111 divide a layer image into a plurality of combinations of coding blocks, prediction blocks, and transform blocks, and select one combination of coding, prediction, and transform blocks according to a predetermined criterion (for example, a cost function) to divide the layer. For example, a recursive tree structure such as a quad-tree structure may be used to divide a layer image into coding units. Hereinafter, in the embodiments of the present invention, a coding block may mean not only a block to be encoded but also a block to be decoded.

The prediction block may be a unit for performing prediction, such as intra-picture prediction or inter-picture prediction. The block on which intra-picture prediction is performed may be a square block such as 2Nx2N or NxN. Blocks on which inter-picture prediction is performed include square blocks such as 2Nx2N and NxN, rectangular blocks such as 2NxN and Nx2N, and blocks obtained by AMP (Asymmetric Motion Partitioning), a prediction block partitioning method of asymmetric shape. The transform method applied by the transform units 130 and 131 may vary depending on the type of the prediction block.

The prediction units 120 and 125 of the encoding units 100a and 100b include intra-picture prediction units 121 and 126 for performing intra-picture prediction and inter-picture prediction units 122 and 127 for performing inter-picture prediction. The prediction unit 120 of the upper layer encoding unit 100a may further include an inter-layer prediction unit 123 that performs prediction on the upper layer using information of the lower layer.

The prediction units 120 and 125 can determine whether to use inter-picture prediction or intra-picture prediction for a prediction block. When intra-picture prediction is performed, the intra prediction mode may be determined in units of prediction blocks, and the intra-picture prediction based on the determined intra prediction mode may be performed in units of transform blocks. The residual value (residual block) between the generated prediction block and the original block can be input to the transform units 130 and 131. In addition, the prediction mode information, motion information, and the like used for prediction can be encoded by the entropy encoding units 160 and 161 and transmitted to the decoding apparatus together with the residual value.

When the PCM (Pulse Code Modulation) coding mode is used, the original block may be encoded as it is, without performing prediction through the prediction units 120 and 125, and transmitted to the decoding unit.

The intra-picture prediction units 121 and 126 can generate a prediction block based on reference pixels existing around the current block (the block to be predicted). In the intra-picture prediction method, the intra prediction modes may include directional prediction modes, which use reference pixels according to a prediction direction, and non-directional modes, which do not consider a prediction direction. The mode for predicting luma information and the mode for predicting chroma information may be of different types, and an intra prediction mode used to predict luma information, or the predicted luma information itself, can be utilized to predict chroma information. If a reference pixel is not available, the unavailable reference pixel may be replaced with another pixel, and a prediction block may be generated using the replaced pixel.

A prediction block may include a plurality of transform blocks. When intra-picture prediction is performed and the size of the prediction block is the same as the size of the transform block, intra-picture prediction for the prediction block may be performed based on the pixels on the left, the upper-left, and the top of the prediction block. However, when the size of the prediction block differs from the size of the transform block, so that the prediction block includes a plurality of transform blocks, intra-picture prediction may be performed using neighboring pixels adjacent to each transform block as reference pixels. Here, the neighboring pixels adjacent to the transform block may include at least one of the neighboring pixels adjacent to the prediction block and pixels already decoded within the prediction block.

The intra-picture prediction method may generate a prediction block after applying an MDIS (Mode Dependent Intra Smoothing) filter to the reference pixels according to the intra-picture prediction mode. The type of MDIS filter applied to the reference pixels may vary. The MDIS filter, an additional filter applied to an intra-predicted block, can be used to reduce the residual between the reference pixels and the intra-predicted block generated after prediction. In performing MDIS filtering, the filtering of the reference pixels and of some columns included in the intra-predicted block may be performed according to the direction of the intra prediction mode.

The inter-picture prediction units 122 and 127 can perform prediction by referring to information of a block included in at least one of a previous picture or a subsequent picture of the current picture. The inter-picture prediction units 122 and 127 may include a reference picture interpolation unit, a motion prediction unit, and a motion compensation unit.

The reference picture interpolation unit receives reference picture information from the memories 195 and 196 and can generate pixel information of less than an integer pixel from the reference picture. For luma pixels, a DCT-based 8-tap interpolation filter with varying filter coefficients may be used to generate pixel information of less than an integer pixel in units of 1/4 pixel. For a chroma signal, a DCT-based 4-tap interpolation filter with varying filter coefficients may be used to generate pixel information of less than an integer pixel in units of 1/8 pixel.

The inter-picture prediction units 122 and 127 can perform motion prediction based on the reference picture interpolated by the reference picture interpolation unit. Various methods, such as FBMA (Full search-based Block Matching Algorithm), TSS (Three Step Search), and NTS (New Three-Step Search Algorithm), can be used to calculate a motion vector. The motion vector may have a motion vector value in units of 1/2 or 1/4 pixel based on the interpolated pixels. The inter-picture prediction units 122 and 127 can perform prediction on the current block by applying one of various inter-picture prediction methods.

As the inter-picture prediction method, various methods such as a skip method, a merge method, and a method using a motion vector predictor (MVP) can be used.

In inter-picture prediction, motion information, such as a reference index and a motion vector, and a residual signal are entropy-encoded and transmitted to the decoding unit. When the skip mode is applied, a residual signal is not generated, so the transform and quantization processes for the residual signal may be omitted.

The inter-layer predicting unit 123 performs inter-layer prediction for predicting an upper layer using information of the lower layer. The inter-layer predicting unit 123 may perform inter-layer prediction using texture information and motion information of a lower layer.

Inter-layer prediction can predict a current block of an upper layer using a picture of a lower layer (reference layer) as a reference picture, together with motion information on that picture. The picture of the reference layer used as a reference picture in inter-layer prediction may be a picture sampled according to the resolution of the current layer. In addition, the motion information may include a motion vector and a reference index. At this time, the value of the motion vector for the picture of the reference layer can be set to (0, 0).
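
As a small illustration of the point that the motion vector for an inter-layer reference picture is set to (0, 0), the hedged sketch below shows motion compensation collapsing to a copy of the collocated block. The list-of-lists picture layout is an assumption made only for illustration.

```python
def motion_compensate(ref_picture, x0, y0, w, h, mv=(0, 0)):
    # mv defaults to (0, 0) for an inter-layer reference picture, so the
    # prediction is simply the collocated block of the reference picture.
    mvx, mvy = mv
    return [row[x0 + mvx : x0 + mvx + w]
            for row in ref_picture[y0 + mvy : y0 + mvy + h]]

ilrp = [[x + 10 * y for x in range(8)] for y in range(8)]
pred = motion_compensate(ilrp, 2, 2, 4, 4)  # copies the collocated 4x4 block
```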

As an example of inter-layer prediction, a prediction method of using a picture of a lower layer as a reference picture has been described, but the present invention is not limited to this. The inter-layer predicting unit 123 may perform inter-layer texture prediction, inter-layer motion prediction, inter-layer syntax prediction, inter-layer difference prediction, and the like.

Inter-layer texture prediction can derive the texture of the current layer based on the texture of the reference layer. The texture of the reference layer can be sampled according to the resolution of the current layer, and the inter-layer predicting unit 123 can predict the texture of the current layer based on the texture of the sampled reference layer.

The inter-layer motion prediction can derive the motion vector of the current layer based on the motion vector of the reference layer. At this time, the motion vector of the reference layer can be scaled according to the resolution of the current layer. In the inter-layer syntax prediction, the syntax of the current layer can be predicted based on the syntax of the reference layer. For example, the inter-layer predicting unit 123 may use the syntax of the reference layer as the syntax of the current layer. In the inter-layer difference prediction, the picture of the current layer can be restored by using the difference between the restored image of the reference layer and the restored image of the current layer.
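
The motion vector scaling mentioned above can be sketched as multiplying by the resolution ratio between the layers. The integer sample units and floor-division rounding below are assumptions; the exact rounding rule is not specified in this description.

```python
def scale_motion_vector(mv_ref, ref_w, ref_h, cur_w, cur_h):
    # Scale a reference-layer motion vector to the current layer's resolution.
    mvx, mvy = mv_ref
    return (mvx * cur_w // ref_w, mvy * cur_h // ref_h)

# A (4, -2) vector in a 960x540 reference layer becomes (8, -4) in a
# 1920x1080 current layer (2x spatial scalability).
assert scale_motion_vector((4, -2), 960, 540, 1920, 1080) == (8, -4)
```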

A residual block including residual information, which is the difference between the prediction block generated by the prediction units 120 and 125 and the original block, is generated, and the residual block is input to the transform units 130 and 131.

The transform units 130 and 131 can transform the residual block using a transform method such as DCT (Discrete Cosine Transform) or DST (Discrete Sine Transform). Whether to apply DCT or DST to transform the residual block can be determined based on the intra prediction mode information of the prediction block used to generate the residual block and the size information of the prediction block. That is, the transform units 130 and 131 can apply different transform methods according to the size of the prediction block and the prediction method.
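
As a sketch of this selection rule, the snippet below chooses between DCT and DST from the prediction method and block size. The concrete condition (DST only for 4x4 intra-predicted luma blocks) follows common HEVC practice and is an assumption, not a rule quoted from this description.

```python
def select_transform(pred_is_intra, block_size, is_luma=True):
    # Assumed HEVC-style rule: DST for 4x4 intra luma blocks, DCT otherwise.
    if pred_is_intra and is_luma and block_size == 4:
        return "DST"
    return "DCT"

print(select_transform(True, 4))    # -> DST
print(select_transform(False, 16))  # -> DCT
```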

The quantization units 140 and 141 may quantize the values transformed into the frequency domain by the transform units 130 and 131. The quantization coefficient may vary depending on the block or the importance of the image. The values calculated by the quantization units 140 and 141 may be provided to the inverse quantization units 170 and 171 and the reordering units 150 and 151.

The reordering units 150 and 151 can reorder the coefficient values with respect to the quantized residual values. The reordering units 150 and 151 may change the two-dimensional block type coefficient to a one-dimensional vector form through a coefficient scanning method. For example, the rearrangement units 150 and 151 may scan a DC coefficient to a coefficient of a high frequency region using a Zig-Zag scan method, and change the DC coefficient to a one-dimensional vector form. A vertical scanning method of scanning a two-dimensional block type coefficient in a column direction instead of a jig-jag scanning method according to a size of a conversion block and an intra-picture prediction mode, and a horizontal scanning method of scanning a two- Can be used. That is, it is possible to determine whether any scan method among the jig-jag scan, the vertical scan and the horizontal scan is used according to the size of the conversion block and the intra prediction mode.
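
A hedged sketch of the scan selection and the zig-zag reordering follows. The intra-mode ranges used to pick the vertical or horizontal scan are assumed, HEVC-style values chosen only for illustration.

```python
def select_scan(block_size, intra_mode):
    # Assumed HEVC-style mode ranges for the scan decision (illustrative).
    if block_size <= 8:                # small transform blocks only
        if 6 <= intra_mode <= 14:      # near-horizontal modes -> vertical scan
            return "vertical"
        if 22 <= intra_mode <= 30:     # near-vertical modes -> horizontal scan
            return "horizontal"
    return "zigzag"

def zigzag_scan(block):
    # Flatten a square coefficient block from DC toward high frequencies.
    n = len(block)
    order = sorted(((y, x) for y in range(n) for x in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [block[y][x] for y, x in order]

print(select_scan(4, 26))             # -> horizontal
print(zigzag_scan([[1, 2], [3, 4]]))  # -> [1, 2, 3, 4]
```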

The entropy encoding units 160 and 161 can perform entropy encoding based on the values calculated by the reordering units 150 and 151. For entropy encoding, various encoding methods such as Exponential Golomb, CAVLC (Context-Adaptive Variable Length Coding), and CABAC (Context-Adaptive Binary Arithmetic Coding) may be used.

The entropy encoding units 160 and 161 may receive various information, such as residual coefficient information and block type information of the coding block, prediction mode information, partition unit information, prediction block information and transmission unit information, motion information, reference frame information, block interpolation information, and filtering information, from the reordering units 150 and 151 and the prediction units 120 and 125, and may perform entropy encoding based on a predetermined encoding method. In addition, the entropy encoding units 160 and 161 can entropy-encode the coefficient values of the coding unit input from the reordering units 150 and 151.

The entropy encoding units 160 and 161 may encode the intra-picture prediction mode information of the current block by binarizing the intra-picture prediction mode information. The entropy encoding units 160 and 161 may include a codeword mapping unit for performing such a binarization operation, and the binarization may be performed differently depending on the size of the prediction block on which intra-picture prediction is performed. In the codeword mapping unit, a codeword mapping table may be adaptively generated through the binarization operation or stored in advance. In another embodiment, the entropy encoding units 160 and 161 may represent the current intra prediction mode information using a code-num mapping unit that performs code-num mapping and a codeword mapping unit that performs codeword mapping. In the code-num mapping unit and the codeword mapping unit, a code-num mapping table and a codeword mapping table may be generated or stored, respectively.

The inverse quantization units 170 and 171 and the inverse transform units 180 and 181 inverse-quantize the values quantized by the quantization units 140 and 141 and inverse-transform the values transformed by the transform units 130 and 131. The residual values generated by the inverse quantization units 170 and 171 and the inverse transform units 180 and 181 can be combined with the prediction block predicted through the motion estimation unit, the motion compensation unit, and the intra-picture prediction unit included in the prediction units 120 and 125 to generate a reconstructed block.

The filter units 190 and 191 may include at least one of a deblocking filter and an offset correcting unit.

The deblocking filter can remove block distortion caused by boundaries between blocks in the reconstructed picture. Whether to apply the deblocking filter to the current block may be determined based on the pixels included in several columns or rows of the block. When the deblocking filter is applied to a block, a strong filter or a weak filter may be applied according to the required deblocking filtering strength. In applying the deblocking filter, when vertical filtering and horizontal filtering are performed, the horizontal filtering and the vertical filtering may be processed in parallel.

The offset correction unit may correct the offset between the deblocked image and the original image in units of pixels. In order to perform offset correction for a specific picture, a method of dividing the pixels included in the image into a predetermined number of regions, determining the region to which the offset is to be applied, and applying the offset to that region, or a method of applying the offset in consideration of the edge information of each pixel, may be used.

The filter units 190 and 191 may apply only the deblocking filter, without applying both the deblocking filter and the offset correction, or may apply both the deblocking filter and the offset correction.

The memories 195 and 196 may store the reconstructed block or picture calculated through the filter units 190 and 191, and the stored reconstructed block or picture may be provided to the prediction units 120 and 125.

The information output from the entropy encoding unit of the lower layer encoding unit 100b and the information output from the entropy encoding unit of the upper layer encoding unit 100a can be multiplexed by the MUX 197 and output as a bitstream.

The MUX 197 may be included in the encoding unit 100a of the upper layer or the encoding unit 100b of the lower layer, or may be implemented as an independent device or module separate from the encoding apparatus 100.

2 is a block diagram schematically illustrating a decoding apparatus according to an embodiment of the present invention.

As shown in FIG. 2, the decoding apparatus 200 includes a decoding unit 200a of an upper layer and a decoding unit 200b of a lower layer.

The decoding unit 200a of the upper layer includes an entropy decoding unit 210, a reordering unit 220, an inverse quantization unit 230, an inverse transform unit 240, a prediction unit 250, a filter unit 260, and a memory 270.

The decoding unit 200b of the lower layer includes an entropy decoding unit 211, a reordering unit 221, an inverse quantization unit 231, an inverse transform unit 241, a prediction unit 251, a filter unit 261, and a memory 271.

When a bitstream including a plurality of layers is transmitted from the encoding apparatus, the DEMUX 280 demultiplexes information for each layer and transmits the demultiplexed information to the decoding units 200a and 200b for the respective layers. The input bitstream can be decoded in a procedure opposite to that of the encoding apparatus.

The entropy decoding units 210 and 211 may perform entropy decoding in a procedure opposite to that in which entropy encoding was performed by the entropy encoding unit of the encoding apparatus. Among the information decoded by the entropy decoding units 210 and 211, the information for generating a prediction block is provided to the prediction units 250 and 251, and the residual values obtained by entropy decoding in the entropy decoding units 210 and 211 may be input to the reordering units 220 and 221.

As with the entropy encoding units 160 and 161, the entropy decoding units 210 and 211 may use at least one of CABAC and CAVLC.

The entropy decoding units 210 and 211 can decode information related to the intra-picture prediction and inter-picture prediction performed by the encoding apparatus. The entropy decoding units 210 and 211 may include a codeword mapping unit having a codeword mapping table for generating an intra-picture prediction mode number from a received codeword. The codeword mapping table may be stored in advance or generated adaptively. When a code-num mapping table is used, a code-num mapping unit for performing code-num mapping may additionally be provided.

The reordering units 220 and 221 can reorder the bitstream entropy-decoded by the entropy decoding units 210 and 211 based on the reordering method used by the encoding unit. The coefficients expressed in one-dimensional vector form can be rearranged by reconstructing them into coefficients in two-dimensional block form. The reordering units 220 and 221 can receive information related to the coefficient scanning performed by the encoding unit and perform reordering through a reverse scanning method based on the scanning order performed by the encoding unit.

The inverse quantization units 230 and 231 may perform inverse quantization based on the quantization parameters provided by the encoding apparatus and the coefficient values of the re-arranged blocks.

The inverse transform units 240 and 241 may perform inverse DCT or inverse DST, corresponding to the DCT or DST performed by the transform units 130 and 131, on the quantization result produced by the encoding apparatus. The inverse transform may be performed based on the transmission unit determined by the encoding apparatus. In the transform unit of the encoding apparatus, DCT and DST may be selectively performed according to a plurality of pieces of information, such as the prediction method and the size and prediction direction of the current block, and the inverse transform units 240 and 241 of the decoding apparatus may perform the inverse transform based on the transform information used by the transform unit of the encoding apparatus. The transform may be performed based on the coding block rather than the transform block.

The prediction units 250 and 251 can generate a prediction block based on the prediction block generation related information provided by the entropy decoding units 210 and 211 and the previously decoded block or picture information provided by the memories 270 and 271.

The prediction units 250 and 251 may include a prediction unit determination unit, an inter-picture prediction unit, and an intra-picture prediction unit.

The prediction unit determination unit receives various information, such as prediction unit information input from the entropy decoding unit, prediction mode information of the intra-picture prediction method, and motion prediction related information of the inter-picture prediction method, separates the prediction block from the current coding block, and determines whether the prediction block performs inter-picture prediction or intra-picture prediction.

The inter-picture prediction unit may perform inter-picture prediction on the current prediction block based on information included in at least one of a previous picture or a subsequent picture of the current picture, using the information necessary for inter-picture prediction of the current prediction block provided by the encoding apparatus. In order to perform inter-picture prediction, it may be determined, on the basis of the coding block, whether the motion prediction method of the prediction block included in the coding block is the skip mode, the merge mode, or the mode using a motion vector predictor (MVP).

The intra-picture prediction unit can generate a prediction block based on the reconstructed pixel information in the current picture. If the prediction block is a block on which intra-picture prediction has been performed, intra-picture prediction can be performed based on the intra-picture prediction mode information of the prediction block provided by the encoding apparatus. The intra-picture prediction unit may include an MDIS filter that performs filtering on the reference pixels of the current block, a reference pixel interpolation unit that interpolates the reference pixels to generate reference pixels in units of less than an integer pixel, and a DC filter that generates a prediction block through filtering when the prediction mode of the current block is the DC mode.

The predicting unit 250 of the upper layer decoding unit 200a may further include an inter-layer predicting unit for performing inter-layer prediction for predicting an upper layer using information of a lower layer.

The inter-layer prediction unit may perform inter-layer prediction using intra-picture prediction mode information, motion information, and the like.

Inter-layer prediction can predict a current block of an upper layer using motion information on a lower layer (reference layer) picture using a picture of a lower layer as a reference picture.

A picture of a reference layer used as a reference picture in inter-layer prediction may be a picture sampled according to the resolution of the current layer. In addition, the motion information may include a motion vector and a reference index. At this time, the value of the motion vector for the picture of the reference layer can be set to (0, 0).

As an example of inter-layer prediction, a prediction method of using a picture of a lower layer as a reference picture has been described, but the present invention is not limited to this. The inter-layer prediction unit may also perform inter-layer texture prediction, inter-layer motion prediction, inter-layer syntax prediction, and inter-layer difference prediction.

Inter-layer texture prediction can derive the texture of the current layer based on the texture of the reference layer. The texture of the reference layer can be sampled according to the resolution of the current layer, and the inter-layer prediction unit can predict the texture of the current layer based on the sampled texture. Inter-layer motion prediction can derive the motion vector of the current layer based on the motion vector of the reference layer; at this time, the motion vector of the reference layer can be scaled according to the resolution of the current layer. In inter-layer syntax prediction, the syntax of the current layer can be predicted based on the syntax of the reference layer; for example, the inter-layer prediction unit may use the syntax of the reference layer as the syntax of the current layer. In inter-layer difference prediction, the picture of the current layer can be restored using the difference between the reconstructed image of the reference layer and the reconstructed image of the current layer.

The reconstructed block or picture may be provided to the filter units 260 and 261. The filter units 260 and 261 may include a deblocking filter and an offset correction unit.

Information on whether a deblocking filter has been applied to the corresponding block or picture, and, if so, whether a strong filter or a weak filter was applied, can be provided by the encoding apparatus. The deblocking filter of the decoding apparatus receives the deblocking filter related information provided by the encoding apparatus, and the decoding apparatus can perform deblocking filtering on the corresponding block.

The offset correction unit may perform offset correction on the reconstructed image based on the type of offset correction applied to the image and the offset value information during encoding.

The memories 270 and 271 can store the reconstructed picture or block to be used as a reference picture or a reference block, and can also output the reconstructed picture.

The encoding apparatus and the decoding apparatus can perform encoding and decoding on three or more layers rather than two layers. In this case, a plurality of encoding units for the upper layers and a plurality of decoding units for the upper layers may be provided, corresponding to the number of upper layers.

In SVC (Scalable Video Coding), which supports a multi-layer structure, there is a correlation between layers. By performing prediction using this correlation, redundant elements of data can be removed and image coding performance can be enhanced.

Therefore, when predicting a picture (image) of the current layer (enhancement layer) to be encoded/decoded, not only inter prediction or intra prediction using information of the current layer but also inter-layer prediction using information of another layer can be performed.

In performing inter-layer prediction, a prediction sample of the current layer may be generated using a decoded picture of the reference layer used for inter-layer prediction as a reference picture.

At this time, since at least one of the spatial resolution, the temporal resolution, and the image quality may differ between the current layer and the reference layer (i.e., due to an inter-layer scalability difference), the decoded picture of the reference layer may be resampled and then used as a reference picture for inter-layer prediction of the current layer. Resampling means up-sampling or down-sampling the samples of the reference layer picture to match the picture size of the current layer.

In this specification, a current layer refers to a layer on which encoding or decoding is currently performed, and may be an enhancement layer or an upper layer. A reference layer is a layer that the current layer refers to for interlayer prediction, and can be a base layer or a lower layer. A picture of a reference layer (i.e., a reference picture) used for inter-layer prediction of the current layer may be referred to as an inter-layer reference picture or a reference picture between layers.

FIG. 3 is a flowchart illustrating a process of performing inter prediction on a current layer using a corresponding picture of a reference layer, according to an embodiment to which the present invention is applied.

Referring to FIG. 3, a corresponding picture of a reference layer used for inter-layer prediction of a current picture of a current layer can be determined (S300).

The reference layer may refer to a base layer or other enhancement layer having a lower resolution than the current layer. The corresponding picture may mean a picture located in the same time zone as the current picture of the current layer.

For example, the corresponding picture may be a picture having picture order count (POC) information that is the same as the current picture of the current layer. The corresponding picture may belong to the same access unit (AU) as the current picture of the current layer. The corresponding picture may have the same temporal level identifier (TemporalID) as the current picture of the current layer. Here, the time level identifier may mean an identifier for specifying each of a plurality of scalably coded layers according to a temporal resolution.
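
A minimal sketch of this determination, assuming decoded pictures are represented as dictionaries with poc and temporal_id fields (an illustrative layout, not a structure defined by the patent):

```python
def corresponding_picture(ref_layer_dpb, current):
    # The corresponding picture shares the current picture's POC and
    # temporal level identifier.
    for pic in ref_layer_dpb:
        if (pic["poc"] == current["poc"]
                and pic["temporal_id"] == current["temporal_id"]):
            return pic
    return None

dpb = [{"poc": 8, "temporal_id": 0}, {"poc": 16, "temporal_id": 1}]
print(corresponding_picture(dpb, {"poc": 16, "temporal_id": 1}))  # second entry
```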

The current block may use corresponding pictures of one or more reference layers for inter-layer prediction, and a method of specifying the corresponding picture will be described with reference to FIGS. 4 to 7.

The corresponding picture determined in step S300 may be up-sampled to generate an inter-layer reference picture (S310).

Here, the inter-layer reference picture can be used as a reference picture for inter-layer prediction of the current picture.

Specifically, the inter-layer reference picture may include at least one of a first inter-layer reference picture and a second inter-layer reference picture. The first inter-layer reference picture may mean a reference picture on which filtering for integer positions has been performed, and the second inter-layer reference picture may mean a reference picture on which filtering for integer positions has not been performed.

Here, the integer position may mean a pixel at an integer unit of the corresponding picture to be up-sampled. Alternatively, when interpolation is performed in units of less than an integer pixel, that is, in units of 1/n pixel, in the up-sampling process, n phases are generated, and the integer position may mean a position with phase 0. Filtering for an integer position can be performed using neighboring integer positions, and the neighboring integer positions may be located in the same row or the same column as the integer position currently being filtered. The neighboring integer positions may include a plurality of integer positions belonging to the same row or the same column. Here, the plurality of integer positions may be arranged sequentially in the same column or in the same row. A specific up-sampling method will be described with reference to FIG. 8.
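
The notion of integer positions and phases can be illustrated with a small sketch. The 1/16-pel accuracy and the position formula below follow common scalable-coding practice and are assumptions, not values taken from this description.

```python
def ref_position_and_phase(x_out, ref_width, cur_width, n=16):
    # Map an output (up-sampled) sample to the reference picture in
    # 1/n-pel units, then split into integer position and phase.
    pos = x_out * ref_width * n // cur_width
    return pos // n, pos % n

# 1.5x spatial scalability: output sample 1 falls between integer samples.
print(ref_position_and_phase(1, 640, 960))  # -> (0, 10)
```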

However, if the current layer and the reference layer have the same resolution, the above-described upsampling process may be omitted. That is, the corresponding picture of the reference layer can be used as an interlayer reference picture.

A reference picture list including the inter-layer reference picture generated in step S310 and a temporal reference picture may be generated (S320).

First, the reference picture list may include a reference picture belonging to the same layer as the current picture (hereinafter referred to as a temporal reference picture). The temporal reference picture may mean a picture having an output order (e.g., picture order count, POC) different from that of the current picture. A method of generating a reference picture list composed of temporal reference pictures will be described with reference to FIGS. 9 to 11.

On the other hand, when the current picture performs inter-layer prediction, the reference picture list may further include an inter-layer reference picture. That is, in a multi-layer structure (for example, scalable video coding and multi-view video coding), not only reference pictures of the same layer but also reference pictures of other layers can be used as reference pictures of the enhancement layer.

Specifically, a picture belonging to the reference layer can be used as a reference picture. Here, the reference layer can be identified by a reference layer identifier (RefPiclayerId). The reference layer identifier can be derived based on the slice header syntax inter_layer_pred_layer_idc (hereinafter referred to as an inter-layer indicator). The inter-layer indicator may indicate the layer of the picture used by the current picture for inter-layer prediction. In this manner, a reference picture list including an inter-layer reference picture of the reference layer specified by the reference layer identifier can be generated.
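
A hedged sketch of this derivation: inter_layer_pred_layer_idc is treated as an index into the current layer's list of direct reference layers, a structure assumed here for illustration (the text only states that the identifier is derived from this syntax element).

```python
def ref_pic_layer_ids(direct_ref_layer_ids, inter_layer_pred_layer_idc):
    # Each signalled idc selects one of the current layer's direct
    # reference layers as a reference layer identifier.
    return [direct_ref_layer_ids[idc] for idc in inter_layer_pred_layer_idc]

# The current layer depends directly on layers 0 and 2; idc 1 selects layer 2.
print(ref_pic_layer_ids([0, 2], [1]))  # -> [2]
```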

Meanwhile, as described in step S310, the inter-layer reference picture may include at least one of a first inter-layer reference picture and a second inter-layer reference picture. Therefore, a reference picture list including either one of the first inter-layer reference picture and the second inter-layer reference picture may be generated, or a reference picture list including both the first inter-layer reference picture and the second inter-layer reference picture may be generated.

In order to selectively use the first inter-layer reference picture and the second inter-layer reference picture, whether to use both of them or only one of them may be selected in units of pictures. Furthermore, when only one of the first inter-layer reference picture and the second inter-layer reference picture is selected and used, which of the two inter-layer reference pictures to use may also be selected. To do this, the encoder may signal information about which of the two inter-layer reference pictures is used.

Alternatively, a reference index may be used for the selective use. Specifically, in units of prediction blocks, only the first inter-layer reference picture may be selected by a reference index, only the second inter-layer reference picture may be selected, or both the first and second inter-layer reference pictures may be selected.

When an inter-layer reference picture is added to the reference picture list, the number of reference pictures arranged in the reference picture list, or the range of the number of reference indices allocated to each reference picture, needs to be changed.

Here, it is assumed that the range of the num_ref_idx_l0_active_minus1 and num_ref_idx_l1_active_minus1 syntaxes of the slice header indicating the reference index maximum value of the reference picture list for the base layer has a value between 0 and 14.

In the case of using either the first inter-layer reference picture or the second inter-layer reference picture, the range of the num_ref_idx_l0_active_minus1 and num_ref_idx_l1_active_minus1 syntaxes, which indicate the maximum reference index value of the reference picture list for the current layer, may be defined as a value between 0 and 15. Alternatively, even when both the first inter-layer reference picture and the second inter-layer reference picture are used, if the two inter-layer reference pictures are added to different reference picture lists, the range of num_ref_idx_l0_active_minus1 and num_ref_idx_l1_active_minus1 may likewise be defined as a value between 0 and 15.

For example, when the number of temporal reference pictures in the reference picture list L0 is 15, if the first or second inter-layer reference picture is added to the reference picture list, a total of 16 reference pictures exist, and the value of num_ref_idx_l0_active_minus1 is 15.

Alternatively, when both the first inter-layer reference picture and the second inter-layer reference picture are used and the two inter-layer reference pictures are added to the same reference picture list, the range of the num_ref_idx_l0_active_minus1 and num_ref_idx_l1_active_minus1 syntaxes, which indicate the maximum reference index value of the reference picture list for the current layer, may be defined as a value between 0 and 16.

For example, if the number of temporal reference pictures in reference picture list L0 is 15 and both the first and second interlayer reference pictures are added to list L0, a total of 17 reference pictures exist, and the value of num_ref_idx_l0_active_minus1 becomes 16.
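By way of illustration only, the following sketch shows this bookkeeping in C; the helper name and the minus1 coding convention follow the description above, and everything else is a hypothetical simplification.

    #include <stdio.h>

    /* Illustrative sketch: derive the minus1-coded maximum reference index
     * of list L0 after interlayer reference pictures are added.           */
    int max_ref_idx_l0_minus1(int num_temporal_refs, int num_interlayer_refs)
    {
        return num_temporal_refs + num_interlayer_refs - 1;
    }

    int main(void)
    {
        /* 15 temporal references + 1 interlayer reference: value 15 (0..15). */
        printf("%d\n", max_ref_idx_l0_minus1(15, 1));
        /* 15 temporal references + both interlayer reference pictures in the
         * same list: value 16 (0..16).                                       */
        printf("%d\n", max_ref_idx_l0_minus1(15, 2));
        return 0;
    }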

The inter prediction of the current picture may be performed based on the reference picture list generated in step S320 (S330).

Specifically, the reference picture corresponding to the reference index of the current block is selected from the reference picture list. The selected reference picture may be a temporal reference picture or an interlayer reference picture (i.e., a corresponding picture of a reference layer or a corresponding picture that is upsampled) in the same layer as the current block.

The reference block in the reference picture is specified based on the motion vector of the current block and the reconstructed sample value or texture information of the specified reference block is used to predict the sample value or the texture information of the current block.

If the reference picture corresponding to the reference index of the current block is an interlayer reference picture, the reference block may be the block at the same position as the current block (hereinafter referred to as a col block). To this end, if the reference picture of the current block is an interlayer reference picture, the motion vector of the current block may be set to (0, 0).
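A minimal sketch of this rule in C, assuming a simple motion-vector struct (all names are hypothetical):

    typedef struct { int x, y; } MotionVector;

    /* When the selected reference picture is an interlayer reference picture,
     * the reference block is the col block, so the motion vector of the
     * current block is forced to (0, 0).                                     */
    MotionVector derive_mv(int ref_is_interlayer, MotionVector signalled_mv)
    {
        if (ref_is_interlayer) {
            MotionVector zero = { 0, 0 };
            return zero;
        }
        return signalled_mv;   /* temporal reference: use the signalled MV */
    }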

Alternatively, when the reference picture corresponding to the reference index of the current block is an interlayer reference picture, a block at a position shifted from the col block by a predetermined offset may be used as the reference block. This is described in detail below.

FIG. 4 illustrates a method of determining a corresponding picture of a reference layer based on a reference active flag, as an embodiment to which the present invention is applied. FIG. 5 illustrates an example of a syntax table including the reference active flag, as an embodiment to which the present invention is applied.

Referring to FIG. 4, a reference active flag may be obtained from the bitstream (S400).

The reference active flag (all_ref_layers_active_flag) may indicate whether the constraint that the corresponding pictures of all layers having direct dependency with the current layer are used for inter-layer prediction of the current picture is applied. Referring to FIG. 5, the reference active flag may be obtained from a video parameter set.

Here, whether a layer has direct dependency with the current layer can be determined based on a direct dependency flag (direct_dependency_flag [i] [j]). The direct dependency flag (direct_dependency_flag [i] [j]) may indicate whether the j-th layer can be used for inter-layer prediction of the i-th current layer.

For example, if the value of the direct dependency flag is 1, the j-th layer can be used for inter-layer prediction of the i-th current layer; if the value of the direct dependency flag is 0, the j-th layer is not used for inter-layer prediction of the i-th current layer.

It is possible to check whether the value of the reference active flag is 1 (S410).

When the value of the reference active flag is 1, the constraint that the corresponding pictures of all layers having direct dependency with the current layer are used for inter-layer prediction of the current picture is applied. In this case, the corresponding pictures of all layers having direct dependency with the current layer can be included in the reference picture list of the current picture. Accordingly, they can all be determined as corresponding pictures used for inter-layer prediction of the current picture (S420).

On the other hand, when the value of the reference active flag is 0, the above constraint is not applied. That is, the current picture of the current layer may perform inter-layer prediction using the corresponding pictures of all layers having direct dependency with the current layer, or using only some of them. In other words, when the value of the reference active flag is 0, the reference picture list of the current picture may include the corresponding pictures of all layers having direct dependency with the current layer, or may selectively include only some of those corresponding pictures. Therefore, it is necessary to specify, among the layers having direct dependency with the current layer, the corresponding pictures used for inter-layer prediction of the current picture. For this purpose, the interlayer reference information for the current picture may be obtained (S430).
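The decision in steps S410 to S430 can be sketched as follows; the structure and function names are hypothetical, and only the selection logic described above is modelled.

    #define MAX_LAYERS 8

    typedef struct {
        int all_ref_layers_active_flag;                 /* reference active flag */
        int direct_dependency_flag[MAX_LAYERS][MAX_LAYERS];
    } Vps;

    /* Returns the number of reference layers used by the current picture, or
     * -1 when the interlayer reference information must be parsed instead
     * (S430).                                                                */
    int select_ref_layers(const Vps *vps, int cur_layer, int num_layers,
                          int *ref_layers)
    {
        int n = 0;
        if (!vps->all_ref_layers_active_flag)
            return -1;   /* constraint not applied: consult slice header */
        /* Constraint applied: every directly dependent layer is used (S420). */
        for (int j = 0; j < num_layers; j++)
            if (vps->direct_dependency_flag[cur_layer][j])
                ref_layers[n++] = j;
        return n;
    }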

Here, the interlayer reference information may include at least one of an interlayer prediction flag, number information of reference pictures, and a reference layer identifier.

Specifically, the interlayer prediction flag may indicate whether inter-layer prediction is used in the decoding process of the current picture. The number information of reference pictures may indicate the number of corresponding pictures used for inter-layer prediction of the current picture. For coding efficiency, the number information of reference pictures may be coded as a value obtained by subtracting 1 from the number of corresponding pictures used for inter-layer prediction of the current picture. The reference layer identifier may mean the layer identifier (layerId) of a layer including a corresponding picture used for inter-layer prediction of the current picture.

A method of acquiring the interlayer reference information is described with reference to FIGS. 6 and 7.

Based on the interlayer reference information obtained in S430, a corresponding picture used for inter-layer prediction can be determined (S440).

For example, when the value of the interlayer prediction flag of the current picture is 1, this means that the current picture performs inter-layer prediction. In this case, among the layers having direct dependency with the current layer, the corresponding picture of the layer specified by the reference layer identifier can be determined as the corresponding picture used for inter-layer prediction of the current picture.

On the other hand, if the value of the interlayer prediction flag of the current picture is 0, the current picture does not perform inter-layer prediction, so none of the corresponding pictures of the layers having direct dependency with the current layer is used for inter-layer prediction of the current picture.

FIG. 6 illustrates a method of obtaining interlayer reference information for a current picture, according to an embodiment of the present invention. FIG. 7 illustrates an example of a syntax table of the interlayer reference information.

Referring to FIG. 6, the interlayer prediction flag may be obtained based on the reference active flag (S600).

Referring to FIG. 7, the interlayer prediction flag inter_layer_pred_enabled_flag may be obtained only when the value of the reference active flag (all_ref_layers_active_flag) is 0 (S700).

When the value of the reference active flag is 1, this means that the corresponding picture of all layers having the current layer and direct dependency is used for inter-layer prediction of the current picture. Therefore, in this case, it is not necessary to signal the interlayer prediction flag in the header information (for example, the slice segment header) of the current picture.

Referring to FIG. 7, if the layer identifier (nuh_layer_id) of the current layer including the current picture is greater than 0, the interlayer prediction flag can be obtained. If the layer identifier of the current layer is 0, the current layer corresponds to a base layer that does not perform inter-layer prediction among the multi-layers.

Also referring to FIG. 7, the interlayer prediction flag can be obtained when the number of layers having direct dependency with the current layer (NumDirectRefLayers) is at least one. This is because, if there is no layer having direct dependency with the current layer, no picture in the current layer performs inter-layer prediction.
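Putting the three conditions of FIG. 7 together, the parsing rule for the interlayer prediction flag can be sketched as follows; read_u1() is a hypothetical one-bit bitstream reader, and the inference for the non-signalled cases is a simplification of the description above.

    extern int read_u1(void);   /* hypothetical one-bit bitstream reader */

    /* Sketch of the FIG. 7 condition for inter_layer_pred_enabled_flag. */
    int parse_inter_layer_pred_enabled_flag(int nuh_layer_id,
                                            int NumDirectRefLayers,
                                            int all_ref_layers_active_flag)
    {
        if (nuh_layer_id == 0 || NumDirectRefLayers == 0)
            return 0;   /* base layer, or no directly dependent layer */
        if (all_ref_layers_active_flag)
            return 1;   /* constraint applies, flag is not signalled  */
        return read_u1();   /* otherwise the flag is parsed (S700)    */
    }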

Referring to FIG. 6, it can be checked whether the value of the interlayer prediction flag obtained in S600 is 1 (S610).

As a result of the determination in S610, when the value of the interlayer prediction flag is 1, the number information of the reference pictures can be obtained (S620).

As shown in FIG. 4, the number information of reference pictures can represent the number of corresponding pictures used in the inter-layer prediction of the current picture among the corresponding pictures of the layer having the current layer and direct dependency.

Referring to FIG. 7, when the number of layers having direct dependency with the current layer (NumDirectRefLayers) is 1, the number of corresponding pictures used for inter-layer prediction of the current picture cannot exceed one; therefore, it is not necessary to signal the number information (num_inter_layer_ref_pics_minus1) of the reference pictures. In this case, the number information of the reference pictures is not obtained, and the number of corresponding pictures used for inter-layer prediction of the current picture may be derived to be one.

Meanwhile, the number information of the reference pictures may be obtained restrictively based on a maximum active reference flag.

Here, the maximum active reference flag may indicate whether at most one corresponding picture is used for inter-layer prediction of the current picture. For example, when the value of the maximum active reference flag is 1, the current picture performs inter-layer prediction using at most one corresponding picture; when the value of the maximum active reference flag is 0, the current picture may perform inter-layer prediction using a plurality of corresponding pictures.

Referring to FIG. 7, the number information of the reference pictures can be obtained only when the value of the maximum active reference flag (max_one_active_ref_layer_flag) is zero. That is, when the value of the maximum active reference flag is 1, the number of corresponding pictures used in the inter-layer prediction of the current picture is limited to one, so it is not necessary to signal the number information of the reference picture.

Referring to FIG. 6, a reference layer identifier may be obtained based on the number information of reference pictures obtained in S620 (S630).

Referring to FIG. 7, the reference layer identifier can be obtained when the number of corresponding pictures used for inter-layer prediction of the current picture (NumActiveRefLayerPics), among the corresponding pictures of the layers having direct dependency with the current layer, differs from the number of layers having direct dependency with the current layer (NumDirectRefLayers). Here, the variable NumActiveRefLayerPics can be derived from the number information of the reference pictures. For example, when the number information of the reference pictures is coded as a value obtained by subtracting 1 from the number of corresponding pictures used for inter-layer prediction of the current picture, the variable NumActiveRefLayerPics may be derived as a value obtained by adding 1 to the number information of the reference pictures obtained in S620.

If the variable NumActiveRefLayerPics is equal to the variable NumDirectRefLayers, this means that the corresponding pictures of all layers having direct dependency with the current layer are the corresponding pictures used for inter-layer prediction of the current picture. Therefore, there is no need to signal the reference layer identifier.
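A sketch of steps S620 to S630, assuming inter_layer_pred_enabled_flag is 1; read_ue() stands in for the hypothetical bitstream readers, and the entropy coding of the identifiers is simplified for illustration.

    extern unsigned read_ue(void);   /* hypothetical bitstream reader */

    /* Sketch of S620-S630: derive NumActiveRefLayerPics and decide whether
     * the reference layer identifiers must be parsed.                      */
    void parse_inter_layer_ref_info(int NumDirectRefLayers,
                                    int max_one_active_ref_layer_flag,
                                    int *NumActiveRefLayerPics,
                                    unsigned *ref_layer_idc)
    {
        if (NumDirectRefLayers == 1 || max_one_active_ref_layer_flag)
            *NumActiveRefLayerPics = 1;   /* number info is not signalled */
        else
            *NumActiveRefLayerPics = (int)read_ue() + 1;   /* minus1-coded */

        /* Identifiers are signalled only when the active count differs
         * from the number of directly dependent layers.                 */
        if (*NumActiveRefLayerPics != NumDirectRefLayers)
            for (int i = 0; i < *NumActiveRefLayerPics; i++)
                ref_layer_idc[i] = read_ue();
    }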

FIG. 8 is a flowchart illustrating a method of upsampling a corresponding picture of a reference layer, as an embodiment to which the present invention is applied.

Referring to FIG. 8, a reference sample position of a reference layer corresponding to a current sample position of a current layer may be derived (S800).

Since the resolutions of the current layer and the reference layer may be different, a reference sample position corresponding to the current sample position can be derived taking into account the difference in resolution between them. That is, the aspect ratio between the picture of the current layer and the picture of the reference layer can be considered. In addition, since the upsampled picture of the reference layer may not coincide with the picture of the current layer, an offset for correcting the upsampled picture may be required.

For example, the reference sample position may be derived taking into account the scale factor and the upsampled reference layer offset.

Here, the scale factor can be calculated based on the ratio of the width and the height between the current picture of the current layer and the corresponding picture of the reference layer.

The upsampled reference layer offset may mean position difference information between a sample located at an edge of the current picture and a sample located at an edge of the interlayer reference picture. For example, the upsampled reference layer offset may include horizontal/vertical position difference information between the top-left sample of the current picture and the top-left sample of the interlayer reference picture, and horizontal/vertical position difference information between the bottom-right sample of the current picture and the bottom-right sample of the interlayer reference picture.

The upsampled reference layer offset may be obtained from the bitstream. For example, the upsampled reference layer offset may be obtained from at least one of a video parameter set, a sequence parameter set, a picture parameter set, and a slice header.
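As a rough sketch, assuming 16.16 fixed-point scale factors and a 1/16-sample output grid (all names are hypothetical, and the rounding is simplified relative to any normative derivation), the reference sample position could be computed as follows.

    /* Sketch of S800: map a current-layer sample position to a reference-
     * layer position with 1/16-sample accuracy.                           */
    typedef struct {
        int ref_w, ref_h;         /* corresponding picture dimensions      */
        int cur_w, cur_h;         /* current picture dimensions            */
        int offset_x, offset_y;   /* upsampled reference layer offset      */
    } ScaleInfo;

    void ref_sample_pos16(const ScaleInfo *s, int xP, int yP,
                          int *xRef16, int *yRef16)
    {
        /* Scale factors in 16.16 fixed point from the width/height ratio. */
        int scaleX = ((s->ref_w << 16) + (s->cur_w >> 1)) / s->cur_w;
        int scaleY = ((s->ref_h << 16) + (s->cur_h >> 1)) / s->cur_h;

        /* Positions in 1/16-sample units, corrected by the layer offset. */
        *xRef16 = ((xP - s->offset_x) * scaleX + (1 << 11)) >> 12;
        *yRef16 = ((yP - s->offset_y) * scaleY + (1 << 11)) >> 12;
    }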

The filter coefficient of the up-sampling filter can be determined considering the phase of the reference sample position derived in S800 (S810).

Here, the up-sampling filter may use either a fixed up-sampling filter or an adaptive up-sampling filter.

1. Fixed Upsampling Filter

The fixed up-sampling filter may refer to an up-sampling filter having predetermined filter coefficients that do not take the characteristics of the image into account. A tap filter can be used as the fixed up-sampling filter, and it can be defined for the luminance component and the chrominance component, respectively. A fixed up-sampling filter having an accuracy of 1/16 sample units is described with reference to Tables 1 and 2 below.

Phase p | Interpolation filter coefficients
        |  f[p,0]  f[p,1]  f[p,2]  f[p,3]  f[p,4]  f[p,5]  f[p,6]  f[p,7]
--------+----------------------------------------------------------------
   0    |     0       0       0      64       0       0       0       0
   1    |     0       1      -3      63       4      -2       1       0
   2    |    -1       2      -5      62       8      -3       1       0
   3    |    -1       3      -8      60      13      -4       1       0
   4    |    -1       4     -10      58      17      -5       1       0
   5    |    -1       4     -11      52      26      -8       3      -1
   6    |    -1       3      -9      47      31     -10       4      -1
   7    |    -1       4     -11      45      34     -10       4      -1
   8    |    -1       4     -11      40      40     -11       4      -1
   9    |    -1       4     -10      34      45     -11       4      -1
  10    |    -1       4     -10      31      47      -9       3      -1
  11    |    -1       3      -8      26      52     -11       4      -1
  12    |     0       1      -5      17      58     -10       4      -1
  13    |     0       1      -4      13      60      -8       3      -1
  14    |     0       1      -3       8      62      -5       2      -1
  15    |     0       1      -2       4      63      -3       1       0

Table 1 is a table defining the filter coefficients of the fixed up-sampling filter with respect to the luminance component.

As shown in Table 1, in the case of upsampling on the luminance component, an 8-tap filter is applied. That is, interpolation can be performed using a reference sample of the reference layer corresponding to the current sample of the current layer and a neighboring sample adjacent to the reference sample. Here, the neighbor samples can be specified according to the direction in which the interpolation is performed. For example, when interpolation is performed in the horizontal direction, the neighboring sample may include three consecutive samples to the left and four consecutive samples to the right based on the reference sample. Alternatively, when interpolation is performed in the vertical direction, the neighboring sample may include three consecutive samples at the top and four consecutive samples at the bottom based on the reference sample.

Since interpolation is performed with an accuracy of 1/16 sample units, there are a total of 16 phases. This is to support resolution of various magnifications such as 2 times and 1.5 times.

In addition, the fixed up-sampling filter may use different filter coefficients for each phase (p). The size of each filter coefficient may be defined to fall within a range of 0 to 63, except when the phase p is zero. This means that the filtering is performed with a precision of 6 bits. Here, a phase (p) of 0 means an integer sample position when interpolation is performed in 1/n sample units.
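For illustration, the horizontal pass of the 8-tap luminance filter of Table 1 can be sketched as follows; boundary handling, clipping, and the vertical pass are omitted, and the function and variable names are hypothetical.

    /* Fixed 8-tap luma up-sampling filter of Table 1 (16 phases). */
    static const int fL[16][8] = {
        {  0, 0,   0, 64,  0,   0, 0,  0 }, {  0, 1,  -3, 63,  4,  -2, 1,  0 },
        { -1, 2,  -5, 62,  8,  -3, 1,  0 }, { -1, 3,  -8, 60, 13,  -4, 1,  0 },
        { -1, 4, -10, 58, 17,  -5, 1,  0 }, { -1, 4, -11, 52, 26,  -8, 3, -1 },
        { -1, 3,  -9, 47, 31, -10, 4, -1 }, { -1, 4, -11, 45, 34, -10, 4, -1 },
        { -1, 4, -11, 40, 40, -11, 4, -1 }, { -1, 4, -10, 34, 45, -11, 4, -1 },
        { -1, 4, -10, 31, 47,  -9, 3, -1 }, { -1, 3,  -8, 26, 52, -11, 4, -1 },
        {  0, 1,  -5, 17, 58, -10, 4, -1 }, {  0, 1,  -4, 13, 60,  -8, 3, -1 },
        {  0, 1,  -3,  8, 62,  -5, 2, -1 }, {  0, 1,  -2,  4, 63,  -3, 1,  0 }
    };

    /* Horizontal interpolation at integer position x and phase 0..15:
     * three samples left of the reference sample, the reference sample,
     * and four samples to its right, with 6-bit filter precision.       */
    int interp_luma_h(const unsigned char *row, int x, int phase)
    {
        int acc = 0;
        for (int k = 0; k < 8; k++)
            acc += fL[phase][k] * row[x - 3 + k];
        return acc >> 6;
    }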

Phase p |  f[p,0]  f[p,1]  f[p,2]  f[p,3]
--------+--------------------------------
   0    |     0      64       0       0
   1    |    -2      62       4       0
   2    |    -2      58      10      -2
   3    |    -4      56      14      -2
   4    |    -4      54      16      -2
   5    |    -6      52      20      -2
   6    |    -6      46      28      -4
   7    |    -4      42      30      -4
   8    |    -4      36      36      -4
   9    |    -4      30      42      -4
  10    |    -4      28      46      -6
  11    |    -2      20      52      -6
  12    |    -2      16      54      -4
  13    |    -2      14      56      -4
  14    |    -2      10      58      -2
  15    |     0       4      62      -2

Table 2 defines the filter coefficients of the fixed up-sampling filter for the chrominance components.

As shown in Table 2, in the case of up-sampling for the chrominance components, a 4-tap filter can be applied, unlike the luminance component. That is, interpolation can be performed using a reference sample of the reference layer corresponding to the current sample of the current layer and neighboring samples adjacent to the reference sample. Here, the neighboring samples can be specified according to the direction in which the interpolation is performed. For example, when interpolation is performed in the horizontal direction, the neighboring samples may include one sample to the left and two consecutive samples to the right of the reference sample. Likewise, when interpolation is performed in the vertical direction, the neighboring samples may include one sample at the top and two consecutive samples at the bottom of the reference sample.

On the other hand, as in the case of the luminance component, since interpolation is performed with an accuracy of 1/16 sample units, there are a total of 16 phases, and different filter coefficients can be used for each phase (p). And, the size of each filter coefficient can be defined to fall in the range of 0 to 62, except when the phase (p) is zero. This also means that filtering is performed with a precision of 6 bits.

Although an 8-tap filter is applied to the luminance component and a 4-tap filter to the chrominance component in the above description, the present invention is not limited thereto, and the order of the tap filter may, of course, be variably determined in consideration of coding efficiency.

2. Adaptive up-sampling filter

Instead of using fixed filter coefficients, the encoder may determine optimal filter coefficients in consideration of the characteristics of the image and signal them to the decoder. A filter that uses filter coefficients adaptively determined in the encoder in this way is the adaptive up-sampling filter. Since the characteristics of an image differ in units of pictures, coding efficiency can be improved by using an adaptive up-sampling filter that expresses the characteristics of the image well, rather than using one fixed up-sampling filter in all cases.

The inter-layer reference picture can be generated by applying the filter coefficient determined in step S810 to the corresponding picture of the reference layer (S820).

Specifically, the filter coefficients of the determined up-sampling filter may be applied to the samples of the corresponding picture to perform interpolation. Here, the interpolation may be performed first in the horizontal direction, and then in the vertical direction on the samples generated by the horizontal interpolation.

FIG. 9 illustrates a method of specifying a short-term reference picture stored in a decoded picture buffer, according to an embodiment of the present invention.

A temporal reference picture may be stored in the decoded picture buffer (DPB) and used as a reference picture when needed for inter prediction of the current picture. The temporal reference pictures stored in the decoded picture buffer may include short-term reference pictures. A short-term reference picture means a picture whose POC value does not differ greatly from that of the current picture.

Information specifying the short-term reference pictures to be stored in the decoded picture buffer at the current time consists of the output order (POC) of the reference pictures and flags indicating whether the current picture directly refers to them (for example, used_by_curr_pic_s0_flag and used_by_curr_pic_s1_flag); this is referred to as a reference picture set. Specifically, when the value of used_by_curr_pic_s0_flag [i] is 0 and the i-th short-term reference picture in the short-term reference picture set has a POC value smaller than the output order (POC) of the current picture, the i-th short-term reference picture is not used as a reference picture of the current picture. Likewise, when the value of used_by_curr_pic_s1_flag [i] is 0 and the i-th short-term reference picture in the short-term reference picture set has a POC value larger than the output order (POC) of the current picture, the i-th short-term reference picture is not used as a reference picture of the current picture.

Referring to FIG. 9, in the case of a picture having a POC value of 26, all three pictures (that is, the pictures having POC values of 25, 24, and 20) can be used as short-term reference pictures in inter prediction. However, since the value of used_by_curr_pic_s0_flag of the picture having a POC value of 25 is 0, the picture having a POC value of 25 is not directly used for inter prediction of the picture having a POC value of 26.

Thus, a short-term reference picture can be specified based on the output order (POC) of the reference picture and a flag indicating whether it is used as a reference picture by the current picture.

Meanwhile, a picture not included in the reference picture set for the current picture may be marked as not used as a reference picture (for example, unused for reference) and further removed from the decoded picture buffer.

FIG. 10 illustrates a method of specifying a long-term reference picture, according to an embodiment of the present invention.

In the case of a long-term reference picture, since the difference between its POC value and that of the current picture is large, it can be expressed using the least significant bits (LSB) and the most significant bits (MSB) of the POC value.

Therefore, the POC value of the long-term reference picture can be derived using the LSB value of the POC value of the reference picture, the POC value of the current picture, and the difference between the MSB of the POC value of the current picture and the MSB of the POC value of the reference picture.

For example, it is assumed that the POC value of the current picture is 331, the maximum value that can be represented by the LSB is 32, and a picture having a POC value of 308 is used as the long-term reference picture.

In this case, the POC value 331 of the current picture can be expressed as 32 * 10 + 11, where 10 is the MSB value and 11 is the LSB value. The POC value 308 of the long-term reference picture can be expressed as 32 * 9 + 20, where 9 is the MSB value and 20 is the LSB value. At this time, the POC value of the long-term reference picture can be derived as in the equation shown in FIG. 10.
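The worked numbers above can be replayed with a small sketch in C; the function and variable names are hypothetical, and the MSB-cycle difference (10 - 9 = 1) is passed in directly.

    #include <stdio.h>

    /* Sketch: reconstruct the POC of a long-term reference picture from its
     * POC LSB and its MSB-cycle difference with the current picture.        */
    int derive_lt_poc(int poc_curr, int max_poc_lsb,
                      int poc_lsb_lt, int delta_msb_cycle)
    {
        int poc_lsb_curr = poc_curr % max_poc_lsb;    /* 331 % 32 = 11  */
        int poc_msb_curr = poc_curr - poc_lsb_curr;   /* 32 * 10 = 320  */
        return poc_msb_curr - delta_msb_cycle * max_poc_lsb + poc_lsb_lt;
    }

    int main(void)
    {
        printf("%d\n", derive_lt_poc(331, 32, 20, 1));   /* prints 308 */
        return 0;
    }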

FIG. 11 illustrates a method of constructing a reference picture list using short-term reference pictures and long-term reference pictures, according to an embodiment of the present invention.

Referring to FIG. 11, a reference picture list including temporal reference pictures can be generated in consideration of whether each temporal reference picture is a short-term reference picture and of the POC values of the short-term reference pictures. Here, the reference picture list may include at least one of reference picture list 0 for L0 prediction and reference picture list 1 for L1 prediction.

Specifically, reference picture list 0 may be arranged in the order of short-term reference pictures having POC values smaller than that of the current picture (RefPicSetCurr0), short-term reference pictures having POC values larger than that of the current picture (RefPicSetCurr1), and long-term reference pictures (RefPicSetLtCurr).

On the other hand, reference picture list 1 may be arranged in the order of short-term reference pictures having POC values larger than that of the current picture (RefPicSetCurr1), short-term reference pictures having POC values smaller than that of the current picture (RefPicSetCurr0), and long-term reference pictures (RefPicSetLtCurr).

In addition, a plurality of temporal reference pictures included in the reference picture list may be rearranged to improve the coding efficiency of the reference index of the temporal reference picture. This can be performed adaptively based on the list rearrangement flag (list_modification_present_flag). Here, the list rearrangement flag is information for specifying whether or not reference pictures in the reference picture list are rearranged. The list rearrangement flag can be signaled for the reference picture list 0 and the reference picture list 1, respectively.

For example, when the value of the list rearrangement flag (list_modification_present_flag) is 0, the reference pictures in the reference picture list are not rearranged; only when the value of the list rearrangement flag (list_modification_present_flag) is 1 can the reference pictures in the reference picture list be rearranged.

If the value of the list rearrangement flag (list_modification_present_flag) is 1, the reference pictures in the reference picture list can be rearranged using the list entry information list_entry [i]. Here, the list entry information (list_entry [i]) can specify the reference index of the reference picture to be located at the current position (i.e., the i-th entry) in the reference picture list.

Specifically, the reference picture corresponding to the list entry information (list_entry [i]) can be specified in the pre-generated reference picture list, and the specified reference picture can be rearranged to the i-th entry in the reference picture list.

The list entry information may be obtained as many times as the number of reference pictures included in the reference picture list, or based on the maximum reference index value of the reference picture list. Also, the list entry information can be obtained in consideration of the slice type of the current picture. That is, if the slice type of the current picture is a P slice, the list entry information (list_entry_l0 [i]) for reference picture list 0 is obtained; if the slice type of the current picture is a B slice, the list entry information (list_entry_l1 [i]) for reference picture list 1 may additionally be obtained.
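The rearrangement itself reduces to an indexed copy, sketched below with hypothetical names; for an initial list {A, B, C} and list_entry values {2, 0, 1}, the modified list becomes {C, A, B}.

    /* Sketch of reference picture list modification: entry i of the
     * modified list is the picture at index list_entry[i] of the
     * pre-generated (initial) list.                                  */
    void modify_ref_pic_list(const int *init_list, const int *list_entry,
                             int num_active, int *out_list)
    {
        for (int i = 0; i < num_active; i++)
            out_list[i] = init_list[list_entry[i]];
    }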

FIG. 12 illustrates a process of performing inter-layer prediction of a current block based on spatial offset information, according to an embodiment of the present invention.

Referring to FIG. 12, spatial offset information may be obtained (S1200).

The spatial offset information may indicate a position difference between the col block and the reference block of the current block. The col block is a block belonging to the corresponding picture of the reference layer, and may be the block at the same position as the current block. The reference block is a block belonging to the corresponding picture of the reference layer, and may be the block used for inter-layer prediction of the current block.

In particular, the spatial offset information may include at least one of a segment offset and a block offset. The segment offset may indicate, as an offset in units of slice segments, tiles, or coding tree unit rows (CTU rows), a spatial area of the reference layer picture that is not used for inter-layer prediction of the current layer picture. Alternatively, the segment offset, as an offset in units of slice segments, tiles, or coding tree unit rows (CTU rows), may indicate a spatial area for specifying the reference block used for inter-layer prediction of the current layer picture within the reference layer picture. The block offset is an offset in units of blocks and, together with the segment offset, can indicate a spatial area of the reference layer picture that is not used for inter-layer prediction of the current layer picture. Alternatively, the block offset, as an offset in units of blocks, may indicate a spatial area for specifying the reference block used for inter-layer prediction of the current layer picture within the reference layer picture. Here, a block unit may mean a coding tree unit, and a coding tree unit may mean a maximum coding unit of a video sequence.

The spatial offset information may be obtained from video usability information (VUI) belonging to a video parameter set. Video usability information may not be used to decode the luminance and chrominance components, but may refer to information used for decoder conformance or output timing conformance. A specific method of obtaining the spatial offset information is described with reference to FIGS. 13 to 15.

FIGS. 13 to 15 illustrate syntax tables of spatial offset information according to embodiments to which the present invention is applied.

First Embodiment

Referring to FIG. 13, an inter-layer prediction restriction flag (ilp_restricted_ref_layers_flag) may be obtained (S1300).

Here, the inter-layer prediction restriction flag may indicate whether the constraint that the reference block of the reference layer specified based on the spatial offset information is used for inter-layer prediction of the current block is applied. For example, when the value of the inter-layer prediction restriction flag is 1, this means that the constraint that inter-layer prediction of the current block is performed using the reference block specified based on the spatial offset information is applied. In contrast, when the value of the inter-layer prediction restriction flag is 0, the above constraint is not applied; therefore, the current block may or may not perform inter-layer prediction using the reference block specified based on the spatial offset information.

If the value of the inter-layer prediction restriction flag is 1, the segment offset (min_spatial_segment_offset_plus1 [i] [j]) can be obtained (S1310).

Here, the segment offset (min_spatial_segment_offset_plus1 [i] [j]) may indicate, in each picture of the j-th reference layer having direct dependency with the i-th current layer, a spatial area that is not used for inter-layer prediction of pictures belonging to the i-th current layer.

The block offset flag ctu_based_offset_enable_flag [i] [j] may be obtained based on the segment offset obtained in step S1310 (S1320).

Here, the block offset flag (ctu_based_offset_enable_flag [i] [j]) may specify whether a block offset is used in indicating the spatial area of the j-th reference layer picture that is not used for inter-layer prediction of pictures belonging to the i-th current layer. For example, when the value of the block offset flag is 1, this means that the spatial area of the j-th reference layer picture that is not used for inter-layer prediction of pictures belonging to the i-th current layer is indicated using the block offset together with the segment offset. Conversely, when the value of the block offset flag is 0, this means that the spatial area of the j-th reference layer picture that is not used for inter-layer prediction of pictures belonging to the i-th current layer is indicated using the segment offset only.

Meanwhile, the block offset flag can be obtained when the value of the segment offset obtained in step S1310 is greater than 0. This is because, when the value of the segment offset is greater than 0, there exists a spatial area of the j-th reference layer picture that is not used for inter-layer prediction of pictures belonging to the i-th current layer.

The block offset (min_horizontal_ctu_offset_plus1) may be obtained based on the block offset flag obtained in step S1320 (S1330).

Specifically, the block offset may be obtained when, according to the block offset flag, the spatial area of the j-th reference layer picture that is not used for inter-layer prediction of pictures belonging to the i-th current layer is indicated using the block offset together with the segment offset (that is, when the value of the block offset flag is 1).
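The parsing order of FIG. 13 can be sketched as follows; read_u1() and read_ue() are hypothetical bitstream readers, and the loop over layer pairs (i, j) is omitted for brevity.

    extern int read_u1(void);        /* hypothetical bitstream readers */
    extern unsigned read_ue(void);

    /* Sketch of the FIG. 13 parsing order for one layer pair (i, j). */
    void parse_spatial_offset_info(unsigned *min_spatial_segment_offset_plus1,
                                   int *ctu_based_offset_enable_flag,
                                   unsigned *min_horizontal_ctu_offset_plus1)
    {
        if (read_u1()) {   /* ilp_restricted_ref_layers_flag (S1300) */
            *min_spatial_segment_offset_plus1 = read_ue();        /* S1310 */
            if (*min_spatial_segment_offset_plus1 > 0) {
                *ctu_based_offset_enable_flag = read_u1();        /* S1320 */
                if (*ctu_based_offset_enable_flag)
                    *min_horizontal_ctu_offset_plus1 = read_ue(); /* S1330 */
            }
        }
    }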

Second Embodiment

Referring to FIG. 14, an inter-layer prediction restriction flag (ilp_restricted_ref_layers_flag) can be obtained (S1400).

As described above with reference to FIG. 13, the inter-layer prediction restriction flag may indicate whether the constraint that the reference block of the reference layer specified based on the spatial offset information is used for inter-layer prediction of the current block is applied; a detailed description is therefore omitted here.

If the value of the inter-layer prediction restriction flag obtained in step S1400 is 1, the segment offset (min_spatial_segment_offset_plus1 [i] [j]) can be obtained (S1410).

Here, the segment offset (min_spatial_segment_offset_plus1 [i] [j]) may indicate, in each picture of the j-th reference layer having direct dependency with the i-th current layer, a spatial area that is not used for inter-layer prediction of pictures belonging to the i-th current layer.

The block offset (min_horizontal_ctu_offset [i] [j]) may be obtained based on the segment offset obtained in step S1410 (S1420).

Specifically, the block offset may be obtained when the value of the segment offset obtained in step S1410 is greater than 0. This is because, when the value of the segment offset is greater than 0, there exists a spatial area of the j-th reference layer picture that is not used for inter-layer prediction of pictures belonging to the i-th current layer.

Unlike FIG. 13, a block offset (min_horizontal_ctu_offset [i] [j]) for determining the size of the spatial area that is not used for inter-layer prediction may be signaled instead of the above-described syntax min_horizontal_ctu_offset_plus1. When the value of the block offset (min_horizontal_ctu_offset [i] [j]) is 0, this means that there is no spatial area in the picture of the j-th layer that is not used for inter-layer prediction. Accordingly, the syntax ctu_based_offset_enable_flag [i] [j] described above may not be used.

Third Embodiment

Referring to FIG. 15, an inter-layer prediction restriction flag (ilp_restricted_ref_layers_flag) may be obtained (S1500).

Here, the inter-layer prediction restriction flag may indicate whether the constraint that the reference block of the reference layer specified based on the spatial offset information is used for inter-layer prediction of the current block is applied; a detailed description is omitted here.

If the value of the inter-layer prediction restriction flag is 1, the segment offset (min_spatial_segment_offset_plus1 [i] [j]) can be obtained (S1510).

Here, the segment offset (min_spatial_segment_offset_plus1 [i] [j]) may indicate, in each picture of the j-th reference layer having direct dependency with the i-th current layer, a spatial area that is not used for inter-layer prediction of pictures belonging to the i-th current layer.

The block offset flag ctu_based_offset_enable_flag [i] [j] may be obtained based on the segment offset obtained in step S1510 (S1520).

Here, the block offset flag (ctu_based_offset_enable_flag [i] [j]) may specify whether a block offset is used in indicating the spatial area of the j-th reference layer picture that is not used for inter-layer prediction of pictures belonging to the i-th current layer. For example, when the value of the block offset flag is 1, this means that the spatial area of the j-th reference layer picture that is not used for inter-layer prediction of pictures belonging to the i-th current layer is indicated using the block offset together with the segment offset. Conversely, when the value of the block offset flag is 0, this means that the spatial area of the j-th reference layer picture that is not used for inter-layer prediction of pictures belonging to the i-th current layer is indicated using the segment offset only.

Meanwhile, the block offset flag can be obtained when the value of the segment offset obtained in step S1510 is greater than 0. This is because, when the value of the segment offset is greater than 0, there exists a spatial area of the j-th reference layer picture that is not used for inter-layer prediction of pictures belonging to the i-th current layer.

The block offset (min_horizontal_ctu_offset_minus1 [i] [j]) may be obtained based on the block offset flag obtained in step S1520 (S1530).

Specifically, the block offset may be obtained when, according to the block offset flag, the spatial area of the j-th reference layer picture that is not used for inter-layer prediction of pictures belonging to the i-th current layer is indicated using the block offset together with the segment offset (that is, when the value of the block offset flag is 1).

Referring to FIG. 12, a reference block used for inter-layer prediction of a current block can be specified based on spatial offset information (S1210).

Specifically, the reference block can be specified as a block at a position shifted from the col block by an offset according to the spatial offset information; this is described with reference to FIG. 16.

FIG. 16 illustrates a method of specifying a reference block used for inter-layer prediction of a current block using spatial offset information according to an embodiment of the present invention.

Referring to FIG. 16, colCtbAddr [i] [j] indicates the position of the block in the j-th reference layer located at the same position as the current block belonging to the i-th current layer, that is, the col block. The j-th reference layer may be a layer having direct dependency with the i-th current layer. refCtbAddr [i] [j] denotes the position of the reference block used for inter-layer prediction of the current block, and may be determined based on colCtbAddr [i] [j] and the spatial offset information (for example, min_spatial_segment_offset_plus1 and min_horizontal_ctu_offset_plus1). Here, the position of the col block or the reference block may be expressed as a raster scan address. A concrete method of determining refCtbAddr [i] [j] is described below based on the first to third embodiments.

Method of determining refCtbAddr [i] [j] according to the first embodiment

Specifically, refCtbAddr [i] [j] can be derived as shown in Equation (1).

refCtbAddr[i][j] = colCtbAddr[i][j] + xOffset[i][j] + yOffset[i][j]    (1)

xOffset [i] [j] and yOffset [i] [j] in Equation (1) can be derived as shown in Equation (2).

xOffset[i][j] = ( xColCtb[i][j] + minHorizontalCtbOffset[i][j] > refPicWidthInCtbY[i][j] - 1 ) ? ( refPicWidthInCtbY[i][j] - 1 - xColCtb[i][j] ) : ( minHorizontalCtbOffset[i][j] )
yOffset[i][j] = ( min_spatial_segment_offset_plus1[i][j] - 1 ) * refPicWidthInCtbY[i][j]    (2)

In Equation (2), xColCtb [i] [j] corresponds to the x component of the coordinate value of the col block belonging to the picture of the j-th layer, and refPicWidthInCtbY [i] [j] represents the number of coding tree units included in one coding tree unit row of the picture of the j-th layer.

Further, minHorizontalCtbOffset [i] [j] can be derived based on the block offset (min_horizontal_ctu_offset_plus1). For example, as shown in Equation (3), when the value of the block offset is 0, minHorizontalCtbOffset [i] [j] may be derived as (refPicWidthInCtbY [i] [j] - 1); otherwise, minHorizontalCtbOffset [i] [j] may be derived as (min_horizontal_ctu_offset_plus1 - 1).

minHorizontalCtbOffset[i][j] = ( min_horizontal_ctu_offset_plus1[i][j] == 0 ) ? ( refPicWidthInCtbY[i][j] - 1 ) : ( min_horizontal_ctu_offset_plus1[i][j] - 1 )    (3)

Alternatively, minHorizontalCtbOffset [i] [j] may be derived as shown in Equation (4), in consideration of whether the value of the block offset is greater than 0, or of whether the value of the block offset is 1.

[Equation 4: alternative derivation of minHorizontalCtbOffset [i] [j] — equation image not reproduced]
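Assuming the additive form of Equation (1) and the derivations of Equations (2) and (3) as reconstructed above, the first-embodiment address computation can be sketched in C as follows; the names follow the text, and this is an illustration rather than the normative derivation.

    /* Sketch of Equations (1)-(3): derive the raster-scan address of the
     * reference block from the col block address and the spatial offsets. */
    int ref_ctb_addr(int colCtbAddr, int xColCtb, int refPicWidthInCtbY,
                     unsigned min_spatial_segment_offset_plus1,
                     unsigned min_horizontal_ctu_offset_plus1)
    {
        /* Equation (3): horizontal CTU offset. */
        int minHorizontalCtbOffset =
            (min_horizontal_ctu_offset_plus1 == 0)
                ? (refPicWidthInCtbY - 1)
                : (int)(min_horizontal_ctu_offset_plus1 - 1);

        /* Equation (2): clamp the horizontal offset at the row end and
         * convert the segment offset to whole CTU rows.                */
        int xOffset =
            (xColCtb + minHorizontalCtbOffset > refPicWidthInCtbY - 1)
                ? (refPicWidthInCtbY - 1 - xColCtb)
                : minHorizontalCtbOffset;
        int yOffset =
            ((int)min_spatial_segment_offset_plus1 - 1) * refPicWidthInCtbY;

        /* Equation (1): reference block address. */
        return colCtbAddr + xOffset + yOffset;
    }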

Method of determining refCtbAddr [i] [j] according to the second embodiment

Specifically, refCtbAddr [i] [j] can be derived as shown in Equation (5).

refCtbAddr[i][j] = colCtbAddr[i][j] + xOffset[i][j] + yOffset[i][j]    (5)

In Equation (5), xOffset [i] [j] and yOffset [i] [j] can be derived as shown in Equation (6).

[Equation 6: derivation of xOffset [i] [j] and yOffset [i] [j] based on min_spatial_segment_offset_plus1 and min_horizontal_ctu_offset — equation image not reproduced]

In Equation (6), xColCtb [i] [j] and refPicWidthInCtbY [i] [j] have been described in Equation (2), and a detailed description thereof will be omitted here.

Method of determining refCtbAddr [i] [j] according to the third embodiment

Specifically, refCtbAddr [i] [j] can be derived as shown in Equation (7).

refCtbAddr[i][j] = colCtbAddr[i][j] + xOffset[i][j] + yOffset[i][j]    (7)

In Equation (7), xOffset [i] [j] and yOffset [i] [j] can be derived as follows:

[Equation 8: derivation of xOffset [i] [j] and yOffset [i] [j] based on min_spatial_segment_offset_plus1 and min_horizontal_ctu_offset_minus1 — equation image not reproduced]

In Equation (8), xColCtb [i] [j] and refPicWidthInCtbY [i] [j] are as described in Equation (2), and a detailed description thereof is omitted here.

Referring to FIG. 12, inter-layer prediction of the current block may be performed based on the reference block specified in step S1210 (S1220).

Claims (15)

1. Determining a corresponding picture of a reference layer used for inter-layer prediction of a current picture in a current layer;
up-sampling the determined corresponding picture to generate an inter-layer reference picture;
specifying, based on spatial offset information, a reference block in the inter-layer reference picture used for inter-layer prediction of a current block; and
performing inter-layer prediction of the current block using the specified reference block.

2. The method of claim 1, wherein the specifying of the reference block is performed based on an inter-layer prediction restriction flag, and
wherein the inter-layer prediction restriction flag indicates whether a constraint that the reference block specified based on the spatial offset information is used for inter-layer prediction of the current block is applied.

3. The method of claim 2, wherein the reference block is specified as a block at a position shifted, from a col block belonging to the inter-layer reference picture, by an offset according to the spatial offset information.

4. The method of claim 3, wherein the spatial offset information comprises at least one of a segment offset and a block offset.

5. A prediction unit that determines a corresponding picture of a reference layer used for inter-layer prediction of a current picture in a current layer, up-samples the determined corresponding picture to generate an inter-layer reference picture, specifies, based on spatial offset information, a reference block in the inter-layer reference picture used for inter-layer prediction of a current block, and performs inter-layer prediction of the current block using the specified reference block.

6. The apparatus of claim 5, wherein the specification of the reference block is performed based on an inter-layer prediction restriction flag, and
wherein the inter-layer prediction restriction flag indicates whether a constraint that the reference block specified based on the spatial offset information is used for inter-layer prediction of the current block is applied.

7. The apparatus of claim 6, wherein the reference block is specified as a block at a position shifted, from a col block belonging to the inter-layer reference picture, by an offset according to the spatial offset information.

8. The apparatus of claim 7, wherein the spatial offset information includes at least one of a segment offset and a block offset.

9. Determining a corresponding picture of a reference layer used for inter-layer prediction of a current picture in a current layer;
up-sampling the determined corresponding picture to generate an inter-layer reference picture;
specifying, based on spatial offset information, a reference block in the inter-layer reference picture used for inter-layer prediction of a current block; and
performing inter-layer prediction of the current block using the specified reference block.

10. The method of claim 9, wherein the specifying of the reference block is performed based on an inter-layer prediction restriction flag, and
wherein the inter-layer prediction restriction flag indicates whether a constraint that the reference block specified based on the spatial offset information is used for inter-layer prediction of the current block is applied.

11. The method of claim 10, wherein the reference block is specified as a block at a position shifted, from a col block belonging to the inter-layer reference picture, by an offset according to the spatial offset information.

12. The method of claim 11, wherein the spatial offset information includes at least one of a segment offset and a block offset.

13. A prediction unit that determines a corresponding picture of a reference layer used for inter-layer prediction of a current picture in a current layer, up-samples the determined corresponding picture to generate an inter-layer reference picture, specifies, based on spatial offset information, a reference block in the inter-layer reference picture used for inter-layer prediction of a current block, and performs inter-layer prediction of the current block using the specified reference block.

14. The apparatus of claim 13, wherein the specification of the reference block is performed based on an inter-layer prediction restriction flag, and
wherein the inter-layer prediction restriction flag indicates whether a constraint that the reference block specified based on the spatial offset information is used for inter-layer prediction of the current block is applied.

15. The apparatus of claim 14, wherein the reference block is specified as a block at a position shifted, from a col block belonging to the inter-layer reference picture, by an offset according to the spatial offset information, and
wherein the spatial offset information comprises at least one of a segment offset and a block offset.
KR20140127397A 2013-09-24 2014-09-24 A method and an apparatus for encoding/decoding a multi-layer video signal KR20150033576A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20130113176 2013-09-24
KR1020130113176 2013-09-24

Related Child Applications (1)

Application Number Title Priority Date Filing Date
KR1020150160316A Division KR20150133685A (en) 2013-09-24 2015-11-16 A method and an apparatus for encoding/decoding a multi-layer video signal

Publications (1)

Publication Number Publication Date
KR20150033576A true KR20150033576A (en) 2015-04-01

Family

ID=52743908

Family Applications (3)

Application Number Title Priority Date Filing Date
KR20140127398A KR20150033577A (en) 2013-09-24 2014-09-24 A method and an apparatus for encoding/decoding a multi-layer video signal
KR20140127397A KR20150033576A (en) 2013-09-24 2014-09-24 A method and an apparatus for encoding/decoding a multi-layer video signal
KR1020150160316A KR20150133685A (en) 2013-09-24 2015-11-16 A method and an apparatus for encoding/decoding a multi-layer video signal

Family Applications Before (1)

Application Number Title Priority Date Filing Date
KR20140127398A KR20150033577A (en) 2013-09-24 2014-09-24 A method and an apparatus for encoding/decoding a multi-layer video signal

Family Applications After (1)

Application Number Title Priority Date Filing Date
KR1020150160316A KR20150133685A (en) 2013-09-24 2015-11-16 A method and an apparatus for encoding/decoding a multi-layer video signal

Country Status (2)

Country Link
KR (3) KR20150033577A (en)
WO (2) WO2015046866A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103765899B (en) * 2011-06-15 2018-03-06 韩国电子通信研究院 For coding and decoding the method for telescopic video and using its equipment
EP2772052A1 (en) * 2011-10-24 2014-09-03 Telefonaktiebolaget LM Ericsson (PUBL) Reference picture marking
US9258559B2 (en) * 2011-12-20 2016-02-09 Qualcomm Incorporated Reference picture list construction for multi-view and three-dimensional video coding
US9351004B2 (en) * 2011-12-27 2016-05-24 Ati Technologies Ulc Multiview video coding reference picture selection under a one reference picture constraint

Also Published As

Publication number Publication date
KR20150033577A (en) 2015-04-01
WO2015046866A1 (en) 2015-04-02
WO2015046867A1 (en) 2015-04-02
KR20150133685A (en) 2015-11-30

Similar Documents

Publication Publication Date Title
KR20150014871A (en) A method and an apparatus for encoding/decoding a scalable video signal
US20160330458A1 (en) Scalable video signal encoding/decoding method and apparatus
KR20140145560A (en) A method and an apparatus for encoding/decoding a scalable video signal
KR20150099496A (en) A method and an apparatus for encoding and decoding a scalable video signal
KR20150133683A (en) A method and an apparatus for encoding and decoding a scalable video signal
KR20150133682A (en) A method and an apparatus for encoding and decoding a scalable video signal
KR20150099497A (en) A method and an apparatus for encoding and decoding a multi-layer video signal
KR20150075041A (en) A method and an apparatus for encoding/decoding a multi-layer video signal
KR20150133681A (en) A method and an apparatus for encoding and decoding a scalable video signal
KR20150110294A (en) A method and an apparatus for encoding/decoding a multi-layer video signal
KR20150064678A (en) A method and an apparatus for encoding and decoding a multi-layer video signal
KR20150133684A (en) A method and an apparatus for encoding and decoding a scalable video signal
KR20150009468A (en) A method and an apparatus for encoding/decoding a scalable video signal
KR20150099495A (en) A method and an apparatus for encoding and decoding a scalable video signal
KR20150133685A (en) A method and an apparatus for encoding/decoding a multi-layer video signal
KR20150064676A (en) A method and an apparatus for encoding/decoding a multi-layer video signal
KR20150043990A (en) A method and an apparatus for encoding/decoding a multi-layer video signal
KR20150043989A (en) A method and an apparatus for encoding/decoding a multi-layer video signal
KR20150071653A (en) A method and an apparatus for encoding/decoding a multi-layer video signal
KR20150037659A (en) A method and an apparatus for encoding/decoding a multi-layer video signal
KR20150048077A (en) A method and an apparatus for encoding/decoding a multi-layer video signal
KR20150014872A (en) A method and an apparatus for encoding/decoding a scalable video signal
KR20150044394A (en) A method and an apparatus for encoding/decoding a multi-layer video signal
KR20150037660A (en) A method and an apparatus for encoding and decoding a multi-layer video signal
KR20140145559A (en) A method and an apparatus for encoding/decoding a scalable video signal

Legal Events

Date Code Title Description
A201 Request for examination
A302 Request for accelerated examination
E902 Notification of reason for refusal
A107 Divisional application of patent