WO2013168952A1 - Inter-layer prediction method and apparatus using same - Google Patents

Inter-layer prediction method and apparatus using same

Info

Publication number
WO2013168952A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
prediction
weight
sample value
layer
Prior art date
Application number
PCT/KR2013/003936
Other languages
English (en)
Korean (ko)
Inventor
Park Joonyoung (박준영)
Kim Chulkeun (김철근)
Hendry Hendry (헨드리헨드리)
Jeon Byeongmoon (전병문)
Kim Jungsun (김정선)
Original Assignee
LG Electronics Inc. (엘지전자 주식회사)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc.
Publication of WO2013168952A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/33 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction

Definitions

  • the present invention relates to video compression techniques, and more particularly, to a method and apparatus for performing scalable video coding.
  • As terminal devices come to support different video qualities and as network environments diversify, video of ordinary quality may be used in one environment while higher-quality video is used in another.
  • a consumer who purchases video content on a mobile terminal can view the same video content on a larger screen and at a higher resolution through a large display in the home.
  • UHD: Ultra High Definition
  • it is necessary to provide scalability in the quality of the image, for example the picture quality, the resolution of the image, the size of the image, and the frame rate of the video. In addition, various image processing methods associated with such scalability should be discussed.
  • An object of the present invention is to provide a method for performing intra prediction on a current layer using information of another layer and an apparatus using the same.
  • Another object of the present invention is to provide a method for generating a prediction block of a current layer using information of another layer and an apparatus using the same.
  • Another object of the present invention is to provide a method for performing intra bi-prediction of a current layer using other layer information and an apparatus using the same.
  • the method may include generating a first block composed of values obtained by upsampling a reconstruction value of a reference block of a reference layer corresponding to the current block, generating a second block including prediction values derived based on the intra prediction mode of the current block, and generating a prediction block of the current block by combining sample values of the first block and the second block.
  • the sample value of the prediction block may be calculated based on an arithmetic mean of the sample value of the first block and the sample value of the second block.
  • the sample value of the prediction block may be calculated based on a weighted average of the sample value of the first block and the sample value of the second block.
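The two combination rules above (arithmetic mean and weighted average of the upsampled base-layer block and the intra-predicted block) can be sketched as follows; the block contents and the fixed weight values are hypothetical, not taken from the patent:

```python
import numpy as np

def combine_blocks(first_block, second_block, w1=1, w2=1):
    """Combine the upsampled base-layer block (first_block) and the
    intra-predicted block (second_block) sample by sample.

    With w1 == w2 this reduces to the arithmetic mean; otherwise it is
    a weighted average with weights w1 and w2 (hypothetical values)."""
    fb = np.asarray(first_block, dtype=np.int64)
    sb = np.asarray(second_block, dtype=np.int64)
    return (w1 * fb + w2 * sb) // (w1 + w2)

# Arithmetic mean of two 2x2 sample blocks
pred = combine_blocks([[100, 104], [96, 100]], [[104, 108], [100, 104]])
# Weighted average favouring the base-layer block 3:1
pred_w = combine_blocks([[100, 104], [96, 100]], [[104, 108], [100, 104]],
                        w1=3, w2=1)
```

The integer division mirrors the fixed-point arithmetic codecs use; a real design would also specify the rounding offset, which the patent treats separately.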
  • the generating of the prediction block may include calculating a sample value of the prediction block based on a weighted average of the sample value of the first block and the sample value of the second block; when the weight for the first block is a first weight and the weight for the second block is a second weight, the ratio of the first weight to the second weight may be set higher from the upper left to the lower right of the prediction block.
  • when the weight for the first block is a first weight, the weight for the second block is a second weight, and the intra prediction mode of the current block is a vertical mode, the ratio of the first weight to the second weight may be set higher from the top to the bottom of the prediction block.
  • when the weight for the first block is a first weight, the weight for the second block is a second weight, and the intra prediction mode of the current block is a horizontal mode, the ratio of the first weight to the second weight may be set higher from the left to the right of the prediction block.
  • the generating of the prediction block may add an offset giving the effect of rounding up or rounding off to the sample value of the prediction block.
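A minimal sketch of the position-dependent weighting described above for the vertical mode, including a rounding offset; the linear weight ramp, the 6-bit fixed-point precision, and the sample values are assumptions for illustration only:

```python
import numpy as np

def combine_vertical(first_block, second_block, shift=6):
    """Weighted combination in which the weight of the first
    (base-layer) block grows from the top row to the bottom row, as
    described for the vertical intra mode. Integer weights with
    `shift`-bit precision and a rounding offset of 2**(shift-1) are
    assumptions, not the patent's normative values."""
    fb = np.asarray(first_block, dtype=np.int64)
    sb = np.asarray(second_block, dtype=np.int64)
    h = fb.shape[0]
    total = 1 << shift                  # w1 + w2 is kept constant
    offset = 1 << (shift - 1)           # gives a round-half effect
    rows = []
    for y in range(h):
        w1 = (y + 1) * total // (h + 1)  # increases top -> bottom
        w2 = total - w1
        rows.append((w1 * fb[y] + w2 * sb[y] + offset) >> shift)
    return np.stack(rows)
```

The horizontal-mode variant would apply the same ramp per column instead of per row.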
  • the video decoding apparatus includes a prediction unit configured to generate a first block composed of values obtained by upsampling a reconstruction value of a reference block of a reference layer corresponding to the current block, to generate a second block including prediction values derived based on the intra prediction mode of the current block, and to generate a prediction block of the current block by combining sample values of the first block and the second block.
  • a method of performing intra prediction on a current layer using information of another layer and an apparatus using the same are provided.
  • a method of generating a prediction block of a current layer using information of another layer and an apparatus using the same are provided.
  • a method of performing intra bi-prediction on a current layer using other layer information and an apparatus using the same are provided.
  • FIG. 1 is a block diagram schematically illustrating a video encoding apparatus supporting scalability according to an embodiment of the present invention.
  • FIG. 2 is a block diagram schematically illustrating a video decoding apparatus supporting scalability according to an embodiment of the present invention.
  • FIG. 3 is a block diagram illustrating an example of inter-layer prediction in an encoding apparatus and a decoding apparatus that perform scalable coding according to the present invention.
  • FIG. 4 is a diagram illustrating an intra prediction mode for a luminance component.
  • FIG. 5 is a diagram illustrating a current block and neighboring blocks in which intra prediction is performed according to the present invention.
  • FIG. 6 is a diagram for describing a method of deriving a left prediction mode and an up prediction mode from a neighboring block according to the present invention.
  • FIG. 7 is a control flowchart illustrating a method of forming an MPM list when the prediction modes derived from neighboring blocks are the same according to the present invention.
  • FIG. 8 is a control flowchart illustrating a method of forming an MPM list when prediction modes derived from neighboring blocks are different according to the present invention.
  • FIG. 9 is a diagram for describing generating a prediction value of a current block according to the present invention.
  • FIG. 10 illustrates a current block and a reference block according to an embodiment of the present invention.
  • FIG. 11 illustrates a plurality of prediction blocks for generating a prediction block of a current block according to an embodiment of the present invention.
  • FIG. 12 is a control flowchart illustrating a method of generating a prediction block of a current block according to another embodiment of the present invention.
  • FIG. 13 is a control flowchart illustrating a method of generating a prediction block of a current block according to an embodiment of the present invention.
  • FIG. 14 illustrates a prediction block partitioned into subblocks according to an embodiment of the present invention.
  • FIG. 15 is a control flowchart illustrating a method of generating a prediction block of a current block according to another embodiment of the present invention.
  • FIG. 16 illustrates a prediction block partitioned into subblocks according to another embodiment of the present invention.
  • FIG. 17 is a control flowchart illustrating a method of generating a prediction block of a current block according to another embodiment of the present invention.
  • FIG. 18 illustrates a prediction block partitioned into subblocks according to another embodiment of the present invention.
  • each of the components in the drawings described in the present invention is shown independently for convenience of description of the different characteristic functions in the video encoding/decoding apparatus; this does not mean that each component must be implemented as separate hardware or separate software.
  • two or more of the components may be combined to form one component, or one component may be divided into a plurality of components.
  • Embodiments in which the components are integrated and/or separated are also included in the scope of the present invention without departing from the spirit of the present invention.
  • input signals may be processed for each layer.
  • the input signals may differ in at least one of resolution, frame rate, bit depth, color format, and aspect ratio.
  • scalable coding includes scalable encoding and scalable decoding.
  • prediction between layers is performed by using differences between layers, that is, based on scalability, thereby reducing overlapping transmission / processing of information and increasing compression efficiency.
  • FIG. 1 is a block diagram schematically illustrating a video encoding apparatus supporting scalability according to an embodiment of the present invention.
  • the encoding apparatus 100 includes an encoder 105 for layer 1 and an encoder 135 for layer 0.
  • Layer 0 may be a base layer, a reference layer, or a lower layer
  • layer 1 may be an enhancement layer, a current layer, or an upper layer.
  • the encoding unit 105 of layer 1 includes a prediction unit 110, a transform/quantization unit 115, a filtering unit 120, a decoded picture buffer (DPB) 125, an entropy coding unit 130, and a multiplexer (MUX) 165.
  • the encoding unit 135 of the layer 0 includes a prediction unit 140, a transform / quantization unit 145, a filtering unit 150, a DPB 155, and an entropy coding unit 160.
  • the prediction units 110 and 140 may perform inter prediction and intra prediction on the input image.
  • the prediction units 110 and 140 may perform prediction in predetermined processing units.
  • the processing unit of prediction may be a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
  • the prediction units 110 and 140 may determine whether to apply inter prediction or intra prediction in CU units, determine the prediction mode in PU units, and perform prediction in PU or TU units. The prediction performed includes generation of a prediction block and generation of a residual block (residual signal).
  • a prediction block may be generated by performing prediction based on information of at least one picture of a previous picture and / or a subsequent picture of the current picture.
  • prediction blocks may be generated by performing prediction based on pixel information in a current picture.
  • As inter prediction, there are a skip mode, a merge mode, a motion vector predictor (MVP) mode, and the like.
  • a reference picture may be selected with respect to the current PU that is a prediction target, and a reference block corresponding to the current PU may be selected within the reference picture.
  • the prediction units 110 and 140 may generate a prediction block based on the reference block.
  • the prediction block may be generated in integer sample units or in fractional sample units.
  • the motion vector may also be expressed in integer or fractional sample units.
  • motion information, that is, information such as an index of a reference picture, a motion vector, and a residual signal, is transmitted for inter prediction.
  • when the skip mode is applied, residuals may not be generated, transformed, quantized, or transmitted.
  • the intra prediction mode may have 33 directional prediction modes and at least two non-directional modes.
  • the non-directional modes may include a DC prediction mode and a planar mode.
  • a prediction block may be generated after applying a filter to a reference sample.
  • the PU may be a block of various sizes and shapes; for example, in the case of inter prediction, the PU may be a 2N×2N block, a 2N×N block, an N×2N block, or an N×N block (N is an integer).
  • in the case of intra prediction, the PU may be a 2N×2N block or an N×N block (where N is an integer).
  • the PU of the N ⁇ N block size may be set to apply only in a specific case.
  • the NxN block size PU may be used only for the minimum size CU or only for intra prediction.
  • PUs such as N×mN blocks, mN×N blocks, 2N×mN blocks, or mN×2N blocks (m < 1) may be further defined and used.
  • the prediction unit 110 may perform prediction for layer 1 using the information of the layer 0.
  • a method of predicting information of a current layer using information of another layer is referred to as inter-layer prediction for convenience of description.
  • Information of the current layer that is predicted using information of another layer may include texture, motion information, unit information, and predetermined parameters (e.g., filtering parameters).
  • information of another layer used for prediction of the current layer may include texture, motion information, unit information, and predetermined parameters (e.g., filtering parameters).
  • inter-layer motion prediction is also referred to as inter-layer inter prediction.
  • prediction of a current block of layer 1 may be performed using motion information of layer 0 (reference layer or base layer).
  • motion information of a reference layer may be scaled.
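Scaling the reference-layer motion information to the enhancement-layer resolution can be sketched as follows; the 2x spatial ratio, the quarter-sample MV units, and the round-half-up rule are assumptions for illustration, since the exact scaling is defined by the codec specification:

```python
def scale_mv(mv, ratio_num=2, ratio_den=1):
    """Scale a base-layer motion vector (mvx, mvy) by the spatial
    resolution ratio between the enhancement and base layers.

    Round-half-up integer scaling toward zero-symmetric magnitudes is
    assumed; a real codec pins down the exact rounding normatively."""
    mvx, mvy = mv

    def s(v):
        sign = -1 if v < 0 else 1
        return sign * ((abs(v) * ratio_num + ratio_den // 2) // ratio_den)

    return (s(mvx), s(mvy))

# A base-layer MV of (3, -5) in quarter-sample units, 2x spatial scalability
scaled = scale_mv((3, -5))  # -> (6, -10)
```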
  • inter-layer texture prediction is also called inter-layer intra prediction or intra base layer (BL) prediction.
  • Inter layer texture prediction may be applied when a reference block in a reference layer is reconstructed by intra prediction.
  • the texture of the reference block in the reference layer may be used as a prediction value for the current block of the enhancement layer.
  • the texture of the reference block may be scaled by upsampling.
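The upsampling of the reference-block texture can be sketched with simple nearest-neighbour 2x interpolation; actual scalable codecs use normative interpolation filters, so this is only an illustrative stand-in:

```python
import numpy as np

def upsample_nearest_2x(block):
    """Nearest-neighbour 2x upsampling of a reconstructed reference
    block; a stand-in for the codec's normative interpolation filter."""
    b = np.asarray(block)
    # Duplicate each row, then each column, so every base-layer sample
    # covers a 2x2 area in the enhancement-layer grid.
    return np.repeat(np.repeat(b, 2, axis=0), 2, axis=1)

up = upsample_nearest_2x([[1, 2], [3, 4]])
```

A production upsampler would instead apply the multi-tap filters the standard specifies, with phase offsets depending on the scaling ratio.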
  • inter-layer unit parameter prediction may derive unit (CU, PU, and/or TU) information of the base layer and use it as unit information of the enhancement layer, or may determine unit information of the enhancement layer based on unit information of the base layer.
  • the unit information may include information at each unit level.
  • information about a unit (CU, PU, and/or TU) may include information on its partition, information on transform, information on prediction, and information on coding.
  • information on a PU partition and information on prediction (e.g., motion information, information on a prediction mode, etc.) may be included.
  • the information about the TU may include information about a TU partition and information on transform (transform coefficients, transform method, etc.).
  • the unit information may include only the partition information of the processing unit (eg, CU, PU, TU, etc.).
  • inter-layer parameter prediction may derive a parameter used in the base layer to reuse it in the enhancement layer or predict a parameter for the enhancement layer based on the parameter used in the base layer.
  • As examples of inter-layer prediction, inter-layer texture prediction, inter-layer motion prediction, inter-layer unit information prediction, and inter-layer parameter prediction have been described; however, the inter-layer prediction applicable to the present invention is not limited thereto.
  • the prediction unit 110 may use, as inter-layer prediction, inter-layer residual prediction, which predicts the residual of the current layer using residual information of another layer, and may perform prediction on the current block in the current layer based on it.
  • the prediction unit 110 may perform, as inter-layer prediction, inter-layer difference prediction, which predicts the current block in the current layer using a difference image between the reconstructed picture of the current layer and the resampled reconstructed picture of another layer.
  • the prediction unit 110 may use interlayer syntax prediction that predicts or generates a texture of a current block using syntax information of another layer as interlayer prediction.
  • the syntax information of the reference layer used for prediction of the current block may be information about an intra prediction mode, motion information, and the like.
  • inter-layer syntax prediction may be performed by referring to the intra prediction mode from a block to which the intra prediction mode is applied in the reference layer, and by referring to motion information from a block to which the inter prediction mode is applied.
  • for example, even when the reference layer is a P slice or a B slice, a reference block in the slice may be a block to which an intra prediction mode has been applied.
  • in this case, inter-layer prediction may be performed to generate or predict a texture for the current block by using the intra prediction mode of the reference block among the syntax information of the reference layer.
  • prediction of the current block may be performed using the prediction information of layer 0 while additionally using unit information or filtering parameter information of layer 0 or of the corresponding block.
  • This combination of inter-layer prediction methods can also be applied to the predictions described below in this specification.
  • the transform / quantization units 115 and 145 may perform transform on the residual block in transform block units to generate transform coefficients and quantize the transform coefficients.
  • the transform block is a block of samples and is a block to which the same transform is applied.
  • the transform block can be a transform unit (TU) and can have a quad tree structure.
  • the transform/quantization units 115 and 145 may generate a 2-D array of transform coefficients by performing the transform according to the prediction mode applied to the residual block and the size of the block. For example, if intra prediction is applied to a residual block and the block is a 4×4 residual array, the residual block may be transformed using a discrete sine transform (DST); otherwise, the residual block may be transformed using a discrete cosine transform (DCT).
  • DST: discrete sine transform
  • DCT: discrete cosine transform
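The selection rule just described (DST for a 4×4 intra residual, DCT otherwise) can be sketched as a simple predicate; the function name and boolean interface are illustrative, and HEVC additionally restricts the DST case to the luma component:

```python
def select_transform(intra: bool, width: int, height: int) -> str:
    """Pick the transform for a residual block following the rule in
    the text: a 4x4 intra residual uses the DST, everything else the
    DCT. (In HEVC this DST case applies to luma only.)"""
    if intra and width == 4 and height == 4:
        return "DST"
    return "DCT"
```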
  • the transform / quantization unit 115 and 145 may quantize the transform coefficients to generate quantized transform coefficients.
  • the transform/quantization units 115 and 145 may transfer the quantized transform coefficients to the entropy coding units 130 and 160.
  • the transform/quantization units 115 and 145 may rearrange the two-dimensional array of quantized transform coefficients into a one-dimensional array according to a predetermined scan order and transfer it to the entropy coding units 130 and 160.
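Rearranging the 2-D coefficient array into a 1-D array by a predetermined scan can be sketched with an up-right diagonal scan; the pattern is illustrative, since the scan actually used depends on the prediction mode and block size:

```python
import numpy as np

def diagonal_scan(block):
    """Flatten a 2-D coefficient array along up-right diagonals
    (bottom-left to top-right within each diagonal), an illustrative
    stand-in for the codec's predetermined scan order."""
    b = np.asarray(block)
    h, w = b.shape
    out = []
    for d in range(h + w - 1):
        # walk the diagonal x + y == d from bottom-left to top-right
        for y in range(min(d, h - 1), max(0, d - w + 1) - 1, -1):
            out.append(int(b[y, d - y]))
    return out
```

For a 2×2 block [[1, 2], [3, 4]] this yields [1, 3, 2, 4].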
  • the transform / quantizers 115 and 145 may transfer the reconstructed block generated based on the residual and the predictive block to the filtering units 120 and 150 for inter prediction.
  • the transform / quantization units 115 and 145 may skip transform and perform quantization only or omit both transform and quantization as necessary.
  • for example, the transform/quantization units 115 and 145 may omit the transform for a block to which a specific prediction method is applied, for a block of a specific size, or for a block of a specific size to which a specific prediction method is applied.
  • the entropy coding units 130 and 160 may perform entropy encoding on the quantized transform coefficients.
  • Entropy encoding may use, for example, an encoding method such as Exponential Golomb, Context-Adaptive Binary Arithmetic Coding (CABAC), or the like.
  • CABAC: Context-Adaptive Binary Arithmetic Coding
  • the filtering units 120 and 150 may apply a deblocking filter, an adaptive loop filter (ALF), and a sample adaptive offset (SAO) to the reconstructed picture.
  • ALF: adaptive loop filter
  • SAO: sample adaptive offset
  • the deblocking filter may remove distortion generated at the boundary between blocks in the reconstructed picture.
  • the adaptive loop filter may perform filtering based on a value obtained by comparing the reconstructed image with the original image after the block is filtered through the deblocking filter.
  • SAO restores, on a pixel-by-pixel basis, the offset difference from the original image for the picture to which the deblocking filter has been applied, and is applied in the forms of a band offset, an edge offset, and the like.
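The band-offset form of SAO mentioned above can be sketched as follows; the choice of 4 consecutive signalled bands out of 32 matches HEVC's band offset, but the concrete band index and offset values here are hypothetical:

```python
import numpy as np

def sao_band_offset(samples, start_band, offsets, bit_depth=8):
    """Apply a band offset: samples are classified into 32 equal
    intensity bands, and the 4 consecutive bands starting at
    `start_band` receive the signalled offsets (values here are
    hypothetical)."""
    s = np.asarray(samples, dtype=np.int64)
    band = s >> (bit_depth - 5)        # 32 bands -> 5 band-index bits
    out = s.copy()
    for i, off in enumerate(offsets):  # the 4 signalled bands
        out[band == start_band + i] += off
    return np.clip(out, 0, (1 << bit_depth) - 1)
```

Edge offset would instead classify each sample by comparing it with two neighbours along a signalled direction before adding the offset.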
  • the filtering units 120 and 150 may not apply all of the deblocking filter, the ALF, and the SAO; they may apply only the deblocking filter, only the deblocking filter and the ALF, or only the deblocking filter and the SAO.
  • the DPBs 125 and 155 may receive a reconstructed block or a reconstructed picture from the filtering units 120 and 150 and store it.
  • the DPBs 125 and 155 may provide a reconstructed block or picture to the predictors 110 and 140 that perform inter prediction.
  • Information output from the entropy coding unit 160 of layer 0 and information output from the entropy coding unit 130 of layer 1 may be multiplexed by the MUX 165 and output as a bitstream.
  • Here, the encoding unit 105 of layer 1 has been described as including the MUX 165; however, the MUX may be a device or module separate from the encoding unit 105 of layer 1 and the encoding unit 135 of layer 0.
  • the encoding device of FIG. 1 may be implemented as an electronic device capable of capturing and encoding an image, including a camera.
  • the encoding device may be implemented in or included in a personal terminal such as a television, computer system, portable telephone or tablet PC, or the like.
  • FIG. 2 is a block diagram schematically illustrating a video decoding apparatus supporting scalability according to an embodiment of the present invention.
  • the decoding apparatus 200 includes a decoder 210 of layer 1 and a decoder 250 of layer 0.
  • Layer 0 may be a base layer, a reference layer, or a lower layer
  • layer 1 may be an enhancement layer, a current layer, or an upper layer.
  • the decoding unit 210 of layer 1 may include an entropy decoding unit 215, a reordering unit 220, an inverse quantization unit 225, an inverse transform unit 230, a prediction unit 235, a filtering unit 240, and a memory 245.
  • the decoding unit 250 of layer 0 may include an entropy decoding unit 255, a reordering unit 260, an inverse quantization unit 265, an inverse transform unit 270, a prediction unit 275, a filtering unit 280, and a memory 285.
  • the DEMUX 205 may demultiplex the information for each layer and deliver the information to the decoding device for each layer.
  • the entropy decoding units 215 and 255 may perform entropy decoding corresponding to the entropy coding scheme used in the encoding apparatus. For example, when CABAC is used in the encoding apparatus, the entropy decoding units 215 and 255 may also perform entropy decoding using CABAC.
  • Of the information decoded by the entropy decoding units 215 and 255, information for generating a prediction block is provided to the prediction units 235 and 275, and the residual values on which entropy decoding has been performed by the entropy decoding units 215 and 255, that is, the quantized transform coefficients, may be input to the reordering units 220 and 260.
  • the reordering units 220 and 260 may rearrange the information of the bitstreams entropy decoded by the entropy decoding units 215 and 255, that is, the quantized transform coefficients, based on the reordering method in the encoding apparatus.
  • the reordering units 220 and 260 may rearrange the quantized transform coefficients of the one-dimensional array into the coefficients of the two-dimensional array.
  • the reordering units 220 and 260 may generate a two-dimensional array of coefficients (quantized transform coefficients) by performing scanning based on the prediction mode applied to the current block (transform block) and / or the size of the transform block.
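The decoder-side reordering that rebuilds the 2-D coefficient array from the received 1-D list can be sketched as the inverse of an up-right diagonal scan; the scan pattern itself is illustrative, since the real order depends on the prediction mode and transform-block size:

```python
import numpy as np

def inverse_diagonal_scan(coeffs, h, w):
    """Place a 1-D coefficient list back into an h x w 2-D array in
    up-right diagonal order, the inverse of the encoder-side scan
    (illustrative only)."""
    b = np.zeros((h, w), dtype=np.int64)
    it = iter(coeffs)
    for d in range(h + w - 1):
        # refill the diagonal x + y == d from bottom-left to top-right
        for y in range(min(d, h - 1), max(0, d - w + 1) - 1, -1):
            b[y, d - y] = next(it)
    return b

# The sequence [1, 3, 2, 4] restores the 2x2 block [[1, 2], [3, 4]]
restored = inverse_diagonal_scan([1, 3, 2, 4], 2, 2)
```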
  • the inverse quantizers 225 and 265 may generate transform coefficients by performing inverse quantization based on the quantization parameter provided by the encoding apparatus and the coefficient values of the rearranged block.
  • the inverse transform units 230 and 270 may perform inverse transform on the transform performed by the transform unit of the encoding apparatus.
  • the inverse transform units 230 and 270 may perform inverse DCT and / or inverse DST on a discrete cosine transform (DCT) and a discrete sine transform (DST) performed by an encoding apparatus.
  • the DCT and/or DST in the encoding apparatus may be performed selectively according to a plurality of pieces of information such as the prediction method, the size of the current block, and the prediction direction, and the inverse transform units 230 and 270 of the decoding apparatus may perform the inverse transform based on the transform information used in the encoding apparatus.
  • the inverse transform units 230 and 270 may apply inverse DCT and inverse DST according to a prediction mode / block size.
  • the inverse transformers 230 and 270 may apply an inverse DST to a 4x4 luma block to which intra prediction is applied.
  • the inverse transform units 230 and 270 may fixedly use a specific inverse transform method regardless of the prediction mode / block size.
  • for example, the inverse transform units 230 and 270 may apply only the inverse DST to all transform blocks.
  • alternatively, the inverse transform units 230 and 270 may apply only the inverse DCT to all transform blocks.
  • the inverse transform units 230 and 270 may generate a residual or residual block by inversely transforming the transform coefficients or the block of the transform coefficients.
  • the inverse transform units 230 and 270 may also skip the transform as needed or in accordance with the manner of encoding in the encoding apparatus. For example, the inverse transform units 230 and 270 may omit the transform for a block to which a specific prediction method is applied, for a block of a specific size, or for a block of a specific size to which a specific prediction method is applied.
  • the prediction units 235 and 275 may perform prediction on the current block based on the prediction-block generation information transmitted from the entropy decoding units 215 and 255 and on the previously decoded block and/or picture information provided by the memories 245 and 285, and may generate a prediction block.
  • the prediction units 235 and 275 may perform intra prediction on the current block based on pixel information in the current picture.
  • the prediction units 235 and 275 may perform inter prediction on the current block based on information included in at least one of a previous picture or a subsequent picture of the current picture. Some or all of the motion information required for inter prediction may be derived by checking the information received from the encoding apparatus.
  • when the skip mode is applied, the prediction block may be used as the reconstruction block.
  • the prediction unit 235 of layer 1 may perform inter prediction or intra prediction using only information in layer 1, or may perform inter layer prediction using information of another layer (layer 0).
  • the prediction unit 235 of the layer 1 may perform prediction on the current block by using one of the motion information of the layer 1, the texture information of the layer 1, the unit information of the layer 1, and the parameter information of the layer 1.
  • the prediction unit 235 of layer 1 may receive motion information of layer 0 from the prediction unit 275 of layer 0 to perform inter-layer motion prediction.
  • Inter-layer motion prediction is also called inter-layer inter prediction.
  • inter-layer motion prediction prediction of a current block of a current layer (enhanced layer) may be performed using motion information of a reference layer (base layer).
  • the prediction unit 235 may scale and use the motion information of the reference layer when necessary.
  • the predictor 235 of the layer 1 may receive texture information of the layer 0 from the predictor 275 of the layer 0 to perform texture prediction.
  • Texture prediction is also called inter layer intra prediction or intra base layer (BL) prediction. Texture prediction may be applied when the reference block of the reference layer is reconstructed by intra prediction.
  • inter-layer intra prediction the texture of the reference block in the reference layer may be used as a prediction value for the current block of the enhancement layer. In this case, the texture of the reference block may be scaled by upsampling.
  • the predictor 235 of the layer 1 may receive unit parameter information of the layer 0 from the predictor 275 of the layer 0 to perform unit parameter prediction.
  • in inter-layer unit parameter prediction, unit (CU, PU, and/or TU) information of the base layer may be used as unit information of the enhancement layer, or unit information of the enhancement layer may be determined based on unit information of the base layer.
  • the predictor 235 of the layer 1 may perform parameter prediction by receiving parameter information regarding the filtering of the layer 0 from the predictor 275 of the layer 0.
  • in inter-layer parameter prediction, the parameters used in the base layer may be derived and reused in the enhancement layer, or the parameters for the enhancement layer may be predicted based on the parameters used in the base layer.
  • the prediction information of the layer 0 may be used to predict the current block while additionally using unit information or filtering parameter information of the corresponding layer 0 or the corresponding block.
  • This combination of inter-layer prediction methods can also be applied to the predictions described below in this specification.
  • the adders 290 and 295 may generate a reconstruction block using the prediction blocks generated by the predictors 235 and 275 and the residual blocks generated by the inverse transformers 230 and 270.
  • the adders 290 and 295 can be viewed as separate units (reconstruction block generation units) for generating the reconstruction block.
  • Blocks and / or pictures reconstructed by the adders 290 and 295 may be provided to the filtering units 240 and 280.
• The filtering unit 240 of layer 1 may also filter the reconstructed picture using parameter information transmitted from the prediction unit 235 of layer 1 and/or the filtering unit 280 of layer 0.
  • the filtering unit 240 may apply filtering to or between layers using the parameters predicted from the parameters of the filtering applied in the layer 0.
  • the memories 245 and 285 may store the reconstructed picture or block to use as a reference picture or reference block.
  • the memories 245 and 285 may output the stored reconstructed picture through a predetermined output unit (not shown) or a display (not shown).
• In the above, the decoding apparatus is described as being divided into a reordering unit, an inverse quantization unit, an inverse transform unit, and the like; however, the decoding apparatus may also be configured so that reordering, inverse quantization, and inverse transformation are performed in order within a single inverse quantization/inverse transform module.
• Likewise, the prediction unit of layer 1 may be regarded as including an inter-layer prediction unit that performs prediction using information of another layer (layer 0) and an inter/intra prediction unit that performs prediction without using the information of the other layer (layer 0).
  • the decoding apparatus of FIG. 2 may be implemented as various electronic devices capable of playing back, or playing back and displaying an image.
  • the decoding device may be implemented in or included in a set-top box, a television, a computer system, a portable telephone, a personal terminal such as a tablet PC, or the like.
  • FIG. 3 is a block diagram illustrating an example of inter-layer prediction in an encoding apparatus and a decoding apparatus that perform scalable coding according to the present invention.
  • the predictor 300 of layer 1 includes an inter / intra predictor 340 and an interlayer predictor 350.
  • the prediction unit 300 of the layer 1 may perform interlayer prediction necessary for the prediction of the layer 1 from the information of the layer 0.
  • the interlayer prediction unit 350 may receive interlayer prediction information from the prediction unit 320 and / or the filtering unit 330 of the layer 0 to perform interlayer prediction necessary for the prediction of the layer 1.
  • the inter / intra prediction unit 340 of the layer 1 may perform inter prediction or intra prediction using the information of the layer 1 without using the information of the layer 0.
  • the inter / intra predictor 340 of the layer 1 may perform prediction based on the information of the layer 0 using the information transmitted from the interlayer predictor 350.
  • the filtering unit 310 of the layer 1 may perform the filtering based on the information of the layer 0 or may perform the filtering based on the information of the layer 1.
• Information of layer 0 may be transferred from the filtering unit 330 of layer 0 to the filtering unit 310 of layer 1, or may be transferred from the inter-layer prediction unit 350 of layer 1 to the filtering unit 310 of layer 1.
• The information transmitted from layer 0 to the inter-layer prediction unit 350 may be at least one of information about a unit parameter of layer 0, motion information of layer 0, texture information of layer 0, and filter parameter information of layer 0.
  • the texture predictor 360 may use the texture of the reference block in the reference layer as a prediction value for the current block of the enhancement layer. In this case, the texture predictor 360 may scale the texture of the reference block by upsampling.
  • the motion predictor 370 may predict the current block of layer 1 (the current layer or the enhancement layer) by using the motion information of the layer 0 (the reference layer or the base layer). In this case, the motion predictor 370 may scale the motion information of the reference layer.
• The unit information prediction unit 380 may derive unit (CU, PU, and/or TU) information of the base layer and use it as the unit information of the enhancement layer, or may determine the unit information of the enhancement layer based on the unit information of the base layer.
  • the parameter predictor 390 may derive the parameters used in the base layer to reuse them in the enhancement layer or predict the parameters for the enhancement layer based on the parameters used in the base layer.
• As examples of inter-layer prediction, inter-layer texture prediction, inter-layer motion prediction, inter-layer unit information prediction, and inter-layer parameter prediction have been described; however, the inter-layer prediction applicable to the present invention is not limited thereto.
  • the inter-layer prediction unit may further include a sub-prediction unit that performs inter-layer residual prediction, a sub-prediction unit that performs inter-layer syntax prediction, and / or a sub-prediction unit that performs inter-layer difference prediction.
  • the interlayer residual prediction, the interlayer differential prediction, the interlayer syntax prediction, and the like may be performed using a combination of prediction units.
• The prediction unit 300 may correspond to the prediction unit 110 of FIG. 1, and the filtering unit 310 may correspond to the filtering unit 120 of FIG. 1.
  • the predictor 320 may correspond to the predictor 140 of FIG. 1
  • the filter 330 may correspond to the filter 150 of FIG. 1.
• The prediction unit 300 may correspond to the prediction unit 235 of FIG. 2, and the filtering unit 310 may correspond to the filtering unit 240 of FIG. 2.
  • the predictor 320 may correspond to the predictor 275 of FIG. 2
  • the filter 330 may correspond to the filter 280 of FIG. 2.
  • inter-layer prediction for predicting information of a current layer using information of another layer may be performed.
• For the current picture of layer 1, inter-layer prediction may be performed using information of the reference picture of layer 0.
  • Intra prediction generates a prediction block for a current block (hereinafter, referred to as a current block) to be predicted based on the reconstructed pixels in the current layer.
  • FIG. 4 is a diagram illustrating an example of an intra prediction mode.
  • a prediction mode may be largely classified into a directional mode and a non-directional mode according to the direction in which reference pixels used for pixel value prediction are located and a prediction method.
• FIG. 4 shows 33 directional prediction modes and at least two non-directional modes.
• The non-directional modes may include a DC mode and a planar mode.
• In the DC mode, a single fixed value, for example, the average of the surrounding reconstructed pixel values, is used as the prediction value.
• In the planar mode, vertical interpolation and horizontal interpolation are performed using the vertically adjacent pixel values and the horizontally adjacent pixel values of the current block, and their average value is used as the prediction value.
• The directional modes are angular modes that indicate a corresponding direction by the angle between a reference pixel located in a predetermined direction and the current pixel, and may include a horizontal mode and a vertical mode.
• In the vertical mode, vertically adjacent pixel values of the current block are used as prediction values of the current block, and in the horizontal mode, horizontally adjacent pixel values are used as prediction values of the current block.
  • this prediction mode may be specified using a predetermined angle and mode number.
  • a prediction block may be generated after applying a filter to a reference pixel.
  • the prediction mode for the current block may be transmitted with a value indicating the mode itself, but may be derived from information about candidate intra prediction modes that are likely to be the prediction mode of the current block.
  • the candidate intra prediction mode for the current block may be derived using the intra prediction mode of the neighboring block adjacent to the current block, and may be referred to as most probable mode (MPM).
  • FIG. 5 is a diagram illustrating a current block and neighboring blocks in which intra prediction is performed according to the present invention.
• To derive the candidate intra prediction modes, a left neighboring block 520 positioned to the left of the current block 510 and an upper neighboring block 530 positioned above the current block 510 may be used.
  • the sizes of the neighboring blocks 520 and 530 may be the same or different.
• The MPM list may consist of the left prediction mode, the upper prediction mode, and one or more prediction modes that do not overlap with the left prediction mode and the upper prediction mode.
  • FIG. 6 is a diagram for describing a method of deriving a left prediction mode and an up prediction mode from neighboring blocks.
  • a process of deriving two candidate intra prediction modes from the left neighboring block 520 and the upper neighboring block 530 will be described with reference to FIG. 6.
• If the left neighboring block 520 is valid and is a block to which intra prediction is applied (S601), the intra prediction mode of the left neighboring block 520 is derived as the left prediction mode (S602).
• Otherwise, the left prediction mode may be set to a specific preset prediction mode; for example, the specific prediction mode may be the DC mode (S603).
• If the upper neighboring block 530 is valid, the upper neighboring block 530 is a block to which intra prediction is applied, and the upper neighboring block 530 is in the coding tree block to which the current block 510 belongs (S604), the intra prediction mode of the upper neighboring block 530 is derived as the upper prediction mode (S605).
• If the upper neighboring block 530 is not valid (information about the prediction mode of the upper neighboring block 530 does not exist), if the upper neighboring block 530 is not a block to which intra prediction is applied, or if the upper neighboring block 530 is not in the coding tree block to which the current block 510 belongs (S604), that is, if any one of the three conditions is not satisfied, the upper prediction mode may be derived as a specific preset prediction mode (S606). In this case, the specific prediction mode may be the DC mode.
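The availability checks of S601 to S606 can be sketched as follows. The neighbor descriptors and the mode numbering are illustrative assumptions; only the three conditions (validity, intra coding, and, for the upper neighbor, membership in the same coding tree block) follow the description above.

```python
DC_MODE = 1  # preset fallback prediction mode for S603/S606 (assumed numbering)

def derive_left_mode(left):
    """S601-S603: the left neighbor must exist and be intra-coded."""
    if left is not None and left.get('is_intra'):
        return left['intra_mode']
    return DC_MODE

def derive_up_mode(up):
    """S604-S606: the upper neighbor must additionally lie in the same
    coding tree block as the current block; otherwise fall back to DC."""
    if up is not None and up.get('is_intra') and up.get('in_same_ctb'):
        return up['intra_mode']
    return DC_MODE
```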
  • one or more prediction modes may be further derived to form an MPM list including three candidate intra prediction modes.
• If the prediction mode for the current block is in the MPM list, information indicating which one of the candidate intra prediction modes it is may be encoded and decoded. On the other hand, if the prediction mode for the current block cannot be inferred from the intra prediction modes of the neighboring blocks, the information about the intra prediction mode for the current block may be separately encoded and decoded.
  • FIG. 7 and 8 are control flowcharts illustrating a process of forming an MPM list including three candidate intra prediction modes using the left prediction mode and the up prediction mode derived according to FIG. 6.
  • FIG. 7 illustrates a process of forming an MPM list when the left prediction mode and the up prediction mode are the same
  • FIG. 8 illustrates a process of forming an MPM list when the left prediction mode and the up prediction mode are different.
• The candidate intra prediction mode indexed first in the MPM list is denoted MPM[0], the second candidate intra prediction mode MPM[1], and the third candidate intra prediction mode MPM[2].
• If the left prediction mode and the upper prediction mode are the same (S701), it is determined whether the left prediction mode is the DC mode or the planar mode (S702).
• If the left prediction mode is the DC mode or the planar mode, MPM[0] is derived as the planar mode, MPM[1] as the DC mode, and MPM[2] as the vertical mode (S703).
• Otherwise, that is, if the left prediction mode is a directional mode, MPM[0] is derived as the left prediction mode.
• MPM[1] and MPM[2] can be derived as the prediction modes having angles adjacent to the left prediction mode; while having adjacent angles, they must be expressible as one of the prediction modes allowed by FIG. 4.
• For example, MPM[1] and MPM[2] may be derived as follows (S704):
• MPM[1] = 2 + ((left prediction mode - 2 - 1) % 32)
• MPM[2] = 2 + ((left prediction mode - 2 + 1) % 32)
• In other words, when the left prediction mode is the same as the upper prediction mode and the left prediction mode is the planar mode or the DC mode, MPM[0] is derived as the planar mode, MPM[1] as the DC mode, and MPM[2] as the vertical mode.
• Otherwise, MPM[0] may be derived as the left prediction mode, and MPM[1] and MPM[2] may be derived as the prediction modes having angles adjacent to the left prediction mode.
• When the left prediction mode and the upper prediction mode are different, MPM[0] and MPM[1] are derived as the left prediction mode and the upper prediction mode, respectively, and MPM[2] can be derived as follows. It is determined whether either MPM[0] or MPM[1] is the planar mode (S803); as a result of the determination, if neither MPM[0] nor MPM[1] is the planar mode, MPM[2] is derived as the planar mode (S804).
• If either MPM[0] or MPM[1] is the planar mode, it is determined whether either MPM[0] or MPM[1] is the DC mode (S805).
• If neither MPM[0] nor MPM[1] is the DC mode, MPM[2] is derived as the DC mode (S806).
• Otherwise, MPM[2] may be derived as the vertical mode (S807).
• In other words, MPM[0] and MPM[1] are derived as the left prediction mode and the upper prediction mode, and MPM[2] is derived as the first one of the planar mode, the DC mode, and the vertical mode that is not equal to the left prediction mode or the upper prediction mode.
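The complete MPM list construction of FIGS. 7 and 8 can be sketched as follows. The mode numbers (planar = 0, DC = 1, vertical = 26, angular modes 2 to 34) follow the common HEVC-style numbering and are an assumption here; the modulo expressions keep the adjacent-angle modes within the allowed angular range.

```python
PLANAR, DC, VERTICAL = 0, 1, 26  # HEVC-style mode numbers (assumption)

def build_mpm_list(left_mode, up_mode):
    """Build the 3-entry MPM list from the derived left/upper modes.

    Identical non-angular modes give {planar, DC, vertical} (S703); an
    identical angular mode gives that mode and its two angular neighbors
    (S704); differing modes are completed with the first of planar, DC,
    vertical not already present (FIG. 8).
    """
    if left_mode == up_mode:
        if left_mode in (PLANAR, DC):                 # S702/S703
            return [PLANAR, DC, VERTICAL]
        return [left_mode,                            # S704
                2 + ((left_mode - 2 - 1) % 32),
                2 + ((left_mode - 2 + 1) % 32)]
    mpm = [left_mode, up_mode]                        # S801/S802
    for mode in (PLANAR, DC, VERTICAL):               # S803-S807
        if mode not in mpm:
            mpm.append(mode)
            break
    return mpm
```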
  • Table 1 schematically illustrates an example of syntax elements that may be applied when encoding and decoding an intra prediction mode of a current block. This syntax element may be applied in a prediction unit (PU) or a coding unit (CU).
• The encoding apparatus may encode a prev_intra_luma_pred_flag syntax element indicating whether the intra prediction mode for the current block can be inferred from the intra prediction mode of a neighboring block, as in the example of Table 1.
• If the intra prediction mode of the current block 510 is one of the candidate intra prediction modes, that is, if prev_intra_luma_pred_flag is 1, the encoding apparatus encodes the index information of the MPM list using the mpm_idx syntax element.
• If prev_intra_luma_pred_flag is 0, that is, if the intra prediction mode of the current block 510 cannot be inferred from the intra prediction mode of a neighboring block, the encoding apparatus encodes information about the intra prediction mode for the current block 510, among the remaining prediction modes obtained by excluding the candidate intra prediction modes in the MPM list from the 35 intra prediction modes, using the rem_intra_luma_pred_mode syntax element.
  • the decoding apparatus derives a candidate intra prediction mode for the current block 510 to generate an MPM list.
• The decoding apparatus decodes prev_intra_luma_pred_flag received from the encoding apparatus. If prev_intra_luma_pred_flag is 1, it determines that the intra prediction mode of the current block 510 can be inferred from the intra prediction mode of a neighboring block and decodes mpm_idx. By decoding mpm_idx, the intra prediction mode of the current block 510 may be derived as one of the prediction modes in the MPM list.
• If prev_intra_luma_pred_flag is 0, the intra prediction mode of the current block 510 may be decoded by decoding the rem_intra_luma_pred_mode syntax element.
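The decoder-side use of the Table 1 syntax elements can be sketched as follows. The remapping of rem_intra_luma_pred_mode (incrementing past each sorted MPM candidate) follows the common HEVC-style procedure and is an assumption, since the exact mapping is not spelled out in the text above.

```python
def decode_intra_mode(prev_intra_luma_pred_flag, mpm_or_rem, mpm_list):
    """Recover the intra mode from the Table 1 syntax elements.

    If prev_intra_luma_pred_flag is 1, mpm_or_rem is mpm_idx, an index
    into the MPM list. Otherwise it is rem_intra_luma_pred_mode, an index
    into the 35 modes with the MPM candidates removed; each sorted
    candidate at or below the running value shifts it up by one.
    """
    if prev_intra_luma_pred_flag:
        return mpm_list[mpm_or_rem]        # mpm_idx
    mode = mpm_or_rem                      # rem_intra_luma_pred_mode
    for cand in sorted(mpm_list):
        if mode >= cand:
            mode += 1
    return mode
```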
  • the prediction unit generates a prediction value of the current block by using an intra prediction mode for the derived current block.
  • the prediction value of the current block predicted in the intra prediction mode may be generated as a TU or a PU.
• The samples adjacent to the left side of the current block and the samples adjacent to the upper side of the current block are called reference samples and are used to generate the prediction values of the current block. If a sample adjacent to the left side or the upper side of the current block is not available, the unavailable sample may be replaced with the value of a predetermined sample.
  • the reference sample may be replaced with a predetermined value, for example, a median value of sample values that the image may have.
• Otherwise, the reference samples are searched in a predetermined direction, and the left lowermost sample value is replaced with the sample value of the first available sample that is retrieved.
• Then, if an unavailable sample is found among the left adjacent samples, its sample value is replaced with the sample value of the adjacent sample immediately below it, and if an unavailable sample is found among the above adjacent samples, its sample value can be replaced with the sample value of the sample existing on the right side of it.
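Under one reading of the substitution rule above, unavailable left-edge samples take the value of the sample below them and unavailable top-edge samples take the value of the sample to their right. The sketch below implements exactly that reading and is not the verbatim patent procedure; the sample/availability containers are illustrative.

```python
def substitute_reference_samples(left_col, top_row, available_left, available_top):
    """Fill unavailable reference samples by neighbor propagation.

    left_col runs top-to-bottom along the left edge, top_row left-to-right
    along the upper edge. An unavailable left sample copies the sample
    immediately below it; an unavailable above sample copies the sample
    on its right. The bottommost/rightmost samples are assumed available
    (seeded beforehand as described for the left lowermost sample).
    """
    left = list(left_col)
    top = list(top_row)
    for i in range(len(left) - 2, -1, -1):   # bottom-up along the left edge
        if not available_left[i]:
            left[i] = left[i + 1]
    for j in range(len(top) - 2, -1, -1):    # right-to-left along the top edge
        if not available_top[j]:
            top[j] = top[j + 1]
    return left, top
```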
• Sample values adjacent to the current block may be filtered. Whether to apply filtering and the filtering coefficients may be set differently according to the size of the transform block and the intra prediction mode. When the step difference between the reference sample values and the prediction values generated after intra prediction is smoothed through filtering, the discontinuity that may occur at the block boundary may be reduced.
  • the prediction value for the current block may be obtained as a linear interpolation value of a plurality of reference sample values.
• Assuming that the size of the current block is NxN (N is an integer), the position of the upper right sample can be expressed as (N-1, 0) and the position of the lower right sample as (N-1, N-1).
• The prediction value P(x, y) of the prediction sample may be calculated by Equation 1 using the reference sample a located at (x, -1), the reference sample b located at (-1, y), the reference sample c located at (N, -1), and the reference sample d located at (-1, N).
• The sample value of reference sample a is denoted a(x, -1), the sample value of reference sample b is b(-1, y), the sample value of reference sample c is c(N, -1), and the sample value of reference sample d is d(-1, N).
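Equation 1 itself appears only as an image in the source; the HEVC-style planar interpolation below is an assumed form that is consistent with the reference samples a, b, c, d just defined: a horizontal interpolation between b and c and a vertical interpolation between a and d are averaged with rounding.

```python
def planar_predict(a_row, b_col, c, d, N):
    """Planar prediction (assumed HEVC-style form of Equation 1).

    a_row[x] = a(x, -1): upper reference samples
    b_col[y] = b(-1, y): left reference samples
    c = c(N, -1), d = d(-1, N): corner reference samples
    N is assumed to be a power of two so the division is a shift.
    """
    shift = N.bit_length()  # log2(N) + 1 for power-of-two N
    pred = [[0] * N for _ in range(N)]
    for y in range(N):
        for x in range(N):
            horiz = (N - 1 - x) * b_col[y] + (x + 1) * c
            vert = (N - 1 - y) * a_row[x] + (y + 1) * d
            pred[y][x] = (horiz + vert + N) >> shift
    return pred
```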
• In the DC mode, the prediction value for the current block may be generated using the average value of the reference sample values.
• In this case, the boundary region of the prediction block, that is, the left boundary and the upper boundary of the prediction block, may be filtered.
  • the prediction value may be generated using the reference sample value present in the direction of the prediction mode. Reference samples may be stretched according to the direction of the prediction mode.
• When the prediction mode is the horizontal or vertical mode, the prediction value of the current block located at a boundary adjacent to the reference sample may be generated through an arithmetic operation on specific sample values existing at predetermined positions together with the reference sample. The arithmetic operation has the effect of applying filtering to the boundary region adjacent to the reference sample, thereby reducing the discontinuity between the current block and the neighboring block.
• The decoding apparatus and the encoding apparatus may generate a prediction value using the prediction mode information in the current layer when performing intra prediction of the current block in the current layer, but may also generate the prediction block of the current block by referring to information of another layer.
• An embodiment of the present invention performs prediction on the current block by using texture information of the reference layer.
  • FIG. 10 illustrates a current block and a reference block according to an embodiment of the present invention.
  • a corresponding part of the reference picture 1001 corresponding to the current block 1010 of the current picture 1000 is shown as a reference block 1011.
  • the reference block 1011 may be positioned in accordance with a resolution ratio between the current picture 1000 and the reference picture 1001. That is, the coordinates specifying the position of the current block 1010 may correspond to the specific coordinates of the reference picture 1001 according to the resolution ratio.
  • This reference block 1011 may include one prediction unit or may include a plurality of prediction units.
  • the texture of reference block 1011 may be used to generate predictive values for current block 1010.
  • the prediction block of the current block may be generated using two blocks based on different layers.
• The first block 1110 (A) is a block composed of the texture of the reference block 1011 corresponding to the current block 1010 of FIG. 10, upsampled to fit the size of the current block 1010.
• The second block 1120 (B) represents a block composed of prediction values derived through the intra prediction mode of the current block described above.
  • the prediction block 1130 of the current block may be generated through the first block 1110 and the second block 1120.
• The size of the first block 1110, the second block 1120, and the prediction block 1130 is NxN (N is an integer).
  • the position of the upper left sample of the block is (0,0)
  • the position of the lower left sample is (0, N-1)
  • the position of the upper right sample is (N-1, 0).
  • the position of the lower right sample can be expressed as (N-1, N-1).
• The sample value at position (i, j) of the first block 1110 is expressed as A(i, j), and the sample value at position (i, j) of the second block 1120 as B(i, j). i and j may have integer values from 0 to N-1.
• The sample value at the position (i, j) of the prediction block 1130, corresponding to the position (i, j) of the first block 1110 and the position (i, j) of the second block 1120, is expressed as P(i, j).
• The sample value P(i, j) may be generated by combining the sample values A(i, j) and B(i, j), and the generated sample value P(i, j) becomes the prediction value of the current block 1010.
  • FIG. 12 is a control flowchart illustrating a method of generating a prediction block of a current block according to an embodiment of the present invention.
• The prediction unit generates a first block 1110 composed of upsampled reconstruction values of the reference block 1011 of the reference picture 1001 corresponding to the current block 1010 (S1201).
• The first block 1110 may be generated before the prediction of the current block 1010 and stored in a memory or the like, or may be generated in real time for the calculation of the prediction block 1130 when the prediction of the current block 1010 is performed.
  • the prediction unit generates a second block 1120 composed of prediction values derived based on the intra prediction mode of the current block 1010 (S1202).
  • the intra prediction mode for the current block 1010 may be derived as described with reference to FIGS. 4 to 8, and the second block 1120 is generated using the derived intra prediction mode and reference sample values that are already reconstructed. Can be.
  • the generating steps of the first block 1110 and the second block 1120 are not limited to the illustrated order, and the two blocks 1110 and 1120 may be generated sequentially or simultaneously.
  • the prediction unit combines the sample values of the first block 1110 and the second block 1120 to generate the prediction block 1130 of the current block 1010 (S1203).
  • the combination may mean performing various operations by using the sample value of the first block 1110 and the sample value of the second block 1120.
  • the sample value P (i, j) of the prediction block 1130 may be calculated as Equation 2.
• In Equation 2, the sample value P(i, j) corresponds to the arithmetic mean of A(i, j) and B(i, j).
  • Equation 2 may be modified as in Equation 3.
  • Equation 3 adds a predetermined offset 1 to Equation 2, and may compensate for an error that may occur when performing an integer operation according to Equation 2.
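Equations 2 and 3 appear as images in the source; from the surrounding description, the combination is the integer arithmetic mean P(i, j) = (A(i, j) + B(i, j)) >> 1, with Equation 3 adding the offset 1 before the shift to compensate for truncation. The sketch below encodes that assumed form.

```python
def combine_mean(A, B, N, rounding=True):
    """Combine the upsampled base-layer block A and the intra-predicted
    block B by integer arithmetic mean (assumed form of Equations 2 and 3).

    With rounding, P(i, j) = (A(i, j) + B(i, j) + 1) >> 1; the +1 offset
    compensates for the truncation of the integer right shift.
    """
    off = 1 if rounding else 0
    return [[(A[i][j] + B[i][j] + off) >> 1 for j in range(N)]
            for i in range(N)]
```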
  • the sample value P (i, j) of the prediction block 1130 may be calculated as Equation 4.
• W_A and W_B represent weights for the sample value A(i, j) and the sample value B(i, j), respectively. For convenience of description, the weight for the first block or the sample values of the first block is hereinafter referred to as the first weight, and the weight for the second block or the sample values of the second block as the second weight.
  • the prediction block 1130 is generated as a weighted average value of the first block 1110 and the second block 1120, and the first and second weights may have the same value or different values.
  • the first weight may be greater than the second weight, and conversely, the second weight may be greater than the first weight.
  • the weight may be set to enable integer arithmetic.
  • the first weight W A is set to 3 and the second weight W B is set to 1, which is represented by Equation 5.
  • the first weight W A of Equation 4 may be set to 7 and the second weight W B may be set to 1 to be expressed as Equation 6.
  • the weight may have a value other than the examples included in Equations 5 and 6 described above, or may be set regardless of integer arithmetic.
  • the information about the weight may be encoded and transmitted to the decoding apparatus, or may be set to a predetermined value between the encoding apparatus and the decoding apparatus and may not be signaled.
  • Equation 4 a preset offset may be added in addition to Equation 4.
  • the offset is added to reduce the error caused by the integer operation, and the offset has the effect of rounding up or down.
  • Equation 4 to which the offset is added may be expressed as Equation 7.
• When the offset is added to Equation 5, it is expressed as Equation 8.
  • the information about the offset may be encoded and transmitted to the decoding apparatus, or may be set to a predetermined value between the encoding apparatus and the decoding apparatus and may not be signaled.
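The weighted combinations of Equations 4 to 8 can be sketched as follows. Since the equations themselves appear as images in the source, the forms P = (3A + B + 2) >> 2 (Equations 5 and 8) and P = (7A + B + 4) >> 3 (Equation 6 with its offset) are reconstructions from the surrounding description, with the weight sum assumed to be a power of two so the division reduces to a shift.

```python
def combine_weighted(A, B, N, w_a=3, w_b=1):
    """Weighted combination (assumed form of Equations 4 to 8).

    With w_a=3, w_b=1: P(i, j) = (3*A(i, j) + B(i, j) + 2) >> 2.
    With w_a=7, w_b=1: P(i, j) = (7*A(i, j) + B(i, j) + 4) >> 3.
    The rounding offset is half the weight sum, matching the
    error-compensation role described for the offsets.
    """
    total = w_a + w_b
    shift = total.bit_length() - 1       # log2(w_a + w_b), power of two assumed
    offset = total >> 1                  # rounding offset
    return [[(w_a * A[i][j] + w_b * B[i][j] + offset) >> shift
             for j in range(N)] for i in range(N)]
```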
  • FIG. 13 is a control flowchart illustrating a method of generating a prediction block of a current block according to an embodiment of the present invention.
• As in steps S1201 and S1202 of FIG. 12, the prediction unit first generates a first block 1110 composed of upsampled reconstruction values of the reference block 1011 of the reference picture 1001 corresponding to the current block 1010 (S1301), and generates a second block 1120 composed of prediction values derived based on the intra prediction mode of the current block 1010 (S1302).
  • the intra prediction mode for the current block 1010 may be derived as described with reference to FIGS. 4 to 8, and the second block 1120 is generated using the derived intra prediction mode and the already reconstructed reference sample values. Can be.
• The sample value P(i, j) of the prediction block 1130 may be calculated through the arithmetic mean of the sample value of the first block 1110 and the sample value of the second block 1120, as shown in Equation 2, or through the weighted average of the sample value of the first block 1110 and the sample value of the second block 1120, as shown in Equation 4 (S1303).
  • the offset may be added as in Equation 3 and Equation 7 to compensate for an error in integer arithmetic.
• As described above, the prediction block 1130 of the current block 1010 may be calculated using an arithmetic mean or a weighted average of the texture of the reference block 1011 and the prediction values derived from the intra prediction mode of the current block, and offsets with rounding effects may be added to compensate for integer-arithmetic errors.
• The prediction values of the second block 1120 of FIG. 11 derived through the intra prediction mode are derived using the sample values adjacent to the left and upper sides of the current block 1010 of FIG. 10 as reference samples. Accordingly, a sample value located at the upper left end of the current block 1010 may have higher prediction accuracy than a sample value located at the lower right end.
• On the other hand, for the first block 1110, the prediction accuracy does not change according to the position of the sample.
• Therefore, the sample value P(i, j) can be calculated while changing the weights of the first block 1110 and the second block 1120 according to the position of the sample of the prediction block 1130, for example, according to a specific direction.
  • the first and second weights of Equations 4 and 7 that may be used when calculating the sample value of the prediction block 1130 may vary according to the positions of the samples of the prediction block 1130.
• For example, the weight of the first block 1110 may be set increasingly larger than the weight of the second block 1120 from the upper left end toward the lower right end. That is, the ratio of the first weight to the second weight (first weight / second weight) may be increased from the upper left end of the current block toward the lower right direction in consideration of the prediction direction.
• As shown in FIG. 14, the prediction block may be partitioned into four sub-blocks 1401, 1402, 1403, and 1404, and for the calculation of the sample value P(i, j) located in each partition, the weights for the sample values of the first block 1110 and the second block 1120 may be set differently from each other.
• The prediction accuracy of the sample values of the second block 1120 corresponding to the first sub-block 1401 located at the upper left is high, but the prediction accuracy of the sample values of the second block 1120 corresponding to the fourth sub-block 1404 located at the lower right is relatively lower than that of the sample values corresponding to the first sub-block 1401.
• Accordingly, the ratio of the first weight to the second weight (first weight / second weight) applied to the fourth sub-block 1404 may be set higher than the ratio of the first weight to the second weight (first weight / second weight) applied to the first sub-block 1401.
• The prediction accuracy of the sample values of the second block 1120 corresponding to the second sub-block 1402 and the third sub-block 1403 is lower than that of the sample values corresponding to the first sub-block 1401 and higher than that of the sample values corresponding to the fourth sub-block 1404.
• Accordingly, the ratio of the first weight to the second weight (first weight / second weight) applied to calculate the sample values P(i, j) of the second sub-block 1402 and the third sub-block 1403 may be set higher than the ratio applied to the first sub-block 1401 and lower than the ratio applied to the fourth sub-block 1404.
  • the sample value P (i, j) of the prediction block may be expressed as Equation 9.
• In Equation 9, i and j both less than N/2 corresponds to the first sub-block 1401 of FIG. 14, i and j both greater than or equal to N/2 corresponds to the fourth sub-block 1404, and the other cases correspond to the second sub-block 1402 and the third sub-block 1403.
• W1 is the first weight applied to the first sub-block 1401, W2 is the second weight applied to the first sub-block 1401, W3 is the first weight applied to the fourth sub-block 1404, W4 is the second weight applied to the fourth sub-block 1404, W5 is the first weight applied to the second sub-block 1402 and the third sub-block 1403, and W6 is the second weight applied to the second sub-block 1402 and the third sub-block 1403.
  • The first weight and the second weight applied to a sub-block may have the same value, or may be set to different values according to the prediction direction.
  • W1 and W2 may be the same value, or W1 may be larger than W2; W3 and W4 may be the same value, or W3 may be larger than W4; W5 and W6 may be the same value, or W5 may be greater than W6.
  • Reflecting the prediction accuracy of the second block 1120, the ratio of the first weight to the second weight may be set differently for each sub-block.
  • W1/W2 applied to the first sub-block 1401 and W5/W6 applied to the second sub-block 1402 and the third sub-block 1403 may be different from each other.
  • W5/W6 may be set higher than W1/W2 in consideration of the tendency of the prediction accuracy of the second block 1120 to decrease.
  • W1/W2 applied to the first sub-block 1401 and W3/W4 applied to the fourth sub-block 1404 may also be different from each other, and W3/W4 may be set higher than W1/W2 according to the prediction direction, reflecting the same tendency of decreasing prediction accuracy.
  • W5/W6 applied to the second sub-block 1402 and the third sub-block 1403 and W3/W4 applied to the fourth sub-block 1404 may be different from each other, and W3/W4 may be set higher than W5/W6, reflecting the tendency of the prediction accuracy of the second block 1120 to decrease.
  • m1, m2, and m3 are offsets for integer arithmetic.
  • The offset m1 applied to the first sub-block 1401 may be set to the average of the first weight W1 and the second weight W2, the offset m2 applied to the fourth sub-block 1404 may be set to the average of the first weight W3 and the second weight W4, and the offset m3 applied to the second sub-block 1402 and the third sub-block 1403 may be set to the average of the first weight W5 and the second weight W6.
  • An example of the weights of Equation 9 is expressed as Equation 10.
  • In Equation 10, the ratio W1/W2 of the first weight to the second weight applied to the first sub-block 1401 is 1, the ratio W5/W6 applied to the second sub-block 1402 and the third sub-block 1403 is 3, and the ratio W3/W4 applied to the fourth sub-block 1404, where the weight of the second block is lowest, is set to the highest value of 7.
  • In Equation 10, offsets 1, 3, and 2 are added to the respective equations to compensate for rounding errors in integer arithmetic.
  • The prediction block may be divided into more sub-blocks, and the ratio of the weight of the first block 1110 to the weight of the second block 1120 for each sub-block may be set to increase from the upper left to the lower right.
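The sub-block weighting described above can be illustrated with a hedged sketch. Equations 9 and 10 are figures not reproduced in this text, so the function name, the integer rounding form (W1·A + W2·B + offset) // (W1 + W2), and the offset choice (W1 + W2) // 2 are assumptions based on the surrounding description; the weight ratios 1:1, 3:1, and 7:1 follow the example given for Equation 10.

```python
def combine_blocks_diagonal(A, B, N):
    # A: upsampled base-layer block (first block), B: intra-predicted block
    # (second block), both N x N lists of rows. Returns the prediction block P.
    P = [[0] * N for _ in range(N)]
    for j in range(N):              # row (top to bottom)
        for i in range(N):          # column (left to right)
            if i < N // 2 and j < N // 2:
                w1, w2 = 1, 1       # first sub-block (top-left): equal trust
            elif i >= N // 2 and j >= N // 2:
                w1, w2 = 7, 1       # fourth sub-block (bottom-right): trust A most
            else:
                w1, w2 = 3, 1       # second/third sub-blocks: intermediate ratio
            m = (w1 + w2) // 2      # assumed rounding offset: average of the weights
            P[j][i] = (w1 * A[j][i] + w2 * B[j][i] + m) // (w1 + w2)
    return P
```

The ratio of the first (base-layer) weight to the second (intra-prediction) weight thus grows toward the lower right, where the intra prediction is least accurate.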
  • FIG. 15 is a control flowchart illustrating a method of generating a prediction block of a current block according to the present embodiment.
  • First, the prediction unit generates a first block 1110 composed of the upsampled reconstructed values of the reference block 1011 of the reference picture 1001 corresponding to the current block 1010.
  • Then, a second block 1120 composed of prediction values derived based on the intra prediction mode of the current block 1010 is generated (S1502).
  • The intra prediction mode for the current block 1010 may be derived as described with reference to FIGS. 4 to 8, and the second block 1120 may be generated using the derived intra prediction mode and the already reconstructed reference sample values.
  • The sample value P(i, j) of the prediction block 1130 may be calculated as a weighted average of the sample value of the first block 1110 and the sample value of the second block 1120. Reflecting the prediction accuracy of the second block 1120, the weights may be adjusted so that the contribution of the second block 1120 decreases as the distance from the reference sample increases.
  • The weight of the first block 1110 is set higher relative to the weight of the second block 1120 in the direction from the upper left to the lower right of the prediction block 1130, and the sample value of the prediction block 1130 is generated by calculating the weighted average of the sample value of the first block 1110 and the sample value of the second block 1120 according to the set weights (S1503).
  • An offset may be added, as shown in Equation 9, to compensate for rounding errors in integer arithmetic.
  • A(i, j) and B(i, j) can be combined adaptively according to the intra prediction direction of the current block 1010.
  • The prediction value of the second block 1120 derived through the intra prediction mode has a different prediction accuracy depending on the intra prediction mode, that is, the intra prediction direction. In other words, the closer a sample is to the reference sample used for intra prediction, the higher the accuracy of the prediction.
  • For example, when the intra prediction mode for the current block is the vertical mode (intra prediction mode 26), the closer the position of a sample of the second block 1120 is to the reference samples adjacent to the top of the prediction block, the higher the accuracy of the prediction, while in the horizontal direction the prediction accuracy does not change with the sample position.
  • Accordingly, the weight of the second block 1120 relative to the first block 1110 may be set lower as the sample value of the prediction block 1130 goes from top to bottom.
  • The prediction block may be partitioned into two sub-blocks 1601 and 1602, and when calculating the sample value P(i, j) located in each partition, the weights for the sample values of the first block 1110 and the second block 1120 may be set differently.
  • The sample value P(i, j) of the prediction block of FIG. 16 may be expressed as Equation 11.
  • In Equation 11, j less than N/2 indicates the first sub-block 1601 of FIG. 16, and the other cases indicate the second sub-block 1602.
  • W1 is the first weight applied to the first sub-block 1601, W2 is the second weight applied to the first sub-block 1601, W3 is the first weight applied to the second sub-block 1602, and W4 is the second weight applied to the second sub-block 1602.
  • The first weight and the second weight applied to a sub-block may have the same value, or may be set to different values according to the prediction direction.
  • the ratio of the first weight to the second weight may be set larger from the top to the bottom of the prediction block.
  • The sample value of the second block 1120 corresponding to the first sub-block 1601 located at the top has a high prediction accuracy, but the prediction accuracy of the sample value of the second block 1120 corresponding to the second sub-block 1602 located at the bottom is relatively lower.
  • Reflecting this prediction accuracy, the first weight W3 may be set higher than the second weight W4 when the sample value P(i, j) of the second sub-block 1602 is calculated.
  • That is, the ratio W3/W4 of the first weight to the second weight applied to the second sub-block 1602 may be set higher than the ratio W1/W2 applied to the first sub-block 1601.
  • m1 and m2 are offsets for integer arithmetic.
  • The offset m1 applied to the first sub-block 1601 may be set to the average of the first weight W1 and the second weight W2, and the offset m2 applied to the second sub-block 1602 may be set to the average of the first weight W3 and the second weight W4.
  • For example, the sample value P(i, j) of the prediction block may be expressed as Equation 12.
  • In Equation 12, the ratio of the first weight to the second weight applied to the first sub-block 1601 is 1, and the ratio applied to the second sub-block 1602 is 3.
  • Offsets 1 and 2 may be added to compensate for rounding errors in integer arithmetic.
  • The prediction block may be divided into more sub-blocks, and the weight of the second block 1120 relative to the first block 1110 for each sub-block may be set to decrease from top to bottom.
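The vertical-mode partition of FIG. 16 admits a similar hedged sketch. Equations 11 and 12 are likewise not reproduced in this text, so the function name and the rounding form are assumptions; the example ratios 1:1 (top half) and 3:1 (bottom half) and the offsets follow the description of Equation 12.

```python
def combine_blocks_vertical(A, B, N):
    # Vertical intra mode: the intra block B is accurate near the top
    # reference samples, so the top half uses equal weights and the
    # bottom half favors the upsampled base-layer block A (ratio 3:1).
    P = [[0] * N for _ in range(N)]
    for j in range(N):
        w1, w2 = (1, 1) if j < N // 2 else (3, 1)
        m = (w1 + w2) // 2          # assumed rounding offset (1 or 2 here)
        for i in range(N):
            P[j][i] = (w1 * A[j][i] + w2 * B[j][i] + m) // (w1 + w2)
    return P
```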
  • FIG. 17 is a control flowchart illustrating a method of generating a prediction block of a current block according to the present embodiment.
  • First, a first block 1110 composed of the upsampled reconstructed values of the reference block 1011 of the reference picture 1001 corresponding to the current block 1010 is generated, and a second block 1120 composed of prediction values derived based on the intra prediction mode of the current block 1010 is generated (S1702).
  • The sample value P(i, j) of the prediction block may be calculated as a weighted average of the sample value of the first block 1110 and the sample value of the second block 1120, and, reflecting the intra prediction mode of the current block 1010, the contribution of the second block 1120 may be reduced as the distance from the reference sample increases.
  • That is, the weight for the first block 1110 may be set higher relative to the weight for the second block 1120 in the direction from the top to the bottom of the prediction block 1130, and the sample value of the prediction block 1130 is generated by calculating the weighted average of the sample value of the first block 1110 and the sample value of the second block 1120 according to the set weights.
  • Alternatively, the weight of the first block 1110 may be increased from the upper left to the lower right of the prediction block 1130, and the weighted average of the sample value of the first block 1110 and the sample value of the second block 1120 calculated according to the set weights.
  • When the intra prediction mode for the current block is the horizontal mode (intra prediction mode 10), the closer the position of a sample of the second block is to the neighboring samples on the left, that is, the reference samples adjacent to the left of the prediction block, the higher the accuracy of the prediction.
  • Accordingly, when the sample value P(i, j) of the prediction block 1130 is calculated, the weight of the second block 1120 relative to the first block 1110 may be set lower as the sample position moves farther from the left reference samples.
  • The prediction block may be partitioned into two sub-blocks 1801 and 1802, and when the sample value P(i, j) located in each partition is calculated, the weights for the sample values of the first block 1110 and the second block 1120 may be set differently.
  • The sample value P(i, j) of the prediction block of FIG. 18 may be expressed as Equation 13.
  • In Equation 13, i less than N/2 indicates the first sub-block 1801 of FIG. 18, and the other cases indicate the second sub-block 1802.
  • W1 is the first weight applied to the first sub-block 1801, W2 is the second weight applied to the first sub-block 1801, W3 is the first weight applied to the second sub-block 1802, and W4 is the second weight applied to the second sub-block 1802.
  • The first weight and the second weight applied to a sub-block may have the same value, or may be set to different values according to the prediction direction.
  • the ratio of the first weight to the second weight may be set larger from the left side to the right side of the prediction block.
  • The sample value of the second block 1120 corresponding to the first sub-block 1801 located on the left has a high prediction accuracy, but the prediction accuracy of the sample value of the second block 1120 corresponding to the second sub-block 1802 located on the right is relatively lower.
  • Reflecting this prediction accuracy, the first weight W3 may be set higher than the second weight W4 when the sample value P(i, j) of the second sub-block 1802 is calculated.
  • That is, the ratio W3/W4 of the first weight to the second weight applied to the second sub-block 1802 may be set higher than the ratio W1/W2 applied to the first sub-block 1801.
  • m1 and m2 are offsets for integer arithmetic.
  • The offset m1 applied to the first sub-block 1801 may be set to the average of the first weight W1 and the second weight W2, and the offset m2 applied to the second sub-block 1802 may be set to the average of the first weight W3 and the second weight W4.
  • For example, the sample value P(i, j) of the prediction block may be expressed as Equation 14.
  • In Equation 14, the ratio of the first weight to the second weight applied to the first sub-block 1801 is 1, and the ratio applied to the second sub-block 1802 is 3.
  • Offsets 1 and 2 may be added to compensate for rounding errors in integer arithmetic.
  • The prediction block may be divided into more sub-blocks, and the weight of the second block 1120 relative to the first block 1110 for each sub-block may be set to decrease from left to right.
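The horizontal-mode partition of FIG. 18 can be sketched in the same hedged way. Equations 13 and 14 are not reproduced in this text, so the function name and rounding form are assumptions; the example ratios 1:1 (left half, nearest the left reference samples) and 3:1 (right half) follow the description of Equation 14.

```python
def combine_blocks_horizontal(A, B, N):
    # Horizontal intra mode: the intra block B is accurate near the left
    # reference samples, so the left half uses equal weights and the
    # right half favors the upsampled base-layer block A (ratio 3:1).
    P = [[0] * N for _ in range(N)]
    for j in range(N):
        for i in range(N):
            w1, w2 = (1, 1) if i < N // 2 else (3, 1)
            m = (w1 + w2) // 2      # assumed rounding offset (1 or 2 here)
            P[j][i] = (w1 * A[j][i] + w2 * B[j][i] + m) // (w1 + w2)
    return P
```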
  • Meanwhile, the intra prediction mode for the current block may be set to the intra prediction mode applied to the second block, or to a specific prediction mode such as the DC mode or the planar mode. Information about the prediction mode set as the intra prediction mode of the current block may be stored for use in predicting the direction of an intra prediction block that is subsequently coded or decoded.
  • In addition, information indicating this may be signaled, and the information about the prediction mode set as the intra prediction mode of the current block may also be coded and signaled to the decoding apparatus.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to an inter-layer prediction method and an apparatus using the same. The inter-layer prediction method comprises the steps of: generating a first block composed of values obtained by upsampling the reconstructed values of a reference block of a reference layer corresponding to a current block; generating a second block composed of prediction values derived based on an intra prediction mode of the current block; and generating a prediction block for the current block by combining sample values of the first and second blocks.
PCT/KR2013/003936 2012-05-08 2013-05-07 Inter-layer prediction method and apparatus using the same WO2013168952A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261643910P 2012-05-08 2012-05-08
US61/643,910 2012-05-08

Publications (1)

Publication Number Publication Date
WO2013168952A1 true WO2013168952A1 (fr) 2013-11-14

Family

ID=49550930

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2013/003936 WO2013168952A1 (fr) 2012-05-08 2013-05-07 Inter-layer prediction method and apparatus using the same

Country Status (1)

Country Link
WO (1) WO2013168952A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080023727A (ko) * 2005-07-11 2008-03-14 Thomson Licensing Method and apparatus for macroblock adaptive inter-layer intra texture prediction
US20080123742A1 (en) * 2006-11-28 2008-05-29 Microsoft Corporation Selective Inter-Layer Prediction in Layered Video Coding
KR20080094041A (ko) * 2006-01-11 2008-10-22 Qualcomm Incorporated Video coding with fine granularity spatial scalability
KR100878809B1 (ko) * 2004-09-23 2009-01-14 LG Electronics Inc. Method and apparatus for decoding a video signal
KR20090018019A (ko) * 2005-01-21 2009-02-19 LG Electronics Inc. Method and apparatus for encoding/decoding a video signal using prediction information of intra-mode blocks of a base layer


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015139175A1 (fr) * 2014-03-17 2015-09-24 Mediatek Singapore Pte. Ltd. Improved block copy
US11218726B2 (en) 2016-10-04 2022-01-04 Kt Corporation Method and apparatus for processing video signal
US11700392B2 (en) 2016-10-04 2023-07-11 Kt Corporation Method and apparatus for processing video signal
WO2021061020A1 (fr) * 2019-09-23 2021-04-01 Huawei Technologies Co., Ltd. Method and apparatus of weighted prediction for non-rectangular partitioning modes
US11962792B2 (en) 2019-09-23 2024-04-16 Huawei Technologies Co., Ltd. Method and apparatus of weighted prediction for non-rectangular partitioning modes
WO2020251423A3 (fr) * 2019-10-07 2021-03-04 Huawei Technologies Co., Ltd. Method and apparatus of harmonizing weighted prediction with bi-prediction using coding-unit-level weights


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13787525

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13787525

Country of ref document: EP

Kind code of ref document: A1