KR20140127406A - Method for encoding and decoding image, and apparatus thereof - Google Patents

Method for encoding and decoding image, and apparatus thereof

Info

Publication number
KR20140127406A
Authority
KR
South Korea
Prior art keywords
enhancement layer
layer
picture
motion information
motion
Prior art date
Application number
KR20130045311A
Other languages
Korean (ko)
Inventor
심동규
유성은
조현호
Original Assignee
인텔렉추얼디스커버리 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 인텔렉추얼디스커버리 주식회사 filed Critical 인텔렉추얼디스커버리 주식회사
Priority to KR20130045311A priority Critical patent/KR20140127406A/en
Priority to PCT/KR2014/003554 priority patent/WO2014175658A1/en
Publication of KR20140127406A publication Critical patent/KR20140127406A/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/187Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/573Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a method and apparatus for compensating a motion vector of an enhancement layer for prediction of inter-layer difference coefficients. When a difference coefficient is generated in the reference layer using the motion vector of the enhancement layer, the motion vector information of the enhancement layer may not be usable as it is because the frame rates of the reference layer and the enhancement layer do not match. The encoding/decoding method and apparatus include: extracting motion information of the enhancement layer; adjusting the motion information of the enhancement layer; and predicting the inter-layer difference signal.

Description

METHOD FOR ENCODING AND DECODING IMAGE, AND APPARATUS THEREOF

FIELD OF THE INVENTION [0001]

The present invention relates to image processing techniques, and more particularly, to a method and apparatus for predicting differential coefficients between layers in a scalable video codec.

Recently, as the demand for high-resolution and high-definition video has increased, a highly efficient video compression technology has been needed for next-generation video services. In response to these market demands, MPEG and VCEG, which had jointly standardized the MPEG-2 Video and H.264/AVC codecs, have been jointly standardizing a new video compression technology since 2010. MPEG and VCEG established the Joint Collaborative Team on Video Coding (JCT-VC) in January 2010 to develop the new standard, and in January 2013 JCT-VC completed the technical work on the next-generation video standard called HEVC (High Efficiency Video Coding). HEVC provides more than 50% higher compression efficiency than the H.264/AVC High profile, which was previously known to have the highest compression efficiency, and supports full-HD, 4K-UHD, and 8K-UHD resolution video.

The HEVC standardization for the base layer was completed in January 2013 under the name HEVC version 1, and an HEVC-based scalable video compression standard and an HEVC-based multi-view video compression standard are to be established by 2014. HEVC has a basic structure similar to that of conventional video codecs such as H.264/AVC, but it additionally adopts new coding tools such as quad-tree-based variable block sizes, improved intra and inter prediction techniques, transform kernels of up to 32x32, and sample adaptive offset. Since HEVC-based scalable video compression builds on these new tools, inter-layer prediction techniques need to be developed with these new technologies in mind.

The present invention aims to provide a method and apparatus for minimizing the difference signal of an enhancement layer in a scalable video codec by additionally predicting the difference signal of the enhancement layer from the difference signal of a reference layer.

According to an aspect of the present invention, there is provided a motion vector compensation method for predicting inter-layer difference coefficients, the method comprising: extracting motion information of an enhancement layer; adjusting the motion information of the enhancement layer; and predicting the inter-layer difference signal.

According to a first aspect of the present invention, there is provided a method of decoding a video using a scalable video codec, the method comprising: extracting motion information of an enhancement layer; determining whether a picture corresponding to a reference frame index of the enhancement layer exists in a reference layer based on the extracted motion information of the enhancement layer; and adjusting the reference frame index of the enhancement layer if the picture is not present.

According to another aspect of the present invention, there is provided a method of decoding a video using a scalable video codec, the method comprising: extracting motion information of an enhancement layer; determining whether a picture corresponding to a reference frame index of the enhancement layer exists in a reference layer based on the extracted motion information of the enhancement layer; adjusting the reference frame index of the enhancement layer if the picture is not present; and scaling the motion vector included in the motion information according to the adjusted index.

According to an embodiment of the present invention, the motion information extracting unit of the enhancement layer extracts the reference frame index and the motion vector of the enhancement layer. The motion information adjustment unit of the enhancement layer checks whether the extracted reference frame index and motion vector are usable in the reference layer; if the motion information of the enhancement layer cannot be used in the reference layer because the layers have different frame rates, it adjusts the reference frame index and scales the motion vector. The inter-layer difference signal prediction unit obtains a difference coefficient in the reference layer using the motion information of the enhancement layer, or the adjusted motion information of the enhancement layer, and generates a more accurate prediction value from this coefficient value and the prediction block of the enhancement layer, so that the difference coefficient of the enhancement layer can be minimized.
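
A minimal, self-contained sketch of these three steps on one-dimensional "pictures" is given below. The dictionary-based picture buffers, integer-pel motion, the nearest-POC selection rule, and the weight w are illustrative assumptions for this sketch, not the patent's normative data structures or procedure.

```python
import numpy as np

def grp_predict(cur_poc, ref_poc, mv, pos, size, el_ref_pics, rl_pics, w=1.0):
    """Toy GRP prediction for a 1-D block of length `size` starting at `pos`.

    el_ref_pics / rl_pics map POC -> 1-D numpy array; the reference-layer
    pictures are assumed to be already up-sampled to the enhancement-layer
    resolution, and rl_pics is assumed to contain the current POC.
    """
    # Step 1: enhancement-layer motion information (ref_poc, mv) drives the
    # enhancement-layer motion-compensated prediction block.
    el_mc = el_ref_pics[ref_poc][pos + mv: pos + mv + size]

    # Step 2: adjust the motion information if the referenced picture has no
    # counterpart in the reference layer (frame-rate mismatch): retarget to the
    # closest available picture in the same prediction direction and scale the
    # vector by the POC-distance ratio.
    rl_poc, rl_mv = ref_poc, mv
    if ref_poc not in rl_pics:
        cands = [p for p in rl_pics if (p - cur_poc) * (ref_poc - cur_poc) > 0]
        rl_poc = min(cands, key=lambda p: abs(p - cur_poc))
        rl_mv = int(round(mv * (cur_poc - rl_poc) / (cur_poc - ref_poc)))

    # Step 3: derive the difference signal in the reference layer and use it to
    # refine the enhancement-layer prediction.
    rl_colocated = rl_pics[cur_poc][pos: pos + size]
    rl_mc = rl_pics[rl_poc][pos + rl_mv: pos + rl_mv + size]
    return el_mc + w * (rl_colocated - rl_mc)
```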

FIG. 1 is a block diagram showing a configuration of a scalable video encoder.
FIG. 2 is a conceptual diagram for explaining prediction of inter-layer difference coefficients in an image encoding/decoding apparatus to which the present invention is applied.
FIG. 3 is a conceptual diagram for explaining in more detail the generalized residual prediction used to predict inter-layer difference coefficients.
FIG. 4 is a conceptual diagram for explaining a case in which a picture corresponding to a reference picture of the enhancement layer does not exist in the reference layer when inter-layer difference signals are predicted in an image decoding apparatus to which the present invention is applied.
FIG. 5 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment of the present invention.
FIG. 6 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.
FIG. 7 is a block diagram illustrating a configuration of an image encoding/decoding apparatus according to an embodiment of the present invention.
FIG. 8 is a conceptual diagram for explaining adjustment of motion information by the motion information adjustment unit 720 of FIG. 7.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In the following description of the embodiments of the present invention, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present disclosure rather unclear.

It is to be understood that when an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In addition, the description of "including" a specific configuration in the present invention does not exclude configurations other than that configuration, and means that additional configurations may be included in the practice of the present invention or within the technical scope of the present invention.

The terms first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, the first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component.

In addition, the components shown in the embodiments of the present invention are shown independently to represent different characteristic functions, which does not mean that each component consists of a separate hardware or software unit. That is, the components are listed separately for convenience of explanation; at least two of the components may be combined into a single component, or one component may be divided into a plurality of components that each perform a function. Embodiments in which the components are integrated and embodiments in which they are separated are also included within the scope of the present invention, as long as they do not depart from the essence of the present invention.

In addition, some components may not be essential for performing the essential functions of the present invention but may be optional components merely for improving performance. The present invention can be implemented with only the components essential for realizing its essence, excluding the components used merely for performance improvement, and a structure that includes only the essential components while excluding the optional components used for performance improvement also falls within the scope of the present invention.

FIG. 1 is a block diagram showing a configuration of a scalable video encoder.

Referring to FIG. 1, a scalable video encoder provides spatial scalability, temporal scalability, and SNR (image quality) scalability. For spatial scalability, a multi-layer scheme using up-sampling is used, and for temporal scalability a hierarchical B-picture structure is used. For image quality scalability, either only the quantization coefficient is changed within the same structure used for spatial scalability, or an incremental coding technique for the quantization error is used.

The input video 110 is down-sampled through spatial decimation 115. The down-sampled image 120 is used as the input of the reference layer, and the coding blocks in a picture of the reference layer are effectively encoded by prediction, either intra prediction through the intra prediction unit 135 or inter prediction using the motion compensation unit 130. The difference between the original block to be coded and the prediction block generated by the motion compensation unit 130 or the intra prediction unit 135 is subjected to a discrete cosine transform or integer transform through the transform unit 140. The transformed difference coefficients are quantized through the quantization unit 145, and the quantized transform coefficients are entropy-coded through the entropy encoding unit 150. The quantized transform difference coefficients are also reconstructed into difference coefficients through the inverse quantization unit 152 and the inverse transform unit 154 in order to generate prediction values to be used for adjacent blocks or adjacent pictures. Because of the error introduced by the quantization unit 145, the reconstructed difference coefficient values may not coincide with the difference coefficient values used as the input of the transform unit 140. The reconstructed difference coefficient values are added to the prediction block generated by the motion compensation unit 130 or the intra prediction unit 135 to restore the pixel values of the block currently being encoded. The reconstructed block is passed through the in-loop filter 156. When all the blocks in a picture have been reconstructed, the reconstructed picture is stored in the reconstructed picture buffer 158 and used for inter-picture prediction in the reference layer.
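
The transform, quantization, and reconstruction round trip in this loop (units 140, 145, 152, 154) can be sketched as follows with an orthonormal DCT and a uniform scalar quantizer; the block size, the quantization step, and the floating-point DCT are simplifying assumptions and do not reproduce HEVC's integer transforms or quantizer design.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    d = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    d[0, :] = np.sqrt(1.0 / n)
    return d

def encode_block(orig, pred, qstep=8.0):
    """Forward path of the loop: residual -> transform -> quantize, plus the
    reconstruction used for predicting later blocks and pictures."""
    d = dct_matrix(orig.shape[0])
    residual = orig - pred                        # difference with the prediction block
    coeffs = d @ residual @ d.T                   # transform unit (140)
    levels = np.round(coeffs / qstep)             # quantization unit (145) -> entropy coding (150)
    recon_residual = d.T @ (levels * qstep) @ d   # inverse quantization (152) + inverse transform (154)
    recon = pred + recon_residual                 # reconstruction sent to the in-loop filter (156)
    return levels, recon
```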

In the enhancement layer, the input video 110 is used directly as the input to be encoded. To effectively encode a coding block in a picture, as in the reference layer, inter-picture prediction is performed through the motion compensation unit 172 or intra-picture prediction is performed through the intra prediction unit 170, and an optimal prediction block is generated. The block to be coded in the enhancement layer is predicted from the prediction block generated by the motion compensation unit 172 or the intra prediction unit 170, which yields the difference coefficient of the enhancement layer. The difference coefficient of the enhancement layer is encoded through the transform unit, the quantization unit, and the entropy encoding unit, as in the reference layer. As shown in FIG. 1, coded bits are generated in each layer of the multi-layer structure, and the multiplexer 192 combines them into a single bit stream 194.

Although each layer in FIG. 1 can be encoded independently, the input video of the lower layer is a down-sampled version of the video of the upper layer and is therefore very similar to it. Accordingly, coding efficiency can be increased if the reconstructed pixel values, motion vectors, and residual signals of the lower-layer video are used in the enhancement layer.

In FIG. 1, the inter-layer intra prediction unit 162 reconstructs an image of the reference layer, interpolates the reconstructed image 180 to the image size of the enhancement layer, and uses it as a reference image. When reconstructing the image of the reference layer, the reference image may be decoded in frame units or, to reduce complexity, in block units. In particular, when the reference layer is coded in inter-picture prediction mode, the complexity of decoding the reference layer is high; for this reason, H.264/SVC allows inter-layer intra prediction only when the reference layer is encoded in intra-picture prediction mode. The reconstructed image 180 of the reference layer is input to the intra prediction unit 170 of the enhancement layer, so the enhancement layer can improve encoding efficiency by using it together with the surrounding pixel values in the picture.

In FIG. 1, inter-layer motion prediction 160 allows the enhancement layer to refer to motion information 185 of the reference layer, such as motion vectors and reference frame indices. In particular, when an image is coded at a low bit rate, motion information accounts for a large share of the bits, so referring to this information of the reference layer improves the encoding efficiency of the enhancement layer.

In FIG. 1, the inter-layer difference coefficient prediction unit 164 predicts the difference coefficient of the enhancement layer from the difference coefficient 190 decoded in the reference layer, so that the difference coefficient of the enhancement layer can be encoded more effectively. Depending on the implementation of the encoder, the difference coefficient 190 decoded in the reference layer is input to the motion compensation unit 172 of the enhancement layer, and the optimal motion vector can be derived taking the decoded difference coefficient 190 of the reference layer into account.

FIG. 2 is a conceptual diagram for explaining prediction of inter-layer difference coefficients in an image encoding/decoding apparatus to which the present invention is applied.

Referring to FIG. 2, when a block 200 of the enhancement layer is encoded in a scalable video encoder, a prediction block 220 is determined through motion estimation and compensation in a reference picture of the enhancement layer, and the motion vector 210 and the reference frame index are transmitted to the decoder. The difference between the block 200 to be encoded in the enhancement layer and the prediction block 220 is referred to as the difference value 250 of the enhancement layer. The technique of predicting the difference coefficient of the enhancement layer using the prediction block derived in the enhancement layer and the difference coefficient derived in the reference layer is called generalized residual prediction (GRP).

In the reference layer, which has been up-sampled to the same resolution as the enhancement layer, the motion compensation block 240 is obtained by applying the motion vector 210 of the enhancement layer as it is, starting from the block 230 at the same position as the coding block 200 of the current picture. The difference value between the co-located block 230 and the motion compensation block 240 is then calculated in the reference layer as well. The difference signal 250 of the enhancement layer is predicted from the motion compensation block 220 of the enhancement layer and the difference signal 260 calculated in the reference layer.
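
A sketch of this residual derivation in the up-sampled reference layer is shown below: the co-located block minus the block reached with the enhancement-layer motion vector. Integer-pel motion and plain array slicing are simplifying assumptions; a real codec would interpolate sub-pel positions and clip at picture boundaries.

```python
import numpy as np

def ref_layer_residual(rl_cur_up, rl_ref_up, x, y, w, h, mv_x, mv_y):
    """rl_cur_up / rl_ref_up: current and reference pictures of the reference
    layer, already up-sampled to the enhancement-layer resolution."""
    colocated = rl_cur_up[y:y + h, x:x + w]              # block at the same position (230)
    compensated = rl_ref_up[y + mv_y:y + mv_y + h,
                            x + mv_x:x + mv_x + w]       # block reached with the EL motion vector (240)
    return colocated - compensated                       # difference signal of the reference layer (260)
```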

FIG. 3 is a conceptual diagram for explaining in more detail the generalized residual prediction (GRP) used to predict inter-layer difference coefficients.

Referring to FIG. 3, when a block 300 of the enhancement layer is coded in a scalable video encoder, a motion compensation block 320 is determined through unidirectional prediction. The motion information 310 (reference frame index and motion vector) for the determined motion compensation block 320 is expressed through syntax elements. In the scalable video decoder, the syntax elements for the motion information 310 of the block 300 to be decoded in the enhancement layer are decoded, and motion compensation is performed on the corresponding block to obtain the motion compensation block 320.

In the GRP technique, a difference coefficient is derived in the up-sampled base layer, and the derived difference coefficient value is used as a prediction value for the enhancement layer. For this, the coding block 330 at the same position as the coding block 300 of the enhancement layer is selected in the up-sampled base layer. The motion compensation block 350 in the base layer is then determined by applying the motion information 310 of the enhancement layer with respect to the selected block in the base layer.

The difference coefficient 360 in the base layer is calculated as the difference between the coding block 330 in the base layer and the motion compensation block 350 in the base layer. In the enhancement layer, the prediction block 370 is formed as the weighted sum of the motion compensation block 320 derived through temporal prediction in the enhancement layer and the difference coefficient 360 derived in the reference layer from the motion information of the enhancement layer. At this time, the weight coefficient can be selectively chosen as 0, 0.5, 1, and so on.

When bi-directional prediction is used, GRP derives difference coefficients from the reference layer using the bi-directional motion information of the enhancement layer. In this case, the prediction is a weighted sum of the compensation block in the L0 direction in the enhancement layer, the difference coefficient in the L0 direction derived in the reference layer, the compensation block in the L1 direction in the enhancement layer, and the difference coefficient in the L1 direction derived in the reference layer.
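
The weighted combinations described for the uni- and bi-directional cases can be sketched as below; restricting the weight to {0, 0.5, 1} follows the text above, while averaging the two per-direction predictions in the bi-directional case is an assumption, since the text only states that a weighted sum of the four terms is used.

```python
def grp_uni(el_mc, rl_res, w):
    """Uni-directional case: EL motion-compensated block plus weighted RL residual."""
    assert w in (0.0, 0.5, 1.0)
    return el_mc + w * rl_res

def grp_bi(el_mc_l0, rl_res_l0, el_mc_l1, rl_res_l1, w):
    """Bi-directional case: average of the two per-direction GRP predictions (assumed)."""
    assert w in (0.0, 0.5, 1.0)
    return 0.5 * (grp_uni(el_mc_l0, rl_res_l0, w) + grp_uni(el_mc_l1, rl_res_l1, w))
```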

FIG. 4 is a conceptual diagram for explaining a case in which a picture corresponding to a reference picture of the enhancement layer does not exist in the reference layer when inter-layer difference signals are predicted in an image decoding apparatus to which the present invention is applied.

Referring to FIG. 4, when a block 400 of the enhancement layer is encoded in a scalable video codec, coding can be performed more effectively through prediction of the difference coefficients between layers. In this case, the reference layer is up-sampled to the same resolution as the enhancement layer, and the difference coefficient is then calculated in the reference layer using the motion vector 410 of the enhancement layer. However, when the frame rate of the enhancement layer differs from that of the reference layer, a picture 460 corresponding to the reference frame index of the enhancement layer may not exist in the reference layer, and in this case the motion compensation block 450 cannot be obtained.
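
The availability check implied here can be sketched as follows: the enhancement-layer reference index is mapped to a POC through the enhancement-layer reference picture list, and the motion information is directly reusable only if the reference layer holds a (reconstructed, up-sampled) picture with that POC. The list and buffer representations are illustrative assumptions.

```python
def el_motion_usable_in_rl(el_ref_idx, el_ref_list_pocs, rl_dpb_pocs):
    """el_ref_list_pocs: POC of each entry in the EL reference picture list.
    rl_dpb_pocs: POCs of the pictures available in the reference layer."""
    return el_ref_list_pocs[el_ref_idx] in rl_dpb_pocs
```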

FIG. 5 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment of the present invention.

Referring to FIG. 5, the image encoding apparatus to which the present invention is applied is based on GRP, which generates a difference coefficient in the reference layer and uses it as a prediction value to predict the difference coefficient of the enhancement layer. For this, the motion information adjustment unit 510 uses the reference frame index information and the motion vector information of the enhancement layer that are to be coded through the entropy encoding unit 500 of the enhancement layer. The motion information adjustment unit 510 determines whether a frame corresponding to the reference frame index of the enhancement layer exists in the reference layer and whether a prediction block can be derived from that frame in the reference layer using the motion vector information of the enhancement layer. If the motion information of the enhancement layer cannot be used as it is in the reference layer, the motion information adjustment unit 510 adjusts the motion information of the enhancement layer so that it becomes usable in the reference layer, by referring to context such as the picture order count (POC).

The motion information 515 of the enhancement layer adjusted through the motion information adjustment unit 510 is input to the motion compensation unit 520 of the reference layer. The motion compensation block 530 produced by the motion compensation unit 520 of the reference layer is input to the base layer difference coefficient generation unit 540. At the same time, the block 535 at the same position as the block to be encoded in the enhancement layer is taken from the reconstructed picture buffer of the reference layer and input to the base layer difference coefficient generation unit 540; this co-located block 535 is obtained after the reconstructed picture of the reference layer has been up-sampled to the resolution of the enhancement layer. The base layer difference coefficient generation unit 540 subtracts the motion compensation block 530, derived in the reference layer using the motion information of the enhancement layer, from the reference layer block 535 co-located with the enhancement layer block, and thereby generates the difference coefficient 550 of the base layer. The difference coefficient 550 generated in the base layer is used as an input to the motion compensation unit 560 of the enhancement layer. When GRP is used, the motion compensation unit 560 of the enhancement layer generates the prediction blocks 370 and 380 as the weighted sum of the difference coefficient 550 of the reference layer and the motion compensation block derived through temporal prediction in the enhancement layer.

FIG. 6 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.

Referring to FIG. 6, the image decoding apparatus to which the present invention is applied is based on GRP, which generates a difference coefficient in the reference layer and uses it as a prediction value to predict the difference coefficient of the enhancement layer. For this, the motion information adjustment unit 625 uses the reference frame index information and the motion vector information 620 of the enhancement layer decoded by the entropy decoding unit of the enhancement layer. The motion information adjustment unit 625 determines whether a frame corresponding to the reference frame index of the enhancement layer exists in the reference layer and whether a prediction block can be derived from that frame in the reference layer using the motion vector information of the enhancement layer. If the motion information of the enhancement layer cannot be used as it is in the reference layer, the motion information adjustment unit 625 adjusts the motion information of the enhancement layer so that it becomes usable in the reference layer, by referring to context such as the picture order count (POC).

The motion information 627 of the enhancement layer adjusted through the motion information adjustment unit 625 is input to the motion compensation unit 630 of the reference layer. The motion compensation block 635 produced by the motion compensation unit 630 of the reference layer is input to the base layer difference coefficient generation unit 645. At the same time, the block 640 at the same position as the block to be decoded in the enhancement layer is taken from the reconstructed picture buffer of the reference layer and input to the base layer difference coefficient generation unit 645; this co-located block 640 is obtained after the reconstructed picture of the reference layer has been up-sampled to the resolution of the enhancement layer. The base layer difference coefficient generation unit 645 subtracts the motion compensation block 635, derived using the motion information of the enhancement layer, from the reference layer block 640 co-located with the enhancement layer block, and thereby generates the difference coefficient 650 of the base layer. The difference coefficient 650 generated in the base layer is used as an input to the motion compensation unit 660 of the enhancement layer. When GRP is used, the motion compensation unit 660 of the enhancement layer generates the prediction blocks 370 and 380 as the weighted sum of the difference coefficient 650 of the reference layer and the motion compensation block derived through temporal prediction in the enhancement layer.

FIG. 7 is a block diagram illustrating a configuration of an image encoding/decoding apparatus according to an embodiment of the present invention.

Referring to FIG. 7, an apparatus for encoding/decoding an image according to an embodiment of the present invention includes an enhancement layer motion information extracting unit 710, a motion information adjustment unit 720, and an inter-layer difference signal prediction unit 730.

The motion information extracting unit 710 of the enhancement layer extracts reference frame indexes and motion vector information decoded by the enhancement layer in order to use the motion information of the enhancement layer in the reference layer when performing inter-layer differential coefficient prediction.

When inter-layer difference coefficient prediction is performed and the reference frame index and motion vector information extracted from the enhancement layer cannot be used directly in the reference layer, the motion information adjustment unit 720 adjusts the reference frame index and scales the motion vector using the frame rates of the enhancement layer and the reference layer and the motion information of the enhancement layer.

The base layer difference coefficient generator 730 derives the motion compensation block in the reference layer using the motion information of the enhancement layer, either as extracted or as adjusted through the motion information adjustment unit 720. At the same time, the block at the same position as the coding block of the enhancement layer is derived from the reference layer up-sampled to the resolution of the enhancement layer, and the difference between this co-located block and the derived compensation block is calculated to generate the difference coefficient block of the base layer.

FIG. 8 is a conceptual diagram for explaining adjustment of motion information by the motion information adjustment unit 720 of FIG. 7.

Referring to FIG. 8, when the motion information of the enhancement layer cannot be used directly in the reference layer, the motion information adjustment unit 720 re-adjusts the motion information of the enhancement layer. For example, if a block 800 to be encoded/decoded in the enhancement layer generates a prediction block 820 through a motion vector 810, the same motion vector should also be usable in the reference layer. However, when the picture 855 at the corresponding position in the reference layer cannot be referred to, as shown in FIG. 8, the picture 875 at the closest position in the prediction direction is selected in the reference layer, the picture 845 at the same position is selected in the enhancement layer, and a new motion vector 830 is calculated by scaling the motion vector of the enhancement layer based on the selected picture 845. Here, the picture closest in the prediction direction means the picture whose POC value is closest to the POC of the current picture to be encoded/decoded.
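
A sketch of this adjustment is given below: pick the reference-layer picture closest to the current picture in the prediction direction, then scale the enhancement-layer motion vector by the ratio of POC distances. The linear POC-based scaling mirrors common temporal motion-vector scaling and is an assumption about the exact formula.

```python
def adjust_el_motion(cur_poc, el_ref_poc, mv, rl_available_pocs):
    """Return a (possibly new) reference POC and motion vector usable in the
    reference layer; mv is an integer pair (mv_x, mv_y)."""
    if el_ref_poc in rl_available_pocs:
        return el_ref_poc, mv                      # usable as-is
    same_dir = [p for p in rl_available_pocs
                if (p - cur_poc) * (el_ref_poc - cur_poc) > 0]
    new_poc = min(same_dir, key=lambda p: abs(p - cur_poc))    # closest picture in the prediction direction (875)
    scale = (cur_poc - new_poc) / (cur_poc - el_ref_poc)       # POC-distance ratio
    mv_x, mv_y = mv
    return new_poc, (round(mv_x * scale), round(mv_y * scale)) # scaled vector (830)
```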

When the enhancement layer has two motion compensation blocks obtained using bi-directional prediction, the method described with reference to FIG. 8 is applied to each prediction direction independently. For example, if the motion information for the L0 direction does not need to be adjusted by the motion information adjustment unit 720 and only the motion information for the L1 direction needs to be adjusted, only the L1 motion information is adjusted using the method described above. When motion information adjustment is required for both the L0 and L1 directions, the motion information for both directions is adjusted using the method described above.
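
Reusing adjust_el_motion from the sketch above, the per-direction handling for bi-prediction amounts to checking and adjusting L0 and L1 independently:

```python
def adjust_bi_motion(cur_poc, motion, rl_available_pocs):
    """motion: {'L0': (ref_poc, (mv_x, mv_y)), 'L1': (ref_poc, (mv_x, mv_y))}."""
    return {direction: adjust_el_motion(cur_poc, ref_poc, mv, rl_available_pocs)
            for direction, (ref_poc, mv) in motion.items()}
```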

The method according to the present invention may be implemented as a program to be executed on a computer and stored in a computer-readable recording medium. Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a floppy disk, an optical data storage device, and the like, and the method may also be implemented in the form of a carrier wave (for example, transmission over the Internet).

The computer-readable recording medium may be distributed over networked computer systems so that computer-readable code can be stored and executed in a distributed manner. Functional programs, codes, and code segments for implementing the above method can be easily inferred by programmers in the technical field to which the present invention belongs.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed exemplary embodiments, and that various modifications may be made by those skilled in the art without departing from the spirit and scope of the present invention.

Claims (10)

A video decoding method using a scalable video codec, the method comprising:
extracting motion information of an enhancement layer;
determining whether a picture corresponding to a reference frame index of the enhancement layer exists in a reference layer based on the extracted motion information of the enhancement layer;
adjusting the reference frame index of the enhancement layer if the picture is not present; and
scaling a motion vector included in the motion information according to the adjusted index.
The method according to claim 1,
wherein the adjusting of the index comprises adjusting the reference frame index using the frame rates of the enhancement layer and the reference layer and the motion information in the enhancement layer.
The method according to claim 1,
wherein the scaling of the motion vector comprises scaling the motion vector using the frame rates of the enhancement layer and the reference layer and the motion information in the enhancement layer.
The method according to claim 1,
wherein the scaling of the motion vector comprises selecting a picture closest in the prediction direction in the reference layer and scaling the motion vector of the enhancement layer based on a picture at the same position as the selected picture in the enhancement layer.
The method of claim 4,
wherein the picture closest in the prediction direction is a picture having a POC value closest to the POC of the current picture to be decoded.
A video decoding apparatus using a scalable video codec, the apparatus comprising:
a motion compensation unit for extracting motion information of an enhancement layer; and
a motion information adjustment unit for determining whether a picture corresponding to a reference frame index of the enhancement layer exists in a reference layer based on the extracted motion information of the enhancement layer, adjusting the reference frame index of the enhancement layer if the picture does not exist, and scaling a motion vector included in the motion information according to the adjusted index.
The apparatus according to claim 6,
wherein the motion information adjustment unit adjusts the reference frame index using the frame rates of the enhancement layer and the reference layer and the motion information in the enhancement layer.
The apparatus according to claim 6,
wherein the motion information adjustment unit scales the motion vector using the frame rates of the enhancement layer and the reference layer and the motion information in the enhancement layer.
The apparatus according to claim 6,
wherein the motion information adjustment unit selects a picture closest in the prediction direction in the reference layer and scales the motion vector of the enhancement layer based on a picture at the same position as the selected picture in the enhancement layer.
The apparatus of claim 9,
wherein the picture closest in the prediction direction is a picture having a POC value closest to the POC of the current picture to be decoded.
KR20130045311A 2013-04-24 2013-04-24 Method for encoding and decoding image, and apparatus thereof KR20140127406A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR20130045311A KR20140127406A (en) 2013-04-24 2013-04-24 Method for encoding and decoding image, and apparatus thereof
PCT/KR2014/003554 WO2014175658A1 (en) 2013-04-24 2014-04-23 Video encoding and decoding method, and apparatus using same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR20130045311A KR20140127406A (en) 2013-04-24 2013-04-24 Method for encoding and decoding image, and apparatus thereof

Publications (1)

Publication Number Publication Date
KR20140127406A true KR20140127406A (en) 2014-11-04

Family

ID=52451642

Family Applications (1)

Application Number Title Priority Date Filing Date
KR20130045311A KR20140127406A (en) 2013-04-24 2013-04-24 Method for encoding and decoding image, and apparatus thereof

Country Status (1)

Country Link
KR (1) KR20140127406A (en)

Similar Documents

Publication Publication Date Title
JP6416992B2 (en) Method and arrangement for transcoding video bitstreams
US10791333B2 (en) Video encoding using hierarchical algorithms
JP5039142B2 (en) Quality scalable coding method
KR100679031B1 (en) Method for encoding/decoding video based on multi-layer, and apparatus using the method
KR100772883B1 (en) Deblocking filtering method considering intra BL mode, and video encoder/decoder based on multi-layer using the method
KR100772873B1 (en) Video encoding method, video decoding method, video encoder, and video decoder, which use smoothing prediction
KR100763194B1 (en) Intra base prediction method satisfying single loop decoding condition, video coding method and apparatus using the prediction method
KR100791299B1 (en) Multi-layer based video encoding method and apparatus thereof
CN112042200B (en) Method and device for video decoding
KR101253156B1 (en) Method for encoding/decoding video signal
US20090103613A1 (en) Method for Decoding Video Signal Encoded Using Inter-Layer Prediction
KR20110115172A (en) Methods and apparatus for bit depth scalable video encoding and decoding utilizing tone mapping and inverse tone mapping
CN104380745A (en) Method and apparatus of adaptive intra prediction for inter-layer and inter-view coding
CN103098471A (en) Method and apparatus of layered encoding/decoding a picture
EP1659797A2 (en) Method and apparatus for compressing motion vectors in video coder based on multi-layer
KR20140121355A (en) Method and apparatus for image encoding/decoding
KR20060119736A (en) Method for encoding video signal
CN110798686A (en) Video decoding method and device, computer equipment and computer readable storage medium
KR102345770B1 (en) Video encoding and decoding method and device using said method
CN115151941A (en) Method and apparatus for video encoding
KR20150056679A (en) Apparatus and method for construction of inter-layer reference picture in multi-layer video coding
KR20140127406A (en) Method for encoding and decoding image, and apparatus thereof
KR20140127405A (en) Method for encoding and decoding image, and apparatus thereof
KR20140127407A (en) Method for encoding and decoding image, and apparatus thereof
JP2018032913A (en) Video encoder, program and method, and video decoder, program and method, and video transmission system

Legal Events

Date Code Title Description
N231 Notification of change of applicant
WITN Withdrawal due to no request for examination