WO2006080797A1 - Multilayer video encoding/decoding method using residual re-estimation and apparatus using the same - Google Patents

Multilayer video encoding/decoding method using residual re-estimation and apparatus using the same

Info

Publication number
WO2006080797A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
restored
residual image
unit
encoding
Prior art date
Application number
PCT/KR2006/000278
Other languages
English (en)
Inventor
Bae-Keun Lee
Sang-Chang Cha
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020050025238A external-priority patent/KR100703749B1/ko
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Priority to EP06703534A priority Critical patent/EP1842377A1/fr
Publication of WO2006080797A1 publication Critical patent/WO2006080797A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/34Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/33Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness

Definitions

  • the present invention relates to multilayer video encoding/decoding, and more particularly, to a multilayer encoding/decoding method using residual re-estimation and an apparatus using the same, in which the number of bits used for bit stream transmission is reduced by encoding and transmitting a residual image obtained by subtracting a predicted frame or a base layer frame from a deblocked restored frame instead of an original frame.
  • data compression is applied to remove data redundancy.
  • data can be compressed by removing spatial redundancy such as a repetition of the same color or object in images, temporal redundancy such as little or no change in adjacent frames of moving image frames or a continuous repetition of sounds in audio, and a visual/ perceptual redundancy, which considers human visual and perceptive insensitivity to high frequencies.
  • the temporal redundancy is removed by a temporal prediction based on motion compensation
  • the spatial redundancy is removed by a spatial transform.
  • multimedia data is transmitted over a transmitting medium or a communication network, which may differ in terms of performance, as existing transmission mediums have varying transmission speeds.
  • an ultrahigh-speed communication network can transmit several tens of megabits of data per second, while a mobile communication network has a transmission speed of 384 kilobits per second.
  • a scalable video encoding method is implemented.
  • Such a scalable video encoding method makes it possible to truncate a portion of a compressed bit stream and to adjust the resolution, frame rate and signal-to-noise ratio (SNR) of a video corresponding to the truncated portion of the bit stream.
  • MPEG-4 (Moving Picture Experts Group-4) Part 10 has already made progress on a standard for this feature.
  • SNR scalability technique encodes an input video image into two layers having the same frame rate and resolution but different accuracies of quantization.
  • the fine grain SNR (FGS) scalability technique encodes the input video image into a base layer and an enhancement layer, and then encodes a residual image of the enhancement layer.
  • the FGS scalability technique may transmit the encoded enhancement-layer signals, or withhold them so that they are not decoded by a decoder, according to the network transmission efficiency or the state of the decoder side. Accordingly, data can be transmitted with its amount adjusted to the transmission bit rate of a network.
  • an aspect of the present invention is to provide a multilayer video encoding/decoding method using residual re-estimation and an apparatus using the same, in which the number of bits used for encoding a residual image can be efficiently reduced by using a frame, instead of the original frame, from which information to be removed by deblocking has already been removed.
  • Another aspect of the present invention is to provide a multilayer video encoding/ decoding method that can provide a high-quality video image from which block artifacts have been removed by performing a deblocking process for respective layers during the multilayer video encoding/decoding.
  • a multilayer video encoding method which includes (a) encoding a first residual image obtained by subtracting a predicted frame from an original frame, (b) decoding the encoded first residual image and generating a first restored frame by adding the decoded residual image to the predicted frame, (c) deblocking the first restored frame and (d) encoding a second residual image obtained by subtracting the predicted frame from the first deblocked restored frame.
  • a multilayer video decoding method which includes (a) extracting data corresponding to a residual image from a bit stream, (b) restoring the residual image by decoding the data, and (c) restoring a video frame by adding the residual image to a restored predicted frame, wherein the bit stream is a bit stream of an encoded second residual image obtained by (d) encoding a first residual image obtained by subtracting the predicted frame from an original frame, (e) decoding the encoded first residual image and generating a first restored frame by adding the decoded first residual image to the predicted frame, (f) deblocking the first restored frame, and (g) encoding a second residual image obtained by subtracting the predicted frame from the first deblocked restored frame.
  • a multilayer video encoder which includes a temporal transform unit for removing a temporal redundancy of a first residual image obtained by subtracting a predicted frame from an original frame, a spatial transform unit for removing a spatial redundancy of the first residual image from which the temporal redundancy has been removed, a quantization unit for quantizing transform coefficients provided by the spatial transform unit, an entropy encoding unit for encoding the quantized transform coefficients, a dequantization unit for dequantizing the quantized transform coefficients, an inverse spatial transform unit for generating a first restored residual image by performing an inverse spatial transform on the dequantized transform coefficients, and a deblocking unit for deblocking a first restored frame obtained by adding the first restored residual image to the predicted frame, wherein the spatial transform unit removes the spatial redundancy of a second residual image obtained by subtracting the predicted frame from the first deblocked restored frame.
  • a multilayer video decoder which includes an entropy decoding unit for extracting data corresponding to a residual image from a bit stream, a dequantization unit for dequantizing the extracted data, an inverse spatial transform unit for restoring the residual image by performing an inverse spatial transform on the dequantized data, and an adder for restoring a video frame by adding the restored residual image to a pre-restored predicted frame, wherein the bit stream is a bit stream of an encoded second residual image obtained by (a) encoding a first residual image obtained by subtracting the predicted frame from an original frame, (b) decoding the encoded first residual image and generating a first restored frame by adding the decoded first residual image to the predicted frame, (c) deblocking the first restored frame, and (d) encoding a second residual image obtained by subtracting the predicted frame from the first deblocked restored frame.
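For concreteness, the encoding steps (a)-(d) recited above can be sketched in Python/NumPy. This is a toy illustration only: the quantize, dequantize and deblock helpers below are assumptions made for the sketch (uniform scalar quantization and a plain 3x3 box blur), not the actual transform, quantizer or deblocking filter of the patented method.

```python
import numpy as np

def quantize(x, step):
    # Uniform scalar quantization: map real-valued samples to integer levels.
    return np.round(x / step).astype(np.int32)

def dequantize(levels, step):
    # Inverse scaling: restore approximate sample values from the levels.
    return levels.astype(np.float64) * step

def deblock(frame):
    # Toy stand-in for a deblocking filter: a mild 3x3 box blur.
    # (A real codec filters adaptively, and only across block boundaries.)
    padded = np.pad(frame, 1, mode='edge')
    out = np.zeros_like(frame, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + frame.shape[0],
                          1 + dx:1 + dx + frame.shape[1]]
    return out / 9.0

def encode_with_re_estimation(original, predicted, step=4.0):
    # (a) encode the first residual R1 = O - P
    r1_levels = quantize(original - predicted, step)
    # (b) decode R1 and generate the first restored frame REC1 = P + R1'
    rec1 = predicted + dequantize(r1_levels, step)
    # (c) deblock REC1; the result serves as the new "original" frame
    o_new = deblock(rec1)
    # (d) encode the second, re-estimated residual R2 = O' - P
    return quantize(o_new - predicted, step)  # this is what gets transmitted

rng = np.random.default_rng(0)
original = rng.uniform(0.0, 255.0, size=(16, 16))
predicted = original + rng.normal(0.0, 8.0, size=(16, 16))  # imperfect prediction
r2_levels = encode_with_re_estimation(original, predicted)
```

Because step (c) has already smoothed away detail that deblocking would discard anyway, the re-estimated residual of step (d) tends to cost fewer bits than the residual of step (a).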
  • FIG. 1 is a view illustrating an FGS encoding process in an SVM3.0 process.
  • FIG. 2 is a view illustrating an FGS decoding process in an SVM3.0 process.
  • FIG. 3 is a view illustrating a residual re-estimation process in an FGS encoding process according to an embodiment of the present invention.
  • FIG. 4 is a block diagram illustrating the construction of an encoder according to an embodiment of the present invention.
  • FIG. 5 is a block diagram illustrating the construction of a decoder according to an embodiment of the present invention.
  • FIG. 6 is a view illustrating a residual re-estimation process in a general multilayer structure according to another embodiment of the present invention.
  • FIG. 7 is a block diagram illustrating the construction of an encoder according to another embodiment of the present invention.
  • FIG. 8 is a block diagram illustrating the construction of a decoder according to another embodiment of the present invention.
  • the fine grain SNR (FGS) scalability of the scalable video model (SVM) 3.0 is implemented using a gradual refinement representation.
  • the SNR scalability may be achieved by truncating NAL units obtained as the result of FGS encoding at any point, while the FGS scalability is implemented by using a base layer and an FGS enhancement layer.
  • the base layer is used to generate a base layer frame which represents the minimum video quality and which can be transmitted at the lowest transmission bit rate.
  • the FGS enhancement layer is used to generate NAL units which can be properly truncated and transmitted above the lowest transmission bit rate or which can be properly truncated and decoded by a decoder.
  • the FGS enhancement layer transforms, quantizes and transmits a residual signal obtained by subtracting a restored frame, which is obtained in the base layer or a lower enhancement layer, from the original frame.
  • the SNR scalability is implemented by generating a finer residual by gradually reducing the quantization parameter values in upper layers.
  • the restored transform coefficient c_k is calculated by Equation (1):
  • c_k = Σ_i InverseScaling(l_{k,i}, QP_i) ... (1)
  • l_{k,i} represents the transform coefficient level encoded in the i-th enhancement layer for the transform coefficient c_k, QP_i denotes the quantization parameter of the corresponding macroblock in the i-th layer, and the function InverseScaling(·) represents a coefficient restoration process.
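As a toy reading of Equation (1), the sketch below restores one coefficient as the sum of per-layer contributions. The step-size rule step = 2^(QP/6) is an assumption borrowed from H.264-style quantizer spacing, used here only to show how reducing QP in upper layers yields finer contributions; it is not the exact InverseScaling of SVM 3.0.

```python
def inverse_scaling(level, qp):
    # Assumed coefficient restoration: the quantizer step doubles every
    # 6 QP units (an illustrative choice, not the SVM 3.0 definition).
    return level * (2.0 ** (qp / 6.0))

# One transform coefficient c_k refined over a base layer and two FGS layers:
levels = [3, -1, 2]   # l_{k,0}, l_{k,1}, l_{k,2}
qps = [30, 24, 18]    # QP reduced by 6 per upper layer -> finer residual
c_k = sum(inverse_scaling(l, qp) for l, qp in zip(levels, qps))
print(c_k)            # restored coefficient: the sum of per-layer contributions
```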
  • FIG. 1 is a view illustrating an FGS encoding process in an SVM3.0 process.
  • a base layer frame is obtained using an original frame 20.
  • a transform & quantization unit 30 performs transform and quantization to generate a base layer frame 60 from the original frame 20.
  • a dequantization & inverse transform unit 40 performs dequantization and inverse transform in order to provide the base layer frame 60, which has passed through the transform and quantization process, to the enhancement layer. This process is to make the base layer frame consistent with a frame decoded by the decoder since the decoder can only recognize the restored frame.
  • a frame of a general FGS base layer is deblocked by a deblocking unit 50 and provided to the enhancement layer.
  • a block artifact may appear because an input frame is encoded and transmitted with block-based information.
  • the deblocking is to cancel the block artifact.
  • the restored frame is deblocked in the case where the restored frame is used as a reference frame for prediction. Through this deblocking process, specified bits are removed by filtering.
  • the residual signal, i.e., the difference between the original frame 20 and a restored base layer frame 22 or a restored lower enhancement layer frame 26, is obtained.
  • the residual signal is then added to the restored reference frame by the decoder to restore the original video data.
  • a subtracter 11 of the first enhancement layer subtracts the frame 22 restored from the base layer from the original frame.
  • the residual signal obtained from the subtracter 11 is outputted as a first enhancement layer frame 62 through the transform and quantization unit 32.
  • the first enhancement layer frame 62 is also restored by a dequantization & inverse transform unit 42 to be provided to the second enhancement layer.
  • An adder 12 generates a new frame 26 by adding the first enhancement layer frame 24 to the restored base layer frame 22, and provides the frame 26 to the second enhancement layer.
  • a subtracter 13 of the second enhancement layer subtracts the frame 26 provided from the first enhancement layer from the original frame 20. This subtracted value is outputted as the second enhancement layer frame 64 through a transform & quantization unit 34. The second enhancement layer frame 64 is then restored by a dequantization & inverse transform unit 44, and then added to the frame 26 to be provided as a new frame 29. In the case where the second enhancement layer is the uppermost layer, the frame 29 is deblocked through a deblocking unit 52 before it is used as a reference frame for other frames.
  • the base layer frame 60, the first enhancement layer frame 62 and the second enhancement layer frame 64 may be transmitted in the form of a network abstraction layer (NAL) unit.
  • the decoder can restore data even if the received NAL unit is partially truncated.
  • FIG. 2 is a view illustrating an FGS decoding process in an SVM3.0 process.
  • An FGS decoder receives the base layer frame 60, the first enhancement layer frame 62 and the second enhancement layer frame 64 obtained by an FGS encoder. Since these frames are encoded data, they are decoded through dequantization & inverse transform units 200, 202 and 204. The frames restored through the dequantization & inverse transform unit 200 of the base layer are then deblocked by a deblocking unit 210 to be restored to the base layer frame.
  • Restored frames 220, 222, 224 are added together by an adder 230.
  • the added frames are again deblocked by a deblocking unit 240, so that boundaries among the blocks are erased. This process corresponds to the deblocking of the uppermost enhancement layer in the FGS encoder.
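In the same toy setting as the encoder sketch above (identity spatial transform, and reusing its dequantize and deblock helpers), the decoding path of FIG. 2 reduces to a few lines:

```python
def fgs_decode(base_levels, enh_levels_list, base_step, enh_steps):
    # Base layer: dequantize and deblock (deblocking unit 210 in FIG. 2).
    frame = deblock(dequantize(base_levels, base_step))
    # Enhancement layers: dequantize each layer and accumulate (adder 230).
    for levels, step in zip(enh_levels_list, enh_steps):
        frame = frame + dequantize(levels, step)
    # Deblock the summed frame (deblocking unit 240) to erase block boundaries.
    return deblock(frame)
```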
  • FIG. 3 is a view illustrating a residual re-estimation process in an FGS encoding process according to an embodiment of the present invention.
  • the restored frame, which is used as the reference frame in the enhancement layer of the FGS encoder, is deblocked to be used as a new original frame. Accordingly, a new residual, obtained by subtracting the reference frame restored in the lower layer from the new deblocked original frame, is encoded and transmitted to the decoder, so that the unnecessary data that would be removed by deblocking is removed in advance and the number of transmitted bits is reduced accordingly.
  • a left part 300 in FIG. 3 represents the FGS encoding process in a conventional SVM3.0 process, and a right part 350 represents a process added for the residual re-estimation according to an embodiment of the present invention.
  • the FGS encoding of SVM3.0 generates the base layer frame by transforming and quantizing an original frame O in the base layer, as described above with reference to FIG. 1.
  • the bit stream of the obtained base layer frame is transmitted to the decoder side and is simultaneously restored through the dequantization and inverse transform process to be used as the reference frame of the enhancement layer.
  • the restored base layer frame passes through a deblocking process D0 before it is used as a reference frame B0 of the upper enhancement layer.
  • the residual (hereinafter referred to as 'R1'), obtained by subtracting the reference frame B0 from the original frame O, is transformed and quantized in the same manner as in the conventional encoding process, and a restored frame REC1 is obtained by performing dequantization and inverse transform of the quantized residual. A frame O1 is then obtained by performing deblocking D1 of the restored frame REC1, and the residual (hereinafter referred to as 'R2') is re-estimated with reference to the new original frame O1 instead of the previous original frame.
  • the new residual R2 is expressed by Equation (2):
  • R2 = D1(B0 + R1') - B0 ... (2)
  • R1' denotes the restored residual after R1 is transformed and quantized.
  • the bit stream of the first FGS layer is obtained by transforming and quantizing the residual obtained by subtracting the reference frame B0 from the frame O1, and is then transmitted to the decoder. Meanwhile, a frame REC1', restored by adding the value obtained by performing dequantization and inverse transform of the re-estimated residual to the reference frame B0, is used as a reference frame B1 of the upper enhancement layer (i.e., a second FGS layer).
  • the restored frame REC1' is expressed by Equation (3): REC1' = B0 + R2' ... (3), where R2' denotes the residual restored after R2 is transformed and quantized.
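A small numeric walk-through of Equations (2) and (3), reusing the toy quantize, dequantize and deblock helpers from the earlier sketch; the frames and the step size are arbitrary illustrative values:

```python
import numpy as np

step = 4.0
O = np.full((8, 8), 120.0)
O[:, 4:] = 96.0                  # toy original frame O with a vertical edge
B0 = np.full((8, 8), 110.0)      # reference frame B0 restored from the base layer

R1 = O - B0
R1_restored = dequantize(quantize(R1, step), step)   # R1'
REC1 = B0 + R1_restored
O1 = deblock(REC1)                                   # new "original" frame O1
R2 = O1 - B0                                         # Equation (2): D1(B0 + R1') - B0
R2_restored = dequantize(quantize(R2, step), step)   # R2'
REC1_new = B0 + R2_restored                          # Equation (3): REC1' = B0 + R2'
```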
  • the transform and quantization process in the residual re-estimation process is the same as the transform and quantization process used for the FGS encoding of the same layer.
  • a new residual can be encoded and transmitted through the same process as in the first FGS layer as described above.
  • since a deblocking D0 has already been performed on the base layer, the deblocking Dn applied to the enhancement layer can be performed with a weaker strength than the deblocking D0.
  • FIG. 4 is a block diagram illustrating the construction of an encoder 400 according to an embodiment of the present invention.
  • the encoder 400, which performs the residual re-estimation in the FGS encoding as shown in FIG. 3, may include a base layer encoder 410 and an enhancement layer encoder 450.
  • a base layer and an enhancement layer are used.
  • the present invention can be also applied to cases where more layers are used.
  • the base layer encoder 410 may include a motion estimation unit 412, a motion compensation unit 414, a spatial transform unit 418, a quantization unit 420, an entropy encoding unit 422, a dequantization unit 424, an inverse spatial transform unit 426 and a deblocking unit 430.
  • the motion estimation unit 412 performs motion estimation of the present frame on the basis of the reference frame among input video frames, and obtains motion vectors.
  • the motion vectors for prediction are obtained by receiving the restored frame that has been deblocked from the deblocking unit 430.
  • a widely used block matching algorithm can be used for such motion estimation.
  • the block matching algorithm estimates, as the motion vector, the displacement that yields the minimum error while moving a given motion block in pixel units within a specified search area of the reference frame.
  • a motion block having a fixed size or a motion block having a variable size according to a hierarchical variable size block matching (HVSBM) may be used.
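A minimal full-search block matching routine in the spirit of the description above; the fixed 8x8 block, the +/-4 pixel search range and the SAD error measure are illustrative choices, not parameters fixed by the method:

```python
import numpy as np

def block_match(ref, cur, y, x, block=8, search=4):
    # Slide the current block over a (2*search+1)^2 window of the reference
    # frame and keep the displacement with the minimum sum of absolute
    # differences (SAD) as the estimated motion vector.
    target = cur[y:y + block, x:x + block]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
                continue  # candidate block would fall outside the reference frame
            sad = np.abs(ref[yy:yy + block, xx:xx + block] - target).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```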
  • the motion estimation unit 412 provides motion data such as motion vectors obtained from the motion estimation, the size of the motion block, the reference frame number, and others, to the entropy encoding unit 422.
  • the motion compensation unit 414 generates a temporally predicted frame of the present frame by performing motion compensation for a forward or backward reference frame using the motion vectors calculated by the motion estimation unit 412.
  • the subtracter 416 removes the temporal redundancy existing between the frames by subtracting the temporally predicted frame provided from the motion compensation unit 414 from the present frame.
  • the spatial transform unit 418 removes a spatial redundancy from the frame, from which the temporal redundancy has been removed by the subtracter 416, using a spatial transform method that supports spatial scalability.
  • a discrete cosine transform (DCT), a wavelet transform, and others may be used as the spatial transform method.
  • Coefficients obtained from the spatial transform are transform coefficients. If the DCT method is used as the spatial transform method, the coefficients are DCT coefficients, while if the wavelet transform is used, the coefficients are wavelet coefficients.
  • the quantization unit 420 quantizes the transform coefficients obtained by the spatial transform unit 418. Quantization is a way of representing the transform coefficients, which are expressed as certain real values, as discrete values by dividing the range of the coefficients into specified sections and matching each section with a specified index.
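A worked instance of this section-and-index matching, with an arbitrary step size of 4.0:

```python
import numpy as np

coeffs = np.array([12.7, -3.2, 0.4, 45.1])         # real-valued transform coefficients
indexes = np.round(coeffs / 4.0).astype(np.int32)  # -> [ 3 -1  0 11]
restored = indexes * 4.0                           # -> [12. -4.  0. 44.]
```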
  • the entropy encoding unit 422 performs a lossless coding of the transform coefficients quantized by the quantization unit 420 and motion data provided from the motion estimation unit 412, and generates an output bit stream.
  • An arithmetic coding, a variable length coding, and others may be used as the lossless coding method.
  • the video encoder 400 may further include the dequantization unit 424, the inverse spatial transform unit 426, and others.
  • the dequantization unit 424 dequantizes the coefficients quantized by the quantization unit 420. This dequantization process corresponds to the inverse process of the quantization.
  • the inverse spatial transform unit 426 performs the inverse spatial transform of the result of the dequantization, and provides the result of the inverse spatial transform to an adder 428.
  • the adder 428 restores the video frame by adding the restored residual frame provided from the inverse spatial transform unit 426 to the predicted frame provided from the motion compensation unit 414 and stored in a frame buffer (not illustrated), and provides the restored video frame to the deblocking unit 430.
  • the deblocking unit 430 receives the video frame restored by the adder 428 and performs the deblocking to remove the artifact caused by the boundaries of blocks in the frame.
  • the deblocked restored video frame is provided to an enhancement layer encoder 450 as the reference frame.
  • the enhancement layer encoder 450 may include a spatial transform unit 454, a quantization unit 456, an entropy encoding unit 468, a dequantization unit 458, an inverse spatial transform unit 460 and a deblocking unit 464.
  • a subtracter 452 generates a residual frame by subtracting the reference frame provided by the base layer from the current frame.
  • the residual frame is encoded through the spatial transform unit 454 and the quantization unit 456, and is restored through the dequantization unit 458 and the inverse spatial transform unit 460.
  • An adder 462 generates a restored frame by adding the restored residual frame provided from the inverse spatial transform unit 460 to the reference frame provided by the base layer.
  • the restored frame is deblocked by the deblocking unit 464.
  • a subtracter 466 takes the deblocked frame as the new current frame, generates a new residual frame by subtracting the reference frame provided by the base layer from it, and provides the new residual frame to the spatial transform unit 454.
  • the new residual frame is processed through the spatial transform unit 454, the quantization unit 456 and the entropy encoding unit 468 to be outputted as an enhanced layer bit stream, and then is restored through the dequantization unit 458 and the inverse spatial transform unit 460.
  • the adder 462 adds the restored new residual image to the reference frame provided by the base layer, and provides the restored new frame to the upper enhancement layer as the reference frame.
  • FIG. 5 is a block diagram illustrating the construction of a decoder according to an embodiment of the present invention.
  • a video decoder 500 may include a base layer decoder 510 and an enhancement layer decoder 550.
  • the enhancement layer decoder 550 may include an entropy decoding unit 555, a dequantization unit 560 and an inverse spatial transform unit 565.
  • the entropy decoding unit 555 extracts texture data by performing the lossless decoding that is reverse to the entropy encoding.
  • the texture information is provided to the dequantization unit 560.
  • the dequantization unit 560 dequantizes the texture information transmitted from the entropy decoding unit 555.
  • the dequantization process is to search for the quantized coefficients that match the indexes transferred from the encoder side.
  • the inverse spatial transform unit 565 inversely performs the spatial transform and restores the coefficients obtained from the dequantization of the residual image in a spatial domain. For example, if the coefficients are spatially transformed by a wavelet transform method in the video encoder side, the inverse spatial transform unit 565 will perform the inverse wavelet transform, while if the coefficients are transformed by a DCT transform method in the video encoder side, the inverse spatial transform unit will perform the inverse DCT transform.
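The requirement that the decoder apply the inverse of whichever transform the encoder used can be seen in an 8x8 DCT round trip; SciPy's dctn/idctn are used here purely for illustration:

```python
import numpy as np
from scipy.fft import dctn, idctn

block = np.arange(64, dtype=np.float64).reshape(8, 8)
coeffs = dctn(block, norm='ortho')        # encoder side: forward 2-D DCT
restored = idctn(coeffs, norm='ortho')    # decoder side: matching inverse DCT
assert np.allclose(block, restored)       # exact round trip without quantization
```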
  • An adder 570 restores the video frame by adding the residual image restored by the inverse spatial transform unit to the reference frame provided from the deblocking unit 540 of the base layer decoder 510.
  • the base layer decoder 510 may include an entropy decoding unit 515, a dequantization unit 520, an inverse spatial transform unit 525, a motion compensation unit 530 and a deblocking unit 540.
  • the entropy decoding unit 515 performs the lossless decoding that is inverse to the entropy encoding, and extracts texture data and motion data.
  • the texture information is provided to the dequantization unit 520.
  • the motion compensation unit 530 performs motion compensation of the restored video frame using the motion data provided from the entropy decoding unit 515 and generates a motion-compensated frame. This motion compensation process is applied only to the case where the present frame has been encoded through a temporal prediction process in the encoder side.
  • An adder 535 restores the video frame by adding the residual image to the motion compensated frame provided from the motion compensation unit 530 if the residual image restored by the inverse spatial transform unit 525 is obtained by the temporal prediction.
  • the deblocking unit 540, which corresponds to the deblocking unit 430 of the base layer encoder as illustrated in FIG. 4, generates the base layer frame by deblocking the video frame restored by the adder 535, and provides the base layer frame to the adder 570 of the enhancement layer decoder 550 as the reference frame.
  • the residual re-estimation process according to the embodiments of the present invention can be extended to a general multilayer video coding. That is, by re-estimating the residual in consideration of the deblocked restored frame as the new original frame, instead of the residual obtained by subtracting the predicted frame from the original frame, unnecessary data to be removed by the deblocking is removed in advance, and the number of bits being transmitted is reduced.
  • FIG. 6 is a view illustrating a residual re-estimation process in a general multilayer structure according to another embodiment of the present invention.
  • the residual image obtained by subtracting a predicted frame Pn from an original frame On is transformed and quantized to be transmitted to the decoder side, and the restored frame RECn is obtained by adding the predicted frame to a value obtained by dequantizing and inverse-transforming the residual. Then, by performing the deblocking Dn of the frame RECn, the reference frame to be provided for prediction is obtained.
  • a frame On', obtained by applying the deblocking Dn to the restored frame RECn that is obtained after the above-described residual creation and frame restoration processes, is considered as the new original frame, and a new residual image is obtained by subtracting an inter-predicted frame (or macroblock) Pn from the frame On'. Then, the new residual image is transformed and quantized to be transmitted to the decoder side. Also, the frame RECn', restored by dequantizing and inverse-transforming the transformed and quantized new residual image and adding the result to the predicted frame Pn, is used as the reference frame for generating a predicted frame of another frame.
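This per-frame loop of FIG. 6 parallels the earlier FGS sketch, except that the frame subtracted is now a motion-compensated prediction Pn rather than a lower-layer reference. A minimal version under the same toy helpers (quantize, dequantize and deblock from the first sketch):

```python
def encode_frame_re_estimated(o_n, p_n, step=4.0):
    # Conventional residual and its restoration REC_n = P_n + residual'.
    rec_n = p_n + dequantize(quantize(o_n - p_n, step), step)
    # The deblocked restoration is treated as the new original O_n'.
    o_n_new = deblock(rec_n)
    # Re-estimated residual that is actually transmitted to the decoder side.
    levels = quantize(o_n_new - p_n, step)
    # REC_n' is kept as the reference frame for predicting other frames.
    rec_n_new = p_n + dequantize(levels, step)
    return levels, rec_n_new
```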
  • FIG. 7 is a block diagram illustrating the construction of an encoder according to another embodiment of the present invention.
  • An N-th layer encoder 700 may include a down sampler 715, a motion estimation unit 720, a motion compensation unit 725, a spatial transform unit 735, a quantization unit 740, a dequantization unit 745, an inverse spatial transform unit 750, a deblocking unit 760, an up sampler 770 and an entropy encoding unit 775.
  • the down sampler 715 performs down-sampling of the original input frame by resolution of the N-th layer. This down-sampling is performed on the assumption that the resolution of an upper enhancement layer and the resolution of the N-th layer differ, and thus the down-sampling may be omitted if the resolutions of both layers are equal to each other.
  • the subtracter 730 removes the temporal redundancy of the video by subtracting a temporally predicted frame obtained by the motion compensation unit 725 from the present frame.
  • the spatial transform unit 735 removes the spatial redundancy of the frame from which the temporal redundancy has been removed by the subtracter 730 using the spatial transform method that supports the spatial scalability. Additionally, the spatial transform unit 735 removes the spatial redundancy of the new residual image obtained by subtracting the temporally predicted frame obtained by the motion compensation unit 725 from the frame restored by an adder 755 and the deblocking unit 760.
  • the adder 755 restores the N-th layer input frame by adding the residual image restored by the inverse spatial transform unit 750 to the temporally predicted frame provided from the motion compensation unit 725.
  • the deblocking unit 760 generates a new N-th layer input frame by deblocking the frame restored by the adder 755.
  • the up sampler 770 performs up-sampling of the signal outputted from the adder 755 by the resolution of the upper enhancement layer. If the resolutions of the upper enhancement layer and the N-th layer are equal to each other, the up sampler 770 may not be used.
  • FIG. 8 is a block diagram illustrating the construction of a decoder according to another embodiment of the present invention.
  • An N-th layer decoder 800 may include an entropy decoding unit 810, a dequantization unit 820, an inverse spatial transform unit 830, a motion compensation unit 840 and an up sampler 860.
  • the up sampler 860 performs up-sampling of the N-th layer image restored in the N-th layer decoder 800 by the resolution of the upper enhancement layer and provides the up-sampled image to the upper enhancement layer. If the resolutions of the upper enhancement layer and the N-th layer are equal to each other, the up-sampling process may be omitted.
  • the respective constituent elements as illustrated in FIGs. 4, 5, 7 and 8 may be software or hardware such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC).
  • the constituent elements are not limited to software or hardware.
  • the constituent elements may be constructed so as to reside in an addressable storage medium or to execute on one or more processors.
  • the functions provided in the constituent elements may be implemented by subdivided constituent elements, and the constituent elements and functions provided in the constituent elements may be combined together to perform a specified function.
  • the constituent elements may be implemented so as to execute on one or more computers in a system.
  • the multilayer video encoding/decoding method using residual re-estimation and the apparatus using the same according to the present invention have at least one of the following effects.
  • the number of bits used for encoding the residual signal can be reduced by using a frame from which redundant information has been removed by deblocking as the original frame.
  • a high-quality video frame from which block artifacts have been removed can be provided by performing a deblocking process for respective layers in the multilayer video encoding/decoding process.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A multilayer video encoding/decoding method using residual re-estimation and an apparatus therefor are provided. The multilayer video encoding method includes (a) encoding a first residual image obtained by subtracting a predicted frame from an original frame, (b) decoding the encoded first residual image and generating a first restored frame by adding the decoded residual image to the predicted frame, (c) deblocking the first restored frame, and (d) encoding a second residual image obtained by subtracting the predicted frame from the first deblocked restored frame.
PCT/KR2006/000278 2005-01-27 2006-01-25 Multilayer video encoding/decoding method using residual re-estimation and apparatus using the same WO2006080797A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP06703534A EP1842377A1 (fr) 2005-01-27 2006-01-25 Multilayer video encoding/decoding method using residual re-estimation and apparatus using the same

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US64700005P 2005-01-27 2005-01-27
US60/647,000 2005-01-27
KR1020050025238A KR100703749B1 (ko) 2005-01-27 2005-03-26 Multi-layer video coding and decoding method using residual re-estimation, and apparatus therefor
KR10-2005-0025238 2005-03-26

Publications (1)

Publication Number Publication Date
WO2006080797A1 (fr)

Family

ID=36740753

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2006/000278 WO2006080797A1 (fr) Multilayer video encoding/decoding method using residual re-estimation and apparatus using the same

Country Status (2)

Country Link
EP (1) EP1842377A1 (fr)
WO (1) WO2006080797A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020064932A * 2000-10-11 2002-08-10 Koninklijke Philips Electronics N.V. Spatial scalability for fine granular video encoding
KR20060006711A * 2004-07-15 2006-01-19 Samsung Electronics Co., Ltd. Temporal decomposition and inverse temporal decomposition methods for video coding and decoding, and video encoder and decoder
KR20060006328A * 2004-07-15 2006-01-19 Samsung Electronics Co., Ltd. Scalable video coding method and apparatus using base layer

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RHEE V. AND GIBSON J.D.: "Block-level refinement of motion description in layered H.261 video", PROCEEDINGS OF 1995 CONFERENCE ON SIGNALS, SYSTEMS AND COMPUTERS, vol. 2, 2 November 1995 (1995-11-02), PACIFIC GROVE, CA, USA, pages 1408 - 1412, XP008119940 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112789857A (zh) * 2018-08-03 2021-05-11 V-Nova International Ltd Transformations for signal enhancement coding

Also Published As

Publication number Publication date
EP1842377A1 (fr) 2007-10-10

Similar Documents

Publication Publication Date Title
US20060165304A1 (en) Multilayer video encoding/decoding method using residual re-estimation and apparatus using the same
US8817872B2 (en) Method and apparatus for encoding/decoding multi-layer video using weighted prediction
JP4891234B2 (ja) グリッド動き推定/補償を用いたスケーラブルビデオ符号化
US20060165302A1 (en) Method of multi-layer based scalable video encoding and decoding and apparatus for the same
US7944975B2 (en) Inter-frame prediction method in video coding, video encoder, video decoding method, and video decoder
US8411753B2 (en) Color space scalable video coding and decoding method and apparatus for the same
KR100703788B1 Multi-layer based video encoding method, decoding method, video encoder, and video decoder using smoothing prediction
JP4922391B2 Multi-layer based video encoding method and apparatus
US20060120448A1 (en) Method and apparatus for encoding/decoding multi-layer video using DCT upsampling
US20090148054A1 (en) Method, medium and apparatus encoding/decoding image hierarchically
US20060013310A1 (en) Temporal decomposition and inverse temporal decomposition methods for video encoding and decoding and video encoder and decoder
JP2006333519A Scalable video coding and decoding method and apparatus
CA2909595A1 Method and apparatus for processing a video signal
US20060013311A1 (en) Video decoding method using smoothing filter and video decoder therefor
EP2152008A1 (fr) Procédé de prédiction d'un bloc perdu ou endommagé d'un cadre de couche spatial amélioré et décodeur SVC adapté correspondant
US20060250520A1 (en) Video coding method and apparatus for reducing mismatch between encoder and decoder
WO2006118384A1 Method and apparatus for encoding/decoding multi-layer video using weighted prediction
WO2006078109A1 Method of multi-layer based scalable video encoding and decoding and apparatus for the same
EP1889487A1 Multi-layer based video encoding method, decoding method, video encoder, and video decoder using smoothing prediction
WO2006059847A1 Method and apparatus for encoding/decoding multi-layer video using DCT upsampling
EP1842377A1 Multilayer video encoding/decoding method using residual re-estimation and apparatus using the same
JP5063678B2 Method of assigning priority for controlling bit rate of bitstream, method of controlling bit rate of bitstream, video decoding method, and apparatus using the same
EP1905238A1 Video coding method and apparatus for reducing mismatch between encoder and decoder
Zhang et al. Subband motion compensation for spatially scalable video coding
WO2006104357A1 Method for compressing/decompressing motion vectors of unsynchronized picture and apparatus using the same

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2006703534

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 200680003367.2

Country of ref document: CN

WWP Wipo information: published in national office

Ref document number: 2006703534

Country of ref document: EP