WO2007024106A1 - Method for enhancing performance of residual prediction and video encoder and decoder using the same - Google Patents

Method for enhancing performance of residual prediction and video encoder and decoder using the same

Info

Publication number
WO2007024106A1
WO2007024106A1 PCT/KR2006/003328
Authority
WO
WIPO (PCT)
Prior art keywords
residual signal
block
calculating
representative
layer block
Prior art date
Application number
PCT/KR2006/003328
Other languages
French (fr)
Inventor
Kyo-Hyuk Lee
Mathew Manu
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020050119785A (KR100746011B1)
Application filed by Samsung Electronics Co., Ltd.
Publication of WO2007024106A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N19/198Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters including smoothing of a sequence of encoding parameters, e.g. by averaging, by choice of the maximum, minimum or median value
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • Methods and apparatuses consistent with the present invention relate to a video compression technique, and more particularly, to enhancing the performance of residual prediction in a multi-layered video codec.
  • multimedia data requires storage media that have a large capacity and a wide bandwidth for transmission since the amount of multimedia data is usually large. Accordingly, a compression coding method is requisite for transmitting multimedia data including text, video, and audio.
  • a basic principle of data compression is removing data redundancy.
  • Data can be compressed by removing spatial redundancy in which the same color or object is repeated in an image, temporal redundancy in which there is little change between adjacent frames in a moving image or repeated sounds in audio, or mental visual redundancy which takes into account human eyesight and its limited perception of high frequency.
  • temporal redundancy is removed by motion estimation and motion compensation
  • spatial redundancy is removed by transform coding.
  • Transmission media are used to transmit multimedia generated after removing data redundancy. Transmission performance differs depending on the transmission medium. Transmission media which are currently in use have various transmission rates. For example, an ultrahigh-speed communication network can transmit data at several tens of megabits per second, while a mobile communication network has a transmission rate of 384 kilobits per second. Accordingly, to support transmission media having various speeds or to transmit multimedia at a data rate suitable for a given transmission environment, data coding methods which have scalability, such as wavelet video coding and subband video coding, may be used.
  • Scalability indicates a characteristic that enables a decoder or a pre-decoder to partially decode a single compressed bitstream according to various conditions such as a bit rate, an error rate, and system resources.
  • a decoder or a pre-decoder can reconstruct a multimedia sequence having different picture quality, resolutions, or frame rates using only a portion of a bitstream that has been coded according to a method which has scalability.
  • Moving Picture Experts Group-21 Part 13 standardization for scalable video coding is under way. In particular, much effort is being made to implement scalability based on a multi-layered structure.
  • a bitstream may consist of multiple layers, i.e., a base layer and first and second enhanced layers with different resolutions, e.g., quarter common intermediate format (QCIF), common intermediate format (CIF), and twice common interchange/intermediate format (2CIF), or different frame rates.
  • FIG. 1 illustrates an example of a scalable video coding scheme using a multi-layered structure.
  • a base layer has a QCIF resolution and a frame rate of 15 Hz
  • a first enhanced layer has a CIF resolution and a frame rate of 30 Hz
  • a second enhanced layer has a standard definition (SD) resolution and a frame rate of 60 Hz.
  • Interlayer correlation may be used in encoding a multi-layer video frame.
  • a region 12 in a first enhancement layer video frame may be efficiently encoded using prediction from a corresponding region 13 in a base layer video frame.
  • a region 11 in a second enhancement layer video frame can be efficiently encoded using prediction from the region 12 in the first enhancement layer.
  • When each layer of a multi-layer video has a different resolution, an image of the base layer needs to be upsampled before the prediction is performed.
  • the SVC standard using a multi-layer structure supports intra base layer (BL) prediction and residual prediction in addition to directional intra prediction and inter prediction used in the conventional H.264 to predict a block or macroblock in a current frame.
  • the residual prediction involves predicting a residual signal in a current layer from a residual signal in a lower layer and quantizing only a signal corresponding to a difference between the predicted value and the actual value.
  • FIG. 2 is an exemplary diagram illustrating a residual prediction process defined in the SVC standard.
  • In step S1, a predicted block P_B for a block O_B in a lower layer N-1 is generated using neighboring frames.
  • In step S2, the predicted block P_B is subtracted from the block O_B to generate a residual R_B.
  • In step S3, the residual R_B is subjected to quantization/inverse quantization to generate a reconstructed residual R_B'.
  • In step S4, a predicted block P_C for a block O_C in a current layer N is generated using neighboring frames.
  • In step S5, the predicted block P_C is subtracted from the block O_C to generate a residual R_C.
  • In step S6, the reconstructed residual R_B' is subtracted from the residual R_C obtained in the step S5, and in step S7, the subtraction result R obtained in the step S6 is quantized.
  • the conventional residual prediction process has a drawback in that residual signal energy is not sufficiently removed in the subtraction step, because the residual signal R_B has a different dynamic range (or error range) from the residual signal R_C when the quantization parameter for the reference frame used in generating the current layer predicted signal P_C is different from the quantization parameter for the reference frame used in generating the lower layer predicted signal P_B, as shown in FIG. 3.
  • An aspect of the present invention is to provide a method for reducing a quantity of coded data by reducing residual signal energy in residual prediction used in a multi-layered video codec.
  • Another aspect of the present invention is to provide an improved video encoder and video decoder employing the method.
  • a residual prediction method including calculating a first residual signal for a current layer block; calculating a second residual signal for a lower layer block corresponding to the current layer block, performing scaling by multiplying the second residual signal by a scaling factor, and calculating a difference between the first residual signal and the scaled second residual signal.
  • a multi-layer video encoding method including calculating a first residual signal for a current layer block, calculating a second residual signal for a lower layer block corresponding to the current layer block, performing scaling by multiplying the second residual signal by a scaling factor, and calculating a difference between the first residual signal and the scaled second residual signal, and quantizing the difference.
  • a method for generating a multi-layer video bitstream including generating a base layer bitstream and generating an enhancement layer bitstream, wherein the enhancement layer bitstream contains at least one macroblock and each macroblock comprises a field indicating a motion vector, a field specifying a coded residual, and a field indicating a scaling factor for the macroblock, and wherein the scaling factor is used to make a dynamic range of a residual signal for a base layer block substantially equal to a dynamic range of a residual signal for an enhancement layer block.
  • a multi-layer video decoding method including reconstructing a difference signal for a current layer block from an input bitstream, reconstructing a first residual signal for a lower layer block from the input bitstream, performing scaling by multiplying the first residual signal by a scaling factor, and adding the reconstructed difference signal and the scaled first residual signal together and reconstructing a second residual signal for the current layer block.
  • a multi-layer video encoder including means for calculating a first residual signal for a current layer block, means for calculating a second residual signal for a lower layer block corresponding to the current layer block, means for performing scaling by multiplying the second residual signal by a scaling factor, means for calculating a difference between the first residual signal and the scaled second residual signal, and means for quantizing the difference.
  • a multi-layer video decoder including means for reconstructing a difference signal for a current layer block from an input bitstream, means for reconstructing a first residual signal for a lower layer block from the input bitstream, means for performing scaling by multiplying the first residual signal by a scaling factor, and means for adding the reconstructed difference signal and the scaled first residual signal together and reconstructing a second residual signal for the current layer block.
  • FIG. 1 is an exemplary diagram illustrating a conventional scalable video coding scheme;
  • FIG. 2 is an exemplary diagram illustrating a residual prediction process defined in a conventional SVC standard;
  • FIG. 3 illustrates a dynamic range for a residual signal of the residual prediction process of FIG. 2 that varies for each layer;
  • FIG. 4 illustrates a residual prediction process according to an exemplary embodiment of the present invention;
  • FIG. 5 illustrates an example of calculating a motion block representing parameter;
  • FIG. 6 is a diagram of a multi-layer video encoder according to an exemplary embodiment of the present invention;
  • FIG. 7 illustrates the structure of a bitstream generated by the video encoder of FIG. 6;
  • FIG. 8 is a diagram of a multi-layer video decoder according to an exemplary embodiment of the present invention; and
  • FIG. 9 is a diagram of a multi-layer video decoder according to another exemplary embodiment of the present invention.
  • FIG. 4 illustrates a residual prediction process according to an exemplary embodiment of the present invention.
  • In step S11, a predicted block P_B for a block O_B in a lower layer N-1 is generated using neighboring frames (hereinafter called 'reference frames').
  • When closed-loop coding is used, the reference frame is not an original input frame but an image reconstructed after quantization/inverse quantization.
  • In step S12, the predicted block P_B is subtracted from the block O_B to generate a residual block R_B.
  • In step S13, the residual block R_B is quantized and inversely quantized to obtain a reconstructed block R_B'.
  • a prime notation mark (') is used herein to denote that a block has been reconstructed after quantization/inverse quantization.
  • In step S14, a predicted block P_C for a block O_C in a current layer N is generated using neighboring reference frames. The reference frame is a reconstructed image obtained after quantization/inverse quantization.
  • In step S15, the predicted block P_C is subtracted from the block O_C to generate a residual block R_C.
  • In step S16, the quantization parameters QP_B,0 and QP_B,1 used in quantizing the lower layer reference frames and the quantization parameters QP_C,0 and QP_C,1 used in quantizing the current layer reference frames are used to obtain a scaling factor R_scale.
  • a difference in dynamic range occurs due to an image quality difference between a current layer reference frame and a lower layer reference frame. Thus, the difference in dynamic range can be represented as a function of the quantization parameters applied to the current layer reference frames and the lower layer reference frames.
  • a method for calculating a scaling factor according to an exemplary embodiment of the present invention will be described later in detail.
  • Throughout this specification, QP denotes a quantization parameter; subscripts B and C denote the lower and current layers, and subscripts 0 and 1 denote the indices of the forward and backward reference frames, respectively.
  • In step S17, the reconstructed residual R_B' obtained in the step S13 is multiplied by the scaling factor R_scale.
  • In step S18, the product (R_scale x R_B') is subtracted from the residual block R_C obtained in the step S15 to obtain the data R in the current layer to be quantized.
  • In step S19, the data R is quantized.
  • P_B, P_C, R_B, and R_C may have 16 x 16 pixels or any other macroblock size.
  • FIG. 5 illustrates an example of calculating a quantization parameter QP_n_x_suby that is representative of a 'motion block', which is the smallest unit for which a motion vector is obtained, based on a forward reference frame (a 'motion block representing parameter' or 'first representative value').
  • In H.264, the motion block may have a block size of 16 x 16, 16 x 8, 8 x 16, 8 x 8, 8 x 4, 4 x 8, or 4 x 4.
  • the method illustrated in FIG. 5 can also apply to a backward reference frame.
  • Subscripts n and x respectively denote an index of a layer and a reference list index that may have a value of 0 or 1 depending on the direction of prediction.
  • Subscripts sub and y respectively denote the abbreviation and index of a motion block.
  • a macroblock in a current frame contains at least one motion block.
  • assuming that the macroblock consists of four motion blocks (to be denoted by 'y' throughout the specification) having indices of 0 through 3, the four motion blocks are matched to regions in the forward reference frame by motion vectors obtained through motion estimation.
  • each motion block may overlap one, two, or four macroblocks in the forward reference frame.
  • the motion block having an index y of 0 overlaps four macroblocks in the forward reference frame.
  • the motion block having an index y of 3 in the figure also overlaps four macroblocks, whereas the motion block having an index y of 2 overlaps only two macroblocks in the forward reference frame, etc.
  • if QP_0, QP_1, QP_2, and QP_3 denote the quantization parameters for the four overlapped macroblocks, a motion block representing parameter QP_n_0_sub0 for the motion block 0 may be represented as a function g of the four quantization parameters.
  • In Equation (1), areaMB_y denotes the area of motion block y, areaOL_i denotes the area of the i-th overlapped part, and Z denotes the number of macroblocks in the reference frame that overlap the motion block.
  • After calculating the motion block representing parameters, a quantization parameter QP_n representative of a macroblock (a 'macroblock representing parameter' or 'second representative value') is calculated.
  • Various operations may be used in obtaining the macroblock representing parameter QP_n from the motion block representing parameters QP_n_x_suby.
  • In Equation (2), areaMB denotes the area of the macroblock, areaMB_y denotes the area of motion block y, X denotes the number of reference frames, and Y denotes the number of motion blocks in the macroblock with respect to a reference index list x.
  • In unidirectional (forward or backward) prediction, X is 1, while X is 2 in bidirectional prediction.
  • For the macroblock of FIG. 5, Y is 4 because the macroblock is segmented into four motion blocks.
  • After determining the macroblock representing parameter QP_n by Equation (2), a scaling factor is determined in order to compensate for the dynamic range difference between residual signals that occurs due to a difference between the quantization parameters for a current layer reference frame and a lower layer reference frame.
  • the same process of calculating motion block representing parameter and macroblock representing parameter applies to the lower layer.
  • a region in the lower layer corresponding to a macroblock in the current layer may be smaller than the macroblock in the current layer when the current layer has a higher resolution than the lower layer. This is because a residual signal in the lower layer must be upsampled for residual prediction.
  • QP_n-1 for the lower layer is obtained based on the region in the lower layer corresponding to the current layer macroblock and the motion blocks in that region.
  • QP_n-1 for the lower layer is regarded as a macroblock representing parameter because it is calculated using a region corresponding to a current macroblock, although the region does not have the same area as the macroblock.
  • When QP_n and QP_n-1 respectively denote the macroblock representing parameters for the current layer and the lower layer, a scaling factor R_scale can be defined by Equation (3) below:
  • In Equation (3), QS_n and QS_n-1 denote the quantization steps corresponding to the quantization parameters QP_n and QP_n-1.
  • a quantization step is a value actually applied during quantization while a quantization parameter is an integer index corresponding one-to-one to the quantization step.
  • QS_n and QS_n-1 are referred to as 'representative quantization steps'.
  • the representative quantization step can be interpreted as an estimated value of quantization step for a region on a reference frame corresponding to a block in each layer.
  • Because a typical quantization parameter has an integer value but QP_n and QP_n-1 have real values, QP_n and QP_n-1 should be converted into integer values if necessary.
  • QP_n and QP_n-1 may be rounded off, rounded up, or rounded down to the nearest integer.
  • the real-valued QP_n and QP_n-1 may also be used to interpolate QS_n and QS_n-1, respectively.
  • QS_n and QS_n-1 may then have real values interpolated using QP_n and QP_n-1.
  • In the above description, quantization parameters are used to calculate a motion block representing parameter and a macroblock representing parameter.
  • quantization steps may be directly applied instead of the quantization parameters.
  • In that case, the quantization parameters QP_0, QP_1, QP_2, and QP_3 shown in FIG. 5 are replaced with quantization steps QS_0, QS_1, QS_2, and QS_3.
  • the process of converting quantization parameters to quantization steps in Equation (3) may then be omitted.
  • FIG. 6 is a diagram of a multi-layer video encoder 1000 according to an exemplary embodiment of the present invention.
  • the multi-layer video encoder 1000 comprises an enhancement layer encoder 200 and a base layer encoder 100. The operation of the multi-layer video encoder 1000 will now be described with reference to FIG. 6.
  • the process of determining a motion vector and a macroblock pattern involves comparing pixels (subpixels) in a current block with pixels (subpixels) of a search area in a reference frame and determining a combination of motion vector and macroblock pattern with a minimum rate-distortion (R-D) cost.
  • the motion estimator 250 sends motion data, such as motion vectors obtained as a result of motion estimation, a motion block type, and a reference frame number, to an entropy coding unit 225.
  • the motion compensator 255 performs motion compensation on a reference frame using the motion vectors and generates a predicted block (P_C) corresponding to a current frame.
  • In bi-directional prediction, the predicted block (P_C) may be generated by averaging the regions corresponding to a motion block in two reference frames.
  • the subtractor 205 subtracts the predicted block (P_C) from a current macroblock and generates a residual signal (R_C).
  • a motion estimator 150 performs motion estimation on the macroblock of a base layer provided by the downsampler 160, and calculates a motion vector and macroblock pattern using a method similar to that described for the enhancement layer encoder 200.
  • a motion compensator 155 generates a predicted block (P_B) by motion compensation of a reference frame (a reconstructed frame) of the base layer using the calculated motion vector.
  • the subtractor 105 subtracts the predicted block (P_B) from the macroblock and generates a residual signal (R_B).
  • a spatial transformer 115 performs spatial transform on a frame in which temporal redundancy has been removed by the subtractor 105 to create transform coefficients.
  • a Discrete Cosine Transform (DCT) or a wavelet transform technique may be used for the spatial transform.
  • a DCT coefficient is created when DCT is used for the spatial transform while a wavelet coefficient is produced when wavelet transform is used.
  • a quantizer 120 performs quantization on the transform coefficients obtained by the spatial transformer 115 to create quantization coefficients.
  • quantization is a methodology for expressing a transform coefficient, which may have an arbitrary real value, with a finite number of bits.
  • Known quantization techniques include scalar quantization, vector quantization, and the like.
  • a simple scalar quantization technique is performed by dividing a transform coefficient by a value of a quantization table mapped to the coefficient and rounding the result to an integer value.
  • An entropy encoder 125 losslessly encodes the quantization coefficients generated by the quantizer 120 and a prediction mode selected by a motion estimator 150 into a base layer bitstream.
  • Various coding schemes such as Huffman Coding, Arithmetic Coding, and Variable Length Coding may be employed for lossless coding.
  • the inverse quantizer 130 performs inverse quantization on the coefficients quantized by the quantizer 120, and the inverse spatial transformer 135 performs inverse spatial transform on the inversely quantized result, which is then sent to the adder 140.
  • the adder 140 adds the predicted block (P_B') to the signal received from the inverse spatial transformer 135 (a reconstructed residual signal R_B'), thereby reconstructing a macroblock of the base layer.
  • the reconstructed macroblocks are combined into a frame or a slice, which is temporarily stored in a frame buffer 145.
  • the stored frame is provided to the motion estimator 150 and the motion compensator 155 to be used as a reference frame for other frames.
  • the reconstructed residual signal (R_B') provided from the inverse spatial transformer 135 is used for residual prediction.
  • When the base layer has a lower resolution than the enhancement layer, the residual signal (R_B') must first be upsampled by an upsampler 165.
  • a quantization step calculator 310 uses quantization parameters QP_B,0 and QP_B,1 for a base layer reference frame received from the quantizer 120 and motion vectors received from the motion estimator 150 to obtain a representative quantization step QS_n-1 using the Equations (1) and (2).
  • a quantization step calculator 320 uses quantization parameters QP_C,0 and QP_C,1 for an enhancement layer reference frame received from the quantizer 220 and motion vectors received from the motion estimator 250 to obtain a representative quantization step QS_n using the Equations (1) and (2). The quantization steps QS_n and QS_n-1 are sent to a scaling factor calculator 330, which divides QS_n by QS_n-1 in order to calculate a scaling factor R_scale.
  • a multiplier 340 multiplies the upsampled residual signal (R_B') by the scaling factor R_scale.
  • a subtractor 210 subtracts the product from the residual signal R_C output from the subtractor 205 to generate the final residual signal R.
  • the final residual signal R is referred to as a difference signal in order to distinguish it from the other residual signals R_B and R_C obtained by subtracting a predicted signal from an original signal.
  • the difference signal R is spatially transformed by a spatial transformer 215 and then the resulting transform coefficient is fed into the quantizer 220.
  • the quantizer 220 applies quantization to the transform coefficient. When the magnitude of the difference signal R is less than a threshold, the spatial transform will be skipped.
  • the entropy encoder 225 losslessly encodes the quantized results generated by the quantizer 220 and motion data provided by a motion estimator 250, and generates an output enhancement layer bitstream.
  • Because the remaining components of the enhancement layer encoder 200, such as the adder 240 and the frame buffer 245, operate in the same manner as the inverse quantizer 130, the inverse spatial transformer 135, the adder 140, and the frame buffer 145 of the base layer encoder 100 discussed previously, a repeated explanation thereof will not be given.
  • FIG. 7 illustrates the structure of a bitstream 50 generated by the video encoder 1000 of FIG. 6.
  • the bitstream 50 consists of a base layer bitstream 51 and an enhancement layer bitstream 52.
  • Each of the base layer bitstream 51 and the enhancement layer bitstream 52 contains a plurality of frames or slices 53 through 56.
  • a bitstream is encoded in slices rather than in frames. Each slice may have the same size as one frame or macroblock.
  • One slice 55 includes a slice header 60 and slice data 70 containing a plurality of macroblocks MB 71 through 74.
  • One macroblock 73 contains an mb_type field 81, a motion vector field 82, a quantization parameter (Q_para) field 84, and a coded residual field 85.
  • the macroblock 73 may further contain a scaling factor (R_scale) field 83.
  • the mb_type field 81 is used to indicate a value representing the type of the macroblock 73. That is, the mb_type field 81 specifies whether the current macroblock 73 is an intra macroblock, an inter macroblock, or an intra BL macroblock.
  • the motion vector field 82 indicates a reference frame number, the pattern of the macroblock 73, and motion vectors for motion blocks.
  • the quantization parameter (Q_para) field 84 is used to indicate a quantization parameter for the macroblock 73.
  • the coded residual field 85 specifies the result of quantization performed for the macroblock 73 by the quantizer 220, i.e., coded texture data.
  • the scaling factor field 83 indicates the scaling factor R_scale for the macroblock 73 calculated by the scaling factor calculator 330.
  • the macroblock 73 may selectively contain the scaling factor field 83 because a scaling factor can be calculated in a decoder like in an encoder.
  • When the scaling factor field 83 is included, the size of the bitstream 50 may increase, but the amount of computation required for decoding decreases.
  • FIG. 8 is a diagram of a multi-layer video decoder 2000 according to an exemplary embodiment of the present invention.
  • the video decoder 2000 comprises an enhancement layer decoder 500 and a base layer decoder 400.
  • the entropy decoding unit 510 performs lossless decoding, which is an inverse operation of entropy encoding, on an input enhancement layer bitstream 52 to extract motion data and texture data for the enhancement layer.
  • the entropy decoding unit 510 provides the motion data and the texture data to a motion compensator 570 and an inverse quantizer 520, respectively.
  • the inverse quantizer 520 performs inverse quantization on the texture data received from the entropy decoding unit 510.
  • For the inverse quantization, the quantization parameter (the same as that used in the encoder), which is included in the enhancement layer bitstream 52 of FIG. 7, is used.
  • An inverse spatial transformer 530 performs inverse spatial transform to the results of the inverse quantization.
  • the inverse spatial transform is performed corresponding to the spatial transform at the video encoder. For example, if a wavelet transform is used for spatial transform at the video encoder, the inverse spatial transformer 530 performs inverse wavelet transform. If DCT is used for spatial transform, the inverse spatial transformer 530 performs inverse DCT. After the inverse spatial transform, the difference signal R' at the encoder is reconstructed.
  • an entropy decoder 410 performs lossless decoding, which is an inverse operation of entropy encoding, on an input base layer bitstream 51 to extract motion data and texture data for the base layer.
  • the decoding of the motion data and the texture data is the same as at the enhancement layer decoder 500.
  • a residual signal (R_B') of the base layer is reconstructed through an inverse quantizer 420 and an inverse spatial transformer 430.
  • If the base layer has a lower resolution than the enhancement layer, the residual signal R_B' is subjected to upsampling by an upsampler 480.
  • a quantization step calculator 610 uses base layer motion vectors and quantization parameters QP_B,0 and QP_B,1 for a base layer reference frame received from the entropy decoder 410 to obtain a representative quantization step QS_n-1 using the Equations (1) and (2).
  • a quantization step calculator 620 uses enhancement layer motion vectors and quantization parameters QP_C,0 and QP_C,1 for an enhancement layer reference frame received from an entropy decoder 510 to obtain a representative quantization step QS_n using the Equations (1) and (2).
  • the quantization steps QS_n and QS_n-1 are sent to a scaling factor calculator 630, which divides QS_n by QS_n-1 in order to calculate a scaling factor R_scale.
  • a multiplier 640 multiplies the (upsampled) base layer residual signal R_B' by the scaling factor R_scale.
  • the adder 540 adds the difference signal R' output from the inverse spatial transformer 530 to the output of the multiplier 640, thereby reconstructing a residual signal R_C' of the enhancement layer, as illustrated in the sketch following this list.
  • the motion compensator 570 performs motion compensation on at least one reference frame using the motion data provided from the entropy decoding unit 510. After motion compensation, the generated predicted block (P_C) is provided to an adder 550.
  • the adder 550 adds R_C' and P_C together to reconstruct a current macroblock and then combines the macroblocks together to reconstruct an enhancement layer frame.
  • the reconstructed enhancement layer frame is temporarily stored in a frame buffer 560 before being provided to a motion compensator 570 or being externally output.
  • FIG. 9 is a diagram of a multi-layer video decoder 3000 according to another exemplary embodiment of the present invention. Unlike the video decoder 2000 of FIG. 8, the video decoder 3000 does not include the quantization step calculators 610 and 620 or the scaling factor calculator 630 required for obtaining a scaling factor. That is, a scaling factor R_scale for a current macroblock in an enhancement layer bitstream is delivered directly to a multiplier 640 for subsequent operation. The operation of the other blocks, however, is the same, and hence will not be described again. If the scaling factor R_scale is received directly from an encoder, the size of the received bitstream may increase, but the number of computations needed for decoding may be decreased to a certain extent. The video decoder 3000 may be suitably used for a device having low computation capability compared to its reception bandwidth.
  • In the above exemplary embodiments, the video encoder and the video decoder are each configured with two layers, a base layer and an enhancement layer.
  • this is only by way of an example, and the inventive concept may also be applied to three or more layers by those of ordinary skill in the art in light of the above teachings.
  • the various components described herein mean, but are not limited to, software or hardware components, such as Field Programmable Gate Arrays (FPGAs) or Application Specific Integrated Circuits (ASICs), which perform certain tasks.
  • the components may advantageously be configured to reside on various addressable storage media and configured to execute on one or more processors.
  • the functionality provided for in the components and modules may be combined into fewer components and modules or further separated into additional components and modules.
  • residual prediction is applied to reduce redundancy between layers in inter prediction.
  • the residual prediction can be applied to any type of prediction that involves generating a residual signal.
  • the residual prediction of the present invention can be applied between residual signals generated by intra prediction or between residual signals at different temporal positions in the same layer.
  • the inventive concept of exemplary embodiments of the present invention can efficiently remove residual signal energy during residual prediction by compensating for a dynamic range difference between residual signals that occurs due to a difference between quantization parameters for predicted signals in different layers.
  • the reduction in residual signal energy can decrease the amount of bits generated during quantization.
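As noted in the description of the adder 540 above, the decoder mirrors the encoder's scaling step. The sketch below is a minimal illustration only, under an assumed plain scalar-quantization model (reconstruction = level x step size) and same-resolution layers so that upsampling is omitted; the function name and interface are hypothetical, not from the document.

```python
import numpy as np

def decode_enhancement_block(diff_levels, base_residual_rec, P_C, qs_enh, r_scale):
    """diff_levels: quantization levels of the difference signal R;
    base_residual_rec: reconstructed base layer residual R_B';
    P_C: motion-compensated prediction for the current block."""
    R_diff = np.asarray(diff_levels) * qs_enh                   # inverse quantization of R
    R_C_rec = R_diff + r_scale * np.asarray(base_residual_rec)  # adder 540: residual R_C'
    return R_C_rec + np.asarray(P_C)                            # adder 550: reconstructed block
```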

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method and apparatus for enhancing the performance of residual prediction in a multi-layered video codec are provided. A residual prediction method includes calculating a first residual signal for a current layer block; calculating a second residual signal for a lower layer block corresponding to the current layer block; performing scaling by multiplying the second residual signal by a scaling factor; and calculating a difference between the first residual signal and the scaled second residual signal.

Description

METHOD FOR ENHANCING PERFORMANCE OF RESIDUAL PREDICTION AND VIDEO ENCODER AND DECODER USING THE SAME
Technical Field
[1] Methods and apparatuses consistent with the present invention relate to a video compression technique, and more particularly, to enhancing the performance of residual prediction in a multi-layered video codec.
Background Art
[2] With the development of information communication technology, including the
Internet, video communication as well as text and voice communication, has increased dramatically. Conventional text communication cannot satisfy users' various demands, and thus, multimedia services that can provide various types of information such as text, pictures, and music have increased. However, multimedia data requires storage media that have a large capacity and a wide bandwidth for transmission since the amount of multimedia data is usually large. Accordingly, a compression coding method is requisite for transmitting multimedia data including text, video, and audio.
[3] A basic principle of data compression is removing data redundancy. Data can be compressed by removing spatial redundancy, in which the same color or object is repeated in an image; temporal redundancy, in which there is little change between adjacent frames in a moving image or sounds are repeated in audio; or mental visual redundancy, which takes into account human eyesight and its limited perception of high frequency. In general video coding, temporal redundancy is removed by motion estimation and motion compensation, and spatial redundancy is removed by transform coding.
[4] To transmit multimedia generated after removing data redundancy, transmission media are used. Transmission performance is different depending on the transmission media. Transmission media, which are currently in use, have various transmission rates. For example, an ultrahigh-speed communication network can transmit data of several tens of megabits per second while a mobile communication network has a transmission rate of 384 kilobits per second. Accordingly, to support transmission media having various speeds or to transmit multimedia at a data rate suitable for a given transmission environment, data coding methods which have scalability, such as wavelet video coding and subband video coding, may be used.
[5] Scalability indicates a characteristic that enables a decoder or a pre-decoder to partially decode a single compressed bitstream according to various conditions such as a bit rate, an error rate, and system resources. A decoder or a pre-decoder can reconstruct a multimedia sequence having different picture quality, resolutions, or frame rates using only a portion of a bitstream that has been coded according to a method which has scalability.
[6] Moving Picture Experts Group-21 (MPEG-21) Part 13 standardization for scalable video coding is under way. In particular, much effort is being made to implement scalability based on a multi-layered structure. For example, a bitstream may consist of multiple layers, i.e., base layer and first and second enhanced layers with different resolutions, i.e. quarter common intermediate format (QCIF), common intermediate format (CIF), and twice common interchange/intermediate format (2CIF), or frame rates.
[7] FIG. 1 illustrates an example of a scalable video coding scheme using a multi- layered structure. In the scalable video coding scheme shown in FIG. 1, a base layer has a QCIF resolution and a frame rate of 15 Hz, a first enhanced layer has a CIF resolution and a frame rate of 30 Hz, and a second enhanced layer has a standard definition (SD) resolution and a frame rate of 60 Hz.
[8] Interlayer correlation may be used in encoding a multi-layer video frame. For example, a region 12 in a first enhancement layer video frame may be efficiently encoded using prediction from a corresponding region 13 in a base layer video frame. Similarly, a region 11 in a second enhancement layer video frame can be efficiently encoded using prediction from the region 12 in the first enhancement layer. When each layer of a multi-layer video has a different resolution, an image of the base layer needs to be upsampled before the prediction is performed.
[9] In a Scalable Video Coding (SVC) standard that is currently under development by
the Joint Video Team (JVT) of the International Organization for Standardization / International Electrotechnical Commission (ISO/IEC) and the International Telecommunication Union (ITU), research into multi-layer coding as illustrated in FIG. 1 based on the conventional H.264 has been actively conducted.
[10] The SVC standard using a multi-layer structure supports intra base layer (BL) prediction and residual prediction in addition to directional intra prediction and inter prediction used in the conventional H.264 to predict a block or macroblock in a current frame.
[11] The residual prediction involves predicting a residual signal in a current layer from a residual signal in a lower layer and quantizing only a signal corresponding to a difference between the predicted value and the actual value.
[12] FIG. 2 is an exemplary diagram illustrating a residual prediction process defined in the SVC standard.
[13] First, in step S1, a predicted block P_B for a block O_B in a lower layer N-1 is generated using neighboring frames. In step S2, the predicted block P_B is subtracted from the block O_B to generate a residual R_B. In step S3, the residual R_B is subjected to quantization/inverse quantization to generate a reconstructed residual R_B'.
[14] In step S4, a predicted block P_C for a block O_C in a current layer N is generated using neighboring frames. In step S5, the predicted block P_C is subtracted from the block O_C to generate a residual R_C.
[15] In step S6, the reconstructed residual R_B' is subtracted from the residual R_C obtained in the step S5, and in step S7, the subtraction result R obtained in the step S6 is quantized.
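For illustration only, the conventional process of paragraphs [13] through [15] can be sketched in code. The snippet below is a minimal NumPy rendering, not part of the standard: the helper names (quantize, dequantize) and the plain scalar-quantization model are assumptions made for this sketch, and the motion-compensated predictions P_B and P_C are taken as given.

```python
import numpy as np

def quantize(x, qs):
    # Plain scalar quantization: map each sample to an integer level.
    return np.round(x / qs)

def dequantize(level, qs):
    # Inverse quantization: recover an approximation of the input.
    return level * qs

def conventional_residual_prediction(O_B, P_B, O_C, P_C, qs_base, qs_enh):
    """Steps S1-S7 of FIG. 2 (predictions P_B, P_C assumed already generated)."""
    R_B = O_B - P_B                                        # S2: lower-layer residual
    R_B_rec = dequantize(quantize(R_B, qs_base), qs_base)  # S3: reconstructed R_B'
    R_C = O_C - P_C                                        # S5: current-layer residual
    R = R_C - R_B_rec                                      # S6: inter-layer difference
    return quantize(R, qs_enh)                             # S7: quantize the difference
```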
Disclosure of Invention
Technical Problem
[16] However, the conventional residual prediction process has a drawback in that residual signal energy is not sufficiently removed in the subtraction step of the residual prediction process, because the residual signal R_B has a different dynamic range (or error range) from the residual signal R_C when the quantization parameter for the reference frame used in generating the current layer predicted signal P_C is different from the quantization parameter for the reference frame used in generating the lower layer predicted signal P_B, as shown in FIG. 3.
[17] That is to say, although an original image signal in the current layer is similar to an original image signal in the lower layer, the predicted signals P_B and P_C for predicting the original image signals may vary according to the quantization parameters of the current layer and the lower layer. Accordingly, the variable residual signals R_B and R_C may not be sufficiently removed.
Technical Solution
[18] An aspect of the present invention is to provide a method for reducing a quantity of coded data by reducing residual signal energy in residual prediction used in a multi- layered video codec.
[19] Another aspect of the present invention is to provide an improved video encoder and video decoder employing the method.
[20] These and other aspects of the present invention will be described in or be apparent from the following description of exemplary embodiments of the invention.
[21] According to an exemplary embodiment of the present invention, there is provided a residual prediction method including calculating a first residual signal for a current layer block; calculating a second residual signal for a lower layer block corresponding to the current layer block, performing scaling by multiplying the second residual signal by a scaling factor, and calculating a difference between the first residual signal and the scaled second residual signal. [22] According to another exemplary embodiment of the present invention, there is provided a multi-layer video encoding method including calculating a first residual signal for a current layer block, calculating a second residual signal for a lower layer block corresponding to the current layer block, performing scaling by multiplying the second residual signal by a scaling factor, and calculating a difference between the first residual signal and the scaled second residual signal, and quantizing the difference.
[23] According to still another exemplary embodiment of the present invention, there is provided a method for generating a multi-layer video bitstream including generating a base layer bitstream and generating an enhancement layer bitstream, wherein the enhancement layer bitstream contains at least one macroblock and each macroblock comprises a field indicating a motion vector, a field specifying a coded residual, and a field indicating a scaling factor for the macroblock, and wherein the scaling factor is used to make a dynamic range of a residual signal for a base layer block substantially equal to a dynamic range of a residual signal for an enhancement layer block.
[24] According to yet another exemplary embodiment of the present invention, there is provided a multi-layer video decoding method including reconstructing a difference signal for a current layer block from an input bitstream, reconstructing a first residual signal for a lower layer block from the input bitstream, performing scaling by multiplying the first residual signal by a scaling factor, and adding the reconstructed difference signal and the scaled first residual signal together and reconstructing a second residual signal for the current layer block.
[25] According to a further exemplary embodiment of the present invention, there is provided a multi-layer video encoder including means for calculating a first residual signal for a current layer block, means for calculating a second residual signal for a lower layer block corresponding to the current layer block, means for performing scaling by multiplying the second residual signal by a scaling factor, means for calculating a difference between the first residual signal and the scaled second residual signal, and means for quantizing the difference.
[26] According to yet a further exemplary embodiment of the present invention, there is provided a multi-layer video decoder including means for reconstructing a difference signal for a current layer block from an input bitstream, means for reconstructing a first residual signal for a lower layer block from the input bitstream, means for performing scaling by multiplying the first residual signal by a scaling factor, and means for adding the reconstructed difference signal and the scaled first residual signal together and reconstructing a second residual signal for the current layer block.
Description of Drawings
[27] The above and other aspects of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings in which:
[28] FIG. 1 is an exemplary diagram illustrating a conventional scalable video coding (SVC) scheme using a multi-layer structure;
[29] FIG. 2 is an exemplary diagram illustrating a residual prediction process defined in a conventional SVC standard;
[30] FIG. 3 illustrates a dynamic range for a residual signal of the residual prediction process of FIG. 2 that varies for each layer;
[31] FIG. 4 illustrates a residual prediction process according to an exemplary embodiment of the present invention;
[32] FIG. 5 illustrates an example of calculating a motion block representing parameter;
[33] FIG. 6 is a diagram of a multi-layer video encoder according to an exemplary embodiment of the present invention;
[34] FIG. 7 illustrates the structure of a bitstream generated by the video encoder of FIG. 6;
[35] FIG. 8 is a diagram of a multi-layer video decoder according to an exemplary embodiment of the present invention; and
[36] FIG. 9 is a diagram of a multi-layer video decoder according to another exemplary embodiment of the present invention.
Mode for Invention
[37] Exemplary embodiments of the present invention will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. Various advantages and features of the present invention and methods of accomplishing the same may be understood more readily by reference to the following detailed description of exemplary embodiments and the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as being limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the invention to those skilled in the art, and the present invention will only be defined by the appended claims. Like reference numerals refer to like elements throughout the specification.
[38] FIG. 4 illustrates a residual prediction process according to an exemplary embodiment of the present invention.
[39] In step S11, a predicted block P_B for a block O_B in a lower layer N-1 is generated using neighboring frames (hereinafter called 'reference frames'). The predicted block P_B is generated using an image in the reference frame corresponding to the block O_B. When closed-loop coding is used, the reference frame is not an original input frame but an image reconstructed after quantization/inverse quantization. [40] There are forward prediction (from a temporally previous frame), backward prediction (from a temporally future frame), and bi-directional prediction, depending on the type of reference frame and the direction of prediction. While FIG. 4 shows the residual prediction process using bi-directional prediction, forward or backward prediction may be used. Typically, indices in forward prediction and backward prediction are represented by 0 and 1, respectively. [41] In step S12, the predicted block P_B is subtracted from the block O_B to generate a residual block R_B. In step S13, the residual block R_B is quantized and inversely quantized to obtain a reconstructed block R_B'. A prime notation mark (') is used herein to denote that a block has been reconstructed after quantization/inverse quantization. [42] In step S14, a predicted block P_C for a block O_C in a current layer N is generated using neighboring reference frames. The reference frame is a reconstructed image obtained after quantization/inverse quantization. In step S15, the predicted block P_C is subtracted from the block O_C to generate a residual block R_C. In step S16, the quantization parameters QP_B,0 and QP_B,1 used in quantizing the lower layer reference frames and the quantization parameters QP_C,0 and QP_C,1 used in quantizing the current layer reference frames are used to obtain a scaling factor R_scale. A difference in dynamic range occurs due to an image quality difference between a current layer reference frame and a lower layer reference frame. Thus, the difference in dynamic range can be represented as a function of the quantization parameters applied to the current layer reference frames and the lower layer reference frames. A method for calculating a scaling factor according to an exemplary embodiment of the present invention will be described later in detail.
[43] Throughout this specification, QP denotes a quantization parameter; subscripts B and C denote the lower and current layers, and subscripts 0 and 1 denote the indices of the forward and backward reference frames, respectively.
[44] In step S17, the reconstructed residual R_B' obtained in the step S13 is multiplied by the scaling factor R_scale. In step S18, the product (R_scale x R_B') is subtracted from the residual block R_C obtained in the step S15 to obtain the data R in the current layer to be quantized. Finally, in step S19, the data R is quantized. [45] P_B, P_C, R_B, and R_C may have 16 x 16 pixels or any other macroblock size.
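For comparison with the conventional sketch given after paragraph [15], the process of FIG. 4 differs only in that the reconstructed lower-layer residual is scaled by R_scale before the subtraction of step S18. A minimal sketch under the same illustrative assumptions (scalar quantization, predictions given), reusing the quantize/dequantize helpers defined above; the computation of r_scale itself follows Equation (3) below.

```python
def scaled_residual_prediction(O_B, P_B, O_C, P_C, qs_base, qs_enh, r_scale):
    """Steps S11-S19 of FIG. 4; r_scale compensates the dynamic-range gap."""
    R_B = O_B - P_B                                        # S12
    R_B_rec = dequantize(quantize(R_B, qs_base), qs_base)  # S13: R_B'
    R_C = O_C - P_C                                        # S15
    R = R_C - r_scale * R_B_rec                            # S17-S18: scaled subtraction
    return quantize(R, qs_enh)                             # S19
```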
[46] Hereinafter, calculating a scaling factor according to an exemplary embodiment of the present invention will be described in detail with reference to FIG. 5.
[47] As described above, two reference frames may be used for obtaining a predicted block in each layer. FIG. 5 illustrates an example of calculating a quantization parameter QP_n_x_suby that is representative of a 'motion block', which is the smallest unit for which a motion vector is obtained, based on a forward reference frame (a 'motion block representing parameter' or 'first representative value'). In H.264, the motion block may have a block size of 16 x 16, 16 x 8, 8 x 16, 8 x 8, 8 x 4, 4 x 8, or 4 x 4. [48] The method illustrated in FIG. 5 can also apply to a backward reference frame.
Subscripts n and x respectively denote an index of a layer and a reference list index that may have a value of 0 or 1 depending on the direction of prediction. Subscripts sub and y respectively denote the abbreviation for, and the index of, a motion block.
[49] A macroblock in a current frame contains at least one motion block. For example, assuming that the macroblock consists of four motion blocks (to be denoted by 'y' throughout the specification) having indices of 0 through 3, the four motion blocks are matched to regions in the forward reference frame by motion vectors obtained through motion estimation. In this case, each motion block may overlap one, two, or four macroblocks in the forward reference frame. For example, as illustrated in FIG. 5, the motion block having an index y of 0 overlaps four macroblocks in the forward reference frame. Similarly, the motion block having an index y of 3 in the figure also overlaps four macroblocks, whereas the motion block having an index y of 2 overlaps only two macroblocks in the forward reference frame, and so on.
[50] If QP_0, QP_1, QP_2, and QP_3 denote the quantization parameters for the four macroblocks, respectively, a motion block representing parameter QP_n_0_sub0 for the motion block 0 may be represented as a function g of the four quantization parameters QP_0, QP_1, QP_2, and QP_3.
[51] Various operations such as simple averaging, median, and area-weighted averaging may be used in obtaining the motion block representing parameter QP_n_0_sub0 from the four quantization parameters QP_0, QP_1, QP_2, and QP_3. Herein, area-weighted averaging is used by way of illustration. [52] The process of calculating the motion block representing parameter QP_n_x_suby through weighted averaging is represented by Equation (1) below:
$$QP_{n\_x\_suby} = \frac{1}{areaMB_y} \sum_{i=0}^{Z-1} \left( areaOL_i \times QP_i \right) \qquad (1)$$
[53] In Equation (1), areaMB_y denotes the area of motion block y, areaOL_i denotes the area of the i-th overlapped part, QP_i denotes the quantization parameter of the i-th overlapping macroblock, and Z denotes the number of macroblocks in the reference frame that overlap the motion block.
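As a concrete illustration, the area-weighted average of Equation (1) can be written as follows; representing the overlaps as a list of pairs is an assumption made for the sketch.

```python
def motion_block_qp(overlaps):
    """Equation (1): area-weighted average of the quantization parameters.

    overlaps -- list of (area_ol_i, qp_i) pairs, one per macroblock in the
                reference frame that the motion block overlaps
    """
    area_mb_y = sum(area for area, _ in overlaps)  # overlaps tile the block
    return sum(area * qp for area, qp in overlaps) / area_mb_y
```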
[54] After calculating the motion block representing parameter QP_n_x_suby as described above, a quantization parameter QP_n representative of a macroblock (a 'macroblock representing parameter' or 'second representative value') is calculated. Various operations may be used in obtaining the macroblock representing parameter QP_n from the values QP_n_x_suby for the plurality of motion blocks. Herein, area-weighted averaging is used by way of illustration. The macroblock representing parameter is defined by Equation (2) below:
$$QP_n = \frac{1}{X} \sum_{x=0}^{X-1} \frac{1}{areaMB} \sum_{y=0}^{Y-1} \left( areaMB_y \times QP_{n\_x\_suby} \right) \qquad (2)$$
[55] In Equation (2), areaMB denotes the area of the macroblock, areaMB_y denotes the area of motion block y, X denotes the number of reference frames, and Y denotes the number of motion blocks in the macroblock with respect to a reference index list x. In unidirectional prediction (forward or backward prediction), X is 1, while X is 2 in bidirectional prediction. For the macroblock shown in FIG. 5, Y is 4 in the forward prediction because the macroblock is segmented into four motion blocks.
[56] After determining the macroblock representing parameter QP_n as shown in Equation (2), a scaling factor is determined in order to compensate for the dynamic range difference between residual signals that occurs due to the difference between quantization parameters for a current layer reference frame and a lower layer reference frame.
[57] The same process of calculating the motion block representing parameter and the macroblock representing parameter applies to the lower layer. However, a region in the lower layer corresponding to a macroblock in the current layer may be smaller than the macroblock in the current layer when the current layer has a higher resolution than the lower layer. This is because a residual signal in the lower layer must be upsampled for residual prediction. Thus, QP_n-1 for the lower layer is obtained based on the region in the lower layer corresponding to the current layer macroblock and the motion blocks in that region. In this case, QP_n-1 is still regarded as a macroblock representing parameter because it is calculated using a region corresponding to a current macroblock, although the region does not have the same area as the macroblock.
[58] When QP_n and QP_n-1 respectively denote the macroblock representing parameters for the current layer and the lower layer, a scaling factor R_scale can be defined by Equation (3) below:
$$R_{scale} = \frac{QS_n}{QS_{n-1}} \qquad (3)$$
[59] In Equation (3), QS_n and QS_n-1 denote the quantization steps corresponding to the quantization parameters QP_n and QP_n-1.
[60] A quantization step is the value actually applied during quantization, while a quantization parameter is an integer index corresponding one-to-one to the quantization step. QS_n and QS_n-1 are referred to as 'representative quantization steps'. A representative quantization step can be interpreted as an estimated value of the quantization step for the region on a reference frame corresponding to a block in each layer. [61] Because a typical quantization parameter has an integer value but QP_n and QP_n-1 have real values, QP_n and QP_n-1 should be converted into integer values if necessary.
For the conversion, QP_n and QP_n-1 may be rounded off, rounded up, or rounded down to the nearest integer. The real-valued QP_n and QP_n-1 may also be used to interpolate QS_n and QS_n-1, respectively. In this case, QS_n and QS_n-1 may have real values interpolated using QP_n and QP_n-1.
[62] As shown in Equations (1) through (3), quantization parameters are used to calculate the motion block representing parameter and the macroblock representing parameter. Alternatively, quantization steps may be applied directly instead of the quantization parameters. In this case, the quantization parameters QP_0, QP_1, QP_2, and QP_3 shown in FIG. 5 are replaced with quantization steps QS_0, QS_1, QS_2, and QS_3, and the process of converting quantization parameters to quantization steps in Equation (3) may be omitted.
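Putting Equations (2) and (3) together, the scaling factor can be computed as sketched below. The QP-to-step mapping is an assumption: an H.264-style rule in which the quantization step roughly doubles for every increase of 6 in QP is used for illustration (real codecs use a table), and applying it to a real-valued QP amounts to the interpolation mentioned above.

```python
def macroblock_qp(per_list_stats, area_mb):
    """Equation (2): average over reference lists of the area-weighted
    average of the motion block representing parameters.

    per_list_stats -- one entry per reference list x (X = 1 or 2); each
                      entry is a list of (area_mb_y, qp_n_x_suby) pairs
    area_mb        -- total area of the macroblock (e.g. 256 for 16x16)
    """
    return sum(
        sum(area * qp for area, qp in stats) / area_mb
        for stats in per_list_stats
    ) / len(per_list_stats)

def qp_to_qstep(qp):
    """Illustrative H.264-style mapping: the step doubles every 6 QP.
    The closed form accepts a real-valued QP, which effectively
    interpolates the quantization step."""
    return 0.625 * 2.0 ** (qp / 6.0)

def scaling_factor(qp_n, qp_n_minus_1):
    """Equation (3): ratio of the representative quantization steps."""
    return qp_to_qstep(qp_n) / qp_to_qstep(qp_n_minus_1)
```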
[63] FIG. 6 is a diagram of a multi-layer video encoder 1000 according to an exemplary embodiment of the present invention. Referring to FIG. 6, the multi-layer video encoder 1000 comprises an enhancement layer encoder 200 and a base layer encoder 100. The operation of the multi-layer video encoder 1000 will now be described with reference to FIG. 6.
[64] Using the enhancement layer encoder 200 as a starting point, a motion estimator
250 performs motion estimation on a current frame using a reconstructed reference frame to obtain motion vectors. At this time, not only the motion vectors but also a macroblock pattern representing types of motion blocks forming a macroblock can be determined. The process of determining a motion vector and a macroblock pattern involves comparing pixels (subpixels) in a current block with pixels (subpixels) of a search area in a reference frame and determining a combination of motion vector and macroblock pattern with a minimum rate-distortion (R-D) cost.
[65] The motion estimator 250 sends motion data such as motion vectors obtained as a result of motion estimation, a motion block type, and a reference frame number to an entropy coding unit 225.
[66] The motion compensator 255 performs motion compensation on a reference frame using the motion vectors and generates a predicted block (P_C) corresponding to the current frame. When bi-directional references are used, the predicted block (P_C) may be generated by averaging the regions corresponding to a motion block in the two reference frames.
[67] The subtractor 205 subtracts the predicted block (P_C) from the current macroblock to generate a residual signal (R_C).
[68] Meanwhile, in the base layer encoder 100, a motion estimator 150 performs motion estimation on the macroblock of a base layer frame provided by the downsampler 160, and calculates a motion vector and macroblock pattern using a method similar to that described with reference to the enhancement layer encoder 200. A motion compensator 155 generates a predicted block (P_B) by motion-compensating the reference frame (a reconstructed frame) of the base layer using the calculated motion vector. [69] The subtractor 105 subtracts the predicted block (P_B) from the macroblock to generate a residual signal (R_B).
[70] A spatial transformer 115 performs spatial transform on a frame in which temporal redundancy has been removed by the subtractor 105 to create transform coefficients. A Discrete Cosine Transform (DCT) or a wavelet transform technique may be used for the spatial transform. A DCT coefficient is created when DCT is used for the spatial transform while a wavelet coefficient is produced when wavelet transform is used.
[71] A quantizer 120 performs quantization on the transform coefficients obtained by the spatial transformer 115 to create quantization coefficients. Here, quantization is a process of representing transform coefficients, which are arbitrary real numbers, with a finite number of bits. Known quantization techniques include scalar quantization, vector quantization, and the like. A simple scalar quantization technique divides a transform coefficient by the value of a quantization table mapped to the coefficient and rounds the result to an integer value.
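A minimal sketch of such scalar quantization follows; it ignores the dead zones and rate-distortion-optimized rounding used in practical encoders.

```python
def scalar_quantize(coeff, qstep):
    """Uniform scalar quantization: divide by the step, round to an integer."""
    return round(coeff / qstep)

def scalar_dequantize(level, qstep):
    """Inverse quantization: scale the integer level back to a coefficient."""
    return level * qstep
```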
[72] An entropy encoder 125 losslessly encodes the quantization coefficients generated by the quantizer 120 and a prediction mode selected by a motion estimator 150 into a base layer bitstream. Various coding schemes such as Huffman Coding, Arithmetic Coding, and Variable Length Coding may be employed for lossless coding.
[73] The inverse quantizer 130 performs inverse quantization on the coefficients quantized by the quantizer 120, and the inverse spatial transformer 135 performs an inverse spatial transform on the inversely quantized result, which is then sent to the adder 140.
[74] The adder 140 adds the predicted block (P_B') to the signal received from the inverse spatial transformer 135 (a reconstructed residual signal R_B'), thereby reconstructing a macroblock of the base layer. The reconstructed macroblocks are combined to form a frame or a slice, which is temporarily stored in a frame buffer 145. The stored frame is provided to the motion estimator 150 and the motion compensator 155 to be used as a reference frame for other frames.
[75] The reconstructed residual signal (R_B') provided by the inverse spatial transformer 135 is used for residual prediction. When the base layer has a different resolution than the enhancement layer, the residual signal (R_B') must first be upsampled by an upsampler 165.
[76] A quantization step calculation unit 310 uses the quantization parameters QP_B0 and QP_B1 for a base layer reference frame received from the quantizer 120 and the motion vectors received from the motion estimator 150 to obtain a representative quantization step QS_0 using Equations (1) and (2). Similarly, a quantization step calculator 320 uses the quantization parameters QP_C0 and QP_C1 for an enhancement layer reference frame received from a quantizer 220 and the motion vectors received from the motion estimator 250 to obtain a representative quantization step QS_1 using Equations (1) and (2).
[77] The quantization steps QS_0 and QS_1 are sent to a scaling factor calculator 330, which divides QS_1 by QS_0 to calculate a scaling factor R_scale. A multiplier 340 multiplies the scaling factor R_scale by U(R_B'), the (upsampled) reconstructed residual provided by the base layer encoder 100.
[78] A subtractor 210 subtracts the product from the residual signal R_C output from the subtractor 205 to generate the final residual signal R. Hereinafter, the final residual signal R is referred to as a difference signal in order to distinguish it from the other residual signals R_C and R_B, which are obtained by subtracting a predicted signal from an original signal.
[79] The difference signal R is spatially transformed by a spatial transformer 215 and then the resulting transform coefficient is fed into the quantizer 220. The quantizer 220 applies quantization to the transform coefficient. When the magnitude of the difference signal R is less than a threshold, the spatial transform will be skipped.
[80] The entropy encoder 225 losslessly encodes the quantized results generated by the quantizer 220 and motion data provided by a motion estimator 250, and generates an output enhancement layer bitstream.
[81] Since the operations of the inverse quantizer 230, the inverse spatial transformer
235, the adder 240 and the frame buffer 245 of the enhancement layer encoder 200 are the same as the inverse quantizer 130, the inverse spatial transformer 135, the adder 140 and the frame buffer 145 of the base layer encoder 100 discussed previously, a repeated explanation thereof will not be given.
[82] FIG. 7 illustrates the structure of a bitstream 50 generated by the video encoder
1000. The bitstream 50 consists of a base layer bitstream 51 and an enhancement layer bitstream 52. Each of the base layer bitstream 51 and the enhancement layer bitstream 52 contains a plurality of frames or slices 53 through 56. In general, in the H.264 or Scalable Video Coding (SVC) coding standard, a bitstream is encoded in slices rather than in frames. Each slice may have the same size as one frame or macroblock.
[83] One slice 55 includes a slice header 60 and slice data 70 containing a plurality of macroblocks MB 71 through 74.
[84] One macroblock 73 contains an mb_type field 81, a motion vector field 82, a quantization parameter (Q_para) field 84, and a coded residual field 85. The macroblock 73 may further contain a scaling factor field R_scale 83.
[85] The mb_type field 81 indicates a value representing the type of the macroblock 73, i.e., whether the current macroblock 73 is an intra macroblock, an inter macroblock, or an intra BL macroblock. The motion vector field 82 indicates a reference frame number, the pattern of the macroblock 73, and the motion vectors for its motion blocks. The quantization parameter (Q_para) field 84 indicates a quantization parameter for the macroblock 73. The coded residual field 85 specifies the result of the quantization performed on the macroblock 73 by the quantizer 220, i.e., the coded texture data. [86] The scaling factor field 83 indicates the scaling factor R_scale for the macroblock 73 calculated by the scaling factor calculator 330. The macroblock 73 may contain the scaling factor field 83 selectively, because a scaling factor can also be calculated in the decoder in the same way as in the encoder. When the macroblock 73 contains the scaling factor field 83, the size of the bitstream 50 may increase, but the amount of computation required for decoding decreases.
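For illustration only, the macroblock-level syntax described above can be modeled as the following container; the field names are assumptions made for the sketch, not the normative bitstream syntax.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Macroblock:
    """Enhancement layer macroblock fields as described for FIG. 7."""
    mb_type: int                            # field 81: intra / inter / intra BL
    motion_vectors: List[Tuple[int, int]]   # field 82: one vector per motion block
    r_scale: Optional[float]                # field 83: optional scaling factor
    q_para: int                             # field 84: quantization parameter
    coded_residual: bytes                   # field 85: coded texture data
```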
[87] FIG. 8 is a diagram of a multi-layer video decoder 2000 according to an exemplary embodiment of the present invention. Referring to FIG. 8, the video decoder 2000 comprises an enhancement layer decoder 500 and a base layer decoder 400.
[88] Using the enhancement layer decoder 500 as a starting point, an entropy decoder 510 performs lossless decoding, the inverse of entropy encoding, on an input enhancement layer bitstream 52 to extract motion data and texture data for the enhancement layer. The entropy decoder 510 provides the motion data to a motion compensator 570 and the texture data to an inverse quantizer 520.
[89] The inverse quantizer 520 performs inverse quantization on the texture data received from the entropy decoder 510, using the quantization parameter included in the enhancement layer bitstream 52 of FIG. 7 (the same parameter used in the encoder).
[90] An inverse spatial transformer 530 performs an inverse spatial transform on the result of the inverse quantization. The inverse spatial transform corresponds to the spatial transform performed at the video encoder: if a wavelet transform was used for the spatial transform at the encoder, the inverse spatial transformer 530 performs an inverse wavelet transform; if DCT was used, it performs an inverse DCT. After the inverse spatial transform, the difference signal R' generated at the encoder is reconstructed.
[91] Meanwhile, an entropy decoder 410 performs lossless decoding, the inverse of entropy encoding, on an input base layer bitstream 51 to extract motion data and texture data for the base layer. The texture data are processed in the same manner as in the enhancement layer decoder 500: a residual signal (R_B') of the base layer is reconstructed through an inverse quantizer 420 and an inverse spatial transformer 430.
[92] If the base layer has a lower resolution than the enhancement layer, the residual signal R_B' is upsampled by an upsampler 480.
[93] A quantization step calculator 610 uses the base layer motion vectors and the quantization parameters QP_B0 and QP_B1 for a base layer reference frame received from the entropy decoder 410 to obtain a representative quantization step QS_0 using Equations (1) and (2). Similarly, a quantization step calculator 620 uses the enhancement layer motion vectors and the quantization parameters QP_C0 and QP_C1 for an enhancement layer reference frame received from the entropy decoder 510 to obtain a representative quantization step QS_1 using Equations (1) and (2).
[94] The quantization steps QS_0 and QS_1 are sent to a scaling factor calculator 630, which divides QS_1 by QS_0 to calculate a scaling factor R_scale. A multiplier 640 multiplies the scaling factor R_scale by U(R_B'), the (upsampled) reconstructed residual provided by the base layer decoder 400.
[95] The adder 540 adds the difference signal R' output from the inverse spatial transformer 530 to the output of the multiplier 640, thereby reconstructing a residual signal R_C' of the enhancement layer.
[96] The motion compensator 570 performs motion compensation on at least one reference frame using the motion data provided by the entropy decoder 510. The resulting predicted block (P_C) is provided to an adder 550.
[97] The adder 550 adds R_C' and P_C together to reconstruct the current macroblock, and the reconstructed macroblocks are combined to reconstruct an enhancement layer frame. The reconstructed enhancement layer frame is temporarily stored in a frame buffer 560 before being provided to the motion compensator 570 or being output externally.
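The enhancement layer reconstruction path (adders 540 and 550) reduces to two additions, sketched below under the same array assumptions as before.

```python
def reconstruct_macroblock(r_diff, r_b_rec, r_scale, p_c):
    """Sketch of the FIG. 8 reconstruction path.

    r_diff  -- reconstructed difference signal R' from the bitstream
    r_b_rec -- reconstructed (and, if needed, upsampled) base layer
               residual U(R_B')
    r_scale -- scaling factor, computed from Equations (1)-(3) or read
               directly from the bitstream
    p_c     -- motion-compensated prediction P_C
    """
    r_c = r_diff + r_scale * r_b_rec   # adder 540: enhancement residual R_C'
    return p_c + r_c                   # adder 550: reconstructed macroblock
```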
[98] Since the operations of the adder 450, the motion compensator 470, and the frame buffer 460 of the base layer decoder 400 are the same as those of the adder 550, the motion compensator 570, and the frame buffer 560 of the enhancement layer decoder 500, a repeated explanation thereof will not be given.
[99] FIG. 9 is a diagram of a multi-layer video decoder 3000 according to another exemplary embodiment of the present invention. Unlike the video decoder 2000 of FIG. 8, the video decoder 3000 does not include the quantization step calculators 610 and 620 or the scaling factor calculator 630 required for obtaining a scaling factor. Instead, the scaling factor R_scale for a current macroblock, carried in the enhancement layer bitstream, is delivered directly to the multiplier 640 for the subsequent operation. The operation of the other blocks is the same, and hence will not be described again.
[100] When the scaling factor R_scale is received directly from the encoder, the size of the received bitstream may increase, but the number of computations needed for decoding decreases to a certain extent. The video decoder 3000 may therefore be suitable for a device having low computational capability relative to its reception bandwidth.
[101] In the foregoing description, the video encoder and the video decoder have each been described as consisting of two layers, a base layer and an enhancement layer. However, this is only an example, and the inventive concept may also be applied to three or more layers by those of ordinary skill in the art in light of the above teachings.
[102] In FIGS. 6, 8, and 9, various components mean, but are not limited to, software or hardware components, such as Field Programmable Gate Arrays (FPGAs) or Ap- plication Specific Integrated Circuits (ASICs), which perform certain tasks. The components may advantageously be configured to reside on various addressable storage media and configured to execute on one or more processors. The functionality provided for in the components and modules may be combined into fewer components and modules or further separated into additional components and modules.
[103] In the foregoing description, residual prediction according to exemplary embodiments of the present invention is applied to reduce redundancy between layers in inter prediction. However, the residual prediction can be applied to any type of prediction that involves generating a residual signal. To give a non-limiting example, the residual prediction of the present invention can be applied between residual signals generated by intra prediction or between residual signals at different temporal positions in the same layer.
Industrial Applicability
[104] The inventive concept of exemplary embodiments of the present invention can efficiently remove residual signal energy during residual prediction by compensating for a dynamic range difference between residual signals that occurs due to a difference between quantization parameters for predicted signals in different layers. The reduction in residual signal energy can decrease the amount of bits generated during quantization.
[105] While the present invention has been particularly shown and described with reference to certain exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present inventive concept as defined by the following claims. Therefore, it is to be understood that the above-described exemplary embodiments have been provided only in a descriptive sense and will not be construed as placing any limitation on the scope of the invention.

Claims
[1] A residual prediction method comprising: calculating a first residual signal; calculating a second residual signal; performing scaling by multiplying the second residual signal by a scaling factor; and calculating a difference between the first residual signal and the scaled second residual signal.
[2] The residual prediction method of claim 1, wherein the first residual signal is for a current layer block, and the second residual signal is for a lower layer block corresponding to the current layer block.
[3] The residual prediction method of claim 2, further comprising upsampling the second residual signal, wherein in the performing of the scaling, the second residual signal is the upsampled second residual signal.
[4] The residual prediction method of claim 2, wherein the current layer block is a macroblock.
[5] The residual prediction method of claim 2, wherein the calculating of the first residual signal for the current layer block comprises: generating a predicted block for the current layer block using a current layer reference frame; and subtracting the predicted block from the current layer block.
[6] The residual prediction method of claim 5, wherein the current layer reference frame is one of a forward reference frame, a backward reference frame, and a bidirectional reference frame.
[7] The residual prediction method of claim 5, wherein the current layer reference frame is generated after quantization and inverse quantization.
[8] The residual prediction method of claim 2, wherein the calculating of the second residual signal for the lower layer block comprises: generating a predicted block for the lower layer block using a lower layer reference frame; subtracting the predicted block from the lower layer block; and quantizing and inversely quantizing the result of the subtraction.
[9] The residual prediction method of claim 8, wherein the lower layer reference frame is generated after quantization and inverse quantization.
[10] The residual prediction method of claim 2, wherein in the performing of scaling, the scaling factor is obtained by calculating a first representative quantization step for the current layer block, calculating a second representative quantization step for the lower layer block, and dividing the first representative quantization step by the second representative quantization step, wherein the first and second representative quantization steps are estimated values of quantization steps for regions on reference frames corresponding to the current layer block and the lower layer block.
[11] The residual prediction method of claim 10, wherein the first and second representative quantization steps are obtained by calculating a first representative value from quantization parameters for macroblocks in a reference frame overlapping a certain motion block in the current layer block, calculating a second representative value for the current layer block from the first representative value, and converting the second representative value into a corresponding representative quantization step.
[12] The residual prediction method of claim 11, wherein the calculating of the first representative value comprises calculating an average of the quantization parameters by weighting the overlapped areas of the macroblocks.
[13] The residual prediction method of claim 11, wherein the calculating of the second representative value comprises calculating an average of the first representative values by weighting a size of the motion block.
[14] The residual prediction method of claim 10, wherein the first and second representative quantization steps are obtained by calculating a first representative value from quantization steps for macroblocks in a reference frame overlapping a certain motion block in the current layer block, and calculating a second representative value for the current layer block from the first representative values.
[15] A multi-layer video encoding method comprising: calculating a first residual signal; calculating a second residual signal; performing scaling by multiplying the second residual signal by a scaling factor; calculating a difference between the first residual signal and the scaled second residual signal; and quantizing the difference.
[16] The multi-layer video encoding method of claim 15, wherein the first residual signal is for a current layer block, and the second residual signal is for a lower layer block corresponding to the current layer block.
[17] The multi-layer video encoding method of claim 16, further comprising performing spatial transform on the difference before the quantizing of the difference.
[18] The multi-layer video encoding method of claim 16, further comprising upsampling the second residual signal, wherein the second residual signal of the performing of the scaling is the upsampled second residual signal.
[19] The multi-layer video encoding method of claim 16, wherein the calculating of the first residual signal for the current layer block comprises: generating a predicted block for the current layer block using a current layer reference frame; and subtracting the predicted block from the current layer block.
[20] The multi-layer video encoding method of claim 16, wherein the calculating of the second residual signal for the lower layer block comprises: generating a predicted block for the lower layer block using a lower layer reference frame; subtracting the predicted block from the lower layer block; and quantizing and inversely quantizing the result of the subtraction.
[21] The multi-layer video encoding method of claim 16, wherein in the performing of scaling, the scaling factor is obtained by calculating a first representative quantization step for the current layer block, calculating a second representative quantization step for the lower layer block, and dividing the first representative quantization step by the second representative quantization step, wherein the first and second representative quantization steps are estimated values of quantization steps for regions on reference frames corresponding to the current layer block and the lower layer block.
[22] The multi-layer video encoding method of claim 21, wherein the calculating of the first and second representative quantization steps comprises: calculating a first representative value from quantization parameters for macroblocks in a reference frame overlapping a certain motion block in the current layer block; calculating a second representative value for the current layer block from the first representative value; and converting the second representative value into a corresponding representative quantization step.
[23] The multi-layer video encoding method of claim 16, wherein the first and second representative quantization steps are obtained by calculating a first representative value from quantization steps for macroblocks in a reference frame overlapping a certain motion block in the current layer block, and calculating a second representative value for the current layer block from the first representative values.
[24] A method for generating a multi-layer video bitstream including generating a base layer bitstream and generating an enhancement layer bitstream, wherein the enhancement layer bitstream contains at least one macroblock and each macroblock comprises a field indicating a motion vector, a field specifying a coded residual, and a field indicating a scaling factor for the macroblock, and wherein the scaling factor is used to make a dynamic range of a residual signal for a base layer block substantially equal to a dynamic range of a residual signal for an enhancement layer block.
[25] The method of claim 24, wherein the macroblock further includes a quantization parameter for the macroblock.
[26] The method of claim 24, wherein the enhancement layer bitstream consists of a plurality of slices and each slice contains at least one macroblock.
[27] A multi-layer video decoding method comprising: reconstructing a difference signal from an input bitstream; reconstructing a first residual signal from the input bitstream; performing scaling by multiplying the first residual signal by a scaling factor; and adding the reconstructed difference signal and the scaled first residual signal together and reconstructing a second residual signal.
[28] The multi-layer video decoding method of claim 27, wherein the difference signal is for a current layer block, the first residual signal is for a lower layer block, and the second residual signal is for the current layer block.
[29] The multi-layer video decoding method of claim 28, further comprising adding together a predicted block for the current layer block, the result of addition, and the second residual signal.
[30] The multi-layer video decoding method of claim 28, further comprising upsampling the first residual signal, wherein in the performing of the scaling, the first residual signal is the upsampled first residual signal.
[31] The multi-layer video decoding method of claim 28, wherein the reconstructing of the difference signal and the reconstructing of the first residual signal comprise inverse quantization and an inverse spatial transform.
[32] The multi-layer video decoding method of claim 28, wherein the current layer block is a macroblock.
[33] The multi-layer video decoding method of claim 28, wherein the bitstream contains the scaling factor.
[34] The multi-layer video decoding method of claim 28, wherein in the performing of scaling, the scaling factor is obtained by calculating a first representative quantization step for the current layer block, calculating a second representative quantization step for the lower layer block, and dividing the first representative quantization step by the second representative quantization step, wherein the first and second representative quantization steps are estimated values of quantization steps for regions on reference frames corresponding to the current layer block and the lower layer block.
[35] The multi-layer video decoding method of claim 34, wherein the first and second representative quantization steps are obtained by calculating a first representative value from quantization parameters for macroblocks in a reference frame overlapping a certain motion block in the current layer block, calculating a second representative value for the current layer block from the first representative value, and converting the second representative value into a corresponding representative quantization step.
[36] The multi-layer video decoding method of claim 35, wherein the calculating of the first representative value comprises calculating an average of the quantization parameters by weighting the overlapped areas of the macroblocks.
[37] The multi-layer video decoding method of claim 35, wherein the calculating of the second representative value comprises calculating an average of the first representative values by weighting a size of the motion block.
[38] The multi-layer video decoding method of claim 34, wherein the first and second representative quantization steps are obtained by calculating a first representative value from quantization steps for macroblocks in a reference frame overlapping a predetermined motion block in the current layer block, and calculating a second representative value for the current layer block from the first representative values.
[39] A multi-layer video encoder comprising: means for calculating a first residual signal; means for calculating a second residual signal; means for performing scaling by multiplying the second residual signal by a scaling factor; means for calculating a difference between the first residual signal and the scaled second residual signal; and means for quantizing the difference.
[40] The multi-layer video encoder of claim 39, wherein the first residual signal is for a current layer block, and the second residual signal is for a lower layer block corresponding to the current layer block.
[41] A multi-layer video decoder comprising: means for reconstructing a difference signal from an input bitstream; means for reconstructing a first residual signal from the input bitstream; means for performing scaling by multiplying the first residual signal by a scaling factor; and means for adding the reconstructed difference signal and the scaled first residual signal together and reconstructing a second residual signal.
[42] The multi-layer video decoder of claim 41, wherein the difference signal is for a current layer block, the first residual signal is for a lower layer block, and the second residual signal is for the current layer block.
PCT/KR2006/003328 2005-08-24 2006-08-24 Method for enhancing performance of residual prediction and video encoder and decoder using the same WO2007024106A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US71061305P 2005-08-24 2005-08-24
US60/710,613 2005-08-24
KR1020050119785A KR100746011B1 (en) 2005-08-24 2005-12-08 Method for enhancing performance of residual prediction, video encoder, and video decoder using it
KR10-2005-0119785 2005-12-08

Publications (1)

Publication Number Publication Date
WO2007024106A1 true WO2007024106A1 (en) 2007-03-01

Family

ID=37771808

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2006/003328 WO2007024106A1 (en) 2005-08-24 2006-08-24 Method for enhancing performance of residual prediction and video encoder and decoder using the same

Country Status (1)

Country Link
WO (1) WO2007024106A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5973739A (en) * 1992-03-27 1999-10-26 British Telecommunications Public Limited Company Layered video coder
US6510177B1 (en) * 2000-03-24 2003-01-21 Microsoft Corporation System and method for layered video coding enhancement
US20020064227A1 (en) * 2000-10-11 2002-05-30 Philips Electronics North America Corporation Method and apparatus for decoding spatially scaled fine granular encoded video signals

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OHM J.-R.: "Advances in scalable video coding", PROC. OF THE IEEE, vol. 93, no. 1, January 2005 (2005-01-01), pages 42 - 56, XP011123852 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010101420A3 (en) * 2009-03-03 2010-12-09 Samsung Electronics Co., Ltd. Apparatus and method for encoding and decoding multilayer videos
CN102342105A (en) * 2009-03-03 2012-02-01 三星电子株式会社 Apparatus and method for encoding and decoding multilayer videos
US9106928B2 (en) 2009-03-03 2015-08-11 Samsung Electronics Co., Ltd. Apparatus and method for encoding and decoding multilayer videos
CN102342105B (en) * 2009-03-03 2015-08-19 三星电子株式会社 For carrying out the Apparatus and method for of Code And Decode to multi-layer video
CN108632612A (en) * 2018-05-23 2018-10-09 西安万像电子科技有限公司 The method and apparatus for encoding compression
CN108632612B (en) * 2018-05-23 2021-10-08 上海京像微电子有限公司 Method and apparatus for code compression
WO2021012942A1 (en) * 2019-07-22 2021-01-28 中兴通讯股份有限公司 Residual coding method and device, residual decoding method and device, storage medium, and electronic device

Similar Documents

Publication Publication Date Title
US20070047644A1 (en) Method for enhancing performance of residual prediction and video encoder and decoder using the same
KR100703778B1 (en) Method and apparatus for coding video supporting fast FGS
US8817872B2 (en) Method and apparatus for encoding/decoding multi-layer video using weighted prediction
KR100703788B1 (en) Video encoding method, video decoding method, video encoder, and video decoder, which use smoothing prediction
KR100714696B1 (en) Method and apparatus for coding video using weighted prediction based on multi-layer
JP4891234B2 (en) Scalable video coding using grid motion estimation / compensation
KR101033548B1 (en) Video encoding method, video decoding method, video encoder, and video decoder, which use smoothing prediction
US20060120448A1 (en) Method and apparatus for encoding/decoding multi-layer video using DCT upsampling
KR100781525B1 (en) Method and apparatus for encoding and decoding FGS layers using weighting factor
KR100763179B1 (en) Method for compressing/Reconstructing motion vector of unsynchronized picture and apparatus thereof
US20060104354A1 (en) Multi-layered intra-prediction method and video coding method and apparatus using the same
KR20060135992A (en) Method and apparatus for coding video using weighted prediction based on multi-layer
JP2005507587A (en) Spatial scalable compression
KR20060085148A (en) Method for multi-layer based scalable video coding and decoding, and apparatus for the same
RU2340115C1 (en) Method of coding video signals, supporting fast algorithm of precise scalability on quality
EP1878252A1 (en) Method and apparatus for encoding/decoding multi-layer video using weighted prediction
WO2007024106A1 (en) Method for enhancing performance of residual prediction and video encoder and decoder using the same
EP1889487A1 (en) Multilayer-based video encoding method, decoding method, video encoder, and video decoder using smoothing prediction
EP1817918A1 (en) Method and apparatus for encoding/decoding multi-layer video using dct upsampling
WO2006104357A1 (en) Method for compressing/decompressing motion vectors of unsynchronized picture and apparatus using the same
EP1847129A1 (en) Method and apparatus for compressing multi-layered motion vector

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06835205

Country of ref document: EP

Kind code of ref document: A1