WO2006118384A1 - Method and apparatus for encoding/decoding multi-layer video using weighted prediction - Google Patents

Method and apparatus for encoding/decoding multi-layer video using weighted prediction Download PDF

Info

Publication number
WO2006118384A1
Authority
WO
WIPO (PCT)
Prior art keywords
picture
calculating
current picture
low layer
reference pictures
Prior art date
Application number
PCT/KR2006/001472
Other languages
French (fr)
Inventor
Kyo-Hyuk Lee
Sang-Chang Cha
Woo-Jin Han
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020050059834A (KR100763182B1)
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Priority to CN2006800191163A (CN101185334B)
Priority to EP06747383A (EP1878252A4)
Publication of WO2006118384A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/36Scalability techniques involving formatting the layers as a function of picture distortion after decoding, e.g. signal-to-noise [SNR] scalability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/109Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/33Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/577Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/587Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution

Definitions

  • Apparatuses and methods consistent with the present invention relate to video coding, and more particularly, to effectively coding multiple layers using interlayer information in a multi-layered video codec.
  • Multimedia data requires a large capacity of storage media and a wide bandwidth for transmission since the amount of multimedia data is usually large.
  • a 24-bit true color image having a resolution of 640*480 needs a capacity of 640*480*24 bits, i.e., data of about 7.37 Mbits, per frame.
  • when this image is transmitted at a speed of 30 frames per second, a bandwidth of 221 Mbits/sec is required.
  • when a 90-minute movie based on such an image is stored, a storage space of about 1200 Gbits is required.
  • a compression coding method is a requisite for transmitting multimedia data including text, video, and audio.
  • a basic principle of data compression is removing data redundancy.
  • Data can be compressed by removing spatial redundancy in which the same color or object is repeated in an image, temporal redundancy in which there is little change between adjacent frames in a moving image or the same sound is repeated in audio, or mental visual redundancy taking into account human eyesight and limited perception of high frequencies.
  • Data compression can be classified into lossy/lossless compression according to whether source data is lost, intraframe/interframe compression according to whether individual frames are compressed independently, and symmetric/ asymmetric compression according to whether the time required for compression is the same as the time required for recovery.
  • Data compression is defined as real-time compression if the compression/recovery time delay does not exceed 50 ms and is defined as scalable compression when frames have different resolutions.
  • for text or medical data, lossless compression is usually used.
  • lossy compression is usually used for multimedia data.
  • intraframe compression is usually used to remove spatial redundancy
  • interframe compression is usually used to remove temporal redundancy.
  • Transmission media for transmitting multimedia information differ in performance according to the types of media transmitted.
  • the transmission media currently in use have a variety of transfer rates, ranging for example, from a very-high speed communication network capable of transmitting the data at a transfer rate of tens of Mbits per second to a mobile communication network having a transfer rate of 384 Kbps.
  • Previous video coding techniques such as MPEG-1, MPEG-2, H.263 or H.264 remove redundancy based on a motion compensated prediction coding technique. Specifically, temporal redundancy is removed by motion compensation, while spatial redundancy is removed by transform coding.
  • Scalability indicates the ability to partially decode a single compressed bitstream, that is, the ability to perform a variety of types of video reproduction.
  • Scalability includes spatial scalability indicating a video resolution, signal-to-noise ratio (SNR) scalability indicating a video quality level, temporal scalability indicating a frame rate, and a combination thereof.
  • Standardization of 'H.264 SE' is being performed at present by the Joint Video Team (JVT) of the MPEG (Moving Picture Experts Group) and ITU (International Telecommunication Union).
  • An advantageous feature of H.264 SE lies in that it exploits the relevancy among layers in order to code a plurality of layers while employing an H.264 coding technique. While the plurality of layers are different from one another in view of resolution, frame rate, SNR, or the like, they basically have a substantial similarity in that they are generated from the same video source. In this regard, a variety of efficient techniques that utilize information about lower layers in coding upper layer data have been proposed.
  • FIG. 1 is a diagram for explaining weighted prediction proposed in conventional H.264.
  • the weighted prediction allows a motion-compensated reference picture to be appropriately scaled instead of being averaged in order to improve prediction efficiency.
  • a motion block 11 (a 'macroblock' or 'subblock' as the basic unit for calculating a motion vector) in a current picture 10 corresponds to a predetermined image 21 in a left reference picture 20 pointed to by a forward motion vector 22 while corresponding to a predetermined image 31 in a right reference picture 30 pointed to by a backward motion vector 32.
  • An encoder reduces the number of bits required to represent the motion block 11 by subtracting a predicted image obtained from the images 21 and 31 from the motion block 11.
  • a conventional encoder not using weighted prediction calculates a predicted image by simply averaging the images 21 and 31.
  • since the motion block 11 is not usually identical to an average of the left and right images 21 and 31, it is difficult to obtain an accurate predicted image.
  • a method for determining a predicted image using a weighted sum is proposed in H.264.
  • weighting factors α and β are determined for each slice, and a sum of products of multiplying the weighting factors α and β by the images 21 and 31 is used as a predicted image.
  • the slice may consist of a plurality of macroblocks and be identical to a picture. A plurality of slices may make up a picture.
  • the proposed method can obtain a predicted image with a very small difference from the motion block 11 by adjusting the weighting factors.
  • the method can also improve coding efficiency by subtracting the predicted image from the motion block 11.
  • Illustrative, non-limiting embodiments of the present invention overcome the above disadvantages and other disadvantages not described above. Also, the present invention is not required to overcome the disadvantages described above, and an illustrative, non-limiting embodiment of the present invention may not overcome any of the problems described above. Apparatuses and methods consistent with present invention estimate a weighting factor that will be applied to a picture in a higher layer using information from a picture in a lower layer and perform weighted prediction on the picture in the higher layer using the estimated weighting factor.
  • a method for encoding video by performing weighted prediction on a current picture in a high layer using information on a picture in a low layer, the method including reading the information on the low layer picture, calculating weighting factors using the information on the low layer picture, calculating a weighted sum of reference pictures to the current picture using the weighting factors and generating a predicted picture for the current picture, and encoding a difference between the current picture and the predicted picture.
  • a method for decoding video by performing weighted prediction on a current picture in a high layer using information on a picture in a low layer, the method including extracting texture data and motion data from an input bitstream, reconstructing information about the low layer picture from the texture data, calculating weighting factors using the information on the low layer picture, calculating a weighted sum of reference pictures to the current picture using the weighting factors and generating a predicted picture for the current picture, and reconstructing a residual signal for the current picture from the texture data and adding the reconstructed residual signal to the predicted picture.
  • a video encoder for performing weighted prediction on a current picture in a high layer using information on a picture in a low layer, the video encoder including an element for reading the information on the low layer picture, an element for calculating weighting factors using the information on the low layer picture, an element for calculating a weighted sum of reference pictures to the current picture using the weighting factors and generating a predicted picture for the current picture, and an element for encoding a difference between the current picture and the predicted picture.
  • a video decoder for performing weighted prediction on a current picture in a high layer using information on a picture in a low layer, the video decoder including an element for extracting texture data and motion data from an input bitstream, an element for reconstructing information about the low layer picture from the texture data, an element for calculating weighting factors using the information on the low layer picture, an element for generating a predicted picture for the current picture by calculating a weighted sum of reference pictures to the current picture using the weighting factors, and an element for adding texture data of the current picture among the texture data to the predicted picture.
  • FIG. 1 is a diagram for explaining conventional weighted prediction proposed in H.264.
  • FIG. 2 is a flowchart illustrating a multi-layered weighted prediction method according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating sub-steps for step S50 illustrated in FIG. 2.
  • FIG. 4 illustrates a multi-layer video structure in which a high layer has double the resolution of a lower layer but the same frame rate as the low layer.
  • FIG. 5 illustrates a multi-layer video structure in which a high layer and a low layer have a Motion-compensated Temporal Filtering (MCTF) structure.
  • FIG. 6 illustrates a multi-layer video structure in which a high layer and a low layer have a Hierarchical B structure.
  • FIG. 7 illustrates a multi-layer video structure in which a high layer has an MCTF structure and a low layer has a Hierarchical B structure.
  • FIG. 8 illustrates a multi-layer video structure in which a high layer and a low layer have the same frame rate and pictures have multiple reference schemes.
  • FIG. 9 illustrates an example in which an embodiment of the present invention is applied when a current picture is an asynchronized picture.
  • FIG. 10 is a schematic diagram for explaining a method for using a weighting factor calculated from a low layer for a high layer.
  • FIG. 11 is a block diagram of a video encoder according to an embodiment of the present invention.
  • FIG. 12 is a block diagram of a video decoder according to an embodiment of the present invention.
  • FIG. 13 is a schematic block diagram of a system in which a video encoder and/or a video decoder according to an exemplary embodiment of the present invention operate.
  • Weighted prediction can be effective for fade-in or fade-out sequences achieved by a gradual increase or decrease in brightness of a picture.
  • a weighting factor to be applied to a high layer is expected to be similar to a weighting factor for a low layer. That is, information from a picture in a low layer can be used to perform weighted prediction on a picture in a high layer.
  • an encoder does not need to transmit weighting factors α and β needed for weighted prediction to a decoder because the information from the picture in the low layer is available at both encoder and decoder.
  • the decoder can perform weighted prediction according to the same algorithm as used in the encoder.
  • FIG. 2 is a flowchart illustrating a multi-layered weighted prediction method according to an embodiment of the present invention.
  • a synchronized picture means a picture having a corresponding picture in a lower layer (a 'base picture') at the same temporal position.
  • An asynchronized picture means a picture having no corresponding picture in a lower layer at the same temporal position.
  • the temporally same position can be determined by a Picture Order Count (POC) as defined in the Joint Scalable Video Model (JSVM).
  • the encoder determines whether the current picture has the same reference scheme and reference distance as the base picture in step S30.
  • the reference scheme may be forward reference, backward reference, bi-directional reference or other multiple reference schemes.
  • the reference distance refers to a temporal distance between a picture being predicted and a reference picture. In the JSVM, the temporal distance can be represented as a difference between a POC for the picture being predicted and a POC for the reference picture.
  • the encoder applies weighted prediction to the current picture in step S50. Conversely, if the current picture has a different reference scheme or reference distance from the base picture (NO in step S30), the encoder does not apply weighted prediction to the current picture in step S60.
  • FIG. 3 is a flowchart illustrating sub-steps for the step S50 illustrated in FIG. 2.
  • in step S51, the encoder reads information on the picture in the low layer.
  • the information on the low layer picture contains a base picture and a reference picture motion-compensated with respect to the base picture.
  • in step S52, weighting factors α and β are calculated using the information on the low layer picture.
  • a method for calculating the weighting factors α and β from the given pictures in step S52 (a 'weighting factor calculation algorithm') will be described later in more detail.
  • in step S53, the encoder uses a motion vector to perform motion compensation on a reference picture for the current picture.
  • a motion vector from a high layer estimated by motion estimation is used. If a plurality of reference pictures are used, motion compensation should be performed on each of the plurality of reference pictures using an appropriate motion vector.
  • in step S54, the encoder multiplies the weighting factors α and β by the motion-compensated reference pictures and adds the products to obtain a predicted picture (or predicted slice).
  • the encoder calculates a difference between the current picture (or slice) and the predicted picture (or slice) in step S55 and encodes the difference in step S56.
  • the decoder can calculate a weighting factor used in the encoder by performing weighted prediction in the same manner as the encoder.
  • FIGS. 4 through 8 are diagrams illustrating various multi-layer video structures to which the present invention can be applied.
  • FIG. 4 illustrates a structure in which a picture in a high layer (Layer 2) has double the resolution of, but the same frame rate as, a picture in a low layer (Layer 1) and both layers have a single temporal level.
  • Reference symbols I, P, and B denote an I-picture (or slice), a P-picture (or slice), and a B-picture (or slice), respectively.
  • reference pictures in Layer 1 have the same positions as their counterparts in Layer 2.
  • a current picture in a high layer having a base picture with the same reference scheme and reference distance can be encoded or decoded using a weighting factor applied to the base picture.
  • a non-adjacent picture may be used as a reference picture.
  • FIG. 5 illustrates a multi-layer video structure in which a high layer (Layer 2) has double the frame rate of a low layer (Layer 1).
  • the high layer has one more temporal level than the low layer.
  • the techniques described here can be applied to a structure in which the high and low layers are decomposed into hierarchical temporal levels. That is, pictures 54 through 56 that satisfy the requirements described with reference to FIG. 2 among high-pass pictures in the high layer can effectively be encoded or decoded using weighting factors applied to their corresponding base pictures 57 through 59. On the other hand, weighted prediction is not applied to high-pass pictures 50 through 53 at the highest level (level 2) in the high layer with no corresponding base pictures.
  • FIG. 6 illustrates a multi-layer video structure in which a high layer (Layer 2) and a low layer (Layer 1) have a hierarchical B structure defined in H.264. Like in FIG. 5, the high layer has double the frame rate of the low layer.
  • each layer is decomposed into temporal levels in a different way than for an MCTF structure. That is, the frame rate increases as pictures at lower levels are added.
  • if the frame rate of the high layer is A, the encoder transmits only pictures at level 2 for the lowest frame rate, pictures at levels 2 and 1 for an intermediate frame rate, or pictures at all levels for the full frame rate A.
  • a high-pass picture 61 has the same reference scheme and reference distance as a base picture 64 and a high-pass picture 62 has the same reference scheme and reference distance as a base picture 65.
  • the high-pass pictures 61 and 62 can be subjected to weighted prediction using weighting factors applied to the base pictures 64 and 65.
  • weighted prediction is not applied to a high-pass picture 63 because it does not have a corresponding base picture.
  • FIG. 7 illustrates a multi-layer video structure in which a high layer (Layer 2) has an MCTF structure and a low layer (Layer 1) has a Hierarchical B structure.
  • weighted prediction is not applied to high-pass pictures at level 2 in the high layer because the high-pass pictures at level 2 do not have any corresponding base pictures.
  • high-pass pictures at level 1 or 0 have corresponding base pictures.
  • a high-pass picture 72 has a base picture 75.
  • weighted prediction can be applied without any problem if the two pictures 72 and 75 have the same reference scheme and reference distance; because they actually do, weighted prediction can be applied.
  • weighted prediction can be performed on a picture 73 using a weighting factor assigned to a picture 74.
  • FIG. 8 illustrates a multi-layer video structure in which a high layer (Layer 2) and a low layer (Layer 1) have a single temporal level and all high-pass pictures in the high layer have corresponding base pictures. However, not all the high-pass pictures in the high layer have the same reference scheme and reference distance as their corresponding base pictures.
  • a high-pass picture 81 has the same reference scheme (bi-directional reference scheme) and the same reference distance (1) as its corresponding base picture 85.
  • a high-pass picture 82 has the same reference scheme (backward reference scheme) and the same reference distance (1) as its corresponding picture 86.
  • high-pass pictures 83 and 84 do not use the same reference schemes as their corresponding base pictures 87 and 88. Thus, weighted prediction is not applied to the high-pass pictures 83 and 84.
  • although in the embodiments described above weighted prediction is applied when the current picture is a synchronized picture and the current picture has the same reference scheme and reference distance as the base picture, the present invention is not limited to the particular embodiments shown in FIGS. 2 through 8.
  • a weighting factor can be calculated using a low layer picture at the closest temporal position to the current picture and its reference picture.
  • in FIG. 9, pictures 91 and 93 in a high layer (Layer 2) have no corresponding base pictures in a low layer (Layer 1), and pictures in the high layer and low layer have different reference distances.
  • weighting factors α and β for the pictures 91 and 93 can be calculated using a picture 92 in the low layer at the closest temporal position to the pictures 91 and 93 and its reference pictures 94 and 96.
  • a 'weighting factor calculation algorithm' is applied to the low layer picture 92 and the reference pictures 94 and 96 to calculate the weighting factors α and β, which can then be used as weighting factors for the pictures 91 and 93.
  • this method exhibits high coding performance compared with simply averaging the reference pictures, i.e., applying weighting factors of 1/2 each.
  • a low layer picture at the same temporal position as or closest temporal position to a high layer picture is hereinafter generally referred to as a 'base picture'.
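  • As a minimal sketch of this fallback selection (the Picture type and field names below are illustrative assumptions, not taken from the patent), choosing the base picture for an asynchronized high layer picture reduces to a nearest-POC search:

```python
from dataclasses import dataclass

@dataclass
class Picture:
    poc: int        # Picture Order Count (temporal position)
    pixels: object  # picture data, e.g., a numpy array

def closest_base_picture(cur_poc, low_layer_pics):
    """FIG. 9 fallback: when the current high layer picture has no base
    picture at the same POC, use the low layer picture whose POC is closest
    to it (picture 92) for the weighting factor calculation."""
    return min(low_layer_pics, key=lambda p: abs(p.poc - cur_poc))
```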
  • FIG. 10 is a schematic diagram for explaining a method for using a weighting factor calculated from a low layer for a high layer.
  • a unit block 41 means a region having a size ranging from a block size to a picture size.
  • the unit block 41 may be a motion block, macroblock, slice, or picture.
  • images 42 and 43 in a reference picture corresponding to the unit block 41 have the same size as the unit block 41.
  • Motion vectors 44 and 45 from the unit block 41 are actual motion vectors when the unit block 41 is a motion block; otherwise, they are a graphical representation of a plurality of motion vectors.
  • the weighting factor calculation algorithm consists of: (a) calculation of a weighted sum of reference pictures to the base picture using predetermined coefficients α and β; and (b) determination of the values of the coefficients α and β that minimize the square of the difference between the base picture and the weighted sum.
  • α and β denote forward and backward coefficients, respectively.
  • a weighted prediction value P(s_k) for a pixel s_k within the unit block 41 can be defined by Equation (1), where v_k and r_k denote the corresponding pixels of the motion-compensated forward and backward reference pictures:

    P(s_k) = α·v_k + β·r_k ... (1)

  • An error E, that is, the difference between the actual pixel s_k and the prediction value P(s_k), can be defined by Equation (2):

    E = Σ_k (s_k − P(s_k))² = Σ_k (s_k − α·v_k − β·r_k)² ... (2)

  • setting the partial derivative of E with respect to α to zero, as in Equation (3),

    ∂E/∂α = −2·Σ_k v_k·(s_k − α·v_k − β·r_k) = 0 ... (3)

    Equation (4) is obtained:

    α·Σ_k v_k² + β·Σ_k v_k·r_k = Σ_k s_k·v_k ... (4)

  • in Equation (4), β is set to 0 so as not to consider the impact of β.
  • A first weighting factor α that minimizes E can then be given by Equation (5):

    α = (Σ_k s_k·v_k) / (Σ_k v_k²) ... (5)

  • likewise, a second weighting factor β is given by Equation (6):

    β = (Σ_k s_k·r_k) / (Σ_k r_k²) ... (6)
  • the weighting factors α and β can be used as weighting factors for a block 46 in a high layer corresponding to the unit block 41.
  • the blocks 46 and 41 may have the same size, or the block 46 may have a larger size than the block 41 (e.g., when the high layer has a higher resolution than the low layer).
  • the high layer uses the weighting factors α and β assigned to the low layer to calculate a weighted sum of reference images 47 and 48 instead of calculating new weighting factors, thus reducing overhead.
  • the process of using the weighting factors assigned to the low layer as weighting factors for the high layer can be performed in the same way both in the encoder and the decoder.
  • a normalized weighting factor that minimizes the error E in Equation (7) can be defined by Equation (8).
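  • A minimal numpy sketch of Equations (5) and (6) as reconstructed above (the function name and array-based interface are illustrative assumptions); because the encoder and the decoder can both run this on the reconstructed low layer, α and β never need to be written into the bitstream:

```python
import numpy as np

def calc_weighting_factors(base, mc_fwd, mc_bwd):
    """Least-squares weighting factors of Equations (5) and (6).

    base:   pixels s_k of the base picture's unit block (block 41)
    mc_fwd: corresponding pixels v_k of the forward reference (image 42)
    mc_bwd: corresponding pixels r_k of the backward reference (image 43)
    Each factor is derived with the other set to 0, as in the text.
    """
    s, v, r = (np.asarray(x, float).ravel() for x in (base, mc_fwd, mc_bwd))
    alpha = np.dot(s, v) / np.dot(v, v)   # Equation (5)
    beta = np.dot(s, r) / np.dot(r, r)    # Equation (6)
    return alpha, beta
```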
  • FIG. 11 is a block diagram of a video encoder 100 according to an embodiment of the present invention.
  • an input current picture F is fed into a motion estimator 105, a subtractor 115, and a downsampler 170.
  • in the low layer, the video encoder 100 operates in the following manner.
  • the downsampler 170 temporally and spatially downsamples the current picture F into a low layer picture F₀.
  • a motion estimator 205 performs motion estimation on the downsampled picture F₀ using a neighboring reference picture to obtain a motion vector MV₀.
  • An original image F₀ᵣ may be used as the reference picture (open-loop coding) or a decoded image F₀ᵣ' may be used as the reference picture (closed-loop coding). It is hereinafter assumed that both the high layer and low layer in the video encoder 100 support closed-loop coding. For motion estimation, a block matching algorithm is widely used.
  • a motion compensator 210 uses the motion vector MV₀ to motion-compensate the reference picture F₀ᵣ' to obtain a motion-compensated picture mc(F₀ᵣ'). When a plurality of reference pictures F₀ᵣ' are used, motion compensation may be performed on each of the plurality of reference pictures.
  • the motion-compensated picture mc(F₀ᵣ') or a predicted picture calculated using the motion-compensated picture mc(F₀ᵣ') is fed into a subtractor 215.
  • when a single reference picture is used, the motion-compensated picture mc(F₀ᵣ') is input directly.
  • when a plurality of reference pictures are used, a predicted picture may be calculated by averaging the plurality of motion-compensated pictures mc(F₀ᵣ') before being input to the subtractor 215. Because the present embodiment of the invention is not affected by whether weighted prediction is used for the low layer, FIG. 11 shows that no weighted prediction is applied to the low layer. Of course, weighted prediction may be applied to the low layer.
  • the subtractor 215 subtracts the predicted picture (or each motion-compensated block) from the downsampled picture F₀ and generates a difference signal R₀, which is then supplied to a transformer 220.
  • the transformer 220 performs a spatial transform on the difference signal R₀ and generates a transform coefficient R₀^T.
  • the spatial transform method may include a Discrete Cosine Transform (DCT), or wavelet transform. Specifically, DCT coefficients may be created in a case where DCT is employed, and wavelet coefficients may be created in a case where wavelet transform is employed.
  • a quantizer 225 quantizes the transform coefficient. Quantization refers to a process of expressing the transform coefficient, which is an arbitrary real value, by discrete values. For example, quantization is performed such that the transform coefficient is divided by a predetermined quantization step and the result is then rounded to an integer.
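  • A toy sketch of this divide-and-round quantization and its inverse (illustrative only; real codecs add quantization matrices and rate control):

```python
import numpy as np

def quantize(coeff, qstep):
    """Divide each transform coefficient by the quantization step and round
    to the nearest integer index, as described above."""
    return np.round(coeff / qstep).astype(int)

def dequantize(index, qstep):
    """Inverse quantization: restore the value matched to each index."""
    return index * qstep

c = np.array([7.37, -3.2, 0.4])
q = quantize(c, qstep=2.0)    # -> [ 4 -2  0]
print(dequantize(q, 2.0))     # -> [ 8. -4.  0.] (lossy: 7.37 became 8.0)
```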
  • the quantization result supplied from the quantizer 225, that is, the quantization coefficient R₀^Q, is supplied to an entropy coding unit 150 and an inverse quantizer 230.
  • the inverse quantizer 230 inversely quantizes the quantization coefficient R₀^Q.
  • the inverse quantization is performed in a reverse order to that of the quantization performed by the quantizer 225, and restores values matched to the indices generated during quantization according to the predetermined quantization step used in the quantization.
  • An inverse transformer 235 receives the inverse quantization result and performs an inverse transformation.
  • the inverse transformation is performed in a reverse order to that of the transformation performed by the transformer 220.
  • inverse DCT transformation, or inverse wavelet transformation may be used.
  • An adder 240 adds the inverse transformation result, i.e., the restored residual signal, and the predicted picture (or motion-compensated picture) used in the motion compensation step performed by the motion compensator 210 and generates a restored picture F₀'.
  • a buffer 245 stores the addition result supplied from the adder 240. Thus, not only the currently restored picture F₀' but also previously stored reference pictures F₀ᵣ' are stored in the buffer 245.
  • in the high layer, the video encoder 100 operates in the following manner.
  • the video encoder 100 in the higher layer includes a motion estimator 105 calculating a motion vector MV, a motion compensator 110 that motion-compensates a reference picture Fᵣ' using the motion vector MV, a subtractor 115 calculating a residual signal R between the current picture F and a predicted picture P, a transformer 120 applying a transformation to the residual signal R to generate a transform coefficient R^T, and a quantizer 125 quantizing the transform coefficient R^T to output a quantization coefficient R^Q.
  • the higher layer's video encoder further includes an inverse quantizer 130, an inverse transformer 135, an adder 140, and a buffer 145. Since functions and operations of these components are the same as those in the lower layer, their repetitive description will be omitted. The following description will focus on distinguishing features of the components.
  • a weighting factor calculator 180 uses a base picture F₀ in the low layer corresponding to the current picture in the high layer and the motion-compensated reference picture mc(F₀ᵣ') to calculate weighting factors α and β.
  • the weighting factors α and β can be calculated by the above Equations (5) and (6).
  • alternatively, a normalized weighting factor can be calculated by Equation (8). Of course, when a unidirectional reference scheme is used, only one weighting factor may be used.
  • a weighted sum calculator 160 uses the calculated weighting factors α and β (or the normalized weighting factor) to calculate a weighted sum of the motion-compensated pictures mc(Fᵣ') provided by the motion compensator 110.
  • for a bi-directional reference scheme, the weighted sum may be represented by α·mc(Fᵣ₁') + β·mc(Fᵣ₂').
  • for a unidirectional reference scheme, the weighted sum is represented by α·mc(Fᵣ').
  • weighted prediction is performed per motion block, macroblock, slice, or picture. If weighted prediction is performed in a unit smaller than a picture, as many pairs of weighting factors α and β as there are units in the picture should be calculated, and a different pair is applied to each unit when calculating the weighted sum.
  • the weighted sum, i.e., the predicted picture P, obtained by the weighted sum calculator 160 is fed into the subtractor 115.
  • the subtractor 115 generates the residual signal R by subtracting the predicted picture P from the current picture F and provides the same to the transformer 120.
  • the transformer 120 applies a spatial transform to the residual signal R to create the transform coefficient R^T.
  • the quantizer 125 quantizes the transform coefficient R^T.
  • the entropy coding unit 150 performs lossless coding on the motion-estimated motion vectors MV and MV₀ provided from the motion estimators 105 and 205 and the quantization coefficients R^Q and R₀^Q provided from the quantizers 125 and 225, respectively.
  • the lossless coding methods may include, for example, Huffman coding, arithmetic coding, and any other suitable lossless coding method known to one of ordinary skill in the art.
  • FIG. 12 is a block diagram of a video decoder 300 according to an embodiment of the present invention.
  • An entropy decoding unit 310 performs lossless decoding on an input bitstream and extracts motion vectors MV and MV₀ and texture data R^Q and R₀^Q for the respective layers.
  • the lossless decoding is performed in a reverse order to that of lossless coding performed by an entropy coder part in each layer.
  • the extracted texture data R₀^Q of the lower layer is provided to an inverse quantizer 420 and the extracted motion vector MV₀ of the lower layer is provided to a motion compensator 460.
  • the extracted texture data R^Q of the higher layer is provided to an inverse quantizer 320 and the extracted motion vector MV of the higher layer is provided to a motion compensator 360.
  • the inverse quantizer 420 performs inverse quantization on the texture data R₀^Q.
  • the inverse quantization is a process of restoring values matched to indices generated during the quantization process using the same quantization table as in the quantization process.
  • An inverse transformer 430 receives the inverse quantization result and performs an inverse transformation.
  • the inverse transformation is performed in a reverse order to that of the transformation process performed by a transformer. Specifically, inverse DCT transformation, or inverse wavelet transformation may be used.
  • the restored residual signal R₀' is provided to an adder 440.
  • the motion compensator 460 motion-compensates the low layer's reference picture F₀ᵣ', which was previously restored and stored in a buffer 450, using the extracted motion vector MV₀, and generates a motion-compensated picture mc(F₀ᵣ').
  • in the high layer, the video decoder 300 operates in the following manner.
  • the video decoder 300 in the higher layer includes an inverse quantizer 320, an inverse transformer 330, a buffer 350, and a motion compensator 360. Since functions and operations of these components are the same as those in the lower layer, their repetitive description will be omitted. The following description will focus on distinguishing features of the components.
  • a weighting factor calculator 380 uses a base picture F₀ and a motion-compensated reference picture mc(F₀ᵣ') to calculate weighting factors α and β.
  • the weighting factors α and β can be calculated by Equations (5) and (6).
  • alternatively, a normalized weighting factor can be calculated by Equation (8).
  • when a unidirectional reference scheme is used, only one weighting factor may be used.
  • a weighted sum calculator 370 uses the calculated weighting factors α and β (or the normalized weighting factor) to calculate a weighted sum of the motion-compensated pictures mc(Fᵣ') provided by the motion compensator 360.
  • a weighted sum, i.e., a predicted picture P, obtained by the weighted sum calculator 370 is fed into an adder 340.
  • weighted prediction is performed per motion block, macroblock, slice, or picture. If weighted prediction is performed in a unit smaller than a picture, as many pairs of weighting factors α and β as there are units in the picture should be calculated, and a different pair is applied to each unit when calculating the weighted sum.
  • the adder 340 adds the inverse transform result R' supplied from the inverse transformer 330 and the predicted picture and restores a current picture F'.
  • a buffer 350 stores the restored current picture F'.
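  • Putting the high layer decoding path together (a sketch reusing calc_weighting_factors from the earlier sketch; the names are illustrative assumptions, not from the patent):

```python
def reconstruct_high_layer(residual, mc_fwd, mc_bwd,
                           base, base_mc_fwd, base_mc_bwd):
    """High layer path of FIG. 12: re-derive alpha and beta from the
    already-decoded low layer, form the same predicted picture P as the
    encoder (weighted sum calculator 370), and add the decoded residual R'
    to it (adder 340) to restore the current picture F'."""
    alpha, beta = calc_weighting_factors(base, base_mc_fwd, base_mc_bwd)
    predicted = alpha * mc_fwd + beta * mc_bwd
    return residual + predicted
```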
  • FIG. 13 is a schematic block diagram of a system in which the video encoder 100 and the video decoder 300 according to an exemplary embodiment of the present invention operate.
  • the system may be a television (TV), a set-top box, a desktop, laptop, or palmtop computer, a personal digital assistant (PDA), or a video or image storing apparatus (e.g., a video cassette recorder (VCR) or a digital video recorder (DVR)).
  • the system may be a combination of the above-mentioned apparatuses or one of the apparatuses which includes a part of another apparatus among them.
  • the system includes at least one video/image source 910, at least one input/ output unit 920, a processor 940, a memory 950, and a display unit 930.
  • the video/image source 910 may be a TV receiver, a VCR, or other video/image storing apparatus.
  • the video/image source 910 may indicate at least one network connection for receiving a video or an image from a server using the Internet, a wide area network (WAN), a local area network (LAN), a terrestrial broadcast system, a cable network, a satellite communication network, a wireless network, a telephone network, or the like.
  • the video/image source 910 may be a combination of the networks or one network including a part of another network among the networks.
  • the input/output unit 920, the processor 940, and the memory 950 communicate with one another through a communication medium 960.
  • the communication medium 960 may be a communication bus, a communication network, or at least one internal connection circuit.
  • Input video/image data received from the video/image source 910 can be processed by the processor 940 using at least one software program stored in the memory 950 to generate an output video/image provided to the display unit 930.
  • the software program stored in the memory 950 includes a scalable wavelet-based codec performing a method according to an embodiment of the present invention.
  • the codec may be stored in the memory 950, may be read from a storage medium such as a compact disc-read only memory (CD-ROM) or a floppy disc, or may be downloaded from a predetermined server through a variety of networks.
  • a video encoder and video decoder according to the present invention enable effective weighted prediction on a high layer picture by calculating weighting factors from a base picture without transmitting additional information to the decoder, thus improving the compression efficiency of video data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method and apparatus effectively encode multiple layers using interlayer information in a multi-layered video codec. The method includes reading information concerning a picture in a low layer, calculating weighting factors using the information concerning the low layer picture, calculating a weighted sum of reference pictures for a current picture in a high layer using the weighting factors and generating a predicted picture for the current picture, and encoding a difference between the current picture and the predicted picture.

Description

METHOD AND APPARATUS FOR ENCODING/DECODING
MULTI-LAYER VIDEO USING WEIGHTED PREDICTION
Technical Field
[1] Apparatuses and methods consistent with the present invention relate to video coding, and more particularly, to effectively coding multiple layers using interlayer information in a multi-layered video codec.
Background Art
[2] With the development of information communication technology, including the
Internet, there have been an increasing number of multimedia services containing various kinds of information such as text, video, audio and so on. Multimedia data requires a large capacity of storage media and a wide bandwidth for transmission since the amount of multimedia data is usually large. For example, a 24-bit true color image having a resolution of 640*480 needs a capacity of 640*480*24 bits, i.e., data of about 7.37 Mbits, per frame. When this image is transmitted at a speed of 30 frames per second, a bandwidth of 221 Mbits/sec is required. When a 90-minute movie based on such an image is stored, a storage space of about 1200 Gbits is required. Accordingly, a compression coding method is a requisite for transmitting multimedia data including text, video, and audio.
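The figures above follow directly from the raw data rate; a short script (illustrative only, not part of the patent) makes the arithmetic explicit:

```python
# Sanity check of the raw-data figures quoted above.
bits_per_frame = 640 * 480 * 24              # 7,372,800 bits = ~7.37 Mbits per frame
bits_per_second = bits_per_frame * 30        # ~221 Mbits/sec at 30 frames per second
bits_per_movie = bits_per_second * 90 * 60   # ~1,194 Gbits for a 90-minute movie

print(bits_per_frame / 1e6)    # 7.3728
print(bits_per_second / 1e6)   # 221.184
print(bits_per_movie / 1e9)    # 1194.3936, i.e., about 1200 Gbits
```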
[3] A basic principle of data compression is removing data redundancy. Data can be compressed by removing spatial redundancy in which the same color or object is repeated in an image, temporal redundancy in which there is little change between adjacent frames in a moving image or the same sound is repeated in audio, or mental visual redundancy taking into account human eyesight and limited perception of high frequencies. Data compression can be classified into lossy/lossless compression according to whether source data is lost, intraframe/interframe compression according to whether individual frames are compressed independently, and symmetric/ asymmetric compression according to whether the time required for compression is the same as the time required for recovery. Data compression is defined as real-time compression if the compression/recovery time delay does not exceed 50 ms and is defined as scalable compression when frames have different resolutions. For text or medical data, lossless compression is usually used. For multimedia data, lossy compression is usually used. Further, intraframe compression is usually used to remove spatial redundancy, and interframe compression is usually used to remove temporal redundancy.
[4] Transmission media for transmitting multimedia information differ in performance according to the types of media transmitted. The transmission media currently in use have a variety of transfer rates, ranging, for example, from a very-high speed communication network capable of transmitting data at a transfer rate of tens of Mbits per second to a mobile communication network having a transfer rate of 384 Kbps. Previous video coding techniques such as MPEG-1, MPEG-2, H.263 or H.264 remove redundancy based on a motion compensated prediction coding technique. Specifically, temporal redundancy is removed by motion compensation, while spatial redundancy is removed by transform coding. These techniques have a good compression rate, but do not provide flexibility for a true scalable bitstream due to the use of a recursive approach in a main algorithm. Thus, recent research has been actively made on wavelet-based scalable video coding. Scalability indicates the ability to partially decode a single compressed bitstream, that is, the ability to perform a variety of types of video reproduction. Scalability includes spatial scalability indicating a video resolution, signal-to-noise ratio (SNR) scalability indicating a video quality level, temporal scalability indicating a frame rate, and a combination thereof.
[5] Standardization of H.264 Scalable Extension (hereinafter referred to as
'H.264 SE') is being performed at present by the Joint Video Team (JVT) of the MPEG (Moving Picture Experts Group) and ITU (International Telecommunication Union). An advantageous feature of H.264 SE lies in that it exploits the relevancy among layers in order to code a plurality of layers while employing an H.264 coding technique. While the plurality of layers are different from one another in view of resolution, frame rate, SNR, or the like, they basically have a substantial similarity in that they are generated from the same video source. In this regard, a variety of efficient techniques that utilize information about lower layers in coding upper layer data have been proposed.
[6] FIG. 1 is a diagram for explaining weighted prediction proposed in conventional
H.264. The weighted prediction allows a motion-compensated reference picture to be appropriately scaled instead of being averaged in order to improve prediction efficiency.
[7] A motion block 11 (a 'macroblock' or 'subblock' as the basic unit for calculating a motion vector) in a current picture 10 corresponds to a predetermined image 21 in a left reference picture 20 pointed to by a forward motion vector 22 while corresponding to a predetermined image 31 in a right reference picture 30 pointed to by a backward motion vector 32.
[8] An encoder reduces the number of bits required to represent the motion block 11 by subtracting a predicted image obtained from the images 21 and 31 from the motion block 11. A conventional encoder not using weighted prediction calculates a predicted image by simply averaging the images 21 and 31. However, since the motion block 11 is not usually identical to an average of the left and right images 21 and 31, it is difficult to obtain an accurate predicted image.
[9] To overcome this limitation, a method for determining a predicted image using a weighted sum is proposed in H.264. According to the method, weighting factors α and β are determined for each slice, and a sum of products of multiplying the weighting factors α and β by the images 21 and 31 is used as a predicted image. The slice may consist of a plurality of macroblocks and be identical to a picture, or a plurality of slices may make up a picture. The proposed method can obtain a predicted image with a very small difference from the motion block 11 by adjusting the weighting factors. The method can also improve coding efficiency by subtracting the predicted image from the motion block 11.
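As a simple illustration of why a weighted sum can beat plain averaging (a minimal numpy sketch; the function and sample values are invented for illustration, not taken from the patent):

```python
import numpy as np

def predict_block(fwd, bwd, alpha=None, beta=None):
    """Predicted image for a bi-predicted motion block: plain averaging when
    no weights are given, an H.264-style weighted sum otherwise."""
    if alpha is None or beta is None:
        return (fwd + bwd) / 2.0         # conventional averaging
    return alpha * fwd + beta * bwd      # weighted prediction

# During a fade-out the current block is darker than both references, so a
# weighted sum can leave a much smaller residual than the plain average.
block = np.full((16, 16), 80.0)                       # current motion block
fwd = bwd = np.full((16, 16), 100.0)                  # motion-compensated references
res_avg = block - predict_block(fwd, bwd)             # -20.0 per pixel
res_wp = block - predict_block(fwd, bwd, 0.4, 0.4)    # 0.0 per pixel
```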
Disclosure of Invention
Technical Problem
[10] While the weighted prediction defined in H.264 is very effective, this technique has been applied so far only to single layer video coding. Research has not yet been conducted on application of this technique to multi-layered scalable video coding. Accordingly, there is a need to apply weighted prediction to multi-layered scalable video coding.
Technical Solution
[11] Illustrative, non-limiting embodiments of the present invention overcome the above disadvantages and other disadvantages not described above. Also, the present invention is not required to overcome the disadvantages described above, and an illustrative, non-limiting embodiment of the present invention may not overcome any of the problems described above. Apparatuses and methods consistent with present invention estimate a weighting factor that will be applied to a picture in a higher layer using information from a picture in a lower layer and perform weighted prediction on the picture in the higher layer using the estimated weighting factor.
[12] These apparatuses and methods also provide an algorithm for performing the weighted prediction.
[13] These and other aspects of the present invention will be described in or be apparent from the following description of the preferred embodiments.
[14] According to an aspect of the present invention, there is provided a method for encoding video by performing weighted prediction on a current picture in a high layer using information on a picture in a low layer, the method including reading the information on the low layer picture, calculating weighting factors using the information on the low layer picture, calculating a weighted sum of reference pictures to the current picture using the weighting factors and generating a predicted picture for the current picture, and encoding a difference between the current picture and the predicted picture.
[15] According to another aspect of the present invention, there is provided a method for decoding video by performing weighted prediction on a current picture in a high layer using information on a picture in a low layer, the method including extracting texture data and motion data from an input bitstream, reconstructing information about the low layer picture from the texture data, calculating weighting factors using the information on the low layer picture, calculating a weighted sum of reference pictures to the current picture using the weighting factors and generating a predicted picture for the current picture, and reconstructing a residual signal for the current picture from the texture data and adding the reconstructed residual signal to the predicted picture.
[16] According to still another aspect of the present invention, there is provided a video encoder for performing weighted prediction on a current picture in a high layer using information on a picture in a low layer, the video encoder including an element for reading the information on the low layer picture, an element for calculating weighting factors using the information on the low layer picture, an element for calculating a weighted sum of reference pictures to the current picture using the weighting factors and generating a predicted picture for the current picture, and an element for encoding a difference between the current picture and the predicted picture.
[17] According to yet another aspect of the present invention, there is provided a video decoder for performing weighted prediction on a current picture in a high layer using information on a picture in a low layer, the video decoder including an element for extracting texture data and motion data from an input bitstream, an element for reconstructing information about the low layer picture from the texture data, an element for calculating weighting factors using the information on the low layer picture, an element for generating a predicted picture for the current picture by calculating a weighted sum of reference pictures to the current picture using the weighting factors, and an element for adding texture data of the current picture among the texture data to the predicted picture.
Description of Drawings
[18] The above and other features and advantages of the present invention will become more apparent by describing in detail preferred embodiments thereof with reference to the attached drawings in which:
[19] FIG. 1 is a diagram for explaining conventional weighted prediction proposed in
H.264;
[20] FIG. 2 is a flowchart illustrating a multi-layered weighted prediction method according to an embodiment of the present invention;
[21] FIG. 3 is a flowchart illustrating sub-steps for step S50 illustrated in FIG. 2;
[22] FIG. 4 illustrates a multi-layer video structure in which a high layer has double the resolution of a lower layer but the same frame rate as the low layer;
[23] FIG. 5 illustrates a multi-layer video structure in which a high layer and a low layer have a Motion-compensated Temporal Filtering (MCTF) structure;
[24] FIG. 6 illustrates a multi-layer video structure in which a high layer and a low layer have a Hierarchical B structure;
[25] FIG. 7 illustrates a multi-layer video structure in which a high layer has an MCTF structure and a low layer has a Hierarchical B structure;
[26] FIG. 8 illustrates a multi-layer video structure in which a high layer and a low layer have the same frame rate and pictures have multiple reference schemes;
[27] FIG. 9 illustrates an example in which an embodiment of the present invention is applied when a current picture is an asynchronized picture;
[28] FIG. 10 is a schematic diagram for explaining a method for using a weighting factor calculated from a low layer for a high layer;
[29] FIG. 11 is a block diagram of a video encoder according to an embodiment of the present invention;
[30] FIG. 12 is a block diagram of a video decoder according to an embodiment of the present invention; and
[31] FIG. 13 is a schematic block diagram of a system in which a video encoder and/or a video decoder according to an exemplary embodiment of the present invention operate.
Mode for Invention
[32] Advantages and features of the present invention and methods of accomplishing the same may be understood more readily by reference to the following detailed description of preferred embodiments and the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the invention to those skilled in the art, and the present invention will only be defined by the appended claims. Like reference numerals refer to like elements throughout the specification.
[33] The present invention will now be described more fully with reference to the accompanying drawings, in which embodiments of the invention are shown.
[34] Weighted prediction can be effective for fade-in or fade-out sequences achieved by a gradual increase or decrease in brightness of a picture. When the fade-in or fade-out sequence is coded using a multi-layered scalable video codec, a weighting factor to be applied to a high layer is expected to be similar to a weighting factor for a low layer. That is, information from a picture in a low layer can be used to perform weighted prediction on a picture in a high layer. In this case, an encoder does not need to transmit weighting factors α and β needed for weighted prediction to a decoder because the information from the picture in the low layer is available at both encoder and decoder. Thus, the decoder can perform weighted prediction according to the same algorithm as used in the encoder.
[35] FIG. 2 is a flowchart illustrating a multi-layered weighted prediction method according to an embodiment of the present invention.
[36] Referring to FIG. 2, when a current picture in a high layer is input in step S10, it is determined in step S20 whether the current picture is a synchronized picture. In the present invention, a synchronized picture means a picture having a corresponding picture in a lower layer (a 'base picture') at the same temporal position; an asynchronized picture means a picture having no corresponding picture in a lower layer at the same temporal position. The same temporal position can be determined by the Picture Order Count (POC) as defined in the Joint Scalable Video Model (JSVM).
[37] If the current picture is a synchronized picture (YES in step S20), the encoder determines in step S30 whether the current picture has the same reference scheme and reference distance as the base picture. The reference scheme may be forward reference, backward reference, bi-directional reference, or another multiple reference scheme. The reference distance refers to the temporal distance between the picture being predicted and a reference picture. In the JSVM, the temporal distance can be represented as the difference between the POC of the picture being predicted and the POC of the reference picture.
[38] If the current picture has the same reference scheme and reference distance as the base picture (YES in step S30), the encoder applies weighted prediction to the current picture in step S50. Conversely, if the current picture has a different reference scheme or reference distance from the base picture (NO in step S30), the encoder does not apply weighted prediction to the current picture in step S60.
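The decision flow of steps S20 through S60 can be summarized in a short sketch. This is illustrative only: the picture attributes (poc, ref_scheme, ref_distance) are hypothetical names for the metadata the encoder would consult, not fields of any standardized structure.

```python
# Illustrative sketch of the FIG. 2 decision flow; all field names are hypothetical.
def should_apply_weighted_prediction(current, low_layer_pictures):
    # Step S20: a synchronized picture has a base picture at the same POC.
    base = next((p for p in low_layer_pictures if p.poc == current.poc), None)
    if base is None:
        return False, None  # asynchronized picture (but see FIG. 9 below)
    # Step S30: reference scheme and reference distance must match the base picture.
    if (current.ref_scheme == base.ref_scheme
            and current.ref_distance == base.ref_distance):
        return True, base   # step S50: apply weighted prediction
    return False, None      # step S60: skip weighted prediction
```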
[39] FIG. 3 is a flowchart illustrating sub-steps of step S50 illustrated in FIG. 2. Referring to FIG. 3, in step S51, the encoder reads information on the picture in the low layer. The information on the low layer picture contains a base picture and reference pictures motion-compensated with respect to the base picture. In step S52, the weighting factors α and β are calculated using the information on the low layer picture. A method for calculating the weighting factors α and β from the given pictures in step S52 (a 'weighting factor calculation algorithm') will be described later in more detail.
[40] In step S53, the encoder performs motion compensation on a reference picture for the current picture using a motion vector. During the motion compensation, a motion vector of the high layer estimated by motion estimation is used. If a plurality of reference pictures are used, motion compensation should be performed on each of the plurality of reference pictures using its appropriate motion vector.
[41] In step S54, the encoder multiplies the weighting factors α and β by the motion-compensated reference pictures and adds the products to obtain a predicted picture (or predicted slice). The encoder calculates a difference between the current picture (or slice) and the predicted picture (or slice) in step S55 and encodes the difference in step S56.
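A minimal sketch of steps S53 through S56 follows, under the stated assumptions that pictures are numpy arrays and that motion_compensate and encode_residual are hypothetical placeholders for the encoder's motion compensator and residual coding stages, not names from any standard.

```python
import numpy as np

def weighted_prediction_encode(current, refs, mvs, alpha, beta,
                               motion_compensate, encode_residual):
    # Step S53: motion-compensate each reference picture with its own motion vector.
    mc = [motion_compensate(r, mv) for r, mv in zip(refs, mvs)]
    # Step S54: weighted sum of the motion-compensated references gives the
    # predicted picture (bi-directional case; unidirectional uses alpha alone).
    predicted = alpha * mc[0] + beta * mc[1] if len(mc) == 2 else alpha * mc[0]
    # Steps S55 and S56: residual between current and predicted picture, encoded.
    residual = np.asarray(current, dtype=np.float64) - predicted
    return encode_residual(residual)
```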
[42] As described with reference to FIG. 3, it is not necessary to transmit to a decoder an additional flag or weighting factor needed for performing weighted prediction on a high layer. The decoder can calculate the weighting factor used in the encoder by performing weighted prediction in the same manner as the encoder.
[43] FIGS. 4 through 8 are diagrams illustrating various multi-layer video structures to which the present invention can be applied. FIG. 4 illustrates a structure in which a picture in a high layer (Layer 2) has double the resolution of, but the same frame rate as, a picture in a low layer (Layer 1) and both layers have a single temporal level. Reference symbols I, P, and B denote an I-picture (or slice), a P-picture (or slice), and a B-picture (or slice), respectively.
[44] As evident from FIG. 4, since the two layers have different resolutions but corresponding pictures in the two layers have the same reference scheme and reference distance, reference pictures in Layer 1 have the same positions as their counterparts in Layer 2. A current picture in a high layer having a base picture with the same reference scheme and reference distance can be encoded or decoded using a weighting factor applied to the base picture. Of course, when the two layers use the same reference scheme, a non-adjacent picture may be used as a reference picture.
[45] FIG. 5 illustrates a multi-layer video structure in which a high layer (Layer 2) has double the frame rate of a low layer (Layer 1). Referring to FIG. 5, the high layer has one more temporal level than the low layer. The techniques described here can be applied to a structure in which the high and low layers are decomposed into hierarchical temporal levels. That is, the pictures 54 through 56 among the high-pass pictures in the high layer that satisfy the requirements described with reference to FIG. 2 can effectively be encoded or decoded using the weighting factors applied to their corresponding base pictures 57 through 59. On the other hand, weighted prediction is not applied to the high-pass pictures 50 through 53 at the highest level (level 2) in the high layer, which have no corresponding base pictures.
[46] FIG. 6 illustrates a multi-layer video structure in which a high layer (Layer 2) and a low layer (Layer 1) have the Hierarchical B structure defined in H.264. Like in FIG. 5, the high layer has double the frame rate of the low layer. In the Hierarchical B structure, each layer is decomposed into temporal levels in a different way than in an MCTF structure: the frame rate increases as pictures from lower levels are added. Assuming that the frame rate of the high layer is A, when a decoder desires to decode a video with a frame rate of A/4, the encoder transmits only pictures at level 2. When the decoder desires a frame rate of A/2, the encoder may transmit only pictures at levels 2 and 1. When a video with a frame rate of A is desired, the encoder may transmit pictures at all levels.
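As a concrete illustration of this level-to-frame-rate relationship, the hypothetical helper below lists the temporal levels to transmit for a target frame rate; it assumes the levels are ordered from highest to lowest and that the target rate divides the full rate by a power of two.

```python
import math

def levels_to_transmit(levels_high_to_low, full_rate, target_rate):
    # Each additional lower level doubles the decodable frame rate.
    n_dropped = int(math.log2(full_rate / target_rate))
    return levels_high_to_low[:len(levels_high_to_low) - n_dropped]

# With three levels and a full frame rate A of 32 pictures per second:
print(levels_to_transmit([2, 1, 0], 32, 8))    # A/4 -> [2]
print(levels_to_transmit([2, 1, 0], 32, 16))   # A/2 -> [2, 1]
print(levels_to_transmit([2, 1, 0], 32, 32))   # A   -> [2, 1, 0]
```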
[47] Referring to FIG. 6, a high-pass picture 61 has the same reference scheme and reference distance as a base picture 64, and a high-pass picture 62 has the same reference scheme and reference distance as a base picture 65. Thus, the high-pass pictures 61 and 62 can be subjected to weighted prediction using the weighting factors applied to the base pictures 64 and 65. On the other hand, weighted prediction is not applied to a high-pass picture 63 because it does not have a corresponding base picture.
[48] FIG. 7 illustrates a multi-layer video structure in which a high layer (Layer 2) has an MCTF structure and a low layer (Layer 1) has a Hierarchical B structure. Referring to FIG. 7, weighted prediction is not applied to high-pass pictures at level 2 in the high layer because those pictures do not have any corresponding base pictures. On the other hand, high-pass pictures at level 1 or 0 have corresponding base pictures. For example, a high-pass picture 72 has a base picture 75. Although the high-pass picture 72 belongs to an MCTF structure and the base picture 75 to a Hierarchical B structure, weighted prediction can be applied without any problem as long as the two pictures have the same reference scheme and reference distance, which the pictures 72 and 75 do. Similarly, weighted prediction can be performed on a picture 73 using a weighting factor assigned to a picture 74.
[49] FIG. 8 illustrates a multi-layer video structure in which a high layer (Layer 2) and a low layer (Layer 1) have a single temporal level and all high-pass pictures in the high layer have corresponding base pictures. However, not all the high-pass pictures in the high layer have the same reference scheme and reference distance as their corresponding base pictures.
[50] For example, a high-pass picture 81 has the same reference scheme (bi-directional reference scheme) and the same reference distance (1) as its corresponding base picture 85. A high-pass picture 82 has the same reference scheme (backward reference scheme) and the same reference distance (1) as its corresponding picture 86. Conversely, high-pass pictures 83 and 84 do not use the same reference schemes as their corresponding base pictures 87 and 88. Thus, weighted prediction is not applied to the high-pass pictures 83 and 84.
[51] While in the above description, weighted prediction is applied when the current picture is a synchronized picture and the current picture has the same reference scheme and reference distance as the base picture, the present invention is not limited to the particular embodiments shown in FIGS. 2 through 8.
[52] When the current picture is an asynchronized picture, a weighting factor can be calculated using the low layer picture at the closest temporal position to the current picture and its reference pictures. When a high layer (Layer 2) has double the frame rate of a low layer (Layer 1) as shown in FIG. 9, pictures 91 and 93 in the high layer have no base pictures at the same temporal positions, and pictures in the high layer and the low layer have different reference distances. However, weighting factors α and β for the pictures 91 and 93 can be calculated using a picture 92 in the low layer at the closest temporal position to the pictures 91 and 93 and its reference pictures 94 and 96.
[53] For example, the 'weighting factor calculation algorithm' is applied to the low layer picture 92 and its reference pictures 94 and 96 to calculate the weighting factors α and β, which can then be used as the weighting factors for the pictures 91 and 93. A slight error may occur as a result, but it does not significantly affect coding performance when brightness gradually increases or decreases, as in a fade-in or fade-out sequence; for other sequences, this method still exhibits high coding performance compared to the case where half the weighting factors α and β are applied. A low layer picture at the same temporal position as, or the closest temporal position to, a high layer picture is hereinafter generally referred to as a 'base picture'.
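Base picture selection under this generalization reduces to a POC comparison. The sketch below is a hypothetical illustration: LayerPicture is an invented container, and pictures are matched by POC as described above.

```python
from dataclasses import dataclass

@dataclass
class LayerPicture:      # hypothetical container; only the POC matters here
    poc: int
    data: object = None

def find_base_picture(current_poc, low_layer_pictures):
    # Synchronized case: a low layer picture exists at the same POC.
    same = [p for p in low_layer_pictures if p.poc == current_poc]
    if same:
        return same[0]
    # Asynchronized case (FIG. 9): fall back to the temporally closest picture.
    return min(low_layer_pictures, key=lambda p: abs(p.poc - current_poc))
```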
[54] The 'weighting factor calculation algorithm' will now be described. FIG. 10 is a schematic diagram for explaining a method for using a weighting factor calculated from a low layer for a high layer.
[55] An operation occurring in a low layer will first be described with reference to FIG. 10. A unit block 41 means a region having a size ranging from a block size to a picture size; thus, the unit block 41 may be a motion block, a macroblock, a slice, or a picture. Images 42 and 43 in a reference picture corresponding to the unit block 41 have the same size as the unit block 41. The motion vectors 44 and 45 from the unit block 41 are actual motion vectors when the unit block 41 is a motion block; when the unit block 41 is a macroblock, slice, or picture, they are a graphical representation of a plurality of motion vectors.
[56] The weighting factor calculation algorithm consists of: (a) calculating a weighted sum of the reference pictures for the base picture using predetermined coefficients α_k and β_k; and (b) determining the values of the coefficients α_k and β_k that minimize the square of the difference between the base picture and the weighted sum. Here, α_k and β_k denote the forward and backward coefficients, respectively.
[57] When a predicted block for the current unit block 41 is obtained using weighted prediction, i.e., a weighted sum of the reference images 42 and 43 in step (a), the weighted prediction value P(s_k) for a pixel s_k within the unit block 41 can be defined by Equation (1):
[58]
P(s_k) = α_k*s_{k-1} + β_k*s_{k+1} ...(1)
[59] where s_{k-1} and s_{k+1} respectively denote the pixels within the images 42 and 43 corresponding to the pixel s_k.
[60] An error E_k, that is, the difference between the actual pixel s_k and the prediction value P(s_k), can be defined by Equation (2):
[61]
E_k = s_k - P(s_k) = s_k - α_k*s_{k-1} - β_k*s_{k+1} ...(2)
[62] Because E_k is the error for a single pixel, the sum of squared errors over the unit block 41, ΣE_k², is defined by Equation (3):
[63]
ΣE_k² = Σ(s_k - α_k*s_{k-1} - β_k*s_{k+1})² ...(3)
[64] When ΣE_k² is partially differentiated with respect to α_k and the derivative is set equal to zero in order to obtain the α_k that minimizes ΣE_k², Equation (4) is obtained. In this case, β_k is set to 0 so as not to consider the impact of β_k.
[65]
∂(ΣE_k²)/∂α_k = -2*Σ(s_k*s_{k-1}) + 2*α_k*Σ(s_{k-1})² = 0 ...(4)
[66] The first weighting factor α_k that minimizes ΣE_k² is then given by Equation (5):
[67]
α_k = Σ(s_k*s_{k-1}) / Σ(s_{k-1})² ...(5)
[68] When calculated in the same manner as α_k, the second weighting factor β_k that minimizes ΣE_k² can be given by Equation (6):
[69]
β_k = Σ(s_k*s_{k+1}) / Σ(s_{k+1})² ...(6)
[70] The weighting factors α_k and β_k for the unit block 41 in the low layer shown in FIG. 10 can be determined using the aforementioned algorithm. These weighting factors can be used as the weighting factors for a block 46 in a high layer corresponding to the unit block 41. When the high layer has the same resolution as the low layer, the blocks 46 and 41 have the same size; when the high layer has a higher resolution than the low layer, the block 46 may be larger than the block 41.
[71] The high layer uses the weighting factors α_k and β_k assigned to the low layer to calculate a weighted sum of the reference images 47 and 48 instead of calculating new weighting factors, thus reducing overhead. The process of using the weighting factors assigned to the low layer as weighting factors for the high layer is performed in the same way in both the encoder and the decoder.
[72] Although the weighting factors α_k and β_k are independently calculated by the above process, they can be normalized on the assumption that α_k + β_k = 1. In other words, only one weighting factor α_k is used for a bi-directional reference scheme, with 1-α_k taking the place of β_k. Substituting 1-α_k for β_k in Equation (3) gives Equation (7):
[73]
ΣE_k² = Σ[(s_k - s_{k+1}) + (s_{k+1} - s_{k-1})*α_k]² ...(7)
[74] The weighting factor α_k that minimizes ΣE_k² in Equation (7) can be defined by Equation (8):
[75]
α_k = Σ[(s_{k+1} - s_{k-1})*(s_{k+1} - s_k)] / Σ(s_{k+1} - s_{k-1})² ...(8)
[76] Using only the one normalized weighting factor α_k reduces the amount of computation compared to using both weighting factors α_k and β_k. The weighted sum is then obtained by using α_k and 1-α_k as the weighting factors for the reference images 47 and 48, respectively.
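Assuming the base picture and its motion-compensated references are available as equally shaped pixel arrays, Equations (5), (6), and (8) reduce to a few inner products. The following numpy sketch is a direct transcription of those equations, not a normative implementation; it omits the guard a real codec would need when a denominator is zero (e.g., a flat reference block).

```python
import numpy as np

def weighting_factors(base, ref_fwd, ref_bwd):
    """Independent factors per Equations (5) and (6): base holds the pixels s_k,
    ref_fwd the forward reference pixels s_{k-1}, ref_bwd the backward s_{k+1}."""
    base, ref_fwd, ref_bwd = (np.asarray(a, dtype=np.float64)
                              for a in (base, ref_fwd, ref_bwd))
    alpha = np.sum(base * ref_fwd) / np.sum(ref_fwd ** 2)   # Equation (5)
    beta = np.sum(base * ref_bwd) / np.sum(ref_bwd ** 2)    # Equation (6)
    return alpha, beta

def normalized_weighting_factor(base, ref_fwd, ref_bwd):
    # Single factor per Equation (8); the backward factor is 1 - alpha.
    base, ref_fwd, ref_bwd = (np.asarray(a, dtype=np.float64)
                              for a in (base, ref_fwd, ref_bwd))
    num = np.sum((ref_bwd - ref_fwd) * (ref_bwd - base))
    den = np.sum((ref_bwd - ref_fwd) ** 2)
    return num / den
```

Because the encoder and the decoder run this identical computation on the same reconstructed low layer data, the resulting factors never have to be signalled in the bitstream.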
[77] FIG. 11 is a block diagram of a video encoder 100 according to an embodiment of the present invention. Referring to FIG. 11, an input current picture F is fed into a motion estimator 105, a subtractor 115, and a downsampler 170.
[78] In a base layer, the video encoder 100 operates in the following manner.
[79] The downsampler 170 temporally and spatially downsamples the current picture F to produce the low layer picture F_0.
[80] A motion estimator 205 performs motion estimation on the downsampled picture F_0 using a neighboring reference picture to obtain a motion vector MV_0. An original image F_0r may be used as the reference picture (open-loop coding), or a decoded image F_0r' may be used as the reference picture (closed-loop coding); it is hereinafter assumed that both the high layer and the low layer in the video encoder 100 support closed-loop coding. For motion estimation, a block matching algorithm is widely used: a block of a predetermined size in the present frame is compared with corresponding blocks, each shifted by one pixel or sub-pixel (1/2 pixel, 1/4 pixel, etc.), within a search area of a predetermined range, and the best-matched block having the smallest error is detected. Simple fixed-block-size motion estimation or hierarchical variable size block matching (HVSBM) may be used.
[81] A motion compensator 210 uses the motion vector MV_0 to motion-compensate the reference picture F_0r' and obtain a motion-compensated picture mc(F_0r'). When a plurality of reference pictures F_0r' are used, motion compensation may be performed on each of them. The motion-compensated picture mc(F_0r'), or a predicted picture calculated from it, is fed into a subtractor 215: when unidirectional reference is used, the motion-compensated picture mc(F_0r') is input directly; when bi-directional reference is used, a predicted picture may be calculated by averaging the plurality of motion-compensated pictures mc(F_0r') before being input to the subtractor 215. Because the present embodiment of the invention is not affected by whether weighted prediction is used for the low layer, FIG. 11 shows the case where no weighted prediction is applied to the low layer; of course, weighted prediction may also be applied to the low layer.
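The block matching step described in paragraph [80] can be sketched as a minimal integer-pixel full search using the sum of absolute differences (SAD) as the matching error; sub-pixel refinement and HVSBM, mentioned above, are deliberately omitted.

```python
import numpy as np

def block_match(cur_block, ref, top, left, search_range):
    """Full-search block matching: returns the (dy, dx) displacement within
    +/- search_range that minimizes the SAD against the reference frame."""
    cur = cur_block.astype(np.int32)
    h, w = cur.shape
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(cur - ref[y:y + h, x:x + w].astype(np.int32)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad
```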
[82] The subtractor 215 subtracts the predicted picture (or each motion-compensated block) from the downsampled picture F_0 and generates a difference signal R_0, which is then supplied to a transformer 220.
[83] The transformer 220 performs a spatial transform on the difference signal R_0 and generates a transform coefficient R_0^T. The spatial transform may be a Discrete Cosine Transform (DCT) or a wavelet transform; DCT coefficients are created when the DCT is employed, and wavelet coefficients when the wavelet transform is employed.
[84] A quantizer 225 quantizes the transform coefficient. Quantization refers to a process of expressing a transform coefficient, which takes an arbitrary real value, by discrete values. For example, the transform coefficient is divided by a predetermined quantization step and the result is rounded to an integer.
[85] The quantization result supplied from the quantizer 225, that is, the quantization coefficient R_0^Q, is supplied to an entropy coding unit 150 and an inverse quantizer 230.
[86] The inverse quantizer 230 inversely quantizes the quantization coefficient R_0^Q. The inverse quantization is performed in a reverse order to that of the quantization performed by the quantizer 225, restoring the values matched to the indices generated during quantization according to the quantization step used in the quantization.
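As a worked illustration of the quantize/inverse-quantize round trip described in paragraphs [84] through [86], the sketch below assumes a single uniform quantization step; real codecs derive per-coefficient step sizes from a quantization table.

```python
import numpy as np

def quantize(coeffs, step):
    # Divide by the quantization step and round to the nearest integer index.
    return np.round(np.asarray(coeffs) / step).astype(np.int32)

def dequantize(indices, step):
    # Restore the values matched to the indices (lossy: rounding is irreversible).
    return indices.astype(np.float64) * step

coeffs = np.array([10.4, -3.7, 0.2, 25.0])
q = quantize(coeffs, step=4.0)       # -> [ 3 -1  0  6]
print(dequantize(q, step=4.0))       # -> [12. -4.  0. 24.]
```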
[87] An inverse transformer 235 receives the inverse quantization result and performs an inverse transformation, in a reverse order to that of the transformation performed by the transformer 220; specifically, an inverse DCT or an inverse wavelet transformation may be used. An adder 240 adds the inverse transformation result and the predicted picture (or motion-compensated picture) used in the motion compensation step performed by the motion compensator 210, and generates a restored picture F_0'.
[88] A buffer 245 stores the addition result supplied from the adder 240. Thus, not only the currently restored picture F_0' but also previously stored reference pictures F_0r' are held in the buffer 245.
[89] In a high layer, the video encoder 100 operates in the following manner.
[90] Like the low layer, the high layer of the video encoder 100 includes a motion estimator 105 that calculates a motion vector MV, a motion compensator 110 that motion-compensates a reference picture F_r' using the motion vector MV, a subtractor 115 that calculates a residual signal R between the current picture F and a predicted picture P, a transformer 120 that transforms the residual signal R to generate a transform coefficient R^T, and a quantizer 125 that quantizes the transform coefficient R^T to output a quantization coefficient R^Q. The high layer further includes an inverse quantizer 130, an inverse transformer 135, an adder 140, and a buffer 145. Since the functions and operations of these components are the same as those of their low layer counterparts, their repetitive description will be omitted; the following description focuses on the distinguishing features.
[91] A weighting factor calculator 180 uses a base picture F_0' in the low layer corresponding to the current picture in the high layer and the motion-compensated reference pictures mc(F_0r') to calculate the weighting factors α and β. The weighting factors α and β can be calculated by Equations (5) and (6) above; a normalized weighting factor can be calculated by Equation (8). Of course, when a unidirectional reference scheme is used, only one weighting factor may be used.
[92] A weighted sum calculator 160 uses the calculated weighting factors α and β (or the normalized weighting factor) to calculate a weighted sum of the motion-compensated pictures mc(F_r') provided by the motion compensator 110. For example, when F_a and F_b are the motion-compensated pictures and the weighting factors for F_a and F_b are α and β, the weighted sum may be represented by α*F_a + β*F_b. When only one motion-compensated picture F_a and one weighting factor α are used, the weighted sum is represented by α*F_a.
[93] Meanwhile, as described above, weighted prediction is performed per motion block, macroblock, slice, or picture. If weighted prediction is performed on a unit smaller than a picture, a pair of weighting factors α and β must be calculated for each such unit in the picture, and the different weighting factors are applied per unit when calculating the weighted sum. The weighted sum, i.e., the predicted picture P, obtained by the weighted sum calculator 160 is fed into the subtractor 115.
[94] The subtractor 115 generates the residual signal R by subtracting the predicted picture P from the current picture F and provides it to the transformer 120. The transformer 120 applies a spatial transform to the residual signal R to create the transform coefficient R^T. The quantizer 125 quantizes the transform coefficient R^T.
[95] The entropy coding unit 150 performs lossless coding on the motion vectors MV and MV_0 provided from the motion estimators 105 and 205 and on the quantization coefficients R^Q and R_0^Q provided from the quantizers 125 and 225, respectively. The lossless coding methods may include, for example, Huffman coding, arithmetic coding, and any other suitable lossless coding method known to one of ordinary skill in the art.
[96] FIG. 12 is a block diagram of a video decoder 300 according to an embodiment of the present invention.
[97] An entropy decoding unit 310 performs lossless decoding on input bitstreams and extracts the motion vectors MV and MV_0 and the texture data R^Q and R_0^Q for the respective layers. The lossless decoding is performed in a reverse order to that of the lossless coding performed by the entropy coding unit in each layer.
[98] The extracted texture data R_0^Q of the lower layer is provided to an inverse quantizer 420, and the extracted motion vector MV_0 of the lower layer is provided to a motion compensator 460. The extracted texture data R^Q of the higher layer is provided to an inverse quantizer 320, and the extracted motion vector MV of the higher layer is provided to a motion compensator 360.
[99] A decoding procedure performed in a lower layer will first be described.
[100] The inverse quantizer 420 performs inverse quantization on the texture data R_0^Q. The inverse quantization is a process of restoring the values matched to the indices generated during the quantization process, using the same quantization table as in the quantization process.
[101] An inverse transformer 430 receives the inverse quantization result and performs an inverse transformation, in a reverse order to that of the transformation process performed by the transformer; specifically, an inverse DCT or an inverse wavelet transformation may be used. As a result of the inverse transformation, the restored residual signal R_0' is provided to an adder 440.
[102] The motion compensator 460 motion-compensates the low layer's reference picture F_0r', which was previously restored and stored in a buffer 450, using the extracted motion vector MV_0, and generates a motion-compensated picture mc(F_0r').
[103] In a high layer, the video decoder 300 operates in the following manner.
[104] Like the low layer, the high layer of the video decoder 300 includes an inverse quantizer 320, an inverse transformer 330, a buffer 350, and a motion compensator 360. Since the functions and operations of these components are the same as those of their low layer counterparts, their repetitive description will be omitted; the following description focuses on the distinguishing features.
[105] A weighting factor calculator 380 uses a base picture F_0' and the motion-compensated reference pictures mc(F_0r') to calculate the weighting factors α and β. The weighting factors α and β can be calculated by Equations (5) and (6); a normalized weighting factor can be calculated by Equation (8). Of course, when a unidirectional reference scheme is used, only one weighting factor may be used.
[106] A weighted sum calculator 370 uses the calculated weighting factors α and β (or the normalized weighting factor) to calculate a weighted sum of the motion-compensated pictures mc(F_r') provided by the motion compensator 360. The weighted sum, i.e., a predicted picture P, obtained by the weighted sum calculator 370 is fed into an adder 340.
[107] Like in the video encoder 100, weighted prediction is performed per motion block, macroblock, slice, or picture. If weighted prediction is performed on a unit smaller than a picture, a pair of weighting factors α and β must be calculated for each such unit in the picture, and the different weighting factors are applied per unit when calculating the weighted sum.
[108] The adder 340 adds the inverse transform result R' supplied from the inverse transformer 330 and the predicted picture P, and restores a current picture F'. The buffer 350 stores the restored current picture F'.
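Putting the decoder pieces together, the following is a minimal sketch of the high layer reconstruction path; the argument names are placeholders, and the weighting factors are assumed to have been derived from the low layer exactly as in the encoder (see the Equations (5) through (8) sketch above).

```python
import numpy as np

def reconstruct(residual, mc_refs, alpha, beta):
    # Weighted sum of motion-compensated references -> predicted picture P
    # (weighted sum calculator 370), then F' = R' + P (adder 340).
    if len(mc_refs) == 2:
        predicted = alpha * mc_refs[0] + beta * mc_refs[1]
    else:
        predicted = alpha * mc_refs[0]
    return np.asarray(residual, dtype=np.float64) + predicted
```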
[109] FIG. 13 is a schematic block diagram of a system in which the video encoder 100 and the video decoder 300 according to an exemplary embodiment of the present invention operate. The system may be a television (TV), a set-top box, a desktop, laptop, or palmtop computer, a personal digital assistant (PDA), or a video or image storing apparatus (e.g., a video cassette recorder (VCR) or a digital video recorder (DVR)). The system may also be a combination of these apparatuses, or one apparatus incorporating a part of another. The system includes at least one video/image source 910, at least one input/output unit 920, a processor 940, a memory 950, and a display unit 930.
[110] The video/image source 910 may be a TV receiver, a VCR, or another video/image storing apparatus. The video/image source 910 may also represent at least one network connection for receiving a video or an image from a server over the Internet, a wide area network (WAN), a local area network (LAN), a terrestrial broadcast system, a cable network, a satellite communication network, a wireless network, a telephone network, or the like. In addition, the video/image source 910 may be a combination of such networks or one network including a part of another.
[111] The input/output unit 920, the processor 940, and the memory 950 communicate with one another through a communication medium 960. The communication medium 960 may be a communication bus, a communication network, or at least one internal connection circuit. Input video/image data received from the video/image source 910 can be processed by the processor 940 according to at least one software program stored in the memory 950, and can be executed by the processor 940 to generate an output video/image provided to the display unit 930.
[112] In particular, the software program stored in the memory 950 includes a scalable wavelet-based codec performing a method according to an embodiment of the present invention. The codec may be stored in the memory 950, may be read from a storage medium such as a compact disc-read only memory (CD-ROM) or a floppy disc, or may be downloaded from a predetermined server through a variety of networks.
Industrial Applicability
[113] As described above, a video encoder and video decoder according to embodiments of the present invention enable effective weighted prediction on a high layer picture by calculating weighting factors from a base picture without transmitting additional information to the decoder, thus providing for improved compression efficiency of video data.
[114] While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.

Claims

[1] A method for encoding video by performing weighted prediction on a current picture in a high layer using information concerning a picture in a low layer, the method comprising: reading the information concerning the low layer picture; calculating a weighting factor using the information concerning the low layer picture; calculating a weighted sum of reference pictures to the current picture using the weighting factor and generating a predicted picture for the current picture; and encoding a difference between the current picture and the predicted picture.
[2] The method of claim 1, wherein the information concerning the low layer picture contains a base picture corresponding to the current picture and reference pictures for the base picture.
[3] The method of claim 2, wherein the reference pictures for the base picture are pictures motion-compensated by motion vectors from the base picture.
[4] The method of claim 2, wherein the base picture is a low layer frame having the closest temporal position to the current picture.
[5] The method of claim 2, wherein the base picture is a low layer frame having the same temporal position as the current picture.
[6] The method of claim 4, wherein the reading of the information, the calculating of the weighting factors, the calculating of the weighted sum of reference pictures, and the encoding of the difference are performed if the current picture has the same reference scheme and reference distance as the base picture.
[7] The method of claim 1, wherein the calculating of the weighting factors and the calculating of the weighted sum of reference pictures are performed in units of one of a picture, a slice, a macroblock, and a motion block.
[8] The method of claim 2, wherein the calculating of the weighting factors comprises: calculating a weighted sum of reference pictures for the base picture using predetermined coefficients; and calculating values of the coefficients minimizing the square of a difference between the base picture and the weighted sum.
[9] The method of claim 8, wherein the coefficients include forward and backward coefficients.
[10] The method of claim 9, wherein the sum of the forward and backward coefficients is 1.
[11] The method of claim 1, wherein the calculating of the weighted sum of reference pictures comprises: if only one reference picture exists for the current picture, calculating the product of the weighting factor and the reference picture for the current picture; and if a plurality of reference pictures exist for the current picture, multiplying appropriate weighting factors by the plurality of reference pictures, respectively, and adding the products together.
[12] A method for decoding video contained in an input bitstream by performing weighted prediction on a current picture in a high layer using information about a picture in a low layer, the method comprising: extracting texture data and motion data from the input bitstream; reconstructing information about the low layer picture from the texture data; calculating weighting factors using the information about the low layer picture; calculating a weighted sum of reference pictures to the current picture using the weighting factors and generating a predicted picture for the current picture; and reconstructing a residual signal for the current picture from the texture data and adding the reconstructed residual signal to the predicted picture.
[13] The method of claim 12, wherein the information concerning the low layer picture contains a base picture corresponding to the current picture and reference pictures for the base picture.
[14] The method of claim 13, wherein the reference pictures for the base picture are pictures motion-compensated by motion vectors from the base picture.
[15] The method of claim 13, wherein the base picture is a low layer frame having the same temporal position as the current picture.
[16] The method of claim 13, wherein the base picture is a low layer frame having the closest temporal position to the current picture.
[17] The method of claim 15, wherein the extracting of the texture data and motion data, the reconstructing of the information about the low layer picture, the calculating of the weighting factors, the calculating of the weighted sum of reference pictures, and the reconstructing of the residual signal are performed if the current picture has the same reference scheme and reference distance as the base picture.
[18] The method of claim 13, wherein the calculating of the weighting factors and the calculating of the weighted sum of reference pictures are performed in units of one of a picture, a slice, a macroblock, and a motion block.
[19] The method of claim 13, wherein the calculating of the weighting factors comprises: calculating a weighted sum of reference pictures for the base picture using predetermined coefficients; and calculating the values of the coefficients that minimize the square of a difference between the base picture and the weighted sum.
[20] The method of claim 19, wherein the coefficients include forward and backward coefficients.
[21] The method of claim 20, wherein the sum of the forward and backward coefficients is 1.
[22] The method of claim 12, wherein the calculating of the weighted sum of reference pictures comprises: if one reference picture exists for the current picture, calculating the product of the weighting factor and the reference picture for the current picture; and if a plurality of reference pictures exist for the current picture, multiplying appropriate weighting factors by the plurality of reference pictures, respectively, and adding the products together.
[23] A video encoder for performing weighted prediction on a current picture in a high layer using information concerning a picture in a low layer, the video encoder comprising: an element for reading the information concerning the low layer picture; an element for calculating weighting factors using the information concerning the low layer picture; an element for calculating a weighted sum of reference pictures for the current picture using the weighting factors and generating a predicted picture for the current picture; and an element for encoding a difference between the current picture and the predicted picture.
[24] A video decoder for performing weighted prediction on a current picture in a high layer using information concerning a picture in a low layer, the video decoder comprising: an element for extracting texture data and motion data from an input bitstream; an element for reconstructing information about the low layer picture from the texture data; an element for calculating weighting factors using the information concerning the low layer picture; an element for generating a predicted picture for the current picture by calculating a weighted sum of reference pictures for the current picture using the weighting factors; and an element for adding texture data of the current picture among the texture data to the predicted picture.
PCT/KR2006/001472 2005-05-02 2006-04-20 Method and apparatus for encoding/decoding multi-layer video using weighted prediction WO2006118384A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN2006800191163A CN101185334B (en) 2005-05-02 2006-04-20 Method and apparatus for encoding/decoding multi-layer video using weighted prediction
EP06747383A EP1878252A4 (en) 2005-05-02 2006-04-20 Method and apparatus for encoding/decoding multi-layer video using weighted prediction

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US67631405P 2005-05-02 2005-05-02
US60/676,314 2005-05-02
KR1020050059834A KR100763182B1 (en) 2005-05-02 2005-07-04 Method and apparatus for coding video using weighted prediction based on multi-layer
KR10-2005-0059834 2005-07-04

Publications (1)

Publication Number Publication Date
WO2006118384A1 true WO2006118384A1 (en) 2006-11-09

Family

ID=37308153

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2006/001472 WO2006118384A1 (en) 2005-05-02 2006-04-20 Method and apparatus for encoding/decoding multi-layer video using weighted prediction

Country Status (2)

Country Link
EP (1) EP1878252A4 (en)
WO (1) WO2006118384A1 (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5742343A (en) 1993-07-13 1998-04-21 Lucent Technologies Inc. Scalable encoding and decoding of high-resolution progressive video
EP0753970A2 (en) 1995-07-14 1997-01-15 Sharp Kabushiki Kaisha Hierarchical video coding device and decoding device
JP2002142227A (en) * 2000-11-02 2002-05-17 Matsushita Electric Ind Co Ltd Hierarchy-type coding device of image signal, and hierarchy-type decoding device
WO2003094526A2 (en) * 2002-04-29 2003-11-13 Koninklijke Philips Electronics N.V. Motion compensated temporal filtering based on multiple reference frames for wavelet coding
WO2005011285A1 (en) * 2003-07-24 2005-02-03 Nippon Telegraph And Telephone Corporation Image encoding device, image decoding device, image encoding method, image decoding method, image encoding program, image decoding program, recording medium containing the image encoding program, and recording medium containing the image decoding program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1878252A4 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008148708A1 (en) * 2007-06-05 2008-12-11 Thomson Licensing Device and method for coding a video content in the form of a scalable stream
FR2917262A1 (en) * 2007-06-05 2008-12-12 Thomson Licensing Sas DEVICE AND METHOD FOR CODING VIDEO CONTENT IN THE FORM OF A SCALABLE FLOW.
TWI423171B (en) * 2007-06-05 2014-01-11 Thomson Licensing Device and method for coding a video content as a scalable stream
CN107257489A (en) * 2011-10-13 2017-10-17 杜比国际公司 On an electronic device based on selected picture track reference picture
CN107257489B (en) * 2011-10-13 2020-01-03 杜比国际公司 Method for encoding/decoding video stream and apparatus for decoding video stream
US11102500B2 (en) 2011-10-13 2021-08-24 Dolby International Ab Tracking a reference picture on an electronic device
US11943466B2 (en) 2011-10-13 2024-03-26 Dolby International Ab Tracking a reference picture on an electronic device
DE202011106751U1 (en) 2011-10-14 2013-01-18 Voss Automotive Gmbh At least partially heatable cable connector for a heatable media line and ready-made media line with such a cable connector
DE102012004165A1 (en) 2012-03-05 2013-09-05 Voss Automotive Gmbh Control system and method for controlling the assembly of a coupling device
US20130259122A1 (en) * 2012-03-30 2013-10-03 Panasonic Corporation Image coding method and image decoding method
US10390041B2 (en) * 2012-03-30 2019-08-20 Sun Patent Trust Predictive image coding and decoding using two reference pictures
JP2014131210A (en) * 2012-12-28 2014-07-10 Nippon Telegr & Teleph Corp <Ntt> Video encoding method, video decoding method, video encoder, video decoder, video encoding program, video decoding program and recording medium

Also Published As

Publication number Publication date
EP1878252A4 (en) 2013-01-16
EP1878252A1 (en) 2008-01-16

Similar Documents

Publication Publication Date Title
US8817872B2 (en) Method and apparatus for encoding/decoding multi-layer video using weighted prediction
KR100714696B1 (en) Method and apparatus for coding video using weighted prediction based on multi-layer
KR100703788B1 (en) Video encoding method, video decoding method, video encoder, and video decoder, which use smoothing prediction
KR100763179B1 (en) Method for compressing/Reconstructing motion vector of unsynchronized picture and apparatus thereof
KR20060135992A (en) Method and apparatus for coding video using weighted prediction based on multi-layer
US20060209961A1 (en) Video encoding/decoding method and apparatus using motion prediction between temporal levels
US20060013309A1 (en) Video encoding and decoding methods and video encoder and decoder
US20060245495A1 (en) Video coding method and apparatus supporting fast fine granular scalability
US20060008006A1 (en) Video encoding and decoding methods and video encoder and decoder
US20050169371A1 (en) Video coding apparatus and method for inserting key frame adaptively
US20070047644A1 (en) Method for enhancing performance of residual prediction and video encoder and decoder using the same
US7042946B2 (en) Wavelet based coding using motion compensated filtering based on both single and multiple reference frames
US20030202599A1 (en) Scalable wavelet based coding using motion compensated temporal filtering based on multiple reference frames
WO2006004272A1 (en) Inter-frame prediction method in video coding, video encoder, video decoding method, and video decoder
WO2006004331A1 (en) Video encoding and decoding methods and video encoder and decoder
WO2006118384A1 (en) Method and apparatus for encoding/decoding multi-layer video using weighted prediction
WO2006118383A1 (en) Video coding method and apparatus supporting fast fine granular scalability
US20060088100A1 (en) Video coding method and apparatus supporting temporal scalability
US20050286632A1 (en) Efficient motion -vector prediction for unconstrained and lifting-based motion compensated temporal filtering
WO2006132509A1 (en) Multilayer-based video encoding method, decoding method, video encoder, and video decoder using smoothing prediction
WO2007024106A1 (en) Method for enhancing performance of residual prediction and video encoder and decoder using the same
WO2006006793A1 (en) Video encoding and decoding methods and video encoder and decoder
WO2006104357A1 (en) Method for compressing/decompressing motion vectors of unsynchronized picture and apparatus using the same
WO2006043754A1 (en) Video coding method and apparatus supporting temporal scalability
WO2006098586A1 (en) Video encoding/decoding method and apparatus using motion prediction between temporal levels

Legal Events

Code Title Description
WWE  Wipo information: entry into national phase (Ref document number: 200680019116.3; Country of ref document: CN)
121  Ep: the epo has been informed by wipo that ep was designated in this application
REEP Request for entry into the european phase (Ref document number: 2006747383; Country of ref document: EP)
WWE  Wipo information: entry into national phase (Ref document number: 2006747383; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)
NENP Non-entry into the national phase (Ref country code: RU)
WWP  Wipo information: published in national office (Ref document number: 2006747383; Country of ref document: EP)