WO2004073312A1 - Video coding - Google Patents

Video coding

Info

Publication number
WO2004073312A1
WO2004073312A1 (PCT/IB2004/050074)
Authority
WO
WIPO (PCT)
Prior art keywords
base
stream
enhancement
motion vectors
features
Prior art date
Application number
PCT/IB2004/050074
Other languages
French (fr)
Inventor
Wilhelmus H. A. Bruls
Reinier B. M. Klein Gunnewiek
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Priority to JP2006502560A (JP2006518568A)
Priority to US10/545,342 (US20060133475A1)
Priority to EP04707996A (EP1597919A1)
Publication of WO2004073312A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/33 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H04N19/517 Processing of motion vectors by encoding
    • H04N19/52 Processing of motion vectors by encoding by predictive encoding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/56 Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search

Definitions

  • the invention relates to video coding, and more particularly to spatial scalable video compression schemes.
  • each digital image frame is a still image formed from an array of pixels according to the display resolution of a particular system.
  • the amounts of raw digital information included in high resolution video sequences are massive.
  • compression schemes are used to compress the data.
  • Various video compression standards or processes have been established, including MPEG-2, MPEG-4, H.263, and H.264.
  • Methods to accomplish this are loosely referred to as scalability techniques. There are three axes on which one can deploy scalability. The first is scalability on the time axis, often referred to as temporal scalability. Secondly, there is scalability on the quality axis, often referred to as signal-to-noise scalability or fine-grain scalability. The third axis is the resolution axis (the number of pixels in an image), often referred to as spatial scalability or layered coding. In layered coding, the bitstream is divided into two or more bitstreams, or layers. The layers can be combined to form a single high quality signal. For example, the base layer may provide a lower quality video signal, while the enhancement layer provides additional information that can enhance the base layer image.
  • spatial scalability can provide compatibility between different video standards or decoder capabilities.
  • the base layer video may have a lower resolution than the input video sequence, in which case the enhancement layer carries information which can restore the resolution of the base layer to the input sequence level.
  • FIG. 1 illustrates a block diagram of an encoder 100 which supports MPEG-2/MPEG-4 spatial scalability.
  • the encoder 100 comprises a base encoder 112 and an enhancement encoder 114.
  • the base encoder is comprised of a low pass filter and downsampler 120, a motion estimator 122, a motion compensator 124, an orthogonal transform (e.g., Discrete Cosine Transform (DCT)) circuit 130, a quantizer 132, a variable length coder 134, a bitrate control circuit 135, an inverse quantizer 138, an inverse transform circuit 140, switches 128, 144, and an interpolate and upsample circuit 150.
  • the enhancement encoder 114 comprises a motion estimator 154, a motion compensator 155, a selector 156, an orthogonal transform (e.g., Discrete Cosine Transform (DCT)) circuit 158, a quantizer 160, a variable length coder 162, a bitrate control circuit 164, an inverse quantizer 166, an inverse transform circuit 168, switches 170 and 172.
  • the operations of the individual components are well known in the art and will not be described in detail.
  • the base encoder 112 produces a base stream BS and the enhancement encoder 114 produces an enhancement stream ES based on input INP.
  • FIG. 2 illustrates another known encoder 200 proposed by DemoGrafx (see US 5,852,565).
  • the encoder is comprised of substantially the same components as the encoder 100 and the operation of each is substantially the same so the individual components will not be described.
  • the residue difference between the input block and the upsampled output from the upsampler 150 is inputted into a motion estimator 154.
  • the scaled motion vectors from the base layer are used in the motion estimator 154 as indicated by the dashed line in Figure 2.
  • this arrangement does not significantly overcome the problems of the arrangement illustrated in Figure 1.
  • While spatial scalability as illustrated in Figures 1 and 2 is supported by the video compression standards, spatial scalability is not often used due to a lack of coding efficiency.
  • the lack of efficient coding means that, for a given picture quality, the bit rate of the base layer and the enhancement layer together for a sequence is more than the bit rate of the same sequence coded at once.
  • a method and apparatus for providing spatial scalable compression of an input video stream is disclosed.
  • a base stream is encoded which comprises base features.
  • a residual signal is encoded to produce an enhancement stream comprising enhancement features, wherein the residual signal is the difference between original frames of the input video stream and upscaled frames from the base layer.
  • a processed version of the base features is subtracted from the enhancement features in the enhancement stream.
  • a method and apparatus for decoding compressed video information received in a base stream and an enhancement stream is disclosed.
  • the received base stream is decoded.
  • the resolution of the decoded base stream is upconverted.
  • the base features produced by the base stream decoder are added to a residual motion vector signal in the received enhancement stream to form a combined signal.
  • the combined signal is decoded.
  • the upconverted decoded base stream and the decoded combined signal are added together to produce a video output.
  • Figure 1 is a block schematic representation of a known encoder with spatial scalability
  • Figure 2 is a block schematic representation of a known encoder with spatial scalability
  • Figure 3 is a block schematic representation of an encoder with spatial scalability according to one embodiment of the invention.
  • Figure 4 is a block schematic representation of a layered decoder according to one embodiment of the invention.
  • Figure 3 is a schematic diagram of an encoder according to one embodiment of the invention.
  • the motion estimation performed by the encoder 300 is done on the complete image rather than the residual signal as illustrated in Figures 1 and 2. Since the motion estimation is done on the complete image, the motion estimation vectors of the base layer will have a high correlation with the corresponding vectors of the enhancement layer. Thus, the bitrate of the enhancement layer can be reduced by only transmitting the difference between the motion estimation vectors of the base layer and the enhancement layer as described below. While the illustrative embodiment illustrated in Figure 3 refers to motion estimation and motion vectors, it will be understood by those skilled in the art that the invention applies to other base and enhancement features as well.
  • information from the base layer can be used as a prediction for the enhancement layer.
  • the encoding features selected in the base layer, e.g., macroblock-type, motion-type, etc., can be used to predict the encoding features used in the enhancement layer.
  • an enhancement stream with a lower bitrate can be obtained.
  • the depicted encoding system 300 accomplishes layered compression, whereby a portion of the channel is used for providing a low resolution base layer and the remaining portion is used for transmitting edge enhancement information, whereby the two signals may be recombined to bring the system up to high resolution.
  • the encoder 300 comprises a base encoder 312 and an enhancement encoder 314.
  • the base encoder is comprised of a low pass filter and downsampler 320, a motion estimator 322, a motion compensator 324, an orthogonal transform (e.g., Discrete Cosine Transform (DCT)) circuit 330, a quantizer 332, a variable length coder (VLC) 334, a bitrate control circuit 335, an inverse quantizer 338, an inverse transform circuit 340, switches 328, 344, and an interpolate and upsample circuit 350.
  • An input video block 316 is split by a splitter 318 and sent to both the base encoder 312 and the enhancement encoder 314.
  • the input block is inputted into a low pass filter and downsampler 320.
  • the low pass filter reduces the resolution of the video block which is then fed to the motion estimator 322.
  • the motion estimator 322 processes picture data of each frame as an I-picture, a P-picture, or a B-picture.
  • Each of the pictures of the sequentially entered frames is processed as one of the I-, P-, or B-pictures in a pre-set manner, such as in the sequence of I, B, P, B, P,..., B, P.
  • the motion estimator 322 refers to a pre-set reference frame in a series of pictures stored in a frame memory (not illustrated) and detects the motion vector of a macro-block, that is, a small block of 16 pixels by 16 lines of the frame being encoded, by pattern matching (block matching) between the macro-block and the reference frame.
  • in MPEG, there are four picture prediction modes, that is, an intra-coding (intra-frame coding), a forward predictive coding, a backward predictive coding, and a bi-directional predictive coding.
  • An I-picture is an intra-coded picture
  • a P-picture is an intra-coded or forward predictive coded or backward predictive coded picture
  • a B-picture is an intra-coded, a forward predictive coded, or a bi-directional predictive-coded picture.
  • the motion estimator 322 performs forward prediction on a P-picture to detect its motion vector. Additionally, the motion estimator 322 performs forward prediction, backward prediction, and bi-directional prediction for a B-picture to detect the respective motion vectors. In a known manner, the motion estimator 322 searches, in the frame memory, for a block of pixels which most resembles the current input block of pixels. Various search algorithms are known in the art. They are generally based on evaluating the mean absolute difference (MAD) or the mean square error (MSE) between the pixels of the current input block and those of the candidate block. The candidate block having the least MAD or MSE is then selected to be the motion-compensated prediction block. Its relative location with respect to the location of the current input block is the motion vector.
  • the motion compensator 324 may read out encoded and already locally decoded picture data stored in the frame memory in accordance with the prediction mode and the motion vector and may supply the read-out data as a prediction picture to arithmetic unit 325 and switch 344.
  • the arithmetic unit 325 also receives the input block and calculates the difference between the input block and the prediction picture from the motion compensator 324. The difference value is then supplied to the DCT circuit 330. If only the prediction mode is received from the motion estimator 322, that is, if the prediction mode is the intra-coding mode, the motion compensator 324 may not output a prediction picture.
  • the arithmetic unit 325 may not perform the above- described processing, but instead may directly output the input block to the DCT circuit 330.
  • the DCT circuit 330 performs DCT processing on the output signal from the arithmetic unit 325 so as to obtain DCT coefficients which are supplied to a quantizer 332.
  • the quantizer 332 sets a quantization step (quantization scale) in accordance with the data storage quantity in a buffer (not illustrated) received as a feedback and quantizes the DCT coefficients from the DCT circuit 330 using the quantization step.
  • the quantized DCT coefficients are supplied to the VLC unit 334 along with the set quantization step.
  • the VLC unit 334 converts the quantization coefficients supplied from the quantizer 332 into a variable length code, such as a Huffman code, in accordance with the quantization step supplied from the quantizer 332.
  • the resulting converted quantization coefficients are outputted to a buffer not illustrated.
  • the quantization coefficients and the quantization step are also supplied to an inverse quantizer 338 which dequantizes the quantization coefficients in accordance with the quantization step so as to convert the same to DCT coefficients.
  • the DCT coefficients are supplied to the inverse DCT unit 340 which performs inverse DCT on the DCT coefficients.
  • the obtained inverse DCT coefficients are then supplied to the arithmetic unit 348.
  • the arithmetic unit 348 receives the inverse DCT coefficients from the inverse DCT unit 340 and the data from the motion compensator 324, depending on the location of switch 344.
  • the arithmetic unit 348 adds the signal (prediction residuals) from the inverse DCT unit 340 to the predicted picture from the motion compensator 324 to locally decode the original picture. However, if the prediction mode indicates intra-coding, the output of the inverse DCT unit 340 may be directly fed to the frame memory.
  • the decoded picture obtained by the arithmetic unit 348 is sent to and stored in the frame memory so as to be used later as a reference picture for an inter-coded picture, a forward predictive coded picture, a backward predictive coded picture, or a bi-directional predictive coded picture.
  • the enhancement encoder 314 comprises a motion estimator 354, a motion compensator 356, a DCT circuit 368, a quantizer 370, a VLC unit 372, a bitrate controller 374, an inverse quantizer 376, an inverse DCT circuit 378, switches 366 and 382, subtractors 358 and 364, and adders 380 and 388.
  • the enhancement encoder 314 may also include DC-offsets 360 and 384, adder 362 and subtractor 386. The operation of many of these components is similar to the operation of similar components in the base encoder 312 and will not be described in detail.
  • the output of the arithmetic unit 348 is also supplied to the upsampler 350 which generally reconstructs the filtered-out resolution from the decoded video stream and provides a video data stream having substantially the same resolution as the high-resolution input. However, because of the filtering and losses resulting from the compression and decompression, certain errors are present in the reconstructed stream. The errors are determined in the subtraction unit 358 by subtracting the reconstructed high-resolution stream from the original, unmodified high-resolution stream.
  • the original unmodified high-resolution stream is also provided to the motion estimator 354.
  • the reconstructed high-resolution stream is also provided to an adder 388 which adds the output from the inverse DCT 378 (possibly modified by the output of the motion compensator 356 depending on the position of the switch 382).
  • the output of the adder 388 is supplied to the motion estimator 354.
  • the motion estimation is performed on the upscaled base layer plus the enhancement layer instead of the residual difference between the original high- resolution stream and the reconstructed high-resolution stream.
  • This motion estimation produces motion vectors that track the actual motion better than the vectors produced by the known systems of Figures 1 and 2. This leads to a perceptually better picture quality especially for consumer applications which have lower bit rates than professional applications.
  • a DC-offset operation followed by a clipping operation can be introduced into the enhancement encoder 314, wherein the DC-offset value 360 is added by adder 362 to the residual signal output from the subtraction unit 358.
  • This optional DC-offset and clipping operation allows the use of existing standards, e.g., MPEG, for the enhancement encoder where the pixel values are in a predetermined range, e.g., 0...255.
  • the residual signal is normally concentrated around zero.
  • the concentration of samples can be shifted to the middle of the range, e.g., 128 for 8 bit video samples.
  • the enhancement output stream from the VLC unit 372 is supplied to a split vector unit 390.
  • the motion estimation vectors from the base layer are also supplied to the split vector unit 390.
  • the split vector unit 390 subtracts processed motion estimation vectors of the base layer from the motion estimation vectors of the enhancement layer to produce a residual of the motion estimation vectors.
  • the residual signal is then transmitted.
  • the base motion vectors are scaled in the split vector unit 390 (or a scaling unit not illustrated in Figure 3) to form the processed base motion vectors.
  • the scaling can be performed using a linear or non-linear scaling factor.
  • the horizontal portion of the base motion vector is scaled by a first scaling factor and the vertical portion of the base motion vector is scaled by a second scaling factor.
  • the base macroblock which covers most of the intended enhancement macroblock is selected.
  • the base motion vectors from some or all of the base macroblocks which cover at least a portion of the intended enhancement macroblock are selected.
  • FIG. 4 illustrates a decoder 400 according to one embodiment of the invention for decoding the base and enhancement streams produced by the encoder 300.
  • the base stream is decoded in a base decoder 402.
  • the decoded base stream is then upconverted by an upconverter 404.
  • the upconverted base stream is supplied to an addition unit 406.
  • the vectors from the base layer are sent from the base decoder 402 to the merge vector unit 408.
  • the base motion vectors must, however, first be scaled by the merge vector unit 408 (or a scaling device not illustrated in Figure 4) using the same scaling factors as were used in the split vector unit 390.
  • the merge vector unit 408 adds the processed base vectors to the residual signal in the enhancement stream.
  • the motion vectors of the enhancement stream are reconstituted and the entire enhancement stream can now be decoded by an enhancement decoder 410.
  • the decoded enhancement stream is then added to the upconverted base stream by the addition unit 406 to create the full output signal of the decoder 400. While the illustrative embodiment illustrated in Figure 4 refers to motion vectors, it will be understood by those skilled in the art that the invention applies to other base and enhancement features as well.
  • the above-described embodiments of the invention enhance the efficiency of spatial scalable compression schemes by lowering the bitrate of the enhancement layer by only transmitting a residual of the enhancement features in the enhancement layer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

A method and apparatus for providing spatial scalable compression of an input video stream is disclosed. A base stream is encoded which comprises base features. A residual signal is encoded to produce an enhancement stream comprising enhancement features, wherein the residual signal is the difference between original frames of the input video stream and upscaled frames from the base layer. A processed version of the base features is subtracted from the enhancement features in the enhancement stream.

Description

Video coding
The invention relates to video coding, and more particularly to spatial scalable video compression schemes.
Because of the massive amounts of data inherent in digital video, the transmission of full-motion, high-definition digital video signals is a significant problem in the development of high-definition television. More particularly, each digital image frame is a still image formed from an array of pixels according to the display resolution of a particular system. As a result, the amounts of raw digital information included in high resolution video sequences are massive. In order to reduce the amount of data that must be sent, compression schemes are used to compress the data. Various video compression standards or processes have been established, including MPEG-2, MPEG-4, H.263, and H.264.
Many applications are enabled where video is available at various resolutions and/or qualities in one stream. Methods to accomplish this are loosely referred to as scalability techniques. There are three axes on which one can deploy scalability. The first is scalability on the time axis, often referred to as temporal scalability. Secondly, there is scalability on the quality axis, often referred to as signal-to-noise scalability or fine-grain scalability. The third axis is the resolution axis (the number of pixels in an image), often referred to as spatial scalability or layered coding. In layered coding, the bitstream is divided into two or more bitstreams, or layers. The layers can be combined to form a single high quality signal. For example, the base layer may provide a lower quality video signal, while the enhancement layer provides additional information that can enhance the base layer image.
In particular, spatial scalability can provide compatibility between different video standards or decoder capabilities. With spatial scalability, the base layer video may have a lower resolution than the input video sequence, in which case the enhancement layer carries information which can restore the resolution of the base layer to the input sequence level.
Most video compression standards support spatial scalability. Figure 1 illustrates a block diagram of an encoder 100 which supports MPEG-2/MPEG-4 spatial scalability. The encoder 100 comprises a base encoder 112 and an enhancement encoder 114. The base encoder is comprised of a low pass filter and downsampler 120, a motion estimator 122, a motion compensator 124, an orthogonal transform (e.g., Discrete Cosine Transform (DCT)) circuit 130, a quantizer 132, a variable length coder 134, a bitrate control circuit 135, an inverse quantizer 138, an inverse transform circuit 140, switches 128, 144, and an interpolate and upsample circuit 150. The enhancement encoder 114 comprises a motion estimator 154, a motion compensator 155, a selector 156, an orthogonal transform (e.g., Discrete Cosine Transform (DCT)) circuit 158, a quantizer 160, a variable length coder 162, a bitrate control circuit 164, an inverse quantizer 166, an inverse transform circuit 168, switches 170 and 172. The operations of the individual components are well known in the art and will not be described in detail. The base encoder 112 produces a base stream BS and the enhancement encoder 114 produces an enhancement stream ES based on input INP.
Unfortunately, the coding efficiency of this layered coding scheme is not very good. Indeed, for a given picture quality, the bitrate of the base layer and the enhancement layer together for a sequence is greater than the bitrate of the same sequence coded at once. Figure 2 illustrates another known encoder 200 proposed by DemoGrafx (see US 5,852,565). The encoder is comprised of substantially the same components as the encoder 100 and the operation of each is substantially the same, so the individual components will not be described. In this configuration, the residue difference between the input block and the upsampled output from the upsampler 150 is inputted into a motion estimator 154. To guide the motion estimation of the enhancement encoder, the scaled motion vectors from the base layer are used in the motion estimator 154, as indicated by the dashed line in Figure 2. However, this arrangement does not significantly overcome the problems of the arrangement illustrated in Figure 1. Thus, while spatial scalability as illustrated in Figures 1 and 2 is supported by the video compression standards, it is not often used due to this lack of coding efficiency.
It is an object of the invention to overcome at least part of the above-described deficiencies of the known spatial scalability schemes by providing a method and apparatus for providing more efficient compression by only transmitting a residual of enhancement features in the enhancement stream.
According to one embodiment of the invention, a method and apparatus for providing spatial scalable compression of an input video stream is disclosed. A base stream is encoded which comprises base features. A residual signal is encoded to produce an enhancement stream comprising enhancement features, wherein the residual signal is the difference between original frames of the input video stream and upscaled frames from the base layer. A processed version of the base features is subtracted from the enhancement features in the enhancement stream.

According to another embodiment of the invention, a method and apparatus for decoding compressed video information received in a base stream and an enhancement stream is disclosed. The received base stream is decoded. The resolution of the decoded base stream is upconverted. The base features produced by the base stream decoder are added to a residual motion vector signal in the received enhancement stream to form a combined signal. The combined signal is decoded. The upconverted decoded base stream and the decoded combined signal are added together to produce a video output.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereafter.
The invention will now be described, by way of example, with reference to the accompanying drawings, wherein:
Figure 1 is a block schematic representation of a known encoder with spatial scalability; Figure 2 is a block schematic representation of a known encoder with spatial scalability;
Figure 3 is a block schematic representation of an encoder with spatial scalability according to one embodiment of the invention;
Figure 4 is a block schematic representation of a layered decoder according to one embodiment of the invention.
Figure 3 is a schematic diagram of an encoder according to one embodiment of the invention. As will be described below, the motion estimation performed by the encoder 300 is done on the complete image rather than the residual signal as illustrated in Figures 1 and 2. Since the motion estimation is done on the complete image, the motion estimation vectors of the base layer will have a high correlation with the corresponding vectors of the enhancement layer. Thus, the bitrate of the enhancement layer can be reduced by only transmitting the difference between the motion estimation vectors of the base layer and the enhancement layer, as described below. While the illustrative embodiment illustrated in Figure 3 refers to motion estimation and motion vectors, it will be understood by those skilled in the art that the invention applies to other base and enhancement features as well. According to the invention, information from the base layer can be used as a prediction for the enhancement layer. The encoding features selected in the base layer, e.g., macroblock-type, motion-type, etc., can be used to predict the encoding features used in the enhancement layer. By subtracting the base features from the enhancement features, an enhancement stream with a lower bitrate can be obtained.
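To make the vector-residual idea concrete, here is a minimal sketch (illustrative only, not part of the patent disclosure; the function name and the fixed resolution ratio of 2 are assumptions):

```python
import numpy as np

def mv_residual(enh_mv, base_mv, scale=2.0):
    """Residual between an enhancement-layer motion vector and the
    scaled base-layer vector; 'scale' is the assumed enhancement/base
    resolution ratio."""
    return np.asarray(enh_mv, float) - scale * np.asarray(base_mv, float)

# Because both layers are estimated on complete images, the vectors are
# highly correlated, so the residual to be transmitted is small:
print(mv_residual([2.0, -4.5], [1.0, -2.0]))   # -> [ 0.  -0.5]
```

Only the small residual needs to be coded; the decoder reverses the subtraction with the same scale factor.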
The depicted encoding system 300 accomplishes layered compression, whereby a portion of the channel is used for providing a low resolution base layer and the remaining portion is used for transmitting edge enhancement information, whereby the two signals may be recombined to bring the system up to high resolution.
The encoder 300 comprises a base encoder 312 and an enhancement encoder 314. The base encoder is comprised of a low pass filter and downsampler 320, a motion estimator 322, a motion compensator 324, an orthogonal transform (e.g., Discrete Cosine Transform (DCT)) circuit 330, a quantizer 332, a variable length coder (VLC) 334, a bitrate control circuit 335, an inverse quantizer 338, an inverse transform circuit 340, switches 328, 344, and an interpolate and upsample circuit 350.
An input video block 316 is split by a splitter 318 and sent to both the base encoder 312 and the enhancement encoder 314. In the base encoder 312, the input block is inputted into a low pass filter and downsampler 320. The low pass filter reduces the resolution of the video block which is then fed to the motion estimator 322. The motion estimator 322 processes picture data of each frame as an I-picture, a P-picture, or a B-picture. Each of the pictures of the sequentially entered frames is processed as one of the I-, P-, or B-pictures in a pre-set manner, such as in the sequence of I, B, P, B, P, ..., B, P. That is, the motion estimator 322 refers to a pre-set reference frame in a series of pictures stored in a frame memory (not illustrated) and detects the motion vector of a macro-block, that is, a small block of 16 pixels by 16 lines of the frame being encoded, by pattern matching (block matching) between the macro-block and the reference frame.
In MPEG, there are four picture prediction modes, that is, an intra-coding (intra-frame coding), a forward predictive coding, a backward predictive coding, and a bi-directional predictive coding. An I-picture is an intra-coded picture, a P-picture is an intra-coded or forward predictive coded or backward predictive coded picture, and a B-picture is an intra-coded, a forward predictive coded, or a bi-directional predictive-coded picture.
The motion estimator 322 performs forward prediction on a P-picture to detect its motion vector. Additionally, the motion estimator 322 performs forward prediction, backward prediction, and bi-directional prediction for a B-picture to detect the respective motion vectors. In a known manner, the motion estimator 322 searches, in the frame memory, for a block of pixels which most resembles the current input block of pixels. Various search algorithms are known in the art. They are generally based on evaluating the mean absolute difference (MAD) or the mean square error (MSE) between the pixels of the current input block and those of the candidate block. The candidate block having the least MAD or MSE is then selected to be the motion-compensated prediction block. Its relative location with respect to the location of the current input block is the motion vector.
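For reference, a minimal full-search block-matching sketch based on the MAD criterion described above (illustrative only; the names and the search range are assumptions, and practical encoders use faster search strategies than this exhaustive scan):

```python
import numpy as np

def block_match(ref, cur_block, top, left, search=8):
    """Exhaustive block matching: return the motion vector (dy, dx) that
    minimizes the mean absolute difference (MAD) between the current
    macro-block and candidate blocks in the reference frame."""
    h, w = cur_block.shape
    best_mad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            cand = ref[y:y + h, x:x + w].astype(float)
            mad = np.abs(cand - cur_block).mean()
            if mad < best_mad:
                best_mad, best_mv = mad, (dy, dx)
    return best_mv

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
cur = ref[10:26, 12:28]                        # a 16x16 block displaced by (2, 4)
print(block_match(ref, cur, top=8, left=8))    # -> (2, 4)
```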
Upon receiving the prediction mode and the motion vector from the motion estimator 322, the motion compensator 324 may read out encoded and already locally decoded picture data stored in the frame memory in accordance with the prediction mode and the motion vector and may supply the read-out data as a prediction picture to arithmetic unit 325 and switch 344. The arithmetic unit 325 also receives the input block and calculates the difference between the input block and the prediction picture from the motion compensator 324. The difference value is then supplied to the DCT circuit 330. If only the prediction mode is received from the motion estimator 322, that is, if the prediction mode is the intra-coding mode, the motion compensator 324 may not output a prediction picture. In such a situation, the arithmetic unit 325 may not perform the above-described processing, but instead may directly output the input block to the DCT circuit 330. The DCT circuit 330 performs DCT processing on the output signal from the arithmetic unit 325 so as to obtain DCT coefficients which are supplied to a quantizer 332. The quantizer 332 sets a quantization step (quantization scale) in accordance with the data storage quantity in a buffer (not illustrated) received as feedback and quantizes the DCT coefficients from the DCT circuit 330 using the quantization step. The quantized DCT coefficients are supplied to the VLC unit 334 along with the set quantization step. The VLC unit 334 converts the quantization coefficients supplied from the quantizer 332 into a variable length code, such as a Huffman code, in accordance with the quantization step supplied from the quantizer 332. The resulting converted quantization coefficients are outputted to a buffer (not illustrated). The quantization coefficients and the quantization step are also supplied to an inverse quantizer 338 which dequantizes the quantization coefficients in accordance with the quantization step so as to convert them back to DCT coefficients. The DCT coefficients are supplied to the inverse DCT unit 340 which performs inverse DCT on the DCT coefficients. The obtained inverse DCT coefficients are then supplied to the arithmetic unit 348.

The arithmetic unit 348 receives the inverse DCT coefficients from the inverse DCT unit 340 and the data from the motion compensator 324, depending on the location of switch 344. The arithmetic unit 348 adds the signal (prediction residuals) from the inverse DCT unit 340 to the predicted picture from the motion compensator 324 to locally decode the original picture. However, if the prediction mode indicates intra-coding, the output of the inverse DCT unit 340 may be directly fed to the frame memory. The decoded picture obtained by the arithmetic unit 348 is sent to and stored in the frame memory so as to be used later as a reference picture for an inter-coded picture, a forward predictive coded picture, a backward predictive coded picture, or a bi-directional predictive coded picture.
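For orientation, the transform and local-decode path (circuits 330/332 and 338/340) can be sketched as follows (illustrative only: an orthonormal DCT-II and a flat quantization step are assumptions, real MPEG quantizers use weighting matrices, and the variable-length coding stage is omitted):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix for n x n blocks."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def encode_block(block, qstep=16.0):
    """2-D DCT followed by a flat quantizer (stand-ins for 330 and 332)."""
    C = dct_matrix(block.shape[0])
    return np.round(C @ block @ C.T / qstep).astype(int)

def local_decode(qcoeffs, qstep=16.0):
    """Dequantize and inverse DCT (stand-ins for 338 and 340)."""
    C = dct_matrix(qcoeffs.shape[0])
    return C.T @ (qcoeffs * qstep) @ C

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
rec = local_decode(encode_block(block - 128.0)) + 128.0
print(np.abs(rec - block).max())   # small error introduced by quantization
```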
The enhancement encoder 314 comprises a motion estimator 354, a motion compensator 356, a DCT circuit 368, a quantizer 370, a VLC unit 372, a bitrate controller 374, an inverse quantizer 376, an inverse DCT circuit 378, switches 366 and 382, subtractors 358 and 364, and adders 380 and 388. In addition, the enhancement encoder 314 may also include DC-offsets 360 and 384, adder 362 and subtractor 386. The operation of many of these components is similar to the operation of similar components in the base encoder 312 and will not be described in detail.
The output of the arithmetic unit 348 is also supplied to the upsampler 350 which generally reconstructs the filtered-out resolution from the decoded video stream and provides a video data stream having substantially the same resolution as the high-resolution input. However, because of the filtering and losses resulting from the compression and decompression, certain errors are present in the reconstructed stream. The errors are determined in the subtraction unit 358 by subtracting the reconstructed high-resolution stream from the original, unmodified high-resolution stream.
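The round trip through the base layer, and the residual that remains, can be sketched as follows (illustrative only; 2x block-mean downsampling and pixel-repetition upsampling are crude stand-ins for units 320 and 350, and lossless base coding is assumed so that only the resolution loss contributes to the residual):

```python
import numpy as np

def downsample2(frame):
    """2x2 block mean: a crude stand-in for the low pass filter and
    downsampler 320."""
    h, w = frame.shape
    return frame[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(frame):
    """Pixel repetition: a crude stand-in for the interpolate and
    upsample circuit 350."""
    return frame.repeat(2, axis=0).repeat(2, axis=1)

frame = np.random.default_rng(1).random((16, 16))
reconstructed = upsample2(downsample2(frame))   # base-layer round trip
residual = frame - reconstructed                # what subtraction unit 358 produces
```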
According to one embodiment of the invention illustrated in Figure 3, the original unmodified high-resolution stream is also provided to the motion estimator 354. The reconstructed high-resolution stream is also provided to an adder 388 which adds the output from the inverse DCT 378 (possibly modified by the output of the motion compensator 356 depending on the position of the switch 382). The output of the adder 388 is supplied to the motion estimator 354. As a result, the motion estimation is performed on the upscaled base layer plus the enhancement layer instead of the residual difference between the original high-resolution stream and the reconstructed high-resolution stream. This motion estimation produces motion vectors that track the actual motion better than the vectors produced by the known systems of Figures 1 and 2. This leads to a perceptually better picture quality, especially for consumer applications, which have lower bit rates than professional applications.
Furthermore, a DC-offset operation followed by a clipping operation can be introduced into the enhancement encoder 314, wherein the DC-offset value 360 is added by adder 362 to the residual signal output from the subtraction unit 358. This optional DC-offset and clipping operation allows the use of existing standards, e.g., MPEG, for the enhancement encoder, where the pixel values are in a predetermined range, e.g., 0...255. The residual signal is normally concentrated around zero. By adding a DC-offset value 360, the concentration of samples can be shifted to the middle of the range, e.g., 128 for 8-bit video samples. The advantage of this addition is that the standard components of the encoder can be used for the enhancement layer, resulting in a cost-efficient (re-use of IP blocks) solution.

According to one embodiment of the invention, the enhancement output stream from the VLC unit 372 is supplied to a split vector unit 390. The motion estimation vectors from the base layer are also supplied to the split vector unit 390. The split vector unit 390 subtracts processed motion estimation vectors of the base layer from the motion estimation vectors of the enhancement layer to produce a residual of the motion estimation vectors. The residual signal is then transmitted. By reducing the redundancy of the vectors of the enhancement layer, the bitrate of the enhancement layer is reduced.
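A minimal sketch of the DC-offset-and-clip step and of the subtraction performed by the split vector unit (illustrative only; the offset of 128 and the 8-bit range follow the example above, and the function names and the linear scale factor are assumptions):

```python
import numpy as np

def offset_and_clip(residual, dc=128, lo=0, hi=255):
    """Shift the zero-centered residual to the middle of the 8-bit range
    and clip, so a standard encoder can process it (the role of adder 362)."""
    return np.clip(residual + dc, lo, hi)

def split_vectors(enh_mvs, base_mvs, scale=2.0):
    """Keep only the residual between the enhancement vectors and the
    scaled base vectors (the role of split vector unit 390)."""
    return np.asarray(enh_mvs, float) - scale * np.asarray(base_mvs, float)

print(offset_and_clip(np.array([-3.0, 0.0, 4.0])))   # -> [125. 128. 132.]
print(split_vectors([[2.0, -4.5]], [[1.0, -2.0]]))   # -> [[ 0.  -0.5]]
```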
In one embodiment of the invention, the base motion vectors are scaled in the split vector unit 390 (or a scaling unit not illustrated in Figure 3) to form the processed base motion vectors. The scaling can be performed using a linear or non-linear scaling factor. For non-linear scaling, the horizontal portion of the base motion vector is scaled by a first scaling factor and the vertical portion of the base motion vector is scaled by a second scaling factor. In addition, it may be unclear from which base macroblock the base vectors should be taken. According to one embodiment of the invention, the base macroblock which covers most of the intended enhancement macroblock is selected. In another embodiment of the invention, the base motion vectors from some or all of the base macroblocks which cover at least a portion of the intended enhancement macroblock are selected. The corresponding selected base motion vectors from each base macroblock can then be averaged in some known manner to produce a set of base motion vectors which are then scaled.

Figure 4 illustrates a decoder 400 according to one embodiment of the invention for decoding the base and enhancement streams produced by the encoder 300. The base stream is decoded in a base decoder 402. The decoded base stream is then upconverted by an upconverter 404. The upconverted base stream is supplied to an addition unit 406. The vectors from the base layer are sent from the base decoder 402 to the merge vector unit 408. The base motion vectors must, however, first be scaled by the merge vector unit 408 (or a scaling device not illustrated in Figure 4) using the same scaling factors as were used in the split vector unit 390. The merge vector unit 408 adds the processed base vectors to the residual signal in the enhancement stream. Thus, the motion vectors of the enhancement stream are reconstituted and the entire enhancement stream can now be decoded by an enhancement decoder 410. The decoded enhancement stream is then added to the upconverted base stream by the addition unit 406 to create the full output signal of the decoder 400. While the illustrative embodiment illustrated in Figure 4 refers to motion vectors, it will be understood by those skilled in the art that the invention applies to other base and enhancement features as well.

The above-described embodiments of the invention enhance the efficiency of spatial scalable compression schemes by lowering the bitrate of the enhancement layer by only transmitting a residual of the enhancement features in the enhancement layer. It will be understood that the different embodiments of the invention are not limited to the exact order of the above-described steps as the timing of some steps can be interchanged without affecting the overall operation of the invention. Furthermore, the term "comprising" does not exclude other elements or steps, the terms "a" and "an" do not exclude a plurality, and a single processor or other unit may fulfill the functions of several of the units or circuits recited in the claims.
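On the decoder side, the merge step can be sketched as follows (illustrative only; the per-component scale pair mirrors the first/second scaling factors mentioned above, and all names are assumptions):

```python
import numpy as np

def merge_vectors(mv_residuals, base_mvs, scale=(2.0, 2.0)):
    """Merge step (the role of unit 408): add the scaled base vectors back
    onto the transmitted residuals. The (vertical, horizontal) scale
    factors must match those used on the encoder side."""
    return np.asarray(mv_residuals, float) + np.asarray(scale) * np.asarray(base_mvs, float)

base_mvs = np.array([[1.0, -2.0], [0.0, 3.0]])    # decoded by base decoder 402
residuals = np.array([[0.0, -0.5], [0.5, 0.0]])   # carried in the enhancement stream
print(merge_vectors(residuals, base_mvs))         # vectors for enhancement decoder 410
```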

Claims

CLAIMS:
1. An apparatus for performing spatial scalable compression of an input video stream including an encoder for encoding and outputting the video stream in a compressed form, comprising: a base layer encoder (312) for encoding a base stream which comprises base features; an enhancement layer encoder (314) for encoding a residual signal to produce an enhancement stream comprising enhancement features, wherein the residual signal is the difference between original frames of the input video stream and upscaled frames from the base layer; and a unit (390) for subtracting a processed version of the base features from the enhancement features in the enhancement stream.
2. The apparatus according to claim 1, wherein said base features are base motion vectors and said enhancement features are enhancement motion vectors.
3. The apparatus according to claim 2, wherein the base motion vectors are scaled to form the processed base motion vectors.
4. The apparatus according to claim 3, wherein a linear scaling factor is used to scale the base motion vectors.
5. The apparatus according to claim 3, wherein a non-linear scaling factor is used to scale the base motion vectors.
6. The apparatus according to claim 5, wherein a first scaling factor scales a horizontal portion of the base motion vectors and a second scaling factor scales a vertical portion of the base motion vectors.
7. The apparatus according to claim 3, wherein the base motion vectors are taken from a base macroblock which substantially covers an intended enhancement macroblock.
8. The apparatus according to claim 7, wherein the base motion vectors are taken from a plurality of base macroblocks which cover at least a portion of the intended enhancement macroblock, wherein corresponding base motion vectors from all of the plurality of base macroblocks which at least partially cover the intended enhancement macroblock are combined into one set of base motion vectors which are then scaled.
9. The apparatus according to claim 8, wherein the corresponding base motion vectors from all of the plurality of base macroblocks are averaged or weighted averaged to create the set of base motion vectors which are then scaled.
10. A layered encoder for encoding an input video stream, comprising: a downsampling unit (320) for reducing the resolution of the video stream; a first motion estimation unit (322) which calculates base motion vectors for each frame of the downsampled video stream; a first motion compensation unit (324) which receives the base motion vectors from the first motion estimation unit and produces a first predicted stream; a first subtraction unit (325) for subtracting the first predicted stream from said downsampled video stream to produce a base stream; a base encoder (312) for encoding a lower resolution base stream; an upconverting unit (350) for decoding and increasing the resolution of the base stream to produce a reconstructed video stream; a second motion estimation unit (354) which receives the input video stream and the reconstructed video stream and calculates enhancement motion vectors for each frame of the received streams based upon an upscaled base layer plus enhancement layer; a second subtraction unit (358) for subtracting the reconstructed video stream from the input video stream to produce a residual stream; a second motion compensation unit (356) which receives the motion vectors from the motion estimation unit and produces a second predicted stream; a third subtraction unit (364) for subtracting the second predicted stream from the residual stream; an enhancement encoder (314) for encoding the resulting stream from the subtraction unit and outputting an enhancement stream; and a split vector unit (390) for subtracting a processed version of the base motion vectors from the enhancement motion vectors in the enhancement stream.
11. A method for providing spatial scalable compression of an input video stream, comprising the steps of: encoding a base stream which comprises base features; encoding a residual signal to produce an enhancement stream comprising enhancement features, wherein the residual signal is the difference between original frames of the input video stream and upscaled frames from the base layer; and subtracting a processed version of the base features from the enhancement features in the enhancement stream.
12. The method according to claim 11, wherein said base features are base motion vectors and said enhancement features are enhancement motion vectors.
13. A decoder for decoding compressed video information, comprising: a base stream decoder (402) for decoding a received base stream; an upconverting unit (404) for increasing the resolution of the decoded base stream; a merge unit (408) for adding processed base features produced by the base stream decoder to a residual signal in a received enhancement stream; an enhancement stream decoder (410) for decoding an output signal from the merge unit; and an addition unit (406) for combining the upconverted decoded base stream and the decoded output of the merge unit to produce a video output.
14. The decoder according to claim 13, wherein said base features are base motion vectors and said enhancement features are enhancement motion vectors.
15. The decoder according to claim 14, wherein the base motion vectors are scaled to form the processed base motion vectors.
16. The decoder according to claim 15, wherein a linear scaling factor is used to scale the base motion vectors.
17. The decoder according to claim 15, wherein a non-linear scaling factor is used to scale the base motion vectors.
18. The decoder according to claim 17, wherein a first scaling factor scales a horizontal portion of the base motion vectors and a second scaling factor scales a vertical portion of the base motion vectors.
19. The decoder according to claim 15, wherein the base motion vectors are taken from a base macroblock which substantially covers an intended enhancement macroblock.
20. The decoder according to claim 19, wherein the base motion vectors are taken from a plurality of base macroblocks which cover at least a portion of the intended enhancement macroblock, wherein corresponding base motion vectors from all of the plurality of base macroblocks which at least partially cover the intended enhancement macroblock are combined into one set of base motion vectors which are then scaled.
21. The decoder according to claim 20, wherein the corresponding base motion vectors from all of the plurality of base macroblocks are averaged or weighted averaged to create the set of base motion vectors which are then scaled.
22. A method for decoding compressed video information received in a base stream and an enhancement stream, comprising the steps of: decoding the received base stream; increasing the resolution of the decoded base stream; adding processed base features produced by the base stream decoder to a residual signal in the received enhancement stream to form a combined signal; decoding the combined signal; and combining the upconverted decoded base stream and the decoded combined signal to produce a video output.
PCT/IB2004/050074 2003-02-17 2004-02-04 Video coding WO2004073312A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2006502560A JP2006518568A (en) 2003-02-17 2004-02-04 Video encoding
US10/545,342 US20060133475A1 (en) 2003-02-17 2004-02-04 Video coding
EP04707996A EP1597919A1 (en) 2003-02-17 2004-02-04 Video coding

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP03100350 2003-02-17
EP03100350.2 2003-02-17

Publications (1)

Publication Number Publication Date
WO2004073312A1 (en)

Family

ID=32865050

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2004/050074 WO2004073312A1 (en) 2003-02-17 2004-02-04 Video coding

Country Status (6)

Country Link
US (1) US20060133475A1 (en)
EP (1) EP1597919A1 (en)
JP (1) JP2006518568A (en)
KR (1) KR20050105222A (en)
CN (1) CN1751519A (en)
WO (1) WO2004073312A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1631089A1 (en) * 2004-08-30 2006-03-01 Matsushita Electric Industrial Co., Ltd. Video coding apparatus and decoding apparatus
EP1659797A2 (en) * 2004-10-21 2006-05-24 Samsung Electronics Co., Ltd. Method and apparatus for compressing motion vectors in video coder based on multi-layer
WO2006058921A1 (en) * 2004-12-03 2006-06-08 Thomson Licensing Method for scalable video coding
FR2879066A1 (en) * 2004-12-03 2006-06-09 Thomson Licensing Sa Coding data inheriting method from images with lower resolution, by assigning mode and motion data of zoomed BR block to HR block if number of zoomed BR blocks is 1
DE102006032021A1 (en) * 2006-07-10 2008-01-17 Nokia Siemens Networks Gmbh & Co.Kg A method and encoding device for encoding an image area of an image of an image sequence in at least two quality levels, and a method and decoding device for decoding a first encoded data stream and a second encoded data stream
JP2008517498A (en) * 2004-10-15 2008-05-22 フラウンホッファー−ゲゼルシャフト ツァ フェルダールング デァ アンゲヴァンテン フォアシュンク エー.ファオ Apparatus and method for generating an encoded video sequence using intermediate layer motion data prediction
JP2008527881A (en) * 2005-01-12 2008-07-24 ノキア コーポレイション Method and system for inter-layer prediction mode coding in scalable video coding
US7889793B2 (en) 2004-10-21 2011-02-15 Samsung Electronics Co., Ltd. Method and apparatus for effectively compressing motion vectors in video coder based on multi-layer
CN101513053B (en) * 2005-03-18 2011-04-06 夏普株式会社 Methods and systems for picture up-sampling
US8351502B2 (en) 2005-04-19 2013-01-08 Samsung Electronics Co., Ltd. Method and apparatus for adaptively selecting context model for entropy coding
US8867619B2 (en) 2004-10-15 2014-10-21 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for generating a coded video sequence by using an intermediate layer motion data prediction
EP2870760A1 (en) * 2012-07-09 2015-05-13 Qualcomm Incorporated Intra mode extensions for difference domain intra prediction
WO2018005845A1 (en) 2016-06-30 2018-01-04 Sony Interactive Entertainment Inc. Encoding/decoding digital frames by down-sampling/up-sampling with enhancement information

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7860161B2 (en) * 2003-12-15 2010-12-28 Microsoft Corporation Enhancement layer transcoding of fine-granular scalable video bitstreams
JP5213456B2 (en) * 2005-02-18 2013-06-19 トムソン ライセンシング Method for deriving encoding information of high resolution picture from low resolution picture, and encoding and decoding apparatus for realizing the method
JP5065051B2 (en) * 2005-02-18 2012-10-31 トムソン ライセンシング Method for deriving encoding information of high-resolution image from low-resolution image, and encoding and decoding apparatus for realizing the method
KR100763192B1 (en) * 2005-09-26 2007-10-04 삼성전자주식회사 Method and apparatus for entropy encoding and entropy decoding FGS layer's video data
CN101356820B (en) * 2006-01-05 2011-01-26 汤姆森许可贸易公司 Inter-layer motion prediction method
EP1879399A1 (en) 2006-07-12 2008-01-16 THOMSON Licensing Method for deriving motion data for high resolution pictures from motion data of low resolution pictures and coding and decoding devices implementing said method
EP2077038B1 (en) * 2006-10-18 2013-01-30 Apple Inc. Scalable video coding with filtering of lower layers
JP4922839B2 (en) * 2007-06-04 2012-04-25 三洋電機株式会社 Signal processing apparatus, video display apparatus, and signal processing method
EP2428042B1 (en) * 2009-05-05 2013-05-01 Telefonaktiebolaget LM Ericsson (publ) Scalable video coding method, encoder and computer program
JP5700970B2 (en) * 2009-07-30 2015-04-15 トムソン ライセンシングThomson Licensing Decoding method of encoded data stream representing image sequence and encoding method of image sequence
WO2011128259A1 (en) * 2010-04-13 2011-10-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. A video decoder and a video encoder using motion-compensated prediction
JP5539592B2 (en) * 2010-09-14 2014-07-02 サムスン エレクトロニクス カンパニー リミテッド Multi-layer image encoding and decoding apparatus and method
CN106851320B (en) * 2010-11-04 2020-06-02 Ge视频压缩有限责任公司 Digital storage medium, method of decoding bit stream
JP6763664B2 (en) 2012-10-01 2020-09-30 ジーイー ビデオ コンプレッション エルエルシー Scalable video coding with base layer hints for enhancement layer working parameters
GB2544083B (en) * 2015-11-05 2020-05-20 Advanced Risc Mach Ltd Data stream assembly control

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6057884A (en) * 1997-06-05 2000-05-02 General Instrument Corporation Temporal and spatial scaleable coding for video object planes
US6233356B1 (en) * 1997-07-08 2001-05-15 At&T Corp. Generalized scalability for video coder based on video objects
US7386049B2 (en) * 2002-05-29 2008-06-10 Innovation Management Sciences, Llc Predictive interpolation of a video signal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6075906A (en) * 1995-12-13 2000-06-13 Silicon Graphics Inc. System and method for the scaling of image streams that use motion vectors
US5852565A (en) * 1996-01-30 1998-12-22 Demografx Temporal and resolution layering in advanced television
US6510177B1 (en) * 2000-03-24 2003-01-21 Microsoft Corporation System and method for layered video coding enhancement
US20020150164A1 (en) * 2000-06-30 2002-10-17 Boris Felts Encoding method for the compression of a video sequence
US20020154697A1 (en) * 2001-04-19 2002-10-24 LG Electronics Inc. Spatio-temporal hybrid scalable video coding apparatus using subband decomposition and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Kuo, T. et al.: "Adaptive overlapped block motion compensation", Proceedings of the SPIE, vol. 3164, 1997, pp. 401-412, XP000914352, ISSN: 0277-786X *
Launay, E.: "Optimization of image sequences scalable coding", Proceedings of the 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-97), Munich, Germany, 21-24 April 1997, pp. 3101-3104, XP010225813, ISBN: 0-8186-7919-0 *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1788817A1 (en) * 2004-08-30 2007-05-23 Matsushita Electric Industrial Co., Ltd. Decoder, encoder, decoding method and encoding method
EP1631089A1 (en) * 2004-08-30 2006-03-01 Matsushita Electric Industrial Co., Ltd. Video coding apparatus and decoding apparatus
EP1788817A4 (en) * 2004-08-30 2009-07-01 Panasonic Corp Decoder, encoder, decoding method and encoding method
US8208549B2 (en) 2004-08-30 2012-06-26 Panasonic Corporation Decoder, encoder, decoding method and encoding method
US8873623B2 (en) 2004-10-15 2014-10-28 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for generating a coded video sequence and for decoding a coded video sequence by using an intermediate layer residual value prediction
US8867619B2 (en) 2004-10-15 2014-10-21 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for generating a coded video sequence by using an intermediate layer motion data prediction
JP2008517498A (en) * 2004-10-15 2008-05-22 Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V. Apparatus and method for generating an encoded video sequence using intermediate layer motion data prediction
US8873624B2 (en) 2004-10-15 2014-10-28 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for generating a coded video sequence and for decoding a coded video sequence by using an intermediate layer residual value prediction
US8873622B2 (en) 2004-10-15 2014-10-28 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for generating a coded video sequence by using an intermediate layer motion data prediction
US8520962B2 (en) 2004-10-21 2013-08-27 Samsung Electronics Co., Ltd. Method and apparatus for effectively compressing motion vectors in video coder based on multi-layer
EP1659797A3 (en) * 2004-10-21 2006-06-07 Samsung Electronics Co., Ltd. Method and apparatus for compressing motion vectors in video coder based on multi-layer
US7889793B2 (en) 2004-10-21 2011-02-15 Samsung Electronics Co., Ltd. Method and apparatus for effectively compressing motion vectors in video coder based on multi-layer
EP1659797A2 (en) * 2004-10-21 2006-05-24 Samsung Electronics Co., Ltd. Method and apparatus for compressing motion vectors in video coder based on multi-layer
US8116578B2 (en) 2004-10-21 2012-02-14 Samsung Electronics Co., Ltd. Method and apparatus for effectively compressing motion vectors in video coder based on multi-layer
FR2879066A1 (en) * 2004-12-03 2006-06-09 Thomson Licensing SA Method for inheriting coding data from lower-resolution images, assigning the mode and motion data of a zoomed BR block to the HR block when there is exactly one zoomed BR block
WO2006058921A1 (en) * 2004-12-03 2006-06-08 Thomson Licensing Method for scalable video coding
JP2008527881A (en) * 2005-01-12 2008-07-24 Nokia Corporation Method and system for inter-layer prediction mode coding in scalable video coding
CN102075755A (en) * 2005-03-18 2011-05-25 夏普株式会社 Methods and systems for picture up-sampling
CN101513053B (en) * 2005-03-18 2011-04-06 夏普株式会社 Methods and systems for picture up-sampling
US8351502B2 (en) 2005-04-19 2013-01-08 Samsung Electronics Co., Ltd. Method and apparatus for adaptively selecting context model for entropy coding
DE102006032021A1 (en) * 2006-07-10 2008-01-17 Nokia Siemens Networks Gmbh & Co.Kg A method and encoding device for encoding an image area of an image of an image sequence in at least two quality levels, and a method and decoding device for decoding a first encoded data stream and a second encoded data stream
EP2870760A1 (en) * 2012-07-09 2015-05-13 Qualcomm Incorporated Intra mode extensions for difference domain intra prediction
WO2018005845A1 (en) 2016-06-30 2018-01-04 Sony Interactive Entertainment Inc. Encoding/decoding digital frames by down-sampling/up-sampling with enhancement information
EP3479297A4 (en) * 2016-06-30 2020-07-29 Sony Interactive Entertainment Inc. Encoding/decoding digital frames by down-sampling/up-sampling with enhancement information

Also Published As

Publication number Publication date
EP1597919A1 (en) 2005-11-23
KR20050105222A (en) 2005-11-03
JP2006518568A (en) 2006-08-10
CN1751519A (en) 2006-03-22
US20060133475A1 (en) 2006-06-22

Similar Documents

Publication Publication Date Title
US7146056B2 (en) Efficient spatial scalable compression schemes
US20060133475A1 (en) Video coding
JP2962012B2 (en) Video encoding device and decoding device therefor
KR100606588B1 (en) Picture processing device and picture processing method
US6393059B1 (en) Conversion of video data bit stream
KR100314116B1 (en) Motion-compensated coder with controlled motion vector accuracy, decoder, and methods of motion-compensated coding and decoding
JP2005506815A5 (en)
JP2005507589A5 (en)
KR20060088461A (en) Method and apparatus for deriving motion vectors of macro blocks from motion vectors of pictures of base layer when encoding/decoding video signal
KR20040054743A (en) Spatial scalable compression
JP3649370B2 (en) Motion compensation coding apparatus and motion compensation coding method
KR100202538B1 (en) MPEG video codec
US20060222083A1 (en) Digital filter with spatial scalability
US20070025438A1 (en) Elastic storage
JP3591700B2 (en) Motion compensated image encoding / decoding device and method thereof
KR0172902B1 (en) MPEG encoder
WO2006024988A2 (en) A method and apparatus for motion estimation
JP4164903B2 (en) Video code string conversion apparatus and method
JPH06311505A (en) Motion picture coder and decoder
EP2479997A1 (en) Method and apparatus for encoding or decoding a video signal using a summary reference picture

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2004707996

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2006133475

Country of ref document: US

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 10545342

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 1920/CHENP/2005

Country of ref document: IN

WWE Wipo information: entry into national phase

Ref document number: 2006502560

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2004804311X

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 1020057015101

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 1020057015101

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 2004707996

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 10545342

Country of ref document: US

WWW Wipo information: withdrawn in national office

Ref document number: 2004707996

Country of ref document: EP