WO2003090469A2 - System and method for providing single-layer video encoded bitstreams suitable for reduced-complexity decoding - Google Patents

System and method for providing single-layer video encoded bitstreams suitable for reduced-complexity decoding

Info

Publication number
WO2003090469A2
WO2003090469A2 (PCT/IB2003/001608)
Authority
WO
WIPO (PCT)
Prior art keywords
dct coefficients
dct
coefficients
input
video image
Prior art date
Application number
PCT/IB2003/001608
Other languages
French (fr)
Other versions
WO2003090469A3 (en
Inventor
Yingwei Chen
Zhun Zhong
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to EP03712604A priority Critical patent/EP1500279A2/en
Priority to JP2003587114A priority patent/JP4393875B2/en
Priority to AU2003216693A priority patent/AU2003216693A1/en
Priority to KR10-2004-7016955A priority patent/KR20040106364A/en
Publication of WO2003090469A2 publication Critical patent/WO2003090469A2/en
Publication of WO2003090469A3 publication Critical patent/WO2003090469A3/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/18Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/156Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • H04N19/126Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • the present invention relates to the compression of digital video information and, more particularly, to a method and device for providing different coding strategies to obtain efficiency during decoding.
  • Single-layer video coding is most widely used in a variety of applications, such as digital- video recording and digital television.
  • the video is encoded at a certain bit rate, then a decoder decodes the bit-stream and outputs every picture at the encoded full quality.
  • the reduced-complexity decoding of single-layer bit-streams has been deployed.
  • as pictures are decoded at partial quality, the reference pictures used for motion compensation on the decoder side frequently do not match the ones used during encoding. As a result, errors that occur in the reference pictures propagate to other pictures. This is also known as prediction drift or error propagation.
  • Layered-video coding, or the bit-rate-scalable method, is an alternative to single-layer video coding.
  • the video is encoded first at a low bit rate to generate the base layer.
  • the difference between the original and the base-layer-reconstructed video is then encoded to generate one or more enhancement layers.
  • the base layer is always decoded, but the enhancement layer is fully or partially decoded depending on the available processing capacity. If only the base layer is decoded, there is no prediction drift as low-quality reference pictures are used for encoding. If both the full enhancement layer and the base layer are decoded, each layer uses its corresponding reference pictures during encoding for decoding purposes, thus there is no prediction drift.
  • the present invention relates to a method and system for improving the decoding efficiency of an MPEG digital-video-decoder system by changing the single-layer encoding algorithms to obtain an encoded digital-video signal suitable for variable complexity decoding.
  • a method of encoding a stream of input-video images having at least one macro-block includes the steps of: generating a motion vector for each macro-block of the input-video image; generating a predictive-image signal of the input-video image based on the motion vector; transforming the predictive-image signal into a two-dimensional array of DCT coefficients; modifying the array of the DCT coefficients into a set of new DCT coefficients according to predetermined criteria; quantizing the new DCT coefficients into a quantized DCT value; performing a zigzag scan to read each row of the new DCT coefficients into a serial chain; and, coding the serial chain from the zigzag scan and the motion vector to produce an encoded macro-block.
  • the step of modifying the array of the DCT coefficients comprises the steps of: discarding the last-column DCT coefficients and the last-row DCT coefficients alternately until the energy level of the DCT coefficients reaches a predetermined energy level; and, assigning the discarded column or row a predetermined value.
  • the step of modifying the array of the DCT coefficients comprises the steps of: calculating a total energy level of the DCT coefficients excluding the lowest 2x2 DCT coefficients; discarding the last column and the last row of the DCT coefficients alternately until the total energy level of the DCT coefficients reaches a predetermined energy level; and, assigning the discarded column and row a predetermined value.
  • the method further includes the steps of: dequantizing the quantized DCT value to generate dequantized, decoded data; transforming the dequantized, decoded data from a frequency domain to a spatial domain to produce reference data; and, performing a motion compensation on the reference data to produce motion-compensation pictures.
  • a full-pel motion compensation on a predictive (P) picture of the input-video image is performed prior to the step of transforming the predictive-image signal into the array of the DCT coefficients.
  • a method of encoding an input-video image having at least one macro-block includes the steps of: (a) generating at least one motion vector and at least one motion-compensation-prediction value for each macro-block of the input-video image; (b) transforming the prediction value into a set of DCT coefficients; (c) modifying the set of the DCT coefficients into a set of new DCT coefficients according to predetermined criteria; (d) quantizing the set of the DCT coefficients; and, (e) coding the quantized set of the coefficients and the motion vector to produce an encoded macro-block.
  • the method further includes the steps of: forwarding the encoded macro-block to a decoding process; and determining the type of picture from the input-video image, wherein, if a predictive picture is detected from the input-video image, a full-pel motion compensation is performed on the input-video image prior to performing the step (b) of transforming the prediction value into the set of the DCT coefficients.
  • the motion vector is converted to a full-pel motion vector.
  • the array of the DCT coefficients is modified by calculating a total energy level of the DCT coefficients excluding the lowest 2x2 DCT coefficients; discarding the last column and the last row of the DCT coefficients alternately until the total energy level of the DCT coefficients reaches a predetermined energy level; and, assigning the discarded column and row a predetermined value.
  • the method further includes the steps of: dequantizing the quantized DCT value to generate dequantized, decoded data; transforming the dequantized, decoded data from a frequency domain to a spatial domain to produce reference data; and, performing motion compensation on the reference data to produce motion-compensation pictures.
  • an encoding system for encoding an input-video image includes a motion-estimation-prediction module configured to receive the input-video image and to generate at least one motion vector and at least one prediction-error signal; a discrete-cosine-transformer (DCT) module coupled to the output of the motion-estimation-prediction module for transforming the prediction-error signal into a two-dimensional array of DCT coefficients, the DCT coefficients being modified according to the classification of a picture type from the input-video image; a quantization module, coupled to the DCT module, for quantizing the modified DCT coefficients to produce quantized coefficients; a zig-zag scanning module, coupled to the quantization module, configured to convert the quantized coefficients into a serial representation; and, a coding module for coding the quantized set of the coefficients and the motion vector to produce an encoded macro-block.
  • the system further includes: an inverse-quantization module coupled to receive and inverse the quantized coefficients received from the quantization module; an inverse- discrete-cosine transformer (IDCT) coupled to the output of the inverse-quantization module for transforming the dequantized coefficients from a frequency domain to a spatial domain; and, a reference-frame-buffer for receiving output signals from the IDCT to form motion- compensated pictures.
  • the present invention may be implemented in hardware, software, or a combination of hardware and software as desired for a particular application.
  • the present invention may be realized in a simple, reliable, and inexpensive implementation.
  • FIG. 1 shows an exemplary embodiment of the encoder for compressing video information
  • FIG. 2 shows a graphical representation of modifying DCT coefficients for reducing the computation load during decoding in accordance with the present invention
  • FIG. 3 shows a graphical representation of modifying DCT coefficients prior to performing quantization in accordance with the present invention
  • FIG. 4 is a flow chart illustrating the process of reducing the computation load during decoding in accordance with the present invention.
  • MPEG: Moving Picture Experts Group
  • International Standard ISO/IEC 11172-1, "Information Technology-Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s", Parts 1, 2 and 3, First edition 1993-08-01, which is hereby incorporated by reference in its entirety.
  • I frame: intra-frame
  • P frame: forward-predicted frame
  • B frame: bi-directional-predicted frame
  • the I frame is periodically coded, i.e., one independently coded frame for each group of frames.
  • a prediction is made of the composition of a video frame, the P frame, to be located a specific number of frames forward and before the next reference frame.
  • the B frame is predicted between two reference frames, or by interpolating (averaging) a macro-block in the past reference frame with a macro-block in the future reference frame.
  • the motion vector is also encoded which specifies the relative position of a macro-block within a reference frame with respect to the macro-block within the current frame.
  • the current frame may be encoded based on a previous frame and a subsequent frame. As such, one frame needs to be encoded based on the MPEG encoding convention, and then other frames relating to that frame are encoded based on the differences from that frame, except the I frame, which is always coded independently of other frames.
  • motion vectors can have integer values (i.e., full-pel coding) in which the values of pixels in the current frame are specified in terms of the value of actual pixels in the reference frame, or half-integer values (i.e., half-pel coding), quarter-integer values (i.e., quarter-pel coding), and fractional values (i.e., fractional-pel coding) in which the values of pixels in the current frame are specified in terms of "virtual" pixels that are interpolated from existing pixels in the reference frame.
  • the half-pel motion compensation, as well as the quarter-pel and the fractional-pel motion compensation, is more computationally extensive than the full-pel motion compensation, as the decoder has to interpolate a macro-block from the previous macro-block referenced by the motion vector using the half-, quarter-, or fractional-pel grids, respectively.
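The cost difference between full-pel and half-pel motion compensation can be sketched as follows (a minimal illustration in Python with NumPy; the function names and the bilinear averaging rule are assumptions for illustration, not taken from the patent):

```python
import numpy as np

def full_pel_mc(ref, y, x, size=8):
    """Full-pel motion compensation: a plain block copy from the reference."""
    return ref[y:y + size, x:x + size].copy()

def half_pel_mc(ref, y2, x2, size=8):
    """Half-pel motion compensation: (y2, x2) are given in half-pel units.
    Non-integer positions are bilinearly averaged from up to four full-pel
    samples, which costs extra arithmetic per pixel versus the plain copy."""
    y, x = y2 // 2, x2 // 2
    fy, fx = y2 % 2, x2 % 2
    a = ref[y:y + size + 1, x:x + size + 1].astype(np.int32)
    # Average the up-to-four surrounding full-pel samples (with rounding bias).
    block = (a[:size, :size] + a[fy:fy + size, :size] +
             a[:size, fx:fx + size] + a[fy:fy + size, fx:fx + size] + 2) // 4
    return block.astype(ref.dtype)
```

For an integer motion vector both functions return the same block; for a half-pel vector the interpolation path must touch roughly four times as many samples per output pixel.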
  • Frames are coded using a discrete-cosine-transform (DCT) coding scheme, which encodes coefficients as an amplitude of a specific cosine-basis function.
  • the DCT coefficients are quantized and further coded using variable or run-length encoding.
  • Upon receiving the compressed coded frames, a decoder decodes the macro-block of a present P frame by performing motion compensation with a motion vector applied to a corresponding macro-block of a past reference frame.
  • the decoder also decodes a macro-block of a B frame by performing motion compensation with motion vectors applied to respective past and future reference frames.
  • a macro-block of pixels in one frame can be obtained by translating a macro-block of pixels in a previous or subsequent frame. The amount of translation is referred to as the motion vector.
  • since the I frame is encoded as a single image with no reference to any past or future frame, no motion processing is necessary when decoding the I frame.
  • Motion compensation is one of the most computationally intensive operations in many common video-decompression methods, especially when fractional motion vectors are used.
  • video systems unable to keep up with the computational demands of the above decompression burden frequently drop entire frames. This is sometimes observable as a momentary freeze of the picture in the video playback, followed by sudden discontinuities or jerkiness in the picture.
  • various decoding complexity reduction methods have been developed.
  • the present invention provides a novel way to construct the compressed video bitstream such that decoding complexity can be reduced while maintaining the quality of the resulting video image.
  • FIG. 1 illustrates a simplified block diagram of the encoder 10 according to an embodiment of the present invention for encoding video signals.
  • the encoder 10 includes a motion estimation and prediction-module 12, a DCT module 14, a quantization-module 16, a zigzag-scanning module 18, a run-length-coding module 20, an inverse-quantization module 22, an inverse-DCT-module 24, and a reference-frame-buffer module 26.
  • the encoder 10 may be implemented in the central processing unit of a personal computer, workstation, personal-digital-assistant (PDA), handheld computer, and/or an integrated circuit such as a microprocessor, digital-signal processor, micro-controller, micro-computer and/or any other device that manipulates digital information based on programming instructions.
  • the reference-frame-buffer 26 may be a hard-drive memory, random-access memory, read-only memory, external memory and/or any other device that stores digital information.
  • the input-video signals (N) received by the encoder 10 may be signals from a camcorder, a DVD player, a VCR, a television tuner and/or any other device that receives digital information. This stream of video information is converted from analog signals to digital signals, then applied to the encoder 10.
  • the input-video signals are further divided into a plurality of layers such that each picture corresponding to a sheet of image (frame) is divided into a plurality of slices, and each slice consists of a plurality of macro-blocks disposed in a line from left to right and from top to bottom.
  • Each of the macro-blocks consists of six components: four brightness components Y1 through Y4, which are representative of the brightness of the four 8x8 pixel blocks constituting the macro-block of 16x16 pixels, and two color-difference components Cb and Cr (U, V) of 8x8 pixel blocks for the same macro-block.
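The six-component macro-block layout described above can be sketched with a small helper (illustrative Python; the name `split_macroblock` and the argument layout are assumptions):

```python
import numpy as np

def split_macroblock(y16, cb8, cr8):
    """Split a 16x16 luminance macro-block into its four 8x8 blocks
    Y1..Y4 (left-to-right, top-to-bottom) and pair them with the two
    8x8 chrominance blocks (Cb, Cr), yielding the six components."""
    y1, y2 = y16[:8, :8], y16[:8, 8:]
    y3, y4 = y16[8:, :8], y16[8:, 8:]
    return [y1, y2, y3, y4, cb8, cr8]
```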
  • the division of the picture into block units improves the ability to smooth out the changes between two successive pictures.
  • the motion-estimation-and-prediction-module 12 is adapted to receive the input-video signals (N) for estimating motion vectors to provide an offset from the coordinates of a current picture block to the coordinates of a reference frame.
  • the motion vectors can be used later to reconstruct a current frame from a previous reference frame as the motion information reduces the high redundancy that exists between the consecutive frames of a video sequence.
  • an estimate of the current frame can be obtained using the previously-decoded frame and the motion vectors.
  • Motion compensation involves a prediction that uses motion vectors to provide offsets into the past and/or future reference frames containing previously-decoded sample values that are used to form the prediction error.
  • Each of the 8x8 pixel blocks of the macro-blocks is then subjected to a discrete-cosine-transform (DCT) to generate an 8x8 block of DCT coefficients, one for each of the 64 frequency bands therein. Note that each coefficient can be treated independently without the loss of compression efficiency.
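A minimal sketch of this DCT stage, assuming the standard orthonormal 2-D DCT-II (the text does not prescribe a particular factorization or fast algorithm):

```python
import numpy as np

def dct_2d(block):
    """Naive two-dimensional DCT-II of an NxN block (N=8 here).
    C is the orthonormal DCT basis matrix; the 2-D transform separates
    into a row transform and a column transform: C @ block @ C.T."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)  # DC basis row is constant
    return C @ block @ C.T
```

A constant block transforms to a single DC coefficient with all other frequency bands zero, which is why flat regions compress so well.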
  • the resulting 8x8 block of DCT coefficients is received by the quantization-module 16.
  • the amount of information to encode for each macroblock is also adjusted by processing some high-frequency DCT coefficients according to predetermined criteria (explained later with reference to FIGs. 2 and 3) prior to the quantization.
  • the DCT coefficients in an 8x8 pixel block are then divided by a corresponding coding parameter, i.e., a quantization weight.
  • the quantization weights for a given 8x8 pixel block are expressed in terms of an 8x8 quantization matrix.
  • additional calculations are effected on the DCT coefficients to take into account the quantizer scale value for each macroblock.
  • the quantizer scale value corresponds to the degree of spatial adjustment of quantization precision, which can vary from macroblock to macroblock. The adjustment often exploits the human eye's varying sensitivity to coding artifacts depending on the characteristics of each macroblock's visual content.
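The weighting-and-scaling step above can be sketched as follows (a simplified illustration; a real MPEG-2 quantizer adds DC special-casing and standard-specific rounding, which are omitted here):

```python
import numpy as np

def quantize(coeffs, weight_matrix, quantizer_scale):
    """Divide each DCT coefficient by its quantization weight, scaled by
    the per-macroblock quantizer scale. Larger weights or scales yield a
    coarser step and hence smaller (more compressible) integer levels."""
    step = weight_matrix * quantizer_scale / 16.0
    return np.round(coeffs / step).astype(np.int32)
```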
  • the resulting 8x8 block of quantized DCT coefficients is received by the zigzag-scanning-module 18, where the two-dimensional block of quantized coefficients is scanned in a "zig-zag" order to convert it into a one-dimensional string of quantized DCT coefficients.
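The zig-zag conversion can be sketched as follows (illustrative Python; the scan order shown is the conventional anti-diagonal order for 8x8 blocks):

```python
import numpy as np

def zigzag_scan(block):
    """Read an 8x8 block of quantized coefficients in zig-zag order
    (anti-diagonals, alternating direction) into a 1-D sequence, so that
    low-frequency coefficients come first and trailing zeros cluster at
    the end, which benefits the subsequent run-length coding."""
    n = block.shape[0]
    out = []
    for s in range(2 * n - 1):                 # s indexes the anti-diagonals
        idx = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:                          # even diagonals run bottom-left to top-right
            idx.reverse()
        out.extend(block[i, j] for i, j in idx)
    return np.array(out)
```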
  • the run-length coding-module 20 encodes the string of quantized DCT coefficients, macro-block type, and motion vectors using run-length coding to improve coding efficiency by encoding a run of symbols with a single symbol. Note that this coding scheme is well known in the art, and that other well-known coding schemes, such as Huffman coding, may be deployed in accordance with the techniques of the present invention.
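The run-length stage can be sketched as a simple run/level coder (an illustrative simplification; real MPEG coding maps the (run, level) pairs to variable-length codes rather than emitting them directly):

```python
def run_level_encode(seq):
    """Encode a zig-zag-scanned coefficient sequence as (run-of-zeros,
    level) pairs, one per nonzero coefficient, followed by an
    end-of-block marker once only zeros remain."""
    pairs, run = [], 0
    for v in seq:
        if v == 0:
            run += 1          # count zeros preceding the next nonzero level
        else:
            pairs.append((run, v))
            run = 0
    pairs.append('EOB')       # trailing zeros are implied by the marker
    return pairs
```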
  • the output of the run-length coding module 20 is a bitstream BS.
  • a picture is divided into 8x8-pixel blocks, and the DCT is performed by the DCT-module 14 in units of blocks to obtain 8x8 transform coefficients.
  • the resulting 8x8 block of DCT coefficients then undergoes a mask transformation (hereinafter referred to as the "DCT mask") in which the DCT coefficients are analyzed to obtain a rectangle that contains a specific percentage (α) of the energy of the DCT coefficients.
  • This rectangle is obtained by first calculating the total energy of the DCT block, excluding the lowest 2x2 DCT coefficients. Then, the last or eighth-column frequency component is discarded during the energy-level calculation, and the energy level excluding the lowest 2x2 DCT coefficients is calculated again, as shown in FIG. 2(a).
  • It is then checked whether the newly-calculated energy level falls below a predetermined percentage, for example, 85% of the total energy. If not, the last or eighth-row frequency component is also discarded when calculating the remaining energy level of the DCT block, excluding the lowest 2x2 DCT coefficients, as shown in FIG. 2(b). If the newly-calculated energy level still does not fall below the predetermined percentage, the seventh-column frequency component is discarded, as shown in FIG. 2(c), and the energy level of the remaining DCT block, excluding the lowest 2x2 DCT coefficients, is calculated again. In this manner, additional column and row frequency components are discarded alternately until the desired energy level is obtained.
  • the column and row frequency components that were discarded when calculating the desired energy level are assigned a relatively high quantization-matrix weight, for example, 255.
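The DCT-mask construction described above, including the high-weight assignment for the discarded components, can be sketched as follows (illustrative Python; the stopping rule and the helper names are assumptions consistent with the 85% example and the weight of 255):

```python
import numpy as np

def dct_mask(coeffs, alpha=0.85):
    """Starting from the full 8x8 block, tentatively discard the last
    remaining column, then the last remaining row, alternately; commit a
    discard only while the retained rectangle still holds at least the
    fraction `alpha` of the energy measured outside the lowest 2x2
    coefficients. Returns the retained rectangle as (rows, cols)."""
    def energy(r, c):
        e = np.sum(coeffs[:r, :c] ** 2)
        return e - np.sum(coeffs[:min(r, 2), :min(c, 2)] ** 2)

    total = energy(8, 8)
    rows, cols = 8, 8
    drop_col = True                       # eighth column is tried first
    while cols > 2 and rows > 2:
        r, c = (rows, cols - 1) if drop_col else (rows - 1, cols)
        if total == 0 or energy(r, c) < alpha * total:
            break                         # dropping more would lose too much energy
        rows, cols = r, c
        drop_col = not drop_col
    return rows, cols

def masked_weight_matrix(weights, rows, cols, high=255):
    """Assign the discarded rows/columns a very high quantization weight
    (e.g. 255) so that their coefficients quantize to zero."""
    w = weights.copy()
    w[rows:, :] = high
    w[:, cols:] = high
    return w
```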
  • pictures decoded with reduced complexity do not have as much high-frequency information as pictures on the encoder side, and this mismatch tends to contribute to prediction drift. If the reference pictures used on the encoder side can be made closer to the pictures resulting from partial-quality decoding, the prediction drift can be reduced. Thus, in the preferred embodiment of the present invention, it is necessary to give different weights to the type of picture being encoded, as some pictures tend to contribute less to the prediction drift.
  • the P pictures in the current Group of Pictures contribute less to the prediction drift than I pictures as there are fewer frames for error propagation before the next intra-picture.
  • the quantization matrix or DCT mask for later pictures in a GOP must be designed with less consideration when performing reduced-complexity decoding according to the present invention.
  • the quantization matrix or DCT mask for I pictures, the first picture in a GOP should be designed with the most consideration for the prediction drift in the event of partial-quality decoding.
  • Since B pictures are not used as reference pictures, no DCT mask is needed. Accordingly, during encoding, the quantization matrix for B pictures may stay the same as that for optimized single-layer encoding. Note further that another way to reduce decoding complexity is to simplify motion compensation, as motion compensation tends to be a computationally intensive operation.
  • a coarser-precision MC during encoding can be applied to further facilitate variable complexity decoding.
  • If ½-pel MC is used for encoding (as in MPEG-2), the decoder may be forced to apply only full-pel motion compensation, without interpolating ½-pel-position pixel values when non-integer motion vectors are used for the current macro-block, during computation peak-load periods.
  • If ½-pel MC is used for encoding, ½-pel MC, full-pel MC, or any combination thereof can be applied for decoding. Accordingly, prediction drift from reduced-complexity decoding can be eliminated or reduced if simplified motion prediction is utilized during encoding for only the P pictures.
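Forcing full-pel motion compensation at the decoder amounts to snapping half-pel motion vectors to the integer grid, which can be sketched as follows (truncation toward zero is an assumed rounding policy; the text does not fix one):

```python
def to_full_pel(mv_half):
    """Truncate a (dy, dx) motion vector expressed in half-pel units to
    the full-pel grid, so the decoder can skip half-pel interpolation."""
    def trunc(v):
        # drop the half-pel fraction, symmetrically for negative components
        return (v // 2) * 2 if v >= 0 else -((-v // 2) * 2)
    return tuple(trunc(v) for v in mv_half)
```

The result stays in half-pel units but is always even, so the motion-compensation path never needs the interpolation branch.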
  • the motion prediction is not performed for I pictures and thus is not applicable.
  • the B pictures can be encoded with full-complexity, higher-precision motion compensation.
  • the following flow chart of FIG. 4 shows the encoding-operation process that is capable of reducing decoding complexity according to the embodiment of the present invention.
  • Upon receiving the input-video information by the encoder 10 in step 100, the type of picture is detected in step 102, so that a different encoding mode can be applied according to the picture classification, as described hereinafter.
  • the current picture is encoded as an intra (I) picture.
  • the I frame picture is partitioned into 8x8 blocks, then the DCT is performed in step 104, where each coefficient is denoted C_ij, with i and j ranging from 0 to 7.
  • In step 106, the DCT coefficients are analyzed to obtain a rectangle that contains a fraction α_I of the energy of the DCT coefficients, excluding the lowest 2x2 DCT coefficients.
  • the rectangle (i_α, j_α) is obtained as follows:
  • quantization matrix for the corresponding rectangle is calculated in step 108.
  • the quantization matrix for full encoding is {W_ij}, where i and j range from 0 to 7.
  • An example of such a matrix is a conventional MPEG-coding-scheme (i.e., Test Model 5 (TM5)) matrix for intra-pictures, in which the coding mode for each macro-block is selected by comparing the energy of the predictive residuals (error signal).
  • the intra/inter-mode decision is determined by comparing the variance of the macro-block pixels against the variance of the predictive residuals for each coding mode.
  • the quantization matrix {W'_ij} for partial encoding for the current intra-pictures is computed as: W'_ij = W_ij for i ≤ i_α and j ≤ j_α, and W'_ij = 255 otherwise.
  • In step 110, the modified quantization matrix is encoded, and the encoded signals are forwarded to the decoding side.
  • the current picture is encoded as a predictive (P) picture. If a P picture is detected in step 102, the P picture undergoes a full-pel motion compensation in step 120. The operation of steps 122 through 128 is essentially the same as that described above with respect to steps 104 through 110.
  • The only notable difference is that the setting of α_P depends on the location of this P picture in the current GOP. The later a P picture occurs in the GOP, the lower the prediction drift it produces, as fewer subsequent frames remain before an I picture resets the prediction cycle.
  • N denotes the GOP size, i.e., the number of pictures in a Group of Pictures.
  • n denotes the relative position of the P picture in the GOP.
  • α_P is computed as a function of the position n of the P picture, where n ranges from M-1 to N-1.
  • the current picture is encoded as a bi-directional-predicted picture.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method and system (10) of reducing the computation load of an MPEG decoder by changing the encoding algorithms in a video-processing system are provided. During an encoding mode, a stream of data blocks is received, and at least one motion vector and one motion-compensation-prediction value for each macro-block are generated (12). The prediction value is transformed (14) into a set of DCT coefficients. Prior to the quantizing step, the set of DCT coefficients is modified (14) according to predetermined criteria. To this end, the total energy level of the DCT coefficients excluding the lowest 2x2 DCT coefficients is computed, and the last column and the last row of the DCT coefficients are discarded alternately until the total energy level of the DCT coefficients reaches a predetermined energy level. Thereafter, the discarded column or row is assigned a predetermined value. The modified DCT coefficients are then quantized (16) and encoded (20), thereby generating a bit stream of encoded macro-blocks, which are subsequently decoded according to a conventional decoding process.

Description

System and method for providing single-layer video encoded bitstreams suitable for reduced-complexity decoding
The present invention relates to the compression of digital video information and, more particularly, to a method and device for providing different coding strategies to obtain efficiency during decoding.
Single-layer video coding is most widely used in a variety of applications, such as digital-video recording and digital television. The video is encoded at a certain bit rate, then a decoder decodes the bit-stream and outputs every picture at the encoded full quality. To reduce the computation load, the reduced-complexity decoding of single-layer bit-streams has been deployed. However, as pictures are decoded at partial quality, the reference pictures used for motion compensation on the decoder side frequently do not match the ones used during encoding. As a result, errors that occur in the reference pictures propagate to other pictures. This is also known as prediction drift or error propagation.
Layered-video coding, or bit-rate-scalable coding, is an alternative to single-layer video coding. The video is encoded first at a low bit rate to generate the base layer. The difference between the original and the base-layer-reconstructed video is then encoded to generate one or more enhancement layers. On the decoder side, the base layer is always decoded, but the enhancement layer is fully or partially decoded depending on the available processing capacity. If only the base layer is decoded, there is no prediction drift, as low-quality reference pictures were used for encoding. If both the full enhancement layer and the base layer are decoded, each layer uses its corresponding reference pictures from encoding for decoding purposes, so there is again no prediction drift. However, if the enhancement layer is only partially decoded and there is motion prediction among enhancement-layer pictures, prediction drift occurs in the enhancement layer. Moreover, this type of coding is inherently less efficient during compression, as the reference pictures used in the enhancement layer do not lend themselves to efficient motion prediction. Furthermore, because of the extra overhead in multi-layered coding, decoding a multi-layered bit stream is more complex than decoding a single-layer bit stream of the same quality. Accordingly, there is a need for a scheme that obtains optimal decoding performance yet is relatively simple to implement in practice. The present invention relates to a method and system for improving the decoding efficiency of an MPEG digital-video-decoder system by changing the single-layer encoding algorithms to obtain an encoded digital-video signal suitable for variable-complexity decoding.
According to one aspect of the invention, a method of encoding a stream of input-video image having at least one macro-block is provided. The method includes the steps of: generating a motion vector for each macro-block of the input-video image; generating a predictive-image signal of the input-video image based on the motion vector; transforming the predictive-image signal into a two-dimensional array of DCT coefficients; modifying the array of the DCT coefficients into a set of new DCT coefficients according to predetermined criteria; quantizing the new DCT coefficients into a quantized DCT value; performing a zigzag scan to read each row of the new DCT coefficients into a serial chain; and, coding the serial chain from the zigzag scan and the motion vector to produce an encoded macro-block. The step of modifying the array of the DCT coefficients comprises the steps of: discarding the last-column DCT coefficients and the last-row DCT coefficients alternately until the energy level of the DCT coefficients reaches a predetermined energy level; and, assigning the discarded column or row a predetermined value. The step of modifying the array of the DCT coefficients comprises the steps of: calculating a total energy level of the DCT coefficients excluding the lowest 2x2 DCT coefficients; discarding the last column and the last row of the DCT coefficients alternately until the total energy level of the DCT coefficients reaches a predetermined energy level; and, assigning the discarded column and row a predetermined value. The method further includes the steps of: dequantizing the quantized DCT value to generate dequantized, decoded data; transforming the dequantized, decoded data from a frequency domain to a spatial domain to produce reference data; and, performing a motion compensation on the reference data to produce motion-compensation pictures.
In the embodiment, a full-pel motion compensation on a predictive (P) picture of the input-video image is performed prior to the step of transforming the predictive-image signal into the array of the DCT coefficients.
According to another aspect of the invention, a method of encoding input-video image having at least one macro-block includes the steps of: (a) generating at least one motion vector and at least one motion-compensation-prediction value for each macro-block of the input-video image; (b) transforming the prediction value into a set of DCT coefficients; (c) modifying the set of the DCT coefficients into a set of new DCT coefficients according to predetermined criteria; (d) quantizing the set of the DCT coefficients; and, (e) coding the quantized set of the coefficients and the motion vector to produce an encoded macro-block. The method further includes the steps of forwarding the encoded macro-block to a decoding process; determining the type of picture from the input-video image, wherein, if a predictive picture is detected from the input-video image, performing a full-pel motion compensation on the input-video image prior to performing the step (b) of transforming the prediction value into the set of the DCT coefficients. When a predictive picture is detected from the input-video image, the motion vector is converted to a full-motion vector. The array of the DCT coefficients is modified by calculating a total energy level of the DCT coefficients excluding the lowest 2x2 DCT coefficients; discarding the last column and the last row of the DCT coefficients alternately until the total energy level of the DCT coefficients reaches a predetermined energy level; and, assigning the discarded column and row a predetermined value. The method further includes the steps of: dequantizing the quantized DCT value to generate dequantized, decoded data; transforming the dequantized, decoded data from a frequency domain to a spatial domain to produce reference data; and, performing motion compensation on the reference data to produce motion-compensation pictures.
According to a further aspect of the invention, an encoding system for encoding an input-video image includes a motion-estimation-prediction module configured to receive the input-video image and to generate at least one motion vector and at least one prediction-error signal; a discrete-cosine-transformer (DCT) module coupled to the output of the motion-estimation module for transforming the prediction-error signal into a two-dimensional array of DCT coefficients, the DCT coefficients being modified according to the classification of a picture type from the input-video image; a quantization module, coupled to the DCT module, for quantizing the modified DCT coefficients to produce quantized coefficients; a zig-zag scanning module, coupled to the quantization module, configured to convert the quantized coefficients into a serial representation; and, a coding module for coding the quantized set of the coefficients and the motion vector to produce an encoded macro-block. The system further includes: an inverse-quantization module coupled to receive and inverse-quantize the quantized coefficients received from the quantization module; an inverse-discrete-cosine transformer (IDCT) coupled to the output of the inverse-quantization module for transforming the dequantized coefficients from a frequency domain to a spatial domain; and, a reference-frame buffer for receiving output signals from the IDCT to form motion-compensated pictures. Yet another aspect is that the present invention may be implemented in hardware, software, or a combination of hardware and software as desired for a particular application.
Furthermore, the present invention may be realized in a simple, reliable, and inexpensive implementation.
These and other advantages will become apparent to those skilled in this art upon reading the following detailed description in conjunction with the accompanying drawings.
A more complete understanding of the method and apparatus of the present invention may be had by reference to the following detailed description when taken in conjunction with the accompanying drawings wherein:
FIG. 1 shows an exemplary embodiment of the encoder for compressing video information;
FIG. 2 shows a graphical representation of modifying DCT coefficients for reducing the computation load during decoding in accordance with the present invention;
FIG. 3 shows a graphical representation of modifying DCT coefficients prior to performing quantization in accordance with the present invention; and, FIG. 4 is a flow chart illustrating the process of reducing the computation load during decoding in accordance with the present invention.
In the following description, for purposes of explanation rather than limitation, specific details are set forth, such as the particular architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced in other embodiments, which depart from these specific details. For the purpose of simplicity and clarity, detailed descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to facilitate an understanding of this invention, a conventional method of compressing and decoding video data in accordance with the MPEG standard will be described briefly hereinafter. To reduce the high cost of video-compression codecs and to provide inter-operability of equipment between different manufacturers, a compression standard for video encoding and decoding was created by the Moving Picture Experts Group (MPEG). The MPEG standard is defined in International Standard ISO/IEC 11172-1, "Information Technology-Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s", Parts 1, 2 and 3, First edition 1993-08-01, which is hereby incorporated by reference in its entirety.
Under the MPEG standard, there are three types of frames of video information: intra-frames (I frame), forward-predicted frames (P frame), and bi-directional-predicted frames (B frame). The I frame is periodically coded, i.e., one independently coded frame for each group of frames. A prediction is made of the composition of a video frame, the P frame, located a specific number of frames forward, before the next reference frame. The B frame is predicted between two reference frames, or by interpolating (averaging) a macro-block in the past reference frame with a macro-block in the future reference frame. The motion vector is also encoded, which specifies the relative position of a macro-block within a reference frame with respect to the macro-block within the current frame. The current frame may be encoded based on a previous frame and a subsequent frame. As such, one frame is encoded based on the MPEG encoding convention, and then other frames relating to that frame are encoded based on the differences from that frame, except the I frame, which is always coded independently of other frames.
In motion-compensation-based video coding, motion vectors can have integer values (i.e., full-pel coding), in which the values of pixels in the current frame are specified in terms of the values of actual pixels in the reference frame, or half-integer values (i.e., half-pel coding), quarter-integer values (i.e., quarter-pel coding), and fractional values (i.e., fractional-pel coding), in which the values of pixels in the current frame are specified in terms of "virtual" pixels that are interpolated from existing pixels in the reference frame. The half-pel motion compensation, as well as the quarter-pel and the fractional-pel motion compensation, is more computationally intensive than the full-pel motion compensation, as the decoder has to interpolate a macro-block from the previous macro-block referenced by the motion vector using the half-, quarter-, or fractional-pel grids, respectively.
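The cost difference between full-pel and half-pel prediction can be illustrated with a short sketch (illustrative Python, not part of the described system): full-pel prediction is a plain block copy, while half-pel prediction must average two or four neighbouring reference pixels, MPEG-style, for every predicted pixel.

```python
import numpy as np

def full_pel_predict(ref, y, x, block=8):
    # Full-pel motion compensation: a plain copy at integer pixel offsets.
    return ref[y:y + block, x:x + block].copy()

def half_pel_predict(ref, y2, x2, block=8):
    # Half-pel motion compensation: (y2, x2) is the motion vector in
    # half-pel units; "virtual" pixels are averaged from 2 or 4 neighbours.
    y, x = y2 // 2, x2 // 2
    dy, dx = y2 % 2, x2 % 2
    a = ref[y:y + block + 1, x:x + block + 1].astype(np.int32)
    p = a[:block, :block]
    if dx:                       # horizontal half-pel position
        p = p + a[:block, 1:block + 1]
    if dy:                       # vertical half-pel position
        p = p + a[1:block + 1, :block] + (a[1:block + 1, 1:block + 1] if dx else 0)
    shift = dx + dy
    return ((p + (1 << shift) // 2) >> shift).astype(np.uint8)
```

When both vector components are integers in half-pel units, the interpolation collapses to the full-pel copy, which is precisely the simplification a reduced-complexity decoder exploits.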
Frames are coded using a discrete-cosine-transform (DCT) coding scheme, which encodes each coefficient as the amplitude of a specific cosine basis function. The DCT coefficients are quantized and further coded using variable-length or run-length encoding. Upon receiving the compressed coded frames, a decoder decodes the macro-block of a present P frame by performing motion compensation with a motion vector applied to a corresponding macro-block of a past reference frame. The decoder also decodes a macro-block of a B frame by performing motion compensation with motion vectors applied to respective past and future reference frames. A macro-block of pixels in one frame can be obtained by translating a macro-block of pixels in a previous or subsequent frame. The amount of translation is referred to as the motion vector. As the I frame is encoded as a single image with no reference to any past or future frame, no motion processing is necessary when decoding the I frame.
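The DCT stage can be sketched directly from its definition (illustrative Python; production encoders use fast factorizations rather than this matrix form):

```python
import numpy as np

def dct2(block):
    # 2-D 8x8 DCT-II from the textbook definition: X = A @ x @ A.T, where
    # A[u, i] = c(u) * cos((2*i + 1) * u * pi / 16) is the orthonormal basis.
    N = 8
    x = np.asarray(block, dtype=np.float64)
    u = np.arange(N)
    M = np.cos((2 * u[None, :] + 1) * u[:, None] * np.pi / (2 * N))
    c = np.full(N, np.sqrt(2.0 / N))
    c[0] = np.sqrt(1.0 / N)
    A = c[:, None] * M
    return A @ x @ A.T
```

Because the transform is orthonormal, the total energy of the block is preserved, which is what makes the energy-based coefficient masking described later in this document meaningful.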
Motion compensation, as described above, is one of the most computationally intensive operations in many common video-decompression methods, especially when fractional motion vectors are used. As such, video systems unable to keep up with the computational demands of the above decompression burden frequently drop entire frames. This is sometimes observable as a momentary freeze of the picture in the video playback, followed by sudden discontinuities or jerkiness in the picture. To reduce the processing requirements associated with decompression methods on the decoding end, various decoding complexity reduction methods have been developed. The present invention provides a novel way to construct the compressed video bitstream such that decoding complexity can be reduced while maintaining the quality of the resulting video image.
FIG. 1 illustrates a simplified block diagram of the encoder 10 according to an embodiment of the present invention for encoding video signals. The encoder 10 includes a motion-estimation-and-prediction module 12, a DCT module 14, a quantization module 16, a zigzag-scanning module 18, a run-length-coding module 20, an inverse-quantization module 22, an inverse-DCT module 24, and a reference-frame-buffer module 26. In the preferred embodiment of the present invention, the encoder 10 may be implemented in the central processing unit of a personal computer, workstation, personal-digital-assistant (PDA), handheld computer, and/or an integrated circuit such as a microprocessor, digital-signal processor, micro-controller, micro-computer and/or any other device that manipulates digital information based on programming instructions. The reference-frame buffer 26 may be a hard-drive memory, random-access memory, read-only memory, external memory and/or any other device that stores digital information. Note that the set of modules comprising the motion-compensation-and-prediction module 12, the inverse-quantization module 22, and the inverse-DCT module 24 is generally known as an "embedded decoder." These decoding modules are well known to those of ordinary skill in the art; thus, descriptions thereof are omitted to avoid redundancy. The input-video signals (N) received by the encoder 10 may be signals from a camcorder, a DVD player, a VCR, a television tuner and/or any other device that receives digital information. This stream of video information is converted from analog signals to digital signals, then applied to the encoder 10. The input-video signals are further divided into a plurality of layers such that each picture corresponding to a sheet of image (frame) is divided into a plurality of slices, and each slice consists of a plurality of macro-blocks disposed in a line from left to right and from top to bottom.
Each of the macro-blocks consists of six components: four brightness components Y1 through Y4, representative of the brightness of the four 8x8 pixel blocks constituting the 16x16-pixel macro-block, and two color-difference components Cb and Cr (U, V) of 8x8 pixel blocks for the same macro-block. The division of the picture into block units improves the ability to smooth out the changes between two successive pictures.
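The six-block structure can be sketched as follows (illustrative Python, assuming 4:2:0 chroma subsampling, so one 8x8 chroma block covers the whole 16x16 luma area):

```python
import numpy as np

def macroblock_blocks(y_frame, cb_frame, cr_frame, my, mx):
    # Split the macro-block at grid position (my, mx) into its six 8x8
    # component blocks: Y1..Y4 from the 16x16 luma area, plus one Cb and
    # one Cr block (4:2:0: chroma subsampled by 2 in each direction).
    y0, x0 = my * 16, mx * 16
    y = y_frame[y0:y0 + 16, x0:x0 + 16]
    luma = [y[:8, :8], y[:8, 8:], y[8:, :8], y[8:, 8:]]   # Y1..Y4
    cy, cx = my * 8, mx * 8
    chroma = [cb_frame[cy:cy + 8, cx:cx + 8],             # Cb
              cr_frame[cy:cy + 8, cx:cx + 8]]             # Cr
    return luma + chroma
```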
As shown in FIG. 1, the motion-estimation-and-prediction module 12 is adapted to receive the input-video signals (N) for estimating motion vectors to provide an offset from the coordinates of a current picture block to the coordinates of a reference frame. The motion vectors can be used later to reconstruct a current frame from a previous reference frame, as the motion information reduces the high redundancy that exists between the consecutive frames of a video sequence. In addition, an estimate of the current frame can be obtained using the previously-decoded frame and the motion vectors. Motion compensation involves a prediction that uses motion vectors to provide offsets into the past and/or future reference frames containing previously-decoded sample values that are used to form the prediction error.
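Motion estimation of the kind performed by module 12 can be sketched as an exhaustive block-matching search (illustrative Python; real encoders use hierarchical or fast search strategies, and the SAD criterion here is one common but not mandated choice):

```python
import numpy as np

def estimate_motion(cur, ref, by, bx, block=8, search=4):
    # Exhaustive block matching: find the offset (dy, dx) within +/- search
    # whose reference block best matches the current block (minimum sum of
    # absolute differences, SAD).
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(target - cand).sum())
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best
```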
Each of the 8x8 pixel blocks of the macro-blocks is then subjected to a discrete-cosine-transform (DCT) to generate an 8x8 block of DCT coefficients, one for each of the 64 frequency bands therein. Note that each coefficient can be treated independently without loss of compression efficiency. The resulting 8x8 block of DCT coefficients is received by the quantization module 16. In the preferred embodiment of the present invention, the amount of information to encode for each macroblock is also adjusted by processing some high-frequency DCT coefficients according to predetermined criteria (explained later with reference to FIGs. 2 and 3) prior to the quantization. The DCT coefficients in an 8x8 pixel block are then divided by a corresponding coding parameter, i.e., a quantization weight. The quantization weights for a given 8x8 pixel block are expressed in terms of an 8x8 quantization matrix. Thereafter, additional calculations are effected on the DCT coefficients to take into account the quantizer scale value for each macroblock. Note that the quantization scale value corresponds to the degree of spatial adjustment of quantization precision, which can vary from macroblock to macroblock. The adjustment often exploits the human eye's varying sensitivity to coding artifacts, depending on the visual content of each macroblock. By quantizing the DCT coefficients with such a quantization value, many of the DCT coefficients are converted to zeros, thus improving image-compression efficiency.
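The quantization step can be sketched as follows (a simplified model: real MPEG-2 quantization has distinct intra/non-intra rounding rules and special intra-DC handling, so the 32/(2·qscale·W) form below is only a common approximation):

```python
import numpy as np

def quantize(F, W, qscale):
    # Divide each DCT coefficient F by its matrix weight W times the
    # quantizer scale; a large weight (e.g. 255 for masked rows/columns)
    # drives the quantized value to zero.
    return np.rint(32.0 * F / (2.0 * qscale * W)).astype(int)

def dequantize(q, W, qscale):
    # Approximate inverse, as used by the embedded decoder.
    return (2 * q * qscale * W) // 32
```

With an ordinary weight the coefficient survives round-tripping; with weight 255 a moderate coefficient is already quantized to zero, which is exactly the masking effect exploited later in this document.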
The resulting 8x8 block of quantized DCT coefficients is received by the zigzag-scanning module 18, where the two-dimensional block of quantized coefficients is scanned in a "zig-zag" order to convert it into a one-dimensional string of quantized DCT coefficients. Thereafter, the run-length-coding module 20 encodes the string of quantized DCT coefficients, macro-block type, and motion vectors using run-length coding, to improve coding efficiency by encoding a run of symbols with a single symbol. Note that this coding scheme is well known in the art, and that other well-known coding schemes, such as Huffman coding, may be deployed in accordance with the techniques of the present invention. The output of the run-length-coding module 20 is a bitstream BS. The manner in which the encoder adjusts the information in the high-frequency DCT coefficients prior to quantization, so as to produce a bitstream suitable for variable-complexity decoding according to the present invention, will now be explained in detail.
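The zig-zag scan and run-length stages can be sketched as follows (illustrative Python; the real MPEG run/level variable-length-code tables are considerably more involved):

```python
# Standard 8x8 zig-zag order: walk the anti-diagonals, alternating direction.
ZIGZAG = sorted(((i, j) for i in range(8) for j in range(8)),
                key=lambda p: (p[0] + p[1],
                               p[0] if (p[0] + p[1]) % 2 else p[1]))

def zigzag_scan(block):
    # Serialize an 8x8 block so low-frequency coefficients come first.
    return [block[i][j] for i, j in ZIGZAG]

def run_length(seq):
    # Encode as (zero-run, value) pairs; trailing zeros are simply dropped,
    # which is why zeroing high-frequency coefficients shortens the stream.
    pairs, run = [], 0
    for v in seq:
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    return pairs
```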
Referring to FIG. 2, a picture is divided into 8x8 pixel blocks, and the DCT is performed by the DCT module 14 in units of blocks to obtain 8x8 transform coefficients. The resulting 8x8 block of DCT coefficients then undergoes a mask transformation (hereinafter referred to as a "DCT mask") in which the DCT coefficients are analyzed to obtain a rectangle that contains a specific percentage (α) of the energy of the DCT coefficients. This rectangle is obtained by first calculating the total energy of the DCT block, except the lowest 2x2 DCT coefficients. Then, the last or eighth-column frequency component is discarded, and the energy level excluding the lowest 2x2 DCT coefficients is calculated again, as shown in FIG. 2(a). At this time, it is determined whether the newly-calculated energy level has fallen to a predetermined percentage, for example, 85% of the total energy. If not, the last or eighth-row frequency component is also discarded, and the remaining energy level of the DCT block, excluding the lowest 2x2 DCT coefficients, is recalculated, as shown in FIG. 2(b). If the newly-calculated energy level still does not fall under the predetermined percentage, the seventh-column frequency component is discarded, as shown in FIG. 2(c), and a new energy level of the remaining DCT block, excluding the lowest 2x2 DCT coefficients, is calculated again. In this manner, additional column- and row-frequency components are discarded alternately until the desired energy level is obtained.
Referring to FIG. 3, once the desired energy level is acquired, the column- and row-frequency components that were discarded when calculating the desired energy level are assigned a relatively high quantization-matrix weight, for example, 255. By quantizing the DCT coefficients with such a high value, those DCT coefficients are effectively converted to zeros during quantization. Discarding the high-frequency DCT coefficients will not cause significant prediction drift, as a similar operation is performed during encoding.
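A sketch of how the discarded rows and columns receive the high weight (illustrative Python; `(i_a, j_a)` denotes the retained rectangle obtained from the energy analysis above):

```python
import numpy as np

def masked_qmatrix(W, i_a, j_a, big=255):
    # Keep the original weights inside the retained rectangle
    # (rows 0..i_a, columns 0..j_a); every discarded row/column gets the
    # weight `big`, so those coefficients quantize to zero.
    Wp = np.full_like(W, big)
    Wp[:i_a + 1, :j_a + 1] = W[:i_a + 1, :j_a + 1]
    return Wp
```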
Note that pictures decoded with reduced complexity do not have as much high-frequency information as pictures on the encoder side, and this mismatch tends to contribute to prediction drift. If reference pictures used on the encoder side can be made closer to pictures resulting from partial-quality decoding, the prediction drift can be reduced. Thus, in the preferred embodiment of the present invention, it is necessary to give different weights to the type of picture being encoded, as some pictures tend to contribute less to the prediction drift. Those skilled in the art will appreciate that later P pictures in the current Group of Pictures (GOP) contribute less to the prediction drift than I pictures, as there are fewer frames for error propagation before the next intra-picture. As such, the quantization matrix or DCT mask for later pictures in a GOP may be designed with less consideration for the prediction drift when performing reduced-complexity decoding according to the present invention. Similarly, the quantization matrix or DCT mask for I pictures, the first picture in a GOP, should be designed with the most consideration for the prediction drift in the event of partial-quality decoding. Further, as B pictures are not used as reference pictures, no DCT mask is needed for them. Accordingly, during encoding the quantization matrix for B pictures may stay the same as that for optimized single-layer encoding. Note further that another way to reduce decoding complexity is by simplifying motion compensation, as the motion compensation tends to be a computationally intensive operation. Thus, in the preferred embodiment of the present invention, a coarser-precision motion compensation (MC) during encoding can be applied to further facilitate variable-complexity decoding. For example, if ½-pel MC is used for encoding (as in MPEG-2), the decoder may be forced to apply only full-pel motion compensation, without interpolating ½-pel-position pixel values, when non-integer motion vectors are used for the current macro-block during computation peak-load periods. Similarly, if ¼-pel MC is used for encoding, ½-pel MC, full-pel MC, or any combination thereof, can be applied for decoding. Accordingly, prediction drift from reduced-complexity decoding can be eliminated or reduced if simplified motion prediction is utilized during encoding for only the P pictures. However, motion prediction is not performed for I pictures, and thus this simplification is not applicable to them. Also, as B pictures are not used as future reference pictures, the B pictures can be encoded with full-complexity, higher-precision motion compensation. In view of the above considerations, the following flow chart of FIG. 4 shows the encoding-operation process that is capable of reducing decoding complexity according to the embodiment of the present invention.
Referring to FIG. 4, upon receipt of the input-video information by the encoder 10 in step 100, the type of picture is detected in step 102, so that a different encoding mode can be applied according to the picture classification, as described hereinafter.
I. The current picture is encoded as an intra (I) picture.
The I frame picture is partitioned into 8x8 blocks, and the DCT is performed in step 104, where each coefficient is denoted C_ij, with i and j ranging from 0 to 7. In step 106, the DCT coefficients are analyzed to obtain a rectangle that contains a fraction α_I of the energy of the DCT coefficients, excluding the lowest 2x2 DCT coefficients. In mathematical terms, the rectangle (i_α, j_α) is obtained as follows:
First, calculate the total energy of the DCT block except the lowest 2x2 coefficients:

E = Σ_{i=0..7} Σ_{j=0..7} C_ij^2 - Σ_{i=0..1} Σ_{j=0..1} C_ij^2

Next, calculate Ea = α_I · E, and set the initial row and column numbers: i = j = 7.

Next, iterate through the following steps:

compute E = E - Σ_{k=0..i} C_kj^2 (this step discards the far-right column-j frequency components); if E <= Ea, stop and output (i, j-1); otherwise continue with the next step;

compute E = E - Σ_{k=0..j-1} C_ik^2 (this step discards the lowest row-i frequency components, column j having already been discarded); if E <= Ea, stop and output (i-1, j-1); otherwise set i = i-1, j = j-1 and repeat.
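The iteration above can be sketched as follows (illustrative Python, following the column-then-row discarding order of the steps above):

```python
import numpy as np

def find_rectangle(C, alpha):
    # Shrink the retained DCT rectangle by discarding the last column and
    # last row alternately until the remaining energy (excluding the
    # lowest 2x2 coefficients) falls to alpha times the total.
    C2 = C.astype(np.float64) ** 2
    E = C2.sum() - C2[:2, :2].sum()      # total energy minus lowest 2x2
    Ea = alpha * E
    i = j = 7
    while True:
        E -= C2[:i + 1, j].sum()         # discard far-right column j
        if E <= Ea:
            return i, j - 1
        E -= C2[i, :j].sum()             # discard bottom row i (col j gone)
        if E <= Ea:
            return i - 1, j - 1
        i -= 1
        j -= 1
```

For example, a block whose out-of-2x2 energy sits in coefficients (0,7), (7,0) and (3,3) in equal parts needs one column and one row discarded to fall to half the energy, yielding the rectangle (6, 6).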
After obtaining the rectangle yielding the desired energy level, the quantization matrix for the corresponding rectangle is calculated in step 108. Assume the quantization matrix for full encoding is {W_ij}, where i, j range from 0 to 7. An example of such a matrix is the conventional MPEG-coding-scheme (i.e., Test Model 5 (TM5)) matrix for intra-pictures, in which the coding mode for each macro-block is selected by comparing the energy of the predictive residuals (error signal). Namely, the intra/inter-mode decision is determined by comparison of the variance of the macro-block pixels against the variance of the predictive residuals for each coding mode.
The quantization matrix for partial encoding of the current intra-pictures is computed as:

W'_ij = W_ij for i <= i_α and j <= j_α; otherwise W'_ij = 255.
Finally, in step 110, the modified quantization matrix is encoded, and the encoded signals are forwarded to the decoding side.
II. The current picture is encoded as a predictive (P) picture.
If a P picture is detected in step 102, the P picture undergoes a full-pel (reduced-resolution) motion prediction in step 120. The operation of steps 122 through 128 is essentially the same as that described above with respect to steps 104 through 110; hence, the discussion of similar components described in the preceding paragraphs is omitted to avoid redundancy. The only notable difference is that the setting of α_P depends on the location of this P picture in the current GOP. The later a P picture occurs in the GOP, the lower the prediction drift that results, as fewer subsequent frames remain before an I picture resets the prediction cycle. We denote the GOP size (the number of pictures in a Group of Pictures) as N and the relative position of the P picture in the GOP as n. α_P is computed as α_P = α_I^(n+1), where n ranges from M-1 to N-1.
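The position-dependent energy fraction can be sketched as follows; note that the exponent form α_P = α_I^(n+1) is a reconstruction of a partly garbled formula in the source, chosen because it has the qualitative behaviour the text describes: later P pictures (larger n) retain a smaller fraction of high-frequency energy.

```python
def alpha_p(alpha_i, n):
    # Assumed form: alpha_P = alpha_I ** (n + 1). With alpha_I < 1 this
    # decreases as n grows, so later P pictures in the GOP are masked more
    # aggressively, matching the reduced drift they can cause before the
    # next I picture resets the prediction cycle.
    return alpha_i ** (n + 1)
```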
III. The current picture is encoded as a bi-directional-predicted picture.
If the current picture is encoded as a bi-directional picture, normal encoding is performed, as B pictures do not contribute to the prediction drift on the decoder side; therefore, no encoding-time complexity-reduction preparation is needed. The foregoing has disclosed a method and system for reducing the processing requirements associated with decompression methodology by adaptively changing the encoding mode in a video encoder. It should be apparent to those skilled in the art that certain advantages of the method and system have been achieved by increasing the decompression efficiency while not overly degrading the ultimate video image. By designing the encoding process such that a certain compression efficiency is maintained during encoding, a complexity reduction can be achieved with less quality degradation from prediction drift during decoding, when compared to a normal encoding process optimized for fixed full-quality decoding. While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes and modifications may be made and equivalents substituted for elements thereof without departing from the true scope of the present invention. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out the present invention, but that the present invention include all embodiments falling within the scope of the appended claims.

Claims

CLAIMS:
1. A method (10) for encoding a stream of input-video image having at least one macro-block, the method comprising the steps of:
- generating (12) a motion vector for each macro-block of said input-video image; - generating (12) a predictive-image signal of said input-video image based on said motion vector;
- transforming (14) said predictive-image signal into a two-dimensional array of DCT coefficients;
- modifying (14) the array of said DCT coefficients into a set of new DCT coefficients according to predetermined criteria;
- quantizing (16) said new DCT coefficients into a quantized DCT value;
- performing (18) a zigzag scan to read each row of said new DCT coefficients into a serial chain; and,
- coding (20) said serial chain from said zigzag scan and said motion vector to produce an encoded macro-block.
2. The method of claim 1, wherein the step of modifying the array of said DCT coefficients comprises the steps of:
- discarding the last-column DCT coefficients and the last-row DCT coefficients alternately until the energy level of said DCT coefficients reaches a predetermined energy level; and,
- assigning said discarded column or row with a predetermined value.
3. The method of claim 1, wherein the array of DCT coefficients is an 8x8 matrix.
4. The method of claim 1, wherein the step of transforming said predictive-image signal into the array of said DCT coefficients comprises the conversion of said DCT coefficients from a time domain into a frequency domain.
5. The method of claim 1, wherein the step of modifying the array of said DCT coefficients comprises the steps of:
- calculating a total energy level of said DCT coefficients excluding the lowest 2x2 DCT coefficients;
- discarding the last column and the last row of said DCT coefficients alternately until the total energy level of said DCT coefficients reaches a predetermined energy level; and,
- assigning said discarded column and row to a predetermined value.
6. The method of claim 1, wherein the step of coding is performed according to a predetermined encoding-configuration table.
7. The method of claim 1, further comprising the steps of:
- dequantizing said quantized DCT value to generate dequantized, decoded data; - transforming said dequantized, decoded data from a frequency domain to a spatial domain to produce reference data; and,
- performing a motion compensation on said reference data to produce motion-compensation pictures.
8. The method of claim 1, further comprising the step of performing a full-pel motion compensation on a predictive (P) picture of said input-video image prior to the step of transforming said predictive-image signal into the array of said DCT coefficients.
9. The method of claim 1, further comprising the step of forwarding said encoded macro-block to a decoding process.
10. A method (10) for encoding input-video image having at least one macro-block, the method comprising the steps of: (a) generating (12) at least one motion vector and at least one motion-compensation-prediction value for each macro-block of said input-video image;
(b) transforming (14) said prediction value into a set of DCT coefficients;
(c) modifying (14) the set of said DCT coefficients into a set of new DCT coefficients according to predetermined criteria;
(d) quantizing (16) the set of said DCT coefficients; and,
(e) coding (20) said quantized set of said coefficients and said motion vector to produce an encoded macro-block.
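Step (d) of claim 10 quantizes the set of DCT coefficients produced by steps (b) and (c). A hedged sketch of a plain uniform quantizer, assuming an MPEG-style quantization matrix scaled by a quantizer scale (both illustrative choices; the claim does not fix the quantizer design):

```python
def quantize(coeffs, qmat, qscale):
    """Uniformly quantize a 2-D array of DCT coefficients (a sketch of
    step (d) of claim 10). qmat is an illustrative per-frequency
    quantization matrix and qscale a global scale factor."""
    return [[int(round(c / (q * qscale)))          # divide and round each coefficient
             for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qmat)]
```

Larger matrix entries (or a larger qscale) quantize high-frequency coefficients more coarsely, which is what makes the subsequent coding step (e) compact.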
11. The method of claim 10, further comprising the step of determining (102) the type of picture from said input-video image.
12. The method of claim 11, wherein, if a predictive picture is detected from said input-video image, performing a full-pel motion compensation on said input-video image prior to performing said step (b) of transforming said prediction value into the set of said DCT coefficients.
13. The method of claim 11, further comprising converting said motion vector to a full-pel motion vector when a predictive picture is detected from said input-video image.
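Claim 13 converts the motion vector to a full-pel vector when a predictive picture is detected, so a reduced-complexity decoder can skip sub-pel interpolation. One simple way to express this, assuming the input vector is given in half-pel units and that truncation toward zero is the rounding rule (the claim does not specify either):

```python
def to_full_pel(mv):
    """Convert a motion vector in half-pel units to full-pel units
    (illustrative sketch of claim 13; the rounding rule is an assumption)."""
    x, y = mv
    # int() truncates toward zero, so (3, -3) half-pel becomes (1, -1) full-pel
    return (int(x / 2), int(y / 2))
```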
14. An encoding system (10) for encoding an input-video image, comprising:
- a motion-estimation-prediction module (12) configured to receive said input-video image and to generate at least one motion vector and at least one prediction-error signal;
- a discrete-cosine-transformer (DCT) module (14) coupled to the output of said motion-estimation module (12) for transforming said prediction-error signal into a two-dimensional array of DCT coefficients, said DCT coefficients being modified according to the classification of a picture type from said input-video image;
- a quantization module (16), coupled to said DCT module, for quantizing said modified DCT coefficients to produce quantized coefficients;
- a zig-zag scanning module (18), coupled to said quantization module, configured to convert said quantized coefficients into a serial representation; and,
- a coding module (20) for coding said quantized set of said coefficients and said motion vector to produce an encoded macro-block.
15. The encoding system of claim 14, further comprising:
- an inverse-quantization module (22) coupled to receive and inverse-quantize the quantized coefficients received from said quantization module;
- an inverse-discrete-cosine transformer (IDCT) (24) coupled to the output of said inverse-quantization module (22) for transforming the dequantized coefficients from a frequency domain to a spatial domain; and,
- a reference-frame-buffer (26) for receiving output signals from said IDCT to form motion-compensated pictures.
16. A machine-readable medium having stored thereon data representing sequences of instructions which, when executed by a processor, cause the processor to:
- generate (12) a motion vector for each macro-block of said input-video image;
- generate (12) a predictive-image signal of said input-video image based on said motion vector;
- transform (14) said predictive-image signal into a two-dimensional array of DCT coefficients;
- modify (14) the array of said DCT coefficients into a set of new DCT coefficients according to predetermined criteria;
- quantize (16) said new DCT coefficients into a quantized DCT value;
- perform (18) a zigzag scan to read each row of said new DCT coefficients into a serial chain; and,
- code (20) said serial chain from said zigzag scan and said motion vector to produce an encoded macro-block.
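The zigzag scan of step (18) serializes the two-dimensional coefficient array so that low-frequency coefficients come first in the serial chain. A sketch using the conventional JPEG/MPEG zig-zag order (the claims do not spell out the exact scan order, so the order below is an assumption):

```python
def zigzag_scan(block):
    """Serialize an NxN coefficient block along anti-diagonals in the
    conventional zig-zag order, lowest frequencies first (a sketch of
    step (18) of claim 16)."""
    n = len(block)
    # Sort positions by anti-diagonal (r + c); within each diagonal the
    # traversal direction alternates, which the secondary key encodes.
    order = sorted(((r, c) for r in range(n) for c in range(n)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    return [block[r][c] for r, c in order]
```

Scanning this way groups the (usually zero) high-frequency coefficients at the tail of the chain, where run-length coding in step (20) compresses them efficiently.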
PCT/IB2003/001608 2002-04-22 2003-04-15 System and method for providing single-layer video encoded bitstreams suitable for reduced-complexity decoding WO2003090469A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP03712604A EP1500279A2 (en) 2002-04-22 2003-04-15 System and method for providing single-layer video encoded bitstreams suitable for reduced-complexity decoding
JP2003587114A JP4393875B2 (en) 2002-04-22 2003-04-15 System and method for providing a single layer video encoded bitstream suitable for reduced complexity decoding
AU2003216693A AU2003216693A1 (en) 2002-04-22 2003-04-15 System and method for providing single-layer video encoded bitstreams suitable for reduced-complexity decoding
KR10-2004-7016955A KR20040106364A (en) 2002-04-22 2003-04-15 System and method for providing single-layer video encoded bitstreams suitable for reduced-complexity decoding

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/127,528 2002-04-22
US10/127,528 US6985635B2 (en) 2002-04-22 2002-04-22 System and method for providing a single-layer video encoded bitstreams suitable for reduced-complexity decoding

Publications (2)

Publication Number Publication Date
WO2003090469A2 true WO2003090469A2 (en) 2003-10-30
WO2003090469A3 WO2003090469A3 (en) 2004-02-05

Family

ID=29215287

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2003/001608 WO2003090469A2 (en) 2002-04-22 2003-04-15 System and method for providing single-layer video encoded bitstreams suitable for reduced-complexity decoding

Country Status (7)

Country Link
US (1) US6985635B2 (en)
EP (1) EP1500279A2 (en)
JP (1) JP4393875B2 (en)
KR (1) KR20040106364A (en)
CN (1) CN1647540A (en)
AU (1) AU2003216693A1 (en)
WO (1) WO2003090469A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100405851C (en) * 2005-11-18 2008-07-23 宁波中科集成电路设计中心有限公司 Motion vector prediction multiplex design method in multi-mode standard decoder
US8848789B2 (en) 2006-03-27 2014-09-30 Qualcomm Incorporated Method and system for coding and decoding information associated with video compression

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050185852A1 (en) * 2004-02-20 2005-08-25 Jiliang Song Method and apparatus to generate complex borders
KR100694059B1 (en) * 2004-09-30 2007-03-12 삼성전자주식회사 Method and apparatus for encoding and decoding in inter mode based on multi time scan
CN101061723A (en) * 2004-11-22 2007-10-24 皇家飞利浦电子股份有限公司 Motion vector field projection dealing with covering and uncovering
US8780957B2 (en) * 2005-01-14 2014-07-15 Qualcomm Incorporated Optimal weights for MMSE space-time equalizer of multicode CDMA system
WO2006099082A2 (en) * 2005-03-10 2006-09-21 Qualcomm Incorporated Content adaptive multimedia processing
EP1768415A1 (en) * 2005-09-27 2007-03-28 Matsushita Electric Industrial Co., Ltd. Adaptive scan order of DCT coefficients and its signaling
US20090028239A1 (en) * 2005-05-03 2009-01-29 Bernhard Schuur Moving picture encoding method, moving picture decoding method and apparatuses using the methods
EP1768416A1 (en) * 2005-09-27 2007-03-28 Matsushita Electric Industrial Co., Ltd. Frequency selective video compression and quantization
US9113147B2 (en) * 2005-09-27 2015-08-18 Qualcomm Incorporated Scalability techniques based on content information
US8654848B2 (en) * 2005-10-17 2014-02-18 Qualcomm Incorporated Method and apparatus for shot detection in video streaming
US20070206117A1 (en) * 2005-10-17 2007-09-06 Qualcomm Incorporated Motion and apparatus for spatio-temporal deinterlacing aided by motion compensation for field-based video
US8948260B2 (en) * 2005-10-17 2015-02-03 Qualcomm Incorporated Adaptive GOP structure in video streaming
US20070171280A1 (en) * 2005-10-24 2007-07-26 Qualcomm Incorporated Inverse telecine algorithm based on state machine
US8780997B2 (en) * 2005-11-18 2014-07-15 Apple Inc. Regulation of decode-side processing based on perceptual masking
US20070147496A1 (en) * 2005-12-23 2007-06-28 Bhaskar Sherigar Hardware implementation of programmable controls for inverse quantizing with a plurality of standards
US8102916B1 (en) 2006-01-12 2012-01-24 Zenverge, Inc. Dynamically changing media compression format in compressed domain
US7830800B1 (en) * 2006-01-12 2010-11-09 Zenverge, Inc. Architecture for combining media processing with networking
US9131164B2 (en) * 2006-04-04 2015-09-08 Qualcomm Incorporated Preprocessor method and apparatus
US8208553B2 (en) * 2006-05-04 2012-06-26 Altera Corporation Methods and apparatus for quarter-pel refinement in a SIMD array processor
US8311114B1 (en) 2006-12-06 2012-11-13 Zenverge, Inc. Streamlined transcoder architecture
WO2009032255A2 (en) * 2007-09-04 2009-03-12 The Regents Of The University Of California Hierarchical motion vector processing method, software and devices
US8265168B1 (en) 2008-02-01 2012-09-11 Zenverge, Inc. Providing trick mode for video stream transmitted over network
WO2009097284A1 (en) * 2008-02-01 2009-08-06 Zenverge, Inc. Intermediate compression of reference frames for transcoding
US9326004B2 (en) * 2008-06-03 2016-04-26 Broadcom Corporation Reduced memory mode video decode
KR101775555B1 (en) * 2010-03-02 2017-09-07 삼성전자주식회사 Method and apparatus for generating video packet
KR20120012301A (en) * 2010-07-30 2012-02-09 삼성전자주식회사 Method and apparatus for transmitting and receiving extended broadcast service in digital broadcasting
TWI470999B (en) * 2012-06-19 2015-01-21 Wistron Corp Method, apparatus, and system for bitstream editing and storage
US8861877B2 (en) 2012-07-19 2014-10-14 Omnivision Technologies, Inc. System and method for improving decoder performance by using multiple decoding channels

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0577363A1 (en) * 1992-06-30 1994-01-05 Loral Aerospace Corporation Compression and reconstruction of radiological images
US5598484A (en) * 1991-08-30 1997-01-28 Fuji Xerox Co., Ltd. Apparatus for encoding an image signal
US6049629A (en) * 1992-03-23 2000-04-11 Canon Kabushiki Kaisha Coding apparatus for coding image data using one of an interpicture coding method and an interpicture motion-compensated coding method


Also Published As

Publication number Publication date
AU2003216693A1 (en) 2003-11-03
EP1500279A2 (en) 2005-01-26
CN1647540A (en) 2005-07-27
JP2005523658A (en) 2005-08-04
WO2003090469A3 (en) 2004-02-05
AU2003216693A8 (en) 2003-11-03
KR20040106364A (en) 2004-12-17
JP4393875B2 (en) 2010-01-06
US20030198293A1 (en) 2003-10-23
US6985635B2 (en) 2006-01-10

Similar Documents

Publication Publication Date Title
US6985635B2 (en) System and method for providing a single-layer video encoded bitstreams suitable for reduced-complexity decoding
US10687075B2 (en) Sub-block transform coding of prediction residuals
JP3149418B2 (en) Image prediction decoding method and apparatus
US5796434A (en) System and method for performing motion estimation in the DCT domain with improved efficiency
US5731850A (en) Hybrid hierarchial/full-search MPEG encoder motion estimation
US8009734B2 (en) Method and/or apparatus for reducing the complexity of H.264 B-frame encoding using selective reconstruction
US7379501B2 (en) Differential coding of interpolation filters
US7822118B2 (en) Method and apparatus for control of rate-distortion tradeoff by mode selection in video encoders
US9071844B2 (en) Motion estimation with motion vector penalty
US7787541B2 (en) Dynamic pre-filter control with subjective noise detector for video compression
JP2005507589A (en) Spatial expandable compression
US8811493B2 (en) Method of decoding a digital video sequence and related apparatus
JPH0818979A (en) Image processor
US5614953A (en) Image signal decoding apparatus having an encoding error compensation
JP2001148852A (en) Image information converter and image information conversion method
JP3343554B1 (en) Image prediction decoding method and image prediction encoding device
JP3157144B1 (en) Picture prediction decoding method
JP2002359852A (en) Device and method for predictive decoding of image
JP3343553B1 (en) Image prediction coding apparatus and method
JP2001268571A (en) Moving image re-encoder
JP2001145108A (en) Device and method for converting image information

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2003712604

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2003587114

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 20038089769

Country of ref document: CN

Ref document number: 1020047016955

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 1020047016955

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 2003712604

Country of ref document: EP