US20150256854A1 - 4x4 transform for media coding

4x4 transform for media coding

Info

Publication number
US20150256854A1
US20150256854A1
Authority
US
United States
Prior art keywords
dct
idct
factor
apply
transform
Prior art date
Legal status
Abandoned
Application number
US14/717,618
Inventor
Yuriy Reznik
Current Assignee
Qualcomm Inc
Original Assignee
Qualcomm Inc
Priority date
Filing date
Publication date
Application filed by Qualcomm Inc
Priority to US14/717,618
Publication of US20150256854A1
Assigned to QUALCOMM INCORPORATED. Assignors: REZNIK, YURIY
Status: Abandoned

Classifications

    • H04N19/625: Coding of digital video signals using transform coding, using discrete cosine transform [DCT]
    • G06F17/147: Discrete orthonormal transforms, e.g. discrete cosine transform, discrete sine transform, and variations therefrom, e.g. modified discrete cosine transform, integer transforms approximating the discrete cosine transform
    • H04N19/124: Adaptive coding, quantisation
    • H04N19/42: Implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/45: Decoders performing compensation of the inverse transform mismatch, e.g. inverse discrete cosine transform [IDCT] mismatch
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/60: Coding of digital video signals using transform coding
    • H04N19/91: Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • H04N19/61: Transform coding in combination with predictive coding

Definitions

  • This disclosure relates to data compression and, more particularly, data compression involving transforms.
  • Data compression is widely used in a variety of applications to reduce consumption of data storage space, transmission bandwidth, or both.
  • Example applications of data compression include visible or audible media data coding, such as digital video, image, speech, and audio coding.
  • Digital video coding, for example, is used in a wide range of devices, including digital televisions, digital direct broadcast systems, wireless communication devices, personal digital assistants (PDAs), laptop or desktop computers, digital cameras, digital recording devices, video gaming devices, cellular or satellite radio telephones, or the like.
  • Digital video devices implement video compression techniques, such as MPEG-2, MPEG-4, or H.264/MPEG-4 Advanced Video Coding (AVC), to transmit and receive digital video more efficiently.
  • video compression techniques perform spatial prediction, motion estimation and motion compensation to reduce or remove redundancy inherent in video data.
  • intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame.
  • Inter-coding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames.
  • a video encoder performs motion estimation to track the movement of matching video blocks between two or more adjacent frames. Motion estimation generates motion vectors, which indicate the displacement of video blocks relative to corresponding video blocks in one or more reference frames. Motion compensation uses the motion vector to generate a prediction video block from a reference frame. After motion compensation, a residual video block is formed by subtracting the prediction video block from the original video block.
  • a video encoder then applies a transform followed by quantization and lossless statistical coding processes to further reduce the bit rate of the residual block produced by the video coding process.
  • the applied transform comprises a discrete cosine transform (DCT).
  • the DCT is applied to video blocks whose size is a power of two, such as a video block that is 4 pixels high by 4 pixels wide (which is often referred to as a “4×4 video block”).
  • These DCTs may therefore be referred to as 4×4 DCTs in that these DCTs are applied to 4×4 video blocks to produce a 4×4 matrix of DCT coefficients.
  • the 4×4 matrix of DCT coefficients produced from applying a 4×4 DCT to the residual block then undergoes quantization and lossless statistical coding processes (also known as “entropy coding” processes), such as context-adaptive variable length coding (CAVLC) or context-adaptive binary arithmetic coding (CABAC), to generate a bitstream.
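  • As a concrete illustration of the pipeline described above, the following sketch applies a 2-D 4×4 DCT-II to a hypothetical residual block and coarsely quantizes the result; it is a minimal numpy example for illustration only, and the residual values and quantization step are assumptions rather than values from any particular codec.

```python
import numpy as np

def dct2_matrix(n=4):
    """Orthonormal DCT-II matrix: row k holds the k-th cosine basis vector."""
    k = np.arange(n).reshape(-1, 1)          # frequency index
    i = np.arange(n).reshape(1, -1)          # sample index
    t = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    t[0, :] = np.sqrt(1.0 / n)               # DC row uses the smaller scale
    return t

T = dct2_matrix(4)

# Hypothetical 4x4 residual block (spatial domain, values are made up).
residual = np.array([[ 5, -3,  2,  0],
                     [ 4, -2,  1, -1],
                     [ 3, -1,  0,  0],
                     [ 2,  0, -1,  1]], dtype=float)

coeffs = T @ residual @ T.T                  # separable 2-D 4x4 DCT-II
qstep = 2.5                                  # illustrative quantization step
quantized = np.round(coeffs / qstep)         # lossy quantization
print(quantized)                             # these levels would be entropy coded
```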
  • a video decoder receives the encoded bitstream and performs lossless decoding to decompress residual information for each of the blocks. Using the residual information and motion information, the video decoder reconstructs the encoded video.
  • this disclosure is directed to techniques for coding data, such as media data, using one or more implementations of an approximation of a 4×4 discrete cosine transform (DCT) that may provide increased coding gain relative to conventional 4×4 DCTs.
  • the implementations of the 4×4 DCT applied in accordance with the techniques of this disclosure involve various relationships between scaled factors and internal factors.
  • “scaled factors” refers to factors external to the implementation of the 4×4 DCT that are removed through factorization.
  • “internal factors” refers to factors internal to the implementation of the 4×4 DCT that remain after factorization.
  • One example implementation of the 4×4 DCT is orthogonal, which implies that the matrix of coefficients representative of the 4×4 DCT, when multiplied by a transpose of this matrix, equals the identity matrix.
  • Another example implementation of the 4×4 DCT is near-orthogonal (or approximately orthogonal).
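  • A quick numerical check of the orthogonality property described above, assuming the exact (irrational) orthonormal 4×4 DCT-II matrix; this is an illustrative sketch, not the patent's integer implementation.

```python
import numpy as np

# Build the exact orthonormal 4x4 DCT-II matrix and verify that multiplying it
# by its transpose yields the identity matrix, i.e., the matrix is orthogonal.
n = 4
k = np.arange(n).reshape(-1, 1)
i = np.arange(n).reshape(1, -1)
T = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
T[0, :] = np.sqrt(1.0 / n)

print(np.allclose(T @ T.T, np.eye(n)))   # True for the exact (irrational) DCT-II
```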
  • an apparatus comprises a 4×4 discrete cosine transform (DCT) hardware unit that implements an orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S), wherein the 4×4 DCT hardware unit applies the 4×4 DCT implementation to media data to transform the media data from a spatial domain to a frequency domain.
  • a method comprises applying an orthogonal 4×4 discrete cosine transform (DCT) implementation with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain, wherein the orthogonal 4×4 DCT implementation includes an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S).
  • an apparatus comprises means for applying an orthogonal 4×4 discrete cosine transform (DCT) implementation to media data to transform the media data from a spatial domain to a frequency domain, wherein the orthogonal 4×4 DCT implementation includes an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S).
  • a non-transitory computer-readable storage medium stores instructions that, when executed by a processor, cause the processor to apply an orthogonal 4×4 discrete cosine transform (DCT) implementation with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain, wherein the orthogonal 4×4 DCT implementation includes an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S).
  • an apparatus comprises a 4×4 inverse discrete cosine transform (IDCT) hardware unit that implements an IDCT of an orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S), wherein the 4×4 IDCT hardware unit applies the 4×4 IDCT implementation to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain.
  • a method comprises applying a 4×4 inverse discrete cosine transform (IDCT) of an orthogonal 4×4 DCT with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain, wherein the orthogonal 4×4 DCT includes an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S).
  • an apparatus comprises means for applying a 4×4 inverse discrete cosine transform (IDCT) of an orthogonal 4×4 DCT to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain, wherein the orthogonal 4×4 DCT includes an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S).
  • a non-transitory computer-readable storage medium stores instructions that, when executed by a processor, cause the processor to apply a 4×4 inverse discrete cosine transform (IDCT) of an orthogonal 4×4 DCT with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain, wherein the orthogonal 4×4 DCT includes an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S).
  • an apparatus comprises a 4×4 discrete cosine transform (DCT) hardware unit, wherein the 4×4 DCT hardware unit implements a non-orthogonal 4×4 DCT having an odd portion that applies first and second variables (C, S) that are related to a scaled factor (ξ) by the following equation:
  • in this equation, variables C and S denote dyadic rational internal transform factors used in place of the original irrational internal transform factors in integer implementations of the non-orthogonal 4×4 DCT.
  • the 4×4 DCT hardware unit applies the 4×4 DCT implementation to media data to transform the media data from a spatial domain to a frequency domain.
  • a method comprises applying a non-orthogonal 4×4 discrete cosine transform (DCT) with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain, wherein the non-orthogonal 4×4 DCT includes an odd portion that applies first and second variables (C, S) that are related to a scaled factor (ξ) by the following equation:
  • in this equation, variables C and S denote dyadic rational internal transform factors used in place of the original irrational internal transform factors in integer implementations of the non-orthogonal 4×4 DCT.
  • an apparatus comprises means for applying a non-orthogonal 4×4 discrete cosine transform (DCT) with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain, wherein the non-orthogonal 4×4 DCT includes an odd portion that applies first and second variables (C, S) that are related to a scaled factor (ξ) by the following equation:
  • in this equation, variables C and S denote dyadic rational internal transform factors used in place of the original irrational internal transform factors in integer implementations of the non-orthogonal 4×4 DCT.
  • a non-transitory computer-readable storage medium stores instructions that, when executed by a processor, cause the processor to apply a non-orthogonal 4×4 discrete cosine transform (DCT) with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain, wherein the non-orthogonal 4×4 DCT includes an odd portion that applies first and second variables (C, S) that are related to a scaled factor (ξ) by the following equation:
  • in this equation, variables C and S denote dyadic rational internal transform factors used in place of the original irrational internal transform factors in integer implementations of the non-orthogonal 4×4 DCT.
  • an apparatus comprises a 4×4 inverse discrete cosine transform (IDCT) hardware unit, wherein the 4×4 IDCT hardware unit implements an inverse DCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) by the following equation:
  • in this equation, variables C and S denote dyadic rational internal transform factors used in place of the original irrational internal transform factors in integer implementations of the non-orthogonal 4×4 DCT.
  • the 4×4 IDCT hardware unit applies the 4×4 IDCT implementation to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain.
  • a method comprises applying a 4×4 inverse discrete cosine transform (IDCT) with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain, wherein the 4×4 IDCT comprises an IDCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) by the following equation:
  • in this equation, variables C and S denote dyadic rational internal transform factors used in place of the original irrational internal transform factors in integer implementations of the non-orthogonal 4×4 DCT.
  • an apparatus comprises means for applying a 4×4 inverse discrete cosine transform (IDCT) with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain, wherein the 4×4 IDCT comprises an IDCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) by the following equation:
  • in this equation, variables C and S denote dyadic rational internal transform factors used in place of the original irrational internal transform factors in integer implementations of the non-orthogonal 4×4 DCT.
  • a non-transitory computer-readable storage medium stores instructions that, when executed by a processor, cause the processor to apply a 4×4 inverse discrete cosine transform (IDCT) with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain, wherein the 4×4 IDCT comprises an IDCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) by the following equation:
  • in this equation, variables C and S denote dyadic rational internal transform factors used in place of the original irrational internal transform factors in integer implementations of the non-orthogonal 4×4 DCT.
  • an apparatus comprises a 4×4 discrete cosine transform (DCT) hardware unit that implements a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two.
  • the 4×4 DCT hardware unit applies the 4×4 DCT implementation to media data to transform the media data from a spatial domain to a frequency domain.
  • a method comprises applying a non-orthogonal 4×4 discrete cosine transform (DCT) with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain.
  • the non-orthogonal 4×4 DCT includes an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two.
  • an apparatus comprises means for applying a non-orthogonal 4×4 discrete cosine transform (DCT) with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain, wherein the non-orthogonal 4×4 DCT includes an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two.
  • a non-transitory computer-readable storage medium stores instructions that, when executed by a processor, cause the processor to apply a non-orthogonal 4×4 discrete cosine transform (DCT) with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain.
  • the non-orthogonal 4×4 DCT includes an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two.
  • an apparatus comprises a 4×4 inverse discrete cosine transform (IDCT) hardware unit, wherein the 4×4 IDCT hardware unit implements an IDCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two.
  • the 4×4 IDCT hardware unit applies the 4×4 IDCT implementation to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain.
  • a method comprises applying a 4×4 inverse discrete cosine transform (IDCT) with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain.
  • the IDCT comprises an IDCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two.
  • an apparatus comprises means for applying a 4×4 inverse discrete cosine transform (IDCT) with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain.
  • the IDCT comprises an IDCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two.
  • a non-transitory computer-readable storage medium stores instructions that, when executed by a processor, cause the processor to apply a 4×4 inverse discrete cosine transform (IDCT) with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain.
  • the IDCT comprises an IDCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two.
  • FIG. 1 is a block diagram illustrating a video encoding and decoding system.
  • FIG. 2 is a block diagram illustrating the video encoder of FIG. 1 in more detail.
  • FIG. 3 is a block diagram illustrating the video decoder of FIG. 1 in more detail.
  • FIGS. 4A-4C are diagrams that each illustrates an implementation of a scaled 4×4 DCT-II constructed in accordance with the techniques of this disclosure.
  • FIG. 5 is a flowchart illustrating exemplary operation of a coding device in applying a 4×4 DCT implementation constructed in accordance with the techniques of this disclosure.
  • FIG. 6 is a flowchart illustrating example operation of a coding device in applying a 4×4 DCT-III implementation constructed in accordance with the techniques of this disclosure.
  • FIGS. 7A-7C are diagrams illustrating graphs of peak signal-to-noise ratios with respect to bitrates for each of three different 4×4 DCT-II implementations constructed in accordance with the techniques of this disclosure.
  • this disclosure is directed to techniques for coding data using one or more 4×4 discrete cosine transforms (DCTs) represented as a 4×4 matrix of coefficients selected in accordance with various relationships.
  • the techniques may be applied to compress a variety of data, including visible or audible media data, such as digital video, image, speech, and/or audio data, and thereby transform such electrical signals representing such data into compressed signals for more efficient processing, transmission or archival of the electrical signals.
  • coefficients may be selected for the coefficient matrixes such that orthogonal and near-orthogonal implementations of 4×4 DCTs, when applied to data, may promote increased coding gain.
  • a video block generally refers to any sized portion of a video frame, where a video frame refers to a picture or image in a series of pictures or images.
  • Each video block typically comprises a plurality of discrete pixel data that indicates either color components, e.g., red, blue and green, (so-called “chromaticity” or “chroma” components) or luminance components (so-called “luma” components).
  • Each set of pixel data comprises a single 1×1 point in the video block and may be considered a discrete data unit with respect to video blocks.
  • A 4×4 video block, for example, comprises four rows of pixel data with four discrete sets of pixel data in each row.
  • An n-bit value may be assigned to each pixel to specify a color or luminance value.
  • DCTs are commonly described in terms of the size of the block of data, whether audio, speech, image or video data, that the DCT is capable of processing. For example, if a DCT can process a 4×4 block of data, the DCT may be referred to as a 4×4 DCT. Moreover, DCTs may be denoted as a particular type. The most commonly employed type of DCT of the eight different types of DCTs is a DCT of type-II, which may be denoted as “DCT-II.” Often, when referring generally to a DCT, such reference refers to a DCT of type-II or DCT-II.
  • The inverse of a DCT-II is referred to as a DCT of type-III, which similarly may be denoted as “DCT-III” or, with the common understanding that “DCT” refers to a DCT-II, as “IDCT,” where the “I” in “IDCT” denotes inverse.
  • The discussion of DCTs below conforms to this notation, where general reference to DCTs refers to a DCT-II unless otherwise specified. However, to avoid confusion, DCTs, including DCTs-II, are for the most part referred to below with the corresponding type (II, III, etc.) indicated.
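  • The DCT-II/DCT-III relationship can be sanity-checked numerically: for the orthonormal 4×4 DCT-II matrix, the DCT-III is simply the transpose, so a forward transform followed by the inverse recovers the input. A brief illustrative sketch follows.

```python
import numpy as np

# For the orthonormal 4x4 DCT-II matrix T, the DCT-III (the inverse, or IDCT)
# is T.T, so a forward transform followed by the inverse recovers the input.
n = 4
k = np.arange(n).reshape(-1, 1)
i = np.arange(n).reshape(1, -1)
T = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
T[0, :] = np.sqrt(1.0 / n)

block = np.random.randn(4, 4)         # arbitrary 4x4 data
coeffs = T @ block @ T.T              # 4x4 DCT-II (forward)
restored = T.T @ coeffs @ T           # 4x4 DCT-III (inverse)
print(np.allclose(block, restored))   # True
```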
  • the techniques described in this disclosure involve an encoder and/or a decoder that employ one or more implementations of the 4×4 DCTs-II to facilitate compression and/or decompression of data.
  • the compression and decompression accomplished through applying these 4×4 DCT-II implementations permits physical transformation of electrical signals representing the data such that the signals can be processed, transmitted, and/or stored more efficiently using physical computing hardware, physical transmission media (e.g., copper, optical fiber, wireless, or other media), and/or storage hardware (e.g., magnetic or optical disk or tape, or any of a variety of solid state media).
  • the implementations may be configured solely in hardware or may be configured in a combination of hardware and software.
  • the implementations of the 4×4 DCTs-II may be orthogonal or near-orthogonal.
  • orthogonal refers to a property of the matrix in general where the matrix, when multiplied by the transpose of the matrix, equals the identity matrix.
  • near-orthogonal refers to instances where this orthogonal property is relaxed, such that strict orthogonality is not required. In this respect, “near-orthogonal” suggests approximately or loosely orthogonal.
  • a near-orthogonal matrix does not meet the technical definition of orthogonal and such near-orthogonal matrixes may be considered non-orthogonal from a purely technical perspective.
  • the 4×4 DCT module implements an orthogonal 4×4 DCT-II constructed in accordance with the techniques described in this disclosure.
  • This orthogonal 4×4 DCT-II implementation includes an odd portion and an even portion.
  • the so-called “odd portion” of the 4×4 DCT-II refers to a portion of the 4×4 DCT-II implementation that outputs odd numbered coefficients.
  • the so-called “even portion” of the 4×4 DCT-II refers to a portion of the 4×4 DCT-II implementation that outputs even numbered coefficients.
  • the odd portion applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S).
  • “internal factors” refers to factors internal to the implementation of the 4×4 DCT that remain after factorization.
  • “scaled factors” refers to factors external to the implementation of the 4×4 DCT that are removed through factorization.
  • a multiplication may require three or more times as many computational operations (e.g., clock cycles) to complete as a simpler addition operation.
  • Specific multipliers may be implemented to perform multiplication more efficiently (e.g., in fewer clock cycles), but these multiplier implementations typically consume significantly more chip or silicon surface area and may also draw large amounts of power. Multiplication by factors is therefore often avoided, particularly in power-sensitive devices, such as most mobile devices, including cellular phones, so-called “smart” cellular phones, personal digital assistants (PDAs), laptop computers, so-called “netbooks,” and the like.
  • Factorization is a process whereby one or more internal factors may be removed from the 4×4 DCT-II implementation and replaced with external factors. The external factors can then be incorporated into subsequent quantization operations, for example, with respect to video encoders, usually with minimal expense or increase in complexity.
  • the above relationship between the first and second internal factors C, S and the scaled factor (ξ) provides for specific values of the internal factors not used in previous implementations of 4×4 DCTs-II.
  • values for internal factors C and S of 2 and 5, respectively, do not overly increase implementation complexity and improve upon the coding gain of known 4×4 DCT implementations involving values of 1 and 2 for C and S.
  • the video encoder then applies the 4×4 DCT-II implementation with internal factors 2 and 5 to media data so as to transform the media data from a spatial domain to a frequency domain.
  • the techniques facilitate improved coding gain (where coding gain is a term representative of compression efficiency) when compared to standard DCT-II implementations that include internal factors of 1 and 2.
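  • A small worked check of the relationship in which ξ equals the square root of C squared plus S squared, comparing how closely the normalized integer factor pairs (1, 2) and (2, 5) match the irrational odd-portion values cos(3π/8) and sin(3π/8); this is an illustrative calculation, not the patent's derivation.

```python
import numpy as np

# Compare how well integer odd-portion factors approximate the irrational
# DCT-II values cos(3*pi/8) and sin(3*pi/8) once normalized by
# xi = sqrt(C**2 + S**2).
true_vals = np.array([np.cos(3 * np.pi / 8), np.sin(3 * np.pi / 8)])

for C, S in [(1, 2), (2, 5)]:
    xi = np.hypot(C, S)                         # sqrt(C^2 + S^2)
    approx = np.array([C, S]) / xi
    err = np.max(np.abs(approx - true_vals))
    print(f"C={C}, S={S}: normalized={approx.round(4)}, max error={err:.4f}")
```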
  • Orthogonality is generally desired with respect to DCT-II implementations because an orthogonal transform is invertible.
  • This invertible property allows a video encoder to apply the orthogonal 4×4 DCT implementation to generate DCT coefficients from residual blocks of video data.
  • a video decoder can then apply a 4×4 inverse DCT-II (IDCT) implementation to reconstruct the residual block of video data from the DCT-II coefficients with little if any loss in data.
  • the video, audio or general coding pipeline in practice involves a number of steps that introduce so-called “noise” that in most respects effectively prevents the accurate reconstruction of the values provided by orthogonal 4×4 DCT-II implementations.
  • near-orthogonal transforms may improve coding efficiency while also reducing implementation complexity compared to strictly orthogonal integer transforms.
  • relaxing the orthogonal property introduces noise into the system, but may improve coding gain while also reducing implementation complexity.
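  • The following toy sketch illustrates the trade-off just described: the odd portion of a 4-point DCT behaves like a scaled 2×2 rotation, and using an external scale that only approximately matches the square root of C squared plus S squared (a near-orthogonal choice) introduces a small reconstruction error. The matrix, inputs, and the "convenient" scale of 32 are hypothetical and are not taken from the patent's factorizations.

```python
import numpy as np

# Toy illustration (not the patent's factorization): the odd portion of a
# 4-point DCT acts like a scaled 2x2 rotation.  With integer factors C, S the
# forward pair is y = R @ u, and R @ R.T = (C**2 + S**2) * I, so exact
# inversion needs division by C**2 + S**2.  Using a "convenient" approximate
# scale instead (a near-orthogonal choice) leaves a small reconstruction error.
C, S = 2, 5
R = np.array([[C, S], [S, -C]], dtype=float)
u = np.array([7.0, -3.0])                    # hypothetical odd-portion inputs
y = R @ u

exact_scale = C**2 + S**2                    # 29: the orthogonal choice
approx_scale = 32.0                          # hypothetical dyadic stand-in

print(np.max(np.abs(u - (R.T @ y) / exact_scale)))    # ~0: perfect reconstruction
print(np.max(np.abs(u - (R.T @ y) / approx_scale)))   # small nonzero "noise"
```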
  • This near-orthogonal 4×4 DCT-II implementation also includes an odd portion and an even portion.
  • the odd portion in this instance applies first and second internal factors (C, S) that are related to a scaled factor (ξ) by the following equation:
  • in this equation, the original (irrational) internal transform factors may be, for example, a cosine of three times the constant pi (π) divided by eight and a sine of three times the constant pi (π) divided by eight.
  • Variables (C) and (S) denote integer (or dyadic rational) internal transform factors used in place of these original irrational factors.
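  • Because the dyadic rational factors stand in for cos(3π/8) and sin(3π/8), one way to see the trade-off is to search for the best numerator for a given power-of-two denominator. The sketch below is illustrative only, and the chosen bit depths are arbitrary assumptions.

```python
import numpy as np

# Search for dyadic rational approximations k / 2**b of the irrational factors
# cos(3*pi/8) and sin(3*pi/8); the bit depths tried here are arbitrary.
targets = {"cos(3*pi/8)": np.cos(3 * np.pi / 8),
           "sin(3*pi/8)": np.sin(3 * np.pi / 8)}

for name, value in targets.items():
    for bits in (3, 5, 7):
        k = int(round(value * 2**bits))       # best numerator for denominator 2**bits
        err = abs(k / 2**bits - value)
        print(f"{name} ~= {k}/{2**bits}  (error {err:.5f})")
```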
  • FIG. 1 is a block diagram illustrating a video encoding and decoding system 10 .
  • system 10 includes a source hardware device 12 that transmits encoded video to a receive hardware device 14 via a communication channel 16 .
  • Source device 12 may include a video source 18 , video encoder 20 and a transmitter 22 .
  • Destination device 14 may include a receiver 24 , video decoder 26 and video display device 28 .
  • communication channel 16 may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines, or any combination of wireless and wired media.
  • Channel 16 may form part of a packet-based network, such as a local area network, wide-area network, or a global network such as the Internet.
  • Communication channel 16 generally represents any suitable communication medium, or collection of different communication media, for transmitting video data from source device 12 to receive device 14 .
  • Source device 12 generates video for transmission to destination device 14 .
  • devices 12 , 14 may operate in a substantially symmetrical manner.
  • each of devices 12 , 14 may include video encoding and decoding components.
  • system 10 may support one-way or two-way video transmission between video devices 12 , 14 , e.g., for video streaming, video broadcasting, or video telephony.
  • devices 12 , 14 could be configured to send and receive, or exchange, other types of data, such as image, speech or audio data, or combinations of two or more of video, image, speech and audio data. Accordingly, the following discussion of video applications is provided for purposes of illustration and should not be considered limiting of the various aspects of the disclosure as broadly described herein.
  • Video source 18 may include a video capture device, such as one or more video cameras, a video archive containing previously captured video, or a live video feed from a video content provider. As a further alternative, video source 18 may generate computer graphics-based data as the source video, or a combination of live video and computer-generated video. In some cases, if video source 18 is a camera, source device 12 and receive device 14 may form so-called camera phones or video phones. Hence, in some aspects, source device 12 , receive device 14 or both may form a wireless communication device handset, such as a mobile telephone.
  • the captured, pre-captured or computer-generated video may be encoded by video encoder 20 for transmission from video source device 12 to video decoder 26 of video receive device 14 via transmitter 22 , channel 16 and receiver 24 .
  • Display device 28 may include any of a variety of display devices such as a liquid crystal display (LCD), plasma display or organic light emitting diode (OLED) display.
  • Video encoder 20 and video decoder 26 may be configured to support scalable video coding for spatial, temporal and/or signal-to-noise ratio (SNR) scalability.
  • video encoder 20 and video decoder 26 may be configured to support fine granularity SNR scalability (FGS) coding.
  • Encoder 20 and decoder 26 may support various degrees of scalability by supporting encoding, transmission and decoding of a base layer and one or more scalable enhancement layers.
  • a base layer carries video data with a minimum level of quality.
  • One or more enhancement layers carry additional bitstream to support higher spatial, temporal and/or SNR levels.
  • Video encoder 20 and video decoder 26 may operate according to a video compression standard, such as MPEG-2, MPEG-4, ITU-T H.263, or ITU-T H.264/MPEG-4 Advanced Video Coding (AVC).
  • video encoder 20 and video decoder 26 may be integrated with an audio encoder and decoder, respectively, and include appropriate MUX-DEMUX units, or other hardware and software, to handle encoding of both audio and video in a common data stream or separate data streams. If applicable, MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP).
  • the techniques described in this disclosure may be applied to enhance H.264 video coding for delivering real-time video services in terrestrial mobile multimedia multicast (TM3) systems using the Forward Link Only (FLO) Air Interface Specification, “Forward Link Only Air Interface Specification for Terrestrial Mobile Multimedia Multicast,” published as Technical Standard TIA-1099 (the “FLO Specification”), e.g., via a wireless video broadcast server or wireless communication device handset.
  • the FLO Specification includes examples defining bitstream syntax and semantics and decoding processes suitable for the FLO Air Interface.
  • video may be broadcast according to other standards such as DVB-H (digital video broadcast-handheld), ISDB-T (integrated services digital broadcast-terrestrial), or DMB (digital media broadcast).
  • source device 12 may be a mobile wireless terminal, a video streaming server, or a video broadcast server.
  • techniques described in this disclosure are not limited to any particular type of broadcast, multicast, or point-to-point system.
  • source device 12 may broadcast several channels of video data to multiple receive devices, each of which may be similar to receive device 14 of FIG. 1 .
  • Video encoder 20 and video decoder 26 each may be implemented as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof.
  • each of video encoder 20 and video decoder 26 may be implemented at least partially as an integrated circuit (IC) chip or device, and included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective mobile device, subscriber device, broadcast device, server, or the like.
  • source device 12 and receive device 14 each may include appropriate modulation, demodulation, frequency conversion, filtering, and amplifier components for transmission and reception of encoded video, as applicable, including radio frequency (RF) wireless components and antennas sufficient to support wireless communication.
  • a video sequence includes a series of video frames.
  • Video encoder 20 operates on blocks of pixels within individual video frames in order to encode the video data.
  • the video blocks may have fixed or varying sizes, and may differ in size according to a specified coding standard.
  • Each video frame includes a series of slices.
  • Each slice may include a series of macroblocks, which may be arranged into sub-blocks.
  • the ITU-T H.264 standard supports intra prediction in various dyadic block sizes, such as 16 by 16, 8 by 8, 4 by 4 for luma components, and 8×8 for chroma components, as well as inter prediction in various block sizes, such as 16 by 16, 16 by 8, 8 by 16, 8 by 8, 8 by 4, 4 by 8 and 4 by 4 for luma components and corresponding scaled sizes for chroma components.
  • Smaller video blocks can generally provide better resolution, and may be used for locations of a video frame that include higher levels of detail.
  • macroblocks (MBs) and the various sub-blocks may be considered, in general, to represent video blocks.
  • a slice may be considered to represent a series of video blocks, such as MBs and/or sub-blocks. Each slice may be an independently decodable unit.
  • a transform may be performed on dyadic or non-dyadic sized residual blocks, and an additional transform may be applied to the DCT coefficients of the 4×4 blocks for chroma components, or for the luma component if the intra_16×16 prediction mode is used.
  • Video encoder 20 and/or video decoder 26 of system 10 of FIG. 1 may be configured to include an implementation of a 4×4 DCT-II and an inverse thereof (e.g., a 4×4 DCT-III), respectively, wherein the 4×4 DCT-II adheres to one of the various relationships of the techniques for selecting DCT-II matrix coefficients for a 4×4 sized DCT described in this disclosure.
  • While the ITU-T H.264 standard supports intra prediction in various block sizes, such as 16 by 16, 8 by 8 and 4 by 4 for luma components, and 8×8 for chroma components, revisions to this standard to improve coding efficiency are currently underway.
  • One such revision is referred to as ITU-T H.265 or simply H.265 (sometimes referred to as next generation video coding or NGVC).
  • 4×4 DCTs of type-II (“DCTs-II”) that adhere to one of the various relationships set forth in accordance with the techniques of this disclosure may improve coding efficiency as measured in terms of peak signal-to-noise ratios (PSNRs). Consequently, ITU-T H.265 and other evolving standards or specifications may consider these DCTs-II so as to improve coding efficiency.
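  • For reference, PSNR as plotted in FIGS. 7A-7C can be computed for 8-bit video as 10·log10(255²/MSE). A minimal sketch with made-up original and reconstructed blocks:

```python
import numpy as np

# PSNR for 8-bit video: 10 * log10(255^2 / MSE).  The "reconstructed" block
# here is simply the original plus small random noise, purely for illustration.
def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

orig = np.random.randint(0, 256, (16, 16))
recon = np.clip(orig + np.random.randint(-3, 4, orig.shape), 0, 255)
print(f"PSNR: {psnr(orig, recon):.2f} dB")
```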
  • implementations of 4×4 DCTs-II may be generated in a manner that adheres to one of the various relationships that may promote improved coding gain over conventional implementations.
  • a first relationship is defined for orthogonal implementations of 4×4 DCTs-II and is set forth below with respect to equation (1): ξ = √(C² + S²).
  • in equation (1), C and S denote first and second internal factors in an “odd” portion of the 4×4 DCTs-II implementation and (ξ) denotes a scaled factor applied to the “odd” portion of the 4×4 DCTs-II implementation.
  • the so-called “odd portion” of the 4×4 DCT-II refers to a portion of the 4×4 DCT-II implementation that outputs odd numbered coefficients.
  • the so-called “even” portion of the 4×4 DCT-II refers to a portion of the 4×4 DCT-II implementation that outputs even numbered coefficients.
  • “internal factors” refers to factors internal to the implementation of the 4×4 DCT that remain after factorization.
  • “scaled factors” refers to factors external to the implementation of the 4×4 DCT that are removed through factorization.
  • a multiplication may require three or more times as many computational operations (e.g., clock cycles) to complete as a simpler addition operation.
  • Specific multipliers may be implemented to perform multiplication more efficiently (e.g., in fewer clock cycles), but these multiplier implementations typically consume significantly more chip or silicon surface area and may also draw large amounts of power. Multiplication by factors is therefore often avoided, particularly in power-sensitive devices, such as most mobile devices, including cellular phones, so-called “smart” cellular phones, personal digital assistants (PDAs), laptop computers, so-called “netbooks,” and the like.
  • Factorization is a process whereby one or more internal factors may be removed from the 4×4 DCT-II implementation and replaced with external factors. The external factors can then be incorporated into subsequent quantization operations, for example, with respect to video encoders, usually with minimal expense or increase in complexity.
  • the above relationship between the first and second internal factors C, S and the scaled factor (ξ) noted above with respect to equation (1) provides for specific values of the internal factors not used in previous implementations of 4×4 DCTs-II.
  • values for internal factors C and S of 2 and 5, respectively, do not overly increase implementation complexity and improve upon the coding gain of known 4×4 DCT implementations involving values of 1 and 2 for C and S.
  • the video encoder then applies the 4×4 DCT-II implementation with internal factors 2 and 5 to media data so as to transform the media data from a spatial domain to a frequency domain.
  • the techniques facilitate improved coding gain (where coding gain is a term representative of compression efficiency) when compared to standard DCT-II implementations that include internal factors of 1 and 2.
  • Orthogonality is generally desired with respect to DCT-II implementations because an orthogonal transform is invertible.
  • This invertible property allows a video encoder to apply the orthogonal 4×4 DCT implementation to generate DCT coefficients from residual blocks of video data.
  • a video decoder can then apply a 4×4 inverse DCT-II (IDCT) implementation to reconstruct the residual block of video data from the DCT-II coefficients with little if any loss in data.
  • the video, audio or general coding pipeline in practice involves a number of additional steps (such as scaling or quantization) that introduce so-called “noise” that in most respects effectively prevents the accurate reconstruction of the values provided by orthogonal 4×4 DCT-II implementations.
  • relaxing the orthogonal property to achieve a near-orthogonal implementation may be possible.
  • such near-orthogonal transforms may improve coding efficiency while also reducing implementation complexity compared to strictly orthogonal integer transforms.
  • relaxing the orthogonal property introduces noise into the system, but may improve coding gain while also reducing implementation complexity.
  • the control unit implements the near-orthogonal 4×4 DCT-II in accordance with the techniques described in this disclosure.
  • This near-orthogonal 4×4 DCT-II implementation also includes an odd portion and an even portion.
  • the odd portion in this instance applies first and second internal factors (C, S) that are related to a scaled factor (ξ) by the following equation (2):
  • in equation (2), the original (irrational) internal transform factors may be, for example, a cosine of three times the constant pi (π) divided by eight and a sine of three times the constant pi (π) divided by eight.
  • Variables (C) and (S) denote integer (or dyadic rational) internal transform factors used in place of these original irrational factors.
  • the above resulting 4×4 DCT-II implementations constructed in accordance with the techniques described in this disclosure represent scaled 4×4 DCT-II implementations as opposed to straight 4×4 DCT-II implementations.
  • the implementations are “scaled” in that they have undergone factorization to remove internal factors and therefore output scaled coefficients that require that additional external factors be applied to correctly calculate the 4×4 DCT.
  • So-called “straight” DCT-II implementations output coefficients that do not require any further operations, such as multiplication by external factors, to correctly calculate the 4×4 DCT.
  • One alternative factorization produces a different scaled 4×4 DCT-II implementation from which another relationship can be derived in accordance with the techniques of this disclosure to produce a near-orthogonal implementation that improves coding gain over conventional DCTs-II commonly employed by video encoders that comply with H.264.
  • Equation (3) indicates that the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two. This equation may identify particular values of 7 and 5 for internal factors A and B, respectively.
  • This resulting near-orthogonal 4×4 DCT-II implementation constructed using the alternative factorization and with the above noted internal factors may more accurately represent the irrational internal factors of a straight 4×4 DCT-II than conventional H.264 4×4 DCT-II implementations and thereby provide improved coding gain over conventional 4×4 DCT-II implementations. Consequently, the control unit applies this near-orthogonal 4×4 DCT-II to media data to transform the media data from a spatial domain to a frequency domain with the result of potentially improved coding gain.
  • FIG. 2 is a block diagram illustrating video encoder 20 of FIG. 1 in more detail.
  • Video encoder 20 may be formed at least in part as one or more integrated circuit devices, which may be referred to collectively as an integrated circuit device. In some aspects, video encoder 20 may form part of a wireless communication device handset or broadcast server.
  • Video encoder 20 may perform intra- and inter-coding of blocks within video frames. Intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame. Inter-coding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames of a video sequence. For inter-coding, video encoder 20 performs motion estimation to track the movement of matching video blocks between adjacent frames.
  • video encoder 20 receives a current video block 30 within a video frame to be encoded.
  • video encoder 20 includes motion estimation unit 32 , reference frame store 34 , motion compensation unit 36 , block transform unit 38 , quantization unit 40 , inverse quantization unit 42 , inverse transform unit 44 and entropy coding unit 46 .
  • An in-loop or post loop deblocking filter (not shown) may be applied to filter blocks to remove blocking artifacts.
  • Video encoder 20 also includes summer 48 and summer 50 .
  • FIG. 2 illustrates the temporal prediction components of video encoder 20 for inter-coding of video blocks. Although not shown in FIG. 2 for ease of illustration, video encoder 20 also may include spatial prediction components for intra-coding of some video blocks.
  • Motion estimation unit 32 compares video block 30 to blocks in one or more adjacent video frames to generate one or more motion vectors.
  • the adjacent frame or frames may be retrieved from reference frame store 34 , which may comprise any type of memory or data storage device to store video blocks reconstructed from previously encoded blocks.
  • Motion estimation may be performed for blocks of variable sizes, e.g., 16×16, 16×8, 8×16, 8×8 or smaller block sizes.
  • Motion estimation unit 32 identifies one or more blocks in adjacent frames that most closely match the current video block 30 , e.g., based on a rate distortion model, and determines displacement between the blocks in adjacent frames and the current video block.
  • motion estimation unit 32 produces one or more motion vectors (MV) that indicate the magnitude and trajectory of the displacement between current video block 30 and one or more matching blocks from the reference frames used to code current video block 30 .
  • the matching block or blocks will serve as predictive (or prediction) blocks for inter-coding of the block to be coded.
  • Motion vectors may have half- or quarter-pixel precision, or even finer precision, allowing video encoder 20 to track motion with higher precision than integer pixel locations and obtain a better prediction block.
  • interpolation operations are carried out in motion compensation unit 36 .
  • Motion estimation unit 32 identifies the best block partitions and motion vector or motion vectors for a video block using certain criteria, such as a rate-distortion model. For example, there may be more than one motion vector in the case of bi-directional prediction. Using the resulting block partitions and motion vectors, motion compensation unit 36 forms a prediction video block.
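  • A toy full-search version of the block matching just described, minimizing the sum of absolute differences (SAD) over a small search window; real motion estimation such as that of unit 32 also weighs rate-distortion cost, partition choices, and sub-pixel positions, none of which are modeled here. Frame sizes, the search range, and the block location are assumptions.

```python
import numpy as np

# Toy full-search motion estimation: find the integer displacement in a
# reference frame that minimizes the SAD for one 4x4 block.
def best_motion_vector(ref, block, top, left, search=4):
    h, w = block.shape
    best_mv, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + h, x:x + w].astype(int) - block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

ref = np.random.randint(0, 256, (32, 32))
cur = ref[10:14, 12:16]                  # current block: a patch that "moved" by (2, 3)
mv, sad = best_motion_vector(ref, cur, top=8, left=9)
print(mv, sad)                           # expect (2, 3) with SAD 0
```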
  • Video encoder 20 forms a residual video block by subtracting the prediction video block produced by motion compensation unit 36 from the original, current video block 30 at summer 48 .
  • Block transform unit 38 applies a transform producing residual transform block coefficients.
  • block transform unit 38 includes a 4×4 DCT-II unit 52 that implements a 4×4 DCT-II constructed in accordance with the techniques described in this disclosure.
  • 4×4 DCT-II unit 52 represents a hardware module, which in some instances executes software (such as a digital signal processor or DSP executing software code or instructions), that implements a 4×4 DCT-II having internal factors defined by one of the three relationships identified above.
  • Block transform unit 38 applies scaled 4×4 DCT-II unit 52 to the residual block to produce a 4×4 block of residual transform coefficients.
  • 4×4 DCT-II unit 52 generally transforms the residual block from the spatial domain, which is represented as residual pixel data, to the frequency domain, which is represented as DCT coefficients.
  • the transform coefficients may comprise DCT coefficients that include at least one DC coefficient and one or more AC coefficients.
  • Quantization unit 40 quantizes (e.g., rounds) the residual transform block coefficients to further reduce bit rate.
  • quantization unit 40 accounts for the scaled nature of scaled 4×4 DCT-II unit 52 by incorporating the external factors removed during factorization. That is, quantization unit 40 incorporates the external factor shown below with respect to implementations 70 A- 70 C of FIGS. 4A-4C . As quantization typically involves multiplication, incorporating these factors into quantization unit 40 may not increase the implementation complexity of quantization unit 40 .
  • removing the factors from scaled 4×4 DCT-II unit 52 decreases the implementation complexity of DCT-II unit 52 without increasing the implementation complexity of quantization unit 40 , resulting in a net decrease of implementation complexity with respect to video encoder 20 .
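  • To make this folding concrete, the following Python sketch shows that applying the external factors and then quantizing is equivalent to quantizing with a single pre-combined multiplier per coefficient position, so the external factors cost no additional multiplications at run time. The uniform rounding quantizer, the step size, and the example values are illustrative assumptions, not the quantization tables of H.264 or any other standard (0.1313 is roughly 1/(√2·√29), the odd external factor for C=2, S=5).

```python
import numpy as np

# Illustrative only: a uniform rounding quantizer with example values.
scaled_coeffs = np.array([44.0, -13.0, 7.0, 3.0])   # output of a scaled 4x4 DCT-II (one row)
ext = np.array([0.5, 0.1313, 0.5, 0.1313])          # external factors per coefficient position
qstep = 2.5                                          # example quantization step size

# Separate scaling step followed by quantization (extra multiplications).
q_separate = np.round(scaled_coeffs * ext / qstep)

# External factors folded into the quantization multipliers (no extra multiplications).
folded_multipliers = ext / qstep
q_folded = np.round(scaled_coeffs * folded_multipliers)

print(np.allclose(q_separate, q_folded))   # True: same quantized output either way
```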
  • Entropy coding unit 46 entropy codes the quantized coefficients to even further reduce bit rate.
  • Entropy coding unit 46 performs a statistical lossless coding, referred to, in some instances, as entropy coding.
  • Entropy coding unit 46 models a probability distribution of quantized DCT coefficients and selects a codebook (e.g., CAVLC or CABAC) based on the modeled probability distribution. Using this codebook, entropy coding unit 46 selects codes for each quantized DCT coefficient in a manner that compresses quantized DCT coefficients.
  • entropy coding unit 46 may select a short codeword (in terms of bits) for frequently occurring quantized DCT coefficients and a longer codeword (in terms of bits) for less frequently occurring quantized DCT coefficients. So long as the short codewords use fewer bits than the quantized DCT coefficients, on average entropy coding unit 46 compresses the quantized DCT coefficients. Entropy coding unit 46 outputs the entropy coded coefficients as a bitstream which is sent to video decoder 26 . In general, video decoder 26 performs inverse operations to decode and reconstruct the encoded video from the bitstream, as will be described with reference to the example of FIG. 3 .
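  • As a toy illustration of this principle only, the sketch below builds a simple Huffman code over example quantized values so that the frequent value (zero) receives the shortest codeword. It is not CAVLC or CABAC; the example data and the code construction are assumptions used purely to show shorter codes going to more frequent symbols.

```python
import heapq
from collections import Counter

# Toy Huffman code over example quantized coefficients; not CAVLC or CABAC.
quantized = [0, 0, 0, 1, 0, -1, 0, 0, 2, 0, 0, 1, 0, 0, 0, -1]
heap = [[count, [symbol, ""]] for symbol, count in Counter(quantized).items()]
heapq.heapify(heap)
while len(heap) > 1:
    lo, hi = heapq.heappop(heap), heapq.heappop(heap)
    for pair in lo[1:]:
        pair[1] = "0" + pair[1]          # extend codes in the lighter subtree
    for pair in hi[1:]:
        pair[1] = "1" + pair[1]          # extend codes in the heavier subtree
    heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
codebook = {symbol: code for symbol, code in heap[0][1:]}
print(codebook)                           # the most frequent value (0) gets the shortest code
coded_bits = sum(len(codebook[s]) for s in quantized)
print(coded_bits, "bits entropy coded vs", 2 * len(quantized), "bits fixed-length")
```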
  • Reconstruction unit 42 and inverse transform unit 44 reconstruct quantized coefficients and apply inverse transformation, respectively, to reconstruct the residual block.
  • Summation unit 50 adds the reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 36 to produce a reconstructed video block for storage in reference frame store 34 .
  • the reconstructed video block is used by motion estimation unit 32 and motion compensation unit 36 to encode a block in a subsequent video frame.
  • FIG. 3 is a block diagram illustrating an example of video decoder 26 of FIG. 1 in more detail.
  • Video decoder 26 may be formed at least in part as one or more integrated circuit devices, which may be referred to collectively as an integrated circuit device. In some aspects, video decoder 26 may form part of a wireless communication device handset. Video decoder 26 may perform intra- and inter-decoding of blocks within video frames. As shown in FIG. 3 , video decoder 26 receives an encoded video bitstream that has been encoded by video encoder 20 .
  • video decoder 26 includes entropy decoding unit 54 , motion compensation unit 56 , reconstruction unit 58 , inverse transform unit 60 , and reference frame store 62 .
  • Entropy decoding unit 54 may access one or more data structures stored in a memory 64 to obtain data useful in coding.
  • Video decoder 26 also may include an in-loop deblocking filter (not shown) that filters the output of summer 66 .
  • Video decoder 26 also includes summer 66 .
  • FIG. 3 illustrates the temporal prediction components of video decoder 26 for inter-decoding of video blocks. Although not shown in FIG. 3 , video decoder 26 also may include spatial prediction components for intra-decoding of some video blocks.
  • Entropy decoding unit 54 receives the encoded video bitstream and decodes from the bitstream quantized residual coefficients and quantized parameters, as well as other information, such as macroblock coding mode and motion information, which may include motion vectors and block partitions.
  • Motion compensation unit 56 receives the motion vectors and block partitions and one or more reconstructed reference frames from reference frame store 62 to produce a prediction video block.
  • Reconstruction unit 58 inverse quantizes, i.e., de-quantizes, the quantized block coefficients.
  • Inverse transform unit 60 applies an inverse transform, e.g., an inverse DCT, to the coefficients to produce residual blocks.
  • inverse transform unit 60 includes a scaled 4×4 DCT-III unit 68 , which inverse transform unit 60 applies to the coefficients to produce residual blocks.
  • Scaled 4×4 DCT-III unit 68 , which is the inverse of scaled 4×4 DCT-II unit 52 shown in FIG. 2 , may transform the coefficients from the frequency domain to the spatial domain to produce the residual blocks.
  • reconstruction unit 58 accounts for the scaled nature of 4×4 DCT-III unit 68 by incorporating the external factors removed during factorization into the reconstruction process with little if any increase in implementation complexity. Removing factors from scaled 4×4 DCT-III unit 68 may reduce implementation complexity, thereby resulting in a net decrease of complexity for video decoder 26 .
  • the prediction video blocks are then summed by summer 66 with the residual blocks to form decoded blocks.
  • a deblocking filter (not shown) may be applied to filter the decoded blocks to remove blocking artifacts.
  • the filtered blocks are then placed in reference frame store 62 , which provides reference frames for decoding of subsequent video frames and also produces decoded video to drive display device 28 ( FIG. 1 ).
  • FIGS. 4A-4C are diagrams that each illustrate an implementation of a scaled 4×4 DCT-II constructed in accordance with the techniques of this disclosure.
  • FIG. 4A is a diagram that illustrates a scaled orthogonal 4×4 DCT-II implementation 70 A constructed in accordance with the techniques of this disclosure.
  • FIG. 4B is a diagram that illustrates a scaled near-orthogonal 4×4 DCT-II implementation 70 B constructed in accordance with the techniques of this disclosure.
  • FIG. 4C is a diagram that illustrates a scaled near-orthogonal 4×4 DCT-II alternative implementation 70 C constructed in accordance with the techniques of this disclosure.
  • 4×4 DCT-II unit 52 shown in the example of FIG. 2 may incorporate one or more of these implementations 70 A- 70 C.
  • 4×4 DCT-II implementation 70 A includes a butterfly unit 72 , an even portion 74 A and an odd portion 74 B.
  • Butterfly unit 72 may represent hardware or a combination of hardware and software for routing or otherwise forwarding inputs x 0 , . . . , x 3 to the proper even and odd portions 74 A, 74 B (“portions 74 ”).
  • Butterfly unit 72 usually combines the results of smaller DCTs, such as 2×2 DCT-II implementations, which in this case may be represented by even and odd portions 74 , respectively.
  • Even portion 74 A is a 2×2 portion of 4×4 DCT-II implementation 70 A that outputs even DCT coefficients X 0 and X 2 . Notably, these even coefficients X 0 and X 2 are multiplied by an external factor of one half (½), which can be and usually is applied by quantization unit 40 .
  • Odd portion 74 B is a 2×2 portion of 4×4 DCT-II implementation 70 A that outputs odd DCT coefficients X 1 and X 3 .
  • Odd portion 74 B includes two internal factors denoted C and S, which are related to an external factor applied to odd coefficients X 1 and X 3 by the above noted equation (1), which is defined in accordance with the techniques of this disclosure.
  • the additional external factor of one divided by the square root of two (1/√2) is multiplied by one divided by the relationship noted in equation (1) above to result in the external factor shown with respect to odd coefficients X 1 and X 3 .
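  • The Python sketch below illustrates one plausible arrangement of such a scaled 4×4 DCT-II as described above: a butterfly stage, an even portion formed from sums and differences, and an odd portion that cross-multiplies by integer factors C and S approximating ξ·cos(3π/8) and ξ·sin(3π/8). The exact wiring and the names are illustrative assumptions rather than a reproduction of FIG. 4A; the external factors of ½ and 1/(√2·ξ), with ξ = √(C²+S²) per equation (1), are returned separately so that they can be folded into quantization.

```python
import numpy as np

def scaled_dct4(x, C=2, S=5):
    """Forward scaled 4-point DCT-II sketch; returns scaled coefficients plus
    the external (diagonal) factors that a quantizer would absorb."""
    x0, x1, x2, x3 = x
    # Butterfly: route inputs to the even and odd portions.
    a, b = x0 + x3, x1 + x2          # even inputs
    d, c = x0 - x3, x1 - x2          # odd inputs
    # Even portion (2x2): sums and differences only.
    X0, X2 = a + b, a - b
    # Odd portion (2x2): integer factors C ~ xi*cos(3*pi/8), S ~ xi*sin(3*pi/8).
    X1 = S * d + C * c
    X3 = C * d - S * c
    # External factors removed by factorization: 1/2 on the even outputs and
    # 1/(sqrt(2)*xi) on the odd outputs, with xi = sqrt(C**2 + S**2).
    xi = np.sqrt(C**2 + S**2)
    ext = np.array([0.5, 1.0 / (np.sqrt(2) * xi), 0.5, 1.0 / (np.sqrt(2) * xi)])
    return np.array([X0, X1, X2, X3], dtype=float), ext

# Quick comparison against an orthonormal 4-point DCT-II computed directly.
x = np.array([7.0, -3.0, 2.0, 5.0])
Xs, ext = scaled_dct4(x)
n, k = np.meshgrid(np.arange(4), np.arange(4))
D = np.sqrt(2.0 / 4) * np.cos((2 * n + 1) * k * np.pi / 8)
D[0, :] *= 1.0 / np.sqrt(2)
print(Xs * ext)   # scaled-integer approximation (after external scaling)
print(D @ x)      # exact orthonormal 4-point DCT-II for comparison
```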
  • The relationship noted in equation (1) can be derived by first considering the orthogonal property, which is set forth mathematically by the following equation (4):
  • variable C in this instance refers to any matrix, while C T denotes the transpose of the matrix C.
  • the variable I denotes an identity matrix.
  • a matrix exhibits orthogonal property if the transpose of the matrix times the matrix itself equals the identity matrix.
  • the matrix C can be split into an integer scaled transform denoted C′ and a diagonal matrix of scale factors or external factors D, as noted in the following equation (5):
  • Equation (7) provides a mechanism for choosing scaling factors such that the resulting integer transform remains orthogonal.
  • this DCT-II usually only applies approximations of factors representative of the cosine of three times the constant pi divided by eight (cos(3π/8)) and the sine of three times the constant pi divided by eight (sin(3π/8)).
  • when these two factors are replaced by integers C and S, which are coefficients of the matrix C′, applying the above orthogonality condition yields the normalization factor denoted by equation (1) above, such that the task of designing an orthogonal approximation of the 4×4 DCT-II may be limited to finding pairs of integers (C, S) such that the following equations (8) and (9) are satisfied:
  • Table 1 illustrates various values selected for the integers C and S and the resulting approximation errors in comparison to the 4×4 DCT-II implementation adopted in the H.264 video coding standard.
  • the complexity involves only an additional addition and shift when compared to the base H.264 implementation, but does not involve any computationally expensive multiplications. Consequently, the techniques described in this disclosure promote increased coding gain with only minor increases in complexity, with implementation 70 A incorporating values of 2 and 5 for variables C and S, respectively, providing potentially the best coding gain with minimal increases to implementation complexity.
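  • A short search of the kind that could underlie such a table is sketched below; this is an assumption about methodology, not a reproduction of how Table 1 was generated. Small integer pairs (C, S) are ranked by how closely C/ξ and S/ξ, with ξ = √(C²+S²) per equation (1), approximate cos(3π/8) and sin(3π/8).

```python
import math

# Rank small integer pairs (C, S) by approximation error to the DCT-II odd
# factors, with xi = sqrt(C*C + S*S) as in equation (1). Illustrative only.
target_c, target_s = math.cos(3 * math.pi / 8), math.sin(3 * math.pi / 8)

candidates = []
for S in range(1, 16):
    for C in range(1, S):                 # C < S since cos(3*pi/8) < sin(3*pi/8)
        xi = math.hypot(C, S)
        err = abs(C / xi - target_c) + abs(S / xi - target_s)
        candidates.append((err, C, S))

for err, C, S in sorted(candidates)[:5]:
    print(f"C={C:2d} S={S:2d}  |error|={err:.5f}")
# In this particular ranking, pairs such as (5, 12), (3, 7) and (2, 5) appear near the top.
```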
  • implementation 70 A shown in the example of FIG. 4A may also represent a DCT of type III or inverse DCT implementation.
  • Forming an inverse DCT from implementation 70 A involves reversing the inputs and the outputs such that inputs are received by the implementation on the right of FIG. 4A and outputs are output at the left of the implementation. Inputs are then processed by even and odd portions 74 first and then by butterfly 72 before being output on the left.
  • this IDCT implementation that is inverse to implementation 70 A is not shown in a separate figure considering that such an implementation may be described as a mirror image of implementation 70 A.
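  • Because an orthogonal choice of (C, S) makes the overall transform matrix (the integer stages combined with the diagonal of external factors) exactly orthogonal, the mirrored DCT-III flow graph amounts to applying the transpose. The sketch below checks this numerically; the explicit matrix layout is an illustrative rendering of the structure described above, with ξ = √(C²+S²), and is not copied from FIG. 4A.

```python
import numpy as np

C, S = 2, 5
xi = np.sqrt(C**2 + S**2)
Cint = np.array([[1,  1,  1,  1],
                 [S,  C, -C, -S],
                 [1, -1, -1,  1],
                 [C, -S,  S, -C]], dtype=float)            # integer stages
ext = np.diag([0.5, 1/(np.sqrt(2)*xi), 0.5, 1/(np.sqrt(2)*xi)])
T = ext @ Cint                                             # forward transform with external factors

print(np.allclose(T @ T.T, np.eye(4)))   # True: the orthogonality condition (cf. equation (4))
x = np.array([12.0, -7.0, 3.0, 1.0])
print(np.allclose(T.T @ (T @ x), x))     # True: the mirrored (DCT-III) flow reconstructs x
```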
  • FIG. 4B is a diagram that illustrates a scaled near-orthogonal 4×4 DCT-II implementation 70 B constructed in accordance with the techniques of this disclosure.
  • 4×4 DCT-II implementation 70 B includes a butterfly unit 76 , which is similar to butterfly unit 72 of FIG. 4A , and even and odd portions 78 A, 78 B (“portions 78 ”).
  • Even portion 78 A is similar to even portion 74 A.
  • Odd portion 78 B is also similar to odd portion 74 B except that the orthogonality condition has been relaxed, leading to a different relationship, i.e., the relationship denoted above with respect to equation (2), between the internal factors C, S and the scaled factor ξ.
  • equation (10) simply indicates that a norm of the distance from the identity matrix can be defined in terms of the transpose of the matrix times the matrix minus the identity matrix. Assuming that C T C remains diagonal, the average absolute distance can be computed in accordance with the following equation (11):
  • δ_N = (1/N) tr(|C T C − I|), (11)
  • coding gain may improve, but analysis of coding gain with respect to the average absolute difference is too dependent on a particular model or the statistics of the image undergoing compression. Consequently, the extent to which to relax the orthogonality property may be determined through analysis of a different metric related to finding integer transforms that are potentially best in terms of matching the basis functions of the DCT-II. More information regarding this form of evaluation can be found in an article authored by Y. A. Reznik, A. T. Hinds, and J. L. Mitchell, entitled “Improved Precision of Fixed-Point Algorithms by Means of Common Factors,” Proc. ICIP 2008, San Diego, Calif., the entire contents of which are incorporated by reference as if fully set forth herein.
  • Equation (12) ensures that, for the scaled factor ξ, the errors of the corresponding approximations for C and S are equal in magnitude but opposite in sign. Under these assumptions, the integer scaled transform shown as 4×4 DCT-II implementation 70 B results.
  • Table 2 illustrates various values selected for the integers C and S and the resulting approximation errors.
  • the third error metric ((C² + S²)/ξ² − 1) shown above under the heading of “Approximation errors” is essentially a subset of the orthogonality mismatch metric δ_N discussed above with respect to equation (11), where this mismatch metric describes values appearing at the odd positions along the diagonal of C T C − I.
  • more precise integer approximations to the DCT-II basis functions are also generally closer to being orthogonal. While such integer approximations are generally closer to being orthogonal, DCT-II implementation 70 B with C and S set to values of 1 and 2, respectively, possibly provides the greatest return in coding gain of those listed, as shown below with respect to FIG. 7B .
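  • The sketch below computes the orthogonality mismatch for several (C, S) pairs under the equation (2) relationship ξ = (C + S)/(ω + ψ), taking ω = cos(3π/8) and ψ = sin(3π/8). The explicit matrix layout is the same illustrative rendering used earlier and is an assumption; the deviations land only on the odd diagonal positions, consistent with the observation about equation (11) above.

```python
import numpy as np

def mismatch(C, S):
    """Average absolute diagonal deviation (delta_N of equation (11)) for a
    near-orthogonal scaled 4x4 DCT-II built with integer factors C and S."""
    omega, psi = np.cos(3 * np.pi / 8), np.sin(3 * np.pi / 8)
    xi = (C + S) / (omega + psi)                       # equation (2)
    Cint = np.array([[1,  1,  1,  1],
                     [S,  C, -C, -S],
                     [1, -1, -1,  1],
                     [C, -S,  S, -C]], dtype=float)    # integer stages (illustrative layout)
    ext = np.diag([0.5, 1/(np.sqrt(2)*xi), 0.5, 1/(np.sqrt(2)*xi)])
    T = ext @ Cint                                     # scaled transform with external factors
    G = T @ T.T                                        # identity matrix if exactly orthogonal
    # Deviations appear only at the odd diagonal positions of G - I.
    return np.trace(np.abs(G - np.eye(4))) / 4

for C, S in [(1, 2), (2, 5), (5, 12)]:
    print(f"C={C}, S={S}: delta_N = {mismatch(C, S):.5f}")
```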
  • implementation 70 B shown in the example of FIG. 4B may also represent a DCT of type III or inverse DCT implementation.
  • Forming an inverse DCT from implementation 70 B involves reversing the inputs and the outputs such that inputs are received by the implementation on the right of FIG. 4B and outputs are output at the left of the implementation. Inputs are then processed by even and odd portions 78 first and then by butterfly 76 before being output on the left.
  • this IDCT implementation that is inverse to implementation 70 B is not shown in a separate figure considering that such an implementation may be described as a mirror image of implementation 70 B.
  • FIG. 4C is a diagram that illustrates another exemplary scaled near-orthogonal 4×4 DCT-II implementation 70 C constructed in accordance with the techniques of this disclosure that results from an alternative factorization.
  • 4×4 DCT-II implementation 70 C includes a butterfly unit 80 , which is similar to butterfly unit 72 of FIG. 4A and butterfly unit 76 of FIG. 4B , and even and odd portions 82 A, 82 B (“portions 82 ”). Even portion 82 A is similar to even portion 78 A.
  • Odd portion 82 B is similar to odd portion 78 B in that the orthogonality condition has been relaxed, but as a result of the alternative factorization, a different relationship, i.e., the relationship denoted above with respect to equation (3), between the internal factors A, B and the scaled factor ξ results. More information regarding the alternative factorization can be found in an article authored by Y. A. Reznik and R. C. Chivukula, entitled “On Design of Transforms for High-Resolution/High-Performance Video Coding,” MPEG input document M16438, presented at MPEG's 88th meeting, in Maui, Hi., in April 2009, the entire contents of which are hereby incorporated by reference as if fully set forth herein.
  • Table 3 illustrates various values selected for the integers A and B and the resulting approximation errors.
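  • The equation (3) relationship can be checked numerically as in the sketch below. Reading A as approximating ξ·1 and B as approximating ξ/√2 is inferred from the factor pairs discussed with respect to FIG. 7C and is an assumption here; under that reading, choosing ξ = (A + B)/(1 + 1/√2) makes the two approximation errors equal in magnitude and opposite in sign.

```python
import math

# Check equation (3): xi = (A + B)/(1 + 1/sqrt(2)) balances the errors of A
# (approximating 1) and B (approximating 1/sqrt(2)). Roles of A and B assumed.
for B, A in [(2, 3), (5, 7), (29, 41)]:
    xi = (A + B) / (1 + 1 / math.sqrt(2))              # equation (3)
    err_a = A / xi - 1
    err_b = B / xi - 1 / math.sqrt(2)
    print(f"A={A:2d} B={B:2d}  xi={xi:7.4f}  err_A={err_a:+.5f}  err_B={err_b:+.5f}")
```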
  • implementation 70 C shown in the example of FIG. 4C may also represent a DCT of type III or inverse DCT implementation.
  • Forming an inverse DCT from implementation 70 C involves reversing the inputs and the outputs such that inputs are received by the implementation on the right of FIG. 4C and outputs are output at the left of the implementation. Inputs are then processed by even and odd portions 82 first and then by butterfly 80 before being output on the left.
  • this IDCT implementation that is inverse to implementation 70 C is not shown in a separate figure considering that such an implementation may be described as a mirror image of implementation 70 C.
  • FIG. 5 is a flow chart illustrating exemplary operation of a coding device, such as video encoder 20 of FIG. 2 , in applying a 4×4 DCT implementation constructed in accordance with the techniques of this disclosure.
  • video encoder 20 receives a current video block 30 within a video frame to be encoded ( 90 ).
  • Motion estimation unit 32 performs motion estimation to compare video block 30 to blocks in one or more adjacent video frames to generate one or more motion vectors ( 92 ).
  • the adjacent frame or frames may be retrieved from reference frame store 34 .
  • Motion estimation may be performed for blocks of variable sizes, e.g., 16×16, 16×8, 8×16, 8×8, 4×4 or smaller block sizes.
  • Motion estimation unit 32 identifies one or more blocks in adjacent frames that most closely match the current video block 30 , e.g., based on a rate distortion model, and determines the displacement between the blocks in the adjacent frames and the current video block. On this basis, motion estimation unit 32 produces one or more motion vectors (MV) that indicate the magnitude and trajectory of the displacement between current video block 30 and one or more matching blocks from the reference frames used to code current video block 30 .
  • the matching block or blocks will serve as predictive (or prediction) blocks for inter-coding of the block to be coded.
  • Motion vectors may have half- or quarter-pixel precision, or even finer precision, allowing video encoder 20 to track motion with higher precision than integer pixel locations and obtain a better prediction block.
  • interpolation operations are carried out in motion compensation unit 36 .
  • Motion estimation unit 32 identifies the best block partitions and motion vector or motion vectors for a video block using certain criteria, such as a rate-distortion model. For example, there may be more than one motion vector in the case of bi-directional prediction. Using the resulting block partitions and motion vectors, motion compensation unit 36 forms a prediction video block ( 94 ).
  • Video encoder 20 forms a residual video block by subtracting the prediction video block produced by motion compensation unit 36 from the original, current video block 30 at summer 48 ( 96 ).
  • Block transform unit 38 applies a transform producing residual transform block coefficients.
  • Block transform unit 38 includes a 4×4 DCT-II unit 52 generated in accordance with the techniques described in this disclosure.
  • Block transform unit 38 applies scaled 4×4 DCT-II unit 52 to the residual block to produce a 4×4 block of residual transform coefficients.
  • 4×4 DCT-II unit 52 generally transforms the residual block from the spatial domain, which is represented as residual pixel data, to the frequency domain, which is represented as DCT coefficients ( 98 ).
  • the transform coefficients may comprise DCT coefficients that include at least one DC coefficient and one or more AC coefficients.
  • Quantization unit 40 quantizes (e.g., rounds) the residual transform block coefficients to further reduce bit rate ( 100 ).
  • quantization unit 40 accounts for the scaled nature of scaled 4×4 DCT-II unit 52 by incorporating the external factors removed during factorization. That is, quantization unit 40 incorporates the external factor noted above with respect to implementations 70 A- 70 C of FIGS. 4A-4C . As quantization typically involves multiplication, incorporating these factors into quantization unit 40 may not increase the implementation complexity of quantization unit 40 .
  • removing the factors from scaled 4×4 DCT-II unit 52 decreases the implementation complexity of DCT-II unit 52 without increasing the implementation complexity of quantization unit 40 , resulting in a net decrease of implementation complexity with respect to video encoder 20 .
  • Entropy coding unit 46 entropy codes the quantized coefficients to even further reduce bit rate. Entropy coding unit 46 performs a statistical lossless coding, referred to, in some instances, as entropy coding to generate a coded bitstream ( 102 ). Entropy coding unit 46 models a probability distribution of quantized DCT coefficients and selects a codebook (e.g., CAVLC or CABAC) based on the modeled probability distribution. Using this codebook, entropy coding unit 46 selects codes for each quantized DCT coefficient in a manner that compresses quantized DCT coefficients. Entropy coding unit 46 outputs the entropy coded coefficients as a coded bitstream which is stored to a memory or storage device and/or sent to video decoder 26 ( 104 ).
  • Reconstruction unit 42 and inverse transform unit 44 reconstruct quantized coefficients and apply inverse transformation, respectively, to reconstruct the residual block.
  • Summation unit 50 adds the reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 36 to produce a reconstructed video block for storage in reference frame store 34 .
  • the reconstructed video block is used by motion estimation unit 32 and motion compensation unit 36 to encode a block in a subsequent video frame.
  • FIG. 6 is a flowchart illustrating example operation of a coding device, such as video decoder 26 of FIG. 3 , in applying a 4×4 DCT-III implementation constructed in accordance with the techniques of this disclosure.
  • Video decoder 26 receives an encoded video bitstream that has been encoded by video encoder 20 .
  • entropy decoding unit 54 receives the encoded video bitstream and decodes from the bitstream quantized residual coefficients and quantized parameters, as well as other information, such as macroblock coding mode and motion information, which may include motion vectors and block partitions ( 106 , 108 ).
  • Motion compensation unit 56 receives the motion vectors and block partitions and one or more reconstructed reference frames from reference frame store 62 to produce a prediction video block ( 110 ).
  • Reconstruction unit 58 inverse quantizes, i.e., de-quantizes, the quantized block coefficients ( 112 ).
  • Inverse transform unit 60 applies an inverse transform, e.g., an inverse DCT, to the coefficients to produce residual blocks.
  • inverse transform unit 60 includes a scaled 4×4 DCT-III unit 68 , which inverse transform unit 60 applies to the coefficients to produce residual blocks ( 114 ).
  • Scaled 4×4 DCT-III unit 68 , which is the inverse of scaled 4×4 DCT-II unit 52 shown in FIG. 2 , may transform the coefficients from the frequency domain to the spatial domain to produce the residual blocks.
  • reconstruction unit 58 accounts for the scaled nature of 4×4 DCT-III unit 68 by incorporating the external factors removed during factorization into the reconstruction process with little if any increase in implementation complexity. Removing factors from scaled 4×4 DCT-III unit 68 may reduce implementation complexity, thereby resulting in a net decrease of complexity for video decoder 26 .
  • the prediction video blocks are then summed by summer 66 with the residual blocks to form decoded blocks ( 116 ).
  • a deblocking filter (not shown) may be applied to filter the decoded blocks to remove blocking artifacts.
  • the filtered blocks are then placed in reference frame store 62 , which provides reference frames for decoding of subsequent video frames and also produces decoded video to drive a display device, such as display device 28 of FIG. 1 ( 118 ).
  • FIGS. 7A-7C are diagrams illustrating graphs 120 A- 120 C of peak signal-to-noise ratios with respect to bitrates for each of three different 4×4 DCT-II implementations, such as implementations 70 A- 70 C of FIGS. 4A-4C , constructed in accordance with the techniques of this disclosure.
  • FIG. 7A is a diagram illustrating graph 120 A of peak signal-to-noise ratios (PSNR) with respect to bitrates for an orthogonal scaled 4×4 DCT-II implementation, such as implementation 70 A of FIG. 4A , constructed in accordance with the techniques of this disclosure.
  • the solid line represents the standard 4×4 DCT-II implementation incorporated by the H.264 video coding standard.
  • the dotted line represents a theoretical best DCT implementation capable of performing irrational multiplication and additions.
  • the long dashed line represents orthogonal 4×4 DCT-II implementation 70 A with internal factors C and S set to 2 and 5, respectively.
  • the short dashed line represents orthogonal 4×4 DCT-II implementation 70 A with internal factors C and S set to 3 and 7, respectively.
  • the dashed-dotted line represents orthogonal 4×4 DCT-II implementation 70 A with internal factors C and S set to 5 and 12, respectively.
  • orthogonal 4×4 DCT-II implementation 70 A with internal factors C and S set to 2 and 5 more accurately approximates the theoretical best DCT-II implementation than the H.264 implementation.
  • orthogonal 4×4 DCT-II implementation 70 A with internal factors C and S set to 3 and 7 or to 5 and 12 does not provide much gain in terms of PSNR over orthogonal 4×4 DCT-II implementation 70 A with internal factors C and S set to 2 and 5, despite involving greater implementation complexity.
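  • For reference, the PSNR values plotted in graphs 120 A- 120 C follow the standard definition over 8-bit video, PSNR = 10·log10(255²/MSE). The helper below is a generic sketch of that definition, not code from this disclosure.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two frames of 8-bit samples."""
    mse = np.mean((np.asarray(original, dtype=np.float64)
                   - np.asarray(reconstructed, dtype=np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)
```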
  • FIG. 7B is a diagram illustrating graph 120 B of peak signal-to-noise ratios (PSNR) with respect to bitrates for a near-orthogonal scaled 4×4 DCT-II implementation, such as implementation 70 B of FIG. 4B , constructed in accordance with the techniques of this disclosure.
  • the solid line represents the standard orthogonal 4×4 DCT-II implementation incorporated by the H.264 video coding standard.
  • the dotted line represents a theoretical best DCT implementation capable of performing irrational multiplication and additions.
  • the short dashed line represents near-orthogonal 4×4 DCT-II implementation 70 B with internal factors C and S set to 1 and 2, respectively.
  • the long dashed line represents near-orthogonal 4×4 DCT-II implementation 70 B with internal factors C and S set to 2 and 5, respectively.
  • the dashed-dotted line represents near-orthogonal 4×4 DCT-II implementation 70 B with internal factors C and S set to 5 and 12, respectively.
  • near-orthogonal 4×4 DCT-II implementation 70 B with internal factors C and S set to 2 and 5 is not much better in terms of PSNR in comparison to the H.264 implementation.
  • near-orthogonal 4×4 DCT-II implementation 70 B with internal factors C and S set to 1 and 2 provides a better PSNR than even the theoretical DCT implementation.
  • near-orthogonal 4×4 DCT-II implementation 70 B with internal factors C and S set to 5 and 12 most accurately represents the theoretical DCT implementation.
  • FIG. 7C is a diagram illustrating graph 120 C of peak signal-to-noise ratios (PSNR) with respect to bitrates for a near-orthogonal scaled 4×4 DCT-II implementation derived from an alternative factorization, such as implementation 70 C of FIG. 4C , constructed in accordance with the techniques of this disclosure.
  • the solid line represents the standard orthogonal 4×4 DCT-II implementation incorporated by the H.264 video coding standard.
  • the dotted line represents a theoretical best DCT implementation capable of performing irrational multiplication and additions.
  • the long dashed line represents near-orthogonal 4×4 DCT-II implementation 70 C with internal factors B and A set to 2 and 3, respectively.
  • the short dashed line represents near-orthogonal 4×4 DCT-II implementation 70 C with internal factors B and A set to 5 and 7, respectively.
  • the dashed-dotted line represents near-orthogonal 4×4 DCT-II implementation 70 C with internal factors B and A set to 29 and 41, respectively.
  • near-orthogonal 4×4 DCT-II implementation 70 C with internal factors B and A set to 2 and 3 is worse in terms of PSNR than the H.264 implementation.
  • near-orthogonal 4×4 DCT-II implementation 70 C with internal factors B and A set to 5 and 7 provides a better PSNR than the H.264 implementation and accurately represents the theoretical DCT implementation without requiring the complexity of near-orthogonal 4×4 DCT-II implementation 70 C with internal factors B and A set to 29 and 41.
  • the techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless communication device handset such as a mobile phone, an integrated circuit (IC) or a set of ICs (i.e., a chip set). Any components, modules or units described have been provided to emphasize functional aspects and do not necessarily require realization by different hardware units. The techniques described herein may also be implemented in hardware, software, firmware, or any combination thereof. Any features described as modules, units or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. In some cases, various features may be implemented as an integrated circuit device, such as an integrated circuit chip or chipset.
  • the techniques may be realized at least in part by a computer-readable medium comprising instructions that, when executed by a processor, perform one or more of the methods described above.
  • the computer-readable medium may comprise a computer-readable storage medium that is a physical structure, and may form part of a computer program product, which may include packaging materials.
  • the computer-readable storage medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like.
  • the computer-readable storage medium may, in some respects, be considered a non-transitory computer-readable storage medium.
  • the code or instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
  • the term “processor,” as used herein, may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein.
  • the functionality described herein may be provided within dedicated software modules or hardware modules configured for encoding and decoding, or incorporated in a combined video codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.
  • the disclosure also contemplates any of a variety of integrated circuit devices that include circuitry to implement one or more of the techniques described in this disclosure.
  • Such circuitry may be provided in a single integrated circuit chip or in multiple, interoperable integrated circuit chips in a so-called chipset.
  • Such integrated circuit devices may be used in a variety of applications, some of which may include use in wireless communication devices, such as mobile telephone handsets.

Abstract

In general, techniques are described that provide for 4×4 transforms for media coding. A number of different 4×4 transforms are described that adhere to these techniques. As one example, an apparatus includes a 4×4 discrete cosine transform (DCT) hardware unit. The DCT hardware unit implements an orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S). The 4×4 DCT hardware unit applies the 4×4 DCT implementation to media data to transform the media data from a spatial domain to a frequency domain. As another example, an apparatus implements a non-orthogonal 4×4 DCT to improve coding gain.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a Continuation of U.S. patent application Ser. No. 12/788,625, filed May 27, 2010, which claims priority under 35 U.S.C. §119(e) to U.S. Provisional Application No. 61/184,656, filed Jun. 5, 2009 and U.S. Provisional Application No. 61/219,887, filed Jun. 24, 2009.
  • TECHNICAL FIELD
  • This disclosure relates to data compression and, more particularly, data compression involving transforms.
  • BACKGROUND
  • Data compression is widely used in a variety of applications to reduce consumption of data storage space, transmission bandwidth, or both. Example applications of data compression include visible or audible media data coding, such as digital video, image, speech, and audio coding. Digital video coding, for example, is used in a wide range of devices, including digital televisions, digital direct broadcast systems, wireless communication devices, personal digital assistants (PDAs), laptop or desktop computers, digital cameras, digital recording devices, video gaming devices, cellular or satellite radio telephones, or the like. Digital video devices implement video compression techniques, such as MPEG-2, MPEG-4, or H.264/MPEG-4 Advanced Video Coding (AVC), to transmit and receive digital video more efficiently.
  • In general, video compression techniques perform spatial prediction, motion estimation and motion compensation to reduce or remove redundancy inherent in video data. In particular, intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame. Inter-coding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames. For inter-coding, a video encoder performs motion estimation to track the movement of matching video blocks between two or more adjacent frames. Motion estimation generates motion vectors, which indicate the displacement of video blocks relative to corresponding video blocks in one or more reference frames. Motion compensation uses the motion vector to generate a prediction video block from a reference frame. After motion compensation, a residual video block is formed by subtracting the prediction video block from the original video block.
  • A video encoder then applies a transform followed by quantization and lossless statistical coding processes to further reduce the bit rate of the residual block produced by the video coding process. In some instances, the applied transform comprises a discrete cosine transform (DCT). Typically, the DCT is applied to video blocks whose size is a power of two, such as a video block that is 4 pixels high by 4 pixels wide (which is often referred to as a “4×4 video block”). These DCTs may therefore be referred to as 4×4 DCTs in that these DCTs are applied to 4×4 video blocks to produce a 4×4 matrix of DCT coefficients. The 4×4 matrix of DCT coefficients produced from applying a 4×4 DCT to the residual block then undergo quantization and lossless statistical coding processes to generate a bitstream. Examples of statistical coding processes (also known as “entropy coding” processes) include context-adaptive variable length coding (CAVLC) or context-adaptive binary arithmetic coding (CABAC). A video decoder receives the encoded bitstream and performs lossless decoding to decompress residual information for each of the blocks. Using the residual information and motion information, the video decoder reconstructs the encoded video.
  • SUMMARY
  • In general, this disclosure is directed to techniques for coding data, such as media data, using one or more implementations of an approximation of 4×4 discrete cosine transform (DCT) that may provide increased coding gain relative to conventional 4×4 DCTs. The implementations of the 4×4 DCT applied in accordance with the techniques of this disclosure involve various relationships between scaled factors and internal factors. The term “scaled factors” refers to factors external from the implementation of the 4×4 DCT that are removed through factorization. The term “internal factors” refers to factors internal to the implementation of the 4×4 DCT that remain after factorization. One example implementation of the 4×4 DCT is orthogonal, which implies that the matrix of coefficients representative of the 4×4 DCT, when multiplied by a transpose of this matrix, equals the identity matrix. Another example implementation of the 4×4 DCT is near-orthogonal (or approximately orthogonal). By adhering to the various relationships described in detail below, the techniques facilitate selection of matrix coefficients in both instances that result in orthogonal and near-orthogonal 4×4 DCT implementations, which, when applied to data, may promote increased coding gain relative to conventional 4×4 DCTs.
  • In one aspect, an apparatus comprises a 4×4 discrete cosine transform (DCT) hardware unit that implements an orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S), wherein the 4×4 DCT hardware unit applies the 4×4 DCT implementation to media data to transform the media data from a spatial domain to a frequency domain.
  • In another aspect, a method comprises applying an orthogonal 4×4 discrete cosine transform (DCT) implementation with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain, wherein the orthogonal 4×4 DCT implementation includes an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S).
  • In another aspect, an apparatus comprises means for applying an orthogonal 4×4 discrete cosine transform (DCT) implementation to media data to transform the media data from a spatial domain to a frequency domain, wherein the orthogonal 4×4 DCT implementation includes an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S).
  • In another aspect, a non-transitory computer-readable storage medium stores instructions that, when executed by a processor, cause the processor to apply an orthogonal 4×4 discrete cosine transform (DCT) implementation with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain, wherein the orthogonal 4×4 DCT implementation includes an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S).
  • In another aspect, an apparatus comprises a 4×4 inverse discrete cosine transform (IDCT) hardware unit that implements an IDCT of an orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S), wherein the 4×4 IDCT hardware unit applies the 4×4 IDCT implementation to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain.
  • In another aspect, a method comprises applying a 4×4 inverse discrete cosine transform (IDCT) of an orthogonal 4×4 DCT with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain, wherein the orthogonal 4×4 DCT includes an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S).
  • In another aspect, an apparatus comprises means for applying a 4×4 inverse discrete cosine transform (IDCT) of an orthogonal 4×4 DCT to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain, wherein the orthogonal 4×4 DCT includes an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S).
  • In another aspect, a non-transitory computer-readable storage medium stores instructions that, when executed by a processor, cause the processor to apply a 4×4 inverse discrete cosine transform (IDCT) of an orthogonal 4×4 DCT with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain, wherein the orthogonal 4×4 DCT includes an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S).
  • In another aspect, an apparatus comprises a 4×4 discrete cosine transform (DCT) hardware unit, wherein the DCT module implements a non-orthogonal 4×4 DCT having an odd portion that applies first and second variables (C, S) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (C + S) / (ω + ψ),
  • wherein variables ω and ψ denote irrational internal transform factors and variables C and S denote dyadic rational internal transform factors used in place of variables ω and ψ in integer implementations of the non-orthogonal 4×4 DCT, and wherein the 4×4 DCT hardware unit applies the 4×4 DCT implementation to media data to transform the media data from a spatial domain to a frequency domain.
  • In another aspect, a method comprises applying a non-orthogonal 4×4 discrete cosine transform (DCT) with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain, wherein the non-orthogonal 4×4 DCT includes an odd portion that applies first and second variables (C, S) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (C + S) / (ω + ψ),
  • wherein variables ω and ψ denote irrational internal transform factors and variables C and S denote dyadic rational internal transform factors used in place of variables ω and ψ in integer implementations of the non-orthogonal 4×4 DCT.
  • In another aspect, an apparatus comprises means for applying a non-orthogonal 4×4 discrete cosine transform (DCT) with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain, wherein the non-orthogonal 4×4 DCT includes an odd portion that applies first and second variables (C, S) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (C + S) / (ω + ψ),
  • wherein variables ω and ψ denote irrational internal transform factors and variables C and S denote dyadic rational internal transform factors used in place of variables ω and ψ in integer implementations of the non-orthogonal 4×4 DCT.
  • In another aspect, a non-transitory computer-readable storage medium stores instructions that, when executed by a processor, cause the processor to apply a non-orthogonal 4×4 discrete cosine transform (DCT) with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain, wherein the non-orthogonal 4×4 DCT includes an odd portion that applies first and second variables (C, S) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (C + S) / (ω + ψ),
  • wherein variables ω and ψ denote irrational internal transform factors and variables C and S denote dyadic rational internal transform factors used in place of variables ω and ψ in integer implementations of the non-orthogonal 4×4 DCT.
  • In another aspect, an apparatus comprises a 4×4 inverse discrete cosine transform (IDCT) hardware unit, wherein the DCT hardware unit implements an inverse DCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (C + S) / (ω + ψ),
  • wherein variables ω and ψ denote irrational internal transform factors and variables C and S denote dyadic rational internal transform factors used in place of variables ω and ψ in integer implementations of the non-orthogonal 4×4 DCT, and wherein the 4×4 IDCT hardware unit applies the 4×4 IDCT implementation to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain.
  • In another aspect, a method comprises applying a 4×4 inverse discrete cosine transform (IDCT) with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain,
    • wherein the 4×4 IDCT comprises an IDCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (C + S) / (ω + ψ),
  • wherein variables ω and ψ denote irrational internal transform factors and variables C and S denote dyadic rational internal transform factors used in place of variables ω and ψ in integer implementations of the non-orthogonal 4×4 DCT.
  • In another aspect, an apparatus comprises means for applying a 4×4 inverse discrete cosine transform (IDCT) with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain, wherein the 4×4 IDCT comprises an IDCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (C + S) / (ω + ψ),
  • wherein variables ω and ψ denote irrational internal transform factors and variables C and S denote dyadic rational internal transform factors used in place of variables ω and ψ in integer implementations of the non-orthogonal 4×4 DCT.
  • In another aspect, a non-transitory computer-readable storage medium stores instructions that, when executed by a processor, cause the processor to apply a 4×4 inverse discrete cosine transform (IDCT) with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain, wherein the 4×4 IDCT comprises an IDCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (C + S) / (ω + ψ),
  • wherein variables ω and ψ denote irrational internal transform factors and variables C and S denote dyadic rational internal transform factors used in place of variables ω and ψ in integer implementations of the non-orthogonal 4×4 DCT.
  • In another aspect, an apparatus comprises a 4×4 discrete cosine transform (DCT) hardware unit that implements a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (A + B) / (1 + 1/√2),
  • wherein the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two, and wherein the 4×4 DCT hardware unit applies the 4×4 DCT implementation to media data to transform the media data from a spatial domain to a frequency domain.
  • In another aspect, a method comprises applying a non-orthogonal 4×4 discrete cosine transform (DCT) with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain. The non-orthogonal 4×4 DCT includes an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (A + B) / (1 + 1/√2),
  • wherein the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two.
  • In another aspect, an apparatus comprises means for applying a non-orthogonal 4×4 discrete cosine transform (DCT) with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain, wherein the non-orthogonal 4×4 DCT includes an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (A + B) / (1 + 1/√2),
  • wherein the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two.
  • In another aspect, a non-transitory computer-readable storage medium stores instructions that, when executed by a processor, cause the processor to apply a non-orthogonal 4×4 discrete cosine transform (DCT) with a 4×4 DCT hardware unit to media data to transform the media data from a spatial domain to a frequency domain. The non-orthogonal 4×4 DCT includes an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (A + B) / (1 + 1/√2),
  • wherein the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two.
  • In another aspect, an apparatus comprises a 4×4 inverse discrete cosine transform (IDCT) hardware unit, wherein the 4×4 IDCT hardware unit implements an IDCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (A + B) / (1 + 1/√2),
  • wherein the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two, and wherein the 4×4 IDCT hardware unit applies the 4×4 IDCT implementation to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain.
  • In another aspect, a method comprises applying a 4×4 inverse discrete cosine transform (IDCT) with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain. The IDCT comprises an IDCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (A + B) / (1 + 1/√2),
  • wherein the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two.
  • In another aspect, an apparatus comprises means for applying a 4×4 inverse discrete cosine transform (IDCT) with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain. The IDCT comprises an IDCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (A + B) / (1 + 1/√2),
  • wherein the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two.
  • In another aspect, a non-transitory computer-readable storage medium stores instructions that, when executed by a processor, cause the processor to apply a 4×4 inverse discrete cosine transform (IDCT) with a 4×4 IDCT hardware unit to DCT coefficients representative of media data to transform the media data from a frequency domain to a spatial domain. The IDCT comprises an IDCT of a non-orthogonal 4×4 DCT having an odd portion that applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (A + B) / (1 + 1/√2),
  • wherein the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two.
  • The details of one or more aspects of the techniques are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques described in this disclosure will be apparent from the description and drawings, and from the claims.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is a block diagram illustrating a video encoding and decoding system.
  • FIG. 2 is a block diagram illustrating the video encoder of FIG. 1 in more detail.
  • FIG. 3 is a block diagram illustrating the video decoder of FIG. 1 in more detail.
  • FIGS. 4A-4C are diagrams that each illustrates an implementation of a scaled 4×4 DCT-II constructed in accordance with the techniques of this disclosure.
  • FIG. 5 is a flow chart illustrating exemplary operation of a coding device in applying a 4×4 DCT implementation constructed in accordance with the techniques of this disclosure.
  • FIG. 6 is a flowchart illustrating example operation of a coding device in applying a 4×4 DCT-III implementation constructed in accordance with the techniques of this disclosure.
  • FIGS. 7A-7C are diagrams illustrating graphs of peak signal-to-noise ratios with respect to bitrates for each of three different 4×4 DCT-II implementations constructed in accordance with the techniques of this disclosure.
  • DETAILED DESCRIPTION
  • In general, this disclosure is directed to techniques for coding data using one or more 4×4 discrete cosine transforms (DCTs) represented as a 4×4 matrix of coefficients selected in accordance with various relationships. The techniques may be applied to compress a variety of data, including visible or audible media data, such as digital video, image, speech, and/or audio data, and thereby transform such electrical signals representing such data into compressed signals for more efficient processing, transmission or archival of the electrical signals. By adhering to the various relationships defined in accordance with the techniques of this disclosure, coefficients may be selected for the coefficient matrixes such that orthogonal and near-orthogonal implementations of 4×4 DCTs, when applied to data, may promote increased coding gain.
  • The size denoted above, i.e., 4×4, is represented in terms of discrete data units. To illustrate, video data is often described in terms of video blocks, particularly with respect to video compression. A video block generally refers to any sized portion of a video frame, where a video frame refers to a picture or image in a series of pictures or images. Each video block typically comprises a plurality of discrete pixel data that indicates either color components, e.g., red, blue and green, (so-called “chromaticity” or “chroma” components) or luminance components (so-called “luma” components). Each set of pixel data comprises a single 1×1 point in the video block and may be considered a discrete data unit with respect to video blocks. Thus, a 4×4 video block, for example, comprises four rows of pixel data with four discrete sets of pixel data in each row. An n-bit value may be assigned to each pixel to specify a color or luminance value.
  • DCTs are commonly described in terms of the size of the block of data, whether audio, speech, image or video data, that the DCT is capable of processing. For example, if a DCT can process a 4×4 block of data, the DCT may be referred to as a 4×4 DCT. Moreover, DCTs may be denoted as a particular type. The most commonly employed type of DCT of the eight different types of DCTs is a DCT of type-II, which may be denoted as “DCT-II.” Often, when referring generally to a DCT, such reference refers to a DCT of type-II or DCT-II. The inverse of a DCT-II is referred to as a DCT of type-III, which similarly may be denoted as “DCT-III” or, with the common understanding that DCT refers to a DCT-II, as “IDCT” where the “I” in “IDCT” denotes inverse. Reference to DCTs below conforms to this notation, where general reference to DCTs refers to a DCT-II unless otherwise specified. However, to avoid confusion, DCTs, including DCTs-II, are for the most part referred to below with the corresponding type (II, III, etc.) indicated.
  • The techniques described in this disclosure involve both an encoder and/or decoder that employ one or more implementations of the 4×4 DCTs-II to facilitate compression and/or decompression of data. Again, the compression and decompression accomplished through applying these 4×4 DCT-II implementations permits physical transformation of electrical signals representing the data such that the signals can be processed, transmitted, and/or stored more efficiently using physical computing hardware, physical transmission media (e.g., copper, optical fiber, wireless, or other media), and/or storage hardware (e.g., magnetic or optical disk or tape, or any of a variety of solid state media). The implementations may be configured solely in hardware or may be configured in a combination of hardware and software.
  • The implementations of the 4×4 DCTs-II may be orthogonal or near-orthogonal. The term “orthogonal” refers to a property of the matrix in general where the matrix, when multiplied by the transpose of the matrix, equals the identity matrix. The term “near-orthogonal” refers to instances where this orthogonal property is relaxed, such that strict orthogonality is not required. In this respect, “near-orthogonal” suggests approximately or loosely orthogonal. A near-orthogonal matrix, however, does not meet the technical definition of orthogonal and such near-orthogonal matrixes may be considered non-orthogonal from a purely technical perspective.
  • To illustrate the orthogonal implementation of the 4×4 DCT-II described in this disclosure, consider an apparatus that includes a 4×4 DCT module. The 4×4 DCT module implements an orthogonal 4×4 DCT-II constructed in accordance with the techniques described in this disclosure. This orthogonal 4×4 DCT-II implementation includes an odd portion and an even portion. The so-called “odd portion” of the 4×4 DCT-II refers to a portion of the 4×4 DCT-II implementation that outputs odd numbered coefficients. The so-called “even portion” of the 4×4 DCT-II refers to a portion of the 4×4 DCT-II implementation that outputs even numbered coefficients.
  • In accordance with the techniques of this disclosure, the odd portion applies first and second internal factors C, S that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S). The term “internal factors” refers to factors internal to the implementation of the 4×4 DCT that remain after factorization. The term “scaled factors” refers to factors external from the implementation of the 4×4 DCT that are removed through factorization.
  • Internal factors commonly increase implementation complexity by requiring multiplications, which may be expensive in terms of implementation complexity. For example, a multiplication may require three or more times as many computational operations (e.g., clock cycles) to complete as a simpler addition operation. Dedicated multipliers may be implemented to perform multiplication more efficiently (e.g., in fewer clock cycles), but these multiplier implementations typically consume significantly more chip or silicon surface area and may also draw large amounts of power. Multiplication by factors is therefore often avoided, particularly in power sensitive devices, such as most mobile devices including cellular phones, so-called “smart” cellular phones, personal digital assistants (PDAs), laptop computers, so-called “netbooks,” and the like. Factorization is a process whereby one or more internal factors may be removed from the 4×4 DCT-II implementation and replaced with external factors. The external factors can then be incorporated in subsequent quantization operations, for example, with respect to video encoders, usually with minimal expense or increase in complexity.
  • In any event, the above relationship between the first and second internal factors C, S and the scaled factor (ξ) noted above provides for specific values of the internal factors not used in previous implementations of 4×4 DCTs-II. For example, values for internal factors C and S of 2 and 5, respectively, do not overly increase implementation complexity and improve upon coding gain over known 4×4 DCT implementations involving values of 1 and 2 for C and S. The video encoder then applies the 4×4 DCT-II implementation with internal factors 2 and 5 to media data so as to transform the media data from a spatial domain to a frequency domain. By applying this orthogonal 4×4 DCT-II implementation, the techniques facilitate coding gain (which is a term representative of compression efficiency) when compared to standard DCT-II implementations that include internal factors of 1 and 2.
  • Orthogonality is generally desired with respect to DCT-II implementations because an orthogonal transform is readily invertible. This invertible property, as one example, allows a video encoder to apply the orthogonal 4×4 DCT implementation to generate DCT coefficients from residual blocks of video data. A video decoder can then apply a 4×4 inverse DCT-II (IDCT) implementation to reconstruct the residual block of video data from the DCT-II coefficients with little if any loss in data. Considering that a main goal of video encoding is the preservation of data, various coding standards, such as the H.264 video coding standard, adopted an orthogonal implementation of the 4×4 DCT.
  • While orthogonality is generally desired in theory, the video, audio or general coding pipeline in practice involves a number of steps that introduce so-called “noise” that in most respects effectively prevents the accurate reconstruction of the values provided by orthogonal 4×4 DCT-II implementations. Considering integer-arithmetic implementations, near-orthogonal transforms may improve coding efficiency while also reducing implementation complexity compared to strictly orthogonal integer transforms. In effect, relaxing the orthogonal property introduces noise into the system, but may improve coding gain while also reducing implementation complexity.
  • To illustrate the near-orthogonal implementation of the 4×4 DCT-II described in this disclosure, consider that the 4×4 DCT module of the apparatus implements this near-orthogonal 4×4 DCT-II that is constructed in accordance with the techniques described in this disclosure. This near-orthogonal 4×4 DCT-II implementation also includes an odd portion and an even portion. The odd portion in this instance applies first and second internal factors (C, S) that are related to a scaled factor (ξ) by the following equation:
  • ξ = (C + S) / (ω + ψ).
  • In this equation, variables ω and ψ denote original (irrational) internal transform factors; for example, (ω) may be the cosine of three times the constant pi (π) divided by eight, and (ψ) may be the sine of three times the constant pi (π) divided by eight. Variables (C) and (S) denote integer (or dyadic rational) internal transform factors used in place of (ω) and (ψ).
    • This equation indicates that the scaled factor (ξ) equals the sum of the first internal factor (C) and the second internal factor (S), divided by the sum of (ω) and (ψ). This relationship may identify particular internal factor values of C and S similar to the above relationship defined with respect to the orthogonal implementation, but results in a different external factor. However, the different external factor does not typically increase implementation complexity for the reasons noted above, and generally provides a more accurate approximation of the original transform factors. It may also provide improved coding gain over conventional 4×4 DCT-II implementations and even, in some instances, over the orthogonal 4×4 DCT-II implementation described above. Consequently, the 4×4 DCT module applies this near-orthogonal 4×4 DCT-II to media data to transform the media data from a spatial domain to a frequency domain with the result of potentially improved coding gain.
  • FIG. 1 is a block diagram illustrating a video encoding and decoding system 10. As shown in FIG. 1, system 10 includes a source hardware device 12 that transmits encoded video to a receive hardware device 14 via a communication channel 16. Source device 12 may include a video source 18, video encoder 20 and a transmitter 22. Receive device 14 may include a receiver 24, video decoder 26 and video display device 28.
  • In the example of FIG. 1, communication channel 16 may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines, or any combination of wireless and wired media. Channel 16 may form part of a packet-based network, such as a local area network, wide-area network, or a global network such as the Internet. Communication channel 16 generally represents any suitable communication medium, or collection of different communication media, for transmitting video data from source device 12 to receive device 14.
  • Source device 12 generates video for transmission to destination device 14. In some cases, however, devices 12, 14 may operate in a substantially symmetrical manner. For example, each of devices 12, 14 may include video encoding and decoding components. Hence, system 10 may support one-way or two-way video transmission between video devices 12, 14, e.g., for video streaming, video broadcasting, or video telephony. For other data compression and coding applications, devices 12, 14 could be configured to send and receive, or exchange, other types of data, such as image, speech or audio data, or combinations of two or more of video, image, speech and audio data. Accordingly, the following discussion of video applications is provided for purposes of illustration and should not be considered limiting of the various aspects of the disclosure as broadly described herein.
  • Video source 18 may include a video capture device, such as one or more video cameras, a video archive containing previously captured video, or a live video feed from a video content provider. As a further alternative, video source 18 may generate computer graphics-based data as the source video, or a combination of live video and computer-generated video. In some cases, if video source 18 is a camera, source device 12 and receive device 14 may form so-called camera phones or video phones. Hence, in some aspects, source device 12, receive device 14 or both may form a wireless communication device handset, such as a mobile telephone. In each case, the captured, pre-captured or computer-generated video may be encoded by video encoder 20 for transmission from video source device 12 to video decoder 26 of video receive device 14 via transmitter 22, channel 16 and receiver 24. Display device 28 may include any of a variety of display devices such as a liquid crystal display (LCD), plasma display or organic light emitting diode (OLED) display.
  • Video encoder 20 and video decoder 26 may be configured to support scalable video coding for spatial, temporal and/or signal-to-noise ratio (SNR) scalability. In some aspects, video encoder 20 and video decoder 26 may be configured to support fine granularity SNR scalability (FGS) coding. Encoder 20 and decoder 26 may support various degrees of scalability by supporting encoding, transmission and decoding of a base layer and one or more scalable enhancement layers. For scalable video coding, a base layer carries video data with a minimum level of quality. One or more enhancement layers carry additional bitstream data to support higher spatial, temporal and/or SNR levels.
  • Video encoder 20 and video decoder 26 may operate according to a video compression standard, such as MPEG-2, MPEG-4, ITU-T H.263, or ITU-T H.264/MPEG-4 Advanced Video Coding (AVC). Although not shown in FIG. 1, in some aspects, video encoder 20 and video decoder 26 may be integrated with an audio encoder and decoder, respectively, and include appropriate MUX-DEMUX units, or other hardware and software, to handle encoding of both audio and video in a common data stream or separate data streams. If applicable, MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP).
  • In some aspects, for video broadcasting, the techniques described in this disclosure may be applied to enhance H.264 video coding for delivering real-time video services in terrestrial mobile multimedia multicast (TM3) systems using the Forward Link Only (FLO) Air Interface Specification, “Forward Link Only Air Interface Specification for Terrestrial Mobile Multimedia Multicast,” published as Technical Standard TIA-1099 (the “FLO Specification”), e.g., via a wireless video broadcast server or wireless communication device handset. The FLO Specification includes examples defining bitstream syntax and semantics and decoding processes suitable for the FLO Air Interface. Alternatively, video may be broadcast according to other standards such as DVB-H (digital video broadcast-handheld), ISDB-T (integrated services digital broadcast-terrestrial), or DMB (digital media broadcast). Hence, source device 12 may be a mobile wireless terminal, a video streaming server, or a video broadcast server. However, techniques described in this disclosure are not limited to any particular type of broadcast, multicast, or point-to-point system. In the case of broadcast, source device 12 may broadcast several channels of video data to multiple receive devices, each of which may be similar to receive device 14 of FIG. 1.
  • Video encoder 20 and video decoder 26 each may be implemented as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof. Hence, each of video encoder 20 and video decoder 26 may be implemented at least partially as an integrated circuit (IC) chip or device, and included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective mobile device, subscriber device, broadcast device, server, or the like. In addition, source device 12 and receive device 14 each may include appropriate modulation, demodulation, frequency conversion, filtering, and amplifier components for transmission and reception of encoded video, as applicable, including radio frequency (RF) wireless components and antennas sufficient to support wireless communication. For ease of illustration, however, such components are not shown in FIG. 1.
  • A video sequence includes a series of video frames. Video encoder 20 operates on blocks of pixels within individual video frames in order to encode the video data. The video blocks may have fixed or varying sizes, and may differ in size according to a specified coding standard. Each video frame includes a series of slices. Each slice may include a series of macroblocks, which may be arranged into sub-blocks. As an example, the ITU-T H.264 standard supports intra prediction in various dyadic block sizes, such as 16 by 16, 8 by 8, 4 by 4 for luma components, and 8×8 for chroma components, as well as inter prediction in various block sizes, such as 16 by 16, 16 by 8, 8 by 16, 8 by 8, 8 by 4, 4 by 8 and 4 by 4 for luma components and corresponding scaled sizes for chroma components.
  • Smaller video blocks can generally provide better resolution, and may be used for locations of a video frame that include higher levels of detail. In general, macroblocks (MBs) and the various sub-blocks may be considered to represent video blocks. In addition, a slice may be considered to represent a series of video blocks, such as MBs and/or sub-blocks. Each slice may be an independently decodable unit. After prediction, a transform may be performed on dyadic or non-dyadic sized residual blocks, and an additional transform may be applied to the DCT coefficients of the 4×4 blocks for chroma components or luma components if the intra 16×16 prediction mode is used.
  • Video encoder 20 and/or video decoder 26 of system 10 of FIG. 1 may be configured to include an implementation of a 4×4 DCT-II and an inverse thereof (e.g., a 4×4 DCT-III), respectively, wherein the 4×4 DCT-II adheres to one of the various relationships of the techniques for selecting DCT-II matrix coefficients for a 4×4 sized DCT described in this disclosure. While ITU-T H.264 standard supports intra prediction in various block sizes, such as 16 by 16, 8 by 8, 4 by 4 for luma components, and 8×8 for chroma components, revisions to this standard to improve coding efficiency are currently underway. One revised standard may be referred to as ITU-T H.265 or simply H.265 (sometimes referred to as next generation video coding or NGVC). As described below with respect to FIGS. 7A-7C, 4×4 DCTs of type-II (“DCTs-II”) that adhere to one of the various relationships set forth in accordance with the techniques of this disclosure may improve coding efficiency as measured in terms of peak signal-to-noise ratios (PSNRs). Consequently, ITU-T H.265 and other evolving standards or specifications may consider these DCTs-II so as to improve coding efficiency.
  • In accordance with the techniques described in this disclosure, implementations of 4×4 DCTs-II may be generated in a manner that adheres to one of the various relationships that may promote improved coding gain over conventional implementations. A first relationship is defined for orthogonal implementations of 4×4 DCTs-II and is set forth below with respect to equation (1):

  • ξ = √(C² + S²),   (1)
  • where C and S denote first and second internal factors in an “odd” portion of the 4×4 DCTs-II implementation and (ξ) denotes a scaled factor applied to the “odd” portion of the 4×4 DCTs-II implementation. The so-called “odd portion” of the 4×4 DCT-II refers to a portion of the 4×4 DCT-II implementation that outputs odd numbered coefficients. The so-called “even” portion of the 4×4 DCT-II refers to a portion of the 4×4 DCT-II implementation that outputs even numbered coefficients. The term “internal factors” refers to factors internal to the implementation of the 4×4 DCT that remain after factorization. The term “scaled factors” refers to factors external from the implementation of the 4×4 DCT that are removed through factorization.
  • Internal factors commonly increase implementation complexity by requiring multiplications, which may be expensive in terms of implementation complexity. For example, a multiplication may require three or more times as many computational operations (e.g., clock cycles) to complete as a simpler addition operation. Dedicated multipliers may be implemented to perform multiplication more efficiently (e.g., in fewer clock cycles), but these multiplier implementations typically consume significantly more chip or silicon surface area and may also draw large amounts of power. Multiplication by factors is therefore often avoided, particularly in power sensitive devices, such as most mobile devices including cellular phones, so-called “smart” cellular phones, personal digital assistants (PDAs), laptop computers, so-called “netbooks,” and the like. Factorization is a process whereby one or more internal factors may be removed from the 4×4 DCT-II implementation and replaced with external factors. The external factors can then be incorporated in subsequent quantization operations, for example, with respect to video encoders, usually with minimal expense or increase in complexity.
  • In any event, the above relationship between the first and second internal factors C, S and the scaled factor (ξ) noted above with respect to equation (1) provides for specific values of the internal factors not used in previous implementations of 4×4 DCTs-II. For example, values for internal factors C and S of 2 and 5, respectively, do not overly increase implementation complexity and improve upon coding gain over known 4×4 DCT implementations involving values of 1 and 2 for C and S. The video encoder then applies the 4×4 DCT-II implementation with internal factors 2 and 5 to media data so as to transform the media data from a spatial domain to a frequency domain. By applying this orthogonal 4×4 DCT-II implementation, the techniques facilitate coding gain (which is a term representative of compression efficiency) when compared to standard DCT-II implementations that include internal factors of 1 and 2.
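  • For illustration only, products by small internal factors such as 2 and 5 can be formed with shifts and additions alone, which is why these values add only modest complexity in integer arithmetic. The following Python sketch (the helper names are hypothetical and not part of any implementation described above) shows one way such multiplier-free products might be computed:

```python
def mul2(x: int) -> int:
    # multiply by internal factor 2 using a single left shift
    return x << 1

def mul5(x: int) -> int:
    # multiply by internal factor 5 as (x << 2) + x: one shift plus one addition
    return (x << 2) + x

# quick check against plain multiplication
assert all(mul2(v) == 2 * v and mul5(v) == 5 * v for v in range(-8, 9))
```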
  • Orthogonality is generally desired with respect to DCT-II implementations because an orthogonal transform is readily invertible. This invertible property, as one example, allows a video encoder to apply the orthogonal 4×4 DCT implementation to generate DCT coefficients from residual blocks of video data. A video decoder can then apply a 4×4 inverse DCT-II (IDCT) implementation to reconstruct the residual block of video data from the DCT-II coefficients with little if any loss in data. Several coding standards, such as the H.264 video coding standard, adopted an orthogonal implementation of the 4×4 DCT.
  • While orthogonality is generally desired in theory, the video, audio or general coding pipeline in practice involves a number of additional steps (such as scaling or quantization) that introduce so-called “noise” that in most respects effectively prevents the accurate reconstruction of the values provided by orthogonal 4×4 DCT-II implementations. As a result, the orthogonal property may be relaxed to produce a near-orthogonal implementation (which, technically speaking, is non-orthogonal). Considering integer-arithmetic implementations, such near-orthogonal transforms may improve coding efficiency while also reducing implementation complexity compared to strictly orthogonal integer transforms. In general, relaxing the orthogonal property introduces noise into the system, but may improve coding gain while also reducing implementation complexity.
  • To illustrate the near-orthogonal implementation of the 4×4 DCT-II, consider an apparatus that includes a control unit, as one example. The control unit implements the near-orthogonal 4×4 DCT-II in accordance with the techniques described in this disclosure. This near-orthogonal 4×4 DCT-II implementation also includes an odd portion and an even portion. The odd portion in this instance applies first and second internal factors (C, S) that are related to a scaled factor (ξ) by the following equation (2):
  • ξ = (C + S) / (ω + ψ).   (2)
  • In equation (2), variables ω and ψ denote original (irrational) internal transform factors; for example, (ω) may be the cosine of three times the constant pi (π) divided by eight, and (ψ) may be the sine of three times the constant pi (π) divided by eight. Variables (C) and (S) denote integer (or dyadic rational) internal transform factors used in place of (ω) and (ψ).
    • Equation (2) indicates that the scaled factor (ξ) equals the sum of the first internal factor (C) and the second internal factor (S), divided by the sum of (ω) and (ψ). This equation may identify particular internal factor values of C and S similar to the above relationship defined with respect to the orthogonal implementation, but results in a different external factor. However, the different external factor does not typically increase implementation complexity for the reasons noted above, and generally provides a more accurate approximation of the original transform factors. It may also provide improved coding gain over conventional 4×4 DCT-II implementations and even, in some instances, over the orthogonal 4×4 DCT-II implementation described above. Consequently, the control unit applies this near-orthogonal 4×4 DCT-II to media data to transform the media data from a spatial domain to a frequency domain with the result of potentially improved coding gain.
  • The above resulting 4×4 DCTs-II implementations constructed in accordance with the techniques described in this disclosure represent scaled 4×4 DCT-II implementations as opposed to straight 4×4 DCT-II implementations. The implementations are “scaled” in that they have undergone factorization to remove internal factors and therefore output scaled coefficients that require additional external factors be applied to correctly calculate the 4×4 DCT. So-called “straight” DCT-II implementations output coefficients that do not require any further operations, such as multiplication by external factors, to correctly calculate the 4×4 DCT.
  • There are a number of different factorizations capable of producing scaled 4×4 DCT-II implementations. One alternative factorization produces a different scaled 4×4 DCT-II implementation from which another relationship can be derived in accordance with the techniques of this disclosure to produce a near-orthogonal implementation that improves coding gain over conventional DCTs-II commonly employed by video encoders that comply with H.264.
  • To illustrate the near-orthogonal implementation with respect to the alternative factorization to produce a scaled 4×4 DCT-II, consider an apparatus that includes a control unit, as one example. The control unit implements the near-orthogonal 4×4 DCT-II in accordance with the techniques described in this disclosure. This near-orthogonal 4×4 DCT-II implementation includes an odd portion and an even portion similar to the implementations described above. The odd portion in this instance applies first and second internal factors (A, B) that are related to a scaled factor (ξ) by the following equation (3):
  • ξ = (A + B) / (1 + 1/√2).   (3)
  • Equation (3) indicates that the scaled factor (ξ) equals the sum of the first internal factor (A) and the second internal factor (B), divided by one plus one divided by the square root of two. This equation may identify particular values of 7 and 5 for internal factors A and B, respectively. This resulting near-orthogonal 4×4 DCT-II implementation constructed using the alternative factorization and with the above noted internal factors may more accurately represent the irrational internal factors of a straight 4×4 DCT-II than conventional H.264 4×4 DCT-II implementations and thereby provide improved coding gain over conventional 4×4 DCT-II implementations. Consequently, the control unit applies this near-orthogonal 4×4 DCT-II to media data to transform the media data from a spatial domain to a frequency domain with the result of potentially improved coding gain.
  • FIG. 2 is a block diagram illustrating video encoder 20 of FIG. 1 in more detail. Video encoder 20 may be formed at least in part as one or more integrated circuit devices, which may be referred to collectively as an integrated circuit device. In some aspects, video encoder 20 may form part of a wireless communication device handset or broadcast server. Video encoder 20 may perform intra- and inter-coding of blocks within video frames. Intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame. Inter-coding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames of a video sequence. For inter-coding, video encoder 20 performs motion estimation to track the movement of matching video blocks between adjacent frames.
  • As shown in FIG. 2, video encoder 20 receives a current video block 30 within a video frame to be encoded. In the example of FIG. 2, video encoder 20 includes motion estimation unit 32, reference frame store 34, motion compensation unit 36, block transform unit 38, quantization unit 40, inverse quantization unit 42, inverse transform unit 44 and entropy coding unit 46. An in-loop or post loop deblocking filter (not shown) may be applied to filter blocks to remove blocking artifacts. Video encoder 20 also includes summer 48 and summer 50. FIG. 2 illustrates the temporal prediction components of video encoder 20 for inter-coding of video blocks. Although not shown in FIG. 2 for ease of illustration, video encoder 20 also may include spatial prediction components for intra-coding of some video blocks.
  • Motion estimation unit 32 compares video block 30 to blocks in one or more adjacent video frames to generate one or more motion vectors. The adjacent frame or frames may be retrieved from reference frame store 34, which may comprise any type of memory or data storage device to store video blocks reconstructed from previously encoded blocks. Motion estimation may be performed for blocks of variable sizes, e.g., 16×16, 16×8, 8×16, 8×8 or smaller block sizes. Motion estimation unit 32 identifies one or more blocks in adjacent frames that most closely match the current video block 30, e.g., based on a rate distortion model, and determines displacement between the blocks in adjacent frames and the current video block. On this basis, motion estimation unit 32 produces one or more motion vectors (MV) that indicate the magnitude and trajectory of the displacement between current video block 30 and one or more matching blocks from the reference frames used to code current video block 30. The matching block or blocks will serve as predictive (or prediction) blocks for inter-coding of the block to be coded.
  • Motion vectors may have half- or quarter-pixel precision, or even finer precision, allowing video encoder 20 to track motion with higher precision than integer pixel locations and obtain a better prediction block. When motion vectors with fractional pixel values are used, interpolation operations are carried out in motion compensation unit 36. Motion estimation unit 32 identifies the best block partitions and motion vector or motion vectors for a video block using certain criteria, such as a rate-distortion model. For example, there may be more than one motion vector in the case of bi-directional prediction. Using the resulting block partitions and motion vectors, motion compensation unit 36 forms a prediction video block.
  • Video encoder 20 forms a residual video block by subtracting the prediction video block produced by motion compensation unit 36 from the original, current video block 30 at summer 48. Block transform unit 38 applies a transform producing residual transform block coefficients. As shown in FIG. 2, block transform unit 38 includes a 4×4 DCT-II unit 52 that implements a 4×4 DCT-II constructed in accordance with the techniques described in this disclosure. 4×4 DCT-II unit 52 represents a hardware module, which in some instances executes software (such as a digital signal processor or DSP executing software code or instructions), that implements a 4×4 DCT-II having internal factors defined by one of the three relationships identified above. Block transform unit 38 applies scaled 4×4 DCT-II unit 52 to the residual block to produce a 4×4 block of residual transform coefficients. 4×4 DCT-II unit 52 generally transforms the residual block from the spatial domain, which is represented as residual pixel data, to the frequency domain, which is represented as DCT coefficients. The transform coefficients may comprise DCT coefficients that include at least one DC coefficient and one or more AC coefficients.
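  • As a rough sketch of how a scaled 4×4 DCT-II might be applied to a residual block in matrix form, the following Python example assumes the integer transform takes the row form [1, 1, 1, 1], [S, C, −C, −S], [1, −1, −1, 1], [C, −S, S, −C] implied by the discussion above (the H.264 case corresponds to C=1, S=2). The function names are hypothetical, and an actual unit such as 4×4 DCT-II unit 52 would typically use a butterfly structure rather than explicit matrix multiplies:

```python
import numpy as np

def scaled_dct2_4x4_matrix(C: int, S: int) -> np.ndarray:
    """Assumed form of the scaled (integer) 4x4 DCT-II matrix with internal factors C, S."""
    return np.array([
        [1,  1,  1,  1],   # even portion -> X0
        [S,  C, -C, -S],   # odd portion  -> X1
        [1, -1, -1,  1],   # even portion -> X2
        [C, -S,  S, -C],   # odd portion  -> X3
    ], dtype=np.int64)

def forward_transform(residual_4x4: np.ndarray, C: int = 2, S: int = 5) -> np.ndarray:
    """Apply the scaled transform separably (rows, then columns).
    The output is 'scaled'; the removed external factors are folded into quantization."""
    T = scaled_dct2_4x4_matrix(C, S)
    return T @ residual_4x4 @ T.T

residual = np.arange(16).reshape(4, 4) - 8   # toy residual block
coeffs = forward_transform(residual)
print(coeffs)
```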
  • Quantization unit 40 quantizes (e.g., rounds) the residual transform block coefficients to further reduce bit rate. As mentioned above, quantization unit 40 accounts for the scaled nature of scaled 4×4 DCT-II unit 52 by incorporating internal factors removed during factorization. That is, quantization unit 40 incorporates the external factor shown below with respect to implementations 70A-70C of FIGS. 4A-4C. As quantization typically involves multiplication, incorporating these factors into quantization unit 40 may not increase the implementation complexity of quantization unit 40. In this respect, removing the factors from scaled 4×4 DCT-II unit 52 decreases the implementation complexity of DCT-II unit 52 without increasing the implementation complexity of quantization unit 40, resulting in a net decrease of implementation complexity with respect to video encoder 20.
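  • To illustrate how the removed external factors might be folded into quantization, the following Python sketch builds on the hypothetical forward_transform() example above and assumes the per-output external factors described below with respect to FIG. 4A (1/2 for the even outputs and 1/(√2·√(C² + S²)) for the odd outputs). The rounding and quantization step are purely illustrative and do not reproduce the H.264 quantizer:

```python
import numpy as np

def external_factors(C: int, S: int) -> np.ndarray:
    """Per-output external (scale) factors removed from the scaled transform:
    1/2 for the even outputs X0, X2 and 1/(sqrt(2)*sqrt(C^2 + S^2)) for X1, X3."""
    xi = np.sqrt(C * C + S * S)
    odd = 1.0 / (np.sqrt(2.0) * xi)
    return np.array([0.5, odd, 0.5, odd])

def quantize(scaled_coeffs: np.ndarray, C: int, S: int, qstep: float) -> np.ndarray:
    """Fold the 2-D external factors (outer product of the row and column factors)
    into the quantization scaling, so the transform itself stays multiplier-free."""
    d = external_factors(C, S)
    scale = np.outer(d, d) / qstep      # combined descaling + quantization factor
    return np.round(scaled_coeffs * scale).astype(int)

# usage with the forward_transform() sketch above (C=2, S=5, illustrative qstep):
# q = quantize(coeffs, C=2, S=5, qstep=1.0)
```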
  • Entropy coding unit 46 entropy codes the quantized coefficients to even further reduce bit rate. Entropy coding unit 46 performs a statistical lossless coding, referred to in some instances as entropy coding. Entropy coding unit 46 models a probability distribution of quantized DCT coefficients and selects a codebook (e.g., CAVLC or CABAC) based on the modeled probability distribution. Using this codebook, entropy coding unit 46 selects codes for each quantized DCT coefficient in a manner that compresses quantized DCT coefficients. To illustrate, entropy coding unit 46 may select a short codeword (in terms of bits) for frequently occurring quantized DCT coefficients and a longer codeword (in terms of bits) for less frequently occurring quantized DCT coefficients. So long as the short codewords use fewer bits than the quantized DCT coefficients, on average entropy coding unit 46 compresses the quantized DCT coefficients. Entropy coding unit 46 outputs the entropy coded coefficients as a bitstream which is sent to video decoder 26. In general, video decoder 26 performs inverse operations to decode and reconstruct the encoded video from the bitstream, as will be described with reference to the example of FIG. 3.
  • Inverse quantization unit 42 and inverse transform unit 44 reconstruct the quantized coefficients and apply inverse transformation, respectively, to reconstruct the residual block. Summation unit 50 adds the reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 36 to produce a reconstructed video block for storage in reference frame store 34. The reconstructed video block is used by motion estimation unit 32 and motion compensation unit 36 to encode a block in a subsequent video frame.
  • FIG. 3 is a block diagram illustrating an example of video decoder 26 of FIG. 1 in more detail. Video decoder 26 may be formed at least in part as one or more integrated circuit devices, which may be referred to collectively as an integrated circuit device. In some aspects, video decoder 26 may form part of a wireless communication device handset. Video decoder 26 may perform intra- and inter-decoding of blocks within video frames. As shown in FIG. 3, video decoder 26 receives an encoded video bitstream that has been encoded by video encoder 20. In the example of FIG. 3, video decoder 26 includes entropy decoding unit 54, motion compensation unit 56, reconstruction unit 58, inverse transform unit 60, and reference frame store 62. Entropy decoding unit 54 may access one or more data structures stored in a memory 64 to obtain data useful in coding. Video decoder 26 also may include an in-loop deblocking filter (not shown) that filters the output of summer 66. Video decoder 26 also includes summer 66. FIG. 3 illustrates the temporal prediction components of video decoder 26 for inter-decoding of video blocks. Although not shown in FIG. 3, video decoder 26 also may include spatial prediction components for intra-decoding of some video blocks.
  • Entropy decoding unit 54 receives the encoded video bitstream and decodes from the bitstream quantized residual coefficients and quantized parameters, as well as other information, such as macroblock coding mode and motion information, which may include motion vectors and block partitions. Motion compensation unit 56 receives the motion vectors and block partitions and one or more reconstructed reference frames from reference frame store 62 to produce a prediction video block.
  • Reconstruction unit 58 inverse quantizes, i.e., de-quantizes, the quantized block coefficients. Inverse transform unit 60 applies an inverse transform, e.g., an inverse DCT, to the coefficients to produce residual blocks. More specifically, inverse transform unit 60 includes a scaled 4×4 DCT-III unit 68, which inverse transform unit 60 applies to the coefficients to produce residual blocks. Scaled 4×4 DCT-III unit 68, which is the inverse of scaled 4×4 DCT-II unit 52 shown in FIG. 2, may transform the coefficients from the frequency domain to the spatial domain to produce the residual blocks. Similar to quantization unit 40 above, reconstruction unit 58 accounts for the scaled nature of 4×4 DCT-III unit 68 by incorporating the external factors removed during factorization into the reconstruction process with little if any increase in implementation complexity. Removing factors from scaled 4×4 DCT-III unit 68 may reduce implementation complexity, thereby resulting in a net decrease of complexity for video decoder 26.
  • The prediction video blocks are then summed by summer 66 with the residual blocks to form decoded blocks. A deblocking filter (not shown) may be applied to filter the decoded blocks to remove blocking artifacts. The filtered blocks are then placed in reference frame store 62, which provides reference frames for decoding of subsequent video frames and also produces decoded video to drive display device 28 (FIG. 1).
  • FIGS. 4A-4C are diagrams that each illustrate an implementation of a scaled 4×4 DCT-II constructed in accordance with the techniques of this disclosure. FIG. 4A is a diagram that illustrates a scaled orthogonal 4×4 DCT-II implementation 70A constructed in accordance with the techniques of this disclosure. FIG. 4B is a diagram that illustrates a scaled near-orthogonal 4×4 DCT-II implementation 70B constructed in accordance with the techniques of this disclosure. FIG. 4C is a diagram that illustrates a scaled near-orthogonal 4×4 DCT-II alternative implementation 70C constructed in accordance with the techniques of this disclosure. 4×4 DCT unit 52 shown in the example of FIG. 2 may incorporate one or more of these implementations 70A-70C.
  • Referring first to the example of FIG. 4A, 4×4 DCT-II implementation 70A includes a butterfly unit 72, an even portion 74A and an odd portion 74B. Butterfly unit 72 may represent hardware or a combination of hardware and software for routing or otherwise forwarding inputs x0, . . . , x3 to the proper even and odd portions 74A, 74B (“portions 74”). Butterfly unit 72 usually combines the results of smaller DCTs, such as 2×2 DCT-II implementations, which in this case may be represented by even and odd portions 74, respectively. Even portion 74A is a 2×2 portion of 4×4 DCT-II implementation 70A that outputs even DCT coefficients X0 and X2. Notably, these even coefficients X0 and X2 are multiplied by an external factor of a half (½), which can be and usually is applied by quantization unit 40.
  • Odd portion 74B is a 2×2 portion of 4×4 DCT-II implementation 70A that outputs odd DCT coefficients X1 and X3. Odd portion 74B includes two internal factors denoted C and S, which are related to an external factor applied to odd coefficients X1 and X3 by the above noted equation (1), which is defined in accordance with the techniques of this disclosure. The additional external factor of one divided by the square root of two (1/√2) is multiplied by one divided by the relationship noted in equation (1) above to result in the external factor shown with respect to odd coefficients X1 and X3.
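  • As a concrete sketch of one possible 1-D dataflow consistent with this description of implementation 70A, the following Python example uses a butterfly stage followed by 2-point even and odd portions, with the internal factors C and S confined to the odd portion and the external factors deferred (to quantization, as noted above). The exact dataflow of FIG. 4A may differ; this is a sketch under the stated assumptions, not the figure itself:

```python
def scaled_dct2_1d(x0: int, x1: int, x2: int, x3: int, C: int = 2, S: int = 5):
    """One possible 1-D flow: butterfly first, then 2-point even and odd portions.
    Outputs are scaled; the external factors 1/2 and 1/(sqrt(2)*xi) are applied later."""
    # butterfly: combine inputs for the even and odd sub-transforms
    a, b = x0 + x3, x1 + x2          # feed the even portion
    c, d = x1 - x2, x0 - x3          # feed the odd portion
    # even portion -> X0, X2
    X0, X2 = a + b, a - b
    # odd portion with internal factors C, S -> X1, X3
    X1, X3 = S * d + C * c, C * d - S * c
    return X0, X1, X2, X3
```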
  • The relationship noted in equation (1) can be derived by first considering the orthogonal property, which is set forth mathematically by the following equation (4):

  • CᵀC = I.   (4)
  • The variable C in this instance refers to any matrix, while Cᵀ denotes the transpose of the matrix C. The variable I denotes an identity matrix. Thus, a matrix exhibits the orthogonal property if the transpose of the matrix times the matrix itself equals the identity matrix.
  • Assuming a scaled matrix, which is preferred in media coding implementations for the reasons noted above, the matrix C can be split into an integer scaled transform denoted C′ and a diagonal matrix of scale factors or external factors D, as noted in the following equation (5):

  • C=C′D.   (5)
  • Substituting C′D from equation (5) for C in equation (4) results in the following equation (6):

  • (C′D)ᵀ(C′D) = DC′ᵀC′D = I,   (6)
  • which can be simplified to the mathematical equation shown in the following equation (7):

  • C′ᵀC′ = D⁻².   (7)
  • Equation (7) provides a mechanism for choosing scaling factors such that the resulting integer transform remains orthogonal.
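  • As a numerical check of equations (1) and (7), the following Python sketch assumes the row form of the integer transform used in the earlier sketches (with the basis vectors stored in rows, the condition of equation (7) appears as C′C′ᵀ = D⁻²) and verifies that the rescaled transform D·C′ is orthonormal for the candidate values C=2, S=5:

```python
import numpy as np

C, S = 2, 5                                    # candidate internal factors
Cp = np.array([[1,  1,  1,  1],
               [S,  C, -C, -S],
               [1, -1, -1,  1],
               [C, -S,  S, -C]], dtype=float)  # assumed integer scaled transform (basis in rows)

xi = np.sqrt(C**2 + S**2)                      # scaled factor from equation (1)
D = np.diag([0.5, 1/(np.sqrt(2)*xi), 0.5, 1/(np.sqrt(2)*xi)])  # diagonal of external factors

# With the basis vectors in rows, equation (7) reads Cp @ Cp.T == D^-2 ...
print(np.round(Cp @ Cp.T, 6))                          # diagonal: [4, 2*(C^2+S^2), 4, 2*(C^2+S^2)]
print(np.allclose(Cp @ Cp.T, np.linalg.inv(D @ D)))    # True
# ... so the rescaled transform D @ Cp is orthonormal (its inverse is its transpose).
print(np.allclose((D @ Cp) @ (D @ Cp).T, np.eye(4)))   # True
```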
  • For example, in the case of 4×4 DCT-II implementations, this DCT-II usually only applies approximations of factors representative of the cosine of three times the constant pi divided by eight and the sine of three times the constant pi divided by eight. Assuming that these two factors are to be replaced by integers C and S, which are coefficients of the matrix C′, and using the above orthogonality condition, equation (1) above denotes the normalization factor, such that the task of designing an orthogonal approximation of 4×4 DCT-II may be limited to finding pairs of integers (C, S), such that the following equations (8) and (9) are satisfied:
  • C/√(C² + S²) ≈ cos(3π/8), and   (8)
  • S/√(C² + S²) ≈ sin(3π/8).   (9)
  • Under these assumptions, the integer scaled transform shown as 4×4 DCT-II implementation 70A results.
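  • The approximation errors tabulated in Table 1 below follow directly from equations (1), (8) and (9). The following Python sketch (the helper name is hypothetical) reproduces the error columns for the listed (C, S) pairs:

```python
import math

def table1_row(C: int, S: int):
    """Approximation errors of equations (8) and (9) for a candidate (C, S) pair."""
    xi = math.sqrt(C * C + S * S)          # scaled factor from equation (1)
    err_cos = math.cos(3 * math.pi / 8) - C / xi
    err_sin = math.sin(3 * math.pi / 8) - S / xi
    return xi, err_cos, err_sin

for C, S in [(1, 2), (2, 5), (3, 7), (5, 12), (17, 41)]:
    print((C, S), table1_row(C, S))
```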
  • The following Table 1 illustrates various values selected for the integers of C and S and the resulting approximation errors in comparison to the 4×4 DCT-II implementation adopted in the H.264 video coding standard.
  • TABLE 1

    | C  | S  | √(C² + S²) | cos(3π/8) − C/√(C² + S²) | sin(3π/8) − S/√(C² + S²) | Complexity (×C, ×S) | Comments              |
    |----|----|------------|---------------------------|---------------------------|----------------------|------------------------|
    | 1  | 2  | √5         | −0.0645302                | 0.0294523                 | 1 shift              | adopted in H.264       |
    | 2  | 5  | √29        | 0.0112928                 | −0.00459716               | 1 add + 2 shifts     | +2 bits in dyn. range  |
    | 3  | 7  | √58        | −0.0112359                | 0.0047345                 | 2 adds + 1 shift     | +3 bits in dyn. range  |
    | 5  | 12 | 13         | −0.00193195               | 0.000802609               | 2 adds + 2 shifts    | +4 bits in dyn. range  |
    | 17 | 41 | √1970      | −0.00033159               | 0.000137419               | 3 adds + 2 shifts    | +5 bits in dyn. range  |

    Notably, when the variables C and S are set to 2 and 5, respectively, the complexity of the resulting implementation 70A increases, but there is much less error in the approximations of the cosine of three times the constant pi divided by eight and the sine of three times the constant pi divided by eight, which promotes coding gain. The added complexity involves only an additional addition and shift when compared to the base H.264 implementation, and does not involve any multiplications, which are expensive in terms of operations. Consequently, the techniques described in this disclosure promote increased coding gain with only minor increases in complexity, with implementation 70A incorporating values of 2 and 5 for variables C and S, respectively, potentially providing the best coding gain with minimal increases to implementation complexity.
  • While described above with respect to a DCT of type II, implementation 70A shown in the example of FIG. 4A may also represent a DCT of type III or inverse DCT implementation. Forming an inverse DCT from implementation 70A involves reversing the inputs and the outputs such that inputs are received by the implementation on the right of FIG. 4A and outputs are output at the left of the implementation. Inputs are then processed by even and odd portions 74 first and then by butterfly 72 before being output on the left. For ease of illustration purposes, this IDCT implementation that is inverse to implementation 70A is not shown in a separate figure considering that such an implementation may be described as a mirror image of implementation 70A.
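  • As a sketch of the mirrored (inverse) flow just described, the following Python example assumes the same internal factors C and S and assumes that the external factors have already been folded into dequantization, so the function only applies the transposed integer flow: the even and odd 2-point portions first, then the butterfly:

```python
def scaled_dct3_1d(X0: float, X1: float, X2: float, X3: float, C: int = 2, S: int = 5):
    """Mirror-image (inverse) 1-D flow: even and odd portions first, then the butterfly.
    Assumes the external factors were already applied (e.g., during dequantization)."""
    # even portion
    e0, e1 = X0 + X2, X0 - X2
    # odd portion with the same internal factors C, S
    o0, o1 = S * X1 + C * X3, C * X1 - S * X3
    # butterfly recombines into spatial-domain samples
    x0, x3 = e0 + o0, e0 - o0
    x1, x2 = e1 + o1, e1 - o1
    return x0, x1, x2, x3
```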
  • FIG. 4B is a diagram that illustrates a scaled near-orthogonal 4×4 DCT-II implementation 70B constructed in accordance with the techniques of this disclosure. 4×4 DCT-II implementation 70B includes a butterfly unit 76, which is similar to butterfly unit 72 of FIG. 4A, and even and odd portions 78A, 78B (“portions 78”). Even portion 78A is similar to even portion 74A. Odd portion 78B is also similar to odd portion 74B except that the orthogonality condition has been relaxed, leading to a different relationship, i.e., the relationship denoted above with respect to equation (2), between internal factors C, S and scaled factor ξ.
  • To derive example implementation 70B of FIG. 4B in accordance with the relationship denoted by equation (2), first consider that, while orthogonality generally ensures a straightforward inverse implementation of 4×4 DCT-II in theory, in practice most scale factors (following integer transforms) become irrational numbers, which are hard to implement precisely using an integer multiplier. Moreover, quantization generally follows application of 4×4 DCT transforms and this quantization adds noise which may prevent a straightforward application of the inverse orthogonal DCT-II implementation. In addition, considering integer-arithmetic implementations, such near-orthogonal transforms may improve coding efficiency while also reducing implementation complexity compared to strictly orthogonal integer transforms. Consequently, permitting a degree of such orthogonality mismatch between the straight and inverse implementations may actually improve coding gain.
  • To characterize the degree of mismatch, a norm of distance from the identity matrix is defined in accordance with the following equation (10):

  • ‖CᵀC − I‖.   (10)
  • Using the same notation as that above with respect to equation (4), equation (10) simply indicates that a norm of the distance from the identity matrix can be defined as the transpose of the matrix times the matrix, minus the identity matrix. Assuming that CᵀC remains diagonal, the average absolute distance can be computed in accordance with the following equation (11):
  • δN = (1/N)·tr(CᵀC − I),   (11)
  • where the average absolute distance is denoted by the variable δN and N equals the size of the matrix.
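  • The following Python sketch computes the mismatch quantities of equations (10) and (11) for the full rescaled transform, assuming the row-form integer transform used in the earlier sketches and the external-factor diagonal of 1/2 and 1/(√2·ξ); the choice of the Frobenius norm for equation (10) is an assumption, since the disclosure does not fix a particular norm:

```python
import numpy as np

def mismatch_metrics(C: int, S: int, xi: float):
    """Equations (10) and (11): distance of T T^T from the identity for the full
    rescaled transform T = D * C', given a scaled factor xi (row-basis convention)."""
    Cp = np.array([[1,  1,  1,  1],
                   [S,  C, -C, -S],
                   [1, -1, -1,  1],
                   [C, -S,  S, -C]], dtype=float)
    D = np.diag([0.5, 1/(np.sqrt(2)*xi), 0.5, 1/(np.sqrt(2)*xi)])
    T = D @ Cp
    M = T @ T.T - np.eye(4)            # equation (10) takes a norm of this matrix
    delta_N = np.trace(M) / 4.0        # equation (11), with N = 4
    return np.linalg.norm(M), delta_N  # Frobenius norm assumed

# near-orthogonal choice: xi from equation (2)/(12) instead of sqrt(C^2 + S^2)
w, p = np.cos(3*np.pi/8), np.sin(3*np.pi/8)
print(mismatch_metrics(2, 5, xi=(2 + 5)/(w + p)))
print(mismatch_metrics(2, 5, xi=np.sqrt(2**2 + 5**2)))   # orthogonal choice: ~0
```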
  • By relaxing the orthogonality property, coding gain may improve, but analysis of coding gain with respect to the average absolute distance is too dependent on a particular model or statistics of the image undergoing compression. Consequently, the extent to which to relax the orthogonality property may be determined through analysis of a different metric related to finding integer transforms that best match the basis functions of the DCT-II. More information regarding this form of evaluation can be found in an article authored by Y. A. Reznik, A. T. Hinds, and J. L. Mitchell, entitled “Improved Precision of Fixed-Point Algorithms by Means of Common Factors,” Proc. ICIP 2008, San Diego, Calif., the entire contents of which are incorporated by reference as if fully set forth herein.
  • From this incorporated reference, one technique for producing a best-matching design is referred to as a “common-factor-based approximation.” Using this technique, the following equation (12) can be derived:
  • ξ = (C + S) / (cos(3π/8) + sin(3π/8)),   (12)
  • such that the following equations (13) and (14) may be derived:
  • C/ξ ≈ cos(3π/8), and   (13)
  • S/ξ ≈ sin(3π/8).   (14)
  • Equation (12) ensures that, for the scaled factor ξ, the errors of the corresponding approximations for C and S are equal in magnitude but opposite in sign. Under these assumptions, the integer scaled transform shown as 4×4 DCT-II implementation 70B results.
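  • The following Python sketch (the helper name is hypothetical) evaluates equations (12) through (14), together with the (C² + S²)/ξ² − 1 mismatch term, and reproduces the approximation-error columns of Table 2 below:

```python
import math

def table2_row(C: int, S: int):
    """Common-factor approximation of equations (12)-(14) and the orthogonality
    mismatch term (C^2 + S^2)/xi^2 - 1 reported in Table 2."""
    w = math.cos(3 * math.pi / 8)
    p = math.sin(3 * math.pi / 8)
    xi = (C + S) / (w + p)                    # equation (12)
    return xi, w - C / xi, p - S / xi, (C*C + S*S) / (xi*xi) - 1

for C, S in [(1, 2), (2, 5), (5, 12)]:
    print((C, S), table2_row(C, S))
```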
  • The following Table 2 illustrates various values selected for the integers of C and S and the resulting approximation errors.
  • TABLE 2

    | C | S  | ξ = (C + S)/(cos(3π/8) + sin(3π/8)) | cos(3π/8) − C/ξ | sin(3π/8) − S/ξ | (C² + S²)/ξ² − 1 | Bit-depth expansion due to factors C, S |
    |---|----|--------------------------------------|------------------|------------------|-------------------|------------------------------------------|
    | 1 | 2  | 2.296100594                          | −0.0528375558    | 0.0528375558     | −0.0516073433     | 1 bit                                    |
    | 2 | 5  | 5.357568053                          | 0.0093797282     | −0.0093797282    | 0.010328504       | 2 bits                                   |
    | 5 | 12 | 13.01123670                          | −0.0015997926    | 0.0015997926     | −0.0017264839     | 4 bits                                   |

    Considering Table 2 in more detail, when the variables C and S are set to 2 and 5, respectively, the approximation errors are reduced. The third error metric ((C² + S²)/ξ² − 1) shown in Table 2 above is essentially a subset of the orthogonality mismatch metric δN discussed above with respect to equation (11), where this mismatch metric describes the values appearing at the odd positions along the diagonal of CᵀC − I. Notably, more precise integer approximations to the DCT-II basis functions are also generally closer to being orthogonal. While such integer approximations are generally closer to being orthogonal, DCT-II implementation 70B with C and S set to values of 1 and 2, respectively, provides possibly the most return of those listed in terms of coding gain, as shown below with respect to FIG. 7B.
  • While described above with respect to a DCT of type II, implementation 70B shown in the example of FIG. 4B may also represent a DCT of type III or inverse DCT implementation. Forming an inverse DCT from implementation 70B involves reversing the inputs and the outputs such that inputs are received by the implementation on the right of FIG. 4B and outputs are output at the left of the implementation. Inputs are then processed by even and odd portions 78 first and then by butterfly 76 before being output on the left. For ease of illustration purposes, this IDCT implementation that is inverse to implementation 70B is not shown in a separate figure considering that such an implementation may be described as a mirror image of implementation 70B.
  • FIG. 4C is a diagram that illustrates another exemplary scaled near-orthogonal 4×4 DCT-II implementation 70C constructed in accordance with the techniques of this disclosure that results from an alternative factorization. 4×4 DCT-II implementation 70C includes a butterfly unit 80, which is similar to butterfly unit 72 of FIG. 4A and butterfly unit 76 of FIG. 4B, and even and odd portions 82A, 82B (“portions 82”). Even portion 82A is similar to even portion 78A. Odd portion 82B is similar to odd portion 78B in that the orthogonality condition has been relaxed, but as a result of the alternative factorization, a different relationship, i.e., the relationship denoted above with respect to equation (3), between internal factors A, B and scaled factor ξ results. More information regarding the alternative factorization can be found in an article authored by Y. A. Reznik, and R. C. Chivukula, entitled “On Design of Transforms for High-Resolution/High-Performance Video Coding,” MPEG input document M16438, presented at MPEG's 88th meeting, in Maui, Hi., in April 2009, the entire contents of which are hereby incorporated by reference as if fully set forth herein.
  • Notably, different scale factors are applied to odd coefficients X1 and X3, and there is only one irrational factor to approximate in 4×4 DCT-II implementation 70C. To remain orthogonal, the internal factor B usually must be set to one divided by the square root of two and A must be set to one. Consequently, changing the values of internal factors A, B from these values leads to a non-orthogonal implementation. To evaluate various values of these internal factors, the common-factor approximation technique noted above with respect to FIG. 4B is employed. Using this technique, the following equation (15) is determined so that two integer values can be selected for internal factors A, B to derive the parameter ξ:
  • ξ = (A + B) / (1 + 1/√2),   (15)
  • such that the following equations (16) and (17) are satisfied:

  • A/ξ ≈ 1, and   (16)

  • B/ξ ≈ 1/√2.   (17)
  • The above equation (15) ensures that the errors of the corresponding approximations are balanced in magnitude but opposite in sign. Under these assumptions, the integer scaled transform shown as 4×4 DCT-II implementation 70C results.
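  • The following Python sketch (the helper name is hypothetical) evaluates equations (15) through (17) and reproduces the approximation-error columns of Table 3 below:

```python
import math

def table3_row(A: int, B: int):
    """Common-factor approximation of equations (15)-(17) for the alternative factorization."""
    xi = (A + B) / (1 + 1 / math.sqrt(2))     # equation (15)
    return xi, 1 - A / xi, 1 / math.sqrt(2) - B / xi

for A, B in [(3, 2), (7, 5), (41, 29)]:
    print((A, B), table3_row(A, B))
```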
  • The following Table 3 illustrates various values selected for the integers A and B and the resulting approximation errors.
  • TABLE 3

    | A  | B  | ξ = (A + B)/(1 + 1/√2) | 1 − A/ξ        | 1/√2 − B/ξ     | ‖CᵀC − I‖    | Bit-depth expansion due to factors A, B |
    |----|----|-------------------------|----------------|----------------|---------------|------------------------------------------|
    | 3  | 2  | 2.928932188             | −0.0242640686  | 0.0242640686   | 0.067451660   | 2 bits                                   |
    | 7  | 5  | 7.029437252             | 0.0041877111   | −0.0041877111  | 0.011879709   | 3 bits                                   |
    | …  | …  | …                       | …              | …              | …             | …                                        |
    | 41 | 29 | 41.00505064             | 0.0001231711   | −0.0001231711  | 0.000348411   | 6 bits                                   |

    Considering Table 3 in more detail, when the variables A and B are set to 7 and 5, respectively, the approximation errors are reduced. 4×4 DCT-II implementation 70C with A and B set to values of 7 and 5, respectively, provides possibly the most return of those listed in terms of coding gain in comparison to complexity increase (which is not shown in Table 3), as shown below with respect to FIG. 7C.
  • While described above with respect to a DCT of type II, implementation 70C shown in the example of FIG. 4C may also represent a DCT of type III or inverse DCT implementation. Forming an inverse DCT from implementation 70C involves reversing the inputs and the outputs such that inputs are received by the implementation on the right of FIG. 4C and outputs are output at the left of the implementation. Inputs are then processed by even and odd portions 82 first and then by butterfly 80 before being output on the left. For ease of illustration purposes, this IDCT implementation that is inverse to implementation 70C is not shown in a separate figure considering that such an implementation may be described as a mirror image of implementation 70C.
  • FIG. 5 is a flow chart illustrating exemplary operation of a coding device, such as video encoder 20 of FIG. 2, in applying a 4×4 DCT implementation constructed in accordance with the techniques of this disclosure. Initially, video encoder 20 receives a current video block 30 within a video frame to be encoded (90). Motion estimation unit 32 performs motion estimation to compare video block 30 to blocks in one or more adjacent video frames to generate one or more motion vectors (92). The adjacent frame or frames may be retrieved from reference frame store 34. Motion estimation may be performed for blocks of variable sizes, e.g., 16×16, 16×8, 8×16, 8×8, 4×4 or smaller block sizes. Motion estimation unit 32 identifies one or more blocks in adjacent frames that most closely match the current video block 30, e.g., based on a rate distortion model, and determines displacement between the blocks in adjacent frames and the current video block. On this basis, motion estimation unit 32 produces one or more motion vectors (MV) that indicate the magnitude and trajectory of the displacement between current video block 30 and one or more matching blocks from the reference frames used to code current video block 30. The matching block or blocks will serve as predictive (or prediction) blocks for inter-coding of the block to be coded.
  • Motion vectors may have half- or quarter-pixel precision, or even finer precision, allowing video encoder 20 to track motion with higher precision than integer pixel locations and obtain a better prediction block. When motion vectors with fractional pixel values are used, interpolation operations are carried out in motion compensation unit 36. Motion estimation unit 32 identifies the best block partitions and motion vector or motion vectors for a video block using certain criteria, such as a rate-distortion model. For example, there may be more than one motion vector in the case of bi-directional prediction. Using the resulting block partitions and motion vectors, motion compensation unit 36 forms a prediction video block (94).
  • Video encoder 20 forms a residual video block by subtracting the prediction video block produced by motion compensation unit 36 from the original, current video block 30 at summer 48 (96). Block transform unit 38 applies a transform producing residual transform block coefficients. Block transform unit 38 includes a 4×4 DCT-II unit 52 generated in accordance with the techniques described in this disclosure. Block transform unit 38 applies scaled 4×4 DCT-II unit 52 to the residual block to produce a 4×4 block of residual transform coefficients. 4×4 DCT-II unit 52 generally transforms the residual block from the spatial domain, which is represented as residual pixel data, to the frequency domain, which is represented as DCT coefficients (98). The transform coefficients may comprise DCT coefficients that include at least one DC coefficient and one or more AC coefficients.
  • Quantization unit 40 quantizes (e.g., rounds) the residual transform block coefficients to further reduce bit rate (100). As mentioned above, quantization unit 40 accounts for the scaled nature of scaled 4×4 DCT-II unit 52 by incorporating internal factors removed during factorization. That is, quantization unit 40 incorporates the external factor noted above with respect to implementations 70A-70C of FIGS. 4A-4C. As quantization typically involves multiplication, incorporating these factors into quantization unit 40 may not increase the implementation complexity of quantization unit 40. In this respect, removing the factors from scaled 4×4 DCT-II unit 52 decreases the implementation complexity of DCT-II unit 52 without increasing the implementation complexity of quantization unit 40, resulting in a net decrease of implementation complexity with respect to video encoder 20.
  • Entropy coding unit 46 entropy codes the quantized coefficients to even further reduce bit rate. Entropy coding unit 46 performs statistical lossless coding, referred to in some instances as entropy coding, to generate a coded bitstream (102). Entropy coding unit 46 models a probability distribution of the quantized DCT coefficients and selects a codebook (e.g., CAVLC or CABAC) based on the modeled probability distribution. Using this codebook, entropy coding unit 46 selects codes for each quantized DCT coefficient in a manner that compresses the quantized DCT coefficients. Entropy coding unit 46 outputs the entropy coded coefficients as a coded bitstream, which is stored to a memory or storage device and/or sent to video decoder 26 (104).
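CAVLC and CABAC are far more elaborate than can be shown here. Purely as a flavor of variable-length entropy coding, the sketch below writes an unsigned exponential-Golomb codeword, a code family that H.264/AVC applies to many syntax elements (though not to the residual coefficients themselves). The bit-writer callback is hypothetical.

```c
/* Emit the unsigned exponential-Golomb codeword for v: m zero bits,
 * then the (m + 1) bits of (v + 1) most-significant bit first, where
 * m = floor(log2(v + 1)). put_bit is a hypothetical caller-supplied
 * bit writer. */
static void write_ue(unsigned v, void (*put_bit)(int bit))
{
    unsigned code = v + 1;
    int m = 0;
    for (unsigned t = code; t > 1; t >>= 1)
        m++;                               /* m = floor(log2(v + 1)) */
    for (int i = 0; i < m; i++)
        put_bit(0);                        /* leading zeros encode the length */
    for (int i = m; i >= 0; i--)
        put_bit((int)((code >> i) & 1u));  /* the value itself, MSB first */
}
```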
  • Reconstruction unit 42 and inverse transform unit 44 reconstruct quantized coefficients and apply inverse transformation, respectively, to reconstruct the residual block. Summation unit 50 adds the reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 36 to produce a reconstructed video block for storage in reference frame store 34. The reconstructed video block is used by motion estimation unit 32 and motion compensation unit 36 to encode a block in a subsequent video frame.
  • FIG. 6 is a flow chart illustrating example operation of a coding device, such as video decoder 26 of FIG. 3, in applying a 4×4 DCT-III implementation constructed in accordance with the techniques of this disclosure. Video decoder 26 receives an encoded video bitstream that has been encoded by video encoder 20. In particular, entropy decoding unit 54 receives the encoded video bitstream and decodes from the bitstream quantized residual coefficients and quantization parameters, as well as other information, such as macroblock coding mode and motion information, which may include motion vectors and block partitions (106, 108). Motion compensation unit 56 receives the motion vectors and block partitions, as well as one or more reconstructed reference frames from reference frame store 62, to produce a prediction video block (110).
  • Reconstruction unit 58 inverse quantizes, i.e., de-quantizes, the quantized block coefficients (112). Inverse transform unit 60 applies an inverse transform, e.g., an inverse DCT, to the coefficients to produce residual blocks. More specifically, inverse transform unit 60 includes a scaled 4×4 DCT-III unit 68, which inverse transform unit 60 applies to the coefficients to produce residual blocks (114). Scaled 4×4 DCT-III unit 68, which is the inverse of scaled 4×4 DCT-II unit 52 shown in FIG. 2, may transform the coefficients from the frequency domain to the spatial domain to produce the residual blocks. Similar to quantization unit 40 above, reconstruction unit 58 accounts for the scaled nature of 4×4 DCT-III unit 68 by incorporating the external factors removed during factorization into the reconstruction process with little if any increase in implementation complexity. Removing factors from scaled 4×4 DCT-III unit 68 may reduce implementation complexity, thereby resulting in a net decrease of complexity for video decoder 26.
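To make the inverse path concrete, the sketch below folds the external scale factor into de-quantization and then applies a textbook floating-point 4-point DCT-III (the inverse of an orthonormal DCT-II). It shows the data flow only; it is not the fixed-point scaled factorization implemented by units 58 and 68, and the parameter names are assumptions for this example.

```c
#include <math.h>

/* De-quantize one coefficient, folding the transform's external scale
 * factor xi into the effective step size so the inverse transform
 * itself never multiplies by that factor. */
static double dequantize_coeff(int level, double qstep, double xi)
{
    return level * qstep * xi;
}

/* Textbook 4-point DCT-III, the inverse of the orthonormal DCT-II:
 * x[n] = sqrt(2/4) * (X[0]/sqrt(2) + sum_{k=1..3} X[k]*cos(pi*(2n+1)*k/8)). */
static void idct4(const double X[4], double x[4])
{
    for (int n = 0; n < 4; n++) {
        double s = X[0] / sqrt(2.0);
        for (int k = 1; k < 4; k++)
            s += X[k] * cos(3.14159265358979323846 * (2 * n + 1) * k / 8.0);
        x[n] = s * sqrt(2.0 / 4.0);
    }
}
```

Applying idct4 to each column and then to each row of a de-quantized 4×4 block mirrors the separable row/column application described for the forward transform.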
  • The prediction video blocks are then summed by summer 66 with the residual blocks to form decoded blocks (116). A deblocking filter (not shown) may be applied to filter the decoded blocks to remove blocking artifacts. The filtered blocks are then placed in reference frame store 62, which provides reference frames for decoding of subsequent video frames and also produces decoded video to drive a display device, such as display device 28 of FIG. 1 (118).
  • FIGS. 7A-7C are diagrams illustrating graphs 120A-120C of peak signal-to-noise ratios with respect to bitrates for each of three different 4×4 DCT-II implementations, such as implementations 70A-70C of FIGS. 4A-4C, constructed in accordance with the techniques of this disclosure. FIG. 7A is a diagram illustrating graph 120A of peak signal-to-noise ratios (PSNR) with respect to bitrates for an orthogonal scaled 4×4 DCT-II implementation, such as implementation 70A of FIG. 4A, constructed in accordance with the techniques of this disclosure. According to the key of graph 120A, the solid line represents the standard 4×4 DCT-II implementation incorporated by the H.264 video coding standard. The dotted line represents a theoretical best DCT implementation capable of performing irrational multiplications and additions. The long dashed line represents orthogonal 4×4 DCT-II implementation 70A with internal factors C and S set to 2 and 5, respectively. The short dashed line represents orthogonal 4×4 DCT-II implementation 70A with internal factors C and S set to 3 and 7, respectively. The dashed-dotted line represents orthogonal 4×4 DCT-II implementation 70A with internal factors C and S set to 5 and 12, respectively. Notably, orthogonal 4×4 DCT-II implementation 70A with internal factors C and S set to 2 and 5 more accurately approximates the theoretical best DCT-II implementation than the H.264 implementation does. Moreover, the versions of orthogonal 4×4 DCT-II implementation 70A with internal factors C and S set to 3 and 7 or to 5 and 12 provide little gain in terms of PSNR over the version with internal factors C and S set to 2 and 5, despite involving more complex implementations.
  • FIG. 7B is a diagram illustrating graph 120B of peak signal-to-noise ratios (PSNR) with respect to bitrates for a near-orthogonal scaled 4×4 DCT-II implementation, such as implementation 70B of FIG. 4B, constructed in accordance with the techniques of this disclosure. According to the key of graph 120B, the solid line represents the standard orthogonal 4×4 DCT-II implementation incorporated by the H.264 video coding standard. The dotted line represents a theoretical best DCT implementation capable of performing irrational multiplications and additions. The short dashed line represents near-orthogonal 4×4 DCT-II implementation 70B with internal factors C and S set to 1 and 2, respectively. The long dashed line represents near-orthogonal 4×4 DCT-II implementation 70B with internal factors C and S set to 2 and 5, respectively. The dashed-dotted line represents near-orthogonal 4×4 DCT-II implementation 70B with internal factors C and S set to 5 and 12, respectively. Notably, near-orthogonal 4×4 DCT-II implementation 70B with internal factors C and S set to 2 and 5 offers little PSNR improvement over the H.264 implementation. However, near-orthogonal 4×4 DCT-II implementation 70B with internal factors C and S set to 1 and 2 provides a better PSNR than even the theoretical DCT implementation, while near-orthogonal 4×4 DCT-II implementation 70B with internal factors C and S set to 5 and 12 most accurately approximates the theoretical DCT implementation.
  • FIG. 7C is a diagram illustrating graph 120C of peak signal-to-noise ratios (PSNR) with respect to bitrates for a near-orthogonal scaled 4×4 DCT-II implementation derived from an alternative factorization, such as implementation 70C of FIG. 4C, constructed in accordance with the techniques of this disclosure. According to the key of graph 120C, the solid line represents the standard orthogonal 4×4 DCT-II implementation incorporated by the H.264 video coding standard. The dotted line represents a theoretical best DCT implementation capable of performing irrational multiplications and additions. The long dashed line represents near-orthogonal 4×4 DCT-II implementation 70C with internal factors B and A set to 2 and 3, respectively. The short dashed line represents near-orthogonal 4×4 DCT-II implementation 70C with internal factors B and A set to 5 and 7, respectively. The dashed-dotted line represents near-orthogonal 4×4 DCT-II implementation 70C with internal factors B and A set to 29 and 41, respectively. Notably, near-orthogonal 4×4 DCT-II implementation 70C with internal factors B and A set to 2 and 3 is worse in terms of PSNR than the H.264 implementation. However, near-orthogonal 4×4 DCT-II implementation 70C with internal factors B and A set to 5 and 7 provides a better PSNR than the H.264 implementation and accurately approximates the theoretical DCT implementation without requiring the complexity of near-orthogonal 4×4 DCT-II implementation 70C with internal factors B and A set to 29 and 41.
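For reference, the peak signal-to-noise ratio plotted in graphs such as 120A-120C is derived from the mean squared error between the original and reconstructed samples, PSNR = 10 * log10(255^2 / MSE) for 8-bit video. A minimal sketch follows; the function name and per-plane formulation are assumptions for this example.

```c
#include <math.h>

/* PSNR in dB between an original and a reconstructed 8-bit plane:
 * PSNR = 10 * log10(255^2 / MSE). Returns INFINITY for identical planes. */
static double psnr_8bit(const unsigned char *org, const unsigned char *rec,
                        int num_samples)
{
    double mse = 0.0;
    for (int i = 0; i < num_samples; i++) {
        double d = (double)org[i] - (double)rec[i];
        mse += d * d;
    }
    mse /= num_samples;
    return mse == 0.0 ? INFINITY : 10.0 * log10(255.0 * 255.0 / mse);
}
```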
  • The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless communication device handset such as a mobile phone, an integrated circuit (IC) or a set of ICs (i.e., a chip set). Any components, modules or units described have been provided to emphasize functional aspects and do not necessarily require realization by different hardware units. The techniques described herein may also be implemented in hardware, software, firmware, or any combination thereof. Any features described as modules, units or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. In some cases, various features may be implemented as an integrated circuit device, such as an integrated circuit chip or chipset.
  • If implemented in software, the techniques may be realized at least in part by a computer-readable medium comprising instructions that, when executed in a processor, perform one or more of the methods described above. The computer-readable medium may comprise a computer-readable storage medium that is a physical structure, and may form part of a computer program product, which may include packaging materials. The computer-readable storage medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like. In this sense, the computer-readable storage medium may, in some respects, be considered a non-transitory computer-readable storage medium.
  • The code or instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated software modules or hardware modules configured for encoding and decoding, or incorporated in a combined video codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.
  • The disclosure also contemplates any of a variety of integrated circuit devices that include circuitry to implement one or more of the techniques described in this disclosure. Such circuitry may be provided in a single integrated circuit chip or in multiple, interoperable integrated circuit chips in a so-called chipset. Such integrated circuit devices may be used in a variety of applications, some of which may include use in wireless communication devices, such as mobile telephone handsets.
  • Various aspects of the techniques have been described. These and other aspects are within the scope of the following claims.

Claims (45)

1. An apparatus for decoding media data comprising:
a reconstruction unit configured to determine a plurality of reconstructed transform coefficients based on a plurality of quantized transform coefficients; and
an inverse transform unit configured to apply an inverse discrete cosine transform (IDCT) of a 4 point discrete cosine transform (DCT) to the plurality of reconstructed transform coefficients to produce a residual block, the IDCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S), the first and second internal factors (C, S) are co-prime and greater than or equal to two.
2. The apparatus of claim 1, wherein determining the plurality of reconstructed transform coefficients comprises scaling the quantized transform coefficients.
3. The apparatus of claim 2, wherein scaling comprises dividing the plurality of reconstructed transform coefficients by a factor based at least on the scaled factor (ξ).
4. The apparatus of claim 1, wherein the inverse transform unit is configured to apply a 4×4 IDCT of a substantially orthogonal 4×4 DCT by applying the four point IDCT.
5. The apparatus of claim 4, wherein the inverse transform unit is configured to apply the four point IDCT in a column dimension and apply the four point IDCT in a row dimension.
6. The apparatus of claim 1, wherein the inverse transform unit applies first and second internal factors (C, S), wherein (C, S) equals one of (2, 5), (3, 7), (5,12), and (17, 41).
7. The apparatus of claim 1, wherein the DCT is a type II DCT and the IDCT is a type III DCT.
8. The apparatus of claim 1, wherein the inverse transform unit comprises a processor configured to execute software to apply the IDCT of the orthogonal 4 point DCT.
9. An apparatus for decoding media data comprising:
a reconstruction unit configured to determine a plurality of reconstructed transform coefficients based on a plurality of quantized transform coefficients; and
an inverse transform unit configured to apply an inverse discrete cosine transform (IDCT) of a 4 point DCT to the plurality of reconstructed transform coefficients to produce a residual block, the IDCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) such that the scaled factor equals a square root of a sum of a square of the first internal factor (C) plus a square of the second internal factor (S), the first and second internal factors (C, S) are dyadic rational numbers.
10. The apparatus of claim 9, wherein determining the plurality of reconstructed transform coefficients comprises scaling the quantized transform coefficients.
11. The apparatus of claim 10, wherein scaling comprises dividing the plurality of reconstructed transform coefficients by a factor based at least on the scaled factor (ξ).
12. The apparatus of claim 9, wherein the inverse transform unit is configured to apply a 4×4 IDCT of a substantially orthogonal 4×4 DCT by applying the four point IDCT.
13. The apparatus of claim 12, wherein the inverse transform unit is configured to apply the four point IDCT in a column dimension and apply the four point IDCT in a row dimension.
14. The apparatus of claim 9, wherein the DCT is a type II DCT and the IDCT is a type III DCT.
15. The apparatus of claim 9, wherein the inverse transform unit comprises a processor configured to execute software to apply the IDCT of the orthogonal 4 point DCT.
16. An apparatus for decoding media data comprising:
a reconstruction unit configured to determine a plurality of reconstructed transform coefficients based on a plurality of quantized transform coefficients; and
an inverse transform unit configured to apply an inverse discrete cosine transform (IDCT) of a 4 point discrete cosine transform (DCT) to the plurality of reconstructed transform coefficients to produce a residual block, the IDCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) by the following equation:
ξ = (C + S) / (ω + ψ)
wherein variables ω and ψ denote irrational internal transform factors and variables C and S denote internal transform factors used in place of variables ω and ψ in integer implementations of the non-orthogonal 4×4 DCT, the first and second internal factors (C, S) are co-prime and greater than or equal to two.
17. The apparatus of claim 16, wherein determining the plurality of reconstructed transform coefficients comprises scaling the quantized transform coefficients.
18. The apparatus of claim 17, wherein scaling comprises dividing the plurality of reconstructed transform coefficients by a factor based at least on the scaled factor (ξ).
19. The apparatus of claim 16, wherein the inverse transform unit is configured to apply a 4×4 IDCT of a substantially orthogonal 4×4 DCT by applying the four point IDCT.
20. The apparatus of claim 19, wherein the inverse transform unit is configured to apply the four point IDCT in a column dimension and apply the four point IDCT in a row dimension.
21. The apparatus of claim 19, wherein the inverse transform unit applies first and second internal factors (C, S), wherein (C, S) equals one of (2, 5), (3, 7), (5,12), and (17, 41).
22. The apparatus of claim 19, wherein the DCT is a type II DCT and the IDCT is a type III DCT.
23. The apparatus of claim 19, wherein the inverse transform unit comprises a processor configured to execute software to apply the IDCT of the orthogonal 4 point DCT.
24. An apparatus for decoding media data comprising:
a reconstruction unit configured to determine a plurality of reconstructed transform coefficients based on a plurality of quantized transform coefficients; and
an inverse transform unit configured to apply an inverse discrete cosine transform (IDCT) of a 4 point DCT to the plurality of reconstructed transform coefficients to produce a residual block, the IDCT having an odd portion that applies first and second internal factors (C, S) that are related to a scaled factor (ξ) by the following equation:
ξ = (C + S) / (ω + ψ)
wherein variables ω and ψ denote irrational internal transform factors and variables C and S denote internal transform factors used in place of variables ω and ψ in integer implementations of the non-orthogonal 4×4 DCT, the first and second internal factors (C, S) are dyadic rational numbers.
25. The apparatus of claim 24, wherein determining the plurality of reconstructed transform coefficients comprises scaling the quantized transform coefficients.
26. The apparatus of claim 25, wherein scaling comprises dividing the plurality of reconstructed transform coefficients by a factor based at least on the scaled factor (ξ).
27. The apparatus of claim 24, wherein the inverse transform unit is configured to apply a 4×4 IDCT of a substantially orthogonal 4×4 DCT by applying the four point IDCT.
28. The apparatus of claim 27, wherein the inverse transform unit is configured to apply the four point IDCT in a column dimension and apply the four point IDCT in a row dimension.
29. The apparatus of claim 24, wherein the DCT is a type II DCT and the IDCT is a type III DCT.
30. The apparatus of claim 24, wherein the inverse transform unit comprises a processor configured to execute software to apply the IDCT of the orthogonal 4 point DCT.
31. An apparatus for decoding media data comprising:
a reconstruction unit configured to determine a plurality of reconstructed transform coefficients based on a plurality of quantized transform coefficients; and
an inverse transform unit configured to apply an inverse discrete cosine transform (IDCT) of a 4 point discrete cosine transform (DCT) to the reconstructed transform coefficients to produce a residual block, the IDCT having an odd portion that applies first and second internal factors (A, B) as a part of the 4×4 inverse DCT that are related to a scaled factor (ξ) by the following equation:
ξ = (A + B) / (1 + 1/√2),
wherein the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two, the first and second internal factors (A, B) are co-prime and greater than or equal to two.
32. The apparatus of claim 31, wherein determining the plurality of reconstructed transform coefficients comprises scaling the quantized transform coefficients.
33. The apparatus of claim 32, wherein scaling comprises dividing the plurality of reconstructed transform coefficients by a factor based at least on the scaled factor (ξ).
34. The apparatus of claim 31, wherein the inverse transform unit is configured to apply a 4×4 IDCT of a substantially orthogonal 4×4 DCT by applying the four point IDCT.
35. The apparatus of claim 34, wherein the inverse transform unit is configured to apply the four point IDCT in a column dimension and apply the four point IDCT in a row dimension.
36. The apparatus of claim 31, wherein the inverse transform unit applies first and second internal factors (A, B), wherein (A, B) equals one of (3, 2), (7, 5), and (29, 41).
37. The apparatus of claim 31, wherein the DCT is a type II DCT and the IDCT is a type III DCT.
38. The apparatus of claim 31, wherein the inverse transform unit comprises a processor configured to execute software to apply the IDCT of the orthogonal 4 point DCT.
39. An apparatus for decoding media data comprising:
a reconstruction unit configured to determine a plurality of reconstructed transform coefficients based on a plurality of quantized transform coefficients; and
an inverse transform unit configured to apply an inverse discrete cosine transform (IDCT) of a 4 point DCT to the plurality of reconstructed transform coefficients, the IDCT having an odd portion that applies first and second internal factors (A, B) as a part of the 4×4 inverse DCT that are related to a scaled factor (ξ) by the following equation:
ξ = (A + B) / (1 + 1/√2),
wherein the scaled factor (ξ) equals a sum of the first internal factor (A) plus the second internal factor (B) divided by one plus one divided by the square root of two, the first and second internal factors (A, B) are dyadic rational numbers.
40. The apparatus of claim 39, wherein determining the plurality of reconstructed transform coefficients comprises scaling the quantized transform coefficients.
41. The apparatus of claim 40, wherein scaling comprises dividing the plurality of reconstructed transform coefficients by a factor based at least on the scaled factor (ξ).
42. The apparatus of claim 39, wherein the inverse transform unit is configured to apply a 4×4 IDCT of a substantially orthogonal 4×4 DCT by applying the four point IDCT.
43. The apparatus of claim 42, wherein the inverse transform unit is configured to apply the four point IDCT in a column dimension and apply the four point IDCT in a row dimension.
44. The apparatus of claim 39, wherein the DCT is a type II DCT and the IDCT is a type III DCT.
45. The apparatus of claim 39, wherein the inverse transform unit comprises a processor configured to execute software to apply the IDCT of the orthogonal 4 point DCT.
US14/717,618 2009-06-05 2015-05-20 4x4 transform for media coding Abandoned US20150256854A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/717,618 US20150256854A1 (en) 2009-06-05 2015-05-20 4x4 transform for media coding

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US18465609P 2009-06-05 2009-06-05
US21988709P 2009-06-24 2009-06-24
US12/788,625 US9069713B2 (en) 2009-06-05 2010-05-27 4X4 transform for media coding
US14/717,618 US20150256854A1 (en) 2009-06-05 2015-05-20 4x4 transform for media coding

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/788,625 Continuation US9069713B2 (en) 2009-06-05 2010-05-27 4X4 transform for media coding

Publications (1)

Publication Number Publication Date
US20150256854A1 true US20150256854A1 (en) 2015-09-10

Family

ID=43298574

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/788,625 Active 2033-08-10 US9069713B2 (en) 2009-06-05 2010-05-27 4X4 transform for media coding
US14/717,618 Abandoned US20150256854A1 (en) 2009-06-05 2015-05-20 4x4 transform for media coding
US14/717,678 Abandoned US20150256855A1 (en) 2009-06-05 2015-05-20 4x4 transform for media coding

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/788,625 Active 2033-08-10 US9069713B2 (en) 2009-06-05 2010-05-27 4X4 transform for media coding

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/717,678 Abandoned US20150256855A1 (en) 2009-06-05 2015-05-20 4x4 transform for media coding

Country Status (8)

Country Link
US (3) US9069713B2 (en)
EP (1) EP2438535A2 (en)
JP (1) JP5497163B2 (en)
KR (1) KR101315600B1 (en)
CN (3) CN105491389B (en)
BR (1) BRPI1010755A2 (en)
TW (1) TW201126349A (en)
WO (1) WO2010141899A2 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9110849B2 (en) * 2009-04-15 2015-08-18 Qualcomm Incorporated Computing even-sized discrete cosine transforms
US8762441B2 (en) * 2009-06-05 2014-06-24 Qualcomm Incorporated 4X4 transform for media coding
US9069713B2 (en) * 2009-06-05 2015-06-30 Qualcomm Incorporated 4X4 transform for media coding
US8451904B2 (en) 2009-06-24 2013-05-28 Qualcomm Incorporated 8-point transform for media data coding
US9075757B2 (en) * 2009-06-24 2015-07-07 Qualcomm Incorporated 16-point transform for media data coding
US9081733B2 (en) * 2009-06-24 2015-07-14 Qualcomm Incorporated 16-point transform for media data coding
US9118898B2 (en) 2009-06-24 2015-08-25 Qualcomm Incorporated 8-point transform for media data coding
US9824066B2 (en) 2011-01-10 2017-11-21 Qualcomm Incorporated 32-point transform for media data coding
CN108337522B (en) * 2011-06-15 2022-04-19 韩国电子通信研究院 Scalable decoding method/apparatus, scalable encoding method/apparatus, and medium
GB2559062B (en) * 2011-10-17 2018-11-14 Kt Corp Video decoding method using transform method selected from a transform method set
CA2853002C (en) * 2011-10-18 2017-07-25 Kt Corporation Method for encoding image, method for decoding image, image encoder, and image decoder
ES2864591T3 (en) * 2011-12-21 2021-10-14 Sun Patent Trust Context selection for entropy coding of transform coefficients
US10289856B2 (en) * 2014-10-17 2019-05-14 Spatial Digital Systems, Inc. Digital enveloping for digital right management and re-broadcasting
EP3051818A1 (en) 2015-01-30 2016-08-03 Thomson Licensing Method and device for decoding a color picture
US9998763B2 (en) * 2015-03-31 2018-06-12 Nxgen Partners Ip, Llc Compression of signals, images and video for multimedia, communications and other applications
CA3108454A1 (en) * 2018-08-03 2020-02-06 V-Nova International Limited Transformations for signal enhancement coding
SG11202105604UA (en) * 2018-11-27 2021-06-29 Op Solutions Llc Block-based spatial activity measures for pictures cross-reference to related applications
WO2020113068A1 (en) 2018-11-27 2020-06-04 Op Solutions, Llc Block-based picture fusion for contextual segmentation and processing

Family Cites Families (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2581463B1 (en) * 1985-05-03 1989-09-08 Thomson Csf COSINUS TRANSFORM COMPUTING DEVICES, CODING DEVICE AND IMAGE DECODING DEVICE COMPRISING SUCH COMPUTING DEVICES
US5253055A (en) * 1992-07-02 1993-10-12 At&T Bell Laboratories Efficient frequency scalable video encoding with coefficient selection
US5408425A (en) * 1993-05-25 1995-04-18 The Aerospace Corporation Split-radix discrete cosine transform
US5508949A (en) * 1993-12-29 1996-04-16 Hewlett-Packard Company Fast subband filtering in digital signal coding
US5649077A (en) * 1994-03-30 1997-07-15 Institute Of Microelectronics, National University Of Singapore Modularized architecture for rendering scaled discrete cosine transform coefficients and inverse thereof for rapid implementation
TW284869B (en) 1994-05-27 1996-09-01 Hitachi Ltd
JP3115199B2 (en) * 1994-12-16 2000-12-04 松下電器産業株式会社 Image compression coding device
US5737450A (en) * 1995-05-15 1998-04-07 Polaroid Corporation Method and apparatus for fast two-dimensional cosine transform filtering
JP2778622B2 (en) * 1995-06-06 1998-07-23 日本電気株式会社 Two-dimensional DCT circuit
JPH09212484A (en) 1996-01-30 1997-08-15 Texas Instr Inc <Ti> Discrete cosine transformation method
AU9030298A (en) 1997-08-25 1999-03-16 Qualcomm Incorporated Variable block size 2-dimensional inverse discrete cosine transform engine
CN1296852C (en) 1997-11-17 2007-01-24 索尼电子有限公司 Method and system for digital video data decompression by odopting discrete conversion
US6215909B1 (en) * 1997-11-17 2001-04-10 Sony Electronics, Inc. Method and system for improved digital video data processing using 4-point discrete cosine transforms
US6252994B1 (en) * 1998-01-26 2001-06-26 Xerox Corporation Adaptive quantization compatible with the JPEG baseline sequential mode
US6222944B1 (en) 1998-05-07 2001-04-24 Sarnoff Corporation Down-sampling MPEG image decoder
JP2001346213A (en) * 2000-06-02 2001-12-14 Nec Corp Discrete cosine transform unit and its discrete cosine transform method
WO2001059603A1 (en) 2000-02-09 2001-08-16 Cheng T C Fast method for the forward and inverse mdct in audio coding
CN100429644C (en) 2000-10-23 2008-10-29 国际商业机器公司 Faster transforms using scaled terms, early aborts, and precision refinements
US7929610B2 (en) * 2001-03-26 2011-04-19 Sharp Kabushiki Kaisha Methods and systems for reducing blocking artifacts with reduced complexity for spatially-scalable video coding
US7366236B1 (en) * 2001-06-04 2008-04-29 Cisco Sytems Canada Co. Source adaptive system and method for 2D iDCT
ATE363183T1 (en) 2001-08-24 2007-06-15 Koninkl Philips Electronics Nv ADDING HALF IMAGES OF AN IMAGE
US7082450B2 (en) * 2001-08-30 2006-07-25 Nokia Corporation Implementation of a transform and of a subsequent quantization
US6882685B2 (en) * 2001-09-18 2005-04-19 Microsoft Corporation Block transform and quantization for image and video coding
KR100481067B1 (en) * 2001-09-28 2005-04-07 브이케이 주식회사 Apparatus for 2-D Discrete Cosine Transform using Distributed Arithmetic Module
US7088791B2 (en) 2001-10-19 2006-08-08 Texas Instruments Incorporated Systems and methods for improving FFT signal-to-noise ratio by identifying stage without bit growth
CN101448162B (en) * 2001-12-17 2013-01-02 微软公司 Method for processing video image
FR2834362A1 (en) 2001-12-28 2003-07-04 Koninkl Philips Electronics Nv ADAPTIVE REVERSE TRANSFORMATION DEVICE
JP2003223433A (en) 2002-01-31 2003-08-08 Matsushita Electric Ind Co Ltd Method and apparatus for orthogonal transformation, encoding method and apparatus, method and apparatus for inverse orthogonal transformation, and decoding method and apparatus
US7007055B2 (en) * 2002-03-12 2006-02-28 Intel Corporation Method of performing NxM Discrete Cosine Transform
US7242713B2 (en) * 2002-05-02 2007-07-10 Microsoft Corporation 2-D transforms for image and video coding
US7437394B2 (en) * 2002-06-19 2008-10-14 The Aerospace Corporation Merge and split discrete cosine block transform method
US20040136602A1 (en) * 2003-01-10 2004-07-15 Nithin Nagaraj Method and apparatus for performing non-dyadic wavelet transforms
US7412100B2 (en) * 2003-09-04 2008-08-12 Qualcomm Incorporated Apparatus and method for sub-sampling images in a transform domain
US7379500B2 (en) * 2003-09-30 2008-05-27 Microsoft Corporation Low-complexity 2-power transform for image/video compression
TWI241074B (en) 2003-11-05 2005-10-01 Bing-Fei Wu Image compression system using two-dimensional discrete wavelet transformation
TWI240560B (en) 2003-12-03 2005-09-21 Via Tech Inc Control device, system and method for reading multi-pixel
US20050213835A1 (en) * 2004-03-18 2005-09-29 Huazhong University Of Science & Technology And Samsung Electronics Co., Ltd. Integer transform matrix selection method in video coding and related integer transform method
US8861600B2 (en) * 2004-06-18 2014-10-14 Broadcom Corporation Method and system for dynamically configurable DCT/IDCT module in a wireless handset
US7587093B2 (en) 2004-07-07 2009-09-08 Mediatek Inc. Method and apparatus for implementing DCT/IDCT based video/image processing
KR100688382B1 (en) * 2004-08-13 2007-03-02 경희대학교 산학협력단 Method for interpolating a reference pixel in annular picture, apparatus therefore, annular picture encoding method, apparatus therefore, annular picture decoding method and apparatus therefore
US8130827B2 (en) * 2004-08-13 2012-03-06 Samsung Electronics Co., Ltd. Method and apparatus for interpolating a reference pixel in an annular image and encoding/decoding an annular image
TWI284869B (en) 2004-10-22 2007-08-01 Au Optronics Corp Pixel of display
US7471850B2 (en) 2004-12-17 2008-12-30 Microsoft Corporation Reversible transform for lossy and lossless 2-D data compression
US7792385B2 (en) * 2005-01-25 2010-09-07 Globalfoundries Inc. Scratch pad for storing intermediate loop filter data
TW200643848A (en) 2005-06-01 2006-12-16 Wintek Corp Method and apparatus for four-color data conversion
US20070025441A1 (en) * 2005-07-28 2007-02-01 Nokia Corporation Method, module, device and system for rate control provision for video encoders capable of variable bit rate encoding
TWI280804B (en) 2005-09-26 2007-05-01 Yuh-Jue Chuang Method for splitting 8x8 DCT into four 4x4 modified DCTS used in AVC/H. 264
US7725516B2 (en) 2005-10-05 2010-05-25 Qualcomm Incorporated Fast DCT algorithm for DSP with VLIW architecture
US20070200738A1 (en) * 2005-10-12 2007-08-30 Yuriy Reznik Efficient multiplication-free computation for signal and data processing
TWI311856B (en) 2006-01-04 2009-07-01 Quanta Comp Inc Synthesis subband filtering method and apparatus
US8595281B2 (en) * 2006-01-11 2013-11-26 Qualcomm Incorporated Transforms with common factors
CN100562111C (en) 2006-03-28 2009-11-18 华为技术有限公司 Discrete cosine inverse transformation method and device thereof
US8849884B2 (en) * 2006-03-29 2014-09-30 Qualcom Incorporate Transform design with scaled and non-scaled interfaces
CN101018327B (en) * 2006-04-11 2012-10-31 炬力集成电路设计有限公司 Discrete cosine conversion integration module and its computing combination method
EP1850597A1 (en) 2006-04-24 2007-10-31 Universität Dortmund Method and circuit for performing a cordic based Loeffler discrete cosine transformation (DCT), particularly for signal processing
US8571340B2 (en) * 2006-06-26 2013-10-29 Qualcomm Incorporated Efficient fixed-point approximations of forward and inverse discrete cosine transforms
US8582663B2 (en) * 2006-08-08 2013-11-12 Core Wireless Licensing S.A.R.L. Method, device, and system for multiplexing of video streams
US8300698B2 (en) * 2006-10-23 2012-10-30 Qualcomm Incorporated Signalling of maximum dynamic range of inverse discrete cosine transform
US8548815B2 (en) 2007-09-19 2013-10-01 Qualcomm Incorporated Efficient design of MDCT / IMDCT filterbanks for speech and audio coding applications
US8654833B2 (en) * 2007-09-26 2014-02-18 Qualcomm Incorporated Efficient transformation techniques for video coding
WO2009045683A1 (en) * 2007-09-28 2009-04-09 Athanasios Leontaris Video compression and tranmission techniques
US20090141808A1 (en) * 2007-11-30 2009-06-04 Yiufai Wong System and methods for improved video decoding
US8631060B2 (en) * 2007-12-13 2014-01-14 Qualcomm Incorporated Fast algorithms for computation of 5-point DCT-II, DCT-IV, and DST-IV, and architectures
KR20090078494A (en) * 2008-01-15 2009-07-20 삼성전자주식회사 Deblocking filtering method and deblocking filter for video data
CN101330616B (en) 2008-07-31 2011-04-13 上海交通大学 Hardware implementing apparatus and method for inverse discrete cosine transformation during video decoding process
US20100172409A1 (en) * 2009-01-06 2010-07-08 Qualcom Incorporated Low-complexity transforms for data compression and decompression
US9110849B2 (en) * 2009-04-15 2015-08-18 Qualcomm Incorporated Computing even-sized discrete cosine transforms
US8762441B2 (en) * 2009-06-05 2014-06-24 Qualcomm Incorporated 4X4 transform for media coding
US9081733B2 (en) * 2009-06-24 2015-07-14 Qualcomm Incorporated 16-point transform for media data coding
US8451904B2 (en) * 2009-06-24 2013-05-28 Qualcomm Incorporated 8-point transform for media data coding
US9118898B2 (en) * 2009-06-24 2015-08-25 Qualcomm Incorporated 8-point transform for media data coding
US9075757B2 (en) * 2009-06-24 2015-07-07 Qualcomm Incorporated 16-point transform for media data coding
CN101989253B (en) * 2009-07-31 2012-08-29 鸿富锦精密工业(深圳)有限公司 Discrete cosine conversion circuit and image processing device using same
US9824066B2 (en) * 2011-01-10 2017-11-21 Qualcomm Incorporated 32-point transform for media data coding

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030093452A1 (en) * 2001-08-23 2003-05-15 Minhua Zhou Video block transform
US9069713B2 (en) * 2009-06-05 2015-06-30 Qualcomm Incorporated 4X4 transform for media coding

Also Published As

Publication number Publication date
JP5497163B2 (en) 2014-05-21
US9069713B2 (en) 2015-06-30
CN105744280A (en) 2016-07-06
CN105491389B (en) 2018-12-04
JP2012529128A (en) 2012-11-15
CN102667757B (en) 2016-03-30
CN105491389A (en) 2016-04-13
US20150256855A1 (en) 2015-09-10
KR20120052927A (en) 2012-05-24
EP2438535A2 (en) 2012-04-11
CN102667757A (en) 2012-09-12
KR101315600B1 (en) 2013-10-10
WO2010141899A3 (en) 2012-05-18
TW201126349A (en) 2011-08-01
BRPI1010755A2 (en) 2016-03-22
WO2010141899A2 (en) 2010-12-09
US20100309974A1 (en) 2010-12-09

Similar Documents

Publication Publication Date Title
US8762441B2 (en) 4X4 transform for media coding
US9069713B2 (en) 4X4 transform for media coding
US9319685B2 (en) 8-point inverse discrete cosine transform including odd and even portions for media data coding
US8718144B2 (en) 8-point transform for media data coding
US9110849B2 (en) Computing even-sized discrete cosine transforms
US9081733B2 (en) 16-point transform for media data coding
US9075757B2 (en) 16-point transform for media data coding
US20100172409A1 (en) Low-complexity transforms for data compression and decompression

Legal Events

Date Code Title Description
AS Assignment

Owner name: QUALCOMM INCORPORATED, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:REZNIK, YURIY;REEL/FRAME:036807/0851

Effective date: 20100516

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE