WO2010039728A2 - Video coding with large macroblocks - Google Patents


Info

Publication number
WO2010039728A2
WO2010039728A2 (PCT application PCT/US2009/058833, US2009058833W)
Authority
WO
WIPO (PCT)
Prior art keywords
video
block
macroblock
coded unit
video blocks
Prior art date
Application number
PCT/US2009/058833
Other languages
French (fr)
Other versions
WO2010039728A3 (en)
Inventor
Peisong Chen
Yan Ye
Marta Karczewicz
Original Assignee
Qualcomm Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed. "Global patent litigation dataset" by Darts-ip (https://patents.darts-ip.com/?family=42041691) is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Qualcomm Incorporated filed Critical Qualcomm Incorporated
Priority to JP2011530142A priority Critical patent/JP2012504908A/en
Priority to CN200980139141.9A priority patent/CN102172021B/en
Priority to EP09793127.3A priority patent/EP2347591B2/en
Priority to KR1020117010099A priority patent/KR101222400B1/en
Publication of WO2010039728A2 publication Critical patent/WO2010039728A2/en
Publication of WO2010039728A3 publication Critical patent/WO2010039728A3/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/17: characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176: characterised by the coding unit, the region being a block, e.g. a macroblock
    • H04N19/177: characterised by the coding unit, the unit being a group of pictures [GOP]
    • H04N19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/46: Embedding additional information in the video signal during the compression process
    • H04N19/593: using predictive coding involving spatial prediction techniques
    • H04N19/61: using transform coding in combination with predictive coding
    • H04N19/96: Tree coding, e.g. quad-tree coding

Definitions

  • This disclosure relates to digital video coding and, more particularly, block-based video coding.
  • Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, digital cameras, digital recording devices, video gaming devices, video game consoles, cellular or satellite radio telephones, and the like.
  • Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263 or ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC), and extensions of such standards, to transmit and receive digital video information more efficiently.
  • Video compression techniques perform spatial prediction and/or temporal prediction to reduce or remove redundancy inherent in video sequences.
  • a video frame or slice may be partitioned into macroblocks. Each macroblock can be further partitioned.
  • Macroblocks in an intra-coded (I) frame or slice are encoded using spatial prediction with respect to neighboring macroblocks.
  • Macroblocks in an inter-coded (P or B) frame or slice may use spatial prediction with respect to neighboring macroblocks in the same frame or slice or temporal prediction with respect to other reference frames.
  • this disclosure describes techniques for encoding digital video data using large macroblocks.
  • Large macroblocks are larger than macroblocks generally prescribed by existing video encoding standards.
  • Most video encoding standards prescribe the use of a macroblock in the form of a 16x16 array of pixels.
  • an encoder and decoder may utilize large macroblocks that are greater than 16x16 pixels in size.
  • a large macroblock may have a 32x32, 64x64, or larger array of pixels.
  • Video coding relies on spatial and/or temporal redundancy to support compression of video data. Video frames generated with higher spatial resolution and/or higher frame rate may support more redundancy.
  • the use of large macroblocks, as described in this disclosure, may permit a video coding technique to utilize larger degrees of redundancy produced as spatial resolution and/or frame rate increase.
  • video coding techniques may utilize a variety of features to support coding of large macroblocks.
  • a large macroblock coding technique may partition a large macroblock into partitions, and use different partition sizes and different coding modes, e.g., different spatial (I) or temporal (P or B) modes, for selected partitions.
  • a coding technique may utilize hierarchical coded block pattern (CBP) values to efficiently identify coded macroblocks and partitions having non-zero coefficients within a large macroblock.
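The hierarchical CBP idea described above can be sketched in a few lines. This is a minimal illustration, not the patent's actual syntax: the function name, the flat list-of-partitions data layout, and the two-level structure are assumptions made for the example.

```python
def hierarchical_cbp(partitions):
    """Compute a two-level CBP for a large macroblock.

    `partitions` is a list of coefficient lists, one per sub-partition.
    Returns (top_bit, partition_bits): top_bit is 0 when every
    coefficient in the macroblock is zero, so no per-partition bits
    need to be signaled; otherwise it is 1 and one bit is emitted per
    partition, set when that partition holds a non-zero coefficient.
    """
    if all(c == 0 for part in partitions for c in part):
        return 0, []  # nothing coded below this level
    return 1, [int(any(c != 0 for c in part)) for part in partitions]

# A 64x64 macroblock modeled as four 32x32 coefficient partitions:
mb = [[0, 0, 0], [5, 0, 0], [0, 0, 0], [0, 2, 1]]
print(hierarchical_cbp(mb))  # → (1, [0, 1, 0, 1])
```

An all-zero macroblock costs a single bit at the top level, which is the efficiency the hierarchical scheme is after.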
  • a coding technique may compare rate-distortion metrics produced by coding using large and small macroblocks to select a macroblock size producing more favorable results.
  • the disclosure provides a method comprising encoding, with a video encoder, a video block having a size of more than 16x16 pixels, generating block- type syntax information that indicates the size of the block, and generating a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient.
  • the disclosure provides an apparatus comprising a video encoder configured to encode a video block having a size of more than 16x16 pixels, generate block-type syntax information that indicates the size of the block, and generate a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient.
  • the disclosure provides a computer-readable medium encoded with instructions to cause a video encoding apparatus to encode, with a video encoder, a video block having a size of more than 16x16 pixels, generate block-type syntax information that indicates the size of the block, and generate a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient.
  • the disclosure provides a method comprising receiving, with a video decoder, an encoded video block having a size of more than 16x16 pixels, receiving block-type syntax information that indicates the size of the encoded block, receiving a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient, and decoding the encoded block based on the block-type syntax information and the coded block pattern value for the encoded block.
  • the disclosure provides an apparatus comprising a video decoder configured to receive an encoded video block having a size of more than 16x16 pixels, receive block-type syntax information that indicates the size of the encoded block, receive a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient, and decode the encoded block based on the block-type syntax information and the coded block pattern value for the encoded block.
  • the disclosure provides a computer-readable medium comprising instructions to cause a video decoder to receive an encoded video block having a size of more than 16x16 pixels, receive block-type syntax information that indicates the size of the encoded block, receive a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient, and decode the encoded block based on the block-type syntax information and the coded block pattern value for the encoded block.
  • the disclosure provides a method comprising receiving, with a video encoder, a video block having a size of more than 16x16 pixels, partitioning the block into partitions, encoding one of the partitions using a first encoding mode, encoding another of the partitions using a second encoding mode different from the first encoding mode, and generating block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions.
  • the disclosure provides an apparatus comprising a video encoder configured to receive a video block having a size of more than 16x16 pixels, partition the block into partitions, encode one of the partitions using a first encoding mode, encode another of the partitions using a second encoding mode different from the first encoding mode, generate block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions.
  • the disclosure provides a computer-readable medium encoded with instructions to cause a video encoder to receive a video block having a size of more than 16x16 pixels, partition the block into partitions, encode one of the partitions using a first encoding mode, encode another of the partitions using a second encoding mode different from the first encoding mode, and generate block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions.
  • the disclosure provides a method comprising receiving, with a video decoder, a video block having a size of more than 16x16 pixels, wherein the block is partitioned into partitions, one of the partitions is encoded with a first encoding mode and another of the partitions is encoded with a second encoding mode different from the first encoding mode, receiving block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions, and decoding the video block based on the block-type syntax information.
  • the disclosure provides an apparatus comprising a video decoder configured to receive a video block having a size of more than 16x16 pixels, wherein the block is partitioned into partitions, one of the partitions is encoded with a first encoding mode and another of the partitions is encoded with a second encoding mode different from the first encoding mode, receive block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions, and decode the video block based on the block-type syntax information.
  • the disclosure provides a computer-readable medium encoded with instructions to cause a video decoder to receive, with a video decoder, a video block having a size of more than 16x16 pixels, wherein the block is partitioned into partitions, one of the partitions is encoded with a first encoding mode and another of the partitions is encoded with a second encoding mode different from the first encoding mode, receive block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions, and decode the video block based on the block-type syntax information.
  • the disclosure provides a method comprising receiving, with a digital video encoder, a video coding unit, determining a first rate-distortion metric for encoding the video coding unit using first video blocks with sizes of 16x16 pixels, determining a second rate-distortion metric for encoding the video coding unit using second video blocks with sizes of more than 16x16 pixels, encoding the video coding unit using the first video blocks when the first rate-distortion metric is less than the second rate-distortion metric, and encoding the video coding unit using the second video blocks when the second rate-distortion metric is less than the first rate-distortion metric.
  • the disclosure provides an apparatus comprising a video encoder configured to receive a video coding unit, determine a first rate-distortion metric for encoding the video coding unit using first video blocks with sizes of 16x16 pixels, determine a second rate-distortion metric for encoding the video coding unit using second video blocks with sizes of more than 16x16 pixels, encode the video coding unit using the first video blocks when the first rate-distortion metric is less than the second rate-distortion metric, and encode the video coding unit using the second video blocks when the second rate-distortion metric is less than the first rate-distortion metric.
  • the disclosure provides a computer-readable medium encoded with instructions to cause a video encoder to receive a video coding unit, determine a first rate-distortion metric for encoding the video coding unit using first video blocks with sizes of 16x16 pixels, determine a second rate-distortion metric for encoding the video coding unit using second video blocks with sizes of more than 16x16 pixels, encode the video coding unit using the first video blocks when the first rate-distortion metric is less than the second rate-distortion metric, and encode the video coding unit using the second video blocks when the second rate-distortion metric is less than the first rate-distortion metric.
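The rate-distortion comparison in the aspects above can be sketched with the standard Lagrangian cost J = D + lambda * R. All numbers and names below are hypothetical; the patent does not prescribe a particular metric formula.

```python
def rd_cost(rate_bits, distortion, lam):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate_bits

def choose_block_size(cost_16, cost_large, large=64):
    """Pick the macroblock size whose RD cost is lower (16x16 on a tie)."""
    return 16 if cost_16 <= cost_large else large

# Hypothetical rate/distortion measurements for one coding unit:
lam = 0.85
cost_16 = rd_cost(rate_bits=4200, distortion=910.0, lam=lam)   # 4480.0
cost_64 = rd_cost(rate_bits=3100, distortion=1050.0, lam=lam)  # 3685.0
print(choose_block_size(cost_16, cost_64))  # → 64
```

Here the larger blocks win because the rate savings outweigh the extra distortion at this lambda.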
  • the disclosure provides a method comprising encoding, with a video encoder, a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, and generating syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit.
  • the disclosure provides an apparatus comprising a video encoder configured to encode a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels and to generate syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit.
  • the disclosure provides an apparatus comprising means for encoding a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, and means for generating syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit.
  • the disclosure provides a computer-readable storage medium encoded with instructions for causing a programmable processor to encode a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, and generate syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit.
  • the disclosure provides a method comprising receiving, with a video decoder, a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, receiving syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit, selecting a block-type syntax decoder according to the maximum size value, and decoding each of the plurality of video blocks in the coded unit using the selected block-type syntax decoder.
  • the disclosure provides an apparatus comprising a video decoder configured to receive a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, receive syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit, select a block-type syntax decoder according to the maximum size value, and decode each of the plurality of video blocks in the coded unit using the selected block-type syntax decoder.
  • the disclosure provides means for receiving a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, means for receiving syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit, means for selecting a block-type syntax decoder according to the maximum size value, and means for decoding each of the plurality of video blocks in the coded unit using the selected block-type syntax decoder.
  • the disclosure provides a computer-readable storage medium encoded with instructions for causing a programmable processor to receive a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, receive syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit, select a block-type syntax decoder according to the maximum size value, and decode each of the plurality of video blocks in the coded unit using the selected block-type syntax decoder.
  • FIG. 1 is a block diagram illustrating an example video encoding and decoding system that encodes and decodes digital video data using large macroblocks.
  • FIG. 2 is a block diagram illustrating an example of a video encoder that implements techniques for coding large macroblocks.
  • FIG. 3 is a block diagram illustrating an example of a video decoder that implements techniques for coding large macroblocks.
  • FIG. 4A is a conceptual diagram illustrating partitioning among various levels of a large macroblock.
  • FIG. 4B is a conceptual diagram illustrating assignment of different coding modes to different partitions of a large macroblock.
  • FIG. 5 is a conceptual diagram illustrating a hierarchical view of various levels of a large macroblock.
  • FIG. 6 is a flowchart illustrating an example method for setting a coded block pattern (CBP) value of a 64x64 pixel large macroblock.
  • FIG. 7 is a flowchart illustrating an example method for setting a CBP value of a 32x32 pixel partition of a 64x64 pixel large macroblock.
  • FIG. 8 is a flowchart illustrating an example method for setting a CBP value of a 16x16 pixel partition of a 32x32 pixel partition of a 64x64 pixel large macroblock.
  • FIG. 9 is a flowchart illustrating an example method for determining a two-bit luma16x8_CBP value.
  • FIG. 10 is a block diagram illustrating an example arrangement of a 64x64 pixel large macroblock.
  • FIG. 11 is a flowchart illustrating an example method for calculating optimal partitioning and encoding methods for an NxN pixel large video block.
  • FIG. 12 is a block diagram illustrating an example 64x64 pixel macroblock with various partitions and selected encoding methods for each partition.
  • FIG. 13 is a flowchart illustrating an example method for determining an optimal size of a macroblock for encoding a frame of a video sequence.
  • FIG. 14 is a block diagram illustrating an example wireless communication device including a video encoder/decoder (CODEC) that codes digital video data using large macroblocks.
  • FIG. 15 is a block diagram illustrating an example array representation of a hierarchical CBP representation for a large macroblock.
  • FIG. 16 is a block diagram illustrating an example tree structure corresponding to the hierarchical CBP representation of FIG. 15.
  • FIG. 17 is a flowchart illustrating an example method for using syntax information of a coded unit to indicate and select block-based syntax encoders and decoders for video blocks of the coded unit.
  • the disclosure describes techniques for encoding and decoding digital video data using large macroblocks.
  • Large macroblocks are larger than macroblocks generally prescribed by existing video encoding standards.
  • Most video encoding standards prescribe the use of a macroblock in the form of a 16x16 array of pixels.
  • an encoder and/or a decoder may utilize large macroblocks that are greater than 16x16 pixels in size.
  • a large macroblock may have a 32x32, 64x64, or possibly larger array of pixels.
  • a macroblock may refer to a data structure for a pixel array that comprises a defined size expressed as NxN pixels, where N is a positive integer value.
  • the macroblock may define four luminance blocks, each comprising an array of (N/2)x(N/2) pixels, two chrominance blocks, each comprising an array of (N/2)x(N/2) pixels, and a header comprising macroblock-type information and coded block pattern (CBP) information, as discussed in greater detail below.
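That data layout can be sketched as a small structure. This is an illustrative model only, assuming 4:2:0 chroma subsampling; the class and field names are not from the patent.

```python
from dataclasses import dataclass

@dataclass
class Macroblock:
    """Minimal NxN macroblock header, assuming 4:2:0 chroma sampling."""
    n: int        # macroblock dimension: 16, 32, 64, ...
    mb_type: str  # macroblock-type information
    cbp: int      # coded block pattern

    def luma_block_size(self):
        # four luminance blocks, each (N/2)x(N/2), cover the NxN luma area
        return (self.n // 2, self.n // 2)

    def chroma_block_size(self):
        # each of the two chrominance blocks is subsampled to (N/2)x(N/2)
        return (self.n // 2, self.n // 2)

mb = Macroblock(n=64, mb_type="I", cbp=0)
print(mb.luma_block_size())  # → (32, 32)
```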
  • macroblocks may comprise NxN arrays of pixels where N may be greater than 16.
  • conventional video coding standards prescribe that an inter-encoded macroblock is typically assigned a single motion vector.
  • a plurality of motion vectors may be assigned for inter-encoded partitions of an NxN macroblock, as described in greater detail below.
  • References to "large macroblocks" or similar phrases generally refer to macroblocks with arrays of pixels greater than 16x16.
  • large macroblocks may support improvements in coding efficiency and/or reductions in data transmission overhead while maintaining or possibly improving image quality.
  • the use of large macroblocks may permit a video encoder and/or decoder to take advantage of increased redundancy provided by video data generated with increased spatial resolution (e.g., 1280x720 or 1920x1080 pixels per frame) and/or increased frame rate (e.g., 30 or 60 frames per second).
  • a digital video sequence with a spatial resolution of 1280x720 pixels per frame and a frame rate of 60 frames per second is spatially 36 times larger than and temporally 4 times faster than a digital video sequence with a spatial resolution of 176x144 pixels per frame and a frame rate of 15 frames per second.
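The scale factors quoted above can be checked directly:

```python
# Checking the scale factors quoted for the two example sequences:
hd_pixels = 1280 * 720  # 921,600 pixels per frame
sd_pixels = 176 * 144   # 25,344 pixels per frame
spatial_factor = hd_pixels / sd_pixels   # ≈ 36.4, i.e. about 36x larger
temporal_factor = 60 / 15                # 4x faster
print(round(spatial_factor), temporal_factor)  # → 36 4.0
```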
  • a video encoder and/or decoder can better exploit increased spatial and/or temporal redundancy to support compression of video data.
  • a smaller number of blocks may be encoded for a given frame or slice, reducing the amount of overhead information that needs to be transmitted.
  • larger macroblocks may permit a reduction in the overall number of macroblocks coded per frame or slice.
  • if the spatial resolution of a frame is increased by four times, for example, then four times as many 16x16 macroblocks would be required for the pixels in the frame.
  • the number of macroblocks needed to handle the increased spatial resolution is reduced.
  • the cumulative amount of coding information such as syntax information, motion vector data, and the like can be reduced.
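The per-frame block-count reduction is simple to quantify. The example frame size is taken from the disclosure; the tiling function itself is an illustrative sketch (frame edges are rounded up to whole macroblocks).

```python
import math

def macroblocks_per_frame(width, height, mb_size):
    """Macroblocks needed to tile a frame (edges rounded up)."""
    return math.ceil(width / mb_size) * math.ceil(height / mb_size)

# For a 1280x720 frame, 64x64 macroblocks cut the per-frame block
# count (and thus per-block headers, modes, and motion vectors) ~15x:
print(macroblocks_per_frame(1280, 720, 16))  # → 3600
print(macroblocks_per_frame(1280, 720, 64))  # → 240
```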
  • the size of a macroblock generally refers to the number of pixels contained in the macroblock, e.g., 64x64, 32x32, 16x16, or the like.
  • the spatial area defined by the vertical and horizontal dimensions of a large macroblock, e.g., 64x64 or 32x32, i.e., as a fraction of the area defined by the vertical and horizontal dimensions of a video frame, may or may not be larger than the area of a conventional 16x16 macroblock.
  • the area of the large macroblock may be the same or similar to a conventional 16x16 macroblock.
  • the large macroblock has a higher spatial resolution characterized by a higher number and higher spatial density of pixels within the macroblock.
  • the size of the macroblock may be configured based at least in part on the number of pixels in the frame, i.e., the spatial resolution in the frame. If the frame has a higher number of pixels, a large macroblock can be configured to have a higher number of pixels.
  • a video encoder may be configured to utilize a 32x32 pixel macroblock for a 1280x720 pixel frame displayed at 30 frames per second.
  • a video encoder may be configured to utilize a 64x64 pixel macroblock for a 1280x720 pixel frame displayed at 60 frames per second.
  • Each macroblock encoded by an encoder may require data that describes one or more characteristics of the macroblock.
  • the data may indicate, for example, macroblock type data to represent the size of the macroblock, the way in which the macroblock is partitioned, and the coding mode (spatial or temporal) applied to the macroblock and/or its partitions.
  • the data may include motion vector difference (mvd) data along with other syntax elements that represent motion vector information for the macroblock and/or its partitions.
  • the data may include a coded block pattern (CBP) value along with other syntax elements to represent residual information after prediction.
  • CBP coded block pattern
  • the macroblock type data may be provided in a single macroblock header for the large macroblock.
  • Video coding techniques described in this disclosure may utilize one or more features to support coding of large macroblocks. For example, a large macroblock may be partitioned into smaller partitions. Different coding modes, e.g., different spatial (I) or temporal (P or B) coding modes, may be applied to selected partitions within a large macroblock.
  • I spatial
  • P or B temporal
  • hierarchical coded block pattern (CBP) values can be utilized to efficiently identify coded macroblocks and partitions having non-zero transform coefficients representing residual data.
  • rate-distortion metrics may be compared for coding using large and small macroblock sizes to select a macroblock size producing favorable results.
  • a coded unit e.g., a frame, slice, sequence, or group of pictures
  • macroblocks of varying sizes may include a syntax element that indicates the size of the largest macroblock in the coded unit.
  • large macroblocks comprise a different block-level syntax than standard 16x16 pixel blocks. Accordingly, by indicating the size of the largest macroblock in the coded unit, an encoder may signal to a decoder the block-level syntax to apply to the macroblocks of the coded unit.
  • the use of different coding modes for different partitions in a large macroblock may be referred to as mixed mode coding of large macroblocks.
  • a large macroblock may be coded such that some partitions have different coding modes, such as different intra-coding modes (e.g., I16x16, I8x8, I4x4) or intra- and inter-coding modes.
  • one partition of a large macroblock may be coded with a first mode and another partition may be coded with a second mode that is different than the first mode.
  • the first mode may be a first I mode and the second mode may be a second I mode, different from the first I mode.
  • the first mode may be an I mode and the second mode may be a P or B mode.
  • a large macroblock may include one or more temporally (P or B) coded partitions and one or more spatially (I) coded partitions, or one or more spatially coded partitions with different I modes.
  • One or more hierarchical coded block pattern (CBP) values may be used to efficiently describe whether any partitions in a large macroblock have at least one nonzero transform coefficient and, if so, which partitions.
  • the transform coefficients encode residual data for the large macroblock.
  • a large macroblock level CBP bit indicates whether any partition in the large macroblock includes a non-zero, quantized coefficient. If not, there is no need to consider whether any of the partitions has a nonzero coefficient, as the entire large macroblock is known to have no non-zero coefficients. In this case, a predictive macroblock can be used to decode the macroblock without residual data.
  • partition-level CBP values can be analyzed to identify which of the partitions includes at least one non-zero coefficient.
  • the decoder then may retrieve appropriate residual data for the partitions having at least one non-zero coefficient, and decode the partitions using the residual data and predictive block data.
  • one or more partitions may have nonzero coefficients, and therefore include partition-level CBP values with the appropriate indication. Both the large macroblock and at least some of the partitions may be larger than 16x16 pixels.
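The two-level CBP check described above can be sketched as follows; the function name and the flat per-partition bit list are hypothetical simplifications, not the patent's actual bitstream layout.

```python
def partitions_to_decode(mb_cbp, partition_cbps):
    """Return indices of partitions whose residual data a decoder must
    retrieve, given a macroblock-level CBP bit and per-partition CBP bits
    (a sketch; a real bitstream encodes these hierarchically)."""
    if mb_cbp == 0:
        # No partition has a non-zero coefficient: the entire macroblock
        # is decoded from predictive data alone, with no residual.
        return []
    # Otherwise, only partitions whose CBP bit is set carry residual data.
    return [i for i, bit in enumerate(partition_cbps) if bit]
```

For example, `partitions_to_decode(1, [0, 1, 0, 1])` reports that only the second and fourth partitions need residual data, while a zero macroblock-level bit short-circuits the partition-level check entirely.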
  • rate-distortion metrics may be analyzed for both large macroblocks (e.g., 32x32 or 64x64) and small macroblocks (e.g., 16x16). For example, an encoder may compare rate-distortion metrics between 16x16 macroblocks, 32x32 macroblocks, and 64x64 macroblocks for a coded unit, such as a frame or a slice. The encoder may then select the macroblock size that results in the best rate-distortion and encode the coded unit using the selected macroblock size, i.e., the macroblock size with the best rate-distortion.
  • the selection may be based on encoding the frame or slice in three or more passes, e.g., a first pass using 16x16 pixel macroblocks, a second pass using 32x32 pixel macroblocks, and a third pass using 64x64 pixel macroblocks, and comparing rate-distortion metrics for each pass.
  • an encoder may optimize rate-distortion by varying the macroblock size and selecting the macroblock size that results in the best or optimal rate-distortion for a given coding unit, such as a slice or frame.
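The multi-pass selection can be sketched as below, assuming each trial encoding pass has already produced one rate-distortion cost per candidate macroblock size; the function name and the example cost values are illustrative, not from the patent.

```python
def select_macroblock_size(rd_costs):
    """Pick the macroblock size with the lowest (best) rate-distortion
    cost.  rd_costs maps candidate sizes, e.g. 16, 32, 64, to the cost
    measured in a trial encoding pass at that size."""
    return min(rd_costs, key=rd_costs.get)
```

With hypothetical pass results such as `{16: 120.0, 32: 95.5, 64: 101.2}`, the encoder would choose 32x32 macroblocks for the coded unit and signal that size in the frame or slice header.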
  • the encoder may further transmit syntax information for the coded unit, e.g., as part of a frame header or a slice header, that identifies the size of the macroblocks used in the coded unit.
  • the syntax information for the coded unit may comprise a maximum size indicator that indicates a maximum size of macroblocks used in the coded unit.
  • the encoder may inform a decoder as to what syntax to expect for macroblocks of the coded unit.
  • the maximum size of macroblocks comprises 16x16 pixels
  • the decoder may expect standard H.264 syntax and parse the macroblocks according to H.264-specified syntax.
  • the maximum size of macroblocks is greater than 16x16, e.g., comprises 64x64 pixels
  • the decoder may expect modified and/or additional syntax elements that relate to processing of larger macroblocks, as described by this disclosure, and parse the macroblocks according to such modified or additional syntax.
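The decoder-side dispatch on the maximum-size indicator can be sketched as follows; the returned labels are hypothetical names for the two parsing paths, not normative identifiers.

```python
def choose_block_syntax(max_mb_size):
    """Decide which block-level syntax a decoder applies, based on the
    maximum macroblock size signaled in the coded-unit header (sketch)."""
    if max_mb_size <= 16:
        # Standard-size macroblocks: parse with H.264-specified syntax.
        return "h264_syntax"
    # Larger macroblocks: parse with the modified/additional syntax
    # elements for large macroblocks described in this disclosure.
    return "large_macroblock_syntax"
```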
  • FIG. 1 is a block diagram illustrating an example video encoding and decoding system 10 that may utilize techniques for encoding/decoding digital video data using a large macroblock, i.e., a macroblock that contains more pixels than a 16x16 macroblock.
  • system 10 includes a source device 12 that transmits encoded video to a destination device 14 via a communication channel 16.
  • Source device 12 and destination device 14 may comprise any of a wide range of devices.
  • source device 12 and destination device 14 may comprise wireless communication devices, such as wireless handsets, so-called cellular or satellite radiotelephones, or any wireless devices that can communicate video information over communication channel 16, in which case communication channel 16 is wireless.
  • communication channel 16 may comprise any combination of wireless or wired media suitable for transmission of encoded video data.
  • source device 12 may include a video source 18, video encoder 20, a modulator/demodulator (modem) 22 and a transmitter 24.
  • Destination device 14 may include a receiver 26, a modem 28, a video decoder 30, and a display device 32.
  • video encoder 20 of source device 12 may be configured to apply one or more of the techniques for using, in a video encoding process, a large macroblock having a size that is larger than a macroblock size prescribed by conventional video encoding standards.
  • video decoder 30 of destination device 14 may be configured to apply one or more of the techniques for using, in a video decoding process, a macroblock size that is larger than a macroblock size prescribed by conventional video encoding standards.
  • Source device 12 and destination device 14 are merely examples of such coding devices in which source device 12 generates coded video data for transmission to destination device 14.
  • devices 12, 14 may operate in a substantially symmetrical manner such that each of devices 12, 14 include video encoding and decoding components.
  • system 10 may support one-way or two-way video transmission between video devices 12, 14, e.g., for video streaming, video playback, video broadcasting, or video telephony.
  • Video source 18 of source device 12 may include a video capture device, such as a video camera, a video archive containing previously captured video, and/or a video feed from a video content provider.
  • video source 18 may generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video.
  • source device 12 and destination device 14 may form so-called camera phones or video phones.
  • the techniques described in this disclosure may be applicable to video coding in general, and may be applied to wireless or wired applications.
  • the captured, pre-captured, or computer-generated video may be encoded by video encoder 20.
  • the encoded video information may then be modulated by modem 22 according to a communication standard, and transmitted to destination device 14 via transmitter 24.
  • Modem 22 may include various mixers, filters, amplifiers or other components designed for signal modulation.
  • Transmitter 24 may include circuits designed for transmitting data, including amplifiers, filters, and one or more antennas.
  • Receiver 26 of destination device 14 receives information over channel 16, and modem 28 demodulates the information.
  • the video encoding process may implement one or more of the techniques described herein to use a large macroblock, e.g., larger than 16x16, for inter (i.e., temporal) and/or intra (i.e., spatial) encoding of video data.
  • the video decoding process performed by video decoder 30 may also use such techniques during the decoding process.
  • the information communicated over channel 16 may include syntax information defined by video encoder 20, which is also used by video decoder 30, that includes syntax elements that describe characteristics and/or processing of the large macroblocks, as discussed in greater detail below.
  • Display device 32 displays the decoded video data to a user, and may comprise any of a variety of display devices such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device.
  • CRT cathode ray tube
  • LCD liquid crystal display
  • OLED organic light emitting diode
  • communication channel 16 may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines, or any combination of wireless and wired media.
  • Communication channel 16 may form part of a packet-based network, such as a local area network, a wide-area network, or a global network such as the Internet.
  • Communication channel 16 generally represents any suitable communication medium, or collection of different communication media, for transmitting video data from source device 12 to destination device 14, including any suitable combination of wired or wireless media.
  • Communication channel 16 may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 12 to destination device 14.
  • Video encoder 20 and video decoder 30 may operate according to a video compression standard, such as the ITU-T H.264 standard, alternatively described as MPEG-4, Part 10, Advanced Video Coding (AVC).
  • AVC Advanced Video Coding
  • the techniques of this disclosure are not limited to any particular coding standard.
  • Other examples include MPEG-2 and ITU-T H.263.
  • video encoder 20 and video decoder 30 may each be integrated with an audio encoder and decoder, and may include appropriate MUX-DEMUX units, or other hardware and software, to handle encoding of both audio and video in a common data stream or separate data streams. If applicable, MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP).
  • UDP user datagram protocol
  • the ITU-T H.264/MPEG-4 (AVC) standard was formulated by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG) as the product of a collective partnership known as the Joint Video Team (JVT).
  • JVT Joint Video Team
  • the H.264 standard is described in ITU-T Recommendation H.264, Advanced Video Coding for generic audiovisual services, by the ITU-T Study Group, dated March 2005, which may be referred to herein as the H.264 standard or H.264 specification, or the H.264/AVC standard or specification.
  • the Joint Video Team (JVT) continues to work on extensions to H.264/MPEG-4 AVC.
  • Video encoder 20 and video decoder 30 each may be implemented as any of a variety of suitable encoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof.
  • DSPs digital signal processors
  • ASICs application specific integrated circuits
  • FPGAs field programmable gate arrays
  • Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective camera, computer, mobile device, subscriber device, broadcast device, set-top box, server, or the like.
  • CODEC combined encoder/decoder
  • a video sequence typically includes a series of video frames.
  • Video encoder 20 operates on video blocks within individual video frames in order to encode the video data.
  • a video block may correspond to a macroblock or a partition of a macroblock.
  • a video block may further correspond to a partition of a partition.
  • the video blocks may have fixed or varying sizes, and may differ in size according to a specified coding standard or in accordance with the techniques of this disclosure.
  • Each video frame may include a plurality of slices.
  • Each slice may include a plurality of macroblocks, which may be arranged into partitions, also referred to as sub-blocks.
  • the ITU-T H.264 standard supports intra prediction in various block sizes, such as 16 by 16, 8 by 8, or 4 by 4 for luma components, and 8x8 for chroma components, as well as inter prediction in various block sizes, such as 16x16, 16x8, 8x16, 8x8, 8x4, 4x8 and 4x4 for luma components and corresponding scaled sizes for chroma components.
  • "x" and “by” may be used interchangeably to refer to the pixel dimensions of the block in terms of vertical and horizontal dimensions, e.g., 16x16 pixels or 16 by 16 pixels.
  • a 16x16 block will have 16 pixels in a vertical direction and 16 pixels in a horizontal direction.
  • an NxN block generally has N pixels in a vertical direction and N pixels in a horizontal direction, where N represents a positive integer value that may be greater than 16.
  • the pixels in a block may be arranged in rows and columns.
  • Block sizes that are less than 16 by 16 may be referred to as partitions of a 16 by 16 macroblock.
  • block sizes less than NxN may be referred to as partitions of the NxN block.
  • the techniques of this disclosure describe intra- and inter-coding for macroblocks larger than the conventional 16x16 pixel macroblock, such as 32x32 pixel macroblocks, 64x64 pixel macroblocks, or larger macroblocks.
  • Video blocks may comprise blocks of pixel data in the pixel domain, or blocks of transform coefficients in the transform domain, e.g., following application of a transform such as a discrete cosine transform (DCT), an integer transform, a wavelet transform, or a conceptually similar transform to the residual video block data representing pixel differences between coded video blocks and predictive video blocks.
  • a video block may comprise blocks of quantized transform coefficients in the transform domain.
  • Smaller video blocks can provide better resolution, and may be used for locations of a video frame that include high levels of detail.
  • macroblocks and the various partitions sometimes referred to as sub-blocks, may be considered to be video blocks.
  • a slice may be considered to be a plurality of video blocks, such as macroblocks and/or sub-blocks.
  • Each slice may be an independently decodable unit of a video frame.
  • frames themselves may be decodable units, or other portions of a frame may be defined as decodable units.
  • "coded unit" or "coding unit" may refer to any independently decodable unit of a video frame such as an entire frame, a slice of a frame, a group of pictures (GOP) also referred to as a sequence, or another independently decodable unit defined according to applicable coding techniques.
  • GOP group of pictures
  • Quantization generally refers to a process in which transform coefficients are quantized to possibly reduce the amount of data used to represent the coefficients.
  • the quantization process may reduce the bit depth associated with some or all of the coefficients. For example, an n-bit value may be rounded down to an m-bit value during quantization, where n is greater than m.
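One simple way to realize the n-bit to m-bit rounding described above is to discard the least significant bits; the helper below is a hypothetical sketch, not the H.264 quantizer, which additionally scales by a quantization step size.

```python
def reduce_bit_depth(coefficient, n, m):
    """Round an n-bit coefficient down to an m-bit value (n > m) by
    discarding the (n - m) least significant bits."""
    assert n > m > 0
    return coefficient >> (n - m)
```

For example, rounding the 8-bit value 0b10110111 down to 4 bits keeps only the top nibble, 0b1011.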
  • entropy coding of the quantized data may be performed, e.g., according to content adaptive variable length coding (CAVLC), context adaptive binary arithmetic coding (CABAC), or another entropy coding methodology.
  • CAVLC content adaptive variable length coding
  • CABAC context adaptive binary arithmetic coding
  • a processing unit configured for entropy coding, or another processing unit may perform other processing functions, such as zero run length coding of quantized coefficients and/or generation of syntax information such as CBP values, macroblock type, coding mode, maximum macroblock size for a coded unit (such as a frame, slice, macroblock, or sequence), or the like.
  • video encoder 20 may use a macroblock that is larger than that prescribed by conventional video encoding standards to encode digital video data.
  • video encoder 20 may encode, with a video encoder, a video block having a size of more than 16x16 pixels, generate block-type syntax information that indicates the size of the block, and generate a CBP value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient.
  • the macroblock block-type syntax information may be provided in a macroblock header for the large macroblock.
  • the macroblock block-type syntax information may indicate an address or position of the macroblock in a frame or slice, or a macroblock number that identifies the position of the macroblock, a type of coding mode applied to the macroblock, a quantization value for the macroblock, any motion vector information for the macroblock and a CBP value for the macroblock.
  • video encoder 20 may receive a video block having a size of more than 16x16 pixels, partition the block into partitions, encode one of the partitions using a first encoding mode, encode another of the partitions using a second encoding mode different from the first encoding mode, and generate block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions.
  • video encoder 20 may receive a video coding unit, such as a frame or slice, determine a first rate-distortion metric for encoding the video coding unit using first video blocks with sizes of 16x16 pixels, determine a second rate-distortion metric for encoding the video coding unit using second video blocks with sizes of more than 16x16 pixels, encode the video coding unit using the first video blocks when the first rate-distortion metric is less than the second rate-distortion metric, and encode the video coding unit using the second video blocks when the second rate-distortion metric is less than the first rate-distortion metric.
  • a video coding unit such as a frame or slice
  • video decoder 30 may receive an encoded video block having a size of more than 16x16 pixels, receive block-type syntax information that indicates the size of the encoded block, receive a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient, and decode the encoded block based on the block-type syntax information and the coded block pattern value for the encoded block.
  • video decoder 30 may receive a video block having a size of more than 16x16 pixels, wherein the block is partitioned into partitions, one of the partitions is intra-encoded and another of the partitions is inter-encoded, receive block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions, and decode the video block based on the block-type syntax information.
  • FIG. 2 is a block diagram illustrating an example of a video encoder 50 that may implement techniques for using a large macroblock consistent with this disclosure.
  • Video encoder 50 may correspond to video encoder 20 of source device 12, or a video encoder of a different device.
  • Video encoder 50 may perform intra- and inter-coding of blocks within video frames, including large macroblocks, or partitions or sub-partitions of large macroblocks.
  • Intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame.
  • Inter-coding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames of a video sequence.
  • Intra-mode may refer to any of several spatial-based compression modes, and inter-modes such as prediction (P-mode) or bi-directional (B-mode) may refer to any of several temporal-based compression modes.
  • the techniques of this disclosure may be applied both during inter-coding and intra-coding. In some cases, techniques of this disclosure may also be applied to encoding non-video digital pictures. That is, a digital still picture encoder may utilize the techniques of this disclosure to intra-code a digital still picture using large macroblocks in a manner similar to encoding intra-coded macroblocks in video frames in a video sequence.
  • video encoder 50 receives a current video block within a video frame to be encoded.
  • video encoder 50 includes motion compensation unit 35, motion estimation unit 36, intra prediction unit 37, mode select unit 39, reference frame store 34, summer 48, transform unit 38, quantization unit 40, and entropy coding unit 46.
  • video encoder 50 also includes inverse quantization unit 42, inverse transform unit 44, and summer 51.
  • a deblocking filter (not shown in FIG. 2) may also be included to filter block boundaries to remove blockiness artifacts from reconstructed video. If desired, the deblocking filter would typically filter the output of summer 51.
  • video encoder 50 receives a video frame or slice to be coded.
  • the frame or slice may be divided into multiple video blocks, including large macroblocks.
  • Motion estimation unit 36 and motion compensation unit 35 perform inter-predictive coding of the received video block relative to one or more blocks in one or more reference frames to provide temporal compression.
  • Intra prediction unit 37 performs intra-predictive coding of the received video block relative to one or more neighboring blocks in the same frame or slice as the block to be coded to provide spatial compression.
  • Mode select unit 39 may select one of the coding modes, intra or inter, e.g., based on error results, and provides the resulting intra- or inter-coded block to summer 48 to generate residual block data and to summer 51 to reconstruct the encoded block for use as a reference frame.
  • the video block to be coded may comprise a macroblock that is larger than that prescribed by conventional coding standards, i.e., larger than a 16x16 pixel macroblock.
  • the large video block may comprise a 64x64 pixel macroblock or a 32x32 pixel macroblock.
  • Motion estimation unit 36 and motion compensation unit 35 may be highly integrated, but are illustrated separately for conceptual purposes.
  • Motion estimation is the process of generating motion vectors, which estimate motion for video blocks.
  • a motion vector for example, may indicate the displacement of a predictive block within a predictive reference frame (or other coded unit) relative to the current block being coded within the current frame (or other coded unit).
  • a predictive block is a block that is found to closely match the block to be coded, in terms of pixel difference, which may be determined by sum of absolute difference (SAD), sum of square difference (SSD), or other difference metrics.
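The SAD metric mentioned above can be sketched as follows; the function operates on blocks given as rows of pixel values and is an illustrative helper, not the encoder's actual (typically SIMD-accelerated) implementation.

```python
def sad(block, candidate):
    """Sum of absolute differences between the block to be coded and a
    candidate predictive block of the same dimensions.  A smaller SAD
    means the candidate is a closer pixel-wise match."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block, candidate)
               for a, b in zip(row_a, row_b))
```

Motion estimation would evaluate this metric (or SSD, or another difference metric) over candidate positions in the reference frame and keep the displacement with the lowest cost.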
  • a motion vector may also indicate displacement of a partition of a large macroblock.
  • a first motion vector may indicate displacement of the 32x64 partition
  • a second motion vector may indicate displacement of a first one of the 32x32 partitions
  • a third motion vector may indicate displacement of a second one of the 32x32 partitions, all relative to corresponding partitions in a reference frame.
  • Such partitions may also be considered video blocks, as those terms are used in this disclosure.
  • Motion compensation may involve fetching or generating the predictive block based on the motion vector determined by motion estimation. Again, motion estimation unit 36 and motion compensation unit 35 may be functionally integrated.
  • Motion estimation unit 36 calculates a motion vector for the video block of an inter-coded frame by comparing the video block to video blocks of a reference frame in reference frame store 34.
  • Motion compensation unit 35 may also interpolate sub-integer pixels of the reference frame, e.g., an I-frame or a P-frame.
  • the ITU H.264 standard refers to reference frames as "lists.” Therefore, data stored in reference frame store 34 may also be considered lists.
  • Motion estimation unit 36 compares blocks of one or more reference frames (or lists) from reference frame store 34 to a block to be encoded of a current frame, e.g., a P-frame or a B-frame.
  • a motion vector calculated by motion estimation unit 36 may refer to a sub-integer pixel location of a reference frame.
  • Motion estimation unit 36 sends the calculated motion vector to entropy coding unit 46 and motion compensation unit 35.
  • the reference frame block identified by a motion vector may be referred to as a predictive block.
  • Motion compensation unit 35 calculates error values for the predictive block of the reference frame.
  • Motion compensation unit 35 may calculate prediction data based on the predictive block.
  • Video encoder 50 forms a residual video block by subtracting the prediction data from motion compensation unit 35 from the original video block being coded.
  • Summer 48 represents the component or components that perform this subtraction operation.
  • Transform unit 38 applies a transform, such as a discrete cosine transform (DCT) or a conceptually similar transform, to the residual block, producing a video block comprising residual transform coefficient values.
  • Transform unit 38 may perform other transforms, such as those defined by the H.264 standard, which are conceptually similar to DCT. Wavelet transforms, integer transforms, sub-band transforms or other types of transforms could also be used.
  • transform unit 38 applies the transform to the residual block, producing a block of residual transform coefficients.
  • the transform may convert the residual information from a pixel value domain to a transform domain, such as a frequency domain.
  • Quantization unit 40 quantizes the residual transform coefficients to further reduce bit rate.
  • the quantization process may reduce the bit depth associated with some or all of the coefficients.
  • quantization unit 40 may establish a different degree of quantization for each 64x64 pixel macroblock according to a luminance quantization parameter, referred to in this disclosure as QPy.
  • Quantization unit 40 may further modify the luminance quantization parameter used during quantization of a 64x64 macroblock based on a quantization parameter modifier, referred to herein as "MB64_delta_QP," and a previously encoded 64x64 pixel macroblock.
  • Each 64x64 pixel large macroblock may comprise an individual MB64_delta_QP value, in the range between -26 and +25, inclusive.
  • video encoder 50 may establish the MB64_delta_QP value for a particular block based on a desired bitrate for transmitting the encoded version of the block.
  • the MB64_delta_QP value of a first 64x64 pixel macroblock may be equal to the QP value of a frame or slice that includes the first 64x64 pixel macroblock, e.g., in the frame/slice header.
  • QPy for a current 64x64 pixel macroblock may be calculated according to the formula:
  • QPy = (QPy,PREV + MB64_delta_QP + 52) % 52
  • QPy,PREV refers to the QPy value of the previous 64x64 pixel macroblock in the decoding order of the current slice/frame
  • % refers to the modulo operator such that N%52 returns a result between 0 and 51, inclusive, corresponding to the remainder value of N divided by 52.
  • QPy,PREV may be set equal to the frame/slice QP sent in the frame/slice header.
  • quantization unit 40 presumes that the MB64_delta_QP value is equal to zero when a MB64_delta_QP value is not defined for a particular 64x64 pixel macroblock, including "skip" type macroblocks, such as P_Skip and B_Skip macroblock types.
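The QPy update rule, including the zero-delta default for macroblocks that carry no MB64_delta_QP (e.g. skip types), can be sketched as follows; the function name is hypothetical.

```python
def update_qp_y(qp_y_prev, mb64_delta_qp=None):
    """Compute QPy = (QPy,PREV + MB64_delta_QP + 52) % 52.  When no
    MB64_delta_QP is signaled for a macroblock, the delta is presumed
    to be zero."""
    delta = 0 if mb64_delta_qp is None else mb64_delta_qp
    assert -26 <= delta <= 25  # signaled range per the description above
    # The +52 keeps the sum non-negative before the modulo wraps the
    # result back into the valid range 0..51.
    return (qp_y_prev + delta + 52) % 52
```

For example, a previous QPy of 2 with a delta of -5 wraps around to 49 rather than going negative, and a skip-type macroblock simply inherits the previous QPy.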
  • additional delta QP values (generally referred to as quantization parameter modification values) may be defined for finer grain quantization control of partitions within a 64x64 pixel macroblock, such as MB32_delta_QP values for each 32x32 pixel partition of a 64x64 pixel macroblock.
  • each partition of a 64x64 macroblock may be assigned an individual quantization parameter.
  • Each quantization parameter modification value may be included as syntax information with the corresponding encoded block, and a decoder may decode the encoded block by dequantizing, i.e., inverse quantizing, the encoded block according to the quantization parameter modification value.
  • entropy coding unit 46 entropy codes the quantized transform coefficients.
  • entropy coding unit 46 may perform content adaptive variable length coding (CAVLC), context adaptive binary arithmetic coding (CABAC), or another entropy coding technique.
  • CABAC context adaptive binary arithmetic coding
  • the encoded video may be transmitted to another device or archived for later transmission or retrieval.
  • the coded bitstream may include entropy coded residual transform coefficient blocks, motion vectors for such blocks, MB64_delta_QP values for each 64x64 pixel macroblock, and other syntax elements including, for example, macroblock-type identifier values, coded unit headers indicating the maximum size of macroblocks in the coded unit, QPy values, coded block pattern (CBP) values, values that identify a partitioning method of a macroblock or sub-block, and transform size flag values, as discussed in greater detail below.
  • context may be based on neighboring macroblocks.
  • entropy coding unit 46 or another unit of video encoder 50 may be configured to perform other coding functions, in addition to entropy coding.
  • entropy coding unit 46 may be configured to determine the CBP values for the large macroblocks and partitions.
  • Entropy coding unit 46 may apply a hierarchical CBP scheme to provide a CBP value for a large macroblock that indicates whether any partitions in the macroblock include non-zero transform coefficient values and, if so, other CBP values to indicate whether particular partitions within the large macroblock have non-zero transform coefficient values.
  • entropy coding unit 46 may perform run length coding of the coefficients in a large macroblock or subpartition.
  • entropy coding unit 46 may apply a zig-zag scan or other scan pattern to scan the transform coefficients in a macroblock or partition and encode runs of zeros for further compression. Entropy coding unit 46 also may construct header information with appropriate syntax elements for transmission in the encoded video bitstream.
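The scan-and-run-length step described above can be sketched as follows. This is an illustrative sketch only, not the codec's actual entropy coder; the scan order shown is the standard H.264 zig-zag order for a 4x4 block, and the helper names are hypothetical.

```python
# Illustrative sketch: zig-zag scan of a 4x4 coefficient block followed
# by run-length coding of zero runs. Function names are hypothetical.

# H.264 zig-zag scan order for a 4x4 block (row-major indices).
ZIGZAG_4X4 = [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]

def zigzag_scan(block):
    """Flatten a 4x4 block (row-major list of 16 values) into zig-zag order."""
    return [block[i] for i in ZIGZAG_4X4]

def run_length_encode(coeffs):
    """Encode a scanned coefficient list as (zero_run, level) pairs."""
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    return pairs  # trailing zeros are left implicit

block = [7, 3, 0, 0,
         2, 0, 0, 0,
         0, 0, 0, 0,
         0, 0, 0, 0]
print(run_length_encode(zigzag_scan(block)))  # -> [(0, 7), (0, 3), (0, 2)]
```

After the zig-zag scan, the non-zero coefficients cluster at the front of the list, so the trailing zeros compress to nothing, which is the bit-rate saving the scan is for.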
  • Inverse quantization unit 42 and inverse transform unit 44 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block in the pixel domain, e.g., for later use as a reference block.
  • Motion compensation unit 35 may calculate a reference block by adding the residual block to a predictive block of one of the frames of reference frame store 34. Motion compensation unit 35 may also apply one or more interpolation filters to the reconstructed residual block to calculate sub-integer pixel values.
  • Summer 51 adds the reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 35 to produce a reconstructed video block for storage in reference frame store 34.
  • the reconstructed video block may be used by motion estimation unit 36 and motion compensation unit 35 as a reference block to inter-code a block in a subsequent video frame.
  • the large macroblock may comprise a 64x64 pixel macroblock, a 32x32 pixel macroblock, or other macroblock that is larger than the size prescribed by conventional video coding standards.
  • FIG. 3 is a block diagram illustrating an example of a video decoder 60, which decodes a video sequence that is encoded in the manner described in this disclosure.
  • the encoded video sequence may include encoded macroblocks that are larger than the size prescribed by conventional video encoding standards.
  • the encoded macroblocks may be 32x32 pixel or 64x64 pixel macroblocks.
  • video decoder 60 includes an entropy decoding unit 52, motion compensation unit 54, intra prediction unit 55, inverse quantization unit 56, inverse transformation unit 58, reference frame store 62 and summer 64.
  • Video decoder 60 may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder 50 (FIG. 2).
  • Motion compensation unit 54 may generate prediction data based on motion vectors received from entropy decoding unit 52.
  • Entropy decoding unit 52 entropy-decodes the received bitstream to generate quantized coefficients and syntax elements (e.g., motion vectors, CBP values, QPy values, transform size flag values, MB64_delta_QP values).
  • Entropy decoding unit 52 may parse the bitstream to identify syntax information in coded units such as frames, slices and/or macroblock headers.
  • Syntax information for a coded unit comprising a plurality of macroblocks may indicate the maximum size of the macroblocks, e.g., 16x16 pixels, 32x32 pixels, 64x64 pixels, or other larger sized macroblocks in the coded unit.
  • the syntax information for a block is forwarded from entropy decoding unit 52 to either motion compensation unit 54 or intra-prediction unit 55, e.g., depending on the coding mode of the block.
  • a decoder may use the maximum size indicator in the syntax of a coded unit to select a syntax decoder for the coded unit. Using the syntax decoder specified for the maximum size, the decoder can then properly interpret and process the large-sized macroblocks included in the coded unit.
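A minimal sketch of such maximum-size-based dispatch, assuming one hypothetical parser per macroblock size (the parser names and return values are invented purely for illustration):

```python
# Hypothetical sketch: selecting a syntax interpretation for a coded unit
# based on the maximum macroblock size signaled in its header.

def parse_mb_h264(bits):
    """Standard 16x16 macroblock syntax (placeholder)."""
    return ("MB16", bits)

def parse_mb_large(size):
    """Build a parser for extended 32x32 / 64x64 macroblock syntax (placeholder)."""
    def parse(bits):
        return (f"MB{size}", bits)
    return parse

SYNTAX_DECODERS = {16: parse_mb_h264, 32: parse_mb_large(32), 64: parse_mb_large(64)}

def select_syntax_decoder(max_mb_size):
    """Pick the macroblock parser based on the coded-unit header field."""
    return SYNTAX_DECODERS[max_mb_size]

parser = select_syntax_decoder(64)
print(parser(b"...")[0])  # -> MB64
```

The point of the dispatch is backwards compatibility: a coded unit that signals a 16x16 maximum falls through to the standard parser unchanged.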
  • Motion compensation unit 54 may use motion vectors received in the bitstream to identify a prediction block in reference frames in reference frame store 62.
  • Intra prediction unit 55 may use intra prediction modes received in the bitstream to form a prediction block from spatially adjacent blocks.
  • Inverse quantization unit 56 inverse quantizes, i.e., de-quantizes, the quantized block coefficients provided in the bitstream and decoded by entropy decoding unit 52.
  • the inverse quantization process may include a conventional process, e.g., as defined by the H.264 decoding standard.
  • the inverse quantization process may also include use of a quantization parameter QPy calculated by encoder 50 for each 64x64 macroblock to determine a degree of quantization and, likewise, a degree of inverse quantization that should be applied.
  • Inverse transform unit 58 applies an inverse transform, e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process, to the transform coefficients in order to produce residual blocks in the pixel domain.
  • Motion compensation unit 54 produces motion compensated blocks, possibly performing interpolation based on interpolation filters. Identifiers for interpolation filters to be used for motion estimation with sub-pixel precision may be included in the syntax elements.
  • Motion compensation unit 54 may use interpolation filters as used by video encoder 50 during encoding of the video block to calculate interpolated values for sub-integer pixels of a reference block. Motion compensation unit 54 may determine the interpolation filters used by video encoder 50 according to received syntax information and use the interpolation filters to produce predictive blocks.
  • Motion compensation unit 54 uses some of the syntax information to determine sizes of macroblocks used to encode frame(s) of the encoded video sequence, partition information that describes how each macroblock of a frame of the encoded video sequence is partitioned, modes indicating how each partition is encoded, one or more reference frames (or lists) for each inter-encoded macroblock or partition, and other information to decode the encoded video sequence.
  • Summer 64 sums the residual blocks with the corresponding prediction blocks generated by motion compensation unit 54 or intra-prediction unit to form decoded blocks. If desired, a deblocking filter may also be applied to filter the decoded blocks in order to remove blockiness artifacts.
  • the decoded video blocks are then stored in reference frame store 62, which provides reference blocks for subsequent motion compensation and also produces decoded video for presentation on a display device (such as device 32 of FIG. 1).
  • the decoded video blocks may each comprise a 64x64 pixel macroblock, 32x32 pixel macroblock, or other larger-than-standard macroblock. Some macroblocks may include partitions with a variety of different partition sizes.
  • Blocks of each partition level include a number of pixels corresponding to the particular level.
  • Four partitioning patterns are also shown for each level, where a first partition pattern includes the whole block, a second partition pattern includes two horizontal partitions of equal size, a third partition pattern includes two vertical partitions of equal size, and a fourth partition pattern includes four equally- sized partitions.
  • One of the partitioning patterns may be chosen for each partition at each partition level.
  • level 0 corresponds to a 64x64 pixel macroblock partition of luma samples and associated chroma samples.
  • Level 1 corresponds to a 32x32 pixel block of luma samples and associated chroma samples.
  • Level 2 corresponds to a 16x16 pixel block of luma samples and associated chroma samples, and level 3 corresponds to an 8x8 pixel block of luma samples and associated chroma samples.
  • level 0 could begin with a 128x128 pixel macroblock, a 256x256 pixel macroblock, or other larger-sized macroblock.
  • the highest-numbered level in some examples, could be as fine-grain as a single pixel, i.e., a 1x1 block.
  • partitioning may be increasingly sub-partitioned, such that the macroblock is partitioned, partitions are further partitioned, further partitions are still further partitioned, and so forth.
  • partitions below level 0, i.e., partitions of partitions, may be referred to as sub-partitions.
  • any or all of the sub-blocks may be partitioned according to the partition patterns of the next level. That is, for an NxN block that has been partitioned at level x into four equally sized sub-blocks (N/2)x(N/2), any of the (N/2)x(N/2) sub-blocks can be further partitioned according to any of the partition patterns of level x+1.
  • a 32x32 pixel sub-block of a 64x64 pixel macroblock at level 0 can be further partitioned according to any of the patterns shown in FIG. 4A at level 1.
  • each of the 16x16 pixel sub-blocks can be further partitioned according to any of the patterns shown in FIG. 4A at level 2.
  • each of the 8x8 pixel sub-blocks can be further partitioned according to any of the patterns shown in FIG. 4A at level 3.
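The level/size relationship and the recursive quartering described above can be sketched as follows; the helper names are hypothetical and the geometry is only one way to represent the FIG. 4A hierarchy:

```python
# Sketch of the partition hierarchy of FIG. 4A: level k of a 64x64
# macroblock holds (64 >> k) x (64 >> k) blocks, and an NxN block at
# level x may be quartered into four (N/2)x(N/2) sub-blocks at level x+1.

def block_size(level, mb_size=64):
    """Edge length of a block at the given partition level."""
    return mb_size >> level

def quarter(x, y, n):
    """The four (n/2)x(n/2) sub-blocks of the NxN block whose top-left is (x, y)."""
    h = n // 2
    return [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]

print([block_size(k) for k in range(4)])  # -> [64, 32, 16, 8]
print(quarter(0, 0, 64))                  # the four level-1 sub-blocks
```

Starting from a 128x128 or 256x256 level-0 macroblock, as the text contemplates, only the `mb_size` argument changes; the quartering rule is the same at every level.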
  • video encoder 50 may determine different partitioning levels for different macroblocks, as well as coding modes to apply to such partitions, e.g., based on rate-distortion analysis. Also, as described in greater detail below, video encoder 50 may encode at least some of the final partitions differently, using spatial (I-encoded) or temporal (P-encoded or B-encoded) prediction, e.g., based on rate-distortion metric results or other considerations.
  • a large macroblock may be coded such that some partitions have different coding modes.
  • some (at least one) partitions may be coded with different intra-coding modes (e.g., I_16x16, I_8x8, I_4x4) relative to other (at least one) partitions in the same macroblock.
  • some (at least one) partitions may be intra-coded while other (at least one) partitions in the same macroblock are inter-coded.
  • video encoder 50 may, for a 32x32 block with four 16x16 partitions, encode some of the 16x16 partitions using spatial prediction and other 16x16 partitions using temporal prediction.
  • video encoder 50 may, for a 32x32 block with four 16x16 partitions, encode one or more of the 16x16 partitions using a first prediction mode (e.g., one of I_16x16, I_8x8, I_4x4) and one or more other 16x16 partitions using a different spatial prediction mode (e.g., one of I_16x16, I_8x8, I_4x4).
  • FIG. 4B is a conceptual diagram illustrating assignment of different coding modes to different partitions of a large macroblock.
  • FIG. 4B illustrates assignment of an I_16x16 intra-coding mode to an upper left 16x16 block of a large 32x32 macroblock, I_8x8 intra-coding modes to upper right and lower left 16x16 blocks of the large 32x32 macroblock, and an I_4x4 intra-coding mode to a lower right 16x16 block of the large 32x32 macroblock.
  • the coding modes illustrated in FIG. 4B may be H.264 intra-coding modes for luma coding.
  • each partition can be further partitioned on a selective basis, and each final partition can be selectively coded using either temporal prediction or spatial prediction, and using selected temporal or spatial coding modes. Consequently, it is possible to code a large macroblock with mixed modes such that some partitions in the macroblock are intra-coded and other partitions in the same macroblock are inter-coded, or some partitions in the same macroblock are coded with different intra-coding modes or different inter-coding modes.
  • Video encoder 50 may further define each partition according to a macroblock type.
  • the macroblock type may be included as a syntax element in an encoded bitstream, e.g., as a syntax element in a macroblock header.
  • the macroblock type may be used to identify how the macroblock is partitioned, and the respective methods or modes for encoding each of the partitions of the macroblock, as discussed above.
  • Methods for encoding the partitions may include not only intra- and inter-coding, but also particular modes of intra-coding (e.g., I_16x16, I_8x8, I_4x4) or inter-coding (e.g., P_ or B_ 16x16, 16x8, 8x16, 8x8, 8x4, 4x8 and 4x4).
  • partition level 0 blocks may be defined according to an MB64_type syntax element, representative of a macroblock with 64x64 pixels. Similar type definitions may be formed for any MB[N]_type, where [N] refers to a block with NxN pixels, where N is a positive integer that may be greater than 16.
  • when an NxN block has four partitions of size (N/2)x(N/2), as shown in the last column of FIG. 4A, each of the four partitions may receive its own type definition, e.g., MB[N/2]_type.
  • video encoder 50 may introduce an MB32_type for each of the four 32x32 pixel partitions.
  • These macroblock type syntax elements may assist decoder 60 in decoding large macroblocks and various partitions of large macroblocks, as described in this disclosure.
  • Each NxN pixel macroblock where N is greater than 16 generally corresponds to a unique type definition. Accordingly, the encoder may generate syntax appropriate for the particular macroblock and indicate to the decoder the maximum size of macroblocks in a coded unit, such as a frame, slice, or sequence of macroblocks.
  • the decoder may receive an indication of a syntax decoder to apply to macroblocks of the coded unit. This also ensures that the decoder may be backwards-compatible with existing coding standards, such as H.264, in that the encoder may indicate the type of syntax decoders to apply to the macroblocks, e.g., standard H.264 or those specified for processing of larger macroblocks according to the techniques of this disclosure.
  • each MB[N]_type definition may represent, for a corresponding type, a number of pixels in a block of the corresponding type (e.g., 64x64), a reference frame (or reference list) for the block, a number of partitions for the block, the size of each partition of the block, how each partition is encoded (e.g., intra or inter and particular modes), and the reference frame (or reference list) for each partition of the block when the partition is inter-coded.
  • video encoder 50 may, in some examples, use conventional type definitions as the types of the blocks, such as types specified by the H.264 standard. In other examples, video encoder 50 may apply newly defined block types for 16x16 and smaller blocks.
  • Video encoder 50 may evaluate both conventional inter- or intra-coding methods using normal macroblock sizes and partitions, such as methods prescribed by ITU H.264, and inter- or intra-coding methods using the larger macroblocks and partitions described by this disclosure, and compare the rate-distortion characteristics of each approach to determine which method results in the best rate-distortion performance. Video encoder 50 then may select, and apply to the block to be coded, the best coding approach, including inter- or intra-mode, macroblock size (large, larger or normal), and partitioning, based on optimal or acceptable rate-distortion results for the coding approach. As an illustration, video encoder 50 may select the use of 64x64 macroblocks, 32x32 macroblocks or 16x16 macroblocks to encode a particular frame or slice based on rate-distortion results produced when the video encoder uses such macroblock sizes.
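The rate-distortion comparison described above is commonly expressed as a Lagrangian cost J = D + λR; below is a hedged sketch of such a selection, with made-up distortion/rate numbers used purely for illustration:

```python
# Illustrative rate-distortion selection among candidate coding choices
# (e.g., 16x16 vs. 32x32 vs. 64x64 macroblocks), using the common
# Lagrangian cost J = D + lambda * R. Candidate values are invented.

def rd_cost(distortion, rate, lam):
    """Lagrangian cost: distortion plus lambda-weighted rate."""
    return distortion + lam * rate

def select_best(candidates, lam):
    """Return the candidate (name, distortion, rate_bits) with lowest cost."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))

candidates = [
    ("MB16", 120.0, 400),  # normal 16x16 macroblocks: low D, high R
    ("MB32", 130.0, 250),  # 32x32 macroblocks
    ("MB64", 150.0, 180),  # 64x64 macroblocks: higher D, fewest bits
]
print(select_best(candidates, lam=0.5)[0])  # -> MB64
```

With a larger λ (rate weighted more heavily), the larger macroblock sizes win more often, which matches the intuition that fewer, larger blocks cost fewer syntax bits.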
  • two different approaches may be used to design intra modes using large macroblocks.
  • spatial prediction may be performed for a block based on neighboring blocks directly.
  • video encoder 50 may generate spatial predictive 32x32 blocks based on their neighboring pixels directly and generate spatial predictive 64x64 blocks based on their neighboring pixels directly.
  • spatial prediction may be performed at a larger scale compared to 16x16 intra blocks. Therefore, these techniques may, in some examples, result in some bit rate savings, e.g., with a smaller number of blocks or partitions per frame or slice.
  • video encoder 50 may group four NxN blocks together to generate an (N*2)x(N*2) block, and then encode the (N*2)x(N*2) block.
  • video encoder 50 may group four intra-coded blocks together, thereby forming a large intra-coded macroblock.
  • four intra- coded blocks, each having a size of 16x16, can be grouped together to form a large, 32x32 intra-coded block.
  • Video encoder 50 may encode each of the four corresponding NxN blocks using a different encoding mode, e.g., I_16x16, I_8x8, or I_4x4 according to H.264.
  • each 16x16 block can be assigned its own mode of spatial prediction by video encoder 50, e.g., to promote favorable encoding results.
  • Video encoder 50 may design intra modes according to either of the two different methods discussed above, and analyze the different methods to determine which approach provides better encoding results. For example, video encoder 50 may apply the different intra mode approaches, and place them in a single candidate pool to allow them to compete with each other for the best rate-distortion performance. Using a rate-distortion comparison between the different approaches, video encoder 50 can determine how to encode each partition and/or macroblock.
  • FIG. 5 is a conceptual diagram illustrating a hierarchical view of various partition levels of a large macroblock.
  • FIG. 5 also represents the relationships between various partition levels of a large macroblock as described with respect to FIG. 4A.
  • Each block of a partition level, as illustrated in the example of FIG. 5, may have a corresponding coded block pattern (CBP) value.
  • CBP values form part of the syntax information that describes a block or macroblock.
  • the CBP values are each one-bit syntax values that indicate whether or not there are any nonzero transform coefficient values in a given block following transform and quantization operations.
  • a prediction block may be very close in pixel content to a block to be coded such that all of the residual transform coefficients are quantized to zero, in which case there may be no need to transmit transform coefficients for the coded block.
  • the CBP value for the block may be set to zero to indicate that the coded block includes no non-zero coefficients.
  • the CBP value may be set to one.
  • Decoder 60 may use CBP values to identify residual blocks that are coded, i.e., with one or more non-zero transform coefficients, versus blocks that are not coded, i.e., including no non-zero transform coefficients.
  • an encoder may assign CBP values to large macroblocks hierarchically based on whether those macroblocks, including their partitions, have at least one non-zero coefficient, and assign CBP values to the partitions to indicate which partitions have non-zero coefficients.
  • Hierarchical CBP for large macroblocks can facilitate processing of large macroblocks to quickly identify coded large macroblocks and uncoded large macroblocks, and permit identification of coded partitions at each partition level for the large macroblock to determine whether it is necessary to use residual data to decode the blocks.
  • a 64x64 pixel macroblock at level zero may include syntax information comprising a CBP64 value, e.g., a one-bit value, to indicate whether the entire 64x64 pixel macroblock, including any partitions, has non-zero coefficients or not.
  • video encoder 50 "sets" the CBP64 bit, e.g., to a value of "1,” to represent that the 64x64 pixel macroblock includes at least one non-zero coefficient.
  • the CBP64 value is set, e.g., to a value of "1,” the 64x64 pixel macroblock includes at least one non-zero coefficient somewhere in the macroblock.
  • video encoder 50 "clears" the CBP64 value, e.g., to a value of "0,” to represent that the 64x64 pixel macroblock has all zero coefficients.
  • the CBP64 value is cleared, e.g., to a value of "0”
  • the 64x64 pixel macroblock is indicated as having all zero coefficients.
  • Macroblocks with CBP64 values of "0” do not generally require transmission of residual data in the bitstream, whereas macroblocks with CBP64 values of "1" generally require transmission of residual data in the bitstream for use in decoding such macroblocks.
  • a 64x64 pixel macroblock that has all zero coefficients need not include CBP values for partitions or sub-blocks thereof. That is, because the 64x64 pixel macroblock has all zero coefficients, each of the partitions also necessarily has all zero coefficients.
  • a 64x64 pixel macroblock that includes at least one non-zero coefficient may further include CBP values for the partitions at the next partition level.
  • a macroblock with a CBP64 value of one may include additional syntax information in the form of a one-bit CBP32 value for each 32x32 partition of the 64x64 block. That is, in one example, each 32x32 pixel partition (such as the four partition blocks of level 1 in FIG. 4A) may be assigned its own CBP32 value.
  • each CBP32 value may comprise a bit that is set to a value of one when the corresponding 32x32 pixel block has at least one non-zero coefficient and that is cleared to a value of zero when the corresponding 32x32 pixel block has all zero coefficients.
  • the encoder may further indicate, in syntax of a coded unit comprising a plurality of macroblocks, such as a frame, slice, or sequence, the maximum size of a macroblock in the coded unit, to indicate to the decoder how to interpret the syntax information of each macroblock, e.g., which syntax decoder to use for processing of macroblocks in the coded unit.
  • a 64x64 pixel macroblock that has all zero coefficients may use a single bit to represent the fact that the macroblock has all zero coefficients
  • a 64x64 pixel macroblock with at least one non-zero coefficient may include CBP syntax information comprising at least five bits, a first bit to represent that the 64x64 pixel macroblock has a non-zero coefficient and four additional bits, each representative of whether a corresponding one of four 32x32 pixel partitions of the macroblock includes at least one non-zero coefficient.
  • the fourth additional bit may be omitted, which the decoder may interpret as the CBP bit of the last partition being one.
  • the encoder may determine that the last bit has a value of one when the first three bits are zero and when the bit representative of the higher level hierarchy has a value of one. For example, a prefix of a CBP64 value of "10001" may be shortened to "1000," as the first bit indicates that at least one of the four partitions has non-zero coefficients, and the next three zeros indicate that the first three partitions have all zero coefficients. Therefore, a decoder may deduce that it is the last partition that includes a non-zero coefficient, without the explicit bit informing the decoder of this fact, e.g., from the bit string "1000.” That is, the decoder may interpret the CBP64 prefix "1000" as "10001."
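The shortening rule described above can be sketched as a pair of hypothetical encode/decode helpers: when the parent CBP bit is 1 but the first three of its four partition bits are 0, the fourth bit is necessarily 1 and may be omitted from the bitstream.

```python
# Sketch of the trailing-bit shortening: under a set parent CBP bit,
# the partition bit pattern [0, 0, 0, 0] is impossible, so [0, 0, 0]
# can only mean [0, 0, 0, 1] and the last bit need not be sent.

def encode_partition_bits(parent, bits):
    """bits: the four partition CBP bits under the given parent bit."""
    assert len(bits) == 4
    if parent == 1 and bits[:3] == [0, 0, 0]:
        return bits[:3]          # "10001" is sent as "1000"
    return bits

def decode_partition_bits(parent, sent):
    """Recover all four partition bits from what was actually sent."""
    if parent == 1 and sent == [0, 0, 0]:
        return [0, 0, 0, 1]      # the omitted last bit must be 1
    return sent

print(encode_partition_bits(1, [0, 0, 0, 1]))  # -> [0, 0, 0]
print(decode_partition_bits(1, [0, 0, 0]))     # -> [0, 0, 0, 1]
```

Any other pattern is sent in full, since the decoder could not otherwise distinguish, say, [1, 0, 1] as a prefix of [1, 0, 1, 0] versus [1, 0, 1, 1].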
  • a one-bit CBP32 may be set to a value of "1" when the 32x32 pixel partition includes at least one non-zero coefficient, and to a value of "0" when all of the coefficients have a value of zero. If a 32x32 pixel partition has a CBP value of 1, then partitions of that 32x32 partition at the next partition level may be assigned CBP values to indicate whether the respective partitions include any non-zero coefficients. Hence, the CBP values may be assigned in a hierarchical manner at each partition level until there are no further partition levels or no partitions including non-zero coefficients.
  • encoders and/or decoders may utilize hierarchical CBP values to represent whether a large macroblock (e.g., 64x64 or 32x32) and partitions thereof include at least one non-zero coefficient or all zero coefficients.
  • an encoder may encode a large macroblock of a coded unit of a digital video stream, such that the macroblock comprises greater than 16x16 pixels, generate block-type syntax information that identifies the size of the block, generate a CBP value for the block, such that the CBP value identifies whether the block includes at least one nonzero coefficient, and generate additional CBP values for various partition levels of the block, if applicable.
  • the hierarchical CBP values may comprise an array of bits (e.g., a bit vector) whose length depends on the values of the prefix.
  • the array may further represent a hierarchy of CBP values, such as a tree structure, as shown in FIG. 5.
  • the array may represent nodes of the tree in a breadth-first manner, where each node corresponds to a bit in the array. When a node of the tree has a bit that is set to "1," in one example, the node has four branches (corresponding to the four partitions), and when the bit is cleared to "0," the node has no branches.
  • an encoder and/or a decoder may determine the four consecutive bits, starting at node Y, that represent the nodes that branch from node X by calculating y = 4 * (tree[0] + tree[1] + ... + tree[x]) - 3, where:
  • tree[] corresponds to the array of bits with a starting index of 0, i is an integer index into the array tree[], x corresponds to the index of node X in tree[], and y corresponds to the index of node Y that is the first branch-node of node X.
  • the three subsequent array positions (i.e., y+1, y+2, and y+3) correspond to the other branch-nodes of node X.
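Given the breadth-first layout described above (a set bit contributes four child bits, a cleared bit contributes none), the index of node Y can be reconstructed as y = 4 * (number of set bits in tree[0..x]) - 3. The function below is a sketch of that reconstruction from the stated definitions, not a quotation of the document's own formula:

```python
# Reconstructed child-index rule for the breadth-first CBP tree array:
# every set bit among tree[0..x] contributes four child slots, laid out
# in order after the root, so node x's first child lands at
# y = 4 * popcount(tree[0..x]) - 3.

def first_child_index(tree, x):
    """Index y of the first of the four branch-nodes of node x."""
    assert tree[x] == 1, "only a set node has branches"
    return 4 * sum(tree[: x + 1]) - 3

# Example: root (index 0) is set, so its children are bits 1..4; among
# them nodes 1 and 3 are set, so node 1's children sit at 5..8 and
# node 3's children at 9..12.
tree = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(first_child_index(tree, 0))  # -> 1
print(first_child_index(tree, 1))  # -> 5
print(first_child_index(tree, 3))  # -> 9
```

The three remaining branch-nodes of node x then sit at y+1, y+2, and y+3, matching the description above.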
  • An encoder such as video encoder 50 (FIG. 2) may assign CBP values for 16x16 pixel partitions of the 32x32 pixel partitions with at least one non-zero coefficient using existing methods, such as methods prescribed by ITU H.264 for setting CBP values for 16x16 blocks, as part of the syntax of the 64x64 pixel macroblock.
  • the encoder may also select CBP values for the partitions of the 32x32 pixel partitions that have at least one non-zero coefficient based on the size of the partitions, a type of block corresponding to the partitions (e.g., chroma block or luma block), or other characteristics of the partitions.
  • FIGS. 6-9 are flowcharts illustrating example methods for setting various coded block pattern (CBP) values in accordance with the techniques of this disclosure. Although the example methods of FIGS. 6-9 are discussed with respect to a 64x64 pixel macroblock, it should be understood that similar techniques may apply for assigning hierarchical CBP values for other sizes of macroblocks. Although the examples of FIGS. 6-9 are discussed with respect to video encoder 50 (FIG. 2), it should be understood that other encoders may employ similar methods to assign CBP values to larger-than-standard macroblocks.
  • decoders may utilize similar, albeit reciprocal, methods for interpreting the meaning of a particular CBP value for a macroblock. For example, if an inter-coded macroblock received in the bitstream has a CBP value of "0," the decoder may receive no residual data for the macroblock and may simply produce a predictive block identified by a motion vector as the decoded macroblock, or a group of predictive blocks identified by motion vectors with respect to partitions of the macroblock.
  • FIG. 6 is a flowchart illustrating an example method for setting a CBP64 value of an example 64x64 pixel macroblock. Similar methods may be applied for macroblocks larger than 64x64.
  • video encoder 50 receives a 64x64 pixel macroblock (100).
  • Motion estimation unit 36 and motion compensation unit 35 may then generate one or more motion vectors and one or more residual blocks, respectively, to encode the macroblock.
  • the output of transform unit 38 generally comprises an array of residual transform coefficient values for an intra-coded block or a residual block of an inter-coded block, which array is quantized by quantization unit 40 to produce a series of quantized transform coefficients.
  • Entropy coding unit 46 may provide entropy coding and other coding functions separate from entropy coding. For example, in addition to CAVLC, CABAC, or other entropy coding functions, entropy coding unit 46 or another unit of video encoder 50 may determine CBP values for the large macroblocks and partitions. In particular, entropy coding unit 46 may determine the CBP64 value for a 64x64 pixel macroblock by first determining whether the macroblock has at least one non-zero, quantized transform coefficient (102).
  • entropy coding unit 46 determines that all of the transform coefficients have a value of zero ("NO" branch of 102), entropy coding unit 46 clears the CBP64 value for the 64x64 macroblock, e.g., resets a bit for the CBP64 value to "0" (104).
  • entropy coding unit 46 identifies at least one non-zero coefficient ("YES" branch of 102) for the 64x64 macroblock, entropy coding unit 46 sets the CBP64 value, e.g., sets a bit for the CBP64 value to "1" (106).
  • entropy coding unit 46 does not need to establish any additional CBP values for the partitions of the macroblock, which may reduce overhead. In one example, when the macroblock has at least one non-zero coefficient, however, entropy coding unit 46 proceeds to determine CBP values for each of the four 32x32 pixel partitions of the 64x64 pixel macroblock (108). Entropy coding unit 46 may utilize the method described with respect to FIG. 7 four times, once for each of the four partitions, to establish four CBP32 values, each corresponding to a different one of the four 32x32 pixel partitions of the 64x64 macroblock.
  • entropy coding unit 46 may transmit a single bit with a value of "0" to indicate that the macroblock has all zero coefficients, whereas when the macroblock has at least one non-zero coefficient, entropy coding unit 46 may transmit five bits, one bit for the macroblock and four bits, each corresponding to one of the four partitions of the macroblock.
  • residual data for the partition may be sent in the encoded bitstream.
  • the encoder may only send three zeros, i.e., "000,” rather than three zeros and a one, i.e., "0001.”
  • FIG. 7 is a flowchart illustrating an example method for setting a CBP32 value of a 32x32 pixel partition of a 64x64 pixel macroblock.
  • entropy coding unit 46 receives a 32x32 pixel partition of the macroblock (110), e.g., one of the four partitions referred to with respect to FIG. 6.
  • Entropy coding unit 46 determines a CBP32 value for the 32x32 pixel partition by first determining whether the partition includes at least one non-zero coefficient (112).
  • entropy coding unit 46 determines that all of the coefficients for the partition have a value of zero ("NO" branch of 112), entropy coding unit 46 clears the CBP32 value, e.g., resets a bit for the CBP32 value to "0" (114).
  • entropy coding unit 46 identifies at least one non-zero coefficient of the partition ("YES" branch of 112), entropy coding unit 46 sets the CBP32 value, e.g., sets a bit for the CBP32 value to a value of "1" (116). In one example, when the partition has all zero coefficients, entropy coding unit 46 does not establish any additional CBP values for the partition.
  • entropy coding unit 46 determines CBP values for each of the four 16x16 pixel partitions of the 32x32 pixel partition of the macroblock. Entropy coding unit 46 may utilize the method described with respect to FIG. 8 to establish four CBP16 values, each corresponding to one of the four 16x16 pixel partitions.
  • entropy coding unit 46 may set a bit with a value of "0" to indicate that the partition has all zero coefficients, whereas when the partition has at least one non-zero coefficient, entropy coding unit 46 may include five bits, one bit for the partition and four bits each corresponding to a different one of the four sub-partitions of the partition of the macroblock. Hence, each additional partition level may present four additional CBP bits when the partition in the preceding partition level had at least one nonzero transform coefficient value.
  • if a 64x64 macroblock has a CBP value of 1, and four 32x32 partitions have CBP values of 1, 0, 1 and 1, respectively, the overall CBP value up to that point is 11011. Additional CBP bits may be added for additional partitions of the 32x32 partitions, e.g., into 16x16 partitions.
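The hierarchical CBP construction can be sketched recursively; the nested-list representation of a macroblock below is purely illustrative, and the example reproduces the 11011 bit string from the text above:

```python
# Sketch of hierarchical CBP construction: a block contributes "1"
# followed by its partitions' bits when it has any non-zero coefficient,
# and a single "0" otherwise. A block is given as a leaf flag (1/0) or
# a list of four sub-blocks; this encoding is for illustration only.

def hierarchical_cbp(block):
    """Return the hierarchical CBP bit string for a block."""
    if isinstance(block, list):
        bits = [hierarchical_cbp(b) for b in block]
        if all(b == "0" for b in bits):
            return "0"           # all partitions zero: one cleared bit
        return "1" + "".join(bits)
    return str(block)

# 64x64 macroblock whose four 32x32 partitions have CBP 1, 0, 1, 1:
print(hierarchical_cbp([1, 0, 1, 1]))  # -> 11011
```

Deeper partitioning nests naturally: `hierarchical_cbp([[1, 0, 0, 0], 0, 0, 0])` prepends the macroblock bit, expands the first 32x32 partition into its own five bits, and emits a single cleared bit for each all-zero sibling.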
  • FIG. 8 is a flowchart illustrating an example method for setting a CBP16 value of a 16x16 pixel partition of a 32x32 pixel partition of a 64x64 pixel macroblock.
  • In some examples, video encoder 50 may utilize CBP values as prescribed by a video coding standard, such as ITU H.264, as discussed below.
  • In other examples, video encoder 50 may utilize CBP values in accordance with other techniques of this disclosure.
  • entropy coding unit 46 receives a 16x16 partition (120), e.g., one of the 16x16 partitions of a 32x32 partition described with respect to FIG. 7.
  • Entropy coding unit 46 may then determine whether a motion partition for the 16x16 pixel partition is larger than an 8x8 pixel block (122).
  • a motion partition describes a partition in which motion is concentrated.
  • a 16x16 pixel partition with only one motion vector may be considered a 16x16 motion partition.
  • Likewise, a 16x16 pixel partition with two 8x16 partitions, each having one motion vector, may be considered an 8x16 motion partition.
  • When no motion partition for the 16x16 pixel partition is larger than an 8x8 pixel block ("NO" branch of 122), entropy coding unit 46 assigns a CBP value to the 16x16 pixel partition in the same manner as prescribed by ITU H.264 (124), in the example of FIG. 8. [0145] When there exists a motion partition for the 16x16 pixel partition that is larger than an 8x8 pixel block ("YES" branch of 122), entropy coding unit 46 constructs and sends a lumacbp16 value (125) using the steps following step 125.
  • In the example of FIG. 8, entropy coding unit 46 then determines whether the 16x16 pixel luma component of the partition has at least one non-zero coefficient (126). When the 16x16 pixel luma component has all zero coefficients ("NO" branch of 126), entropy coding unit 46 assigns the CBP16 value according to the Coded Block Pattern Chroma portion of ITU H.264 (128).
  • When the 16x16 pixel luma component of the partition has at least one non-zero coefficient ("YES" branch of 126), entropy coding unit 46 determines a transform-size flag for the 16x16 pixel partition (130).
  • the transform-size flag generally indicates a transform being used for the partition.
  • the transform represented by the transform-size flag may include one of a 4x4 transform, an 8x8 transform, a 16x16 transform, a 16x8 transform, or an 8x16 transform.
  • the transform-size flag may comprise an integer value that corresponds to an enumerated value that identifies one of the possible transforms.
  • Entropy coding unit 46 may then determine whether the transform-size flag represents that the transform size is greater than or equal to 16x8 (or 8x16) (132).
  • When the transform size is smaller than 16x8 (or 8x16) ("NO" branch of 132), entropy coding unit 46 assigns a value to CBP16 according to ITU H.264 (134), in the example of FIG. 8.
  • When the transform size is greater than or equal to 16x8 or 8x16 ("YES" branch of 132), entropy coding unit 46 determines whether a type for the 16x16 pixel partition is either two 16x8 or two 8x16 pixel partitions (136).
  • When the type is neither two 16x8 nor two 8x16 pixel partitions ("NO" branch of 136), entropy coding unit 46 assigns the CBP16 value according to the Chroma Coded Block Pattern prescribed by ITU H.264 (140), in the example of FIG. 8.
  • When the type is either two 16x8 or two 8x16 pixel partitions ("YES" branch of 136), entropy coding unit 46 also uses the Chroma Coded Block Pattern prescribed by ITU H.264, but in addition assigns the CBP16 value a two-bit luma16x8_CBP value (142), e.g., according to the method described with respect to FIG. 9.
  • FIG. 9 is a flowchart illustrating an example method for determining a two-bit luma16x8_CBP value.
  • Entropy coding unit 46 receives a 16x16 pixel partition that is further partitioned into two 16x8 or two 8x16 pixel partitions (150).
  • Entropy coding unit 46 generally assigns each bit of luma16x8_CBP according to whether a corresponding sub-block of the 16x16 pixel partition includes at least one non-zero coefficient.
  • Entropy coding unit 46 examines a first sub-block of the 16x16 pixel partition to determine whether the first sub-block has at least one non-zero coefficient (152). When the first sub-block has all zero coefficients ("NO" branch of 152), entropy coding unit 46 clears the first bit of luma16x8_CBP, e.g., assigns luma16x8_CBP[0] a value of "0" (154).
  • When the first sub-block has at least one non-zero coefficient ("YES" branch of 152), entropy coding unit 46 sets the first bit of luma16x8_CBP, e.g., assigns luma16x8_CBP[0] a value of "1" (156).
  • Entropy coding unit 46 also determines whether a second sub-partition of the 16x16 pixel partition has at least one non-zero coefficient (158). When the second sub-partition has all zero coefficients ("NO" branch of 158), entropy coding unit 46 clears the second bit of luma16x8_CBP, e.g., assigns luma16x8_CBP[1] a value of "0" (160). When the second sub-partition has at least one non-zero coefficient ("YES" branch of 158), entropy coding unit 46 then sets the second bit of luma16x8_CBP, e.g., assigns luma16x8_CBP[1] a value of "1" (162).
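The two-bit luma16x8_CBP assignment of FIG. 9 amounts to setting one bit per sub-block. A minimal sketch, with each sub-block modeled as a flat coefficient list (an assumption of the example):

```python
def luma16x8_cbp(sub_block0, sub_block1):
    """Two-bit luma16x8_CBP: bit 0 is set when the first sub-block has
    at least one non-zero coefficient, bit 1 likewise for the second."""
    bit0 = int(any(c != 0 for c in sub_block0))
    bit1 = int(any(c != 0 for c in sub_block1))
    return (bit1 << 1) | bit0

print(luma16x8_cbp([0, 0, 7], [0, 0, 0]))  # -> 1: only the first sub-block is non-zero
print(luma16x8_cbp([0, 0, 0], [3, 0, 0]))  # -> 2: only the second sub-block is non-zero
```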
  • “lumacbp16” corresponds to an operation of appending a one-bit flag indicating whether an entire 16x16 luma block has nonzero coefficients or not. When “lumacbp16” equals one, there is at least one nonzero coefficient.
  • the function "Transform size flag" refers to a calculation performed having a result that indicates the transform being used, e.g., one of a 4x4 transform, 8x8 transform, 16x16 transform (for motion partition equal to or bigger than 16x16), 16x8 transform (for P_16x8), or 8x16 transform (for P_8x16).
  • TRANSFORM_SIZE_GREATER_THAN_16x8 is an enumerated value (e.g., "2") that is used to indicate that a transform size is greater than or equal to 16x8 or 8x16.
  • the result of the transform size flag is incorporated into the syntax information of the 64x64 pixel macroblock.
  • Luma16x8_cbp refers to a calculation that produces a two-bit number with each bit indicating whether one of the two partitions of P_16x8 or P_8x16 has nonzero coefficients or not.
  • the two-bit number resulting from luma16x8_cbp is incorporated into the syntax of the 64x64 pixel macroblock.
  • the value "chroma cbp” may be calculated in the same manner as the CodedBlockPatternChroma as prescribed by ITU H.264.
  • the calculated chroma cbp value is incorporated into the syntax information of the 64x64 pixel macroblock.
  • the function h264_cbp may be calculated in the same way as the CBP defined in ITU H.264.
  • the calculated H264_cbp value is incorporated into the syntax information of the 64x64 pixel macroblock.
  • a method according to FIGS. 6-9 may include encoding, with a video encoder, a video block having a size of more than 16x16 pixels, generating block-type syntax information that indicates the size of the block, and generating a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient.
  • FIG. 10 is a block diagram illustrating an example arrangement of a 64x64 pixel macroblock. The macroblock of FIG. 10 comprises four 32x32 partitions, labeled A, B, C, and D in FIG. 10.
  • As discussed previously, a block may be partitioned in any one of four ways: the entire block (64x64) with no sub-partitions, two equal-sized horizontal partitions (32x64 and 32x64), two equal-sized vertical partitions (64x32 and 64x32), or four equal-sized square partitions (32x32, 32x32, 32x32 and 32x32).
  • the whole block partition comprises each of blocks A, B, C, and D; a first one of the two equal-sized horizontal partitions comprises A and B, while a second one of the two equal-sized horizontal partitions comprises C and D; a first one of the two equal-sized vertical partitions comprises A and C, while a second one of the two equal-sized vertical partitions comprises B and D; and the four equal- sized square partitions correspond to one of each of A, B, C, and D.
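Using the labels A, B, C, and D of FIG. 10, the four partition schemes can be enumerated as groupings of the four 32x32 quadrants. The dictionary keys below are illustrative names for the example, not syntax from the disclosure:

```python
# Quadrants of the 64x64 macroblock, arranged as in FIG. 10:
#   A B
#   C D
PARTITION_SCHEMES = {
    "whole block":    [("A", "B", "C", "D")],
    "two horizontal": [("A", "B"), ("C", "D")],
    "two vertical":   [("A", "C"), ("B", "D")],
    "four square":    [("A",), ("B",), ("C",), ("D",)],
}

for name, partitions in PARTITION_SCHEMES.items():
    print(name, "->", partitions)
```

Each scheme covers all four quadrants exactly once, which is why the partitions are non-overlapping and equal-sized within a scheme.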
  • Similar partition schemes can be used for any size block, e.g., larger than 64x64 pixels, 32x32 pixels, 16x16 pixels, 8x8 pixels, or other sizes of video blocks.
  • each of the partitions may be intra-coded differently, i.e., with a different mode, such as different intra-modes.
  • a 32x32 partition such as partition A of FIG. 10 may be further partitioned into four equal-sized blocks of size 16x16 pixels.
  • ITU H.264 describes three different methods for intra-encoding a 16x16 macroblock, including intra-coding at the 16x16 level, intra-coding at the 8x8 level, and intra-coding at the 4x4 level.
  • ITU H.264 prescribes encoding each partition of a 16x16 macroblock using the same intra-coding mode. Therefore, according to ITU H.264, if one sub-block of a 16x16 macroblock is to be intra-coded at the 4x4 level, every sub-block of the 16x16 macroblock must be intra-coded at the 4x4 level.
  • An encoder configured according to the techniques of this disclosure may apply a mixed mode approach.
  • a large macroblock may have various partitions encoded with different coding modes.
  • For example, one 16x16 partition may be intra-coded at the 4x4 pixel level, while other 16x16 partitions may be intra-coded at the 8x8 pixel level, and one 16x16 partition may be intra-coded at the 16x16 level, e.g., as shown in FIG. 4B.
  • In one example, the first block to be intra-coded may be the upper-left block, followed by the block immediately to the right of the first block, followed by the block immediately beneath the first block, and finally followed by the block beneath and to the right of the first block.
  • the order of intra-coding would proceed from A to B to C and finally to D.
  • Although FIG. 10 depicts a 64x64 pixel macroblock, intra-coding of a partitioned block of a different size may follow this same ordering.
  • each partition of the block may be encoded according to a different encoding mode, either intra-encoded (I-coded) or inter-encoded with reference to a single reference frame/slice/list (P-coded).
  • an encoder, such as video encoder 50, may analyze rate-distortion cost information for each MB_N_type (i.e., each type of partition) based on a Lagrange multiplier, as discussed in greater detail with respect to FIG. 11, selecting the lowest cost as the best partition method.
  • elements of the column "MB_N_type" are keys for each type of partition of an NxN block.
  • Elements of the column “Name of MB_N_type” are names of different partitioning types of an NxN block.
  • P in the name refers to the block being inter-coded using P-coding, i.e., with reference to a single frame/slice/list.
  • L0 in the name refers to the reference frame/slice/list, e.g., "list 0," used as reference frames or slices for P coding.
  • NxN refers to the partition being the whole block
  • NxM refers to the partition being two partitions of width N and height M
  • MxN refers to the partition being two partitions of width M and height N
  • MxM refers to the partition being four equal-sized partitions each with width M and height M.
  • each partition may be encoded by either or both of a first frame/slice/list (L0) and a second frame/slice/list (L1).
  • “BiPred” refers to the corresponding partition being predicted from both L0 and L1.
  • Table 2 column labels and values are similar in meaning to those used in Table 1.
  • FIG. 11 is a flowchart illustrating an example method for calculating optimal partitioning and encoding methods for an NxN pixel video block.
  • the method of FIG. 11 comprises calculating the cost for each different encoding method (e.g., various spatial or temporal modes) as applied to each different partitioning method shown in, e.g., FIG. 4A, and selecting the combination of encoding mode and partitioning method with the best rate-distortion cost for the NxN pixel video block.
  • Rate-distortion cost = distortion + λ * rate, where distortion represents error between an original block and a coded block and rate represents the bit rate necessary to support the coding mode.
  • rate and distortion may be determined on a macroblock, partition, slice or frame level.
  • video encoder 50 receives an NxN video block to be encoded (170).
  • video encoder 50 may receive a 64x64 large macroblock or a partition thereof, such as, for example, a 32x32 or 16x16 partition, for which video encoder 50 is to select an encoding and partitioning method.
  • Video encoder 50 then calculates the cost to encode the NxN block (172) using a variety of different coding modes, such as different intra- and inter-coding modes.
  • Video encoder 50 may encode the macroblock using the specified coding technique and determine the resulting bit rate cost and distortion.
  • the distortion may be determined based on a pixel difference between the pixels in the coded macroblock and the pixels in the original macroblock, e.g., based on a sum of absolute difference (SAD) metric, sum of square difference (SSD) metric, or other pixel difference metric.
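The cost calculation described above, with distortion measured by SAD or SSD, can be sketched as follows. The pixel values and the λ value below are illustrative stand-ins, not values taken from the disclosure:

```python
def sad(original, coded):
    """Sum of absolute differences between two equal-sized pixel blocks."""
    return sum(abs(o - c) for o, c in zip(original, coded))

def ssd(original, coded):
    """Sum of squared differences between two equal-sized pixel blocks."""
    return sum((o - c) ** 2 for o, c in zip(original, coded))

def rd_cost(distortion, rate, lam):
    """Lagrangian rate-distortion cost: distortion + lambda * rate."""
    return distortion + lam * rate

original = [10, 12, 14, 16]
coded    = [10, 11, 15, 16]
print(sad(original, coded))   # -> 2
print(ssd(original, coded))   # -> 2
print(rd_cost(sad(original, coded), rate=100, lam=0.5))  # -> 52.0
```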
  • Video encoder 50 may then partition the NxN block into two equally-sized non-overlapping horizontal Nx(N/2) partitions. Video encoder 50 may calculate the cost to encode each of the partitions using various coding modes (176). For example, to calculate the cost to encode the first Nx(N/2) partition, video encoder 50 may calculate the distortion and the bitrate to encode the first Nx(N/2) partition, and then calculate
  • cost(Mode, FIRST PARTITION, Nx(N/2)) = distortion(Mode, FIRST PARTITION, Nx(N/2)) + λ * rate(Mode, FIRST PARTITION, Nx(N/2)).
  • Video encoder 50 may then partition the NxN block into four equally-sized non-overlapping (N/2)x(N/2) partitions. Video encoder 50 may calculate the cost to encode the partitions using various coding modes (180). To calculate the cost to encode the (N/2)x(N/2) partitions, video encoder 50 may first calculate the distortion and the bitrate to encode the upper-left (N/2)x(N/2) partition and find the cost thereof as
  • cost(Mode, UPPER-LEFT, (N/2)x(N/2)) = distortion(Mode, UPPER-LEFT, (N/2)x(N/2)) + λ * rate(Mode, UPPER-LEFT, (N/2)x(N/2)).
  • Video encoder 50 may similarly calculate the cost of each (N/2)x(N/2) block in the order: (1) upper-left partition, (2) upper-right partition, (3) bottom-left partition, (4) bottom-right partition.
  • Video encoder 50 may, in some examples, make recursive calls to this method on one or more of the (N/2)x(N/2) partitions to calculate the cost of partitioning and separately encoding each of the (N/2)x(N/2) partitions further, e.g., as (N/2)x(N/4) partitions, (N/4)x(N/2) partitions, and (N/4)x(N/4) partitions.
  • video encoder 50 may determine which combination of partitioning and encoding mode produced the best, i.e., lowest, cost in terms of rate and distortion (182). For example, video encoder 50 may compare the best cost of encoding two adjacent (N/2)x(N/2) partitions to the best cost of encoding the Nx(N/2) partition comprising the two adjacent (N/2)x(N/2) partitions. When the aggregate cost of encoding the two adjacent (N/2)x(N/2) partitions exceeds the cost to encode the Nx(N/2) partition comprising them, video encoder 50 may select the lower-cost option of encoding the Nx(N/2) partition.
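The compare-and-keep-the-cheaper logic of FIG. 11 can be sketched recursively. For brevity this sketch compares only "encode whole" against "split into four (N/2)x(N/2) partitions," omitting the Nx(N/2) and (N/2)xN alternatives, and the cost model cost_fn is an abstract stand-in supplied by the caller rather than an actual mode search:

```python
def best_partition_cost(size, cost_fn, min_size=8):
    """Lowest cost of coding a size x size block: either encode it
    whole, or split it into four (size/2) x (size/2) partitions and
    recursively find the best cost of each."""
    whole = cost_fn(size)
    if size <= min_size:
        return whole
    split = 4 * best_partition_cost(size // 2, cost_fn, min_size)
    return min(whole, split)

# Toy cost model where large uniform blocks are cheap to encode:
print(best_partition_cost(64, cost_fn=lambda s: 100 + s))  # -> 164 (coding the 64x64 block whole wins)
# Toy cost model where recursive splitting pays off:
print(best_partition_cost(64, cost_fn=lambda s: s ** 3))   # -> 32768 (splitting wins)
```

The recursion mirrors the patent's description: the cost of a partition is computed both as a whole and as further sub-partitions, and the lower of the two survives at each level.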
  • video encoder 50 may apply every combination of partitioning method and encoding mode for each partition to identify a lowest cost partitioning and encoding method.
  • video encoder 50 may be configured to evaluate a more limited set of partitioning and encoding mode combinations.
  • video encoder 50 may encode the NxN macroblock using the best-cost determined method (184).
  • the result may be a large macroblock having partitions that are coded using different coding modes.
  • the ability to apply mixed mode coding to a large macroblock, such that different coding modes are applied to different partitions in the large macroblock, may permit the macroblock to be coded with reduced cost.
  • A method for coding with mixed modes may include receiving, with video encoder 50, a video block having a size of more than 16x16 pixels, partitioning the block into partitions, encoding one of the partitions with a first encoding mode, encoding another of the partitions with a second coding mode different from the first encoding mode, and generating block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions.
  • FIG. 12 is a block diagram illustrating an example 64x64 pixel large macroblock with various partitions and different selected encoding methods for each partition.
  • In the example of FIG. 12, each partition is labeled with one of an "I," "P," or "B."
  • Partitions labeled “I” are partitions for which an encoder has elected to utilize intra- coding, e.g., based on rate-distortion evaluation.
  • Partitions labeled “P” are partitions for which the encoder has elected to utilize single-reference inter-coding, e.g., based on rate-distortion evaluation.
  • Partitions labeled "B” are partitions for which the encoder has elected to utilize bi-predicted inter-coding, e.g., based on rate-distortion evaluation.
  • different partitions within the same large macroblock have different coding modes, including different partition or sub-partition sizes and different intra- or inter-coding modes.
  • the large macroblock is a macroblock identified by a macroblock syntax element that identifies the macroblock type, e.g., mb64_type or mb32_type, for a given coding standard such as an extension of the H.264 coding standard.
  • the macroblock type syntax element may be provided as a macroblock header syntax element in the encoded video bitstream.
  • the I-, P- and B-coded partitions illustrated in FIG. 12 may be coded according to different coding modes, e.g., intra- or inter-prediction modes with various block sizes, including large block size modes for large partitions greater than 16x16 in size or H.264 modes for partitions that are less than or equal to 16x16 in size.
  • an encoder such as video encoder 50 may use the example method described with respect to FIG. 11 to select various encoding modes and partition sizes for different partitions and sub-partitions of the example large macroblock of FIG. 12.
  • video encoder 50 may receive a 64x64 macroblock, execute the method of FIG. 11, and produce the example macroblock of FIG. 12 with various partition sizes and coding modes as a result. It should be understood, however, that selections for partitioning and encoding modes may result from application of the method of FIG. 11, e.g., based on the type of frame from which the macroblock was selected and based on the input macroblock upon which the method is executed.
  • When the frame comprises an I-frame, each partition will be intra-encoded.
  • When the frame comprises a P-frame, each partition may either be intra-encoded or inter-coded based on a single reference frame (i.e., without bi-prediction).
  • the example macroblock of FIG. 12 is assumed to have been selected from a bi-predicted frame (B-frame) for purposes of illustration.
  • In the case of a P-frame, for example, video encoder 50 would not encode a partition using bi-directional prediction.
  • Likewise, in the case of an I-frame, video encoder 50 would not encode a partition using inter-coding, either P-encoding or B-encoding.
  • video encoder 50 may select various partition sizes for different portions of the macroblock and elect to encode each partition using any available encoding mode.
  • FIG. 13 is a flowchart illustrating an example method for determining an optimal size of a macroblock for encoding a frame or slice of a video sequence. Although described with respect to selecting an optimal size of a macroblock for a frame, a method similar to that described with respect to FIG. 13 may be used to select an optimal size of a macroblock for a slice. Likewise, although the method of FIG. 13 is described with respect to video encoder 50, it should be understood that any encoder may utilize the example method of FIG. 13.
  • the method of FIG. 13 comprises performing an encoding pass three times, once for each of a 16x16 macroblock, a 32x32 macroblock, and a 64x64 macroblock, and a video encoder may calculate rate-distortion metrics for each pass to determine which macroblock size provides the best rate-distortion.
  • Video encoder 50 may first encode a frame using 16x16 pixel macroblocks during a first encoding pass (190), e.g., using a function encode(frame, MB16_type), to produce an encoded frame F16.
  • After the first encoding pass, video encoder 50 may calculate the bit rate and distortion based on the use of 16x16 pixel macroblocks as R16 and D16, respectively (192).
  • Video encoder 50 may then encode the frame using 32x32 pixel macroblocks during a second encoding pass (196), e.g., using a function encode(frame, MB32_type), to produce an encoded frame F32. After the second encoding pass, video encoder 50 may calculate the bit rate and distortion based on the use of 32x32 pixel macroblocks as R32 and D32, respectively (198).
  • Video encoder 50 may then encode the frame using 64x64 pixel macroblocks during a third encoding pass (202), e.g., using a function encode(frame, MB64_type), to produce an encoded frame F64.
  • After the third encoding pass, video encoder 50 may calculate the bit rate and distortion based on the use of 64x64 pixel macroblocks as R64 and D64, respectively (204).
  • Using these rate and distortion values to compute cost metrics C16, C32, and C64, video encoder 50 may determine which of the metrics is lowest for the frame (208). Video encoder 50 may elect to use the frame encoded with the macroblock size that resulted in the lowest cost (210). Thus, for example, when C16 is lowest, video encoder 50 may forward frame F16, encoded with the 16x16 macroblocks, as the encoded frame in a bitstream for storage or transmission to a decoder. When C32 is lowest, video encoder 50 may forward F32, encoded with the 32x32 macroblocks. When C64 is lowest, video encoder 50 may forward F64, encoded with the 64x64 macroblocks.
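The three-pass selection of FIG. 13 amounts to computing a cost for each candidate macroblock size from its measured rate and distortion and keeping the minimum. The (rate, distortion) numbers and λ below are illustrative stand-ins for the per-pass measurements, not values from the disclosure:

```python
def select_macroblock_size(passes, lam):
    """Given (size, rate, distortion) results of the encoding passes,
    return the macroblock size with the lowest cost D + lam * R."""
    best_size, best_cost = None, float("inf")
    for size, rate, distortion in passes:
        cost = distortion + lam * rate
        if cost < best_cost:
            best_size, best_cost = size, cost
    return best_size, best_cost

# Illustrative measurements for the 16x16, 32x32, and 64x64 passes.
passes = [(16, 1200, 300), (32, 900, 330), (64, 700, 420)]
print(select_macroblock_size(passes, lam=0.5))  # -> (64, 770.0)
```

Because the passes may run in any order, the selection depends only on the per-pass (rate, distortion) pairs, not on which pass was performed first.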
  • video encoder 50 may perform the encoding passes in any order. For example, video encoder 50 may begin with the 64x64 macroblock encoding pass, perform the 32x32 macroblock encoding pass second, and end with the 16x16 macroblock encoding pass. Also, similar methods may be used for encoding other coded units comprising a plurality of macroblocks, such as slices with different sizes of macroblocks. For example, video encoder 50 may apply a method similar to that of FIG. 13 for selecting an optimal macroblock size for encoding slices of a frame, rather than the entire frame.
  • Video encoder 50 may also transmit an identifier of the size of the macroblocks for a particular coded unit (e.g., a frame or a slice) in the header of the coded unit for use by a decoder.
  • a method may include receiving, with a digital video encoder, a coded unit of a digital video stream, calculating a first rate-distortion metric corresponding to a rate-distortion for encoding the coded unit using a first plurality of blocks each comprising 16x16 pixels, calculating a second rate-distortion metric corresponding to a rate-distortion for encoding the coded unit using a second plurality of blocks each comprising greater than 16x16 pixels, and determining which of the first rate-distortion metric and the second rate-distortion metric is lowest for the coded unit.
  • FIG. 14 is a block diagram illustrating an example wireless communication device 230 including a video encoder/decoder CODEC 234 that may encode and/or decode digital video data using the larger-than-standard macroblocks, using any of a variety of the techniques described in this disclosure.
  • wireless communication device 230 includes video camera 232, video encoder-decoder (CODEC) 234, modulator/demodulator (modem) 236, transceiver 238, processor 240, user interface 242, memory 244, data storage device 246, antenna 248, and bus 250.
  • the components included in wireless communication device 230 illustrated in FIG. 14 may be realized by any suitable combination of hardware, software and/or firmware. In the illustrated example, the components are depicted as separate units. However, in other examples, the various components may be integrated into combined units within common hardware and/or software.
  • memory 244 may store instructions executable by processor 240 corresponding to various functions of video CODEC 234.
  • video camera 232 may include a video CODEC that performs the functions of video CODEC 234, e.g., encoding and/or decoding video data.
  • video camera 232 may correspond to video source 18 (FIG. 1).
  • video camera 232 may record video data captured by an array of sensors to generate digital video data.
  • Video camera 232 may send raw, recorded digital video data to video CODEC 234 for encoding and then to data storage device 246 via bus 250 for data storage.
  • Processor 240 may send signals to video camera 232 via bus 250 regarding a mode in which to record video, a frame rate at which to record video, a time at which to end recording or to change frame rate modes, a time at which to send video data to video CODEC 234, or signals indicating other modes or parameters.
  • User interface 242 may comprise one or more interfaces, such as input and output interfaces.
  • user interface 242 may include a touch screen, a keypad, buttons, a screen that may act as a viewfinder, a microphone, a speaker, or other interfaces.
  • processor 240 may signal video camera 232 to send the video data to user interface 242 to be displayed on the viewfinder.
  • Video CODEC 234 may encode video data from video camera 232 and decode video data received via antenna 248, transceiver 238, and modem 236. Video CODEC 234 additionally or alternatively may decode previously encoded data received from data storage device 246 for playback. Video CODEC 234 may encode and/or decode digital video data using macroblocks that are larger than the size of macroblocks prescribed by conventional video encoding standards. For example, video CODEC 234 may encode and/or decode digital video data using a large macroblock comprising 64x64 pixels or 32x32 pixels. The large macroblock may be identified with a macroblock type syntax element according to a video standard, such as an extension of the H.264 standard.
  • Video CODEC 234 may perform the functions of either or both of video encoder 50 (FIG. 2) and/or video decoder 60 (FIG. 3), as well as any other encoding/decoding functions or techniques as described in this disclosure.
  • CODEC 234 may partition a large macroblock into a variety of differently sized, smaller partitions, and use different coding modes, e.g., spatial (I) or temporal (P or B), for selected partitions. Selection of partition sizes and coding modes may be based on rate-distortion results for such partition sizes and coding modes.
  • CODEC 234 also may utilize hierarchical coded block pattern (CBP) values to identify coded macroblocks and partitions having non-zero coefficients within a large macroblock.
  • CODEC 234 may compare rate-distortion metrics for large and small macroblocks to select a macroblock size producing more favorable results for a frame, slice or other coding unit.
  • a user may interact with user interface 242 to transmit a recorded video sequence in data storage device 246 to another device, such as another wireless communication device, via modem 236, transceiver 238, and antenna 248.
  • the video sequence may be encoded according to an encoding standard, such as MPEG-2, MPEG- 3, MPEG-4, H.263, H.264, or other video encoding standards, subject to extensions or modifications described in this disclosure.
  • the video sequence may also be encoded using larger-than-standard macroblocks, as described in this disclosure.
  • Wireless communication device 230 may also receive an encoded video segment and store the received video sequence in data storage device 246.
  • Macroblocks of the received, encoded video sequence may be larger than macroblocks specified by conventional video encoding standards.
  • video CODEC 234 may decode the video sequence and send decoded frames of the video segment to user interface 242.
  • When the video sequence includes audio, video CODEC 234 may decode the audio, or wireless communication device 230 may further include an audio codec (not shown) to decode the audio. In this manner, video CODEC 234 may perform both the functions of an encoder and of a decoder.
  • Memory 244 of wireless communication device 230 of FIG. 14 may be encoded with computer-readable instructions that cause processor 240 and/or video CODEC 234 to perform various tasks, in addition to storing encoded video data. Such instructions may be loaded into memory 244 from a data storage device such as data storage device 246. For example, the instructions may cause processor 240 to perform the functions described with respect to video CODEC 234.
  • FIG. 15 is a block diagram illustrating an example hierarchical coded block pattern (CBP) 260.
  • the example of CBP 260 generally corresponds to a portion of the syntax information for a 64x64 pixel macroblock.
  • CBP 260 comprises a CBP64 value 262, four CBP32 values 264, 266, 268, 270, and four CBP16 values 272, 274, 276, 278.
  • Each block of CBP 260 may include one or more bits.
  • CBP64 value 262 when CBP64 value 262 is a bit with a value of "1," indicating that there is at least one non-zero coefficient in the large macroblock, CBP 260 includes the four CBP32 values 264, 266, 268, 270 for four 32x32 partitions of the large 64x64 macroblock, as shown in the example of FIG. 15. [0195]
  • CBP64 value 262 when CBP64 value 262 is a bit with a value of "0,” CBP 260 may consist only of CBP64, as a value of "0" may indicate that the block corresponding to CBP 260 has all zero-valued coefficients. Hence, all partitions of that block likewise will contain all zero-valued coefficients.
  • When a 32x32 partition has at least one non-zero coefficient, the CBP32 value for the 32x32 partition has four branches, representative of CBP16 values, e.g., as shown with respect to CBP32 value 266.
  • When a CBP32 value is "0," the CBP32 does not have any branches.
  • In the example of FIG. 15, CBP 260 may have a five-bit prefix of "10100," indicating that the CBP64 value is "1," and that one of the 32x32 partitions has a CBP32 value of "1," with subsequent bits corresponding to the four CBP16 values 272, 274, 276, 278 corresponding to 16x16 partitions of the 32x32 partition with the CBP32 value of "1."
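On the decoding side, the bit string can be walked to recover which partitions carry coefficients. The layout assumed here (four CBP32 bits after a set CBP64 bit, each set CBP32 bit followed by its four CBP16 bits) is one plausible reading of the "10100" prefix example, not syntax guaranteed by the disclosure, and the CBP16 values used are made up for illustration:

```python
def parse_cbp(bits):
    """Parse a hierarchical CBP bit string: one CBP64 bit; if set,
    four CBP32 bits; each set CBP32 bit is followed by the four CBP16
    bits of its 32x32 partition. Returns a nested structure."""
    it = iter(bits)
    if next(it) == "0":
        return (0, [])          # all-zero macroblock: CBP64 alone
    cbp32 = [int(next(it)) for _ in range(4)]
    children = []
    for bit in cbp32:
        if bit:
            children.append((1, [int(next(it)) for _ in range(4)]))
        else:
            children.append((0, []))
    return (1, children)

# Prefix "10100" as in FIG. 15, followed by CBP16 bits 0,1,1,0 for the
# single non-zero 32x32 partition.
print(parse_cbp("10100" + "0110"))
# -> (1, [(0, []), (1, [0, 1, 1, 0]), (0, []), (0, [])])
```

The resulting nested structure mirrors the tree of FIG. 16: only nodes with a "1" bit expand into further branches.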
  • Although only a single CBP32 value is shown as having a value of "1" in the example of FIG. 15, in other examples, two, three or all four 32x32 partitions may have CBP32 values of "1," in which case multiple instances of four 16x16 partitions with corresponding CBP16 values would be required.
  • CBP16 values 272, 274, 276, 278 for the four 16x16 partitions may be calculated according to various methods, e.g., according to the methods of FIGS. 8 and 9. Any or all of CBP16 values 272, 274, 276, 278 may include a "lumacbp16" value, a transform size flag, and/or a luma16x8_cbp. CBP16 values 272, 274, 276, 278 may also be calculated according to a CBP value as defined in ITU H.264 or as a CodedBlockPatternChroma in ITU H.264, as discussed with respect to FIGS. 8 and 9.
  • FIG. 16 is a block diagram illustrating an example tree structure 280 corresponding to CBP 260 (FIG. 15).
  • CBP64 node 282 corresponds to CBP64 value 262
  • CBP32 nodes 284, 286, 288, 290 each correspond to respective ones of CBP32 values 264, 266, 268, 270
  • CBP16 nodes 292, 294, 296, 298 each correspond to respective ones of CBP16 values 272, 274, 276, 278.
  • a coded block pattern value as defined in this disclosure may correspond to a hierarchical CBP.
  • Each node yielding another branch in the tree corresponds to a respective CBP value of "1.”
  • CBP64 282 and CBP32 286 both have values of "1," and yield further partitions with possible CBP values of "1,” i.e., where at least one partition at the next partition level includes at least one non-zero transform coefficient value.
  • FIG. 17 is a flowchart illustrating an example method for using syntax information of a coded unit to indicate and select block-based syntax encoders and decoders for video blocks of the coded unit.
  • steps 300 to 310 of FIG. 17 may be performed by a video encoder, such as video encoder 20 (FIG. 1), in addition to and in conjunction with encoding a plurality of video blocks for a coded unit.
  • a coded unit may comprise a video frame, a slice, or a group of pictures (also referred to as a "sequence").
  • Steps 312 to 316 of FIG. 17 may be performed by a video decoder, such as video decoder 30 (FIG. 1), in addition to and in conjunction with decoding the plurality of video blocks of the coded unit.
  • video encoder 20 may receive a set of various-sized blocks for a coded unit, such as a frame, slice, or group of pictures (300).
  • one or more of the blocks may comprise greater than 16x16 pixels, e.g., 32x32 pixels, 64x64 pixels, etc.
  • the blocks need not each include the same number of pixels.
  • video encoder 20 may encode each of the blocks using the same block-based syntax.
  • video encoder 20 may encode each of the blocks using a hierarchical coded block pattern, as described above.
  • Video encoder 20 may select the block-based syntax to use based on a largest block, i.e., maximum block size, in the set of blocks for the coded unit.
  • the maximum block size may correspond to the size of a largest macroblock included in the coded unit. Accordingly, video encoder 20 may determine the largest sized block in the set (302). In the example of FIG. 17, video encoder 20 may also determine the smallest sized block in the set (304).
  • the hierarchical coded block pattern of a block has a length that corresponds to whether partitions of the block have a non-zero, quantized coefficient.
  • video encoder 20 may include a minimum size value in syntax information for a coded unit. In some examples, the minimum size value indicates the minimum partition size in the coded unit. The minimum partition size, e.g., the smallest block in a coded unit, in this manner may be used to determine a maximum length for the hierarchical coded block pattern.
  • Video encoder 20 may then encode each block of the set for the coded unit according to the syntax corresponding to the largest block (306). For example, assuming that the largest block comprises a 64x64 pixel block, video encoder 20 may use syntax such as that defined above for MB64_type. As another example, assuming that the largest block comprises a 32x32 pixel block, video encoder 20 may use the syntax such as that defined above for MB32_type.
  • Video encoder 20 also generates coded unit syntax information, which includes values corresponding to the largest block in the coded unit and the smallest block in the coded unit (308). Video encoder 20 may then transmit the coded unit, including the syntax information for the coded unit and each of the blocks of the coded unit, to video decoder 30.
  • Video decoder 30 may receive the coded unit and the syntax information for the coded unit from video encoder 20 (312). Video decoder 30 may select a block-based syntax decoder based on the indication in the coded unit syntax information of the largest block in the coded unit (314). For example, assuming that the coded unit syntax information indicated that the largest block in the coded unit comprised 64x64 pixels, video decoder 30 may select a syntax decoder for MB64_type blocks. Video decoder 30 may then apply the selected syntax decoder to blocks of the coded unit to decode the blocks of the coded unit (316).
  • Video decoder 30 may also determine when a block does not have further separately encoded sub-partitions based on the indication in the coded unit syntax information of the smallest encoded partition. For example, if the largest block is 64x64 pixels and the smallest block is also 64x64 pixels, then it can be determined that the 64x64 blocks are not divided into sub-partitions smaller than the 64x64 size. As another example, if the largest block is 64x64 pixels and the smallest block is 32x32 pixels, then it can be determined that the 64x64 blocks are divided into sub-partitions no smaller than 32x32.
  • video decoder 30 may remain backwards-compatible with existing coding standards, such as H.264.
  • video encoder 20 may indicate this in the coded unit syntax information, and video decoder 30 may apply standard H.264 block-based syntax decoders.
  • video encoder 20 may indicate this in the coded unit syntax information, and video decoder 30 may selectively apply a block-based syntax decoder in accordance with the techniques of this disclosure to decode the blocks of the coded unit.
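The encoder/decoder handshake of FIG. 17 can be sketched as follows, assuming square blocks identified by their dimension N. The decoder-name strings and function names are purely illustrative; the patent only requires that the coded unit syntax information signal the largest (and optionally smallest) block size so the decoder can select the matching block-based syntax decoder.

```python
def coded_unit_syntax(block_sizes):
    """Steps 302-308: derive coded-unit syntax values (largest and
    smallest block in the set) from the block sizes of the coded unit."""
    return {"max_block_size": max(block_sizes),
            "min_block_size": min(block_sizes)}

def select_syntax_decoder(syntax_info):
    """Step 314: choose a block-based syntax decoder from the signaled
    maximum block size; 16 falls back to standard H.264 syntax for
    backwards compatibility."""
    decoders = {16: "H.264 MB_type syntax decoder",
                32: "MB32_type syntax decoder",
                64: "MB64_type syntax decoder"}
    return decoders[syntax_info["max_block_size"]]

info = coded_unit_syntax([64, 32, 32, 16])
print(info["max_block_size"], info["min_block_size"])  # 64 16
print(select_syntax_decoder(info))  # MB64_type syntax decoder
```

Here a minimum size equal to the maximum size would tell the decoder that no block in the coded unit is divided into smaller separately coded sub-partitions.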
  • the functions described may be implemented in hardware, software, firmware, or any combination thereof.
  • Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a storage media may be any available media that can be accessed by a computer.
  • such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • any connection is properly termed a computer-readable medium.
  • Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

Abstract

Techniques are described for encoding and decoding digital video data using macroblocks that are larger than the macroblocks prescribed by conventional video encoding and decoding standards. For example, the techniques include encoding and decoding a video stream using macroblocks comprising greater than 16x16 pixels. In one example, an apparatus includes a video encoder configured to encode a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels and to generate syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit. The syntax information may also include a minimum size value. In this manner, the encoder may indicate to a decoder the proper syntax decoder to apply to the coded unit.

Description

VIDEO CODING WITH LARGE MACROBLOCKS
[0001] This application claims the benefit of U.S. Provisional Application Nos. 61/102,787 filed on October 3, 2008, 61/144,357 filed on January 13, 2009, and 61/166,631 filed on April 3, 2009, each of which is incorporated herein by reference in its entirety.
[0002] This application is related to U.S. Patent Applications, all filed on the same date as the present application, all possessing the same title, "VIDEO CODING WITH LARGE MACROBLOCKS," (temporarily referenced by Attorney Docket Numbers 090033U1, 090033U2, and 090033U3), which are all assigned to the assignee hereof and hereby expressly incorporated by reference in their entirety for all purposes.
TECHNICAL FIELD
[0003] This disclosure relates to digital video coding and, more particularly, block-based video coding.
BACKGROUND
[0004] Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, digital cameras, digital recording devices, video gaming devices, video game consoles, cellular or satellite radio telephones, and the like. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263 or ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC), and extensions of such standards, to transmit and receive digital video information more efficiently.
[0005] Video compression techniques perform spatial prediction and/or temporal prediction to reduce or remove redundancy inherent in video sequences. For block-based video coding, a video frame or slice may be partitioned into macroblocks. Each macroblock can be further partitioned. Macroblocks in an intra-coded (I) frame or slice are encoded using spatial prediction with respect to neighboring macroblocks. Macroblocks in an inter-coded (P or B) frame or slice may use spatial prediction with respect to neighboring macroblocks in the same frame or slice or temporal prediction with respect to other reference frames.
SUMMARY
[0006] In general, this disclosure describes techniques for encoding digital video data using large macroblocks. Large macroblocks are larger than macroblocks generally prescribed by existing video encoding standards. Most video encoding standards prescribe the use of a macroblock in the form of a 16x16 array of pixels. In accordance with this disclosure, an encoder and decoder may utilize large macroblocks that are greater than 16x16 pixels in size. As examples, a large macroblock may have a 32x32, 64x64, or larger array of pixels.
[0007] Video coding relies on spatial and/or temporal redundancy to support compression of video data. Video frames generated with higher spatial resolution and/or higher frame rate may support more redundancy. The use of large macroblocks, as described in this disclosure, may permit a video coding technique to utilize larger degrees of redundancy produced as spatial resolution and/or frame rate increase. In accordance with this disclosure, video coding techniques may utilize a variety of features to support coding of large macroblocks.
[0008] As described in this disclosure, a large macroblock coding technique may partition a large macroblock into partitions, and use different partition sizes and different coding modes, e.g., different spatial (I) or temporal (P or B) modes, for selected partitions. As another example, a coding technique may utilize hierarchical coded block pattern (CBP) values to efficiently identify coded macroblocks and partitions having non-zero coefficients within a large macroblock. As a further example, a coding technique may compare rate-distortion metrics produced by coding using large and small macroblocks to select a macroblock size producing more favorable results.
[0009] In one example, the disclosure provides a method comprising encoding, with a video encoder, a video block having a size of more than 16x16 pixels, generating block-type syntax information that indicates the size of the block, and generating a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient. [0010] In another example, the disclosure provides an apparatus comprising a video encoder configured to encode a video block having a size of more than 16x16 pixels, generate block-type syntax information that indicates the size of the block, and generate a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient. [0011] In another example, the disclosure provides a computer-readable medium encoded with instructions to cause a video encoding apparatus to encode, with a video encoder, a video block having a size of more than 16x16 pixels, generate block-type syntax information that indicates the size of the block, and generate a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient. [0012] In an additional example, the disclosure provides a method comprising receiving, with a video decoder, an encoded video block having a size of more than 16x16 pixels, receiving block-type syntax information that indicates the size of the encoded block, receiving a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient, and decoding the encoded block based on the block-type syntax information and the coded block pattern value for the encoded block.
[0013] In a further example, the disclosure provides an apparatus comprising a video decoder configured to receive an encoded video block having a size of more than 16x16 pixels, receive block-type syntax information that indicates the size of the encoded block, receive a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient, and decode the encoded block based on the block-type syntax information and the coded block pattern value for the encoded block.
[0014] In another example, the disclosure provides a computer-readable medium comprising instructions to cause a video decoder to receive an encoded video block having a size of more than 16x16 pixels, receive block-type syntax information that indicates the size of the encoded block, receive a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient, and decode the encoded block based on the block-type syntax information and the coded block pattern value for the encoded block. [0015] In another example, the disclosure provides a method comprising receiving, with a video encoder, a video block having a size of more than 16x16 pixels, partitioning the block into partitions, encoding one of the partitions using a first encoding mode, encoding another of the partitions using a second encoding mode different from the first encoding mode, and generating block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions.
[0016] In an additional example, the disclosure provides an apparatus comprising a video encoder configured to receive a video block having a size of more than 16x16 pixels, partition the block into partitions, encode one of the partitions using a first encoding mode, encode another of the partitions using a second encoding mode different from the first encoding mode, generate block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions.
[0017] In another example, the disclosure provides a computer-readable medium encoded with instructions to cause a video encoder to receive a video block having a size of more than 16x16 pixels, partition the block into partitions, encode one of the partitions using a first encoding mode, encode another of the partitions using a second encoding mode different from the first encoding mode, and generate block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions.
[0018] In a further example, the disclosure provides a method comprising receiving, with a video decoder, a video block having a size of more than 16x16 pixels, wherein the block is partitioned into partitions, one of the partitions is encoded with a first encoding mode and another of the partitions is encoded with a second encoding mode different from the first encoding mode, receiving block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions, and decoding the video block based on the block-type syntax information.
[0019] In another example, the disclosure provides an apparatus comprising a video decoder configured to receive a video block having a size of more than 16x16 pixels, wherein the block is partitioned into partitions, one of the partitions is encoded with a first encoding mode and another of the partitions is encoded with a second encoding mode different from the first encoding mode, receive block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions, and decode the video block based on the block-type syntax information.
[0020] In an additional example, the disclosure provides a computer-readable medium encoded with instructions to cause a video decoder to receive, with a video decoder, a video block having a size of more than 16x16 pixels, wherein the block is partitioned into partitions, one of the partitions is encoded with a first encoding mode and another of the partitions is encoded with a second encoding mode different from the first encoding mode, receive block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions, and decode the video block based on the block-type syntax information. [0021] In another example, the disclosure provides a method comprising receiving, with a digital video encoder, a video coding unit, determining a first rate-distortion metric for encoding the video coding unit using first video blocks with sizes of 16x16 pixels, determining a second rate-distortion metric for encoding the video coding unit using second video blocks with sizes of more than 16x16 pixels, encoding the video coding unit using the first video blocks when the first rate-distortion metric is less than the second rate-distortion metric, and encoding the video coding unit using the second video blocks when the second rate-distortion metric is less than the first rate-distortion metric.
[0022] In an additional example, the disclosure provides an apparatus comprising a video encoder configured to receive a video coding unit, determine a first rate-distortion metric for encoding the video coding unit using first video blocks with sizes of 16x16 pixels, determine a second rate-distortion metric for encoding the video coding unit using second video blocks with sizes of more than 16x16 pixels, encode the video coding unit using the first video blocks when the first rate-distortion metric is less than the second rate-distortion metric, and encode the video coding unit using the second video blocks when the second rate-distortion metric is less than the first rate-distortion metric. [0023] In another example, the disclosure provides a computer-readable medium encoded with instructions to cause a video encoder to receive a video coding unit, determine a first rate-distortion metric for encoding the video coding unit using first video blocks with sizes of 16x16 pixels, determine a second rate-distortion metric for encoding the video coding unit using second video blocks with sizes of more than 16x16 pixels, encode the video coding unit using the first video blocks when the first rate-distortion metric is less than the second rate-distortion metric, and encode the video coding unit using the second video blocks when the second rate-distortion metric is less than the first rate-distortion metric.
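The rate-distortion comparison in paragraphs [0021] through [0023] can be sketched as below. The patent does not fix a particular metric, so the common Lagrangian cost J = D + λ·R is used here as an assumption; the function names are illustrative only.

```python
def rd_cost(distortion, rate, lam):
    """Lagrangian rate-distortion metric J = D + lambda * R (an assumed
    form; the patent only requires comparable per-size metrics)."""
    return distortion + lam * rate

def choose_block_size(metrics_by_size):
    """Encode the video coding unit with whichever macroblock size
    produced the lower (more favorable) rate-distortion metric."""
    return min(metrics_by_size, key=metrics_by_size.get)

# Hypothetical numbers: the large macroblock costs slightly more
# distortion but far fewer bits, so it wins the comparison.
costs = {16: rd_cost(100.0, 80, 0.5),   # J = 100.0 + 0.5*80 = 140.0
         64: rd_cost(110.0, 30, 0.5)}   # J = 110.0 + 0.5*30 = 125.0
print(choose_block_size(costs))  # 64
```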
[0024] In another example, the disclosure provides a method comprising encoding, with a video encoder, a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, and generating syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit.
[0025] In another example, the disclosure provides an apparatus comprising a video encoder configured to encode a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels and to generate syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit.
[0026] In another example, the disclosure provides an apparatus comprising means for encoding a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, and means for generating syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit.
[0027] In another example, the disclosure provides a computer-readable storage medium encoded with instructions for causing a programmable processor to encode a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, and generate syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit.
[0028] In another example, the disclosure provides a method comprising receiving, with a video decoder, a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, receiving syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit, selecting a block-type syntax decoder according to the maximum size value, and decoding each of the plurality of video blocks in the coded unit using the selected block-type syntax decoder.
[0029] In another example, the disclosure provides an apparatus comprising a video decoder configured to receive a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, receive syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit, select a block-type syntax decoder according to the maximum size value, and decode each of the plurality of video blocks in the coded unit using the selected block-type syntax decoder.
[0030] In another example, the disclosure provides means for receiving a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, means for receiving syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit, means for selecting a block-type syntax decoder according to the maximum size value, and means for decoding each of the plurality of video blocks in the coded unit using the selected block-type syntax decoder.
[0031] In another example, the disclosure provides a computer-readable storage medium encoded with instructions for causing a programmable processor to receive a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, receive syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit, select a block-type syntax decoder according to the maximum size value, and decode each of the plurality of video blocks in the coded unit using the selected block-type syntax decoder. [0032] The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF DRAWINGS
[0033] FIG. 1 is a block diagram illustrating an example video encoding and decoding system that encodes and decodes digital video data using large macroblocks. [0034] FIG. 2 is a block diagram illustrating an example of a video encoder that implements techniques for coding large macroblocks.
[0035] FIG. 3 is a block diagram illustrating an example of a video decoder that implements techniques for coding large macroblocks.
[0036] FIG. 4A is a conceptual diagram illustrating partitioning among various levels of a large macroblock.
[0037] FIG. 4B is a conceptual diagram illustrating assignment of different coding modes to different partitions of a large macroblock.
[0038] FIG. 5 is a conceptual diagram illustrating a hierarchical view of various levels of a large macroblock.
[0039] FIG. 6 is a flowchart illustrating an example method for setting a coded block pattern (CBP) value of a 64x64 pixel large macroblock.
[0040] FIG. 7 is a flowchart illustrating an example method for setting a CBP value of a 32x32 pixel partition of a 64x64 pixel large macroblock.
[0041] FIG. 8 is a flowchart illustrating an example method for setting a CBP value of a 16x16 pixel partition of a 32x32 pixel partition of a 64x64 pixel large macroblock. [0042] FIG. 9 is a flowchart illustrating an example method for determining a two-bit luma16x8_CBP value.
[0043] FIG. 10 is a block diagram illustrating an example arrangement of a 64x64 pixel large macroblock.
[0044] FIG. 11 is a flowchart illustrating an example method for calculating optimal partitioning and encoding methods for an NxN pixel large video block. [0045] FIG. 12 is a block diagram illustrating an example 64x64 pixel macroblock with various partitions and selected encoding methods for each partition. [0046] FIG. 13 is a flowchart illustrating an example method for determining an optimal size of a macroblock for encoding a frame of a video sequence.
[0047] FIG. 14 is a block diagram illustrating an example wireless communication device including a video encoder/decoder (CODEC) that codes digital video data using large macroblocks.
[0048] FIG. 15 is a block diagram illustrating an example array representation of a hierarchical CBP representation for a large macroblock.
[0049] FIG. 16 is a block diagram illustrating an example tree structure corresponding to the hierarchical CBP representation of FIG. 15. [0050] FIG. 17 is a flowchart illustrating an example method for using syntax information of a coded unit to indicate and select block-based syntax encoders and decoders for video blocks of the coded unit.
DETAILED DESCRIPTION
[0051] The disclosure describes techniques for encoding and decoding digital video data using large macroblocks. Large macroblocks are larger than macroblocks generally prescribed by existing video encoding standards. Most video encoding standards prescribe the use of a macroblock in the form of a 16x16 array of pixels. In accordance with this disclosure, an encoder and/or a decoder may utilize large macroblocks that are greater than 16x16 pixels in size. As examples, a large macroblock may have a 32x32, 64x64, or possibly larger array of pixels.
[0052] In general, a macroblock, as that term is used in this disclosure, may refer to a data structure for a pixel array that comprises a defined size expressed as NxN pixels, where N is a positive integer value. The macroblock may define four luminance blocks, each comprising an array of (N/2)x(N/2) pixels, two chrominance blocks, each comprising an array of NxN pixels, and a header comprising macroblock-type information and coded block pattern (CBP) information, as discussed in greater detail below.
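The data structure described in paragraph [0052] can be sketched as follows. The field names are illustrative and not drawn from the patent; only the shape of the structure (four luminance blocks, two chrominance blocks, and a header with macroblock-type and CBP information) comes from the text above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LargeMacroblock:
    """NxN macroblock per paragraph [0052]: four (N/2)x(N/2) luminance
    blocks, two chrominance blocks, and a header comprising
    macroblock-type information and coded block pattern (CBP)
    information."""
    n: int            # N of the NxN pixel array, e.g., 32 or 64
    mb_type: str      # macroblock-type information (header)
    cbp_bits: str     # coded block pattern value (header)
    luma: List[list]  # four (N/2)x(N/2) luminance sample arrays
    chroma: List[list]  # two chrominance sample arrays

mb = LargeMacroblock(n=64, mb_type="MB64_type", cbp_bits="0",
                     luma=[[] for _ in range(4)], chroma=[[], []])
print(mb.n, len(mb.luma), len(mb.chroma))  # 64 4 2
```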
[0053] Conventional video coding standards ordinarily prescribe that the defined macroblock size is a 16x16 array of pixels. In accordance with various techniques described in this disclosure, macroblocks may comprise NxN arrays of pixels where N may be greater than 16. Likewise, conventional video coding standards prescribe that an inter-encoded macroblock is typically assigned a single motion vector. In accordance with various techniques described in this disclosure, a plurality of motion vectors may be assigned for inter-encoded partitions of an NxN macroblock, as described in greater detail below. References to "large macroblocks" or similar phrases generally refer to macroblocks with arrays of pixels greater than 16x16.
[0054] In some cases, large macroblocks may support improvements in coding efficiency and/or reductions in data transmission overhead while maintaining or possibly improving image quality. For example, the use of large macroblocks may permit a video encoder and/or decoder to take advantage of increased redundancy provided by video data generated with increased spatial resolution (e.g., 1280x720 or 1920x1080 pixels per frame) and/or increased frame rate (e.g., 30 or 60 frames per second).
[0055] As an illustration, a digital video sequence with a spatial resolution of 1280x720 pixels per frame and a frame rate of 60 frames per second is spatially 36 times larger than and temporally 4 times faster than a digital video sequence with a spatial resolution of 176x144 pixels per frame and a frame rate of 15 frames per second. With increased macroblock size, a video encoder and/or decoder can better exploit increased spatial and/or temporal redundancy to support compression of video data. [0056] Also, by using larger macroblocks, a smaller number of blocks may be encoded for a given frame or slice, reducing the amount of overhead information that needs to be transmitted. In other words, larger macroblocks may permit a reduction in the overall number of macroblocks coded per frame or slice. If the spatial resolution of a frame is increased by four times, for example, then four times as many 16x16 macroblocks would be required for the pixels in the frame. In this example, with 64x64 macroblocks, the number of macroblocks needed to handle the increased spatial resolution is reduced. With a reduced number of macroblocks per frame or slice, for example, the cumulative amount of coding information such as syntax information, motion vector data, and the like can be reduced.
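The overhead argument of paragraphs [0055] and [0056] can be checked with simple arithmetic. The sketch below counts edge macroblocks that only partially cover the frame as whole blocks, which is an assumption about how frame boundaries are padded.

```python
import math

def macroblocks_per_frame(width, height, mb_size):
    """Number of mb_size x mb_size macroblocks needed to tile a frame,
    counting partially covered edge macroblocks as whole blocks."""
    return math.ceil(width / mb_size) * math.ceil(height / mb_size)

# A 1280x720 frame needs 3600 16x16 macroblocks but only 240 64x64
# ones, a 15x reduction in per-macroblock overhead items such as
# headers, CBP values, and motion vector data.
print(macroblocks_per_frame(1280, 720, 16))  # 3600
print(macroblocks_per_frame(1280, 720, 64))  # 240
```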
[0057] In this disclosure, the size of a macroblock generally refers to the number of pixels contained in the macroblock, e.g., 64x64, 32x32, 16x16, or the like. Hence, a large macroblock (e.g., 64x64 or 32x32) may be large in the sense that it contains a larger number of pixels than a 16x16 macroblock. However, the spatial area defined by the vertical and horizontal dimensions of a large macroblock, i.e., as a fraction of the area defined by the vertical and horizontal dimensions of a video frame, may or may not be larger than the area of a conventional 16x16 macroblock. In some examples, the area of the large macroblock may be the same or similar to a conventional 16x16 macroblock. However, the large macroblock has a higher spatial resolution characterized by a higher number and higher spatial density of pixels within the macroblock.
[0058] The size of the macroblock may be configured based at least in part on the number of pixels in the frame, i.e., the spatial resolution in the frame. If the frame has a higher number of pixels, a large macroblock can be configured to have a higher number of pixels. As an illustration, a video encoder may be configured to utilize a 32x32 pixel macroblock for a 1280x720 pixel frame displayed at 30 frames per second. As another illustration, a video encoder may be configured to utilize a 64x64 pixel macroblock for a 1280x720 pixel frame displayed at 60 frames per second.
[0059] Each macroblock encoded by an encoder may require data that describes one or more characteristics of the macroblock. The data may indicate, for example, macroblock type data to represent the size of the macroblock, the way in which the macroblock is partitioned, and the coding mode (spatial or temporal) applied to the macroblock and/or its partitions. In addition, the data may include motion vector difference (mvd) data along with other syntax elements that represents motion vector information for the macroblock and/or its partitions. Also, the data may include a coded block pattern (CBP) value along with other syntax elements to represent residual information after prediction. The macroblock type data may be provided in a single macroblock header for the large macroblock.
[0060] As mentioned above, by utilizing a large macroblock, the encoder may reduce the number of macroblocks per frame or slice, and thereby reduce the amount of net overhead that needs to be transmitted for each frame or slice. Also, by utilizing a large macroblock, the total number of macroblocks may decrease for a particular frame or slice, which may reduce blocky artifacts in video displayed to a user.

[0061] Video coding techniques described in this disclosure may utilize one or more features to support coding of large macroblocks. For example, a large macroblock may be partitioned into smaller partitions. Different coding modes, e.g., different spatial (I) or temporal (P or B) coding modes, may be applied to selected partitions within a large macroblock. Also, hierarchical coded block pattern (CBP) values can be utilized to efficiently identify coded macroblocks and partitions having non-zero transform coefficients representing residual data. In addition, rate-distortion metrics may be compared for coding using large and small macroblock sizes to select a macroblock size producing favorable results. Furthermore, a coded unit (e.g., a frame, slice, sequence, or group of pictures) comprising macroblocks of varying sizes may include a syntax element that indicates the size of the largest macroblock in the coded unit. As described in greater detail below, large macroblocks comprise a different block-level syntax than standard 16x16 pixel blocks. Accordingly, by indicating the size of the largest macroblock in the coded unit, an encoder may signal to a decoder the block-level syntax to apply to the macroblocks of the coded unit.
[0062] Use of different coding modes for different partitions in a large macroblock may be referred to as mixed mode coding of large macroblocks. Instead of coding a large macroblock uniformly such that all partitions have the same intra- or inter-coding mode, a large macroblock may be coded such that some partitions have different coding modes, such as different intra-coding modes (e.g., I 16x16, I 8x8, I 4x4) or intra- and inter-coding modes.
[0063] If a large macroblock is divided into two or more partitions, for example, at least one partition may be coded with a first mode and another partition may be coded with a second mode that is different than the first mode. In some cases, the first mode may be a first I mode and the second mode may be a second I mode, different from the first I mode. In other cases, the first mode may be an I mode and the second mode may be a P or B mode. Hence, in some examples, a large macroblock may include one or more temporally (P or B) coded partitions and one or more spatially (I) coded partitions, or one or more spatially coded partitions with different I modes.
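The mixed-mode condition above can be sketched as follows; the mode labels and function are illustrative, not syntax from the disclosure:

```python
def is_mixed_mode(partition_modes):
    """A large macroblock is coded in 'mixed mode' when its partitions do
    not all share a single coding mode (e.g., an I-mode partition next to
    a P-mode partition, or two different I modes)."""
    return len(set(partition_modes)) > 1

# Four partitions of a large macroblock with differing modes:
assert is_mixed_mode(["I_16x16", "P", "P", "I_4x4"])
assert not is_mixed_mode(["P", "P", "P", "P"])
```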
[0064] One or more hierarchical coded block pattern (CBP) values may be used to efficiently describe whether any partitions in a large macroblock have at least one non-zero transform coefficient and, if so, which partitions. The transform coefficients encode residual data for the large macroblock. A large macroblock-level CBP bit indicates whether any partition in the large macroblock includes a non-zero, quantized coefficient. If not, there is no need to consider whether any of the partitions has a non-zero coefficient, as the entire large macroblock is known to have no non-zero coefficients. In this case, a predictive macroblock can be used to decode the macroblock without residual data.
[0065] Alternatively, if the macroblock-level CBP value indicates that at least one partition in the large macroblock has a non-zero coefficient, then partition-level CBP values can be analyzed to identify which of the partitions includes at least one non-zero coefficient. The decoder then may retrieve appropriate residual data for the partitions having at least one non-zero coefficient, and decode the partitions using the residual data and predictive block data. In some cases, one or more partitions may have nonzero coefficients, and therefore include partition-level CBP values with the appropriate indication. Both the large macroblock and at least some of the partitions may be larger than 16x16 pixels.
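A minimal sketch of the hierarchical CBP decision described above; the function name and bit layout are assumptions for illustration only:

```python
def partitions_to_decode_with_residual(mb_cbp_bit, partition_cbp_bits):
    """If the macroblock-level CBP bit is 0, no partition has a non-zero
    coefficient and the predictive block alone reconstructs the macroblock.
    Otherwise, the partition-level bits identify which partitions need
    residual data."""
    if mb_cbp_bit == 0:
        return []
    return [i for i, bit in enumerate(partition_cbp_bits) if bit]

# 64x64 macroblock with four 32x32 partitions; first and last carry residual:
assert partitions_to_decode_with_residual(1, [1, 0, 0, 1]) == [0, 3]
# A macroblock-level bit of 0 short-circuits the partition-level check:
assert partitions_to_decode_with_residual(0, [1, 0, 0, 1]) == []
```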
[0066] To select macroblock sizes yielding favorable rate-distortion metrics, rate-distortion metrics may be analyzed for both large macroblocks (e.g., 32x32 or 64x64) and small macroblocks (e.g., 16x16). For example, an encoder may compare rate-distortion metrics between 16x16 macroblocks, 32x32 macroblocks, and 64x64 macroblocks for a coded unit, such as a frame or a slice. The encoder may then select the macroblock size that results in the best rate-distortion and encode the coded unit using the selected macroblock size, i.e., the macroblock size with the best rate-distortion.
[0067] The selection may be based on encoding the frame or slice in three or more passes, e.g., a first pass using 16x16 pixel macroblocks, a second pass using 32x32 pixel macroblocks, and a third pass using 64x64 pixel macroblocks, and comparing rate-distortion metrics for each pass. In this manner, an encoder may optimize rate-distortion by varying the macroblock size and selecting the macroblock size that results in the best or optimal rate-distortion for a given coding unit, such as a slice or frame. The encoder may further transmit syntax information for the coded unit, e.g., as part of a frame header or a slice header, that identifies the size of the macroblocks used in the coded unit. As discussed in greater detail below, the syntax information for the coded unit may comprise a maximum size indicator that indicates a maximum size of macroblocks used in the coded unit. In this manner, the encoder may inform a decoder as to what syntax to expect for macroblocks of the coded unit. When the maximum size of macroblocks comprises 16x16 pixels, the decoder may expect standard H.264 syntax and parse the macroblocks according to H.264-specified syntax. However, when the maximum size of macroblocks is greater than 16x16, e.g., comprises 64x64 pixels, the decoder may expect modified and/or additional syntax elements that relate to processing of larger macroblocks, as described by this disclosure, and parse the macroblocks according to such modified or additional syntax.
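The multi-pass selection might be sketched as follows, assuming a caller-supplied `encode_pass` function that returns a combined rate-distortion cost for one encoding pass (the cost model itself, e.g., a Lagrangian of rate and distortion, is outside this sketch):

```python
def select_macroblock_size(encode_pass, frame, candidate_sizes=(16, 32, 64)):
    """Encode the coded unit once per candidate macroblock size and keep
    the size whose pass yields the lowest rate-distortion cost."""
    costs = {size: encode_pass(frame, size) for size in candidate_sizes}
    return min(costs, key=costs.get)

# Toy cost function standing in for a real encoding pass:
fake_costs = {16: 1200.0, 32: 950.0, 64: 1010.0}
best = select_macroblock_size(lambda frame, size: fake_costs[size], frame=None)
assert best == 32
```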
[0068] For some video frames or slices, large macroblocks may present substantial bit rate savings and thereby produce the best rate-distortion results, given relatively low distortion. For other video frames or slices, however, smaller macroblocks may present less distortion, outweighing bit rate in the rate-distortion cost analysis. Hence, in different cases, 64x64, 32x32 or 16x16 may be appropriate for different video frames or slices, e.g., depending on video content and complexity.
[0069] FIG. 1 is a block diagram illustrating an example video encoding and decoding system 10 that may utilize techniques for encoding/decoding digital video data using a large macroblock, i.e., a macroblock that contains more pixels than a 16x16 macroblock. As shown in FIG. 1, system 10 includes a source device 12 that transmits encoded video to a destination device 14 via a communication channel 16. Source device 12 and destination device 14 may comprise any of a wide range of devices. In some cases, source device 12 and destination device 14 may comprise wireless communication devices, such as wireless handsets, so-called cellular or satellite radiotelephones, or any wireless devices that can communicate video information over a communication channel 16, in which case communication channel 16 is wireless. The techniques of this disclosure, however, which concern use of a large macroblock comprising more pixels than macroblocks prescribed by conventional video encoding standards, are not necessarily limited to wireless applications or settings. For example, these techniques may apply to over-the-air television broadcasts, cable television transmissions, satellite television transmissions, Internet video transmissions, encoded digital video that is encoded onto a storage medium, or other scenarios. Accordingly, communication channel 16 may comprise any combination of wireless or wired media suitable for transmission of encoded video data.
[0070] In the example of FIG. 1, source device 12 may include a video source 18, video encoder 20, a modulator/demodulator (modem) 22 and a transmitter 24. Destination device 14 may include a receiver 26, a modem 28, a video decoder 30, and a display device 32. In accordance with this disclosure, video encoder 20 of source device 12 may be configured to apply one or more of the techniques for using, in a video encoding process, a large macroblock having a size that is larger than a macroblock size prescribed by conventional video encoding standards. Similarly, video decoder 30 of destination device 14 may be configured to apply one or more of the techniques for using, in a video decoding process, a macroblock size that is larger than a macroblock size prescribed by conventional video encoding standards.
[0071] The illustrated system 10 of FIG. 1 is merely one example. Techniques for using a large macroblock as described in this disclosure may be performed by any digital video encoding and/or decoding device. Source device 12 and destination device 14 are merely examples of such coding devices in which source device 12 generates coded video data for transmission to destination device 14. In some examples, devices 12, 14 may operate in a substantially symmetrical manner such that each of devices 12, 14 includes video encoding and decoding components. Hence, system 10 may support one-way or two-way video transmission between video devices 12, 14, e.g., for video streaming, video playback, video broadcasting, or video telephony.

[0072] Video source 18 of source device 12 may include a video capture device, such as a video camera, a video archive containing previously captured video, and/or a video feed from a video content provider. As a further alternative, video source 18 may generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video. In some cases, if video source 18 is a video camera, source device 12 and destination device 14 may form so-called camera phones or video phones. As mentioned above, however, the techniques described in this disclosure may be applicable to video coding in general, and may be applied to wireless or wired applications. In each case, the captured, pre-captured, or computer-generated video may be encoded by video encoder 20. The encoded video information may then be modulated by modem 22 according to a communication standard, and transmitted to destination device 14 via transmitter 24. Modem 22 may include various mixers, filters, amplifiers or other components designed for signal modulation. Transmitter 24 may include circuits designed for transmitting data, including amplifiers, filters, and one or more antennas.
[0073] Receiver 26 of destination device 14 receives information over channel 16, and modem 28 demodulates the information. Again, the video encoding process may implement one or more of the techniques described herein to use a large macroblock, e.g., larger than 16x16, for inter (i.e., temporal) and/or intra (i.e., spatial) encoding of video data. The video decoding process performed by video decoder 30 may also use such techniques during the decoding process. The information communicated over channel 16 may include syntax information defined by video encoder 20, which is also used by video decoder 30, that includes syntax elements that describe characteristics and/or processing of the large macroblocks, as discussed in greater detail below. The syntax information may be included in any or all of a frame header, a slice header, a sequence header (for example, with respect to H.264, by using profile and level to which the coded video sequence conforms), or a macroblock header. Display device 32 displays the decoded video data to a user, and may comprise any of a variety of display devices such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device.
[0074] In the example of FIG. 1, communication channel 16 may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines, or any combination of wireless and wired media. Communication channel 16 may form part of a packet-based network, such as a local area network, a wide-area network, or a global network such as the Internet. Communication channel 16 generally represents any suitable communication medium, or collection of different communication media, for transmitting video data from source device 12 to destination device 14, including any suitable combination of wired or wireless media. Communication channel 16 may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 12 to destination device 14.
[0075] Video encoder 20 and video decoder 30 may operate according to a video compression standard, such as the ITU-T H.264 standard, alternatively described as MPEG-4, Part 10, Advanced Video Coding (AVC). The techniques of this disclosure, however, are not limited to any particular coding standard. Other examples include MPEG-2 and ITU-T H.263. Although not shown in FIG. 1, in some aspects, video encoder 20 and video decoder 30 may each be integrated with an audio encoder and decoder, and may include appropriate MUX-DEMUX units, or other hardware and software, to handle encoding of both audio and video in a common data stream or separate data streams. If applicable, MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP).
[0076] The ITU-T H.264/MPEG-4 (AVC) standard was formulated by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG) as the product of a collective partnership known as the Joint Video Team (JVT). In some aspects, the techniques described in this disclosure may be applied to devices that generally conform to the H.264 standard. The H.264 standard is described in ITU-T Recommendation H.264, Advanced Video Coding for generic audiovisual services, by the ITU-T Study Group, and dated March, 2005, which may be referred to herein as the H.264 standard or H.264 specification, or the H.264/AVC standard or specification. The Joint Video Team (JVT) continues to work on extensions to H.264/MPEG-4 AVC.
[0077] Video encoder 20 and video decoder 30 each may be implemented as any of a variety of suitable encoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof. Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective camera, computer, mobile device, subscriber device, broadcast device, set-top box, server, or the like.

[0078] A video sequence typically includes a series of video frames. Video encoder 20 operates on video blocks within individual video frames in order to encode the video data. A video block may correspond to a macroblock or a partition of a macroblock. A video block may further correspond to a partition of a partition. The video blocks may have fixed or varying sizes, and may differ in size according to a specified coding standard or in accordance with the techniques of this disclosure. Each video frame may include a plurality of slices. Each slice may include a plurality of macroblocks, which may be arranged into partitions, also referred to as sub-blocks.
[0079] As an example, the ITU-T H.264 standard supports intra prediction in various block sizes, such as 16 by 16, 8 by 8, or 4 by 4 for luma components, and 8x8 for chroma components, as well as inter prediction in various block sizes, such as 16x16, 16x8, 8x16, 8x8, 8x4, 4x8 and 4x4 for luma components and corresponding scaled sizes for chroma components. In this disclosure, "x" and "by" may be used interchangeably to refer to the pixel dimensions of the block in terms of vertical and horizontal dimensions, e.g., 16x16 pixels or 16 by 16 pixels. In general, a 16x16 block will have 16 pixels in a vertical direction and 16 pixels in a horizontal direction. Likewise, an NxN block generally has N pixels in a vertical direction and N pixels in a horizontal direction, where N represents a positive integer value that may be greater than 16. The pixels in a block may be arranged in rows and columns.
[0080] Block sizes that are less than 16 by 16 may be referred to as partitions of a 16 by 16 macroblock. Likewise, for an NxN block, block sizes less than NxN may be referred to as partitions of the NxN block. The techniques of this disclosure describe intra- and inter-coding for macroblocks larger than the conventional 16x16 pixel macroblock, such as 32x32 pixel macroblocks, 64x64 pixel macroblocks, or larger macroblocks. Video blocks may comprise blocks of pixel data in the pixel domain, or blocks of transform coefficients in the transform domain, e.g., following application of a transform such as a discrete cosine transform (DCT), an integer transform, a wavelet transform, or a conceptually similar transform to the residual video block data representing pixel differences between coded video blocks and predictive video blocks. In some cases, a video block may comprise blocks of quantized transform coefficients in the transform domain.
[0081] Smaller video blocks can provide better resolution, and may be used for locations of a video frame that include high levels of detail. In general, macroblocks and the various partitions, sometimes referred to as sub-blocks, may be considered to be video blocks. In addition, a slice may be considered to be a plurality of video blocks, such as macroblocks and/or sub-blocks. Each slice may be an independently decodable unit of a video frame. Alternatively, frames themselves may be decodable units, or other portions of a frame may be defined as decodable units. The term "coded unit" or "coding unit" may refer to any independently decodable unit of a video frame such as an entire frame, a slice of a frame, a group of pictures (GOP) also referred to as a sequence, or another independently decodable unit defined according to applicable coding techniques.
[0082] Following intra-predictive or inter-predictive coding to produce predictive data and residual data, and following any transforms (such as the 4x4 or 8x8 integer transform used in H.264/AVC or a discrete cosine transform (DCT)) to produce transform coefficients, quantization of transform coefficients may be performed. Quantization generally refers to a process in which transform coefficients are quantized to possibly reduce the amount of data used to represent the coefficients. The quantization process may reduce the bit depth associated with some or all of the coefficients. For example, an n-bit value may be rounded down to an m-bit value during quantization, where n is greater than m.
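A toy scalar quantizer illustrating the bit-depth reduction just described; the real H.264/AVC quantizer uses scaling matrices and rounding offsets, so this shows only the principle:

```python
def quantize(coeff, step):
    """Integer division by a step size shrinks coefficient magnitudes,
    reducing the number of bits needed to represent them (lossy)."""
    return coeff // step

def dequantize(level, step):
    return level * step

level = quantize(200, 16)
assert level == 12                      # smaller value, fewer bits
assert dequantize(level, 16) == 192     # reconstruction error of 8
```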
[0083] Following quantization, entropy coding of the quantized data may be performed, e.g., according to content adaptive variable length coding (CAVLC), context adaptive binary arithmetic coding (CABAC), or another entropy coding methodology. A processing unit configured for entropy coding, or another processing unit, may perform other processing functions, such as zero run length coding of quantized coefficients and/or generation of syntax information such as CBP values, macroblock type, coding mode, maximum macroblock size for a coded unit (such as a frame, slice, macroblock, or sequence), or the like.
[0084] According to various techniques of this disclosure, video encoder 20 may use a macroblock that is larger than that prescribed by conventional video encoding standards to encode digital video data. In one example, video encoder 20 may encode, with a video encoder, a video block having a size of more than 16x16 pixels, generate block-type syntax information that indicates the size of the block, and generate a CBP value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient. The macroblock block-type syntax information may be provided in a macroblock header for the large macroblock. The macroblock block-type syntax information may indicate an address or position of the macroblock in a frame or slice, or a macroblock number that identifies the position of the macroblock, a type of coding mode applied to the macroblock, a quantization value for the macroblock, any motion vector information for the macroblock, and a CBP value for the macroblock.
[0085] In another example, video encoder 20 may receive a video block having a size of more than 16x16 pixels, partition the block into partitions, encode one of the partitions using a first encoding mode, encode another of the partitions using a second encoding mode different from the first encoding mode, and generate block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions.
[0086] In an additional example, video encoder 20 may receive a video coding unit, such as a frame or slice, determine a first rate-distortion metric for encoding the video coding unit using first video blocks with sizes of 16x16 pixels, determine a second rate-distortion metric for encoding the video coding unit using second video blocks with sizes of more than 16x16 pixels, encode the video coding unit using the first video blocks when the first rate-distortion metric is less than the second rate-distortion metric, and encode the video coding unit using the second video blocks when the second rate-distortion metric is less than the first rate-distortion metric.
[0087] In one example, video decoder 30 may receive an encoded video block having a size of more than 16x16 pixels, receive block-type syntax information that indicates the size of the encoded block, receive a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient, and decode the encoded block based on the block-type syntax information and the coded block pattern value for the encoded block.

[0088] In another example, video decoder 30 may receive a video block having a size of more than 16x16 pixels, wherein the block is partitioned into partitions, one of the partitions is intra-encoded and another of the partitions is inter-encoded, receive block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions, and decode the video block based on the block-type syntax information.
[0089] FIG. 2 is a block diagram illustrating an example of a video encoder 50 that may implement techniques for using a large macroblock consistent with this disclosure. Video encoder 50 may correspond to video encoder 20 of source device 12, or a video encoder of a different device. Video encoder 50 may perform intra- and inter-coding of blocks within video frames, including large macroblocks, or partitions or sub-partitions of large macroblocks. Intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame. Inter-coding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames of a video sequence.
[0090] Intra-mode (I-mode) may refer to any of several spatial-based compression modes, and inter-modes such as prediction (P-mode) or bi-directional (B-mode) may refer to any of several temporal-based compression modes. The techniques of this disclosure may be applied both during inter-coding and intra-coding. In some cases, techniques of this disclosure may also be applied to encoding non-video digital pictures. That is, a digital still picture encoder may utilize the techniques of this disclosure to intra-code a digital still picture using large macroblocks in a manner similar to encoding intra-coded macroblocks in video frames in a video sequence.
[0091] As shown in FIG. 2, video encoder 50 receives a current video block within a video frame to be encoded. In the example of FIG. 2, video encoder 50 includes motion compensation unit 35, motion estimation unit 36, intra prediction unit 37, mode select unit 39, reference frame store 34, summer 48, transform unit 38, quantization unit 40, and entropy coding unit 46. For video block reconstruction, video encoder 50 also includes inverse quantization unit 42, inverse transform unit 44, and summer 51. A deblocking filter (not shown in FIG. 2) may also be included to filter block boundaries to remove blockiness artifacts from reconstructed video. If desired, the deblocking filter would typically filter the output of summer 51.
[0092] During the encoding process, video encoder 50 receives a video frame or slice to be coded. The frame or slice may be divided into multiple video blocks, including large macroblocks. Motion estimation unit 36 and motion compensation unit 35 perform inter-predictive coding of the received video block relative to one or more blocks in one or more reference frames to provide temporal compression. Intra prediction unit 37 performs intra-predictive coding of the received video block relative to one or more neighboring blocks in the same frame or slice as the block to be coded to provide spatial compression.
[0093] Mode select unit 39 may select one of the coding modes, intra or inter, e.g., based on error results, and provides the resulting intra- or inter-coded block to summer 48 to generate residual block data and to summer 51 to reconstruct the encoded block for use as a reference frame. In accordance with the techniques of this disclosure, the video block to be coded may comprise a macroblock that is larger than that prescribed by conventional coding standards, i.e., larger than a 16x16 pixel macroblock. For example, the large video block may comprise a 64x64 pixel macroblock or a 32x32 pixel macroblock.
[0094] Motion estimation unit 36 and motion compensation unit 35 may be highly integrated, but are illustrated separately for conceptual purposes. Motion estimation is the process of generating motion vectors, which estimate motion for video blocks. A motion vector, for example, may indicate the displacement of a predictive block within a predictive reference frame (or other coded unit) relative to the current block being coded within the current frame (or other coded unit). A predictive block is a block that is found to closely match the block to be coded, in terms of pixel difference, which may be determined by sum of absolute difference (SAD), sum of square difference (SSD), or other difference metrics.
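The SAD metric mentioned above can be sketched on flat lists of pixel values; real encoders compute this over 2-D blocks with heavily optimized routines:

```python
def sad(block, candidate):
    """Sum of absolute differences between the block being coded and a
    candidate predictive block from the reference frame; the candidate
    with the smallest SAD is the closest match."""
    return sum(abs(a - b) for a, b in zip(block, candidate))

assert sad([10, 20, 30, 40], [12, 18, 30, 41]) == 5
assert sad([10, 20, 30, 40], [10, 20, 30, 40]) == 0  # perfect match
```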
[0095] A motion vector may also indicate displacement of a partition of a large macroblock. In one example with respect to a 64x64 pixel macroblock with a 32x64 partition and two 32x32 partitions, a first motion vector may indicate displacement of the 32x64 partition, a second motion vector may indicate displacement of a first one of the 32x32 partitions, and a third motion vector may indicate displacement of a second one of the 32x32 partitions, all relative to corresponding partitions in a reference frame. Such partitions may also be considered video blocks, as those terms are used in this disclosure. Motion compensation may involve fetching or generating the predictive block based on the motion vector determined by motion estimation. Again, motion estimation unit 36 and motion compensation unit 35 may be functionally integrated.

[0096] Motion estimation unit 36 calculates a motion vector for the video block of an inter-coded frame by comparing the video block to video blocks of a reference frame in reference frame store 34. Motion compensation unit 35 may also interpolate sub-integer pixels of the reference frame, e.g., an I-frame or a P-frame. The ITU H.264 standard refers to reference frames as "lists." Therefore, data stored in reference frame store 34 may also be considered lists. Motion estimation unit 36 compares blocks of one or more reference frames (or lists) from reference frame store 34 to a block to be encoded of a current frame, e.g., a P-frame or a B-frame. When the reference frames in reference frame store 34 include values for sub-integer pixels, a motion vector calculated by motion estimation unit 36 may refer to a sub-integer pixel location of a reference frame. Motion estimation unit 36 sends the calculated motion vector to entropy coding unit 46 and motion compensation unit 35. The reference frame block identified by a motion vector may be referred to as a predictive block. Motion compensation unit 35 calculates error values for the predictive block of the reference frame.
[0097] Motion compensation unit 35 may calculate prediction data based on the predictive block. Video encoder 50 forms a residual video block by subtracting the prediction data from motion compensation unit 35 from the original video block being coded. Summer 48 represents the component or components that perform this subtraction operation. Transform unit 38 applies a transform, such as a discrete cosine transform (DCT) or a conceptually similar transform, to the residual block, producing a video block comprising residual transform coefficient values. Transform unit 38 may perform other transforms, such as those defined by the H.264 standard, which are conceptually similar to DCT. Wavelet transforms, integer transforms, sub-band transforms or other types of transforms could also be used. In any case, transform unit 38 applies the transform to the residual block, producing a block of residual transform coefficients. The transform may convert the residual information from a pixel value domain to a transform domain, such as a frequency domain.
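The subtraction performed by summer 48 amounts to a pixel-wise difference, sketched here on flat lists for brevity:

```python
def residual_block(original, prediction):
    """Residual = original pixels minus the motion-compensated prediction;
    this difference block is what transform unit 38 then transforms."""
    return [o - p for o, p in zip(original, prediction)]

assert residual_block([100, 102, 98, 101], [99, 100, 100, 101]) == [1, 2, -2, 0]
```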
[0098] Quantization unit 40 quantizes the residual transform coefficients to further reduce bit rate. The quantization process may reduce the bit depth associated with some or all of the coefficients. In one example, quantization unit 40 may establish a different degree of quantization for each 64x64 pixel macroblock according to a luminance quantization parameter, referred to in this disclosure as QPY. Quantization unit 40 may further modify the luminance quantization parameter used during quantization of a 64x64 macroblock based on a quantization parameter modifier, referred to herein as "MB64_delta_QP," and a previously encoded 64x64 pixel macroblock.

[0099] Each 64x64 pixel large macroblock may comprise an individual MB64_delta_QP value, in the range between -26 and +25, inclusive. In general, video encoder 50 may establish the MB64_delta_QP value for a particular block based on a desired bitrate for transmitting the encoded version of the block. The MB64_delta_QP value of a first 64x64 pixel macroblock may be equal to the QP value of a frame or slice that includes the first 64x64 pixel macroblock, e.g., in the frame/slice header. QPY for a current 64x64 pixel macroblock may be calculated according to the formula:

QPY = (QPY,PREV + MB64_delta_QP + 52) % 52

where QPY,PREV refers to the QPY value of the previous 64x64 pixel macroblock in the decoding order of the current slice/frame, and where "%" refers to the modulo operator such that N%52 returns a result between 0 and 51, inclusive, corresponding to the remainder value of N divided by 52. For a first macroblock in a frame/slice, QPY,PREV may be set equal to the frame/slice QP sent in the frame/slice header.

[00100] In one example, quantization unit 40 presumes that the MB64_delta_QP value is equal to zero when a MB64_delta_QP value is not defined for a particular 64x64 pixel macroblock, including "skip" type macroblocks, such as P_Skip and B_Skip macroblock types. In some examples, additional delta QP values (generally referred to as quantization parameter modification values) may be defined for finer grain quantization control of partitions within a 64x64 pixel macroblock, such as MB32_delta_QP values for each 32x32 pixel partition of a 64x64 pixel macroblock. In some examples, each partition of a 64x64 macroblock may be assigned an individual quantization parameter. Using an individualized quantization parameter for each partition may result in more efficient quantization of a macroblock, e.g., to better adjust quantization for a non-homogeneous area, instead of using a single QP for a 64x64 macroblock. Each quantization parameter modification value may be included as syntax information with the corresponding encoded block, and a decoder may decode the encoded block by dequantizing, i.e., inverse quantizing, the encoded block according to the quantization parameter modification value.
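The QPY update formula can be checked directly; the wrap-around keeps QPY in the 0-51 range used by H.264 (the function name is illustrative):

```python
def next_qp_y(qp_y_prev, mb64_delta_qp):
    """Apply the MB64_delta_QP modifier; adding 52 before the modulo
    keeps the result non-negative for negative deltas (range -26..+25)."""
    assert -26 <= mb64_delta_qp <= 25
    return (qp_y_prev + mb64_delta_qp + 52) % 52

assert next_qp_y(30, 0) == 30
assert next_qp_y(30, -5) == 25
assert next_qp_y(2, -5) == 49    # wraps around rather than going negative
```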
[0100] Following quantization, entropy coding unit 46 entropy codes the quantized transform coefficients. For example, entropy coding unit 46 may perform content adaptive variable length coding (CAVLC), context adaptive binary arithmetic coding (CABAC), or another entropy coding technique. Following the entropy coding by entropy coding unit 46, the encoded video may be transmitted to another device or archived for later transmission or retrieval. The coded bitstream may include entropy coded residual transform coefficient blocks, motion vectors for such blocks, MB64_delta_QP values for each 64x64 pixel macroblock, and other syntax elements including, for example, macroblock-type identifier values, coded unit headers indicating the maximum size of macroblocks in the coded unit, QPy values, coded block pattern (CBP) values, values that identify a partitioning method of a macroblock or sub-block, and transform size flag values, as discussed in greater detail below. In the case of context adaptive binary arithmetic coding, context may be based on neighboring macroblocks.
[0101] In some cases, entropy coding unit 46 or another unit of video encoder 50 may be configured to perform other coding functions, in addition to entropy coding. For example, entropy coding unit 46 may be configured to determine the CBP values for the large macroblocks and partitions. Entropy coding unit 46 may apply a hierarchical CBP scheme to provide a CBP value for a large macroblock that indicates whether any partitions in the macroblock include non-zero transform coefficient values and, if so, other CBP values to indicate whether particular partitions within the large macroblock have non-zero transform coefficient values. Also, in some cases, entropy coding unit 46 may perform run length coding of the coefficients in a large macroblock or subpartition. In particular, entropy coding unit 46 may apply a zig-zag scan or other scan pattern to scan the transform coefficients in a macroblock or partition and encode runs of zeros for further compression. Entropy coding unit 46 also may construct header information with appropriate syntax elements for transmission in the encoded video bitstream. [0102] Inverse quantization unit 42 and inverse transform unit 44 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block in the pixel domain, e.g., for later use as a reference block. Motion compensation unit 35 may calculate a reference block by adding the residual block to a predictive block of one of the frames of reference frame store 34. Motion compensation unit 35 may also apply one or more interpolation filters to the reconstructed residual block to calculate sub-integer pixel values. Summer 51 adds the reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 35 to produce a reconstructed video block for storage in reference frame store 34. 
The reconstructed video block may be used by motion estimation unit 36 and motion compensation unit 35 as a reference block to inter-code a block in a subsequent video frame. The large macroblock may comprise a 64x64 pixel macroblock, a 32x32 pixel macroblock, or other macroblock that is larger than the size prescribed by conventional video coding standards.
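For illustration only, the summation performed by summer 51 (adding the reconstructed residual block to the prediction block, with clipping to the valid sample range) may be sketched on toy 2x2 blocks; the function name and block sizes are illustrative:

```python
def reconstruct_block(prediction, residual, bit_depth=8):
    """Add the decoded residual block to the prediction block and clip each
    sample to the valid range, yielding the reconstructed block that would
    be stored in the reference frame store."""
    max_val = (1 << bit_depth) - 1
    return [[max(0, min(max_val, p + r)) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(prediction, residual)]

pred = [[200, 10], [128, 255]]   # toy prediction block
resid = [[100, -20], [0, 5]]     # toy residual block
recon = reconstruct_block(pred, resid)  # [[255, 0], [128, 255]]
```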
[0103] FIG. 3 is a block diagram illustrating an example of a video decoder 60, which decodes a video sequence that is encoded in the manner described in this disclosure. The encoded video sequence may include encoded macroblocks that are larger than the size prescribed by conventional video encoding standards. For example, the encoded macroblocks may be 32x32 pixel or 64x64 pixel macroblocks. In the example of FIG. 3, video decoder 60 includes an entropy decoding unit 52, motion compensation unit 54, intra prediction unit 55, inverse quantization unit 56, inverse transformation unit 58, reference frame store 62 and summer 64. Video decoder 60 may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder 50 (FIG. 2). Motion compensation unit 54 may generate prediction data based on motion vectors received from entropy decoding unit 52. [0104] Entropy decoding unit 52 entropy-decodes the received bitstream to generate quantized coefficients and syntax elements (e.g., motion vectors, CBP values, QPy values, transform size flag values, MB64_delta_QP values). Entropy decoding unit 52 may parse the bitstream to identify syntax information in coded units such as frames, slices and/or macroblock headers. Syntax information for a coded unit comprising a plurality of macroblocks may indicate the maximum size of the macroblocks, e.g., 16x16 pixels, 32x32 pixels, 64x64 pixels, or other larger-sized macroblocks in the coded unit. The syntax information for a block is forwarded from entropy decoding unit 52 to either motion compensation unit 54 or intra-prediction unit 55, e.g., depending on the coding mode of the block. A decoder may use the maximum size indicator in the syntax of a coded unit to select a syntax decoder for the coded unit. Using the syntax decoder specified for the maximum size, the decoder can then properly interpret and process the large-sized macroblocks included in the coded unit.
[0105] Motion compensation unit 54 may use motion vectors received in the bitstream to identify a prediction block in reference frames in reference frame store 62. Intra prediction unit 55 may use intra prediction modes received in the bitstream to form a prediction block from spatially adjacent blocks. Inverse quantization unit 56 inverse quantizes, i.e., de-quantizes, the quantized block coefficients provided in the bitstream and decoded by entropy decoding unit 52. The inverse quantization process may include a conventional process, e.g., as defined by the H.264 decoding standard. The inverse quantization process may also include use of a quantization parameter QPy calculated by encoder 50 for each 64x64 macroblock to determine a degree of quantization and, likewise, a degree of inverse quantization that should be applied. [0106] Inverse transform unit 58 applies an inverse transform, e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process, to the transform coefficients in order to produce residual blocks in the pixel domain. Motion compensation unit 54 produces motion compensated blocks, possibly performing interpolation based on interpolation filters. Identifiers for interpolation filters to be used for motion estimation with sub-pixel precision may be included in the syntax elements. Motion compensation unit 54 may use interpolation filters as used by video encoder 50 during encoding of the video block to calculate interpolated values for sub-integer pixels of a reference block. Motion compensation unit 54 may determine the interpolation filters used by video encoder 50 according to received syntax information and use the interpolation filters to produce predictive blocks.
[0107] Motion compensation unit 54 uses some of the syntax information to determine sizes of macroblocks used to encode frame(s) of the encoded video sequence, partition information that describes how each macroblock of a frame of the encoded video sequence is partitioned, modes indicating how each partition is encoded, one or more reference frames (or lists) for each inter-encoded macroblock or partition, and other information to decode the encoded video sequence.
[0108] Summer 64 sums the residual blocks with the corresponding prediction blocks generated by motion compensation unit 54 or intra-prediction unit to form decoded blocks. If desired, a deblocking filter may also be applied to filter the decoded blocks in order to remove blockiness artifacts. The decoded video blocks are then stored in reference frame store 62, which provides reference blocks for subsequent motion compensation and also produces decoded video for presentation on a display device (such as device 32 of FIG. 1). The decoded video blocks may each comprise a 64x64 pixel macroblock, 32x32 pixel macroblock, or other larger-than-standard macroblock. Some macroblocks may include partitions with a variety of different partition sizes. [0109] FIG. 4A is a conceptual diagram illustrating example partitioning among various partition levels of a large macroblock. Blocks of each partition level include a number of pixels corresponding to the particular level. Four partitioning patterns are also shown for each level, where a first partition pattern includes the whole block, a second partition pattern includes two horizontal partitions of equal size, a third partition pattern includes two vertical partitions of equal size, and a fourth partition pattern includes four equally-sized partitions. One of the partitioning patterns may be chosen for each partition at each partition level.
[0110] In the example of FIG. 4A, level 0 corresponds to a 64x64 pixel macroblock partition of luma samples and associated chroma samples. Level 1 corresponds to a 32x32 pixel block of luma samples and associated chroma samples. Level 2 corresponds to a 16x16 pixel block of luma samples and associated chroma samples, and level 3 corresponds to an 8x8 pixel block of luma samples and associated chroma samples.
[0111] In other examples, additional levels could be introduced to utilize larger or smaller numbers of pixels. For example, level 0 could begin with a 128x128 pixel macroblock, a 256x256 pixel macroblock, or other larger-sized macroblock. The highest-numbered level, in some examples, could be as fine-grain as a single pixel, i.e., a 1x1 block. Hence, from the lowest to highest levels, partitioning may be increasingly sub-partitioned, such that the macroblock is partitioned, partitions are further partitioned, further partitions are still further partitioned, and so forth. In some instances, partitions below level 0, i.e., partitions of partitions, may be referred to as sub-partitions.
[0112] When a block at one level is partitioned using four equally-sized sub-blocks, any or all of the sub-blocks may be partitioned according to the partition patterns of the next level. That is, for an NxN block that has been partitioned at level x into four equally sized sub-blocks (N/2)x(N/2), any of the (N/2)x(N/2) sub-blocks can be further partitioned according to any of the partition patterns of level x+1. Thus, a 32x32 pixel sub-block of a 64x64 pixel macroblock at level 0 can be further partitioned according to any of the patterns shown in FIG. 4A at level 1, e.g., 32x32, 32x16 and 32x16, 16x32 and 16x32, or 16x16, 16x16, 16x16 and 16x16. Likewise, where four 16x16 pixel sub-blocks result from a 32x32 pixel sub-block being partitioned, each of the 16x16 pixel sub-blocks can be further partitioned according to any of the patterns shown in FIG. 4A at level 2. Where four 8x8 pixel sub-blocks result from a 16x16 pixel sub-block being partitioned, each of the 8x8 pixel sub-blocks can be further partitioned according to any of the patterns shown in FIG. 4A at level 3.
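For illustration only, the four partition patterns and the recursion over quartered sub-blocks described above may be sketched as follows; the functions and the layout count are illustrative, not part of any coding syntax:

```python
def partition_patterns(n):
    """The four partition patterns of an NxN block (cf. FIG. 4A): the whole
    block, two horizontal halves, two vertical halves, or four quarters."""
    h = n // 2
    return [
        [f"{n}x{n}"],         # no split
        [f"{n}x{h}"] * 2,     # e.g. 32x16 and 32x16
        [f"{h}x{n}"] * 2,     # e.g. 16x32 and 16x32
        [f"{h}x{h}"] * 4,     # e.g. four 16x16 partitions
    ]

def count_layouts(n, last_level=8):
    """Count distinct leaf layouts of an NxN block when only the four
    (N/2)x(N/2) quarters may be partitioned further: the first three
    patterns are terminal, and the fourth recurses into each quarter."""
    if n == last_level:
        return 4  # at the last partition level, all four patterns are leaves
    return 3 + count_layouts(n // 2, last_level) ** 4
```

For example, a 16x16 block admits 3 + 4**4 = 259 distinct layouts under this recursion, and the number grows rapidly for 32x32 and 64x64 macroblocks, which is what allows large homogeneous areas and fine detail to be represented adaptively.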
[0113] Using the example four levels of partitions shown in FIG. 4A, large homogeneous areas and fine sporadic changes can be adaptively represented by an encoder implementing the framework and techniques of this disclosure. For example, video encoder 50 may determine different partitioning levels for different macroblocks, as well as coding modes to apply to such partitions, e.g., based on rate-distortion analysis. Also, as described in greater detail below, video encoder 50 may encode at least some of the final partitions differently, using spatial (I-encoded) or temporal (P-encoded or B-encoded) prediction, e.g., based on rate-distortion metric results or other considerations. [0114] Instead of coding a large macroblock uniformly such that all partitions have the same intra- or inter-coding mode, a large macroblock may be coded such that some partitions have different coding modes. For example, some (at least one) partitions may be coded with different intra-coding modes (e.g., I_16x16, I_8x8, I_4x4) relative to other (at least one) partitions in the same macroblock. Also, some (at least one) partitions may be intra-coded while other (at least one) partitions in the same macroblock are inter-coded.
[0115] For example, video encoder 50 may, for a 32x32 block with four 16x16 partitions, encode some of the 16x16 partitions using spatial prediction and other 16x16 partitions using temporal prediction. As another example, video encoder 50 may, for a 32x32 block with four 16x16 partitions, encode one or more of the 16x16 partitions using a first spatial prediction mode (e.g., one of I_16x16, I_8x8, I_4x4) and one or more other 16x16 partitions using a different spatial prediction mode (e.g., one of I_16x16, I_8x8, I_4x4).
[0116] FIG. 4B is a conceptual diagram illustrating assignment of different coding modes to different partitions of a large macroblock. In particular, FIG. 4B illustrates assignment of an I_16x16 intra-coding mode to an upper left 16x16 block of a large 32x32 macroblock, I_8x8 intra-coding modes to upper right and lower left 16x16 blocks of the large 32x32 macroblock, and an I_4x4 intra-coding mode to a lower right 16x16 block of the large 32x32 macroblock. In some cases, the coding modes illustrated in FIG. 4B may be H.264 intra-coding modes for luma coding.
[0117] In the manner described, each partition can be further partitioned on a selective basis, and each final partition can be selectively coded using either temporal prediction or spatial prediction, and using selected temporal or spatial coding modes. Consequently, it is possible to code a large macroblock with mixed modes such that some partitions in the macroblock are intra-coded and other partitions in the same macroblock are inter-coded, or some partitions in the same macroblock are coded with different intra-coding modes or different inter-coding modes.
[0118] Video encoder 50 may further define each partition according to a macroblock type. The macroblock type may be included as a syntax element in an encoded bitstream, e.g., as a syntax element in a macroblock header. In general, the macroblock type may be used to identify how the macroblock is partitioned, and the respective methods or modes for encoding each of the partitions of the macroblock, as discussed above. Methods for encoding the partitions may include not only intra- and inter-coding, but also particular modes of intra-coding (e.g., I_16x16, I_8x8, I_4x4) or inter-coding (e.g., P_ or B_ 16x16, 16x8, 8x16, 8x8, 8x4, 4x8 and 4x4). [0119] As discussed with respect to the example of Table 1 below in greater detail for P-blocks and with respect to the example of Table 2 below for B-blocks, partition level 0 blocks may be defined according to an MB64_type syntax element, representative of a macroblock with 64x64 pixels. Similar type definitions may be formed for any MB[N]_type, where [N] refers to a block with NxN pixels, where N is a positive integer that may be greater than 16. When an NxN block has four partitions of size (N/2)x(N/2), as shown in the last column of FIG. 4A, each of the four partitions may receive its own type definition, e.g., MB[N/2]_type. For example, for a 64x64 pixel block (of type MB64_type) with four 32x32 pixel partitions, video encoder 50 may introduce an MB32_type for each of the four 32x32 pixel partitions. These macroblock type syntax elements may assist decoder 60 in decoding large macroblocks and various partitions of large macroblocks, as described in this disclosure. Each NxN pixel macroblock where N is greater than 16 generally corresponds to a unique type definition. Accordingly, the encoder may generate syntax appropriate for the particular macroblock and indicate to the decoder the maximum size of macroblocks in a coded unit, such as a frame, slice, or sequence of macroblocks.
In this manner, the decoder may receive an indication of a syntax decoder to apply to macroblocks of the coded unit. This also ensures that the decoder may be backwards-compatible with existing coding standards, such as H.264, in that the encoder may indicate the type of syntax decoders to apply to the macroblocks, e.g., standard H.264 or those specified for processing of larger macroblocks according to the techniques of this disclosure. [0120] In general, each MB[N]_type definition may represent, for a corresponding type, a number of pixels in a block of the corresponding type (e.g., 64x64), a reference frame (or reference list) for the block, a number of partitions for the block, the size of each partition of the block, how each partition is encoded (e.g., intra or inter and particular modes), and the reference frame (or reference list) for each partition of the block when the partition is inter-coded. For 16x16 and smaller blocks, video encoder 50 may, in some examples, use conventional type definitions as the types of the blocks, such as types specified by the H.264 standard. In other examples, video encoder 50 may apply newly defined block types for 16x16 and smaller blocks.
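For illustration only, the information that an MB[N]_type definition represents may be sketched as a record; the class and field names are hypothetical and do not correspond to any standardized syntax elements:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class MacroblockType:
    """Illustrative record of what an MB[N]_type definition represents:
    block size, partitioning, per-partition coding mode, and the
    reference frame/list for each inter-coded partition."""
    n: int                                 # the block is NxN pixels, e.g. 64
    num_partitions: int                    # 1, 2, or 4
    partition_sizes: List[Tuple[int, int]] # size of each partition
    partition_modes: List[str]             # e.g. "I_16x16", "P_16x16"
    reference_lists: List[Optional[int]]   # None for intra-coded partitions

mb64 = MacroblockType(
    n=64, num_partitions=4,
    partition_sizes=[(32, 32)] * 4,
    partition_modes=["I_16x16", "P_16x16", "P_16x16", "B_16x16"],
    reference_lists=[None, 0, 0, 1],
)
```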
[0121] Video encoder 50 may evaluate both conventional inter- or intra-coding methods using normal macroblock sizes and partitions, such as methods prescribed by ITU H.264, and inter- or intra-coding methods using the larger macroblocks and partitions described by this disclosure, and compare the rate-distortion characteristics of each approach to determine which method results in the best rate-distortion performance. Video encoder 50 then may select, and apply to the block to be coded, the best coding approach, including inter- or intra-mode, macroblock size (large, larger or normal), and partitioning, based on optimal or acceptable rate-distortion results for the coding approach. As an illustration, video encoder 50 may select the use of 64x64 macroblocks, 32x32 macroblocks or 16x16 macroblocks to encode a particular frame or slice based on rate-distortion results produced when the video encoder uses such macroblock sizes.
[0122] In general, two different approaches may be used to design intra modes using large macroblocks. As one example, during intra-coding, spatial prediction may be performed for a block based on neighboring blocks directly. In accordance with the techniques of this disclosure, video encoder 50 may generate spatial predictive 32x32 blocks based on their neighboring pixels directly and generate spatial predictive 64x64 blocks based on their neighboring pixels directly. In this manner, spatial prediction may be performed at a larger scale compared to 16x16 intra blocks. Therefore, these techniques may, in some examples, result in some bit rate savings, e.g., with a smaller number of blocks or partitions per frame or slice.
[0123] As another example, video encoder 50 may group four NxN blocks together to generate an (N*2)x(N*2) block, and then encode the (N*2)x(N*2) block. Using existing H.264 intra-coding modes, video encoder 50 may group four intra-coded blocks together, thereby forming a large intra-coded macroblock. For example, four intra-coded blocks, each having a size of 16x16, can be grouped together to form a large, 32x32 intra-coded block. Video encoder 50 may encode each of the four corresponding NxN blocks using a different encoding mode, e.g., I_16x16, I_8x8, or I_4x4 according to H.264. In this manner, each 16x16 block can be assigned its own mode of spatial prediction by video encoder 50, e.g., to promote favorable encoding results. [0124] Video encoder 50 may design intra modes according to either of the two different methods discussed above, and analyze the different methods to determine which approach provides better encoding results. For example, video encoder 50 may apply the different intra mode approaches, and place them in a single candidate pool to allow them to compete with each other for the best rate-distortion performance. Using a rate-distortion comparison between the different approaches, video encoder 50 can determine how to encode each partition and/or macroblock. In particular, video encoder 50 may select the coding modes that produce the best rate-distortion performance for a given macroblock, and apply those coding modes to encode the macroblock. [0125] FIG. 5 is a conceptual diagram illustrating a hierarchical view of various partition levels of a large macroblock. FIG. 5 also represents the relationships between various partition levels of a large macroblock as described with respect to FIG. 4A. Each block of a partition level, as illustrated in the example of FIG. 5, may have a corresponding coded block pattern (CBP) value. The CBP values form part of the syntax information that describes a block or macroblock.
In one example, the CBP values are each one-bit syntax values that indicate whether or not there are any nonzero transform coefficient values in a given block following transform and quantization operations.
[0126] In some cases, a prediction block may be very close in pixel content to a block to be coded such that all of the residual transform coefficients are quantized to zero, in which case there may be no need to transmit transform coefficients for the coded block. Instead, the CBP value for the block may be set to zero to indicate that the coded block includes no non-zero coefficients. Alternatively, if a block includes at least one nonzero coefficient, the CBP value may be set to one. Decoder 60 may use CBP values to identify residual blocks that are coded, i.e., with one or more non-zero transform coefficients, versus blocks that are not coded, i.e., including no non-zero transform coefficients.
[0127] In accordance with some of the techniques described in this disclosure, an encoder may assign CBP values to large macroblocks hierarchically based on whether those macroblocks, including their partitions, have at least one non-zero coefficient, and assign CBP values to the partitions to indicate which partitions have non-zero coefficients. Hierarchical CBP for large macroblocks can facilitate processing of large macroblocks to quickly identify coded large macroblocks and uncoded large macroblocks, and permit identification of coded partitions at each partition level for the large macroblock to determine whether it is necessary to use residual data to decode the blocks.
[0128] In one example, a 64x64 pixel macroblock at level zero may include syntax information comprising a CBP64 value, e.g., a one-bit value, to indicate whether the entire 64x64 pixel macroblock, including any partitions, has non-zero coefficients or not. In one example, video encoder 50 "sets" the CBP64 bit, e.g., to a value of "1," to represent that the 64x64 pixel macroblock includes at least one non-zero coefficient. Thus, when the CBP64 value is set, e.g., to a value of "1," the 64x64 pixel macroblock includes at least one non-zero coefficient somewhere in the macroblock. In another example, video encoder 50 "clears" the CBP64 value, e.g., to a value of "0," to represent that the 64x64 pixel macroblock has all zero coefficients. Thus, when the CBP64 value is cleared, e.g., to a value of "0," the 64x64 pixel macroblock is indicated as having all zero coefficients. Macroblocks with CBP64 values of "0" do not generally require transmission of residual data in the bitstream, whereas macroblocks with CBP64 values of "1" generally require transmission of residual data in the bitstream for use in decoding such macroblocks.
[0129] A 64x64 pixel macroblock that has all zero coefficients need not include CBP values for partitions or sub-blocks thereof. That is, because the 64x64 pixel macroblock has all zero coefficients, each of the partitions also necessarily has all zero coefficients. Conversely, a 64x64 pixel macroblock that includes at least one non-zero coefficient may further include CBP values for the partitions at the next partition level. For example, a CBP64 with a value of one may include additional syntax information in the form of a one-bit value CBP32 for each 32x32 partition of the 64x64 block. That is, in one example, each 32x32 pixel partition (such as the four partition blocks of level 1 in FIG. 5) of a 64x64 pixel macroblock is assigned a CBP32 value as part of the syntax information of the 64x64 pixel macroblock. As with the CBP64 value, each CBP32 value may comprise a bit that is set to a value of one when the corresponding 32x32 pixel block has at least one non-zero coefficient and that is cleared to a value of zero when the corresponding 32x32 pixel block has all zero coefficients. The encoder may further indicate, in syntax of a coded unit comprising a plurality of macroblocks, such as a frame, slice, or sequence, the maximum size of a macroblock in the coded unit, to indicate to the decoder how to interpret the syntax information of each macroblock, e.g., which syntax decoder to use for processing of macroblocks in the coded unit.
[0130] In this manner, a 64x64 pixel macroblock that has all zero coefficients may use a single bit to represent the fact that the macroblock has all zero coefficients, whereas a 64x64 pixel macroblock with at least one non-zero coefficient may include CBP syntax information comprising at least five bits, a first bit to represent that the 64x64 pixel macroblock has a non-zero coefficient and four additional bits, each representative of whether a corresponding one of four 32x32 pixel partitions of the macroblock includes at least one non-zero coefficient. In some examples, when the first three of the four additional bits are zero, the fourth additional bit may not be included, which the decoder may interpret as the last partition being one. That is, the encoder may determine that the last bit has a value of one when the first three bits are zero and when the bit representative of the higher level hierarchy has a value of one. For example, a prefix of a CBP64 value of "10001" may be shortened to "1000," as the first bit indicates that at least one of the four partitions has non-zero coefficients, and the next three zeros indicate that the first three partitions have all zero coefficients. Therefore, a decoder may deduce that it is the last partition that includes a non-zero coefficient, without the explicit bit informing the decoder of this fact, e.g., from the bit string "1000." That is, the decoder may interpret the CBP64 prefix "1000" as "10001."
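For illustration only, the shortening of the trailing CBP bit described above may be sketched as follows; function names are hypothetical:

```python
def encode_child_cbp(child_bits):
    """Encode the four CBP bits of the partitions under a parent whose own
    CBP bit is set (so at least one child bit must be 1). When the first
    three bits are 0, the fourth bit is omitted: the decoder can infer it
    must be 1, because the parent bit promised a non-zero partition."""
    assert len(child_bits) == 4 and any(child_bits)
    return child_bits[:3] if child_bits[:3] == [0, 0, 0] else child_bits

def decode_child_cbp(bits):
    """Recover all four child bits, re-inserting the inferred trailing 1."""
    return bits + [1] if bits == [0, 0, 0] else bits
```

This mirrors the example in the text: the partition bits "0001" are transmitted as "000", so the five-bit pattern "10001" is shortened to "1000" and the decoder re-expands it.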
[0131] Likewise, a one-bit CBP32 may be set to a value of "1" when the 32x32 pixel partition includes at least one non-zero coefficient, and to a value of "0" when all of the coefficients have a value of zero. If a 32x32 pixel partition has a CBP value of 1, then partitions of that 32x32 partition at the next partition level may be assigned CBP values to indicate whether the respective partitions include any non-zero coefficients. Hence, the CBP values may be assigned in a hierarchical manner at each partition level until there are no further partition levels or no partitions including non-zero coefficients. [0132] In the above manner, encoders and/or decoders may utilize hierarchical CBP values to represent whether a large macroblock (e.g., 64x64 or 32x32) and partitions thereof include at least one non-zero coefficient or all zero coefficients. Accordingly, an encoder may encode a large macroblock of a coded unit of a digital video stream, such that the macroblock comprises greater than 16x16 pixels, generate block-type syntax information that identifies the size of the block, generate a CBP value for the block, such that the CBP value identifies whether the block includes at least one non-zero coefficient, and generate additional CBP values for various partition levels of the block, if applicable.
[0133] In one example, the hierarchical CBP values may comprise an array of bits (e.g., a bit vector) whose length depends on the values of the prefix. The array may further represent a hierarchy of CBP values, such as a tree structure, as shown in FIG. 5. The array may represent nodes of the tree in a breadth-first manner, where each node corresponds to a bit in the array. When a node of the tree has a bit that is set to "1," in one example, the node has four branches (corresponding to the four partitions), and when the bit is cleared to "0," the node has no branches. [0134] In this example, to identify the values of the nodes that branch from a particular node X, an encoder and/or a decoder may determine the four consecutive bits starting at node Y that represent the nodes that branch from node X by calculating:
y = 4 * (∑_{i=0}^{x} tree[i]) - 3
where tree[] corresponds to the array of bits with a starting index of 0, i is an integer index into the array tree[], x corresponds to the index of node X in tree[], and y corresponds to the index of node Y that is the first branch-node of node X. The three subsequent array positions (i.e., y+1, y+2, and y+3) correspond to the other branch-nodes of node X.
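For illustration only, the index computation above may be sketched as follows, assuming tree[] holds one bit per node in breadth-first order:

```python
def first_child_index(tree, x):
    """Index y of the first of the four branch-nodes of node X in the
    breadth-first CBP bit array: y = 4 * sum(tree[0..x]) - 3."""
    assert tree[x] == 1, "a cleared node has no branches"
    return 4 * sum(tree[: x + 1]) - 3

# Three-level example: root set, level-1 nodes 1 and 2 set, level-2 leaves.
tree = [1,  1, 1, 0, 0,  1, 0, 0, 0,  0, 0, 1, 0]
# Children of the root (x = 0) occupy indices 1-4; children of node 1 start
# at 4 * (1 + 1) - 3 = 5, and children of node 2 at 4 * 3 - 3 = 9.
```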
[0135] An encoder, such as video encoder 50 (FIG. 2), may assign CBP values for 16x16 pixel partitions of the 32x32 pixel partitions with at least one non-zero coefficient using existing methods, such as methods prescribed by ITU H.264 for setting CBP values for 16x16 blocks, as part of the syntax of the 64x64 pixel macroblock. The encoder may also select CBP values for the partitions of the 32x32 pixel partitions that have at least one non-zero coefficient based on the size of the partitions, a type of block corresponding to the partitions (e.g., chroma block or luma block), or other characteristics of the partitions. Example methods for setting a CBP value of a partition of a 32x32 pixel partition are discussed in further detail with respect to FIGS. 8 and 9. [0136] FIGS. 6-9 are flowcharts illustrating example methods for setting various coded block pattern (CBP) values in accordance with the techniques of this disclosure. Although the example methods of FIGS. 6-9 are discussed with respect to a 64x64 pixel macroblock, it should be understood that similar techniques may apply for assigning hierarchical CBP values for other sizes of macroblocks. Although the examples of FIGS. 6-9 are discussed with respect to video encoder 50 (FIG. 2), it should be understood that other encoders may employ similar methods to assign CBP values to larger-than-standard macroblocks. Likewise, decoders may utilize similar, albeit reciprocal, methods for interpreting the meaning of a particular CBP value for a macroblock. For example, if an inter-coded macroblock received in the bitstream has a CBP value of "0," the decoder may receive no residual data for the macroblock and may simply produce a predictive block identified by a motion vector as the decoded macroblock, or a group of predictive blocks identified by motion vectors with respect to partitions of the macroblock.
[0137] FIG. 6 is a flowchart illustrating an example method for setting a CBP64 value of an example 64x64 pixel macroblock. Similar methods may be applied for macroblocks larger than 64x64. Initially, video encoder 50 receives a 64x64 pixel macroblock (100). Motion estimation unit 36 and motion compensation unit 35 may then generate one or more motion vectors and one or more residual blocks, respectively, to encode the macroblock. The output of transform unit 38 generally comprises an array of residual transform coefficient values for an intra-coded block or a residual block of an inter-coded block, which array is quantized by quantization unit 40 to produce a series of quantized transform coefficients.
[0138] Entropy coding unit 46 may provide other coding functions in addition to entropy coding. For example, in addition to CAVLC, CABAC, or other entropy coding functions, entropy coding unit 46 or another unit of video encoder 50 may determine CBP values for the large macroblocks and partitions. In particular, entropy coding unit 46 may determine the CBP64 value for a 64x64 pixel macroblock by first determining whether the macroblock has at least one non-zero, quantized transform coefficient (102). When entropy coding unit 46 determines that all of the transform coefficients have a value of zero ("NO" branch of 102), entropy coding unit 46 clears the CBP64 value for the 64x64 macroblock, e.g., resets a bit for the CBP64 value to "0" (104). When entropy coding unit 46 identifies at least one non-zero coefficient ("YES" branch of 102) for the 64x64 macroblock, entropy coding unit 46 sets the CBP64 value, e.g., sets a bit for the CBP64 value to "1" (106). [0139] When the macroblock has all zero coefficients, entropy coding unit 46 does not need to establish any additional CBP values for the partitions of the macroblock, which may reduce overhead. In one example, when the macroblock has at least one non-zero coefficient, however, entropy coding unit 46 proceeds to determine CBP values for each of the four 32x32 pixel partitions of the 64x64 pixel macroblock (108). Entropy coding unit 46 may utilize the method described with respect to FIG. 7 four times, once for each of the four partitions, to establish four CBP32 values, each corresponding to a different one of the four 32x32 pixel partitions of the 64x64 macroblock.
In this manner, when a macroblock has all zero coefficients, entropy coding unit 46 may transmit a single bit with a value of "0" to indicate that the macroblock has all zero coefficients, whereas when the macroblock has at least one non-zero coefficient, entropy coding unit 46 may transmit five bits, one bit for the macroblock and four bits, each corresponding to one of the four partitions of the macroblock. In addition, when a partition includes at least one non-zero coefficient, residual data for the partition may be sent in the encoded bitstream. As with the example of the CBP64 discussed above, when the first three of the four additional bits are zero, the fourth additional bit may not be necessary, because the decoder may determine that it has a value of one. Thus in some examples, the encoder may only send three zeros, i.e., "000," rather than three zeros and a one, i.e., "0001."
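The omission of the implied fourth bit can be sketched as follows. This is only an illustration of the signaling logic described above; the function names are hypothetical and do not appear in any standard.

```python
def encode_sub_cbps(cbp_bits):
    """Given the four partition CBP bits of a macroblock whose CBP64 is set
    (so at least one bit is 1), drop the trailing bit when it is implied."""
    assert any(cbp_bits), "a set CBP64 implies at least one nonzero partition"
    if cbp_bits[:3] == [0, 0, 0]:
        return [0, 0, 0]  # the fourth bit must be 1; the decoder can infer it
    return list(cbp_bits)

def decode_sub_cbps(bits):
    """Inverse mapping performed by the decoder."""
    if bits == [0, 0, 0]:
        return [0, 0, 0, 1]
    return list(bits)
```

For example, partition bits [0, 0, 0, 1] may be signaled as "000", while [1, 0, 1, 1] is signaled in full.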
[0140] FIG. 7 is a flowchart illustrating an example method for setting a CBP32 value of a 32x32 pixel partition of a 64x64 pixel macroblock. Initially, for the next partition level, entropy coding unit 46 receives a 32x32 pixel partition of the macroblock (110), e.g., one of the four partitions referred to with respect to FIG. 6. Entropy coding unit 46 then determines a CBP32 value for the 32x32 pixel partition by first determining whether the partition includes at least one non-zero coefficient (112). When entropy coding unit 46 determines that all of the coefficients for the partition have a value of zero ("NO" branch of 112), entropy coding unit 46 clears the CBP32 value, e.g., resets a bit for the CBP32 value to "0" (114). When entropy coding unit 46 identifies at least one non-zero coefficient of the partition ("YES" branch of 112), entropy coding unit 46 sets the CBP32 value, e.g., sets a bit for the CBP32 value to a value of "1" (116).

[0141] In one example, when the partition has all zero coefficients, entropy coding unit 46 does not establish any additional CBP values for the partition. When a partition includes at least one non-zero coefficient, however, entropy coding unit 46 determines CBP values for each of the four 16x16 pixel partitions of the 32x32 pixel partition of the macroblock. Entropy coding unit 46 may utilize the method described with respect to FIG. 8 to establish four CBP16 values, each corresponding to one of the four 16x16 pixel partitions.
[0142] In this manner, when a partition has all zero coefficients, entropy coding unit 46 may set a bit with a value of "0" to indicate that the partition has all zero coefficients, whereas when the partition has at least one non-zero coefficient, entropy coding unit 46 may include five bits, one bit for the partition and four bits each corresponding to a different one of the four sub-partitions of the partition of the macroblock. Hence, each additional partition level may present four additional CBP bits when the partition in the preceding partition level had at least one nonzero transform coefficient value. As one example, if a 64x64 macroblock has a CBP value of 1, and four 32x32 partitions have CBP values of 1, 0, 1 and 1, respectively, the overall CBP value up to that point is 11011. Additional CBP bits may be added for additional partitions of the 32x32 partitions, e.g., into 16x16 partitions.
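The hierarchical signaling can be illustrated with a short sketch. The helper names are hypothetical, and coefficients are given as flat lists for simplicity:

```python
def has_nonzero(coeffs):
    # True when at least one quantized transform coefficient is nonzero
    return any(c != 0 for c in coeffs)

def cbp_bits(partitions):
    """partitions: four coefficient lists, one per 32x32 partition of a
    64x64 macroblock. Returns the CBP64 bit plus, when it is set, one
    CBP32 bit per partition, as a bit string."""
    if not any(has_nonzero(p) for p in partitions):
        return "0"  # single bit: the whole macroblock is all zero
    return "1" + "".join("1" if has_nonzero(p) else "0" for p in partitions)
```

With partitions holding nonzero, zero, nonzero, and nonzero coefficients, respectively, the function returns "11011", matching the worked example above.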
[0143] FIG. 8 is a flowchart illustrating an example method for setting a CBP16 value of a 16x16 pixel partition of a 32x32 pixel partition of a 64x64 pixel macroblock. For certain 16x16 pixel partitions, video encoder 50 may utilize CBP values as prescribed by a video coding standard, such as ITU H.264, as discussed below. For other 16x16 partitions, video encoder 50 may utilize CBP values in accordance with other techniques of this disclosure. Initially, as shown in FIG. 8, entropy coding unit 46 receives a 16x16 partition (120), e.g., one of the 16x16 partitions of a 32x32 partition described with respect to FIG. 7.
[0144] Entropy coding unit 46 may then determine whether a motion partition for the 16x16 pixel partition is larger than an 8x8 pixel block (122). In general, a motion partition describes a partition in which motion is concentrated. For example, a 16x16 pixel partition with only one motion vector may be considered a 16x16 motion partition. Similarly, for a 16x16 pixel partition with two 8x16 partitions, each having one motion vector, each of the two 8x16 partitions may be considered an 8x16 motion partition. In any case, when the motion partition is not larger than an 8x8 pixel block ("NO" branch of 122), entropy coding unit 46 assigns a CBP value to the 16x16 pixel partition in the same manner as prescribed by ITU H.264 (124), in the example of FIG. 8.

[0145] When there exists a motion partition for the 16x16 pixel partition that is larger than an 8x8 pixel block ("YES" branch of 122), entropy coding unit 46 constructs and sends a lumacbp16 value (125) using the steps following step 125. In the example of FIG. 8, to construct the lumacbp16 value, entropy coding unit 46 determines whether the 16x16 pixel luma component of the partition has at least one non-zero coefficient (126). When the 16x16 pixel luma component has all zero coefficients ("NO" branch of 126), entropy coding unit 46 assigns the CBP16 value according to the Coded Block Pattern Chroma portion of ITU H.264 (128), in the example of FIG. 8.
[0146] When entropy coding unit 46 determines that the 16x16 pixel luma component has at least one non-zero coefficient ("YES" branch of 126), entropy coding unit 46 determines a transform-size flag for the 16x16 pixel partition (130). The transform-size flag generally indicates a transform being used for the partition. The transform represented by the transform-size flag may include one of a 4x4 transform, an 8x8 transform, a 16x16 transform, a 16x8 transform, or an 8x16 transform. The transform-size flag may comprise an integer value that corresponds to an enumerated value that identifies one of the possible transforms. Entropy coding unit 46 may then determine whether the transform-size flag represents that the transform size is greater than or equal to 16x8 (or 8x16) (132).
[0147] When the transform-size flag does not indicate that the transform size is greater than or equal to 16x8 (or 8x16) ("NO" branch of 132), entropy coding unit 46 assigns a value to CBP16 according to ITU H.264 (134), in the example of FIG. 8. When the transform-size flag indicates that the transform size is greater than or equal to 16x8 (or 8x16) ("YES" branch of 132), entropy coding unit 46 then determines whether a type for the 16x16 pixel partition is either two 16x8 or two 8x16 pixel partitions (136).

[0148] When the type for the 16x16 pixel partition is not two 16x8 and not two 8x16 pixel partitions ("NO" branch of 136), entropy coding unit 46 assigns the CBP16 value according to the Coded Block Pattern Chroma prescribed by ITU H.264 (140), in the example of FIG. 8. When the type for the 16x16 pixel partition is either two 16x8 or two 8x16 pixel partitions ("YES" branch of 136), entropy coding unit 46 also uses the Coded Block Pattern Chroma prescribed by ITU H.264, but in addition assigns the CBP16 value a two-bit luma16x8_CBP value (142), e.g., according to the method described with respect to FIG. 9.
[0149] FIG. 9 is a flowchart illustrating an example method for determining a two-bit luma16x8_CBP value. Entropy coding unit 46 receives a 16x16 pixel partition that is further partitioned into two 16x8 or two 8x16 pixel partitions (150). Entropy coding unit 46 generally assigns each bit of luma16x8_CBP according to whether a corresponding sub-block of the 16x16 pixel partition includes at least one non-zero coefficient.
[0150] Entropy coding unit 46 examines a first sub-block of the 16x16 pixel partition to determine whether the first sub-block has at least one non-zero coefficient (152). When the first sub-block has all zero coefficients ("NO" branch of 152), entropy coding unit 46 clears the first bit of luma16x8_CBP, e.g., assigns luma16x8_CBP[0] a value of "0" (154). When the first sub-block has at least one non-zero coefficient ("YES" branch of 152), entropy coding unit 46 sets the first bit of luma16x8_CBP, e.g., assigns luma16x8_CBP[0] a value of "1" (156).
[0151] Entropy coding unit 46 also determines whether a second sub-block of the 16x16 pixel partition has at least one non-zero coefficient (158). When the second sub-block has all zero coefficients ("NO" branch of 158), entropy coding unit 46 clears the second bit of luma16x8_CBP, e.g., assigns luma16x8_CBP[1] a value of "0" (160). When the second sub-block has at least one non-zero coefficient ("YES" branch of 158), entropy coding unit 46 then sets the second bit of luma16x8_CBP, e.g., assigns luma16x8_CBP[1] a value of "1" (162).
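A minimal sketch of the FIG. 9 logic follows. The function name mirrors the syntax element; each sub-block is given as a flat coefficient list for illustration:

```python
def luma16x8_cbp(sub_block0, sub_block1):
    # One bit per 16x8 (or 8x16) sub-block: 1 if the sub-block has at
    # least one nonzero coefficient, otherwise 0.
    return [
        1 if any(c != 0 for c in sub_block0) else 0,
        1 if any(c != 0 for c in sub_block1) else 0,
    ]
```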
[0152] The following pseudocode provides one example implementation of the methods described with respect to FIGS. 8 and 9:

if (motion partition bigger than 8x8)
{
    lumacbp16
    if (lumacbp16 != 0)
    {
        transform_size_flag
        if (transform_size_flag == TRANSFORM_SIZE_GREATER_THAN_16x8)
        {
            if ((mb16_type == P_16x8) OR (mb16_type == P_8x16))
            {
                luma16x8_cbp
                chroma_cbp
            }
            else
                chroma_cbp
        }
        else
            h264_cbp
    }
    else
        chroma_cbp
}
else
    h264_cbp
[0153] In the pseudocode, "lumacbp16" corresponds to an operation of appending a one-bit flag indicating whether an entire 16x16 luma block has nonzero coefficients or not. When "lumacbp16" equals one, there is at least one nonzero coefficient. The function "transform_size_flag" refers to a calculation performed having a result that indicates the transform being used, e.g., one of a 4x4 transform, 8x8 transform, 16x16 transform (for motion partition equal to or bigger than 16x16), 16x8 transform (for P_16x8), or 8x16 transform (for P_8x16).
TRANSFORM_SIZE_GREATER_THAN_16x8 is an enumerated value (e.g., "2") that is used to indicate that a transform size is greater than or equal to 16x8 or 8x16. The result of transform_size_flag is incorporated into the syntax information of the 64x64 pixel macroblock.
[0154] "luma16x8_cbp" refers to a calculation that produces a two-bit number with each bit indicating whether one of the two partitions of P_16x8 or P_8x16 has nonzero coefficients or not. The two-bit number resulting from luma16x8_cbp is incorporated into the syntax of the 64x64 pixel macroblock. The value "chroma_cbp" may be calculated in the same manner as the CodedBlockPatternChroma as prescribed by ITU H.264. The calculated chroma_cbp value is incorporated into the syntax information of the 64x64 pixel macroblock. The function "h264_cbp" may be calculated in the same way as the CBP defined in ITU H.264. The calculated h264_cbp value is incorporated into the syntax information of the 64x64 pixel macroblock.
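The branching above can be made executable as a sketch that returns the list of syntax elements the encoder would emit for one 16x16 partition. The parameter names are illustrative, and the enumerated flag value follows the example value given above:

```python
TRANSFORM_SIZE_GREATER_THAN_16x8 = 2  # example enumerated value from the text

def cbp16_syntax(motion_partition_larger_than_8x8, lumacbp16,
                 transform_size_flag=None, mb16_type=None):
    """Mirror of the pseudocode branches for one 16x16 partition; returns
    the names of the syntax elements emitted, in order."""
    if not motion_partition_larger_than_8x8:
        return ["h264_cbp"]
    out = ["lumacbp16"]
    if lumacbp16 == 0:
        return out + ["chroma_cbp"]
    out.append("transform_size_flag")
    if transform_size_flag != TRANSFORM_SIZE_GREATER_THAN_16x8:
        return out + ["h264_cbp"]
    if mb16_type in ("P_16x8", "P_8x16"):
        return out + ["luma16x8_cbp", "chroma_cbp"]
    return out + ["chroma_cbp"]
```

Each branch of the sketch corresponds to one leaf of the pseudocode, so walking any argument combination reproduces the element ordering shown there.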
[0155] In general, a method according to FIGS. 6-9 may include encoding, with a video encoder, a video block having a size of more than 16x16 pixels, generating block-type syntax information that indicates the size of the block, and generating a coded block pattern value for the encoded block, wherein the coded block pattern value indicates whether the encoded block includes at least one non-zero coefficient.

[0156] FIG. 10 is a block diagram illustrating an example arrangement of a 64x64 pixel macroblock. The macroblock of FIG. 10 comprises four 32x32 partitions, labeled A, B, C, and D in FIG. 10. As discussed with respect to FIG. 4A, in one example, a block may be partitioned in any one of four ways: the entire block (64x64) with no sub-partitions, two equal-sized horizontal partitions (64x32 and 64x32), two equal-sized vertical partitions (32x64 and 32x64), or four equal-sized square partitions (32x32, 32x32, 32x32 and 32x32).
[0157] In the example of FIG. 10, the whole block partition comprises each of blocks A, B, C, and D; a first one of the two equal-sized horizontal partitions comprises A and B, while a second one of the two equal-sized horizontal partitions comprises C and D; a first one of the two equal-sized vertical partitions comprises A and C, while a second one of the two equal-sized vertical partitions comprises B and D; and the four equal-sized square partitions correspond to A, B, C, and D, respectively. Similar partition schemes can be used for any size block, e.g., larger than 64x64 pixels, 32x32 pixels, 16x16 pixels, 8x8 pixels, or other sizes of video blocks.
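The groupings just described can be captured as a small table. The labels follow FIG. 10; this is merely a restatement of the text, not syntax from any standard:

```python
# Each scheme lists its partitions as groups of the 32x32 quadrants A-D.
PARTITION_SCHEMES = {
    "whole":      [["A", "B", "C", "D"]],
    "horizontal": [["A", "B"], ["C", "D"]],  # top and bottom halves
    "vertical":   [["A", "C"], ["B", "D"]],  # left and right halves
    "quarters":   [["A"], ["B"], ["C"], ["D"]],
}
```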
[0158] When a video block is intra-coded, various methods may be used for partitioning the video block. Moreover, each of the partitions may be intra-coded differently, i.e., with a different mode, such as different intra-modes. For example, a 32x32 partition, such as partition A of FIG. 10, may be further partitioned into four equal-sized blocks of size 16x16 pixels. As one example, ITU H.264 describes three different methods for intra-encoding a 16x16 macroblock, including intra-coding at the 16x16 level, intra-coding at the 8x8 level, and intra-coding at the 4x4 level. However, ITU H.264 prescribes encoding each partition of a 16x16 macroblock using the same intra-coding mode. Therefore, according to ITU H.264, if one sub-block of a 16x16 macroblock is to be intra-coded at the 4x4 level, every sub-block of the 16x16 macroblock must be intra-coded at the 4x4 level.
[0159] An encoder configured according to the techniques of this disclosure, on the other hand, may apply a mixed mode approach. For intra-coding, for example, a large macroblock may have various partitions encoded with different coding modes. As an illustration, in a 32x32 partition, one 16x16 partition may be intra-coded at the 4x4 pixel level, while other 16x16 partitions may be intra-coded at the 8x8 pixel level, and one 16x16 partition may be intra-coded at the 16x16 level, e.g., as shown in FIG. 4B.

[0160] When a video block is to be partitioned into four equal-sized sub-blocks for intra-coding, the first block to be intra-coded may be the upper-left block, followed by the block immediately to the right of the first block, followed by the block immediately beneath the first block, and finally followed by the block beneath and to the right of the first block. With reference to the example block of FIG. 10, the order of intra-coding would proceed from A to B to C and finally to D. Although FIG. 10 depicts a 64x64 pixel macroblock, intra-coding of a partitioned block of a different size may follow this same ordering.
[0161] When a video block is to be inter-coded as part of a P-frame or P-slice, the block may be partitioned into any of the four above-described partitions, each of which may be separately encoded. That is, each partition of the block may be encoded according to a different encoding mode, either intra-encoded (I-coded) or inter-encoded with reference to a single reference frame/slice/list (P-coded). Table 1, below, summarizes inter-encoding information for each potential partition of a block of size NxN. Where Table 1 refers to "M," M = N/2. In Table 1 below, L0 refers to "list 0," i.e., the reference frame/slice/list. When deciding how to best partition the NxN block, an encoder, such as video encoder 50, may analyze rate-distortion cost information for each MB_N_type (i.e., each type of partition) based on a Lagrange multiplier, as discussed in greater detail with respect to FIG. 11, selecting the lowest cost as the best partition method.
TABLE 1
[Table 1, listing MB_N_type values, partition names, and prediction modes for P-coded NxN blocks, appears as an image (imgf000044_0001) in the published application.]
[0162] In Table 1 above, elements of the column "MB_N_type" are keys for each type of partition of an NxN block. Elements of the column "Name of MB_N_type" are names of different partitioning types of an NxN block. "P" in the name refers to the block being inter-coded using P-coding, i.e., with reference to a single frame/slice/list. "L0" in the name refers to the reference frame/slice/list, e.g., "list 0," used as reference frames or slices for P coding. "NxN" refers to the partition being the whole block, "NxM" refers to the partition being two partitions of width N and height M, "MxN" refers to the partition being two partitions of width M and height N, and "MxM" refers to the partition being four equal-sized partitions each with width M and height M.

[0163] In Table 1, PN_Skip implies that the block was "skipped," e.g., because the block resulting from coding had all zero coefficients. Elements of the column "Prediction Mode part 1" refer to the reference frame/slice/list for sub-partition 1 of the partition, while elements of the column "Prediction Mode part 2" refer to the reference frame/slice/list for sub-partition 2 of the partition. Because P_L0_NxN has only a single partition, the corresponding element of "Prediction Mode part 2" is "N/A," as there is no second sub-partition. For PN_MxM, there exist four partition blocks that may be separately encoded. Therefore, both prediction mode columns for PN_MxM include "N/A." PN_Skip, as with P_L0_NxN, has only a single part, so the corresponding element of column "Prediction Mode part 2" is "N/A."

[0164] Table 2, below, includes similar columns and elements to those of Table 1. However, Table 2 corresponds to various encoding modes for an inter-coded block using bi-directional prediction (B-encoded). Therefore, each partition may be encoded by either or both of a first frame/slice/list (L0) and a second frame/slice/list (L1). "BiPred" refers to the corresponding partition being predicted from both L0 and L1.
In Table 2, column labels and values are similar in meaning to those used in Table 1.
TABLE 2
[Table 2, listing MB_N_type values, partition names, and prediction modes for B-coded NxN blocks, appears as an image (imgf000046_0001) in the published application.]
[0165] FIG. 11 is a flowchart illustrating an example method for calculating optimal partitioning and encoding methods for an NxN pixel video block. In general, the method of FIG. 11 comprises calculating the cost for each different encoding method (e.g., various spatial or temporal modes) as applied to each different partitioning method shown in, e.g., FIG. 4A, and selecting the combination of encoding mode and partitioning method with the best rate-distortion cost for the NxN pixel video block. Cost can be generally calculated using a Lagrange multiplier with rate and distortion values, such that the rate-distortion cost = distortion + λ * rate, where distortion represents error between an original block and a coded block and rate represents the bit rate necessary to support the coding mode. In some cases, rate and distortion may be determined on a macroblock, partition, slice or frame level.
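The cost function can be sketched directly. Here distortion is the sum of absolute differences (SAD) over flat pixel lists, one of the metrics named in this disclosure; λ and the bit count would come from the encoder:

```python
def rd_cost(original, reconstructed, rate_bits, lam):
    """Lagrangian rate-distortion cost: distortion + lambda * rate."""
    # SAD distortion between the original and reconstructed pixels
    distortion = sum(abs(o - r) for o, r in zip(original, reconstructed))
    return distortion + lam * rate_bits
```

For instance, a block differing from its reconstruction by a SAD of 2 and costing 100 bits has cost 2 + λ·100.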
[0166] Initially, video encoder 50 receives an NxN video block to be encoded (170). For example, video encoder 50 may receive a 64x64 large macroblock or a partition thereof, such as, for example, a 32x32 or 16x16 partition, for which video encoder 50 is to select an encoding and partitioning method. Video encoder 50 then calculates the cost to encode the NxN block (172) using a variety of different coding modes, such as different intra- and inter-coding modes. To calculate the cost to spatially encode the NxN block, video encoder 50 may calculate the distortion and the bitrate needed to encode the NxN block with a given coding mode, and then calculate cost = distortion(Mode, NxN) + λ * rate(Mode, NxN). Video encoder 50 may encode the macroblock using the specified coding technique and determine the resulting bit rate cost and distortion. The distortion may be determined based on a pixel difference between the pixels in the coded macroblock and the pixels in the original macroblock, e.g., based on a sum of absolute difference (SAD) metric, sum of square difference (SSD) metric, or other pixel difference metric.
[0167] Video encoder 50 may then partition the NxN block into two equally-sized non-overlapping horizontal Nx(N/2) partitions. Video encoder 50 may calculate the cost to encode each of the partitions using various coding modes (176). For example, to calculate the cost to encode the first Nx(N/2) partition, video encoder 50 may calculate the distortion and the bitrate to encode the first Nx(N/2) partition, and then calculate cost = distortion(Mode, FIRST PARTITION, Nx(N/2)) + λ * rate(Mode, FIRST PARTITION, Nx(N/2)).
[0168] Video encoder 50 may then partition the NxN block into two equally-sized non-overlapping vertical (N/2)xN partitions. Video encoder 50 may calculate the cost to encode each of the partitions using various coding modes (178). For example, to calculate the cost to encode the first one of the (N/2)xN partitions, video encoder 50 may calculate the distortion and the bitrate to encode the first (N/2)xN partition, and then calculate cost = distortion(Mode, FIRST PARTITION, (N/2)xN) + λ * rate(Mode, FIRST PARTITION, (N/2)xN). Video encoder 50 may perform a similar calculation for the cost to encode the second one of the (N/2)xN macroblock partitions.
[0169] Video encoder 50 may then partition the NxN block into four equally-sized non-overlapping (N/2)x(N/2) partitions. Video encoder 50 may calculate the cost to encode the partitions using various coding modes (180). To calculate the cost to encode the (N/2)x(N/2) partitions, video encoder 50 may first calculate the distortion and the bitrate to encode the upper-left (N/2)x(N/2) partition and find the cost thereof as cost(Mode, UPPER-LEFT, (N/2)x(N/2)) = distortion(Mode, UPPER-LEFT, (N/2)x(N/2)) + λ * rate(Mode, UPPER-LEFT, (N/2)x(N/2)). Video encoder 50 may similarly calculate the cost of each (N/2)x(N/2) block in the order: (1) upper-left partition, (2) upper-right partition, (3) bottom-left partition, (4) bottom-right partition. Video encoder 50 may, in some examples, make recursive calls to this method on one or more of the (N/2)x(N/2) partitions to calculate the cost of partitioning and separately encoding each of the (N/2)x(N/2) partitions further, e.g., as (N/2)x(N/4) partitions, (N/4)x(N/2) partitions, and (N/4)x(N/4) partitions.

[0170] Next, video encoder 50 may determine which combination of partitioning and encoding mode produced the best, i.e., lowest, cost in terms of rate and distortion (182). For example, video encoder 50 may compare the best cost of encoding two adjacent (N/2)x(N/2) partitions to the best cost of encoding the Nx(N/2) partition comprising the two adjacent (N/2)x(N/2) partitions. When the aggregate cost of encoding the two adjacent (N/2)x(N/2) partitions exceeds the cost to encode the Nx(N/2) partition comprising them, video encoder 50 may select the lower-cost option of encoding the Nx(N/2) partition. In general, video encoder 50 may apply every combination of partitioning method and encoding mode for each partition to identify a lowest cost partitioning and encoding method. In some cases, video encoder 50 may be configured to evaluate a more limited set of partitioning and encoding mode combinations.

[0171] Upon determining the best, e.g., lowest cost, partitioning and encoding methods, video encoder 50 may encode the NxN macroblock using the best-cost determined method (184). In some cases, the result may be a large macroblock having partitions that are coded using different coding modes. The ability to apply mixed mode coding to a large macroblock, such that different coding modes are applied to different partitions in the large macroblock, may permit the macroblock to be coded with reduced cost.
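The recursive search over the partitioning choices can be sketched as follows. Here whole_cost stands in for the per-mode rate-distortion evaluation; it is an assumed interface for illustration, not part of the disclosure:

```python
def best_cost(x, y, n, whole_cost, min_size=16):
    """Lowest cost of coding the n x n region at (x, y): as a whole, as
    two Nx(N/2) horizontal halves, as two (N/2)xN vertical halves, or as
    four (N/2)x(N/2) quarters, each quarter searched recursively.
    whole_cost(x, y, w, h) -> RD cost of coding that region unsplit."""
    best = whole_cost(x, y, n, n)
    if n <= min_size:
        return best
    h = n // 2
    # two horizontal Nx(N/2) partitions (top and bottom)
    best = min(best, whole_cost(x, y, n, h) + whole_cost(x, y + h, n, h))
    # two vertical (N/2)xN partitions (left and right)
    best = min(best, whole_cost(x, y, h, n) + whole_cost(x + h, y, h, n))
    # four quarters, in upper-left, upper-right, bottom-left, bottom-right order
    quarters = (best_cost(x, y, h, whole_cost, min_size)
                + best_cost(x + h, y, h, whole_cost, min_size)
                + best_cost(x, y + h, h, whole_cost, min_size)
                + best_cost(x + h, y + h, h, whole_cost, min_size))
    return min(best, quarters)
```

With a cost model that charges a fixed overhead per partition, the search correctly prefers coding the block whole, mirroring the comparison described above.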
[0172] In some examples, a method for coding with mixed modes may include receiving, with video encoder 50, a video block having a size of more than 16x16 pixels, partitioning the block into partitions, encoding one of the partitions with a first encoding mode, encoding another of the partitions with a second coding mode different from the first encoding mode, and generating block-type syntax information that indicates the size of the block and identifies the partitions and the encoding modes used to encode the partitions.

[0173] FIG. 12 is a block diagram illustrating an example 64x64 pixel large macroblock with various partitions and different selected encoding methods for each partition. In the example of FIG. 12, each partition is labeled with one of an "I," "P," or "B." Partitions labeled "I" are partitions for which an encoder has elected to utilize intra-coding, e.g., based on rate-distortion evaluation. Partitions labeled "P" are partitions for which the encoder has elected to utilize single-reference inter-coding, e.g., based on rate-distortion evaluation. Partitions labeled "B" are partitions for which the encoder has elected to utilize bi-predicted inter-coding, e.g., based on rate-distortion evaluation. In the example of FIG. 12, different partitions within the same large macroblock have different coding modes, including different partition or sub-partition sizes and different intra- or inter-coding modes.
[0174] The large macroblock is a macroblock identified by a macroblock syntax element that identifies the macroblock type, e.g., mb64_type or mb32_type, for a given coding standard such as an extension of the H.264 coding standard. The macroblock type syntax element may be provided as a macroblock header syntax element in the encoded video bitstream. The I-, P- and B-coded partitions illustrated in FIG. 12 may be coded according to different coding modes, e.g., intra- or inter-prediction modes with various block sizes, including large block size modes for large partitions greater than 16x16 in size or H.264 modes for partitions that are less than or equal to 16x16 in size.

[0175] In one example, an encoder, such as video encoder 50, may use the example method described with respect to FIG. 11 to select various encoding modes and partition sizes for different partitions and sub-partitions of the example large macroblock of FIG. 12. For example, video encoder 50 may receive a 64x64 macroblock, execute the method of FIG. 11, and produce the example macroblock of FIG. 12 with various partition sizes and coding modes as a result. It should be understood, however, that selections for partitioning and encoding modes may result from application of the method of FIG. 11, e.g., based on the type of frame from which the macroblock was selected and based on the input macroblock upon which the method is executed. For example, when the frame comprises an I-frame, each partition will be intra-encoded. As another example, when the frame comprises a P-frame, each partition may either be intra-encoded or inter-coded based on a single reference frame (i.e., without bi-prediction).
[0176] The example macroblock of FIG. 12 is assumed to have been selected from a bi-predicted frame (B-frame) for purposes of illustration. In other examples, where a macroblock is selected from a P-frame, video encoder 50 would not encode a partition using bi-directional prediction. Likewise, where a macroblock is selected from an I-frame, video encoder 50 would not encode a partition using inter-coding, either P-encoding or B-encoding. However, in any case, video encoder 50 may select various partition sizes for different portions of the macroblock and elect to encode each partition using any available encoding mode.
[0177] In the example of FIG. 12, it is assumed that a combination of partition and mode selection based on rate-distortion analysis has resulted in one 32x32 B-coded partition, one 32x32 P-coded partition, one 16x32 I-coded partition, one 32x16 B-coded partition, one 16x16 P-coded partition, one 16x8 P-coded partition, one 8x16 P-coded partition, one 8x8 P-coded partition, one 8x8 B-coded partition, one 8x8 I-coded partition, and numerous smaller sub-partitions having various coding modes. The example of FIG. 12 is provided for purposes of conceptual illustration of mixed mode coding of partitions in a large macroblock, and should not necessarily be considered representative of actual coding results for a particular large 64x64 macroblock.

[0178] FIG. 13 is a flowchart illustrating an example method for determining an optimal size of a macroblock for encoding a frame or slice of a video sequence. Although described with respect to selecting an optimal size of a macroblock for a frame, a method similar to that described with respect to FIG. 13 may be used to select an optimal size of a macroblock for a slice. Likewise, although the method of FIG. 13 is described with respect to video encoder 50, it should be understood that any encoder may utilize the example method of FIG. 13 to determine an optimal (e.g., least cost) size of a macroblock for encoding a frame of a video sequence. In general, the method of FIG. 13 comprises performing an encoding pass three times, once for each of a 16x16 macroblock, a 32x32 macroblock, and a 64x64 macroblock, and a video encoder may calculate rate-distortion metrics for each pass to determine which macroblock size provides the best rate-distortion.
[0179] Video encoder 50 may first encode a frame using 16x16 pixel macroblocks during a first encoding pass (190), e.g., using a function encode(frame, MB16_type), to produce an encoded frame F16. After the first encoding pass, video encoder 50 may calculate the bit rate and distortion based on the use of 16x16 pixel macroblocks as R16 and D16, respectively (192). Video encoder 50 may then calculate a rate-distortion metric in the form of the cost of using 16x16 pixel macroblocks, C16, using the Lagrange multiplier: C16 = D16 + λ*R16 (194). Coding modes and partition sizes may be selected for the 16x16 pixel macroblocks, for example, according to the H.264 standard.

[0180] Video encoder 50 may then encode the frame using 32x32 pixel macroblocks during a second encoding pass (196), e.g., using a function encode(frame, MB32_type), to produce an encoded frame F32. After the second encoding pass, video encoder 50 may calculate the bit rate and distortion based on the use of 32x32 pixel macroblocks as R32 and D32, respectively (198). Video encoder 50 may then calculate a rate-distortion metric in the form of the cost of using 32x32 pixel macroblocks, C32, using the Lagrange multiplier: C32 = D32 + λ*R32 (200). Coding modes and partition sizes may be selected for the 32x32 pixel macroblocks, for example, using rate and distortion evaluation techniques as described with reference to FIGS. 11 and 12.
[0181] Video encoder 50 may then encode the frame using 64x64 pixel macroblocks during a third encoding pass (202), e.g., using a function encode(frame, MB64_type), to produce an encoded frame F64. After the third encoding pass, video encoder 50 may calculate the bit rate and distortion based on the use of 64x64 pixel macroblocks as R64 and D64, respectively (204). Video encoder 50 may then calculate a rate-distortion metric in the form of the cost of using 64x64 pixel macroblocks, C64, using the Lagrange multiplier: C64 = D64 + λ*R64 (206). Coding modes and partition sizes may be selected for the 64x64 pixel macroblocks, for example, using rate and distortion evaluation techniques as described with reference to FIGS. 11 and 12.
[0182] Next, video encoder 50 may determine which of the metrics C16, C32, and C64 is lowest for the frame (208). Video encoder 50 may elect to use the frame encoded with the macroblock size that resulted in the lowest cost (210). Thus, for example, when C16 is lowest, video encoder 50 may forward frame F16, encoded with the 16x16 macroblocks, as the encoded frame in a bitstream for storage or transmission to a decoder. When C32 is lowest, video encoder 50 may forward F32, encoded with the 32x32 macroblocks. When C64 is lowest, video encoder 50 may forward F64, encoded with the 64x64 macroblocks.
[0183] In other examples, video encoder 50 may perform the encoding passes in any order. For example, video encoder 50 may begin with the 64x64 macroblock encoding pass, perform the 32x32 macroblock encoding pass second, and end with the 16x16 macroblock encoding pass. Also, similar methods may be used for encoding other coded units comprising a plurality of macroblocks, such as slices with different sizes of macroblocks. For example, video encoder 50 may apply a method similar to that of FIG. 13 for selecting an optimal macroblock size for encoding slices of a frame, rather than the entire frame.
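The three-pass selection reduces to comparing three Lagrangian costs. In the sketch below, encode_pass is a hypothetical stand-in for encode(frame, MB16_type) and its 32x32/64x64 counterparts, returning the rate and distortion of a pass:

```python
def select_macroblock_size(encode_pass, lam):
    """Run one encoding pass per candidate macroblock size and return the
    size whose cost C = D + lambda * R is lowest."""
    best_size, best_cost = None, None
    for size in (16, 32, 64):
        rate, distortion = encode_pass(size)
        cost = distortion + lam * rate
        if best_cost is None or cost < best_cost:
            best_size, best_cost = size, cost
    return best_size
```

Because the loop may visit the sizes in any order without changing the result, this also reflects the observation above that the encoding passes can be performed in any order.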
[0184] Video encoder 50 may also transmit an identifier of the size of the macroblocks for a particular coded unit (e.g., a frame or a slice) in the header of the coded unit for use by a decoder. In accordance with the method of FIG. 13, a method may include receiving, with a digital video encoder, a coded unit of a digital video stream, calculating a first rate-distortion metric corresponding to a rate-distortion for encoding the coded unit using a first plurality of blocks each comprising 16x16 pixels, calculating a second rate-distortion metric corresponding to a rate-distortion for encoding the coded unit using a second plurality of blocks each comprising greater than 16x16 pixels, and determining which of the first rate-distortion metric and the second rate-distortion metric is lowest for the coded unit. The method may further include, when the first rate-distortion metric is determined to be lowest, encoding the coded unit using the first plurality of blocks, and when the second rate-distortion metric is determined to be lowest, encoding the coded unit using the second plurality of blocks. [0185] FIG. 14 is a block diagram illustrating an example wireless communication device 230 including a video encoder/decoder (CODEC) 234 that may encode and/or decode digital video data using larger-than-standard macroblocks, using any of a variety of the techniques described in this disclosure. In the example of FIG. 14, wireless communication device 230 includes video camera 232, video encoder-decoder (CODEC) 234, modulator/demodulator (modem) 236, transceiver 238, processor 240, user interface 242, memory 244, data storage device 246, antenna 248, and bus 250. [0186] The components included in wireless communication device 230 illustrated in FIG. 14 may be realized by any suitable combination of hardware, software and/or firmware. In the illustrated example, the components are depicted as separate units. 
However, in other examples, the various components may be integrated into combined units within common hardware and/or software. As one example, memory 244 may store instructions executable by processor 240 corresponding to various functions of video CODEC 234. As another example, video camera 232 may include a video CODEC that performs the functions of video CODEC 234, e.g., encoding and/or decoding video data.
[0187] In one example, video camera 232 may correspond to video source 18 (FIG. 1). In general, video camera 232 may record video data captured by an array of sensors to generate digital video data. Video camera 232 may send raw, recorded digital video data to video CODEC 234 for encoding and then to data storage device 246 via bus 250 for data storage. Processor 240 may send signals to video camera 232 via bus 250 regarding a mode in which to record video, a frame rate at which to record video, a time at which to end recording or to change frame rate modes, a time at which to send video data to video CODEC 234, or signals indicating other modes or parameters. [0188] User interface 242 may comprise one or more interfaces, such as input and output interfaces. For example, user interface 242 may include a touch screen, a keypad, buttons, a screen that may act as a viewfinder, a microphone, a speaker, or other interfaces. As video camera 232 receives video data, processor 240 may signal video camera 232 to send the video data to user interface 242 to be displayed on the viewfinder.
[0189] Video CODEC 234 may encode video data from video camera 232 and decode video data received via antenna 248, transceiver 238, and modem 236. Video CODEC 234 additionally or alternatively may decode previously encoded data received from data storage device 246 for playback. Video CODEC 234 may encode and/or decode digital video data using macroblocks that are larger than the size of macroblocks prescribed by conventional video encoding standards. For example, video CODEC 234 may encode and/or decode digital video data using a large macroblock comprising 64x64 pixels or 32x32 pixels. The large macroblock may be identified with a macroblock type syntax element according to a video standard, such as an extension of the H.264 standard.
[0190] Video CODEC 234 may perform the functions of either or both of video encoder 50 (FIG. 2) and/or video decoder 60 (FIG. 3), as well as any other encoding/decoding functions or techniques as described in this disclosure. For example, CODEC 234 may partition a large macroblock into a variety of differently sized, smaller partitions, and use different coding modes, e.g., spatial (I) or temporal (P or B), for selected partitions. Selection of partition sizes and coding modes may be based on rate-distortion results for such partition sizes and coding modes. CODEC 234 also may utilize hierarchical coded block pattern (CBP) values to identify coded macroblocks and partitions having non-zero coefficients within a large macroblock. In addition, in some examples, CODEC 234 may compare rate-distortion metrics for large and small macroblocks to select a macroblock size producing more favorable results for a frame, slice or other coding unit. [0191] A user may interact with user interface 242 to transmit a recorded video sequence in data storage device 246 to another device, such as another wireless communication device, via modem 236, transceiver 238, and antenna 248. The video sequence may be encoded according to an encoding standard, such as MPEG-2, MPEG-3, MPEG-4, H.263, H.264, or other video encoding standards, subject to extensions or modifications described in this disclosure. For example, the video sequence may also be encoded using larger-than-standard macroblocks, as described in this disclosure. Wireless communication device 230 may also receive an encoded video segment and store the received video sequence in data storage device 246.
[0192] Macroblocks of the received, encoded video sequence may be larger than macroblocks specified by conventional video encoding standards. To display an encoded video segment in data storage device 246, such as a recorded video sequence or a received video segment, video CODEC 234 may decode the video sequence and send decoded frames of the video segment to user interface 242. When a video sequence includes audio data, video CODEC 234 may decode the audio, or wireless communication device 230 may further include an audio codec (not shown) to decode the audio. In this manner, video CODEC 234 may perform both the functions of an encoder and of a decoder.
[0193] Memory 244 of wireless communication device 230 of FIG. 14 may be encoded with computer-readable instructions that cause processor 240 and/or video CODEC 234 to perform various tasks, in addition to storing encoded video data. Such instructions may be loaded into memory 244 from a data storage device such as data storage device 246. For example, the instructions may cause processor 240 to perform the functions described with respect to video CODEC 234.
[0194] FIG. 15 is a block diagram illustrating an example hierarchical coded block pattern (CBP) 260. The example of CBP 260 generally corresponds to a portion of the syntax information for a 64x64 pixel macroblock. In the example of FIG. 15, CBP 260 comprises a CBP64 value 262, four CBP32 values 264, 266, 268, 270, and four CBP16 values 272, 274, 276, 278. Each block of CBP 260 may include one or more bits. In one example, when CBP64 value 262 is a bit with a value of "1," indicating that there is at least one non-zero coefficient in the large macroblock, CBP 260 includes the four CBP32 values 264, 266, 268, 270 for four 32x32 partitions of the large 64x64 macroblock, as shown in the example of FIG. 15. [0195] In another example, when CBP64 value 262 is a bit with a value of "0," CBP 260 may consist only of CBP64, as a value of "0" may indicate that the block corresponding to CBP 260 has all zero-valued coefficients. Hence, all partitions of that block likewise will contain all zero-valued coefficients. In one example, when a CBP64 is a bit with a value of "1," and one of the CBP32 values for a particular 32x32 partition is a bit with a value of "1," the CBP32 value for the 32x32 partition has four branches, representative of CBP16 values, e.g., as shown with respect to CBP32 value 266. In one example, when a CBP32 value is a bit with a value of "0," the CBP32 does not have any branches. In the example of FIG. 15, CBP 260 may have a five-bit prefix of "10100," indicating that the CBP64 value is "1," and that one of the 32x32 partitions has a CBP32 value of "1," with subsequent bits corresponding to the four CBP16 values 272, 274, 276, 278 corresponding to 16x16 partitions of the 32x32 partition with the CBP32 value of "1." Although only a single CBP32 value is shown as having a value of "1" in the example of FIG. 
15, in other examples, two, three or all four 32x32 partitions may have CBP32 values of "1," in which case multiple instances of four 16x16 partitions with corresponding CBP16 values would be required. [0196] In the example of FIG. 15, the four CBP16 values 272, 274, 276, 278 for the four 16x16 partitions may be calculated according to various methods, e.g., according to the methods of FIGS. 8 and 9. Any or all of CBP16 values 272, 274, 276, 278 may include a "lumacbp16" value, a transform size flag, and/or a luma16x8_cbp. CBP16 values 272, 274, 276, 278 may also be calculated according to a CBP value as defined in ITU H.264 or as a CodedBlockPatternChroma in ITU H.264, as discussed with respect to FIGS. 8 and 9. In the example of FIG. 15, assuming that CBP16 278 has a value of "1," and the other CBP16 values 272, 274, 276 have values of "0," the nine-bit CBP value for the 64x64 macroblock would be "101000001," where each bit corresponds to one of the partitions at a respective level in the CBP/partition hierarchy. [0197] FIG. 16 is a block diagram illustrating an example tree structure 280 corresponding to CBP 260 (FIG. 15). CBP64 node 282 corresponds to CBP64 value 262, CBP32 nodes 284, 286, 288, 290 each correspond to respective ones of CBP32 values 264, 266, 268, 270, and CBP16 nodes 292, 294, 296, 298 each correspond to respective ones of CBP16 values 272, 274, 276, 278. In this manner, a coded block pattern value as defined in this disclosure may correspond to a hierarchical CBP. Each node yielding another branch in the tree corresponds to a respective CBP value of "1." In the examples of FIGS. 15 and 16, CBP64 282 and CBP32 286 both have values of "1," and yield further partitions with possible CBP values of "1," i.e., where at least one partition at the next partition level includes at least one non-zero transform coefficient value.
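The hierarchical CBP structure of FIGS. 15 and 16 can be sketched as a simple bit-string builder: one CBP64 bit, followed (when set) by four CBP32 bits, followed by four CBP16 bits for each CBP32 bit that is set. The nested-list input format below is an assumption for illustration; the disclosure does not prescribe a particular in-memory representation.

```python
# Sketch of assembling a hierarchical CBP bit string for a 64x64 macroblock.

def hierarchical_cbp(cbp64, cbp32, cbp16):
    """Build the hierarchical CBP bit string.

    cbp64: 0 or 1 for the whole 64x64 macroblock.
    cbp32: list of four 0/1 values, one per 32x32 partition.
    cbp16: one four-element 0/1 list per *set* CBP32 bit, in order.
    """
    if cbp64 == 0:
        return "0"  # all coefficients zero: a single bit suffices
    bits = ["1"]
    bits.extend(str(b) for b in cbp32)   # four CBP32 bits follow CBP64
    leaves = iter(cbp16)
    for b32 in cbp32:
        if b32:  # only set 32x32 partitions branch into CBP16 values
            bits.extend(str(b) for b in next(leaves))
    return "".join(bits)


# Example of FIG. 15: CBP64=1, second CBP32=1, fourth CBP16=1.
print(hierarchical_cbp(1, [0, 1, 0, 0], [[0, 0, 0, 1]]))  # -> 101000001
```

The nine-bit result matches the "101000001" value worked through in paragraph [0196], with the "10100" prefix described in paragraph [0195].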
[0198] FIG. 17 is a flowchart illustrating an example method for using syntax information of a coded unit to indicate and select block-based syntax encoders and decoders for video blocks of the coded unit. In general, steps 300 to 310 of FIG. 17 may be performed by a video encoder, such as video encoder 20 (FIG. 1), in addition to and in conjunction with encoding a plurality of video blocks for a coded unit. A coded unit may comprise a video frame, a slice, or a group of pictures (also referred to as a "sequence"). Steps 312 to 316 of FIG. 17 may be performed by a video decoder, such as video decoder 30 (FIG. 1), in addition to and in conjunction with decoding the plurality of video blocks of the coded unit.
[0199] Initially, video encoder 20 may receive a set of various-sized blocks for a coded unit, such as a frame, slice, or group of pictures (300). In accordance with the techniques of this disclosure, one or more of the blocks may comprise greater than 16x16 pixels, e.g., 32x32 pixels, 64x64 pixels, etc. However, the blocks need not each include the same number of pixels. In general, video encoder 20 may encode each of the blocks using the same block-based syntax. For example, video encoder 20 may encode each of the blocks using a hierarchical coded block pattern, as described above. [0200] Video encoder 20 may select the block-based syntax to use based on a largest block, i.e., maximum block size, in the set of blocks for the coded unit. The maximum block size may correspond to the size of a largest macroblock included in the coded unit. Accordingly, video encoder 20 may determine the largest sized block in the set (302). In the example of FIG. 17, video encoder 20 may also determine the smallest sized block in the set (304). As discussed above, the hierarchical coded block pattern of a block has a length that corresponds to whether partitions of the block have a non-zero, quantized coefficient. In some examples, video encoder 20 may include a minimum size value in syntax information for a coded unit. In some examples, the minimum size value indicates the minimum partition size in the coded unit. The minimum partition size, e.g., the smallest block in a coded unit, in this manner may be used to determine a maximum length for the hierarchical coded block pattern.
[0201] Video encoder 20 may then encode each block of the set for the coded unit according to the syntax corresponding to the largest block (306). For example, assuming that the largest block comprises a 64x64 pixel block, video encoder 20 may use syntax such as that defined above for MB64_type. As another example, assuming that the largest block comprises a 32x32 pixel block, video encoder 20 may use the syntax such as that defined above for MB32_type.
[0202] Video encoder 20 also generates coded unit syntax information, which includes values corresponding to the largest block in the coded unit and the smallest block in the coded unit (308). Video encoder 20 may then transmit the coded unit, including the syntax information for the coded unit and each of the blocks of the coded unit, to video decoder 30.
[0203] Video decoder 30 may receive the coded unit and the syntax information for the coded unit from video encoder 20 (312). Video decoder 30 may select a block-based syntax decoder based on the indication in the coded unit syntax information of the largest block in the coded unit (314). For example, assuming that the coded unit syntax information indicated that the largest block in the coded unit comprised 64x64 pixels, video decoder 30 may select a syntax decoder for MB64_type blocks. Video decoder 30 may then apply the selected syntax decoder to blocks of the coded unit to decode the blocks of the coded unit (316). Video decoder 30 may also determine when a block does not have further separately encoded sub-partitions based on the indication in the coded unit syntax information of the smallest encoded partition. For example, if the largest block is 64x64 pixels and the smallest block is also 64x64 pixels, then it can be determined that the 64x64 blocks are not divided into sub-partitions smaller than the 64x64 size. As another example, if the largest block is 64x64 pixels and the smallest block is 32x32 pixels, then it can be determined that the 64x64 blocks are divided into sub-partitions no smaller than 32x32.
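The decoder-side selection of steps 312 to 316 can be sketched as a lookup keyed on the maximum size value, with the minimum size value bounding sub-partitioning. The header dictionary layout and the decoder names in the table are illustrative assumptions, not syntax defined by the disclosure.

```python
# Sketch of choosing a block-type syntax decoder from coded-unit syntax
# information carrying maximum and minimum block-size values.

SYNTAX_DECODERS = {
    16: "H.264 MB16",  # standard H.264 block-based syntax (backwards-compatible)
    32: "MB32_type",
    64: "MB64_type",
}


def select_syntax_decoder(header):
    """Return (decoder name, whether blocks may carry smaller sub-partitions)."""
    max_size = header["max_block_size"]
    min_size = header["min_block_size"]
    decoder = SYNTAX_DECODERS[max_size]
    # If the smallest block equals the largest, blocks are not divided into
    # separately encoded sub-partitions below that size.
    has_subpartitions = min_size < max_size
    return decoder, has_subpartitions


decoder, subparts = select_syntax_decoder(
    {"max_block_size": 64, "min_block_size": 32})
```

With max 64 and min 64, the second element is False, matching the first example in the paragraph above; with max 64 and min 32, sub-partitions no smaller than 32x32 may be present.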
[0204] In this manner, video decoder 30 may remain backwards-compatible with existing coding standards, such as H.264. For example, when the largest block in a coded unit comprises 16x16 pixels, video encoder 20 may indicate this in the coded unit syntax information, and video decoder 30 may apply standard H.264 block-based syntax decoders. However, when the largest block in a coded unit comprises more than 16x16 pixels, video encoder 20 may indicate this in the coded unit syntax information, and video decoder 30 may selectively apply a block-based syntax decoder in accordance with the techniques of this disclosure to decode the blocks of the coded unit. [0205] In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. 
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
[0206] Various examples have been described. These and other examples are within the scope of the following claims.


CLAIMS:
1. A method comprising: encoding, with a video encoder, a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels; and generating syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit.
2. The method of claim 1, wherein the coded unit comprises one of a frame, a slice, and a group of pictures.
3. The method of claim 1, wherein the syntax information comprises a fixed-length code corresponding to a size of the largest one of the plurality of video blocks.
4. The method of claim 1, wherein generating the syntax information further comprises including, in the syntax information, a minimum size value, wherein the minimum size value indicates a size of a smallest one of the plurality of video blocks in the coded unit.
5. The method of claim 4, further comprising generating block-based syntax information for each of the plurality of video blocks according to the maximum size value and the minimum size value.
6. An apparatus comprising a video encoder configured to encode a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels and to generate syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit.
7. The apparatus of claim 6, wherein the coded unit comprises one of a frame, a slice, and a group of pictures.
8. The apparatus of claim 6, wherein the syntax information comprises a fixed-length code corresponding to a size of the largest one of the plurality of video blocks.
9. The apparatus of claim 6, wherein the video encoder is configured to include, in the syntax information, a minimum size value, wherein the minimum size value indicates a size of a smallest one of the plurality of video blocks in the coded unit.
10. The apparatus of claim 9, wherein the video encoder is configured to generate block-based syntax information for each of the plurality of video blocks according to the maximum size value and the minimum size value.
11. An apparatus comprising: means for encoding a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels; and means for generating syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit.
12. The apparatus of claim 11, wherein the coded unit comprises one of a frame, a slice, and a group of pictures.
13. The apparatus of claim 11, wherein the syntax information comprises a fixed-length code corresponding to a size of the largest one of the plurality of video blocks.
14. The apparatus of claim 11, wherein the means for generating the syntax information further comprises means for including, in the syntax information, a minimum size value, wherein the minimum size value indicates a size of a smallest one of the plurality of video blocks in the coded unit.
15. The apparatus of claim 14, further comprising means for generating block-based syntax information for each of the plurality of video blocks according to the maximum size value and the minimum size value.
16. A computer-readable storage medium encoded with instructions for causing a programmable processor to: encode a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels; and generate syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit.
17. The computer-readable storage medium of claim 16, wherein the coded unit comprises one of a frame, a slice, and a group of pictures.
18. The computer-readable storage medium of claim 16, wherein the syntax information comprises a fixed-length code corresponding to a size of the largest one of the plurality of video blocks.
19. The computer-readable storage medium of claim 16, wherein the instructions to generate the syntax information further comprise instructions to include, in the syntax information, a minimum size value, wherein the minimum size value indicates a size of a smallest one of the plurality of video blocks in the coded unit.
20. The computer-readable storage medium of claim 19, further comprising instructions to generate block-based syntax information for each of the plurality of video blocks according to the maximum size value and the minimum size value.
21. A method comprising: receiving, with a video decoder, a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels; receiving syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit; selecting a block-type syntax decoder according to the maximum size value; and decoding each of the plurality of video blocks in the coded unit using the selected block-type syntax decoder.
22. The method of claim 21, wherein the syntax information includes a minimum size value, wherein the minimum size value indicates a size of a smallest one of the plurality of video blocks in the coded unit, and wherein the selected block-type syntax decoder indicates how to decode the plurality of video blocks in the coded unit according to the minimum size value.
23. An apparatus comprising a video decoder configured to receive a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels, receive syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit, select a block-type syntax decoder according to the maximum size value, and decode each of the plurality of video blocks in the coded unit using the selected block-type syntax decoder.
24. The apparatus of claim 23, wherein the syntax information includes a minimum size value, wherein the minimum size value indicates a size of a smallest one of the plurality of video blocks in the coded unit, and wherein the selected block-type syntax decoder indicates how to decode the plurality of video blocks in the coded unit according to the minimum size value.
25. An apparatus comprising: means for receiving a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels; means for receiving syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit; means for selecting a block-type syntax decoder according to the maximum size value; and means for decoding each of the plurality of video blocks in the coded unit using the selected block-type syntax decoder.
26. The apparatus of claim 25, wherein the syntax information includes a minimum size value, wherein the minimum size value indicates a size of a smallest one of the plurality of video blocks in the coded unit, and wherein the selected block-type syntax decoder indicates how to decode the plurality of video blocks in the coded unit according to the minimum size value.
27. A computer-readable storage medium encoded with instructions for causing a programmable processor to: receive a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16x16 pixels; receive syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit; select a block-type syntax decoder according to the maximum size value; and decode each of the plurality of video blocks in the coded unit using the selected block-type syntax decoder.
28. The computer-readable storage medium of claim 27, wherein the syntax information includes a minimum size value, wherein the minimum size value indicates a size of a smallest one of the plurality of video blocks in the coded unit, and wherein the selected block-type syntax decoder indicates how to decode the plurality of video blocks in the coded unit according to the minimum size value.
PCT/US2009/058833 2008-10-03 2009-09-29 Video coding with large macroblocks WO2010039728A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2011530142A JP2012504908A (en) 2008-10-03 2009-09-29 Video coding using large macro blocks
CN200980139141.9A CN102172021B (en) 2008-10-03 2009-09-29 Video coding with large macroblocks
EP09793127.3A EP2347591B2 (en) 2008-10-03 2009-09-29 Video coding with large macroblocks
KR1020117010099A KR101222400B1 (en) 2008-10-03 2009-09-29 Video coding with large macroblocks

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US10278708P 2008-10-03 2008-10-03
US61/102,787 2008-10-03
US14435709P 2009-01-13 2009-01-13
US61/144,357 2009-01-13
US16663109P 2009-04-03 2009-04-03
US61/166,631 2009-04-03
US12/562,504 2009-09-18
US12/562,504 US8503527B2 (en) 2008-10-03 2009-09-18 Video coding with large macroblocks

Publications (2)

Publication Number Publication Date
WO2010039728A2 true WO2010039728A2 (en) 2010-04-08
WO2010039728A3 WO2010039728A3 (en) 2010-07-08

Family

ID=42041691

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/058833 WO2010039728A2 (en) 2008-10-03 2009-09-29 Video coding with large macroblocks

Country Status (7)

Country Link
US (8) US8503527B2 (en)
EP (1) EP2347591B2 (en)
JP (3) JP2012504908A (en)
KR (1) KR101222400B1 (en)
CN (2) CN102172021B (en)
TW (1) TWI392370B (en)
WO (1) WO2010039728A2 (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2452494A2 (en) * 2009-08-14 2012-05-16 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
CN103155561A (en) * 2011-03-11 2013-06-12 通用仪表公司 Method and apparatus for spatial scalability for hevc
JP2013531942A (en) * 2010-06-10 2013-08-08 トムソン ライセンシング Method and apparatus for determining a quantization parameter predictor from a plurality of adjacent quantization parameters
CN103314590A (en) * 2011-01-13 2013-09-18 日本电气株式会社 Video encoding device, video decoding device, video encoding method, video decoding method, and program
EP2665272A1 (en) * 2011-01-13 2013-11-20 Nec Corporation Video encoding device, video decoding device, video encoding method, video decoding method, and program
JP2013546257A (en) * 2010-11-01 2013-12-26 クゥアルコム・インコーポレイテッド Joint coding of syntax elements for video coding
CN103650499A (en) * 2011-06-30 2014-03-19 Sk电信有限公司 Method and apparatus for coding/decoding through high-speed coding unit mode decision
EP2736254A1 (en) * 2011-07-22 2014-05-28 Hitachi, Ltd. Video decoding method and image encoding method
AU2010328813B2 (en) * 2009-12-08 2014-06-12 Samsung Electronics Co., Ltd. Method and apparatus for encoding video by motion prediction using arbitrary partition, and method and apparatus for decoding video by motion prediction using arbitrary partition
JP2015019420A (en) * 2009-08-13 2015-01-29 サムスン エレクトロニクス カンパニー リミテッド Image decryption method
EP2838267A3 (en) * 2009-10-23 2015-04-08 Samsung Electronics Co., Ltd Method and apparatus for encoding video and method and apparatus for decoding video, based on hierarchical structure of coding unit
EP2624557A4 (en) * 2010-09-30 2016-02-24 Samsung Electronics Co Ltd Video encoding method for encoding hierarchical-structure symbols and a device therefor, and video decoding method for decoding hierarchical-structure symbols and a device therefor
JP2016129363A (en) * 2010-04-13 2016-07-14 サムスン エレクトロニクス カンパニー リミテッド Video-encoding method and apparatus of the same using prediction units based on encoding units with tree structure, and video-decoding method and apparatus of the same
US9860528B2 (en) 2011-06-10 2018-01-02 Hfi Innovation Inc. Method and apparatus of scalable video coding
JP2020115669A (en) * 2009-07-01 2020-07-30 インターデジタル ヴイシー ホールディングス, インコーポレイテッド Method and apparatus for signaling intra prediction to large block for video encoder and decoder
US11706430B2 (en) 2010-10-04 2023-07-18 Electronics And Telecommunications Research Institute Method for encoding/decoding block information using quad tree, and device for using same
US11758194B2 (en) 2008-10-03 2023-09-12 Qualcomm Incorporated Device and method for video decoding video blocks

Families Citing this family (76)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8379727B2 (en) * 2008-09-26 2013-02-19 General Instrument Corporation Method and apparatus for scalable motion estimation
US8483285B2 (en) 2008-10-03 2013-07-09 Qualcomm Incorporated Video coding using transforms bigger than 4×4 and 8×8
US8619856B2 (en) * 2008-10-03 2013-12-31 Qualcomm Incorporated Video coding with large macroblocks
US8634456B2 (en) * 2008-10-03 2014-01-21 Qualcomm Incorporated Video coding with large macroblocks
KR101712351B1 (en) * 2009-06-26 2017-03-06 에스케이 텔레콤주식회사 Video Encoding/Decoding Method and Apparatus by Using Multiple Dimensional Integer Transform
KR101474756B1 (en) * 2009-08-13 2014-12-19 삼성전자주식회사 Method and apparatus for encoding and decoding image using large transform unit
JP5234368B2 (en) 2009-09-30 2013-07-10 ソニー株式会社 Image processing apparatus and method
US9549190B2 (en) * 2009-10-01 2017-01-17 Sk Telecom Co., Ltd. Method and apparatus for encoding/decoding image using variable-size macroblocks
PL3595311T3 (en) * 2009-10-01 2021-01-25 Sk Telecom Co., Ltd Method and apparatus for encoding/decoding image using variable-sized macroblocks
CN102918840B (en) * 2009-10-01 2016-05-25 Sk电信有限公司 Use dividing layer to carry out the method and apparatus of encoding/decoding image
KR101504887B1 (en) * 2009-10-23 2015-03-24 Samsung Electronics Co., Ltd. Method and apparatus for video decoding by individual parsing or decoding in data unit level, and method and apparatus for video encoding for individual parsing or decoding in data unit level
US10897625B2 (en) * 2009-11-20 2021-01-19 Texas Instruments Incorporated Block artifact suppression in video coding
US20110274162A1 (en) 2010-05-04 2011-11-10 Minhua Zhou Coding Unit Quantization Parameters in Video Coding
KR20110061468A (en) * 2009-12-01 2011-06-09 Humax Co., Ltd. Methods for encoding/decoding high definition image and apparatuses for performing the same
KR101479141B1 (en) * 2009-12-10 2015-01-07 SK Telecom Co., Ltd. Coding Method and Apparatus by Using Tree Structure
US8885711B2 (en) 2009-12-17 2014-11-11 Sk Telecom Co., Ltd. Image encoding/decoding method and device
CN106412600B (en) * 2010-01-12 2019-07-16 LG Electronics Inc. Method and apparatus for processing a video signal
KR101675118B1 (en) 2010-01-14 2016-11-10 Samsung Electronics Co., Ltd. Method and apparatus for video encoding considering order of skip and split, and method and apparatus for video decoding considering order of skip and split
KR101703327B1 (en) * 2010-01-14 2017-02-06 Samsung Electronics Co., Ltd. Method and apparatus for video encoding using pattern information of hierarchical data unit, and method and apparatus for video decoding using pattern information of hierarchical data unit
US20110249754A1 (en) * 2010-04-12 2011-10-13 Qualcomm Incorporated Variable length coding of coded block pattern (cbp) in video compression
US8942282B2 (en) 2010-04-12 2015-01-27 Qualcomm Incorporated Variable length coding of coded block pattern (CBP) in video compression
DK2991355T3 (en) * 2010-04-13 2018-02-19 Ge Video Compression Llc Inheritance in sampler array multitree subdivision
KR102311520B1 (en) 2010-04-13 2021-10-13 GE Video Compression, LLC Video coding using multi-tree sub-divisions of images
KR102360146B1 (en) 2010-04-13 2022-02-08 GE Video Compression, LLC Sample region merging
BR122020007923B1 (en) 2010-04-13 2021-08-03 GE Video Compression, LLC INTERPLANE PREDICTION
KR101813189B1 (en) * 2010-04-16 2018-01-31 SK Telecom Co., Ltd. Video coding/decoding apparatus and method
US20130058410A1 (en) * 2010-05-13 2013-03-07 Sharp Kabushiki Kaisha Encoding device, decoding device, and data structure
CN106060547B (en) * 2010-06-07 2019-09-13 Humax Co., Ltd. Method and apparatus for decoding a high-resolution image
US8837577B2 (en) * 2010-07-15 2014-09-16 Sharp Laboratories Of America, Inc. Method of parallel video coding based upon prediction type
KR20120009618A (en) * 2010-07-19 2012-02-02 SK Telecom Co., Ltd. Method and Apparatus for Partitioned-Coding of Frequency Transform Unit and Method and Apparatus for Encoding/Decoding of Video Data Thereof
US20120106622A1 (en) * 2010-11-03 2012-05-03 Mediatek Inc. Method and Apparatus of Slice Grouping for High Efficiency Video Coding
BR112013017395B1 (en) * 2011-01-06 2020-10-06 Samsung Electronics Co., Ltd VIDEO DECODER METHOD, AND VIDEO ENCODER METHOD
JP2014506063A (en) * 2011-01-07 2014-03-06 Samsung Electronics Co., Ltd. Video prediction method and apparatus capable of bidirectional prediction and unidirectional prediction, video encoding method and apparatus, and video decoding method and apparatus
US20120189052A1 (en) * 2011-01-24 2012-07-26 Qualcomm Incorporated Signaling quantization parameter changes for coded units in high efficiency video coding (hevc)
US20140185948A1 (en) * 2011-05-31 2014-07-03 Humax Co., Ltd. Method for storing motion prediction-related information in inter prediction method, and method for obtaining motion prediction-related information in inter prediction method
US20140241422A1 (en) * 2011-06-28 2014-08-28 Samsung Electronics Co., Ltd. Method and apparatus for image encoding and decoding using adaptive quantization parameter differential
US8929455B2 (en) * 2011-07-01 2015-01-06 Mitsubishi Electric Research Laboratories, Inc. Method for selecting transform types from mapping table for prediction modes
US20130016769A1 (en) 2011-07-17 2013-01-17 Qualcomm Incorporated Signaling picture size in video coding
US9787982B2 (en) * 2011-09-12 2017-10-10 Qualcomm Incorporated Non-square transform units and prediction units in video coding
US9538184B2 (en) * 2011-09-14 2017-01-03 Samsung Electronics Co., Ltd. Method and device for encoding and decoding video
US9332283B2 (en) * 2011-09-27 2016-05-03 Broadcom Corporation Signaling of prediction size unit in accordance with video coding
JP5976658B2 (en) 2011-09-29 2016-08-24 Sharp Kabushiki Kaisha Image decoding apparatus, image decoding method, and image encoding apparatus
US10110891B2 (en) * 2011-09-29 2018-10-23 Sharp Kabushiki Kaisha Image decoding device, image decoding method, and image encoding device
US9066068B2 (en) * 2011-10-31 2015-06-23 Avago Technologies General Ip (Singapore) Pte. Ltd. Intra-prediction mode selection while encoding a picture
US9167261B2 (en) * 2011-11-07 2015-10-20 Sharp Laboratories Of America, Inc. Video decoder with constrained dynamic range
US8923388B2 (en) * 2011-11-21 2014-12-30 Texas Instruments Incorporated Early stage slice cap decision in video coding
WO2013107027A1 (en) 2012-01-19 2013-07-25 Mediatek Singapore Pte. Ltd. Methods and apparatuses of cbf coding in hevc
SG10201505820QA (en) * 2012-01-30 2015-08-28 Samsung Electronics Co Ltd Method and apparatus for video encoding for each spatial sub-area, and method and apparatus for video decoding for each spatial sub-area
CN103379319B (en) * 2012-04-12 2018-03-20 ZTE Corporation Filtering method, filter, and encoder and decoder including the filter
US9113164B1 (en) * 2012-05-15 2015-08-18 Google Inc. Constant bit rate control using implicit quantization values
RS57336B1 (en) * 2012-07-02 2018-08-31 Samsung Electronics Co Ltd Method for entropy decoding of a video
RU2510589C2 (en) * 2012-07-05 2014-03-27 Vadim Vitalievich Yaroshenko Method of encoding digital video image
WO2014120368A1 (en) * 2013-01-30 2014-08-07 Intel Corporation Content adaptive entropy coding for next generation video
KR101484282B1 (en) 2013-04-02 2015-01-22 Samsung Electronics Co., Ltd. Method and apparatus for video encoding by motion prediction using arbitrary partition, and method and apparatus for video decoding by motion compensation using arbitrary partition
US20150189269A1 (en) * 2013-12-30 2015-07-02 Google Inc. Recursive block partitioning
GB2523993A (en) * 2014-03-06 2015-09-16 Sony Corp Data encoding and decoding
US20150262404A1 (en) * 2014-03-13 2015-09-17 Huawei Technologies Co., Ltd. Screen Content And Mixed Content Coding
US9596479B2 (en) * 2014-10-07 2017-03-14 Hfi Innovation Inc. Method of pulse-code modulation and palette coding for video coding
US10114835B2 (en) 2015-04-29 2018-10-30 Box, Inc. Virtual file system for cloud-based shared content
WO2017083553A1 (en) * 2015-11-10 2017-05-18 Vid Scale, Inc. Systems and methods for coding in super-block based video coding framework
CN105847798A (en) * 2016-03-30 2016-08-10 Le Holdings (Beijing) Co., Ltd. Method and device for fast coding unit division during video coding
WO2018131986A1 (en) * 2017-01-16 2018-07-19 Industry Academy Cooperation Foundation of Sejong University Image encoding/decoding method and device
KR102331043B1 (en) * 2017-03-20 2021-11-25 Samsung Electronics Co., Ltd. Encoding system and operating method for the same
US11252464B2 (en) 2017-06-14 2022-02-15 Mellanox Technologies, Ltd. Regrouping of video data in host memory
US20200014918A1 (en) * 2018-07-08 2020-01-09 Mellanox Technologies, Ltd. Application accelerator
US20200014945A1 (en) * 2018-07-08 2020-01-09 Mellanox Technologies, Ltd. Application acceleration
US11470131B2 (en) 2017-07-07 2022-10-11 Box, Inc. User device processing of information from a network-accessible collaboration system
EP3496403A1 (en) * 2017-12-06 2019-06-12 V-Nova International Limited Hierarchical data structure
WO2019111004A1 (en) * 2017-12-06 2019-06-13 V-Nova International Ltd Methods and apparatuses for encoding and decoding a bytestream
KR102445899B1 (en) * 2017-12-29 2022-09-21 Intellectual Discovery Co., Ltd. Video coding method and apparatus using sub-block level intra prediction
AU2019247240B2 (en) * 2018-04-01 2022-03-31 B1 Institute Of Image Technology, Inc. Method and apparatus for encoding/decoding image
WO2019244116A1 (en) * 2018-06-21 2019-12-26 Beijing Bytedance Network Technology Co., Ltd. Border partition in video coding
US10812819B2 (en) * 2018-10-07 2020-10-20 Tencent America LLC Method and apparatus for video coding
US10970881B2 (en) * 2018-12-21 2021-04-06 Samsung Display Co., Ltd. Fallback modes for display compression
JP2021175035A (en) * 2020-04-21 2021-11-01 Canon Inc. Image processing apparatus and image processing method
CN113381391B (en) * 2021-05-21 2022-05-31 Guangxi University Single-end protection method for high-voltage direct-current transmission line

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008027192A2 (en) * 2006-08-25 2008-03-06 Thomson Licensing Methods and apparatus for reduced resolution partitioning

Family Cites Families (143)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH082106B2 (en) 1986-11-10 1996-01-10 Kokusai Denshin Denwa Co., Ltd. Hybrid coding method for moving image signals
US5107345A (en) * 1990-02-27 1992-04-21 Qualcomm Incorporated Adaptive block size image compression method and system
JPH08205140A (en) 1995-01-31 1996-08-09 Canon Inc Image compressing device
EP0732855B1 (en) 1995-03-15 2002-10-16 Kabushiki Kaisha Toshiba Moving picture variable length coding system and method
US6084908A (en) 1995-10-25 2000-07-04 Sarnoff Corporation Apparatus and method for quadtree based variable block size motion estimation
JP3855286B2 (en) 1995-10-26 2006-12-06 Sony Corporation Image encoding device, image encoding method, image decoding device, image decoding method, and recording medium
US5956088A (en) 1995-11-21 1999-09-21 Imedia Corporation Method and apparatus for modifying encoded digital video for improved channel utilization
US6215910B1 (en) 1996-03-28 2001-04-10 Microsoft Corporation Table-based compression with embedded coding
US6571016B1 (en) 1997-05-05 2003-05-27 Microsoft Corporation Intra compression of pixel blocks using predicted mean
DE69731517T2 (en) 1996-07-11 2005-10-20 Koninklijke Philips Electronics N.V. TRANSFER AND RECEIVE CODED VIDEO IMAGES
US6233017B1 (en) 1996-09-16 2001-05-15 Microsoft Corporation Multimedia compression system with adaptive block sizes
US5748116A (en) 1996-11-27 1998-05-05 Teralogic, Incorporated System and method for nested split coding of sparse data sets
US6633611B2 (en) 1997-04-24 2003-10-14 Mitsubishi Denki Kabushiki Kaisha Method and apparatus for region-based moving image encoding and decoding
US6539124B2 (en) 1999-02-03 2003-03-25 Sarnoff Corporation Quantizer selection based on region complexities derived using a rate distortion model
US6778709B1 (en) 1999-03-12 2004-08-17 Hewlett-Packard Development Company, L.P. Embedded block coding with optimized truncation
US6660836B1 (en) 1999-04-13 2003-12-09 Case Western Reserve University Methods for carbon-centered radical mediated heavy hydrogen labeling of compounds
US6529634B1 (en) 1999-11-08 2003-03-04 Qualcomm, Inc. Contrast sensitive variance based adaptive block size DCT image compression
US6600836B1 (en) * 2000-01-28 2003-07-29 Qualcomm, Incorporated Quality based image compression
DE10022331A1 (en) 2000-05-10 2001-11-15 Bosch Gmbh Robert Method for transformation coding of moving image sequences e.g. for audio-visual objects, involves block-wise assessing movement vectors between reference- and actual- image signals of image sequence
US6968012B1 (en) 2000-10-02 2005-11-22 Firepad, Inc. Methods for encoding digital video for decoding on low performance devices
US6937770B1 (en) 2000-12-28 2005-08-30 Emc Corporation Adaptive bit rate control for rate reduction of MPEG coded video
US6947487B2 (en) 2001-04-18 2005-09-20 Lg Electronics Inc. VSB communication system
US7474699B2 (en) 2001-08-28 2009-01-06 Ntt Docomo, Inc. Moving picture encoding/transmission system, moving picture encoding/transmission method, and encoding apparatus, decoding apparatus, encoding method decoding method and program usable for the same
FI111592B (en) 2001-09-06 2003-08-15 Oulun Yliopisto Method and apparatus for encoding successive images
US6959116B2 (en) 2001-09-18 2005-10-25 Emc Corporation Largest magnitude indices selection for (run, level) encoding of a block coded picture
US6980596B2 (en) 2001-11-27 2005-12-27 General Instrument Corporation Macroblock level adaptive frame/field coding for digital video content
GB2382940A (en) 2001-11-27 2003-06-11 Nokia Corp Encoding objects and background blocks
US20030123738A1 (en) 2001-11-30 2003-07-03 Per Frojdh Global motion compensation for video pictures
CN101448162B (en) 2001-12-17 2013-01-02 Microsoft Corporation Method for processing video image
US7200275B2 (en) 2001-12-17 2007-04-03 Microsoft Corporation Skip macroblock coding
CN1640146A (en) 2002-03-05 2005-07-13 Koninklijke Philips Electronics N.V. Method and system for layered video encoding
US20050213831A1 (en) 2002-03-05 2005-09-29 Van Der Schaar Mihaela Method and system for encoding fractional bitplanes
JP2003319394A (en) 2002-04-26 2003-11-07 Sony Corp Encoding apparatus and method, decoding apparatus and method, recording medium, and program
US7038676B2 (en) * 2002-06-11 2006-05-02 Sony Computer Entertainment Inc. System and method for data compression
US7289674B2 (en) 2002-06-11 2007-10-30 Nokia Corporation Spatial prediction based intra coding
EP1742480B1 (en) 2002-07-11 2008-10-29 Matsushita Electric Industrial Co., Ltd. Virtual display buffer management method for H.264 prediction image decoding.
US6795584B2 (en) 2002-10-03 2004-09-21 Nokia Corporation Context-based adaptive variable length coding for adaptive block transforms
EP1582063B1 (en) 2003-01-07 2018-03-07 Thomson Licensing DTV Mixed inter/intra video coding of macroblock partitions
KR100604032B1 (en) * 2003-01-08 2006-07-24 LG Electronics Inc. Apparatus for supporting plural codec and Method thereof
JP4593556B2 (en) 2003-01-09 2010-12-08 The Regents of the University of California Video encoding method and device
BR0317943A (en) * 2003-01-10 2005-11-29 Thomson Licensing Sa Spatial error concealment based on intrapreview modes transmitted in a coded stream
KR101004208B1 (en) 2003-02-21 2010-12-24 Panasonic Corporation Picture coding method and picture decoding method
KR20060109247A (en) * 2005-04-13 2006-10-19 LG Electronics Inc. Method and apparatus for encoding/decoding a video signal using pictures of base layer
EP1623577A1 (en) 2003-05-06 2006-02-08 Koninklijke Philips Electronics N.V. Encoding of video information using block based adaptive scan order
HUP0301368A3 (en) 2003-05-20 2005-09-28 Amt Advanced Multimedia Techno Method and equipment for compressing motion picture data
US7653133B2 (en) 2003-06-10 2010-01-26 Rensselaer Polytechnic Institute (Rpi) Overlapped block motion compression for variable size blocks in the context of MCTF scalable video coders
ES2343410T3 (en) 2003-06-25 2010-07-30 Thomson Licensing INTERTRAM CODING WITH FAST MODE DECISION.
US7426308B2 (en) 2003-07-18 2008-09-16 Microsoft Corporation Intraframe and interframe interlace coding and decoding
US7830963B2 (en) 2003-07-18 2010-11-09 Microsoft Corporation Decoding jointly coded transform type and subblock pattern information
US8085845B2 (en) 2003-08-26 2011-12-27 Thomson Licensing Method and apparatus for encoding hybrid intra-inter coded blocks
CN1843040A (en) 2003-08-26 2006-10-04 Samsung Electronics Co., Ltd. Scalable video coding and decoding methods, and scalable video encoder and decoder
US7599438B2 (en) 2003-09-07 2009-10-06 Microsoft Corporation Motion vector block pattern coding and decoding
US8064520B2 (en) * 2003-09-07 2011-11-22 Microsoft Corporation Advanced bi-directional predictive coding of interlaced video
US7286710B2 (en) 2003-10-01 2007-10-23 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Coding of a syntax element contained in a pre-coded video signal
US7366462B2 (en) 2003-10-24 2008-04-29 Qualcomm Incorporated Method and apparatus for seamlessly switching reception between multimedia streams in a wireless communication system
US7362804B2 (en) 2003-11-24 2008-04-22 Lsi Logic Corporation Graphical symbols for H.264 bitstream syntax elements
JP3910594B2 (en) * 2004-02-20 2007-04-25 Mitsubishi Electric Corporation Image encoding device
US8116374B2 (en) 2004-05-07 2012-02-14 Broadcom Corporation Method and system for generating a transform size syntax element for video decoding
US7894530B2 (en) * 2004-05-07 2011-02-22 Broadcom Corporation Method and system for dynamic selection of transform size in a video decoder based on signal content
US20060002474A1 (en) 2004-06-26 2006-01-05 Oscar Chi-Lim Au Efficient multi-block motion estimation for video compression
US7852916B2 (en) 2004-06-27 2010-12-14 Apple Inc. Efficient use of storage in encoding and decoding video data streams
US7792188B2 (en) 2004-06-27 2010-09-07 Apple Inc. Selecting encoding types and predictive modes for encoding video data
KR100627329B1 (en) 2004-08-19 2006-09-25 Korea Electronics Technology Institute Apparatus and method for adaptive motion estimation and mode decision in H.264 video codec
US8085846B2 (en) 2004-08-24 2011-12-27 Thomson Licensing Method and apparatus for decoding hybrid intra-inter coded blocks
CN100568974C (en) * 2004-09-08 2009-12-09 Matsushita Electric Industrial Co., Ltd. Moving picture encoding method and moving picture decoding method
KR20060043115A (en) 2004-10-26 2006-05-15 LG Electronics Inc. Method and apparatus for encoding/decoding video signal using base layer
JP4877449B2 (en) 2004-11-04 2012-02-15 Casio Computer Co., Ltd. Moving picture coding apparatus and moving picture coding processing program
DE102004056446A1 (en) 2004-11-23 2006-06-29 Siemens Ag Method for transcoding and transcoding device
KR100679031B1 (en) 2004-12-03 2007-02-05 Samsung Electronics Co., Ltd. Method for encoding/decoding video based on multi-layer, and apparatus using the method
US7430238B2 (en) 2004-12-10 2008-09-30 Micronas Usa, Inc. Shared pipeline architecture for motion vector prediction and residual decoding
US20060133495A1 (en) 2004-12-22 2006-06-22 Yan Ye Temporal error concealment for video communications
JP5213456B2 (en) 2005-02-18 2013-06-19 Thomson Licensing Method for deriving encoding information of high resolution picture from low resolution picture, and encoding and decoding apparatus for realizing the method
US20060203905A1 (en) 2005-03-14 2006-09-14 Shih-Chang Hsia Video coding system
CN1319383C (en) 2005-04-07 2007-05-30 Xi'an Jiaotong University Method for implementing motion estimation and motion vector coding with high-performance spatial scalability
US7949044B2 (en) 2005-04-12 2011-05-24 Lsi Corporation Method for coefficient bitdepth limitation, encoder and bitstream generation apparatus
JP2006304107A (en) 2005-04-22 2006-11-02 NTT Electronics Corp Coding device and program applied thereto
US8169953B2 (en) 2005-05-17 2012-05-01 Qualcomm Incorporated Method and apparatus for wireless multi-carrier communications
US7895250B2 (en) 2005-05-25 2011-02-22 Qualcomm Incorporated Fixed point integer division techniques for AC/DC prediction in video coding devices
JP4510701B2 (en) * 2005-05-31 2010-07-28 KDDI R&D Laboratories, Inc. Video encoding device
US8118676B2 (en) 2005-07-08 2012-02-21 Activevideo Networks, Inc. Video game system using pre-encoded macro-blocks
KR20070012201A (en) 2005-07-21 2007-01-25 LG Electronics Inc. Method for encoding and decoding video signal
US7881384B2 (en) 2005-08-05 2011-02-01 Lsi Corporation Method and apparatus for H.264 to MPEG-2 video transcoding
US7912127B2 (en) 2005-08-05 2011-03-22 Lsi Corporation H.264 to VC-1 and VC-1 to H.264 transcoding
US20070074265A1 (en) 2005-09-26 2007-03-29 Bennett James D Video processor operable to produce motion picture expert group (MPEG) standard compliant video stream(s) from video data and metadata
KR101441269B1 (en) * 2005-09-26 2014-09-18 Mitsubishi Electric Corporation Dynamic image decoding device and dynamic image decoding method
KR100654601B1 (en) * 2005-10-06 2006-12-08 Humax Co., Ltd. Device and method for merging different codecs
US8000539B2 (en) 2005-12-21 2011-08-16 Ntt Docomo, Inc. Geometrical image representation and compression
US8861585B2 (en) 2006-01-20 2014-10-14 Qualcomm Incorporated Method and apparatus for error resilience algorithms in wireless video communication
JP2007243427A (en) 2006-03-07 2007-09-20 Nippon Hoso Kyokai <Nhk> Encoder and decoder
US8848789B2 (en) 2006-03-27 2014-09-30 Qualcomm Incorporated Method and system for coding and decoding information associated with video compression
US8750387B2 (en) 2006-04-04 2014-06-10 Qualcomm Incorporated Adaptive encoder-assisted frame rate up conversion
US8494052B2 (en) 2006-04-07 2013-07-23 Microsoft Corporation Dynamic selection of motion estimation search ranges and extended motion vector ranges
US20070274396A1 (en) * 2006-05-26 2007-11-29 Ximin Zhang Complexity adaptive skip mode estimation for video encoding
KR100809298B1 (en) 2006-06-22 2008-03-04 Samsung Electronics Co., Ltd. Flag encoding method, flag decoding method, and apparatus thereof
US20080002770A1 (en) 2006-06-30 2008-01-03 Nokia Corporation Methods, apparatus, and a computer program product for providing a fast inter mode decision for video encoding in resource constrained devices
GB0619570D0 (en) 2006-10-04 2006-11-15 Univ Bristol Complexity scalable video transcoder and encoder
CN101175210B (en) 2006-10-30 2010-08-11 Institute of Computing Technology, Chinese Academy of Sciences Entropy decoding method and device used for decoding video estimation residual error coefficient
US8923393B2 (en) 2006-11-02 2014-12-30 Qualcomm Incorporated Apparatus and method of reduced reference frame search in video encoding
US7573407B2 (en) 2006-11-14 2009-08-11 Qualcomm Incorporated Memory efficient adaptive block coding
TWI368444B (en) * 2006-11-17 2012-07-11 Lg Electronics Inc Method and apparatus for decoding/encoding a video signal
EP2124343A4 (en) 2006-12-14 2012-01-11 Nec Corp Video encoding method, video encoding device, and video encoding program
US8804829B2 (en) 2006-12-20 2014-08-12 Microsoft Corporation Offline motion description for video generation
US8311120B2 (en) 2006-12-22 2012-11-13 Qualcomm Incorporated Coding mode selection using information of other coding modes
KR101356735B1 (en) 2007-01-03 2014-02-03 Samsung Electronics Co., Ltd. Method of estimating motion vector using global motion vector, apparatus, encoder, decoder and decoding method
US8335261B2 (en) * 2007-01-08 2012-12-18 Qualcomm Incorporated Variable length coding techniques for coded block patterns
CN101222641B (en) 2007-01-11 2011-08-24 Huawei Technologies Co., Ltd. Intra-frame prediction encoding and decoding method and device
JP4901772B2 (en) 2007-02-09 2012-03-21 Panasonic Corporation Moving picture coding method and moving picture coding apparatus
JP5413191B2 (en) 2007-03-20 2014-02-12 Fujitsu Limited Moving picture encoding method and apparatus, and moving picture decoding apparatus
WO2008120577A1 (en) 2007-03-29 2008-10-09 Kabushiki Kaisha Toshiba Image coding and decoding method, and apparatus
MX2009009947A (en) 2007-04-16 2009-09-24 Toshiba Kk Image encoding and image decoding method and device.
KR101305491B1 (en) 2007-04-17 2013-09-17 Humax Co., Ltd. Bitstream decoding device and method
CN100512442C (en) 2007-04-20 2009-07-08 Xi'an Jiaotong University Adaptive motion-compensated temporal filtering method
US8340183B2 (en) * 2007-05-04 2012-12-25 Qualcomm Incorporated Digital multimedia channel switching
US8488668B2 (en) 2007-06-15 2013-07-16 Qualcomm Incorporated Adaptive coefficient scanning for video coding
US7991237B2 (en) 2007-06-28 2011-08-02 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method and image decoding method
US7885819B2 (en) 2007-06-29 2011-02-08 Microsoft Corporation Bitstream syntax for multi-process audio decoding
US7895347B2 (en) 2007-07-27 2011-02-22 Red Hat, Inc. Compact encoding of arbitrary length binary objects
EP2183922A4 (en) 2007-08-16 2011-04-27 Nokia Corp A method and apparatuses for encoding and decoding an image
US8938009B2 (en) 2007-10-12 2015-01-20 Qualcomm Incorporated Layered encoded bitstream structure
BRPI0818444A2 (en) 2007-10-12 2016-10-11 Qualcomm Inc adaptive encoding of video block header information
US20100208827A1 (en) 2007-10-16 2010-08-19 Thomson Licensing Methods and apparatus for video encoding and decoding geometerically partitioned super macroblocks
UY31437A1 (en) 2007-10-29 2009-05-29 MYCOPLASMA BOVIS VACCINE AND SAME USE METHODS
WO2009110160A1 (en) 2008-03-07 2009-09-11 Kabushiki Kaisha Toshiba Dynamic image encoding/decoding method and device
US8982952B2 (en) * 2008-06-02 2015-03-17 Broadcom Corporation Method and system for using motion vector confidence to determine a fine motion estimation patch priority list for a scalable coder
WO2009151246A2 (en) 2008-06-09 2009-12-17 Lg Electronics Inc. Transmitting/receiving system and method of processing broadcast signal in transmitting/receiving system
KR20090129926A (en) 2008-06-13 2009-12-17 Samsung Electronics Co., Ltd. Method and apparatus for image encoding by dynamic unit grouping, and method and apparatus for image decoding by dynamic unit grouping
KR101517768B1 (en) * 2008-07-02 2015-05-06 Samsung Electronics Co., Ltd. Method and apparatus for encoding video and method and apparatus for decoding video
EP2150060A1 (en) 2008-07-28 2010-02-03 Alcatel, Lucent Method and arrangement for video encoding
EP2373031A1 (en) 2008-08-12 2011-10-05 Lg Electronics Inc. Method of decoding a video signal
WO2010036772A2 (en) 2008-09-26 2010-04-01 Dolby Laboratories Licensing Corporation Complexity allocation for video and image coding applications
US8503527B2 (en) 2008-10-03 2013-08-06 Qualcomm Incorporated Video coding with large macroblocks
US8483285B2 (en) * 2008-10-03 2013-07-09 Qualcomm Incorporated Video coding using transforms bigger than 4×4 and 8×8
US20100086031A1 (en) * 2008-10-03 2010-04-08 Qualcomm Incorporated Video coding with large macroblocks
US8619856B2 (en) * 2008-10-03 2013-12-31 Qualcomm Incorporated Video coding with large macroblocks
US8634456B2 (en) * 2008-10-03 2014-01-21 Qualcomm Incorporated Video coding with large macroblocks
US8879637B2 (en) 2008-10-06 2014-11-04 Lg Electronics Inc. Method and an apparatus for processing a video signal by which coding efficiency of a video signal can be raised by using a mixed prediction mode in predicting different macroblock sizes
US8265155B2 (en) 2009-01-05 2012-09-11 Electronics And Telecommunications Research Institute Method of block partition for H.264 inter prediction
KR101740039B1 (en) 2009-06-26 2017-05-25 Thomson Licensing Methods and apparatus for video encoding and decoding using adaptive geometric partitioning
KR101474756B1 (en) 2009-08-13 2014-12-19 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding image using large transform unit
KR101452859B1 (en) 2009-08-13 2014-10-23 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding motion vector
KR20110017719A (en) 2009-08-14 2011-02-22 Samsung Electronics Co., Ltd. Method and apparatus for video encoding, and method and apparatus for video decoding
KR101456498B1 (en) 2009-08-14 2014-10-31 Samsung Electronics Co., Ltd. Method and apparatus for video encoding considering scanning order of coding units with hierarchical structure, and method and apparatus for video decoding considering scanning order of coding units with hierarchical structure
US8896612B2 (en) * 2010-11-16 2014-11-25 Ncomputing Inc. System and method for on-the-fly key color generation

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008027192A2 (en) * 2006-08-25 2008-03-06 Thomson Licensing Methods and apparatus for reduced resolution partitioning

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
CHEN P ET AL: "Video coding using extended block sizes" 36. VCEG MEETING; 8-10-2008 - 10-10-2008; SAN DIEGO, US; (VIDEO CODING EXPERTS GROUP OF ITU-T SG.16), 15 October 2008 (2008-10-15), XP030003645 *
JVT: "Draft ITU-T Recommendation and Final Draft International Standard of Joint Video Specification (ITU-T Rec. H.264 | ISO/IEC 14496-10 AVC)" JOINT VIDEO TEAM (JVT) OF ISO/IEC MPEG & ITU-T VCEG(ISO/IEC JTC1/SC29/WG11 AND ITU-T SG16 Q6), no. JVT-G050r1, 14 March 2003 (2003-03-14), XP030005712 *
KIM J ET AL: "Enlarging MB size for high fidelity video coding beyond HD" 36. VCEG MEETING; 8-10-2008 - 10-10-2008; SAN DIEGO, US; (VIDEO CODING EXPERTS GROUP OF ITU-T SG.16), 5 October 2008 (2008-10-05), XP030003643 *
MA S ET AL: "High-definition video coding with super-macroblocks (Invited Paper)" VISUAL COMMUNICATIONS AND IMAGE PROCESSING; 30-1-2007 - 1-2-2007; SAN JOSE, 30 January 2007 (2007-01-30), XP030081117 *
NAITO S ET AL: "Efficient coding scheme for super high definition video based on extending H.264 high profile" PROCEEDINGS OF THE SPIE - THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING, SPIE, US, vol. 6077, no. 67727, 18 January 2006 (2006-01-18), pages 1-8, XP002538136 ISSN: 0277-786X *
QUALCOMM INC: "Video Coding Using Extended Block Sizes" ITU (INTERNATIONAL TELECOMMUNICATION UNION) STUDY GROUP 16,, vol. COM16 C123 E, 1 January 2009 (2009-01-01), page 4PP, XP007912516 *

Cited By (108)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11758194B2 (en) 2008-10-03 2023-09-12 Qualcomm Incorporated Device and method for video decoding video blocks
JP7184841B2 (en) 2009-07-01 2022-12-06 InterDigital VC Holdings, Inc. Method and apparatus for signaling intra-prediction for large blocks for video encoders and decoders
JP2020115669A (en) * 2009-07-01 2020-07-30 InterDigital VC Holdings, Inc. Method and apparatus for signaling intra prediction for large blocks for video encoders and decoders
US9544588B2 (en) 2009-08-13 2017-01-10 Samsung Electronics Co., Ltd. Method and apparatus for encoding/decoding motion vector
JP2015029335A (en) * 2009-08-13 2015-02-12 Samsung Electronics Co., Ltd. Image decoding method
US10110902B2 (en) 2009-08-13 2018-10-23 Samsung Electronics Co., Ltd. Method and apparatus for encoding/decoding motion vector
US9883186B2 (en) 2009-08-13 2018-01-30 Samsung Electronics Co., Ltd. Method and apparatus for encoding/decoding motion vector
JP2015019420A (en) * 2009-08-13 2015-01-29 Samsung Electronics Co., Ltd. Image decoding method
US9313489B2 (en) 2009-08-14 2016-04-12 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
EP2452494A2 (en) * 2009-08-14 2012-05-16 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
EP3267683A1 (en) * 2009-08-14 2018-01-10 Samsung Electronics Co., Ltd. Method and apparatus for encoding video
EP2863639A3 (en) * 2009-08-14 2015-07-08 Samsung Electronics Co., Ltd Method and apparatus for encoding video, and method and apparatus for decoding video
EP2665267A3 (en) * 2009-08-14 2014-02-19 Samsung Electronics Co., Ltd Method and apparatus for encoding video, and method and apparatus for decoding video
EP3101895A1 (en) * 2009-08-14 2016-12-07 Samsung Electronics Co., Ltd. Method for decoding video
EP2882188A3 (en) * 2009-08-14 2015-06-17 Samsung Electronics Co., Ltd Method and apparatus for encoding video, and method and apparatus for decoding video
US9374579B2 (en) 2009-08-14 2016-06-21 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
US8842734B2 (en) 2009-08-14 2014-09-23 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
US9313490B2 (en) 2009-08-14 2016-04-12 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
EP2665268A3 (en) * 2009-08-14 2014-02-19 Samsung Electronics Co., Ltd Method and apparatus for encoding video, and method and apparatus for decoding video
US9307238B2 (en) 2009-08-14 2016-04-05 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
EP2804383A1 (en) * 2009-08-14 2014-11-19 Samsung Electronics Co., Ltd Method and apparatus for encoding video, and method and apparatus for decoding video
EP3448026A1 (en) * 2009-08-14 2019-02-27 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and corresponding storage medium
EP2452494A4 (en) * 2009-08-14 2014-02-19 Samsung Electronics Co Ltd Method and apparatus for encoding video, and method and apparatus for decoding video
US8953682B2 (en) 2009-08-14 2015-02-10 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
EP2940996A1 (en) * 2009-10-23 2015-11-04 Samsung Electronics Co., Ltd Method and apparatus for encoding video and method and apparatus for decoding video, based on hierarchical structure of coding unit
EP2838267A3 (en) * 2009-10-23 2015-04-08 Samsung Electronics Co., Ltd Method and apparatus for encoding video and method and apparatus for decoding video, based on hierarchical structure of coding unit
EP2940997A1 (en) * 2009-10-23 2015-11-04 Samsung Electronics Co., Ltd Method and apparatus for encoding video and method and apparatus for decoding video, based on hierarchical structure of coding unit
JP2015144472A (en) * 2009-10-23 2015-08-06 Samsung Electronics Co., Ltd. Method for decoding video and apparatus for decoding video
US9414055B2 (en) 2009-10-23 2016-08-09 Samsung Electronics Co., Ltd. Method and apparatus for encoding video and method and apparatus for decoding video, based on hierarchical structure of coding unit
EP3261344A1 (en) * 2009-10-23 2017-12-27 Samsung Electronics Co., Ltd Apparatus for decoding video, based on hierarchical structure of coding unit
EP2489186A4 (en) * 2009-10-23 2015-07-08 Samsung Electronics Co Ltd Method and apparatus for encoding video and method and apparatus for decoding video, based on hierarchical structure of coding unit
EP2897364A1 (en) * 2009-10-23 2015-07-22 Samsung Electronics Co., Ltd Method and apparatus for encoding video and method and apparatus for decoding video, based on hierarchical structure of coding unit
JP2015136156A (en) * 2009-10-23 2015-07-27 Samsung Electronics Co., Ltd. Method and apparatus for decoding video data
JP2015144471A (en) * 2009-10-23 2015-08-06 Samsung Electronics Co., Ltd. Method and apparatus for decoding video data
JP2015144469A (en) * 2009-10-23 2015-08-06 Samsung Electronics Co., Ltd. Method and apparatus for decoding video data
JP2015144470A (en) * 2009-10-23 2015-08-06 Samsung Electronics Co., Ltd. Method and apparatus for decoding video data
US9025667B2 (en) 2009-12-08 2015-05-05 Samsung Electronics Co., Ltd. Method and apparatus for encoding video by motion prediction using arbitrary partition, and method and apparatus for decoding video by motion prediction using arbitrary partition
US9294780B2 (en) 2009-12-08 2016-03-22 Samsung Electronics Co., Ltd. Method and apparatus for encoding video by motion prediction using arbitrary partition, and method and apparatus for decoding video by motion prediction using arbitrary partition
US10448042B2 (en) 2009-12-08 2019-10-15 Samsung Electronics Co., Ltd. Method and apparatus for encoding video by motion prediction using arbitrary partition, and method and apparatus for decoding video by motion prediction using arbitrary partition
AU2010328813B2 (en) * 2009-12-08 2014-06-12 Samsung Electronics Co., Ltd. Method and apparatus for encoding video by motion prediction using arbitrary partition, and method and apparatus for decoding video by motion prediction using arbitrary partition
US8938006B2 (en) 2009-12-08 2015-01-20 Samsung Electronics Co., Ltd. Method and apparatus for encoding video by motion prediction using arbitrary partition, and method and apparatus for decoding video by motion prediction using arbitrary partition
US8780993B2 (en) 2009-12-08 2014-07-15 Samsung Electronics Co., Ltd. Method and apparatus for encoding video by motion prediction using arbitrary partition, and method and apparatus for decoding video by motion prediction using arbitrary partition
US8885723B2 (en) 2009-12-08 2014-11-11 Samsung Electronics Co., Ltd. Method and apparatus for encoding video by motion prediction using arbitrary partition, and method and apparatus for decoding video by motion prediction using arbitrary partition
US8885725B2 (en) 2009-12-08 2014-11-11 Samsung Electronics Co., Ltd. Method and apparatus for encoding video by motion prediction using arbitrary partition, and method and apparatus for decoding video by motion prediction using arbitrary partition
US8885724B2 (en) 2009-12-08 2014-11-11 Samsung Electronics Co., Ltd. Method and apparatus for encoding video by motion prediction using arbitrary partition, and method and apparatus for decoding video by motion prediction using arbitrary partition
JP2016129363A (en) * 2010-04-13 2016-07-14 Samsung Electronics Co., Ltd. Video-encoding method and apparatus of the same using prediction units based on encoding units with tree structure, and video-decoding method and apparatus of the same
US9654790B2 (en) 2010-04-13 2017-05-16 Samsung Electronics Co., Ltd. Video-encoding method and video-encoding apparatus based on encoding units determined in accordance with a tree structure, and video-decoding method and video-decoding apparatus based on encoding units determined in accordance with a tree structure
US9936216B2 (en) 2010-04-13 2018-04-03 Samsung Electronics Co., Ltd. Video-encoding method and video-encoding apparatus using prediction units based on encoding units determined in accordance with a tree structure, and video-decoding method and video-decoding apparatus using prediction units based on encoding units determined in accordance with a tree structure
US10412411B2 (en) 2010-04-13 2019-09-10 Samsung Electronics Co., Ltd. Video-encoding method and video-encoding apparatus using prediction units based on encoding units determined in accordance with a tree structure, and video-decoding method and video-decoding apparatus using prediction units based on encoding units determined in accordance with a tree structure
US10432965B2 (en) 2010-04-13 2019-10-01 Samsung Electronics Co., Ltd. Video-encoding method and video-encoding apparatus based on encoding units determined in accordance with a tree structure, and video-decoding method and video-decoding apparatus based on encoding units determined in accordance with a tree structure
US9942564B2 (en) 2010-04-13 2018-04-10 Samsung Electronics Co., Ltd. Video-encoding method and video-encoding apparatus based on encoding units determined in accordance with a tree structure, and video-decoding method and video-decoding apparatus based on encoding units determined in accordance with a tree structure
US10027972B2 (en) 2010-04-13 2018-07-17 Samsung Electronics Co., Ltd. Video encoding method and video encoding apparatus and video decoding method and video decoding apparatus, which perform deblocking filtering based on tree-structure encoding units
US10306262B2 (en) 2010-04-13 2019-05-28 Samsung Electronics Co., Ltd. Video encoding method and video encoding apparatus and video decoding method and video decoding apparatus, which perform deblocking filtering based on tree-structure encoding units
US9712823B2 (en) 2010-04-13 2017-07-18 Samsung Electronics Co., Ltd. Video-encoding method and video-encoding apparatus using prediction units based on encoding units determined in accordance with a tree structure, and video-decoding method and video-decoding apparatus using prediction units based on encoding units determined in accordance with a tree structure
US9712822B2 (en) 2010-04-13 2017-07-18 Samsung Electronics Co., Ltd. Video encoding method and video encoding apparatus and video decoding method and video decoding apparatus, which perform deblocking filtering based on tree-structure encoding units
JP2018026824A (en) * 2010-06-10 2018-02-15 Thomson Licensing Methods and apparatus for determining quantization parameter predictors from plural neighboring quantization parameters
US11722669B2 (en) 2010-06-10 2023-08-08 Interdigital Vc Holdings, Inc. Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
JP2020058036A (en) * 2010-06-10 2020-04-09 インターデジタル ヴイシー ホールディングス, インコーポレイテッド Method and apparatus for determining quantization parameter predictors from multiple neighboring quantization parameters
US10334247B2 (en) 2010-06-10 2019-06-25 Interdigital Vc Holdings, Inc. Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
US10547840B2 (en) 2010-06-10 2020-01-28 Interdigital Vc Holdings, Inc. Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
US10742981B2 (en) 2010-06-10 2020-08-11 Interdigital Vc Holdings, Inc. Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
US11381818B2 (en) 2010-06-10 2022-07-05 Interdigital Vc Holdings, Inc. Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
US9749631B2 (en) 2010-06-10 2017-08-29 Thomson Licensing Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
JP2016136760A (en) * 2010-06-10 2016-07-28 Thomson Licensing Methods and apparatus for determining quantization parameter predictors from plural neighboring quantization parameters
JP2013531942A (en) * 2010-06-10 2013-08-08 Thomson Licensing Method and apparatus for determining a quantization parameter predictor from a plurality of adjacent quantization parameters
US9235774B2 (en) 2010-06-10 2016-01-12 Thomson Licensing Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
US9300957B2 (en) 2010-09-30 2016-03-29 Samsung Electronics Co., Ltd. Video encoding method for encoding hierarchical-structure symbols and a device therefor, and video decoding method for decoding hierarchical-structure symbols and a device therefor
EP2624557A4 (en) * 2010-09-30 2016-02-24 Samsung Electronics Co Ltd Video encoding method for encoding hierarchical-structure symbols and a device therefor, and video decoding method for decoding hierarchical-structure symbols and a device therefor
EP3404918A1 (en) * 2010-09-30 2018-11-21 Samsung Electronics Co., Ltd. Video decoding method for decoding hierarchical-structure symbols
US11706430B2 (en) 2010-10-04 2023-07-18 Electronics And Telecommunications Research Institute Method for encoding/decoding block information using quad tree, and device for using same
US9532059B2 (en) 2010-10-05 2016-12-27 Google Technology Holdings LLC Method and apparatus for spatial scalability for video coding
JP2013546257A (en) * 2010-11-01 2013-12-26 Qualcomm Incorporated Joint coding of syntax elements for video coding
US9172963B2 (en) 2010-11-01 2015-10-27 Qualcomm Incorporated Joint coding of syntax elements for video coding
EP2665273A4 (en) * 2011-01-13 2014-07-09 Nec Corp Video encoding device, video decoding device, video encoding method, video decoding method, and program
EP2665272A4 (en) * 2011-01-13 2014-07-09 Nec Corp Video encoding device, video decoding device, video encoding method, video decoding method, and program
CN108111847A (en) * 2011-01-13 2018-06-01 NEC Corporation Video decoding apparatus and video decoding method
CN108093255A (en) * 2011-01-13 2018-05-29 NEC Corporation Video encoder and method for video coding
EP2665272A1 (en) * 2011-01-13 2013-11-20 Nec Corporation Video encoding device, video decoding device, video encoding method, video decoding method, and program
US11943449B2 (en) 2011-01-13 2024-03-26 Nec Corporation Video decoding device, and video encoding method performing entropy-decoding process for inter prediction unit partition type syntax
CN108055538A (en) * 2011-01-13 2018-05-18 NEC Corporation Video encoder and method for video coding
KR101935217B1 (en) * 2011-01-13 2019-01-03 NEC Corporation Video encoding device and video encoding method
EP2665273A1 (en) * 2011-01-13 2013-11-20 Nec Corporation Video encoding device, video decoding device, video encoding method, video decoding method, and program
KR20180043395A (en) * 2011-01-13 2018-04-27 닛본 덴끼 가부시끼가이샤 Video encoding device and video encoding method
CN105187825B (en) * 2011-01-13 2018-03-09 NEC Corporation Video decoding apparatus and video decoding method
CN105187825A (en) * 2011-01-13 2015-12-23 日本电气株式会社 Video encoding device, video decoding device, video encoding method, video decoding method, and program
US11665352B2 (en) 2011-01-13 2023-05-30 Nec Corporation Video encoding device, video decoding device, video encoding method, video decoding method, and program using inter prediction
US11647205B2 (en) 2011-01-13 2023-05-09 Nec Corporation Video encoding device, video decoding device, video encoding method, video decoding method, and program using inter prediction
US9712826B2 (en) 2011-01-13 2017-07-18 Nec Corporation Video encoding device, video decoding device, video encoding method, video decoding method, and program
US11582461B2 (en) 2011-01-13 2023-02-14 Nec Corporation Video encoding device, video decoding device, video encoding method, video decoding method, and program restricts inter-prediction unit partitions based on coding unit depth
CN103314590A (en) * 2011-01-13 2013-09-18 日本电气株式会社 Video encoding device, video decoding device, video encoding method, video decoding method, and program
CN105208393A (en) * 2011-01-13 2015-12-30 日本电气株式会社 Video Encoding Device, Video Decoding Device, Video Encoding Method, Video Decoding Method, And Program
US10841588B2 (en) 2011-01-13 2020-11-17 Nec Corporation Video encoding device, video decoding device, video encoding method, video decoding method, and program using inter prediction
US10841590B2 (en) 2011-01-13 2020-11-17 Nec Corporation Video decoding device, video decoding method, and program
EP3833026A1 (en) * 2011-01-13 2021-06-09 NEC Corporation Video decoding device, video decoding method, and program
EP3860125A1 (en) * 2011-01-13 2021-08-04 NEC Corporation Video decoding device and video decoding method
CN108055538B (en) * 2011-01-13 2021-10-29 NEC Corporation Video encoding apparatus and video encoding method
CN108111847B (en) * 2011-01-13 2021-11-23 NEC Corporation Video decoding apparatus and video decoding method
CN108093255B (en) * 2011-01-13 2021-12-03 NEC Corporation Video encoding apparatus and video encoding method
US11323720B2 (en) 2011-01-13 2022-05-03 Nec Corporation Video encoding device, video decoding device, video encoding method, video decoding method, and program using inter prediction
EP2899976A1 (en) * 2011-01-13 2015-07-29 Nec Corporation Video encoding device, video decoding device, video encoding method, video decoding method, and program
CN103155561A (en) * 2011-03-11 2013-06-12 通用仪表公司 Method and apparatus for spatial scalability for hevc
US9860528B2 (en) 2011-06-10 2018-01-02 Hfi Innovation Inc. Method and apparatus of scalable video coding
US9986245B2 (en) 2011-06-30 2018-05-29 Sk Telecom Co., Ltd. Method and apparatus for decoding a video using an intra prediction
US9565443B2 (en) 2011-06-30 2017-02-07 Sk Telecom Co., Ltd. Method and apparatus for coding/decoding through high-speed coding unit mode decision
CN103650499A (en) * 2011-06-30 2014-03-19 Sk电信有限公司 Method and apparatus for coding/decoding through high-speed coding unit mode decision
US10116942B2 (en) 2011-06-30 2018-10-30 Sk Telecom Co., Ltd. Method and apparatus for decoding a video using an intra prediction
EP2736254A1 (en) * 2011-07-22 2014-05-28 Hitachi, Ltd. Video decoding method and image encoding method
EP2736254A4 (en) * 2011-07-22 2015-04-15 Hitachi Ltd Video decoding method and image encoding method

Also Published As

Publication number Publication date
US8503527B2 (en) 2013-08-06
CN103957406B (en) 2017-08-22
EP2347591B1 (en) 2020-04-08
TW201028008A (en) 2010-07-16
KR101222400B1 (en) 2013-01-16
CN102172021A (en) 2011-08-31
US20130308701A1 (en) 2013-11-21
US10225581B2 (en) 2019-03-05
US20150139337A1 (en) 2015-05-21
WO2010039728A3 (en) 2010-07-08
EP2347591B2 (en) 2023-04-05
US11039171B2 (en) 2021-06-15
US20100086032A1 (en) 2010-04-08
US20230379504A1 (en) 2023-11-23
CN103957406A (en) 2014-07-30
JP5551232B2 (en) 2014-07-16
US9930365B2 (en) 2018-03-27
US11758194B2 (en) 2023-09-12
US20180176600A1 (en) 2018-06-21
JP2013085280A (en) 2013-05-09
US8948258B2 (en) 2015-02-03
EP2347591A2 (en) 2011-07-27
US9788015B2 (en) 2017-10-10
JP5944423B2 (en) 2016-07-05
CN102172021B (en) 2014-06-25
US20170366825A1 (en) 2017-12-21
US20190158882A1 (en) 2019-05-23
JP2012504908A (en) 2012-02-23
US20210344963A1 (en) 2021-11-04
TWI392370B (en) 2013-04-01
JP2014143706A (en) 2014-08-07
KR20110063855A (en) 2011-06-14

Similar Documents

Publication Publication Date Title
US11758194B2 (en) Device and method for video decoding video blocks
AU2009298648B2 (en) Video coding with large macroblocks
CA2738504C (en) Video coding with large macroblocks
US20100086031A1 (en) Video coding with large macroblocks
WO2011100465A1 (en) Video coding with large macroblocks

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase (Ref document number: 200980139141.9; Country of ref document: CN)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 09793127; Country of ref document: EP; Kind code of ref document: A2)
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
WWE Wipo information: entry into national phase (Ref document number: 606/MUMNP/2011; Country of ref document: IN)
NENP Non-entry into the national phase (Ref country code: DE)
WWE Wipo information: entry into national phase (Ref document number: 2011530142; Country of ref document: JP)
ENP Entry into the national phase (Ref document number: 20117010099; Country of ref document: KR; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: 2009793127; Country of ref document: EP)