EP2156672A1 - Video coding mode selection using estimated coding costs

Video coding mode selection using estimated coding costs

Info

Publication number
EP2156672A1
Authority
EP
European Patent Office
Prior art keywords
coding
transform coefficients
block
residual data
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP07761930A
Other languages
English (en)
French (fr)
Inventor
Sitaraman Ganapathy Subramania
Fang Shi
Peisong Chen
Seyfullah Halit Oguz
Scott T. Swazey
Vinod Kaushik
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qualcomm Inc
Original Assignee
Qualcomm Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Inc filed Critical Qualcomm Inc
Publication of EP2156672A1 (de)
Legal status: Withdrawn

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189 Adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/19 Adaptive coding using optimisation based on Lagrange multipliers
    • H04N19/102 Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/107 Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/109 Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/134 Adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146 Data rate or code amount at the encoder output
    • H04N19/147 Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/149 Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
    • H04N19/169 Adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Adaptive coding where the coding unit is an image region, e.g. an object
    • H04N19/176 Adaptive coding where the coding unit is an image region, the region being a block, e.g. a macroblock
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Transform coding in combination with predictive coding

Definitions

  • the disclosure relates to video coding and, more particularly, to estimating coding costs to code video sequences.
  • Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless communication devices, personal digital assistants (PDAs), laptop computers, desktop computers, video game consoles, digital cameras, digital recording devices, cellular or satellite radio telephones, and the like. Digital video devices can provide significant improvements over conventional analog video systems in processing and transmitting video sequences.
  • Video coding standards include those of the Moving Picture Experts Group (MPEG), such as MPEG-1, MPEG-2 and MPEG-4, as well as ITU-T H.264, which corresponds to ISO/IEC MPEG-4, Part 10, Advanced Video Coding (AVC).
  • In block-based coding, frames are divided into discrete blocks of pixels, and the blocks of pixels are coded based on differences with other blocks, which may be located within the same frame or in a different frame.
  • Some blocks of pixels, often referred to as "macroblocks," comprise a grouping of sub-blocks of pixels.
  • a 16x16 macroblock may comprise four 8x8 sub-blocks.
  • the sub-blocks may be coded separately.
  • the H.264 standard permits coding of blocks with a variety of different sizes, e.g., 16x16, 16x8, 8x16, 8x8, 4x4, 8x4, and 4x8.
  • sub-blocks of any size may be included within a macroblock, e.g., 2x16, 16x2, 2x2, 4x16, and 8x2.
  • a method for processing digital video data comprises identifying one or more transform coefficients for residual data of a block of pixels that will remain non-zero when quantized, estimating a number of bits associated with coding of the residual data based on at least the identified transform coefficients, and estimating a coding cost for coding the block of pixels based on at least the estimated number of bits associated with coding the residual data.
  • an apparatus for processing digital video data comprises a transform module that generates transform coefficients for residual data of a block of pixels, a bit estimate module that identifies one or more of the transform coefficients that will remain non-zero when quantized and estimates a number of bits associated with coding of the residual data based on at least the identified transform coefficients, and a control module that estimates a coding cost for coding the block of pixels based on at least the estimated number of bits associated with coding the residual data.
  • an apparatus for processing digital video data comprises means for identifying one or more transform coefficients for residual data of a block of pixels that will remain non-zero when quantized, means for estimating a number of bits associated with coding of the residual data based on at least the identified transform coefficients, and means for estimating a coding cost for coding the block of pixels based on at least the estimated number of bits associated with coding the residual data.
  • a computer-program product for processing digital video data comprises a computer readable medium having instructions thereon.
  • the instructions include code for identifying one or more transform coefficients for residual data of a block of pixels that will remain non-zero when quantized, code for estimating a number of bits associated with coding of the residual data based on at least the identified transform coefficients, and code for estimating a coding cost for coding the block of pixels based on at least the estimated number of bits associated with coding the residual data.
  • FIG. 1 is a block diagram illustrating a video coding system that employs the coding cost estimate techniques described herein.
  • FIG. 2 is a block diagram illustrating an exemplary encoding module in further detail.
  • FIG. 3 is a block diagram illustrating another exemplary encoding module in further detail.
  • FIG. 4 is a flow diagram illustrating exemplary operation of an encoding module selecting an encoding mode based on estimated coding costs.
  • FIG. 5 is a flow diagram illustrating exemplary operation of an encoding module estimating the number of bits associated with coding the residual data of a block without quantizing or encoding of the residual data.
  • FIG. 6 is a flow diagram illustrating exemplary operation of an encoding module estimating the number of bits associated with coding the residual data of a block without encoding the residual data.
  • This disclosure describes techniques for video coding mode selection using estimated coding costs.
  • an encoding device may attempt to select a coding mode for coding blocks of pixels that codes the data of the blocks with high efficiency.
  • the encoding device may perform coding mode selection based on at least estimates of coding cost for at least a portion of the possible modes.
  • the encoding device estimates the coding cost for the different modes without actually coding the blocks.
  • the encoding device may estimate the coding cost for the modes without quantizing the data of the block for each mode. In this manner, the coding cost estimation techniques of this disclosure reduce the amount of computationally intensive calculations needed to perform effective mode selection.
  • FIG. 1 is a block diagram illustrating a multimedia coding system 10 that employs coding cost estimate techniques as described herein.
  • Coding system 10 includes an encoding device 12 and a decoding device 14 connected by a transmission channel 16.
  • Encoding device 12 encodes one or more sequences of digital multimedia data and transmits the encoded sequences over transmission channel 16 to decoding device 14 for decoding and, possibly, presentation to a user of decoding device 14.
  • Transmission channel 16 may comprise any wired or wireless medium, or a combination thereof.
  • Encoding device 12 may form part of a broadcast network component used to broadcast one or more channels of multimedia data.
  • encoding device 12 may form part of a wireless base station, server, or any infrastructure node that is used to broadcast one or more channels of encoded multimedia data to wireless devices.
  • encoding device 12 may transmit the encoded data to a plurality of wireless devices, such as decoding device 14.
  • a single decoding device 14, however, is illustrated in FIG. 1 for simplicity.
  • encoding device 12 may comprise a handset that transmits locally captured video for video telephony or other similar applications.
  • Decoding device 14 may comprise a user device that receives the encoded multimedia data transmitted by encoding device 12 and decodes the multimedia data for presentation to a user.
  • decoding device 14 may be implemented as part of a digital television, a wireless communication device, a gaming device, a personal digital assistant (PDA), a laptop computer or desktop computer, a digital music and video device, such as those sold under the trademark "iPod," or a radiotelephone such as a cellular, satellite or terrestrial-based radiotelephone, or other wireless mobile terminal equipped for video and/or audio streaming, video telephony, or both.
  • Decoding device 14 may be associated with a mobile or stationary device. In a broadcast application, encoding device 12 may transmit encoded video and/or audio to multiple decoding devices 14 associated with multiple users.
  • multimedia coding system 10 may support video telephony or video streaming according to the Session Initiated Protocol (SIP), International Telecommunication Union Standardization Sector (ITU-T) H.323 standard, ITU-T H.324 standard, or other standards.
  • encoding device 12 may generate encoded multimedia data according to a video compression standard, such as Moving Picture Experts Group (MPEG)-2, MPEG-4, ITU-T H.263, or ITU-T H.264, which corresponds to MPEG-4, Part 10, Advanced Video Coding (AVC).
  • encoding device 12 and decoding device 14 may be integrated with an audio encoder and decoder, respectively, and include appropriate multiplexer-demultiplexer (MUX-DEMUX) modules, or other hardware, firmware, or software, to handle encoding of both audio and video in a common data sequence or separate data sequences.
  • MUX-DEMUX modules may conform to the ITU H.223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP).
  • this disclosure contemplates application to Enhanced H.264 video coding for delivering real-time multimedia services in terrestrial mobile multimedia multicast (TM3) systems using the Forward Link Only (FLO) Air Interface Specification, "Forward Link Only Air Interface Specification for Terrestrial Mobile Multimedia Multicast," published as Technical Standard TIA-1099, Aug. 2006 (the "FLO Specification").
  • the coding cost estimation techniques described in this disclosure are not limited to any particular type of broadcast, multicast, unicast, or point- to-point system.
  • encoding device 12 includes an encoding module 18 and a transmitter 20.
  • Encoding module 18 receives one or more input multimedia sequences that can include, in the case of video encoding, one or more frames of data and selectively encodes the frames of the received multimedia sequences.
  • Encoding module 18 receives the input multimedia sequences from one or more sources (not shown in FIG. 1).
  • encoding module 18 may receive the input multimedia sequences from one or more video content providers, e.g., via satellite.
  • encoding module 18 may receive the multimedia sequences from an image capture device (not shown in FIG. 1) integrated within encoding device 12 or coupled to encoding device 12.
  • encoding module 18 may receive the multimedia sequences from a memory or archive (not shown in FIG. 1) within encoding device 12 or coupled to encoding device 12.
  • the multimedia sequences may comprise live real-time or near real-time video, audio, or video and audio sequences to be coded and transmitted as a broadcast or on-demand, or may comprise pre-recorded and stored video, audio, or video and audio sequences to be coded and transmitted as a broadcast or on-demand.
  • at least a portion of the multimedia sequences may be computer-generated, such as in the case of gaming.
  • encoding module 18 encodes and transmits a plurality of coded frames to decoding device 14 via transmitter 20.
  • Encoding module 18 may encode the frames of the input multimedia sequences as intra-coded frames, inter-coded frames or a combination thereof.
  • Frames encoded using intra-coding techniques are coded without reference to other frames, and are often referred to as intra ("I") frames.
  • Frames encoded using inter-coding techniques are coded with reference to one or more other frames.
  • the inter-coded frames may include one or more predictive ("P") frames, bi-directional ("B”) frames, or a combination thereof.
  • P frames are encoded with reference to at least one temporally prior frame while B frames are encoded with reference to at least one temporally future frame.
  • B frames may be encoded with reference to at least one temporally future frame and at least one temporally prior frame.
  • Encoding module 18 may be further configured to partition a frame into a plurality of blocks and encode each of the blocks separately.
  • encoding module 18 may partition the frame into a plurality of 16x16 blocks.
  • Some blocks, often referred to as “macroblocks,” comprise a grouping of sub-partition blocks (referred to herein as "sub-blocks").
  • a 16x16 macroblock may comprise four 8x8 sub-blocks, or other sub-partition blocks.
  • the H.264 standard permits encoding of blocks with a variety of different sizes, e.g., 16x16, 16x8, 8x16, 8x8, 4x4, 8x4, and 4x8.
  • encoding module 18 may be configured to divide the frame into several blocks and encode each of the blocks of pixels as intra-coded blocks or inter-coded blocks, each of which may be referred to generally as a block.
  • Encoding module 18 may support a plurality of coding modes. Each of the modes may correspond to a different combination of block sizes and coding techniques. In the case of the H.264 standard, for example, there are seven inter modes and thirteen intra modes.
  • the seven variable block-size inter modes include a SKIP mode, 16x16 mode, 16x8 mode, 8x16 mode, 8x8 mode, 8x4 mode, 4x8 mode, and 4x4 mode.
  • the thirteen intra modes include an INTRA 4x4 mode for which there are nine possible interpolation directions and an INTRA 16x16 mode for which there are four possible interpolation directions.
  • encoding module 18 attempts to select the mode that codes the data of the blocks with high efficiency. To this end, encoding module 18 estimates, for each of the blocks, a coding cost for at least a portion of the modes. Encoding module 18 estimates the coding cost as a function of rate and distortion. In accordance with the techniques described herein, encoding module 18 estimates the coding cost for the modes without actually coding the blocks to determine the rate and distortion metrics. In this manner, encoding module 18 may select one of the modes based on at least the coding cost without performing the computationally complex coding of the data of the block for each mode.
  • encoding module 18 may estimate the coding cost for the modes without quantizing the data of the block for each mode. In this manner, the coding cost estimation techniques of this disclosure reduce the amount of computationally intensive calculations needed to perform effective mode selection.
  • Encoding device 12 applies the selected mode to code the blocks of the frames and transmits the coded frames of data via transmitter 20.
  • Transmitter 20 may include appropriate modem and driver circuitry software and/or firmware to transmit encoded multimedia over transmission channel 16.
  • In wireless applications, transmitter 20 includes RF circuitry to transmit wireless data carrying the encoded multimedia data.
  • Decoding device 14 includes a receiver 22 and a decoding module 24.
  • Decoding device 14 receives the encoded data from encoding device 12 via receiver 22.
  • receiver 22 may include appropriate modem and driver circuitry software and/or firmware to receive encoded multimedia over transmission channel 16, and may include RF circuitry to receive wireless data carrying the encoded multimedia data in wireless applications.
  • Decoding module 24 decodes the coded frames of data received via receiver 22.
  • Decoding device 14 may further present the decoded frame of data to a user via a display (not shown) that may be either integrated within decoding device 14 or provided as a discrete device coupled to decoding device 14 via a wired or wireless connection.
  • encoding device 12 and decoding device 14 each may include reciprocal transmit and receive circuitry so that each may serve as both a transmit device and a receive device for encoded multimedia and other information transmitted over transmission channel 16.
  • both encoding device 12 and decoding device 14 may transmit and receive multimedia sequences and thus participate in two-way communications.
  • the illustrated components of coding system 10 may be integrated as part of an encoder/decoder (CODEC).
  • encoding device 12 and decoding device 14 are exemplary of those applicable to implement the techniques described herein.
  • Encoding device 12 and decoding device 14 may include many other components, if desired.
  • encoding device 12 may include a plurality of encoding modules that each receive one or more sequences of multimedia data and encode the respective sequences of multimedia data in accordance with the techniques described herein.
  • encoding device 12 may further include at least one multiplexer to combine the segments of data for transmission.
  • encoding device 12 and decoding device 14 may include appropriate modulation, demodulation, frequency conversion, filtering, and amplifier components for transmission and reception of encoded video, including radio frequency (RF) wireless components and antennas, as applicable.
  • FIG. 2 is a block diagram illustrating an exemplary encoding module 30 in further detail.
  • Encoding module 30 may, for example, represent encoding module 18 of encoding device 12 of FIG. 1.
  • encoding module 30 includes a control module 32 that receives input frames of multimedia data of one or more multimedia sequences from one or more sources, and processes the frames of the received multimedia sequences.
  • control module 32 analyzes the incoming frames of the multimedia sequences and determines whether to encode or skip the incoming frames based on analysis of the frames.
  • encoding device 12 may encode the information contained in the multimedia sequences at a reduced frame rate using frame skipping to conserve bandwidth across transmission channel 16.
  • control module 32 may also be configured to determine whether to encode the frames as I frames, P frames, or B frames. Control module 32 may determine to encode an incoming frame as an I frame at the start of a multimedia sequence, at a scene change within the sequence, for use as a channel switch frame, or for use as an intra refresh frame. Otherwise, control module 32 encodes the frame as an inter-coded frame (i.e., a P frame or B frame) to reduce the amount of bandwidth associated with coding the frame.
  • Control module 32 may be further configured to partition the frames into a plurality of blocks and select a coding mode, such as one of the H.264 coding modes described above, for each of the blocks.
  • encoding module 30 may estimate the coding cost for at least a portion of the modes to assist in selecting a most efficient one of the coding modes.
  • After selecting the coding mode for use in coding one of the blocks, encoding module 30 generates residual data for the block.
  • For a block selected to be intra-coded, spatial prediction module 34 generates the residual data for the block. Spatial prediction module 34 may, for example, generate a predicted version of the block via interpolation using one or more adjacent blocks and the interpolation directionality corresponding to the selected intra-coding mode. Spatial prediction module 34 may then compute a difference between the block of the input frame and the predicted block. This difference is referred to as residual data or residual coefficients.
  • For a block selected to be inter-coded, motion estimation module 36 and motion compensation module 38 generate the residual data for the block.
  • motion estimation module 36 identifies at least one reference frame and searches for a block in the reference frame that is a best match to the block in the input frame.
  • Motion estimation module 36 computes a motion vector to represent an offset between the location of the block in the input frame and the location of the identified block in the reference frame.
  • Motion compensation module 38 computes a difference between the block of the input frame and the identified block in the reference frame to which the motion vector points. This difference is the residual data for the block.
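  • As a rough illustration of the residual computation just described, the following sketch (not from the patent; the function name and the integer-pel motion vector are assumptions) subtracts the motion-compensated prediction from the input block:

```python
import numpy as np

def motion_compensated_residual(block, reference_frame, motion_vector, block_position):
    """Difference between an input block and the reference block that the
    motion vector points to; this difference is the residual data."""
    row, col = block_position          # top-left corner of the block in the input frame
    d_row, d_col = motion_vector       # integer-pel offset into the reference frame
    height, width = block.shape
    predicted = reference_frame[row + d_row:row + d_row + height,
                                col + d_col:col + d_col + width]
    return block.astype(np.int32) - predicted.astype(np.int32)
```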
  • Encoding module 30 also includes a transform module 40, a quantization module 46 and an entropy encoder 48.
  • Transform module 40 transforms the residual data of the block in accordance with a transform function.
  • transform module 40 applies an integer transform, such as a 4x4 or 8x8 integer transform or a Discrete Cosine Transform (DCT), to the residual data to generate transform coefficients for the residual data.
  • Quantization module 46 quantizes the transform coefficients and provides the quantized transform coefficients to entropy encoder 48.
  • Entropy encoder 48 encodes the quantized transform coefficients using a context-adaptive coding technique, such as context-adaptive variable-length coding (CAVLC) or context-adaptive binary arithmetic coding (CABAC). As will be described in detail below, entropy encoder 48 applies a selected mode to code the data of the block.
  • Entropy encoder 48 may also encode additional data associated with the block. For example, in addition to the residual data, entropy encoder 48 may encode one or more motion vectors of the block, an identifier indicating the coding mode of the block, one or more reference frame indices, quantization parameter (QP) information, slice information of the block and the like. Entropy encoder 48 may receive this additional block data from other modules within encoding module 30. For example, the motion vector information may be received from motion estimation module 36 while the block mode information may be received from control module 32.
  • entropy encoder 48 may code at least a portion of this additional information using a fixed length coding (FLC) technique or a universal variable length coding (VLC) technique, such as Exponential-Golomb coding ("Exp-Golomb").
  • entropy encoder 48 may encode a portion of the additional block data using the context-adaptive coding techniques described above, i.e., CABAC or CAVLC.
  • control module 32 estimates a coding cost for at least a portion of the possible modes.
  • control module 32 may estimate the cost of coding the block in each of the possible coding modes. The cost may be estimated, for example, in terms of the number of bits associated with coding the block in a given mode versus the amount of distortion produced in that mode.
  • control module 32 may estimate the coding cost for twenty-two different coding modes (the inter- and intra-coding modes) for a block selected for inter-coding and thirteen different coding modes for a block selected for intra-coding.
  • control module 32 may use another mode selection technique to initially reduce the set of possible modes, and then utilize the techniques of this disclosure to estimate the coding cost for the remaining modes of the set. In other words, in some aspects, control module 32 may narrow down the number of mode possibilities before applying the cost estimate technique.
  • encoding module 30 estimates the coding costs for the modes without actually coding the data of the blocks for the different modes, thereby reducing computational overhead associated with the coding decision. In fact, in the example illustrated in FIG. 2, encoding module 30 may estimate the coding cost without quantizing the data of the block for the different modes. In this manner, the coding cost estimation techniques of this disclosure reduce the amount of computationally intensive calculations needed to compute the coding cost. In particular, it is not necessary to encode the blocks using the various coding modes in order to select one of the modes.
  • control module 32 estimates the coding cost of each analyzed mode in accordance with the equation:
  • J = D + λ_mode · R,   (1)
  • where D is a distortion metric of the block, λ_mode is a Lagrange multiplier of the respective mode, and R is a rate metric of the block.
  • the distortion metric (D) may, for example, comprise a sum of absolute difference (SAD), sum of square difference (SSD), sum of absolute transform difference (SATD), sum of square transform difference (SSTD) or the like.
  • the rate metric (R) may, for example, be a number of bits associated with coding the data in a given block. As described above, different types of block data may be coded using different coding techniques. Equation (1) may thus be re-written in the form below:
  • J = D + λ_mode · (R_context + R_non-context),   (2)
  • where R_context represents a rate metric for block data coded using context-adaptive coding techniques and R_non-context represents a rate metric for block data coded using non context-adaptive coding techniques.
  • the residual data may be coded using context-adaptive coding, such as CAVLC or CABAC.
  • Other block data such as motion vectors, block modes, and the like may be coded using a FLC or a universal VLC technique, such as Exp-Golomb.
  • equation (2) may in turn be re-written in the form:
  • J = D + λ_mode · (R_residual + R_other),   (3)
  • where R_residual represents a rate metric for coding the residual data using context-adaptive coding techniques, e.g., the number of bits associated with coding the residual data, and R_other represents a rate metric for coding the other block data using a FLC or universal VLC technique, e.g., the number of bits associated with coding the other block data.
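  • A minimal sketch of the cost combination in equations (1)-(3), assuming the distortion metric and the two rate terms have already been obtained (the function name is hypothetical):

```python
def estimated_coding_cost(distortion, lambda_mode, residual_bits_estimate, other_bits):
    """Equation (3): J = D + lambda_mode * (R_residual + R_other)."""
    return distortion + lambda_mode * (residual_bits_estimate + other_bits)
```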
  • encoding module 30 may determine the number of bits associated with coding block data using FLC or universal VLC, i.e., R_other, relatively easily. Encoding module 30 may, for example, use a code table to identify the number of bits associated with coding the block data using FLC or universal VLC.
  • the code table may, for example, include a plurality of codewords and the number of bits associated with coding each codeword. Determining the number of bits associated with coding the residual data (R_residual), however, presents a much more difficult task due to the adaptive nature of context-adaptive coding as a function of the context of the data.
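  • The patent does not spell out a particular code table; as one hedged example, the bit count of a syntax element coded with unsigned or signed Exp-Golomb (universal VLC) can be computed directly from its code number, which is one way such a table could be realized:

```python
def exp_golomb_bits_unsigned(code_num):
    """Length in bits of the unsigned Exp-Golomb codeword for code_num:
    2 * floor(log2(code_num + 1)) + 1."""
    return 2 * (code_num + 1).bit_length() - 1

def exp_golomb_bits_signed(value):
    """Signed values map to code numbers as in H.264: v > 0 -> 2v - 1, v <= 0 -> -2v."""
    code_num = 2 * value - 1 if value > 0 else -2 * value
    return exp_golomb_bits_unsigned(code_num)
```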
  • bit estimate module 42 may estimate the number of bits associated with coding the residual data using the context-adaptive coding techniques without actually coding the residual data.
  • bit estimate module 42 estimates the number of bits associated with coding the residual data using transform coefficients for the residual data.
  • encoding module 30 only needs to compute the transform coefficients for the residual data to estimate the number of bits associated with coding the residual data. Encoding module 30 therefore reduces the amount of computing resources and time required to determine the number of bits associated with coding the residual data by not quantizing the transform coefficients or encoding the quantized transform coefficients for each of the modes.
  • Bit estimate module 42 analyzes the transform coefficients output by transform module 40 to identify one or more transform coefficients that will remain non-zero after quantization. In particular, bit estimate module 42 compares each of the transform coefficients to a corresponding threshold. In some aspects, the corresponding thresholds may be computed as a function of a QP of encoding module 30. Bit estimate module 42 identifies, as the transform coefficients that will remain non-zero after quantization, the transform coefficients that are greater than or equal to their corresponding thresholds.
  • Bit estimate module 42 estimates the number of bits associated with coding the residual data based on at least the transform coefficients identified to remain non-zero after quantization. In particular, bit estimate module 42 determines the number of non-zero transform coefficients that will survive quantization. Bit estimate module 42 also sums at least a portion of the absolute values of the transform coefficients identified to survive quantization. Bit estimate module 42 then estimates the rate metric for the residual data, i.e., the number of bits associated with coding the residual data, using the equation:
  • R_residual = a1 · SATD + a2 · NZ_est + a3,   (4)
  • where SATD is the sum of at least a portion of the absolute values of the transform coefficients predicted to remain non-zero after quantization, NZ_est is the estimated number of non-zero transform coefficients predicted to survive quantization, and a1, a2, and a3 are coefficients.
  • Coefficients a1, a2, and a3 may be computed, for example, using least squares estimation.
  • Although the sum of the transform coefficients is expressed as a sum of absolute transform differences (SATD) in the example of equation (4), other difference metrics, such as SSTD, may be used.
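  • A sketch of how the coefficients a1, a2, and a3 of equation (4) could be obtained by least squares from blocks whose actual bit counts are known, and then applied to new blocks (the training data and function names are assumptions, not part of the patent):

```python
import numpy as np

def fit_rate_model(satd_values, nz_values, measured_bits):
    """Fit a1, a2, a3 of equation (4), R_residual ~ a1*SATD + a2*NZ_est + a3,
    by ordinary least squares over previously coded blocks."""
    X = np.column_stack([satd_values, nz_values, np.ones(len(satd_values))])
    (a1, a2, a3), *_ = np.linalg.lstsq(X, np.asarray(measured_bits, dtype=float), rcond=None)
    return a1, a2, a3

def estimate_residual_bits(satd, nz_est, a1, a2, a3):
    """Apply equation (4) to a new block."""
    return a1 * satd + a2 * nz_est + a3
```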
  • Encoding module 30 computes a matrix of transform coefficients for the residual data. An exemplary matrix of transform coefficients is illustrated below.
  • the number of rows of the matrix of transform coefficients (A) is equal to the number of rows of pixels in the block and the number of columns of the matrix of transform coefficients is equal to the number of columns of pixels in the block.
  • the dimensions of the matrix of transform coefficients are 4x4 to correspond with the 4x4 block.
  • Each of the entries A (i,j) of the matrix of transform coefficients is the transform of the respective residual coefficients.
  • encoding module 30 compares the matrix of residual transform coefficients A to a matrix of threshold values to predict which of the transform coefficients of matrix A will remain non-zero after quantization.
  • An exemplary matrix of threshold values is illustrated below.
  • the matrix C may be computed as a function of a QP value.
  • the dimensions of matrix C are the same as the dimensions of matrix A.
  • the entries of matrix C may be computed as a function of QBITS(QP), a parameter that determines scaling as a function of QP; Level_Offset(i,j){QP}, a deadzone parameter for the entry at row i and column j of the matrix, which is also a function of QP; and Level_Scale(i,j){QP}, a multiplicative factor for the entry at row i and column j of the matrix, which is also a function of QP; where i corresponds to a row of the matrix, j corresponds to a column of the matrix, and QP corresponds to a quantization parameter of encoding module 30.
  • the variables may be defined in the H.264 coding standard as a function of the operating QP.
  • encoding module 30 may be configured to operate within a range of QP values. In this case, encoding module 30 may pre-compute a plurality of comparison matrices, one corresponding to each of the QP values in the range. Encoding module 30 then selects the comparison matrix that corresponds with its operating QP to compare with the transform coefficient matrix.
  • the result of the comparison between the matrix of transform coefficients A and the matrix of thresholds C is a matrix of ones and zeros.
  • a transform coefficient is identified as likely to remain non-zero when the absolute value of the transform coefficient of matrix A is greater than or equal to the corresponding threshold of matrix C.
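  • A sketch of the comparison step, assuming per-QP Level_Scale and Level_Offset arrays and a QBITS value are available; the closed form used for C is an assumed reconstruction consistent with H.264-style quantization, not a formula quoted from this text:

```python
import numpy as np

def threshold_matrix(level_scale, level_offset, qbits):
    """Assumed form of matrix C: a coefficient survives quantization of the form
    (abs(A) * scale + offset) >> qbits when abs(A) >= (2**qbits - offset) / scale."""
    return ((1 << qbits) - level_offset) / level_scale

def nonzero_prediction_mask(A, C):
    """Matrix M of ones and zeros: 1 where abs(A(i,j)) >= C(i,j), i.e. the transform
    coefficient is predicted to remain non-zero after quantization."""
    return (np.abs(A) >= C).astype(np.int32)
```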
  • bit estimate module 42 determines the number of transform coefficients that will survive quantization. In other words, bit estimate module 42 determines the number of transform coefficients identified as remaining non-zero after quantization. Bit estimate module 42 may determine the number of transform coefficients identified as remaining non-zero after quantization according to the equation:
  • NZ_est = Σ_i Σ_j M(i,j),
  • where NZ_est is the estimated number of non-zero transform coefficients and M(i,j) is the value of the matrix M at row i and column j.
  • In the example above, NZ_est is equal to 8.
  • Bit estimate module 42 also computes a sum of at least a portion of the absolute value of the transform coefficients estimated to survive quantization.
  • bit estimate module 42 may compute the sum of at least a portion of the absolute values of the transform coefficients according to the equation:
  • SATD = Σ_i Σ_j M(i,j) · abs(A(i,j)),
  • where SATD is the sum total of the transform coefficients identified as remaining non-zero after quantization, M(i,j) is the value of the matrix M at row i and column j, A(i,j) is the value of the matrix A at row i and column j, and abs(x) is an absolute value function that computes the absolute value of x.
  • In the example above, SATD is equal to 2361.
  • Other difference metrics may be used for the transform coefficients, such as SSTDs.
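  • The two sums just described follow directly from matrices A and M; a minimal sketch:

```python
import numpy as np

def nz_est_and_satd(A, M):
    """NZ_est: count of the ones in M, i.e. coefficients predicted to survive
    quantization.  SATD: sum of M(i,j) * abs(A(i,j)), the absolute sum of those
    predicted survivors."""
    nz_est = int(M.sum())
    satd = float(np.sum(M * np.abs(A)))
    return nz_est, satd
```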
  • bit estimate module 42 approximates the number of bits associated with coding the residual coefficients using equation (4) above.
  • Control module 32 may use the estimate of R_residual to compute an estimate of the total coding cost of the mode.
  • Encoding module 30 may estimate the total coding cost for one or more other possible modes in the same manner, and then select the mode with the smallest coding cost. Encoding module 30 then applies the selected coding mode to code the block or blocks of the frame.
  • the foregoing techniques may be implemented individually, or two or more of such techniques, or all of such techniques, may be implemented together in encoding device 12.
  • the components in encoding module 30 are exemplary of those applicable to implement the techniques described herein. Encoding module 30, however, may include many other components, if desired, as well as fewer components that combine the functionality of one or more of the modules described above.
  • the components in encoding module 30 may be implemented as one or more processors, digital signal processors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof. Depiction of different features as modules is intended to highlight different functional aspects of encoding module 30 and does not necessarily imply that such modules must be realized by separate hardware or software components. Rather, functionality associated with one or more modules may be integrated within common or separate hardware or software components.
  • FIG. 3 is a block diagram illustrating another exemplary encoding module 50.
  • Encoding module 50 of FIG. 3 conforms substantially to encoding module 30 of FIG. 2, except bit estimate module 52 of encoding module 50 estimates the number of bits associated with coding the residual data after quantization of the transform coefficients for the residual data.
  • bit estimate module 52 estimates the number of bits associated with coding the residual coefficients using the equation:
  • R_residual = a1 · SATQD + a2 · NZ_TQ + a3,   (8)
where SATQD is the sum of the absolute values of the non-zero quantized transform coefficients, NZ_TQ is the number of non-zero quantized transform coefficients, and a1, a2, and a3 are coefficients. Coefficients a1, a2, and a3 may be computed, for example, using least squares estimation. Although encoding module 50 quantizes the transform coefficients prior to estimating the number of bits associated with coding the residual data, encoding module 50 still estimates the coding costs for the modes without actually coding the data of the blocks. Thus, the amount of computationally intensive calculations is still reduced.
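  • A sketch of the quantized-domain estimate of equation (8), assuming the quantized levels are available as an array (the function and argument names are hypothetical):

```python
import numpy as np

def estimate_residual_bits_after_quant(quantized_levels, a1, a2, a3):
    """Equation (8): R_residual ~ a1*SATQD + a2*NZ_TQ + a3, computed from the
    quantized transform coefficients (levels) rather than predicted survivors."""
    q = np.asarray(quantized_levels)
    nz_tq = int(np.count_nonzero(q))
    satqd = float(np.abs(q).sum())
    return a1 * satqd + a2 * nz_tq + a3
```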
  • FIG. 4 is a flow diagram illustrating exemplary operation of an encoding module, such as encoding module 30 of FIG. 2 and/or encoding module 50 of FIG. 3, selecting an encoding mode based on at least the estimated coding costs. For exemplary purposes, however, FIG. 4 will be discussed in terms of encoding module 30.
  • Encoding module 30 selects a mode for which to estimate a coding cost (60).
  • Encoding module 30 generates a distortion metric for the current block (62).
  • Encoding module 30 may, for example, compute the distortion metric based on a comparison between the block and at least one reference block. In the case of a block selected to be intra-coded, the reference block may be an adjacent block within the same frame. For a block selected to be inter-coded, on the other hand, the reference block may be a block from an adjacent frame.
  • the distortion metric may be, for example, a SAD, SSD, SATD, SSTD, or other similar distortion metric.
  • encoding module 30 determines the number of bits associated with coding the portion of the data that is coded using non context-adaptive coding techniques (64). As described above, this data may include one or more motion vectors of the block, an identifier that indicates a coding mode of the block, one or more reference frame indices, QP information, slice information of the block and the like. Encoding module 30 may, for example, use a code table to identify the number of bits associated with coding the data using FLC, universal VLC or other non context-adaptive coding technique.
  • Encoding module 30 estimates and/or computes the number of bits associated with coding the portion of the data that is coded using context-adaptive coding techniques (66). In the context of the H.264 standard, for example, encoding module 30 may estimate the number of bits associated with coding the residual data using context-adaptive coding. Encoding module 30 may estimate the number of bits associated with coding the residual data without actually coding the residual data. In certain aspects, encoding module 30 may estimate the number of bits associated with coding the residual data without quantizing the residual data. For example, encoding module 30 may compute transform coefficients for the residual data and identify the transform coefficients likely to remain non-zero after quantization.
  • encoding module 30 uses these identified transform coefficients to estimate the number of bits associated with coding the residual data. In other aspects, encoding module 30 may quantize the transform coefficients and estimate the number of bits associated with coding the residual data based on at least the quantized transform coefficients. In either case, encoding module 30 saves time and processing resources by estimating the required number of bits. If there are sufficient computing resources, encoding module 30 may compute the actual number of bits required instead of estimating.
  • Encoding module 30 estimates and/or computes the total coding cost for coding the block in the selected mode (68). Encoding module 30 may estimate the total coding cost for coding the block based on the distortion metric, the bits associated with coding the portion of the data that is coded using non context-adaptive coding and the bits associated with coding the portion of the data that is coded using context-adaptive coding. For example, encoding module 30 may estimate the total coding cost for coding the block in the selected mode using equation (2) or (3) above.
  • Encoding module 30 determines whether there are any other coding modes for which to estimate the coding cost (70). As described above, encoding module 30 estimates the coding cost for at least a portion of the possible modes. In certain aspects, encoding module 30 may estimate the cost of coding the block in each of the possible coding modes. In the context of the H.264 standard, for example, encoding module 30 may estimate the coding cost for twenty-two different coding modes (the inter- and intra-coding modes) for a block selected for inter-coding and thirteen different coding modes for a block selected for intra-coding. In other aspects, encoding module 30 may use another mode selection technique to initially reduce the set of possible modes, and then utilize the techniques of this disclosure to estimate the coding cost for the reduced set of coding modes.
  • encoding module 30 selects the next coding mode and estimates the cost of coding the data in the selected coding mode. When there are no more coding modes for which to estimate the coding cost, encoding module 30 selects one of the modes to use for coding the block based on at least the estimated coding costs (72). In one example, coding module 30 may select the coding mode that has the smallest estimated coding cost. Upon selection of the mode, coding module 30 may apply the selected mode to code the particular block (74). The process may continue for additional blocks in a given frame. As an example, the process may continue until all the blocks in the frame have been coded using the coding mode selected in accordance with the techniques described herein. Moreover, the process may continue until blocks of a plurality of frames have been coded using a high efficiency mode.
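  • A sketch of the overall FIG. 4 loop, with the per-mode distortion, non context-adaptive bit count, and residual bit estimate supplied by hypothetical helper callables (these helpers are assumptions, not interfaces defined by the patent):

```python
def select_coding_mode(block, candidate_modes, lambda_mode,
                       distortion_fn, other_bits_fn, residual_bits_fn):
    """For each candidate mode, combine the distortion metric, the non
    context-adaptive bits and the estimated context-adaptive bits into
    J = D + lambda_mode * (R_residual + R_other), and keep the cheapest mode."""
    best_mode, best_cost = None, float("inf")
    for mode in candidate_modes:
        distortion = distortion_fn(block, mode)      # e.g. SAD or SATD vs. prediction
        r_other = other_bits_fn(block, mode)         # FLC / universal VLC bits
        r_residual = residual_bits_fn(block, mode)   # estimated, no full encode needed
        cost = distortion + lambda_mode * (r_residual + r_other)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```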
  • FIG. 5 is a flow diagram illustrating exemplary operation of an encoding module, such as encoding module 30 of FIG. 2, estimating the number of bits associated with coding the residual coefficients of a block.
  • After selecting one of the coding modes for which to estimate the coding cost, encoding module 30 generates the residual data of the block for the selected mode (80).
  • For a block selected to be intra-coded, for example, spatial prediction module 34 generates the residual data for the block based on a comparison of the block with a predicted version of the block.
  • For a block selected to be inter-coded, motion estimation module 36 and motion compensation module 38 compute the residual data for the block based on a comparison between the block and a corresponding block in a reference frame.
  • the residual data may have already been computed to generate the distortion metric of the block.
  • encoding module 30 may retrieve the residual data from a memory.
  • Transform module 40 transforms the residual coefficients of the block in accordance with a transform function to generate transform coefficients for the residual data (82).
  • Transform module 40 may, for example, apply a 4x4 or 8x8 integer transform or a DCT transform to the residual data to generate the transform coefficients for the residual data.
  • Bit estimate module 42 compares one of the transform coefficients to a corresponding threshold to determine whether the transform coefficient is greater than or equal to the threshold (84). The threshold corresponding with the transform coefficient may be computed as a function of the QP of encoding module 30. If the transform coefficient is greater than or equal to the corresponding threshold, bit estimate module 42 identifies the transform coefficient as a coefficient that will remain non-zero after quantization (86). If the transform coefficient is less than the corresponding threshold, bit estimate module 42 identifies the transform coefficient as a coefficient that will become zero after quantization (88).
  • Bit estimate module 42 determines whether there are any additional transform coefficients for the residual data of the block (90). If there are additional transform coefficients of the block, bit estimate module 42 selects another one of the coefficients and compares it to a corresponding threshold. If there are no additional transform coefficients to analyze, bit estimate module 42 determines the number of coefficients identified to remain non-zero after quantization (92). Bit estimate module 42 also sums at least a portion of the absolute values of the transform coefficients identified to remain non-zero after quantization (94). Bit estimate module 42 estimates the number of bits associated with coding the residual data using the determined number of non-zero coefficients and the sum of the portion of the non-zero coefficients (96).
  • Bit estimate module 42 may, for example, estimate the number of bits associated with coding the residual data using equation (4) above. In this manner, the encoding module 30 estimates the number of bits associated with coding the residual data of the block in the selected mode without quantizing or encoding the residual data.
  • FIG. 6 is a flow diagram illustrating exemplary operation of an encoding module, such as encoding module 50 of FIG. 3, estimating the number of bits associated with coding the residual coefficients of a block.
  • After selecting one of the coding modes for which to estimate the coding cost, encoding module 50 generates the residual coefficients of the block (100).
  • For a block selected to be intra-coded, spatial prediction module 34 computes the residual data for the block based on a comparison of the block with a predicted version of the block.
  • For a block selected to be inter-coded, motion estimation module 36 and motion compensation module 38 compute the residual data for the block based on a comparison between the block and a corresponding block in a reference frame.
  • the residual coefficients may have already been computed to generate the distortion metric of the block.
  • Transform module 40 transforms the residual coefficients of the block in accordance with a transform function to generate transform coefficients for the residual data (102).
  • Transform module 40 may, for example, apply a 4x4 or 8x8 integer transform or a DCT transform to the residual data to generate transformed residual coefficients.
  • Quantization module 46 quantizes the transform coefficients in accordance with a QP of encoding module 50 (104).
  • Bit estimate module 52 determines the number of quantized transform coefficients that are non-zero (106). Bit estimate module 52 also sums the absolute values of the non-zero levels, or quantized transform coefficients (108). Bit estimate module 52 estimates the number of bits associated with coding the residual data using the computed number of non-zero quantized transform coefficients and the sum of the non-zero quantized transform coefficients (110). Bit estimate module 52 may, for example, estimate the number of bits associated with coding the residual coefficients using equation (4) above. In this manner, the encoding module estimates the number of bits associated with coding the residual data of the block in the selected mode without encoding the residual data.
  • the instructions or code associated with a computer-readable medium of the computer program product may be executed by a computer, e.g., by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry.
  • such computer-readable media can comprise RAM, such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Algebra (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
EP07761930A 2007-05-04 2007-05-04 Video coding mode selection using estimated coding costs Withdrawn EP2156672A1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2007/068307 WO2008136828A1 (en) 2007-05-04 2007-05-04 Video coding mode selection using estimated coding costs

Publications (1)

Publication Number Publication Date
EP2156672A1 (de) 2010-02-24

Family

ID=39145223

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07761930A Withdrawn EP2156672A1 (de) 2007-05-04 2007-05-04 Videocodierungsmodusauswahl unter verwendung geschätzter codierungskosten

Country Status (5)

Country Link
EP (1) EP2156672A1 (de)
JP (1) JP2010526515A (de)
KR (2) KR101166732B1 (de)
CN (1) CN101663895B (de)
WO (1) WO2008136828A1 (de)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8891615B2 (en) 2008-01-08 2014-11-18 Qualcomm Incorporated Quantization based on rate-distortion modeling for CABAC coders
US9008171B2 (en) 2008-01-08 2015-04-14 Qualcomm Incorporated Two pass quantization for CABAC coders
US10499059B2 (en) * 2011-03-08 2019-12-03 Velos Media, Llc Coding of transform coefficients for video coding
AP2016009618A0 (en) * 2011-06-16 2016-12-31 Ge Video Compression Llc Entropy coding of motion vector differences
US9749661B2 (en) * 2012-01-18 2017-08-29 Qualcomm Incorporated Sub-streams for wavefront parallel processing in video coding
KR102126855B1 (ko) * 2013-02-15 2020-06-26 Electronics and Telecommunications Research Institute Method and apparatus for determining an encoding mode
KR102229386B1 (ko) * 2014-12-26 2021-03-22 Electronics and Telecommunications Research Institute Video encoding apparatus and method
WO2020153506A1 (ko) * 2019-01-21 2020-07-30 LG Electronics Inc. Method and device for processing a video signal
WO2023067822A1 (ja) * 2021-10-22 2023-04-27 NEC Corporation Video encoding device, video decoding device, video encoding method, video decoding method, and video system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0646268A (ja) * 1992-07-24 1994-02-18 Chinon Ind Inc Code amount control device
FR2753330B1 (fr) * 1996-09-06 1998-11-27 Thomson Multimedia Sa Quantization method for video coding
NO318318B1 (no) * 2003-06-27 2005-02-28 Tandberg Telecom As Method for improved coding of video
JP2006140758A (ja) * 2004-11-12 2006-06-01 Toshiba Corp Moving picture encoding method, moving picture encoding apparatus, and moving picture encoding program
JP4146444B2 (ja) * 2005-03-16 2008-09-10 Toshiba Corp Method and apparatus for moving picture encoding
CN100348051C (zh) * 2005-03-31 2007-11-07 Huazhong University of Science and Technology Enhanced intra prediction mode coding method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2008136828A1 *

Also Published As

Publication number Publication date
CN101663895B (zh) 2013-05-01
KR20100005240A (ko) 2010-01-14
JP2010526515A (ja) 2010-07-29
KR20120031529A (ko) 2012-04-03
KR101166732B1 (ko) 2012-07-19
CN101663895A (zh) 2010-03-03
WO2008136828A1 (en) 2008-11-13

Similar Documents

Publication Publication Date Title
US8150172B2 (en) Video coding mode selection using estimated coding costs
JP5925416B2 (ja) ビデオブロックヘッダ情報の適応可能なコーディング
US8311120B2 (en) Coding mode selection using information of other coding modes
US8483285B2 (en) Video coding using transforms bigger than 4×4 and 8×8
EP2704442B1 (de) Vorlagenabgleich für Videokodierung
KR101166732B1 (ko) 추정된 코딩 비용을 이용하는 비디오 코딩 모드 선택
KR101247923B1 (ko) 4×4 및 8×8 보다 큰 변환을 이용한 비디오 코딩
KR101377883B1 (ko) 비디오 인코딩에서 넌-제로 라운딩 및 예측 모드 선택 기법들
WO2011103482A1 (en) Block type signalling in video coding
CN101946515A (zh) Cabac译码器的二回合量化
JP5684342B2 (ja) デジタル映像データを処理するための方法および装置
KR101136771B1 (ko) 다른 코딩 모드의 정보를 이용한 코딩 모드 선택

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20091204

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

RIN1 Information on inventor provided before grant (corrected)

Inventor name: KAUSHIK, VINOD

Inventor name: SWAZEY, SCOTT, T.

Inventor name: OGUZ, SEYFULLAH, HALIT

Inventor name: CHEN, PEISONG

Inventor name: SHI, FANG

Inventor name: SUBRAMANIA, SITARAMAN, GANAPATHY

17Q First examination report despatched

Effective date: 20100422

DAX Request for extension of the european patent (deleted)
GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20111025