WO2009158113A2 - Adaptive quantization for enhancement layer video coding - Google Patents

Adaptive quantization for enhancement layer video coding

Info

Publication number
WO2009158113A2
WO2009158113A2 (PCT/US2009/045659)
Authority
WO
WIPO (PCT)
Prior art keywords
picture
quantization
quantization parameter
macroblock
enhancement layer
Prior art date
Application number
PCT/US2009/045659
Other languages
French (fr)
Other versions
WO2009158113A3 (en)
Inventor
Shankar Regunathan
Shijun Sun
Chengjie Tu
Chih-Lung Lin
Original Assignee
Microsoft Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to EP18187252.4A priority Critical patent/EP3416382A1/en
Priority to JP2011512545A priority patent/JP5706318B2/en
Priority to MX2014002291A priority patent/MX343458B/en
Priority to CN2009801213483A priority patent/CN102057677B/en
Priority to MX2016014505A priority patent/MX356897B/en
Priority to KR1020107027143A priority patent/KR101780505B1/en
Application filed by Microsoft Corporation filed Critical Microsoft Corporation
Priority to EP09770648.5A priority patent/EP2283655B1/en
Priority to KR1020167007437A priority patent/KR101745845B1/en
Priority to MX2010012818A priority patent/MX2010012818A/en
Publication of WO2009158113A2 publication Critical patent/WO2009158113A2/en
Publication of WO2009158113A3 publication Critical patent/WO2009158113A3/en
Priority to HK11109267.4A priority patent/HK1155303A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • H04N19/126Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/34Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/187Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N19/197Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters including determination of the initial value of an encoding parameter
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • H04N19/463Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/184Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream

Definitions

  • Engineers use compression (also called coding or encoding) to reduce the bit rate of digital video. Compression decreases the cost of storing and transmitting video by converting the video into a lower bit rate form. Decompression (also called decoding) reconstructs a version of the original video from the compressed form.
  • a "codec" is an encoder/decoder system.
  • quantization is a term used for an approximating non-reversible mapping function commonly used for lossy compression, in which there is a specified set of possible output values, and each member of the set of possible output values has an associated set of input values that result in the selection of that particular output value.
  • quantization techniques have been developed, including scalar or vector, uniform or non-uniform, with or without dead zone, and adaptive or non-adaptive quantization.
  • an encoder performs quantization essentially as a biased division of an original data value by a quantization factor.
  • One or more quantization parameters (QPs) indicate the quantization factor for purposes of inverse quantization of the data value.
  • an encoder or decoder reconstructs a version of the data value using the quantization factor indicated by the QP(s).
  • Quantization typically introduces loss in fidelity to the original data value, which can show up as compression errors or artifacts in the results of decoding.
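  • As an illustration only (not the patent's own rules), the following Python sketch shows scalar quantization as a biased division of an original value by a step size indicated by a QP, and the corresponding inverse quantization used for reconstruction; the rounding bias and the step-size value are assumptions.
      # Hypothetical scalar quantizer; the bias and step size are illustrative.
      def quantize(value, step, bias=0.5):
          # Biased division of the original data value by the quantization step.
          sign = 1 if value >= 0 else -1
          return sign * int((abs(value) + bias * step) // step)

      def inverse_quantize(level, step):
          # Reconstruct an approximation of the original data value.
          return level * step

      step = 12                                      # step size indicated by a QP
      level = quantize(103, step)                    # -> 9
      reconstructed = inverse_quantize(level, step)  # -> 108 (loss in fidelity)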
  • Most scalable video codecs split video into a base layer and an enhancement layer.
  • the base layer alone provides a reconstruction of the video at a lower quality level and/or a lower resolution, and the enhancement layer can be added to provide extra information that will increase the video quality.
  • Many single-layer digital video coding standards today allow for QPs to vary spatially in the base layer. This feature allows encoding to adapt to the macroblock characteristics and thus achieve better perceptual quality for a given rate.
  • the detailed description presents techniques and tools for scalable encoding and decoding of enhancement layer video using a spatially variable quantization.
  • the quantization may be variable for an entire picture of the enhancement layer video or separately variable for each color channel in the enhancement layer video for the picture.
  • the techniques and tools improve the performance of a general-purpose video encoder when it encodes an enhancement layer of video pictures.
  • a tool such as an encoder encodes enhancement layer video for a picture organized in multiple color channels (e.g., a luma ("Y") channel and two chroma ("U" and "V") channels).
  • the tool selectively varies quantization spatially over the frame, and in some cases the tool selectively varies quantization spatially and also varies quantization between the multiple color channels of the enhancement layer video for the picture.
  • the tool outputs encoded enhancement layer video for the picture in a bitstream, signaling QP information.
  • the QP information indicates QPs that at least in part parameterize the varied quantization of the enhancement layer video for the picture.
  • a tool such as a decoder decodes enhancement layer video for a picture organized in multiple color channels.
  • the tool receives encoded enhancement layer video for the picture in a bitstream, receiving QP information indicating QPs that at least in part parameterize varied quantization of the enhancement layer video for the picture.
  • the tool accounts for quantization that varies spatially over the frame and between the multiple color channels of the enhancement layer video for the picture.
  • a tool such as a video decoder receives encoded information for video for a picture from a bitstream.
  • the encoded information includes QP selection information for a current unit of the video for the picture.
  • the tool predicts a QP for the current unit using one or more QPs for spatially neighboring units of the video for the picture.
  • the tool selects between the predicted QP and another QP using the QP selection information, and uses the selected QP in reconstruction of the current unit.
  • the tool decodes different information for predicted QPs for each color channel.
  • a tool such as an encoder signals encoded information for video for a picture from a bitstream.
  • the encoded information includes QP selection information for a current unit of the video for the picture.
  • the tool encodes the current unit and, after determining a QP for the current unit, the tool encodes the QP selection information.
  • the tool predicts a QP for the current unit using one or more QPs for spatially neighboring units of the video for the picture. If the predicted QP is the actual QP for the current unit, the QP selection information so indicates. Otherwise, the QP selection information indicates another QP for the current unit.
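  • The following sketch (a hypothetical illustration, not the bitstream syntax of any combined implementation) shows this encoder-side logic: predict a unit's QP from a spatially neighboring unit, then signal only whether the prediction holds and, if not, the other QP. The left-neighbor prediction rule and the returned fields are assumptions.
      # Hypothetical QP prediction rule: use the left neighbor's QP when one
      # exists, otherwise fall back to the frame QP.
      def predict_qp(unit_index, unit_qps, frame_qp):
          return unit_qps[unit_index - 1] if unit_index > 0 else frame_qp

      def encode_qp_selection(unit_index, actual_qp, unit_qps, frame_qp):
          predicted = predict_qp(unit_index, unit_qps, frame_qp)
          if actual_qp == predicted:
              return {"use_predicted": True}      # a single flag suffices
          return {"use_predicted": False, "qp": actual_qp}

      # encode_qp_selection(2, 28, [26, 28], 26) -> {"use_predicted": True}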
  • Figure 1 is a block diagram of a suitable computing environment in which several described embodiments may be implemented.
  • Figure 2 is a block diagram of an exemplary encoding system for encoding a picture of enhancement layer video quantized with one or more QPs that vary spatially and/or across color channels of the picture.
  • Figure 3 is a block diagram of an exemplary decoding system for decoding a picture of enhancement layer video quantized with one or more QPs that vary spatially and/or across color channels of the picture.
  • Figure 4 is a flow chart of a generalized technique for encoding enhancement layer video quantized with one or more QPs that vary spatially and/or across color channels of a picture.
  • Figure 5 is a flow chart showing an exemplary technique of determining and signaling the QPs used to encode enhancement layer video quantized with one or more QPs that vary spatially and/or across color channels of a picture.
  • Figures 6A and 6B are flow charts showing exemplary techniques of signaling QPs for macroblocks of enhancement layer video for a picture, where the QPs vary spatially and/or across color channels.
  • Figure 7 is a flow chart of a generalized technique for decoding enhancement layer video quantized with one or more QPs that vary spatially and/or across color channels of a picture.
  • Figure 8 is a flow chart of a generalized technique for using spatial prediction to encode and signal QPs for units of video.
  • Figure 9 is a flow chart of an exemplary technique of using spatial prediction to encode and signal a QP for a macroblock.
  • Figure 10 is a flow chart of a generalized technique for using spatial prediction to decode QPs for units of video.
  • Figure 11 is a pseudocode listing illustrating bitstream syntax for signaling/receiving information that indicates frame QP and channel QPs in first and second example combined implementations.
  • Figure 12 is a pseudocode listing illustrating an example QP prediction rule in the first and second example combined implementations.
  • Figure 13 is a pseudocode listing illustrating bitstream syntax for signaling/receiving information that indicates number of bits used for macroblock-level differential QP information in the first example combined implementation.
  • Figure 14 is a pseudocode listing illustrating bitstream syntax for signaling/receiving QP selection information in the first example combined implementation.
  • Figure 15 is a pseudocode listing illustrating bitstream syntax for signaling/receiving information that indicates how to populate a table of QP values in the second example combined implementation.
  • Figure 16 is a pseudocode listing illustrating bitstream syntax for signaling/receiving QP selection information in the second example combined implementation.
  • Figures 17A-F are VLC tables used for QP selection information in the second example combined implementation.
  • Spatially adapting quantization of enhancement layer video can have other advantages.
  • certain areas of enhancement layer video are predicted from base layer video, while other areas of the enhancement layer video are predicted from previously reconstructed enhancement layer video, for example, using motion compensation.
  • Using different levels of quantization in the different areas of the enhancement layer video can improve performance by allowing the encoder to adapt to the characteristics of the different areas.
  • Adapting quantization between color channels of enhancement layer video can also improve performance.
  • Different video formats can use samples in different color spaces such as RGB, YUV and YCbCr.
  • Y represents the brightness (luma) channel of video
  • U and V, or Cb and Cr represent the color (chroma) channels of the video.
  • the human eye is, in general, more sensitive to variations in brightness than color, so encoders have been developed to take advantage of this fact by reducing the resolution of the chroma channels relative to the luma channel.
  • One chroma sampling rate is 4:4:4, which indicates that for every luma sample, a corresponding U sample and a V sample are present.
  • Another chroma sampling rate is 4:2:2, which indicates that a single U sample and a single V sample correspond to two horizontal luma samples.
  • Chroma sampling rates at lower resolution, such as 4:2:2 or 4:2:0, result in fewer chroma samples and typically require fewer bits to encode than higher resolution chroma sampling rates such as 4:4:4, as the simple arithmetic below illustrates.
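  • For instance (simple arithmetic, not taken from the patent), the chroma sample counts per frame can be compared as follows for a picture with 1920x1080 luma resolution.
      width, height = 1920, 1080
      luma = width * height           # 2,073,600 luma samples
      chroma_444 = 2 * luma           # U and V at full resolution: 4,147,200
      chroma_422 = 2 * (luma // 2)    # half horizontal resolution: 2,073,600
      chroma_420 = 2 * (luma // 4)    # half in both dimensions:    1,036,800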
  • each color channel in the video may be quantized to a different level of fidelity in the base layer video.
  • Some scalable video encoders encode base-layer video at a low chroma sampling rate (e.g., 4:2:0) and/or fidelity, and encode enhancement-layer video at a higher chroma sampling rate (e.g., 4:2:2 or 4:4:4).
  • the chroma channels of the enhancement layer video may thus have different signal energies than the luma channel.
  • Using different levels of quantization in the different channels of the enhancement layer video can improve performance by allowing the encoder to adapt to the characteristics of the channels.
  • part or all of enhancement layer video can be remapped to a lower chroma resolution for encoding/decoding with a base layer video encoder/decoder.
  • Adapting quantization between channels can help in this situation too. For example, if the base layer video is a tone-mapped version of the enhancement layer video, using different QPs for the luma channel, as compared to the chroma channels, can improve performance.
  • one method of encoding and signaling QP values for enhancement layer video includes using QP prediction to exploit inter-unit, spatial redundancy in QP values. In many scenarios, this helps reduce the cost of signaling QPs for units of a picture or a color channel of the picture, where a unit is a block, macroblock, segment, or some other type of unit. Spatial QP prediction can be used in conjunction with a simple mechanism to signal whether or not quantization varies spatially over the picture, or across the color channels in the picture.
  • Some of the techniques and tools described herein address one or more of the problems noted in the background. Typically, a given technique/tool does not solve all such problems. Rather, in view of constraints and tradeoffs in encoding time, encoding resources, decoding time, decoding resources and/or quality, the given technique/tool improves encoding and/or decoding performance for a particular implementation or scenario.
  • Figure 1 illustrates a generalized example of a suitable computing environment (100) in which several of the described embodiments may be implemented.
  • the computing environment (100) is not intended to suggest any limitation as to scope of use or functionality, as the techniques and tools may be implemented in diverse general-purpose or special-purpose computing environments.
  • the computing environment (100) includes at least one processing unit (110) and memory (120).
  • the processing unit (110) executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power.
  • the memory (120) may be volatile memory (e.g., registers, cache, RAM), nonvolatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two.
  • the memory (120) stores software (180) implementing an encoder with one or more of the described techniques and tools for enhancement layer video coding and/or decoding using QPs that vary spatially and/or across the color channels of a picture.
  • a computing environment may have additional features.
  • the computing environment (100) includes storage (140), one or more input devices (150), one or more output devices (160), and one or more communication connections (170).
  • An interconnection mechanism such as a bus, controller, or network interconnects the components of the computing environment (100).
  • operating system software provides an operating environment for other software executing in the computing environment (100), and coordinates activities of the components of the computing environment (100).
  • the storage (140) may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment (100).
  • the storage (140) stores instructions for the software (180) implementing the video encoder and/or decoder.
  • the input device(s) (150) may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment (100).
  • the input device(s) (150) may be a sound card, video card, TV tuner card, or similar device that accepts audio or video input in analog or digital form, or a CD-ROM or CD-RW that reads audio or video samples into the computing environment (100).
  • the output device(s) (160) may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment (100).
  • the communication connection(s) (170) enable communication over a communication medium to another computing entity.
  • the communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal.
  • a modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
  • Computer-readable media are any available media that can be accessed within a computing environment.
  • Computer-readable media include memory (120), storage (140), communication media, and combinations of any of the above.
  • program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the functionality of the program modules may be combined or split between program modules as desired in various embodiments.
  • Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
  • Figure 2 is a block diagram of an encoding tool (200) for encoding input video as a base layer and an enhancement layer in conjunction with which some described embodiments may be implemented.
  • the format of the base layer bit stream (248) can be a Windows Media Video or VC-1 format, MPEG-x format (e.g., MPEG-1, MPEG-2, or MPEG-4), H.26x format (e.g., H.261, H.262, H.263, or H.264), or other format.
  • the tool (200) processes video pictures.
  • picture generally refers to source, coded, or reconstructed image data.
  • a picture is a progressive video frame.
  • a picture may refer to an interlaced video frame, the top field of the frame, or the bottom field of the frame, depending on context.
  • the generic term “picture” will be used to represent these various options.
  • the encoding tool includes a first scaler (204) which accepts input video pictures (202) and outputs base layer video to a base layer encoder (220).
  • the first scaler (204) may downsample or otherwise scale the input video pictures (202), for example, to reduce sample depth, spatial resolution or chroma sampling resolution. Or, in some instances, the first scaler upsamples the input video pictures (202) or does not alter the input video pictures (202) at all.
  • the base layer encoder (220) encodes the base layer video and outputs a base layer bit stream (248), and additionally makes available reconstructed base layer video which is input to an inverse scaler (252). If the reconstructed base layer video has a different bit depth, spatial resolution, chroma sampling rate, etc. than the input video pictures (202) due to scaling, then the inverse scaler (252) may upsample (or otherwise inverse scale) the reconstructed base layer video so that it has the same resolution as the input video pictures (202). The input video pictures (202) are compared against the reconstructed base layer video to produce enhancement layer video that is input to a second scaler (254), as sketched below. The second scaler (254) may or may not be the same physical component or software program as the first scaler (204). The second scaler (254) outputs the enhancement layer video (256) to an enhancement layer encoder (240).
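  • A minimal sketch of this layering arithmetic, assuming per-sample differences on numpy arrays; the actual scalers and the comparison are implementation-specific.
      import numpy as np

      def form_enhancement_layer(input_picture, reconstructed_base, inverse_scale):
          # Inverse scale the reconstructed base layer back to the resolution of
          # the input, then take the per-sample difference as the enhancement
          # layer "picture" to be encoded.
          base = inverse_scale(reconstructed_base)
          return input_picture.astype(np.int32) - base.astype(np.int32)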
  • the enhancement layer encoder (240) compresses inter-coded, predicted "pictures” (256) of the enhancement layer video and intra-coded "pictures” (256) of the enhancement layer video.
  • the "picture" at a given time in the enhancement layer video represents differences between an input video picture and a reconstructed base layer video picture, but is still encoded as a picture by the example encoder (240).
  • Figure 2 shows a path for intra-coded content through the enhancement layer encoder (240) and a path for inter-coded, predicted content.
  • Many of the components of the enhancement layer encoder (240) are used for compressing both intra-coded content and inter-coded, predicted content. The exact operations performed by those components can vary depending on the type of information being compressed.
  • Although Figure 2 shows a single enhancement layer encoder (240), the enhancement layer video (256) can itself be separated into multiple layers of residual video for encoding with separate residual encoders.
  • the enhancement layer video (256) that is encoded represents differences (but not necessarily all differences) between the reconstructed base layer video and the input video.
  • inter-coded, predicted content (as a picture) is represented in terms of prediction from previously reconstructed content (as one or more other pictures, which are typically referred to as reference pictures or anchors).
  • content at a given time is encoded as a progressive P-frame or B-frame, interlaced P-field or B-field, or interlaced P-frame or B-frame.
  • a prediction residual is the difference between predicted information and corresponding original enhancement layer video.
  • a motion estimator (258) estimates motion of macroblocks or other sets of samples of the enhancement layer video picture with respect to one or more reference pictures, which represent previously reconstructed enhancement layer video content.
  • the picture store (264) buffers reconstructed enhancement layer video (266) as a reference picture. When multiple reference pictures are used, the multiple reference pictures can be from different temporal directions or the same temporal direction.
  • the motion estimator (258) outputs motion information (260) such as motion vector information.
  • the motion compensator (262) applies motion vectors to the reconstructed enhancement layer video content (266) (stored as reference picture(s)) when forming a motion-compensated current picture (268).
  • the difference (if any) between a block of the motion-compensated enhancement layer video (268) and corresponding block of the original enhancement layer video (256) is the prediction residual (270) for the block.
  • reconstructed prediction residuals are added to the motion compensated enhancement layer video (268) to obtain reconstructed content closer to the original enhancement layer video (256). In lossy compression, however, some information is still lost from the original enhancement layer video (256).
  • a motion estimator and motion compensator apply another type of motion estimation/compensation.
  • a frequency transformer (280) converts spatial domain video information into frequency domain (i.e., spectral, transform) data.
  • the frequency transformer (280) applies a DCT, variant of DCT, or other forward block transform to blocks of the samples or prediction residual data, producing blocks of frequency transform coefficients.
  • the frequency transformer (280) applies another conventional frequency transform such as a Fourier transform or uses wavelet or sub-band analysis.
  • the frequency transformer (280) may apply an 8x8, 8x4, 4x8, 4x4 or other size frequency transform.
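  • As a sketch of such a forward block transform (a textbook orthonormal DCT-II built with numpy is assumed here; the encoder may instead use a DCT variant or another transform):
      import numpy as np

      def dct_matrix(n=8):
          # Orthonormal DCT-II basis matrix of size n x n.
          k = np.arange(n).reshape(-1, 1)
          m = np.arange(n).reshape(1, -1)
          alpha = np.where(k == 0, np.sqrt(1.0 / n), np.sqrt(2.0 / n))
          return alpha * np.cos(np.pi * (2 * m + 1) * k / (2 * n))

      def forward_transform(block):
          # Separable 2D transform, C @ X @ C.T, of an NxN block of samples
          # or prediction residual data.
          c = dct_matrix(block.shape[0])
          return c @ block @ c.T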
  • a quantizer (282) then quantizes the blocks of transform coefficients.
  • the quantizer (282) applies non-uniform, scalar quantization to the spectral data with a step size that varies spatially on a picture-by-picture basis, macroblock-by-macroblock basis or other basis. Additionally, in some cases the quantizer varies quantization across color channels of the enhancement layer video picture.
  • the quantizer (282) can also apply another type of quantization, for example, a uniform or adaptive quantization for at least some spectral data coefficients, or directly quantizes spatial domain data in an encoder system that does not use frequency transformations.
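  • The sketch below shows how quantization of transform-coefficient blocks might vary by macroblock and by color channel; the qp_map lookup and the QP-to-step-size function are hypothetical, and unbiased rounding is used for simplicity.
      import numpy as np

      def quantize_block(coeffs, step):
          # Uniform scalar quantization of one block of transform coefficients.
          return np.round(coeffs / step).astype(np.int32)

      def quantize_macroblock(mb_blocks, mb_index, channel, qp_map, qp_to_step):
          # qp_map[channel][mb_index] holds the QP chosen for this macroblock in
          # this color channel, so quantization varies spatially and across the
          # Y, U and V channels.
          step = qp_to_step(qp_map[channel][mb_index])
          return [quantize_block(b, step) for b in mb_blocks]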
  • an inverse quantizer (290) performs inverse quantization on the quantized spectral data coefficients.
  • An inverse frequency transformer (292) performs an inverse frequency transform, producing blocks of reconstructed prediction residuals (for predicted enhancement layer video content) or samples (for intra-coded residual video content). If the enhancement layer video (256) was motion-compensation predicted, the reconstructed prediction residuals are added to the motion-compensated predictors (268) to form the reconstructed enhancement layer video.
  • the picture store (264) buffers the reconstructed enhancement layer video for use in subsequent motion- compensated prediction.
  • the entropy coder (284) compresses the output of the quantizer (282) as well as certain side information (e.g., quantization parameter values).
  • Typical entropy coding techniques include arithmetic coding, differential coding, Huffman coding, run length coding, LZ coding, dictionary coding, and combinations of the above.
  • the entropy coder (284) typically uses different coding techniques for different kinds of information, and can choose from among multiple code tables within a particular coding technique.
  • a controller receives inputs from various modules such as the motion estimator (258), frequency transformer (280), quantizer (282), inverse quantizer (290), and entropy coder (284).
  • the controller evaluates intermediate results during encoding, for example, setting quantization step sizes and performing rate-distortion analysis.
  • the controller works with modules such as the motion estimator (258), frequency transformer (280), quantizer (282), and entropy coder (284) to set and change coding parameters during encoding.
  • the encoder may iteratively perform certain stages (e.g. , quantization and inverse quantization) to evaluate different parameter settings.
  • the encoder may set parameters at one stage before proceeding to the next stage.
  • the encoder may jointly evaluate different coding parameters.
  • the controller also receives input from an encoding session wizard interface, from another encoder application interface, or from another source to designate video as having specific content to be encoded using specific rules.
  • the encoder (240) additionally performs intra-compression of the enhancement layer video.
  • the scaler (254) provides enhancement layer video (256) to the encoder (240) and the encoder intra-compresses it as an intra-coded picture, without motion compensation.
  • the enhancement layer video (256) is provided directly to the frequency transformer (280), quantizer (282), and entropy coder (284) and output as encoded video.
  • a reconstructed version of the intra-coded enhancement layer video can be buffered for use in subsequent motion compensation of other enhancement layer video.
  • modules of the encoder can be added, omitted, split into multiple modules, combined with other modules, and/or replaced with like modules.
  • the controller can be split into multiple controller modules associated with different modules of the encoder.
  • encoders with different modules and/or other configurations of modules perform one or more of the described techniques.
  • FIG. 3 is a block diagram of a decoding system (300), including an exemplary enhancement layer decoder (340), in conjunction with which some described embodiments may be implemented.
  • the system (300) includes a base layer decoder (320) which receives a base layer bit stream (302) and outputs reconstructed base layer video to a first inverse scaler (352).
  • the base layer bit stream (302) can be a bit stream in a Windows Media Video or VC-1 format, MPEG-x format (e.g., MPEG-1, MPEG-2, or MPEG-4), H.26x format (e.g., H.261, H.262, H.263, or H.264), or other format.
  • the base layer bit stream (302) is encoded using motion compensation, and thus the base layer decoder (320) includes a motion compensation loop.
  • the first inverse scaler (352) is operable to upsample or otherwise inverse scale the reconstructed base layer video to the desired bit depth, spatial resolution, chroma sampling rate and/or other resolution of the output reconstructed video pictures (398).
  • the system further includes an enhancement layer decoder (340) operable to receive an enhancement layer bit stream (304).
  • the enhancement layer bit stream (304) can be the same format as the base layer bit stream (302), or it may be a different format.
  • the entropy decoder (384) is operable to decode elements of the bit stream that were encoded by entropy encoding methods including arithmetic coding, differential coding, Huffman coding, run length coding, LZ coding, dictionary coding, and combinations of the above.
  • the entropy decoder (384) typically uses different decoding techniques for different kinds of information, and can choose from among multiple code tables within a particular decoding technique.
  • the entropy decoder (384) outputs side information such as motion vector information (360) to a motion compensator (362).
  • An inverse quantizer (390) applies inverse quantization to some of the output of the entropy decoder (384).
  • the inverse quantizer (390) is operable to reverse non-uniform scalar quantization with a step size that varies on a picture-by-picture basis, macroblock-by-macroblock basis, color channel-by-color channel basis, or some other basis. More generally, the inverse quantizer (390) is operable to reverse quantization applied during encoding.
  • An inverse frequency transformer (392) accepts the output of the inverse quantizer (390).
  • the inverse frequency transformer (392) is operable to produce blocks of spatial domain values by applying an inverse DCT, variant of inverse DCT, or other reverse block transform to the output of the inverse quantizer (390).
  • the inverse frequency transformer (392) may be operable to reverse an 8x8, 8x4, 4x8, 4x4 or some other size frequency transform.
  • the inverse frequency transformer (392) outputs reconstructed values (370) for a prediction residual (in the case of inter-coded enhancement layer video content) or samples (in the case of intra-coded enhancement layer video content).
  • the motion vector information (360) output from the entropy decoder (384) is input to a motion compensator (362).
  • the motion compensator (362) applies the motion vector information to previously reconstructed enhancement layer video buffered in a picture store (364) and outputs motion-compensation-predicted enhancement layer video (368).
  • the motion-compensation- predicted enhancement layer video (368) is combined with the prediction residuals (370) to form reconstructed enhancement layer video (366).
  • the reconstructed enhancement layer video (366) is buffered by the picture store (364) (for use in subsequent motion compensation) and output from the enhancement layer decoder (340) to a second inverse scaler (354).
  • the enhancement layer decoder (340) may be operable to decode 8-bit video, 10-bit video, or video with some other bit depth. If the enhancement layer decoder (340) decodes 8-bit video and output video with a higher bit depth (e.g., 10-bit) is to be reconstructed, then the second inverse scaler (354) upsamples the reconstructed enhancement layer video (366) to the higher bit depth. Or, if the enhancement layer decoder (340) decodes 16-bit video and output video with a lower bit depth (e.g., 8-bit) is to be reconstructed, then the second inverse scaler (354) downsamples the reconstructed enhancement layer video (366) to the lower bit depth, as the sketch below illustrates.
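  • A minimal sketch of such bit-depth inverse scaling, assuming a plain shift; an actual inverse scaler may instead use rounding offsets or another mapping.
      import numpy as np

      def inverse_scale_bit_depth(samples, decoded_depth, output_depth):
          # Shift sample values up or down to the output bit depth.
          shift = output_depth - decoded_depth
          if shift >= 0:
              return samples.astype(np.int32) << shift   # e.g. 8-bit -> 10-bit
          return samples.astype(np.int32) >> -shift      # e.g. 16-bit -> 8-bit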
  • the decoding tool combines the inverse scaled, reconstructed enhancement layer video output from the second inverse scaler (354) with the inverse scaled, reconstructed base layer video output by the first inverse scaler (352), to produce reconstructed video pictures (398) for the output video.
  • the entropy decoder (384), inverse quantizer (390), and inverse frequency transformer (392) act as previously mentioned to produce samples of the enhancement layer video, bypassing motion compensation.
  • the reconstructed enhancement layer video (366) is buffered in a picture store (364) for use in future motion compensation.
  • Particular embodiments of video decoders typically use a variation or supplemented version of the generalized decoder (340).
  • modules of the decoder can be added, omitted, split into multiple modules, combined with other modules, and/or replaced with like modules.
  • decoders with different modules and/or other configurations of modules perform one or more of the described techniques.
  • Although Figure 3 shows a single enhancement layer decoder (340), the enhancement layer video can itself be separated into multiple layers of residual video for encoding with separate residual encoders and signaling as multiple enhancement layer bit streams.
  • a given decoding system includes one or more separate residual decoders for decoding one or more of the multiple enhancement layer bit streams.
  • the enhancement layer video that is decoded represents differences (but not necessarily all differences) between the reconstructed base layer video and the original input video.
  • an encoder varies quantization of enhancement layer video spatially and/or across color channels of a picture. For example, the encoder varies quantization from unit-to-unit for multiple units (such as macroblocks) of enhancement layer video, potentially using different quantization in different color channels for the units.
  • the encoder signals quantization parameters that parameterize the variable quantization.
  • a corresponding decoder varies inverse quantization of the enhancement layer video spatially and/or across color channels of a picture.
  • Figure 4 shows a flow chart for a generalized technique (400) for encoding an enhancement layer video with quantization that varies spatially across a picture and/or across color channels of the picture.
  • An encoding tool (200) such as that described with reference to Figure 2 performs the technique (400), or some other tool may be used.
  • the encoding tool determines (405) whether to vary quantization spatially for a picture of enhancement layer video. This may be indicated by user input or through analysis of the picture or portions of the picture. For example, if a user desires a high degree of rate-distortion efficiency in compression, the user may direct the tool to use spatially varying QPs. Alternatively, if the picture being encoded has a degree of complexity or spatial variance above a pre-set threshold, the software directs the tool to use spatially varying QPs when encoding the picture.
  • the tool also determines (410) whether to vary quantization between the plural color channels of the picture of enhancement layer video.
  • the pictures can be images of various color formats (e.g., YUV or YCbCr for color space, with 4:4:4, 4:2:2 or 4:2:0 chroma sampling rate). If it is a YUV or YCbCr image, the image has a luma channel and two chroma channels.
  • the separate channels (also called color planes or components) of the image can have different spatial resolutions.
  • the tool may vary the QP across different color channels of the picture according to a user indication, encoder wizard setting, or through analysis of a picture, a portion of the picture, and/or one or more of the color channels.
  • the tool encodes (420) the picture of enhancement layer video using the determined QP or QPs.
  • the tool determines one or more QPs for the picture. If the picture's QPs do not vary spatially over the picture, then only a single QP is used for the picture. If the picture's QPs do vary spatially, then a different QP is determined for each unit (e.g., macroblock, block) in the picture. Additionally, if QPs vary across the color channels of the picture, then the tool determines multiple QPs for the multiple channels, and potentially determines different QPs for each unit in the picture. For example, a different QP is determined for the luma channel and each of the chroma channels of a unit in the picture. Generally, the encoding tool applies the QP(s) to each of the units in the picture and produces an enhancement layer bit stream.
  • the tool outputs (430) the encoded enhancement layer bit stream, which includes information indicating the QP or QPs used.
  • the information indicating the QP or QPs is interspersed in the bit stream with the other parameterized information for the picture or units.
  • the tool signals one or more QPs for each unit in the picture in the enhancement layer bit stream. The signaling can be done in the bit stream at the picture level or the unit level.
  • the tool signals a single bit at the picture level to indicate whether QP varies spatially, and if QP varies spatially then the tool signals another bit to indicate whether QP varies across the color channels of the picture.
  • the tool signals the value(s) of the QP(s) for each of the units in the picture at the unit level of the bit stream.
  • the tool may additionally signal at the picture level how many bits are used to signal QP information for each unit at the unit level of the bit stream.
  • the tool signals a table comprising different possible QP values, and then signals a selection value from the table for each of the units in the picture at the unit level in the bit stream.
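  • The following sketch loosely follows the picture-level signaling options just described; the bitstream writer (write_bit/write_bits), field widths, and ordering of syntax elements are assumptions rather than the patent's actual syntax.
      def signal_picture_qp_info(bs, varies_spatially, varies_across_channels,
                                 bits_per_unit_qp=None, qp_table=None):
          bs.write_bit(1 if varies_spatially else 0)
          if not varies_spatially:
              return
          bs.write_bit(1 if varies_across_channels else 0)
          if qp_table is not None:
              # Option B: signal a table of possible QP values; unit-level
              # syntax then selects an index into this table per unit.
              bs.write_bits(len(qp_table), 4)
              for qp in qp_table:
                  bs.write_bits(qp, 8)
          else:
              # Option A: signal how many bits carry QP information per unit.
              bs.write_bits(bits_per_unit_qp, 4)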
  • the tool performs the technique (400) for a picture of enhancement layer video and repeats the technique (400) on a picture-by-picture basis. Alternatively, the tool performs the technique for a group of pictures, slice, or other section of video, and repeats the technique on that basis.
  • Figure 5 shows a flowchart of an exemplary technique (500) of encoding enhancement layer video using QPs that vary spatially or across color channels of an individual frame.
  • An encoding tool (200) such as that described with reference to Figure 2 is used to perform the technique (500), or some other tool may be used.
  • the tool repeats the technique (500) on a frame-by-frame basis.
  • the tool first determines (505) whether QP varies spatially for the frame.
  • the tool analyzes the frame to determine whether varying QP would be acceptable or desirable according to one or more of a number of criteria such as desired rate-distortion efficiency, compression speed, degree of complexity of the frame, or other criteria. For example, a user indicates through a user interface such as an encoding wizard that a high degree of rate-distortion efficiency is desired. The tool then determines that a spatially variable QP is necessary to achieve the desired degree of rate-distortion efficiency. Alternatively, the tool determines that the complexity of the frame is above a pre-determined or user-defined threshold and thus determines that a spatially variable QP is desired.
  • the tool determines (510) the frame QP according to criteria such as rate constraints of the compressed file, perceptual quality and/or complexity of the input video.
  • the tool signals (515) the frame QP in the enhancement layer bit stream.
  • the tool determines (520) whether QP varies across the color channels of the frame.
  • the tool analyzes each color channel separately or together with the other color channels to determine whether varying QP would be acceptable or desirable for each color channel, according to one or more of a number of criteria such as desired rate-distortion efficiency, compression speed, degree of complexity of the frame, complexity of each channel in the frame, amount of variance within channels and between different channels, or some other criteria.
  • the tool determines (525) QPs to use within the frame. For example, the tool determines QPs for macroblocks in the frame according to criteria such as rate constraints, perceptual quality and/or complexity of the video for the respective macroblocks. After the tool has determined (525) QPs within the frame, the tool signals (530) the frame QP. Generally, the frame QP is the "default" QP used when encoding each macroblock in the frame. In one example, the frame QP is an average of the QPs of the macroblocks in the frame.
  • the tool determines the frame QP as the most common QP in the frame to reduce the bit cost for signaling the QPs for macroblocks. For example, the tool signals that QP varies spatially, that QP does not vary across channels, and that the frame QP is signaled using x bits, and then signals the value of the frame QP itself. Alternatively, the tool may signal that the frame QP is one of a number of entries in a given table (e.g. , a QP table for a sequence), or the tool may signal the frame QP in some other manner.
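  • For example, choosing the frame QP as the most common macroblock QP can be done with a simple frequency count, as in the sketch below; the averaging fallback is the other option mentioned above.
      from collections import Counter

      def choose_frame_qp(macroblock_qps, use_most_common=True):
          if use_most_common:
              # The most common QP minimizes how often per-macroblock QP
              # information must override the frame-level default.
              return Counter(macroblock_qps).most_common(1)[0][0]
          # Otherwise use the (rounded) average of the macroblock QPs.
          return round(sum(macroblock_qps) / len(macroblock_qps))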
  • the tool then signals (535) the QPs for the macroblocks in the frame.
  • this comprises signaling the QP for each of the macroblocks with respect to a predicted QP which can be either a frame QP or a QP that is predicted based on the QPs of one or more other, spatially adjacent macroblocks in the frame.
  • this comprises signaling the QP for each of the macroblocks as one of a plurality of values in a table.
  • the tool determines (540) QPs to use within a first color channel of the frame. For example, the tool proceeds to determine QPs for macroblocks in the Y color channel according to criteria such as rate constraints, perceptual quality and/or complexity of the video for the respective macroblocks.
  • After the tool determines (540) the QPs for macroblocks in the channel, the tool signals (545) the frame QP for the channel.
  • the frame QP for the channel is the "default" QP used when encoding each macroblock in the channel.
  • the tool determines the frame QP for the channel by averaging the QPs of each of the macroblocks in the channel.
  • the tool chooses the frame QP for the channel as the most commonly used QP in the channel.
  • signaling the frame QP for the channel comprises signaling that QP varies both spatially and across the different color channels in the frame, and then signaling the frame QP for the channel itself.
  • the frame QP for the channel may be signaled as one of several values in a QP table (e.g., a QP table for a sequence).
  • the tool checks (550) whether there are other color channels in the frame that have not been analyzed, for example, the chroma (U, V) channels. If there are, then the tool performs the determining (540) step and the signaling (545) step for the frame QP for each of the other channels.
  • the tool may perform the determining step (540) for the frame QP for each of the channels before the signaling step (545) for any of the channels, or the steps may be performed in some other order.
  • the tool next signals (555) the QPs for macroblocks for each of the channels.
  • this comprises signaling the QP for each of the macroblocks in each of the channels with respect to a predicted QP.
  • the predicted QP can be the channel QP, or the predicted QP can be a QP based on the QPs of one or more neighboring macroblocks in the color channel.
  • the tool signals the QP of each of the macroblocks in each of the channels as one of a plurality of QP values in a table.
  • QP for one or more of the color channels may not vary spatially, and so the tool indicates with a skip bit that the QPs for the macroblocks in such a color channel are all equal to the frame QP for the channel at some point in the encoding process, such as at the signaling step (545) or the signaling step (555).
  • Figure 6A is a flowchart showing details (600) of one approach to signaling (555) the QPs for macroblocks in each of plural color channels.
  • the tool signals (605) frame-level information for QP variation within one or more of the channels. For example, the tool signals at the frame level the number of bits used to define macroblock QPs relative to the frame QP for each of the channels. Alternatively, the tool signals information indicating a QP index table and populates the table with a plurality of values for different QPs, which can include the channel QP. A different table is indicated for each of the color channels or, alternatively, two or more of the color channels can share a table. Additionally, one or more of the color channels may not vary spatially over the frame, and so only a single QP may be indicated for that channel.
  • the tool signals (615) information for the QP of the next macroblock.
  • the tool signals whether the actual QP of the macroblock is the same as the macroblock's predicted QP, which can be the QP of the frame for the color channel or a spatially predicted value for the QP of the macroblock. Macroblock QP prediction rules vary depending on implementation. If the actual QP is not the same as the predicted QP, the tool then signals a difference value between the QP of the macroblock and the predicted QP.
  • the tool signals whether the actual QP of the macroblock is equal to the macroblock's predicted QP, which again can be the QP of the frame for the color channel or a spatially predicted QP value for the macroblock. If the macroblock QP is not equal to the predicted QP, then the tool signals that the QP of the macroblock is one of a plurality of QP values in a QP index table.
  • after the tool has signaled information for the QP of the macroblock in the given color channel, the tool checks (620) whether there is another color channel with a spatially varying QP. If there are one or more other color channels whose QPs have not been signaled, then the tool performs the signaling (615) step for the macroblock in the next color channel. If there is not another color channel with a spatially varying QP, the tool checks (625) whether there is another macroblock in the frame. The macroblocks can be checked according to a raster scan order or some other order. If there is another macroblock in the frame whose QPs have not been signaled, then the tool performs the signaling (615) and checking (620) steps for the next macroblock. If there is no other macroblock in the frame, then the tool is done signaling the QPs for macroblocks in each color channel of the frame.
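  • The following Python sketch illustrates the kind of signaling loop described for Figure 6A. For simplicity it takes the predicted QP for a macroblock in a channel to be the frame QP for that channel (one of the options mentioned above); the data layout and symbol names are assumptions for illustration, not the actual bit stream syntax.

```python
# Illustrative sketch of the Figure 6A signaling loop: for each macroblock (in
# raster scan order) and each color channel whose QP varies spatially, emit a
# one-bit skip flag and, when the actual QP differs from the predicted QP, a
# differential value. The predicted QP here is the frame QP for the channel,
# which is one of the options described above; the data layout and symbol
# names are assumptions for illustration.

def signal_channel_mb_qps(qp_grids, channel_qps, num_bits_diff):
    """qp_grids: {channel: 2-D list of macroblock QPs}; returns a flat list of symbols."""
    symbols = []                                   # (name, value) pairs standing in for bits
    any_grid = next(iter(qp_grids.values()))
    for row in range(len(any_grid)):               # raster scan order over macroblocks
        for col in range(len(any_grid[0])):
            for ch, grid in qp_grids.items():
                if num_bits_diff[ch] == 0:
                    continue                       # QP does not vary spatially in this channel
                qp_pred = channel_qps[ch]          # predicted QP = frame QP for the channel
                qp_mb = grid[row][col]
                if qp_mb == qp_pred:
                    symbols.append(("QP_SKIP_" + ch, 1))
                else:
                    symbols.append(("QP_SKIP_" + ch, 0))
                    symbols.append(("DIFF_QP_MB_" + ch, qp_mb - qp_pred))
    return symbols

# QP varies spatially in Y only; U and V use their channel QPs throughout.
grids = {"Y": [[30, 28], [30, 30]], "U": [[32, 32], [32, 32]], "V": [[32, 32], [32, 32]]}
print(signal_channel_mb_qps(grids, {"Y": 30, "U": 32, "V": 32}, {"Y": 3, "U": 0, "V": 0}))
```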
  • Figure 6B is a flowchart showing details (630) of one approach to signaling (535) the spatially varying QPs of the macroblocks in the frame.
  • the tool signals (635) frame-level information for QP spatial variation over the frame. For example, the tool signals at the frame level the number of bits used to define macroblock QPs relative to the frame QP. Alternatively, the tool signals information indicating a QP index table and populates the table with a plurality of values for different QPs.
  • the tool signals (645) information for the QP of the next macroblock.
  • the tool signals whether the QP of the macroblock is the same as the macroblock's predicted QP, which can be the QP of the frame or a spatially predicted value for the QP of the macroblock. Macroblock QP prediction rules vary depending on implementation. If the actual QP is not the same as the predicted QP, the tool signals a difference value between the QP of the macroblock and the predicted QP. Alternatively, if the macroblock QP is not equal to the predicted QP, then the tool signals that the QP of the macroblock is one of a plurality of QP values in a QP index table.
  • after the tool has signaled information for the QP of the macroblock for the frame, the tool checks (650) whether there is another macroblock in the frame.
  • the macroblocks can be checked according to a raster scan order or some other order. If there is another macroblock in the frame, then the tool performs the signaling (645) step for the next macroblock. If there is not another macroblock in the frame, then the tool finishes.
  • Figure 7 shows a general method (700) for decoding enhancement layer video with inverse quantization that varies spatially across a picture or across color channels of the picture.
  • a decoding tool (300) such as the one described with reference to Figure 3, is used to perform the technique (700), or some other tool may be used.
  • the decoding tool receives (710) encoded information in a bit stream for enhancement layer video.
  • the encoded information includes information that indicates QPs for units (e.g., macroblocks, blocks) of a picture or its channels.
  • the tool receives information signaled according to the techniques shown in Figures 5, 6A and 6B, receiving syntax elements that are signaled, evaluating the syntax elements and following the appropriate conditional bit stream paths, to determine QPs that vary spatially and/or between channels of a picture.
  • the tool receives QP information signaled according to another approach.
  • the tool then decodes (720) the enhancement layer video. In doing so, the tool varies inverse quantization (according to the signaled QP information) spatially and/or between channels for units of the enhancement layer video.
  • the tool performs the technique (700) for a picture of the enhancement layer video and repeats the technique on a picture-by-picture basis.
  • the tool performs the technique for a group of pictures, slice, or other section of video, and repeats the technique on that basis.
  • an encoder predictively codes quantization parameters using spatial prediction.
  • a corresponding decoder predicts the quantization parameters using spatial prediction during decoding.
  • the encoder and decoder predict a macroblock's QP using a QP prediction rule that considers QPs of spatially adjacent macroblocks within a picture or channel of a picture.
  • Spatial prediction of QPs can be used to encode QPs that vary both spatially and between channels, or it can be used in encoding and decoding of other types of QPs.
  • Figure 8 is a flowchart showing a generalized technique (800) for encoding and signaling QPs using spatial prediction.
  • An encoding tool (200) such as that described with reference to Figure 2 may be used to perform the method (800), or some other tool may be used.
  • the technique (800) is described with reference to an entire picture, but the technique may be applied separately to each color channel in the picture.
  • the tool gets (805) the QP for the next unit in the picture.
  • the unit can be a macroblock, block, or other region of the picture.
  • because the technique (800) addresses encoding and signaling of QP values, the encoder has already determined the QPs of the units and the QP of the picture.
  • the tool determines (810) the predicted QP for the unit.
  • the value of the predicted QP depends on the QP prediction rule in operation. Although the QP prediction rule depends on implementation, the encoder and decoder use the same QP prediction rule, whatever it happens to be.
  • a first example prediction rule compares QPs of units to the left of the current unit and above the current unit. If the QPs of the two neighboring units are the same, the encoder uses that QP as the predicted QP. Otherwise, the encoder uses the picture QP as the predicted QP for the current unit.
  • the encoder uses the median QP among QPs for left, top, and top right neighbors as the predicted QP.
  • the encoder uses another prediction rule, for example, considering a single neighbor's QP to be the predicted QP.
  • the QP prediction rule addresses cases where one or more of the neighboring units are outside of a picture or otherwise have no QP, for example, by using the picture QP or other default QP as the predicted QP of the current unit, or by substituting a dummy QP value for the missing neighbor unit.
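  • As a rough illustration, the example prediction rules above can be written as small helpers; treating a missing neighbor as None and falling back to the picture QP is one of the options named above and is an assumption here, not the only possible convention.

```python
# Sketches of the example QP prediction rules described above. A neighbor that
# lies outside the picture (or otherwise has no QP) is passed as None; falling
# back to the picture QP in that case is an illustrative choice.

def predict_qp_left_top(qp_left, qp_top, picture_qp):
    """First example rule: use the shared neighbor QP when left and top agree,
    otherwise use the picture QP."""
    if qp_left is not None and qp_top is not None and qp_left == qp_top:
        return qp_left
    return picture_qp

def predict_qp_median(qp_left, qp_top, qp_top_right, picture_qp):
    """Alternative rule: median of the left, top and top-right neighbor QPs,
    with any missing neighbor replaced by the picture QP as a dummy value."""
    neighbors = [picture_qp if q is None else q
                 for q in (qp_left, qp_top, qp_top_right)]
    return sorted(neighbors)[1]

print(predict_qp_left_top(26, 26, 30))     # neighbors agree -> 26
print(predict_qp_left_top(26, 28, 30))     # neighbors differ -> picture QP 30
print(predict_qp_median(26, 28, 28, 30))   # median of (26, 28, 28) -> 28
```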
  • the tool signals (825) the QP for the unit with reference to the predicted QP. For example, the tool signals a single bit indicating whether or not the unit uses the predicted QP. If not, the tool also signals information indicating the actual QP for the unit.
  • One approach to signaling the actual QP is to signal the difference between the QP for the unit and the predicted QP.
  • Another approach is to signal a QP index that indicates an alternative QP in a table of QPs available to both the encoder and the decoder.
  • instead of signaling the use/do-not-use selection decision separately from selection refinement information, the tool jointly signals the selection information, using a single code that indicates not to use the predicted QP and also indicates the actual QP to use.
  • the tool then checks (830) to see whether there are other units with QPs to be encoded in the picture (or channel). If there are other units, then the tool repeats the steps of getting (805) the QP for the next unit, determining (810) the predicted QP for that unit, and signaling (825) the QP for that unit.
  • Figure 10 is a flowchart showing a general technique (1000) for using spatial prediction to decode QPs for units of video.
  • a decoding tool such as the decoding tool (300) described with reference to Figure 3 or other decoding tool, performs the technique (1000).
  • the technique (1000) is described with reference to an entire picture, but the technique may be applied separately to each color channel in the picture.
  • the tool receives (1010) QP selection information for the next unit (e.g., macroblock, block) in the picture.
  • the selection information indicates whether the QP for the unit is the predicted QP or another QP, in which case the QP selection information also indicates what the other QP is.
  • the tool receives (as part of the QP selection information) a single bit indicating whether or not the unit uses the predicted QP. If not, the tool also receives (as part of the QP selection information) information indicating the actual QP for the unit.
  • the tool receives information indicating the difference between the QP for the unit and the predicted QP.
  • the tool receives a QP index that indicates an alternative QP in a table of QPs available to both the encoder and the decoder.
  • the QP selection information can include a separate decision flag and selection code, or it can include a single code that jointly represents the information.
  • the tool predicts (1010) the QP of the unit, and the value of the predicted QP depends on the QP prediction rule in operation. Whichever of the example QP prediction rules described with reference to Figure 8 is used during encoding is also used during decoding. Even when the predicted QP is not used as the actual QP for the current unit, the predicted QP is used to determine the actual QP. Alternatively, when the QP selection information indicates that a predicted QP is not used, the decoder skips determination of the predicted QP and decodes an independently signaled QP for the current unit.
  • the tool selects (1015) between the predicted QP and another QP, using the QP selection information. For example, the tool interprets part of the QP selection information that indicates whether or not the unit uses the predicted QP. If not, the tool also interprets additional QP selection information that indicates the other QP for the unit.
  • in a differential coding approach, the tool combines a differential value and the predicted QP to determine the other QP.
  • alternatively, the tool looks up a QP index in a table of available QPs to determine the other QP.
  • the tool then checks (1025) whether there are other units with QPs to be reconstructed in the picture (or channel). If there are, then the tool repeats the steps of receiving QP selection information for the next unit, determining the predicted QP for that unit, and selecting the QP for that unit.
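  • A minimal sketch of the per-unit selection step just described, assuming the QP selection information has already been parsed into a flag plus either a differential value or a table index; the argument names are illustrative, not bit stream syntax element names.

```python
# Sketch of the selection step for one unit. The selection information is
# assumed to be pre-parsed; real parsing is covered by the combined
# implementations described later.

def reconstruct_unit_qp(qp_pred, use_predicted, qp_diff=None, qp_index=None, qp_table=None):
    if use_predicted:
        return qp_pred                    # the unit uses the predicted QP
    if qp_diff is not None:
        return qp_pred + qp_diff          # differential coding approach
    return qp_table[qp_index]             # alternative QP index approach

print(reconstruct_unit_qp(30, True))                                      # -> 30
print(reconstruct_unit_qp(30, False, qp_diff=-2))                         # -> 28
print(reconstruct_unit_qp(30, False, qp_index=1, qp_table=[30, 26, 34]))  # -> 26
```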
  • Figure 9 is a flowchart illustrating a technique (900) for using an exemplary prediction rule for predicting the QP of a macroblock during encoding.
  • An encoding tool such as that described with reference to Figure 2, performs the technique (900) when encoding and signaling the QP for a macroblock (QP MB) in a frame or channel of the frame.
  • the tool first checks (905) whether the QP of a macroblock immediately to the left of the current macroblock (QP LEFT) is the same as the QP of a macroblock immediately above the current macroblock (QP TOP).
  • QP LEFT being equal to QP TOP indicates a trend for the QPs of that particular section of the frame or color channel such that it is reasonable to assume that QP MB, the QP of the current macroblock, is most likely close to, if not equal to, QP LEFT.
  • if QP LEFT is equal to QP TOP, QP PRED is set (910) to be equal to QP LEFT.
  • otherwise, QP PRED is set (915) to be equal to QP FRAME, which is the default QP of the frame or color channel.
  • QP FRAME is equal to the average of the QPs for the frame or color channel, the most common QP in the frame or color channel, or some other value expected to reduce bit rate associated with signaling QPs for macroblocks.
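  • A tiny sketch of the two options just mentioned for deriving QP FRAME from the already-determined macroblock QPs; rounding the average to an integer QP is an assumption for illustration.

```python
# Two illustrative ways of picking the frame/channel QP from macroblock QPs:
# their (rounded) average, or the most commonly used value.
from collections import Counter

def frame_qp_average(mb_qps):
    return round(sum(mb_qps) / len(mb_qps))

def frame_qp_most_common(mb_qps):
    return Counter(mb_qps).most_common(1)[0][0]

qps = [30, 30, 28, 30, 32, 30]
print(frame_qp_average(qps), frame_qp_most_common(qps))   # -> 30 30
```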
  • QP PRED is predicted according to the QPs of different macroblocks, such as QP TOP and QP BOTTOM (the QP of a macroblock directly below the current macroblock), QP LEFT and QP RIGHT (the QP of a macroblock directly to the right of the current macroblock), or some other combination of QPs in the frame or channel, depending on scan order followed in encoding QPs for the macroblocks.
  • QP PRED is predicted with regard to only a single previously decoded QP (such as QP LEFT), three previously decoded QPs, or some other combination of QPs.
  • the tool performs multiple checks to determine QP PRED.
  • for example, when QP LEFT is not equal to QP TOP LEFT, the tool checks whether QP TOP LEFT is equal to QP TOP, and if so, sets QP PRED equal to QP LEFT (assuming horizontal continuity in QP values).
  • QP PRED is based on the QPs of other color channels or previously reconstructed macroblocks in other frames.
  • the tool then checks (920) whether QP MB is equal to QP PRED. In areas of the frame or color channel with high levels of redundancy in QP values, QP MB will most likely be equal to QP PRED. In this instance, the tool signals (930) that QP SKIP is 1.
  • QP SKIP is a one-bit indicator which, when set to 1, indicates that the current macroblock uses QP PRED and the bit stream includes no other QP selection information for the current macroblock.
  • if QP MB is not equal to QP PRED, then the tool signals (925) that QP SKIP is 0. Setting QP SKIP to 0 indicates during encoding and decoding that QP MB is not equal to QP PRED and therefore another QP is signaled (935) for QP MB. In a differential coding approach, this other QP is signaled as a difference value relative to QP PRED. In an alternate QP selection approach, QP MB is signaled as one of a number of available QPs in a table of QP values. Or, the other QP is signaled in some other manner.
  • a QP prediction rule accounts for the unavailability of a neighbor QP by, for example, assigning a picture QP or other default QP to be the predicted QP for the current unit.
  • an encoder and decoder reduce the frequency of unavailable QPs by buffering dummy QP values for units that otherwise lack QPs. For example, even if QP varies spatially in a frame or channel, some macroblocks may still be encoded and decoded without using a QP. For a skipped macroblock, or a macroblock for which none of its blocks is coded (according to the coded block pattern for the macroblock), the bit stream includes no transform coefficient data and no QP is used.
  • when QP varies spatially and between channels, if a macroblock has transform coefficient data in a first channel but not in a second channel (e.g., since the coded block status of the block(s) in the second channel is 0 in the coded block pattern), the bit stream includes no QP information for the macroblock in the second channel.
  • the encoder and decoder infer the QP for the unit to be equal to the predicted QP for the unit, and the inferred value is used for subsequent QP prediction. For example, if a macroblock is skipped, the QP of the macroblock is set to be equal to the predicted QP for the macroblock, and the inferred QP value is buffered along with other actual QPs (and perhaps inferred QP values) for the frame.
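  • The inference rule just described can be sketched as follows, reusing the left/top prediction rule from the earlier sketches; the grid layout and raster order are illustrative assumptions.

```python
# Sketch of the inference rule described above: a macroblock that carries no QP
# (None below) is assigned its predicted QP, and the inferred value is buffered
# so that later macroblocks can still predict from it.

def fill_qp_buffer(actual_qps, frame_qp):
    """actual_qps: 2-D list with None for macroblocks that carry no QP."""
    rows, cols = len(actual_qps), len(actual_qps[0])
    buffered = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            left = buffered[r][c - 1] if c > 0 else None
            top = buffered[r - 1][c] if r > 0 else None
            pred = left if (left is not None and left == top) else frame_qp
            qp = actual_qps[r][c]
            buffered[r][c] = pred if qp is None else qp   # inferred QP = predicted QP
    return buffered

# The two macroblocks without a QP (None) take on their predicted QP (the frame QP here).
print(fill_qp_buffer([[28, None], [None, 28]], 30))       # -> [[28, 30], [30, 28]]
```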
  • an encoder and decoder use QPs that vary spatially and/or between channels of enhancement layer video, and the encoder and decoder use spatial prediction when encoding and decoding values of QP for macroblocks.
  • the encoder and decoder use the same QP prediction rule in the first and second combined implementations, although other QP prediction rules can instead be used.
  • the actual QP for the macroblock is signaled differentially relative to the predicted QP.
  • the actual QP for the macroblock is signaled as an alternative QP index to a table of available QPs for the frame.
  • QP FRAME UNIFORM is a 1-bit frame-level syntax element. It indicates whether QP varies spatially across the frame. If QP FRAME UNIFORM equals 0, then the QP varies spatially across the frame. If QP FRAME UNIFORM does not equal 0, then the QP does not vary spatially across the frame, and the encoder and decoder use simple frame-level signaling of the frame QP.
  • QP CHANNEL UNIFORM is a 1-bit frame-level syntax element that indicates whether QP varies across the color channels of the frame. If QP CHANNEL UNIFORM equals 0, then QP varies across the color channels (in addition to potentially varying spatially within each channel). If QP CHANNEL UNIFORM does not equal 0, then QP does not vary across the color channels.
  • Figure 11 illustrates bit stream syntax and pseudocode for receiving information that indicates frame QP and channel-specific QPs in first and second example combined implementations. Figures 11 through 16 show color channels for the YUV color space, but the pseudocode could be adapted to the RGB space, YCbCr, or some other color space.
  • if QP CHANNEL UNIFORM does not equal 0, then QP does not vary across the color channels, and the bit stream includes N bits signaling QP FRAME. If QP CHANNEL UNIFORM equals 0, then the bit stream includes N bits for QP FRAME Y, N bits for QP FRAME U, and N bits for QP FRAME V.
  • the value of N can be pre-defined, set for a sequence, or even set for a frame.
  • although Figure 11 shows the same value of N bits for all types of QP, different numbers of bits can be used to signal QP FRAME, QP FRAME Y, QP FRAME U, and/or QP FRAME V.
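  • A sketch of the channel-level parse described above for Figure 11, using a toy reader over a string of '0'/'1' characters; the reader and the field width N are assumptions for illustration, and Figure 11 defines the actual syntax.

```python
# Toy bit reader plus a parse of the channel-level frame QPs described above.

class Bits:
    def __init__(self, s):
        self.s, self.pos = s, 0
    def read(self, n):
        v = int(self.s[self.pos:self.pos + n], 2)
        self.pos += n
        return v

def parse_channel_frame_qps(bits, n=8):
    if bits.read(1) != 0:                    # QP_CHANNEL_UNIFORM != 0: one QP for all channels
        return {"QP_FRAME": bits.read(n)}
    return {"QP_FRAME_Y": bits.read(n),      # QP varies across the color channels
            "QP_FRAME_U": bits.read(n),
            "QP_FRAME_V": bits.read(n)}

# QP_CHANNEL_UNIFORM = 0, then QP_FRAME_Y = 30, QP_FRAME_U = 32, QP_FRAME_V = 32.
print(parse_channel_frame_qps(Bits("0" + "00011110" + "00100000" + "00100000")))
```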
  • Figures 11 and 13 to 16 illustrate decoder-side operations to receive bit stream syntax elements and determine QPs of macroblocks.
  • the corresponding encoder-side encoding and signaling operations mirror the operations shown in Figures 11 and 13 to 16.
  • for example, instead of receiving information for a differential QP value (or alternate QP index) and decoding it, an encoder determines the differential QP value (or alternate QP index) and signals it.
  • Figure 12 shows an example QP prediction rule used by the encoder and the decoder in the first and second example combined implementations.
  • the QP prediction rule generally corresponds to the rule explained with reference to steps (905, 910 and 915) of Figure 9.
  • For a current macroblock, if both the left neighboring macroblock and the top neighboring macroblock are available, and the two neighboring macroblocks have equal QPs, then this QP is used as the predicted QP for the current macroblock. If, however, QP TOP is different from QP LEFT, or if either of the neighbors is unavailable, the tool uses QP FRAME (or the appropriate channel-specific QP_FRAME_ value for the Y, U or V channel) as the predicted QP for the current macroblock.
  • the encoder and the decoder use a different QP prediction rule.
  • the encoder and decoder set the predicted QP for a current macroblock to be the median of QP values from the left, top and top-right neighbors.
  • the encoder and decoder set the predicted QP for a current macroblock to be QP LEFT if the QP values from top-left and top neighbors are the same (showing a horizontal consistency trend), set the predicted QP for the current macroblock to be QP TOP if the QP values from top-left and left neighbors are the same (showing a vertical consistency trend), and otherwise set the predicted QP for the current macroblock to be QP FRAME.
  • when QP MB is not the same as QP PRED, the bit stream includes a differential value that indicates QP MB relative to QP PRED.
  • the differential is signaled as a signed or unsigned integer according to a convention determined by the encoder and decoder.
  • Figure 13 illustrates bit stream syntax and pseudocode for receiving information that indicates the number of bits used to differentially signal QP MB for a frame or channels.
  • NUM BITS QP MB (3 bits) is a 3-bit value that indicates the number of bits used to signal QP MB differentials for macroblocks in a frame. This yields a number from 0 bits to 7 bits for differential QP MB information.
  • if the number of bits is zero, the predicted QP is always used for macroblocks, since no differential bits are allowed.
  • with 7 bits, for example, the differential values can vary from -64 to 63 in integer QP steps, -32 to 95 in integer QP steps, -32 to 31.5 in half-QP steps, etc.
  • the range is generally centered around QP PRED (or differential of zero). Setting the number of bits used to signal differential QP MB information trades off the costs of signaling the differential QP MB information at higher resolution versus the quality benefits of using the greater range of QP or resolution of QP.
  • NUM BITS QP MB Y (3 bits), NUM BITS QP MB U (3 bits), and NUM BITS QP MB V (3 bits) are 3-bit values that indicate the number of bits used to signal QP MB differentials for macroblocks in the Y channel, the U channel, and the V channel, respectively. This yields a number from 0 bits to 7 bits for differential QP MB information in the respective channels. Different channels do not need to use the same number of differential QP MB bits as each other.
  • the Y channel may be much more complex than either the U channel or the V channel, and thus the Y channel may use 4 bits for differential QP MB values whereas the U channel and the V channel each use 2 bits.
  • by setting the number of differential QP MB bits to zero for a channel, spatially adaptive quantization is effectively disabled for that channel.
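  • The relationship between the number of differential bits and the QP differentials they can represent can be sketched as below, assuming a range centered on the predicted QP (a differential of zero) and treating a half-QP step as simply a smaller step value; the exact mapping is a convention fixed by the encoder and decoder.

```python
# Sketch relating the number of differential bits to the representable
# differentials, under an assumed centered, two's-complement-style mapping.

def diff_range(num_bits, step=1.0):
    """Return (min, max) differential representable with num_bits bits."""
    if num_bits == 0:
        return (0.0, 0.0)                 # the predicted QP is always used
    count = 1 << num_bits                 # number of representable differentials
    return (-(count // 2) * step, (count // 2 - 1) * step)

print(diff_range(7))         # -> (-64.0, 63.0) in integer QP steps
print(diff_range(7, 0.5))    # -> (-32.0, 31.5) in half-QP steps
print(diff_range(2))         # -> (-2.0, 1.0)
```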
  • Figure 14 illustrates bit stream syntax and pseudocode for receiving information that indicates QP for each macroblock.
  • Figure 14 shows macroblock-level syntax elements. If QP FRAME UNIFORM is equal to 0, QP varies spatially over the frame.
  • the bit stream includes DIFF QP MB (NUM BITS QP MB bits).
  • NUM BITS QP MB can be an integer from 0 to 7.
  • DIFF QP MB represents the difference between QP MB and QP PRED.
  • the bit stream includes DIFF QP MB Y (NUM BITS QP MB Y bits), DIFF QP MB U (NUM BITS QP MB U bits), and DIFF QP MB V (NUM BITS QP MB V bits).
  • the number of bits for differential QP MB per channel can be an integer from 0 to 7.
  • DIFF QP MB Y represents the difference between QP MB Y and QP PRED Y.
  • QP MB Y = DIFF QP MB Y + QP PRED Y.
  • DIFF QP MB U and DIFF QP MB V represent similar values for the U and V channels, respectively.
  • This design allows for a very simple and efficient way to exploit inter-macroblock redundancy in QPs. Even when different color channels use different quantizers for a given macroblock, a 1-bit QP SKIP element for the macroblock is sufficient to indicate that the QPs of the color channels are identical to the QPs of the corresponding color channels of a neighboring macroblock (such as the left or top neighbor). Further, prediction using a simple comparison and selection of a single neighboring macroblock's QP is simpler than blending the QPs of two or more neighboring macroblocks; it eliminates the need for a median or averaging operation and provides similar compression efficiency. More complicated QP prediction rules can provide more accurate prediction at the cost of higher computational complexity.
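  • A sketch of the per-channel macroblock-level decode described for Figure 14, assuming the 1-bit QP SKIP element described above precedes the per-channel differentials and that differentials are coded as two's-complement values of NUM BITS QP MB bits per channel; both are illustrative assumptions, as is the toy bit reader.

```python
# Sketch of the macroblock-level decode for the per-channel case of Figure 14,
# under assumed conventions: a single QP_SKIP flag per macroblock, then
# two's-complement differentials per channel.

class Bits:
    def __init__(self, s):
        self.s, self.pos = s, 0
    def read(self, n):
        v = int(self.s[self.pos:self.pos + n], 2)
        self.pos += n
        return v
    def read_signed(self, n):
        v = self.read(n)
        return v - (1 << n) if v >= (1 << (n - 1)) else v

def decode_mb_qps(bits, qp_pred, num_bits):
    """qp_pred / num_bits: {channel: value}; returns {channel: QP_MB}."""
    if bits.read(1) == 1:                        # QP_SKIP: every channel uses its predicted QP
        return dict(qp_pred)
    qps = {}
    for ch, n in num_bits.items():
        if n == 0:                               # spatially adaptive quantization disabled
            qps[ch] = qp_pred[ch]
        else:                                    # QP_MB = QP_PRED + DIFF_QP_MB
            qps[ch] = qp_pred[ch] + bits.read_signed(n)
    return qps

# QP_SKIP = 0, then DIFF_QP_MB_Y = -2 in 4 bits ("1110"); U and V use 0 bits.
print(decode_mb_qps(Bits("0" + "1110"), {"Y": 30, "U": 32, "V": 32}, {"Y": 4, "U": 0, "V": 0}))
```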
  • a simple fixed length coding (FLC) table (with code lengths that can vary from frame to frame or channel to channel) is used.
  • the performance of such FLCs can be as good as that of variable length coding.
  • an encoder and decoder use variable length codes for differential QP MB values.
  • the ability to send the number of bits used to signal the differential QP provides an additional degree of flexibility in improving compression efficiency. If the macroblock QPs are very close to the frame QP, this proximity can be exploited by using only 1 or 2 bits to signal the differential QP MBs for the macroblocks that do not use predicted QP. If the macroblock QPs are very different (in terms of having a larger range), more bits are used to signal the differential QP MBs for the macroblocks.
  • the number of bits used to signal the differential QP MBs for each color channel can also be different, based on the characteristics of the respective macroblock QPs for each channel. For example, if the QP of the U and V channels for all of the macroblocks remains the same, and the luma QP varies spatially for the macroblocks, the tool uses zero bits for signaling the differential QP MB for each of the U and V channels, and 1 or more bits for signaling the differential QP MBs of the Y channel.
  • when QP SKIP is not equal to 1, QP MB is explicitly signaled using a QP index at the macroblock level.
  • the QP index references a QP in a table of available QPs, which is signaled at frame level.
  • Figure 15 illustrates bit stream syntax and pseudocode for receiving information that specifies the QP values in the table for a frame (or tables for channels), then populating the QP table.
  • Figure 15 shows frame-level syntax elements.
  • the bit stream includes syntax elements specifying the values of a QP table for the frame.
  • NUM_QP_INDEX (3 bits) is a 3-bit value regulating the number of different QPs in the table for the frame.
  • the internal variable NUM QP is equal to NUM QP INDEX + 2, for a range of 2 to 9.
  • the first QP in the QP index table, QP MB TABLE[0], is QP FRAME, the default QP value for the frame.
  • the available QPs are generally ordered from most frequent to least frequent, to facilitate effective variable length coding of QP indices at macroblock level. For example, in the tables shown in Figures 17A to 17F, a single bit is used to signal if QP MB is equal to QP MB TABLE[0].
  • the remaining rows of the QP table are filled, from position 1 through position NUM QP - 1, by receiving and decoding a QP value for each position.
  • the bit stream includes 8 bits to signal the QP value of each position in the table, though in other examples more or fewer bits can be used.
  • the QP index table is produced with QP FRAME at position 0 in the table and signaled QP values at each of the other positions in the table from 1 to NUM_QP_INDEX + 1.
  • the bit stream includes syntax elements to populate a QP table for each of the Y, U, and V color channels in the frame. For each channel, the positions of the table are filled with the channel-specific QP and alternate QPs.
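  • The table construction described for Figure 15 can be sketched as follows; the list-based representation is an illustrative assumption.

```python
# Sketch of the QP table construction described above: NUM_QP = NUM_QP_INDEX + 2
# entries, with QP_FRAME at position 0 and the signaled alternate QPs at
# positions 1 through NUM_QP - 1.

def build_qp_table(qp_frame, num_qp_index, alternate_qps):
    num_qp = num_qp_index + 2                  # range 2 to 9
    assert len(alternate_qps) == num_qp - 1    # one signaled QP per remaining position
    return [qp_frame] + list(alternate_qps)

print(build_qp_table(30, 1, [26, 34]))         # -> [30, 26, 34]
```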
  • Figure 16 illustrates bit stream syntax and pseudocode for receiving information that indicates QP for a macroblock, then determining the QP, in the second combined implementation.
  • Figure 16 shows macroblock-level syntax elements.
  • NUM QP EFFECTIVE, an internal counter, is set to NUM QP - 1, where NUM QP is set from frame-level information in the bit stream, as in Figure 15. This establishes the count of alternate QP values stored in the QP table for the frame. For example, if NUM QP is equal to 9, then the QP table has the frame QP value at position 0 and 8 alternate QP values at positions 1-8 in the table. Thus, NUM QP EFFECTIVE is equal to 8.
  • QP ID is a value that is used to locate a QP in the QP table. Initially, QP ID is 0.
  • the VLC table (1700) further comprises a QP ID of 1 corresponding to a VLC of 1.
  • the most common QP ID values in the frame or color channel are positioned near the top of the VLC tables, so that the most common QP IDs are signaled using fewer bits.
  • the encoder and decoder use other VLCs to represent QP IDs.
  • the bit stream includes a VLC associated with a QP ID in one of the VLC tables, where NUM QP EFFECTIVE indicates the table to use. For example, if NUM QP EFFECTIVE is equal to 4 and the tool decodes the Huffman code 110, then the tool determines the corresponding QP ID of 2 from the table (1710) shown in Figure 17C. When NUM QP EFFECTIVE is equal to 4, the number of alternate QP values in the QP table is 4, and the QP table also includes the QP FRAME. Thus, the QP IDs in the QP table are 0, 1, 2, 3 and 4. The corresponding VLC table includes only four positions, however, because a position is not needed for the predicted QP, which could have ID of 0, 1, 2, 3 or 4 in the QP table. This helps reduce overall bit rate associated with signaling QP IDs.
  • the decoding tool determines the ID of QP PRED, which is shown as QP PRED ID. The tool then checks whether the signaled QP ID (or initialized QP ID) is greater than or equal to QP PRED ID. If so, then the tool increments QP ID. If not, then the tool does not increment QP ID. Once the tool has determined the appropriate QP ID, the tool determines QP MB as the value in the QP table indicated by QP ID.
  • when the predicted QP for a current macroblock has a QP PRED ID of 1 and NUM QP EFFECTIVE is 1, QP ID retains its initial value of 0 and references the other (non-predicted) QP in the QP table with two available QPs. If the QP PRED ID of the predicted QP is 0, QP ID is incremented and references the other (non-predicted) QP in the QP table with two available QPs.
  • as another example, let QP PRED ID be equal to 2 for a current macroblock. If the tool receives a VLC that indicates a QP ID of 0 in the table (1715) shown in Figure 17D, then, since QP ID is less than QP PRED ID, the tool looks up the value at QP ID 0 in the QP table. In contrast, if the tool receives a VLC that indicates a QP ID of 4 in the table (1715) shown in Figure 17D, the tool increments the QP ID and looks up the value at the incremented QP ID in the QP table. By exploiting the fact that signaled QP ID values need not include QP PRED ID as a possible choice, overall bit rate associated with signaling QP ID values is reduced.
  • This technique also exploits inter-macroblock redundancy within sections, allows for signaling of the most common macroblock QPs using the shortest VLC codes, and, in certain cases, improves performance by using a VLC code for a lower QP ID to signal a QP ID that is actually higher.
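  • A sketch of the QP ID adjustment described above: because the predicted QP never needs to be addressed explicitly, a signaled QP ID at or above QP PRED ID is shifted up by one before indexing the QP table. VLC decoding of the signaled ID (per Figures 17A-17F) is omitted, and the table contents are illustrative.

```python
# Sketch of the QP ID adjustment for the second combined implementation, under
# the assumption that the signaled ID space excludes the predicted QP's position.

def decode_qp_from_id(signaled_qp_id, qp_pred_id, qp_table):
    qp_id = signaled_qp_id
    if qp_id >= qp_pred_id:                    # skip over the predicted QP's position
        qp_id += 1
    return qp_table[qp_id]

table = [30, 26, 34, 38, 42, 46]               # QP_FRAME at position 0 plus five alternates
print(decode_qp_from_id(0, 2, table))          # below QP_PRED_ID -> table[0] = 30
print(decode_qp_from_id(2, 2, table))          # at QP_PRED_ID -> table[3] = 38
print(decode_qp_from_id(4, 2, table))          # -> table[5] = 46
```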
  • the preceding examples of QP prediction involve spatial prediction of a single predicted QP for a current unit.
  • an encoder and decoder compute multiple predictors for a current unit, and the bit stream includes information indicating a selection of the predicted QP for the current unit from among the multiple predictors.
  • instead of performing spatial prediction of QPs, the encoder and decoder can use temporal prediction from co-located macroblocks in other pictures, or prediction of QPs of macroblocks in one channel from QPs of co-located macroblocks in another color channel.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Techniques and tools for encoding enhancement layer video with quantization that varies spatially and/or between color channels are presented, along with corresponding decoding techniques and tools. For example, an encoding tool determines whether quantization varies spatially over a picture, and the tool also determines whether quantization varies between color channels in the picture. The tool signals quantization parameters for macroblocks in the picture in an encoded bit stream. In some implementations, to signal the quantization parameters, the tool predicts the quantization parameters, and the quantization parameters are signaled with reference to the predicted quantization parameters. A decoding tool receives the encoded bit stream, predicts the quantization parameters, and uses the signaled information to determine the quantization parameters for the macroblocks of the enhancement layer video. The decoding tool performs inverse quantization that can vary spatially and/or between color channels.

Description

ADAPTIVE QUANTIZATION FOR ENHANCEMENT LAYER VIDEO CODING
BACKGROUND
[001] Engineers use compression (also called coding or encoding) to reduce the bit rate of digital video. Compression decreases the cost of storing and transmitting video by converting the video into a lower bit rate form. Decompression (also called decoding) reconstructs a version of the original video from the compressed form. A "codec" is an encoder/decoder system.
[002] Generally, much of the bit rate reduction from compression is achieved through quantization. According to one possible definition, quantization is a term used for an approximating non-reversible mapping function commonly used for lossy compression, in which there is a specified set of possible output values, and each member of the set of possible output values has an associated set of input values that result in the selection of that particular output value. A variety of quantization techniques have been developed, including scalar or vector, uniform or non-uniform, with or without dead zone, and adaptive or non-adaptive quantization.
[003] In many implementations, an encoder performs quantization essentially as a biased division of an original data value by a quantization factor. One or more quantization parameters (QPs) indicate the quantization factor for purposes of inverse quantization of the data value. For inverse quantization, often implemented as a multiplication operation, an encoder or decoder reconstructs a version of the data value using the quantization factor indicated by the QP(s). Quantization typically introduces loss in fidelity to the original data value, which can show up as compression errors or artifacts in the results of decoding.
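As a toy illustration of the quantization and inverse quantization just described (a sketch under assumptions, not any codec's actual definition), the following treats the step size as equal to the QP and applies a simple rounding bias:

```python
# Toy quantization as a biased division by a step size derived from a QP, and
# inverse quantization as the corresponding multiplication. The linear
# QP-to-step-size mapping and the rounding bias are illustrative assumptions.

def quantize(value, qp, bias=0.5):
    step = qp                                  # assume step size == QP for simplicity
    sign = -1 if value < 0 else 1
    return sign * int((abs(value) + bias * step) // step)

def inverse_quantize(level, qp):
    return level * qp

coeff = 137
level = quantize(coeff, qp=10)
print(level, inverse_quantize(level, qp=10))   # -> 14 140 (lossy reconstruction of 137)
```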
[004] Most scalable video codecs split video into a base layer and an enhancement layer. The base layer alone provides a reconstruction of the video at a lower quality level and/or a lower resolution, and the enhancement layer can be added to provide extra information that will increase the video quality. Many single-layer digital video coding standards today allow for QPs to vary spatially in the base layer. This feature allows encoding to adapt to the macroblock characteristics and thus achieve better perceptual quality for a given rate.
[005] While the above described techniques provide acceptable performance in some instances of scalable video coding, none of them provide the advantages and benefits of the techniques and tools described below.
SUMMARY
[006] In summary, the detailed description presents techniques and tools for scalable encoding and decoding of enhancement layer video using a spatially variable quantization. The quantization may be variable for an entire picture of the enhancement layer video or separately variable for each color channel in the enhancement layer video for the picture. The techniques and tools improve the performance of a general-purpose video encoder when it encodes an enhancement layer of video pictures.
[007] In some embodiments, a tool such as an encoder encodes enhancement layer video for a picture organized in multiple color channels (e.g., a luma ("Y") channel and two chroma ("U" and "V") channels). The tool selectively varies quantization spatially over the frame, and in some cases the tool selectively varies quantization spatially and also varies quantization between the multiple color channels of the enhancement layer video for the picture. The tool outputs encoded enhancement layer video for the picture in a bitstream, signaling QP information. The QP information indicates QPs that at least in part parameterize the varied quantization of the enhancement layer video for the picture.
[008] For corresponding decoding, a tool such as a decoder decodes enhancement layer video for a picture organized in multiple color channels. The tool receives encoded enhancement layer video for the picture in a bitstream, receiving QP information indicating QPs that at least in part parameterize varied quantization of the enhancement layer video for the picture. During inverse quantization, the tool accounts for quantization that varies spatially over the frame and between the multiple color channels of the enhancement layer video for the picture.
[009] In other embodiments, a tool such as a video decoder receives encoded information for video for a picture from a bitstream. The encoded information includes QP selection information for a current unit of the video for the picture. When the tool decodes the current unit, the tool predicts a QP for the current unit using one or more QPs for spatially neighboring units of the video for the picture. The tool then selects between the predicted QP and another QP using the QP selection information, and uses the selected QP in reconstruction of the current unit. In some implementations, the tool decodes different information for predicted QPs for each color channel.
[010] For corresponding encoding, a tool such as an encoder signals encoded information for video for a picture from a bitstream. The encoded information includes QP selection information for a current unit of the video for the picture. When the tool encodes the current unit, after determining a QP for the current unit, the tool encodes the QP selection information. The tool predicts a QP for the current unit using one or more QPs for spatially neighboring units of the video for the picture. If the predicted QP is the actual QP for the current unit, the QP selection information so indicates. Otherwise, the QP selection information indicates another QP for the current unit.
[011] The foregoing and other objects, features, and advantages will become more apparent from the following detailed description, which proceeds with reference to the accompanying figures. This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[012] Figure 1 is a block diagram of a suitable computing environment in which several described embodiments may be implemented.
[013] Figure 2 is a block diagram of an exemplary encoding system for encoding a picture of enhancement layer video quantized with one or more QPs that vary spatially and/or across color channels of the picture.
[014] Figure 3 is a block diagram of an exemplary decoding system for decoding a picture of enhancement layer video quantized with one or more QPs that vary spatially and/or across color channels of the picture.
[015] Figure 4 is a flow chart of a generalized technique for encoding enhancement layer video quantized with one or more QPs that vary spatially and/or across color channels of a picture.
[016] Figure 5 is a flow chart showing an exemplary technique of determining and signaling the QPs used to encode enhancement layer video quantized with one or more QPs that vary spatially and/or across color channels of a picture.
[017] Figures 6A and 6B are flow charts showing exemplary techniques of signaling QPs for macroblocks of enhancement layer video for a picture, where the QPs vary spatially and/or across color channels.
[018] Figure 7 is a flow chart of a generalized technique for decoding enhancement layer video quantized with one or more QPs that vary spatially and/or across color channels of a picture.
[019] Figure 8 is a flow chart of a generalized technique for using spatial prediction to encode and signal QPs for units of video.
[020] Figure 9 is a flow chart of an exemplary technique of using spatial prediction to encode and signal a QP for a macroblock.
[021] Figure 10 is a flow chart of a generalized technique for using spatial prediction to decode QPs for units of video.
[022] Figure 11 is a pseudocode listing illustrating bitstream syntax for signaling/receiving information that indicates frame QP and channel QPs in first and second example combined implementations.
[023] Figure 12 is a pseudocode listing illustrating an example QP prediction rule in the first and second example combined implementations.
[024] Figure 13 is a pseudocode listing illustrating bitstream syntax for signaling/receiving information that indicates the number of bits used for macroblock-level differential QP information in the first example combined implementation.
[025] Figure 14 is a pseudocode listing illustrating bitstream syntax for signaling/receiving QP selection information in the first example combined implementation.
[026] Figure 15 is a pseudocode listing illustrating bitstream syntax for signaling/receiving information that indicates how to populate a table of QP values in the second example combined implementation.
[027] Figure 16 is a pseudocode listing illustrating bitstream syntax for signaling/receiving QP selection information in the second example combined implementation.
[028] Figures 17A-F are VLC tables used for QP selection information in the second example combined implementation.
DETAILED DESCRIPTION
[029] Techniques and tools for adapting quantization spatially and from color channel to color channel are described herein. Depending on implementation, adapting quantization spatially and across color channels of enhancement layer video can help improve scalable video coding performance in several respects, especially for high-fidelity encoding of high bit depth video.
[030] Many base layer video encoders adapt quantization spatially. When enhancement layer video represents quality differences between reconstructed base layer video and the original video, the energy of the signal in the enhancement layer can vary roughly in proportion to the strength of adaptive quantization in the base layer. Adapting quantization of the enhancement layer video spatially helps improve encoding performance for the enhancement layer video.
[031 ] Spatially adapting quantization of enhancement layer video can have other advantages. In some scalable video encoding/decoding systems, certain areas of enhancement layer video are predicted from base layer video, while other areas of the enhancement layer video are predicted from previously reconstructed enhancement layer video, for example, using motion compensation. Using different levels of quantization in the different areas of the enhancement layer video can improve performance by allowing the encoder to adapt to the characteristics of the different areas.
[032] Adapting quantization between color channels of enhancement layer video can also improve performance. Different video formats can use samples in different color spaces such as RGB, YUV and YCbCr. For YUV or YCbCr, Y represents the brightness (luma) channel of video, and U and V, or Cb and Cr, represent the color (chroma) channels of the video. The human eye is, in general, more sensitive to variations in brightness than color, so encoders have been developed to take advantage of this fact by reducing the resolution of the chroma channels relative to the luma channel. In the YUV color space, one chroma sampling rate is 4:4:4 which indicates that for every luma sample, a corresponding U sample and a V sample are present. Another chroma sampling rate is 4:2:2, which indicates that a single U sample and a single V sample correspond to two horizontal luma samples. Chroma sampling rates at lower resolution, such as 4:2:2 or 4:2:0, result in fewer chroma samples and typically require fewer bits to encode than higher resolution chroma sample rates, such as 4:4:4. Aside from different resolutions in different channels due to chroma sampling, each color channel in the video may be quantized to a different level of fidelity in the base layer video.
[033] Some scalable video encoders encode base-layer video at a low chroma sampling rate (e.g., 4:2:0) and/or fidelity, and encode enhancement-layer video at a higher chroma sampling rate (e.g., 4:2:2 or 4:4:4). The chroma channels of the enhancement layer video may thus have different signal energies than the luma channel. Using different levels of quantization in the different channels of the enhancement layer video can improve performance by allowing the encoder to adapt to the characteristics of the channels.
[034] In some implementations, part or all of enhancement layer video can be remapped to a lower chroma resolution for encoding/decoding with a base layer video encoder/decoder. Adapting quantization between channels can help in this situation too. For example, if the base layer video is a tone-mapped version of the enhancement layer video, using different QPs for the luma channel, as compared to the chroma channels, can improve performance.
[035] Techniques and tools for efficiently encoding and signaling QP values are also described herein. For example, one method of encoding and signaling QP values for enhancement layer video includes using QP prediction to exploit inter-unit, spatial redundancy in QP values. In many scenarios, this helps reduce the cost of signaling QPs for units of a picture or a color channel of the picture, where a unit is a block, macroblock, segment, or some other type of unit. Spatial QP prediction can be used in conjunction with a simple mechanism to signal whether or not quantization varies spatially over a picture, or across the color channels in the picture.
[036] Some of the techniques and tools described herein address one or more of the problems noted in the background. Typically, a given technique/tool does not solve all such problems. Rather, in view of constraints and tradeoffs in encoding time, encoding resources, decoding time, decoding resources and/or quality, the given technique/tool improves encoding and/or decoding performance for a particular implementation or scenario.
I. Computing Environment.
[037] Figure 1 illustrates a generalized example of a suitable computing environment (100) in which several of the described embodiments may be implemented. The computing environment (100) is not intended to suggest any limitation as to scope of use or functionality, as the techniques and tools may be implemented in diverse general-purpose or special-purpose computing environments.
[038] With reference to Figure 1, the computing environment (100) includes at least one processing unit (110) and memory (120). In Figure 1, this most basic configuration (130) is included within a dashed line. The processing unit (110) executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. The memory (120) may be volatile memory (e.g., registers, cache, RAM), nonvolatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. The memory (120) stores software (180) implementing an encoder with one or more of the described techniques and tools for enhancement layer video coding and/or decoding using QPs that vary spatially and/or across the color channels of a picture.
[039] A computing environment may have additional features. For example, the computing environment (100) includes storage (140), one or more input devices (150), one or more output devices (160), and one or more communication connections (170). An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment (100). Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment (100), and coordinates activities of the components of the computing environment (100).
[040] The storage (140) may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment (100). The storage (140) stores instructions for the software (180) implementing the video encoder and/or decoder.
[041] The input device(s) (150) may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment (100). For audio or video encoding, the input device(s) (150) may be a sound card, video card, TV tuner card, or similar device that accepts audio or video input in analog or digital form, or a CD-ROM or CD-RW that reads audio or video samples into the computing environment (100). The output device(s) (160) may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment (100).
[042] The communication connection(s) (170) enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
[043] The techniques and tools can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment (100), computer-readable media include memory (120), storage (140), communication media, and combinations of any of the above.
[044] The techniques and tools can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
[045] For the sake of presentation, the detailed description uses terms like "produce" and "encode" to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
II. Exemplary Encoding Tool.
[046] Figure 2 is a block diagram of an encoding tool (200) for encoding input video as a base layer and an enhancement layer in conjunction with which some described embodiments may be implemented. For the base layer, the format of the base layer bit stream (248) can be a Windows Media Video or VC-1 format, MPEG-x format (e.g., MPEG-1, MPEG-2, or MPEG-4), H.26x format (e.g., H.261, H.262, H.263, or H.264), or other format.
[047] The tool (200) processes video pictures. The term "picture" generally refers to source, coded, or reconstructed image data. For progressive video, a picture is a progressive video frame. For interlaced video, a picture may refer to an interlaced video frame, the top field of the frame, or the bottom field of the frame, depending on context. The generic term "picture" will be used to represent these various options.
[048] The encoding tool includes a first scaler (204) which accepts input video pictures (202) and outputs base layer video to a base layer encoder (220). The first scaler (204) may downsample or otherwise scale the input video pictures (202), for example, to reduce sample depth, spatial resolution or chroma sampling resolution. Or, in some instances, the first scaler upsamples the input video pictures (202) or does not alter the input video pictures (202) at all.
[049] The base layer encoder (220) encodes the base layer video and outputs a base layer bit stream (248), and additionally makes available reconstructed base layer video which is input to an inverse scaler (252). If the reconstructed base layer video has a different bit depth, spatial resolution, chroma sampling rate, etc. than the input video pictures (202) due to scaling, then the inverse scaler (252) may upsample (or otherwise inverse scale) the reconstructed base layer video so that it has the same resolution as the input video pictures (202).
[050] The input video pictures (202) are compared against the reconstructed base layer video to produce enhancement layer video that is input to a second scaler (254). The second scaler (254) may or may not be the same physical component or software program as the first scaler (204). The second scaler (254) outputs the enhancement layer video (256) to an enhancement layer encoder (240).
[051] The enhancement layer encoder (240) compresses inter-coded, predicted "pictures" (256) of the enhancement layer video and intra-coded "pictures" (256) of the enhancement layer video. The "picture" at a given time in the enhancement layer video represents differences between an input video picture and a reconstructed base layer video picture, but is still encoded as a picture by the example encoder (240). For the sake of presentation, Figure 3 shows a path for intra-coded content through the enhancement layer encoder (240) and a path for inter-coded predicted content. Many of the components of the enhancement layer encoder (240) are used for compressing both intra-coded content and inter-coded, predicted content. The exact operations performed by those components can vary depending on the type of information being compressed. Although Figure 2 shows a single enhancement layer encoder (240), the enhancement layer video (256) can itself be separated into multiple layers of residual video for encoding with separate residual encoders. Generally, the enhancement layer video (256) that is encoded represents differences (but not necessarily all differences) between the reconstructed base layer video and the input video.
[052] In general, within the encoder (240), inter-coded, predicted content (as a picture) is represented in terms of prediction from previously reconstructed content (as one or more other pictures, which are typically referred to as reference pictures or anchors). For example, content at a given time is encoded as a progressive P-frame or B-frame, interlaced P-field or B-field, or interlaced P-frame or B-frame. Within the encoder (240), a prediction residual is the difference between predicted information and corresponding original enhancement layer video.
[053] If the enhancement layer video (256) content is encoded as a predicted picture, a motion estimator (258) estimates motion of macroblocks or other sets of samples of the enhancement layer video picture with respect to one or more reference pictures, which represent previously reconstructed enhancement layer video content. The picture store (264) buffers reconstructed enhancement layer video (266) as a reference picture. When multiple reference pictures are used, the multiple reference pictures can be from different temporal directions or the same temporal direction. The motion estimator (258) outputs motion information (260) such as motion vector information.
[054] The motion compensator (262) applies motion vectors to the reconstructed enhancement layer video content (266) (stored as reference picture(s)) when forming a motion-compensated current picture (268). The difference (if any) between a block of the motion-compensated enhancement layer video (268) and corresponding block of the original enhancement layer video (256) is the prediction residual (270) for the block. During later reconstruction of the enhancement layer video, reconstructed prediction residuals are added to the motion compensated enhancement layer video (268) to obtain reconstructed content closer to the original enhancement layer video (256). In lossy compression, however, some information is still lost from the original enhancement layer video (256). Alternatively, a motion estimator and motion compensator apply another type of motion estimation/compensation.
[055] A frequency transformer (280) converts spatial domain video information into frequency domain (i.e., spectral, transform) data. For block-based video content, the frequency transformer (280) applies a DCT, variant of DCT, or other forward block transform to blocks of the samples or prediction residual data, producing blocks of frequency transform coefficients. Alternatively, the frequency transformer (280) applies another conventional frequency transform such as a Fourier transform or uses wavelet or sub-band analysis. The frequency transformer (280) may apply an 8x8, 8x4, 4x8, 4x4 or other size frequency transform.
[056] A quantizer (282) then quantizes the blocks of transform coefficients. The quantizer (282) applies non-uniform, scalar quantization to the spectral data with a step size that varies spatially on a picture-by-picture basis, macroblock-by-macroblock basis or other basis. Additionally, in some cases the quantizer varies quantization across color channels of the enhancement layer video picture. The quantizer (282) can also apply another type of quantization, for example, a uniform or adaptive quantization for at least some spectral data coefficients, or directly quantizes spatial domain data in an encoder system that does not use frequency transformations.
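As a rough sketch of the kind of spatially varying quantization described above, the following Python fragment quantizes and reconstructs blocks with a step size chosen per macroblock and per color channel. The helper names, the simple rounding rule, and the mapping from QP to step size are illustrative assumptions, not the quantizer of any particular implementation.

    # Illustrative only: uniform scalar quantization with a step size that
    # can differ per macroblock and per color channel.
    def quantize_block(coeffs, step):
        # Map each transform coefficient to an integer level.
        return [int(round(c / step)) for c in coeffs]

    def dequantize_block(levels, step):
        # Reconstruct approximate coefficients from the integer levels.
        return [level * step for level in levels]

    def quantize_macroblock(blocks_by_channel, qp_by_channel, qp_to_step):
        # blocks_by_channel: {'Y': [block, ...], 'U': [...], 'V': [...]}
        # qp_by_channel: the QP chosen for this macroblock in each channel.
        # qp_to_step: assumed mapping from a QP value to a step size.
        return {channel: [quantize_block(block, qp_to_step(qp_by_channel[channel]))
                          for block in blocks]
                for channel, blocks in blocks_by_channel.items()}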
[057] When a reconstructed enhancement layer video picture is needed for subsequent motion estimation/compensation, an inverse quantizer (290) performs inverse quantization on the quantized spectral data coefficients. An inverse frequency transformer (292) performs an inverse frequency transform, producing blocks of reconstructed prediction residuals (for predicted enhancement layer video content) or samples (for intra-coded residual video content). If the enhancement layer video (256) was motion-compensation predicted, the reconstructed prediction residuals are added to the motion-compensated predictors (268) to form the reconstructed enhancement layer video. The picture store (264) buffers the reconstructed enhancement layer video for use in subsequent motion-compensated prediction.
[058] The entropy coder (284) compresses the output of the quantizer (282) as well as certain side information (e.g., quantization parameter values). Typical entropy coding techniques include arithmetic coding, differential coding, Huffman coding, run length coding, LZ coding, dictionary coding, and combinations of the above. The entropy coder (284) typically uses different coding techniques for different kinds of information, and can choose from among multiple code tables within a particular coding technique.
[059] A controller (not shown) receives inputs from various modules such as the motion estimator (258), frequency transformer (280), quantizer (282), inverse quantizer (290), and entropy coder (284). The controller evaluates intermediate results during encoding, for example, setting quantization step sizes and performing rate-distortion analysis. The controller works with modules such as the motion estimator (258), frequency transformer (280), quantizer (282), and entropy coder (284) to set and change coding parameters during encoding. When an encoder evaluates different coding parameter choices during encoding, the encoder may iteratively perform certain stages (e.g. , quantization and inverse quantization) to evaluate different parameter settings. The encoder may set parameters at one stage before proceeding to the next stage. Or, the encoder may jointly evaluate different coding parameters. The tree of coding parameter decisions to be evaluated, and the timing of corresponding encoding, depends on implementation. In some embodiments, the controller also receives input from an encoding session wizard interface, from another encoder application interface, or from another source to designate video as having specific content to be encoded using specific rules.
[060] The above description explicitly addresses motion compensation for enhancement layer video. The encoder (240) additionally performs intra-compression of the enhancement layer video. In that instance, the scaler (254) provides enhancement layer video (256) to the encoder (240) and the encoder intra-compresses it as an intra-coded picture, without motion compensation. Instead, the enhancement layer video (256) is provided directly to the frequency transformer (280), quantizer (282), and entropy coder (284) and output as encoded video. A reconstructed version of the intra-coded enhancement layer video can be buffered for use in subsequent motion compensation of other enhancement layer video.
[061] The relationships shown between modules within the encoder (240) indicate general flows of information in the encoder; other relationships are not shown for the sake of simplicity. In particular, Figure 2 generally does not show side information indicating modes, tables, etc. used for a video sequence, picture, macroblock, block, etc. Such side information, once finalized, is sent in the output bit stream, typically after entropy encoding of the side information.
[062] Particular embodiments of video encoders typically use a variation or supplemented version of the enhancement layer encoder (240). Depending on implementation and the type of compression desired, modules of the encoder can be added, omitted, split into multiple modules, combined with other modules, and/or replaced with like modules. For example, the controller can be split into multiple controller modules associated with different modules of the encoder. In alternative embodiments, encoders with different modules and/or other configurations of modules perform one or more of the described techniques.
III. Exemplary Decoding Tool.
[063] Figure 3 is a block diagram of a decoding system (300), including an exemplary enhancement layer decoder (340), in conjunction with which some described embodiments may be implemented. The system (300) includes a base layer decoder (320) which receives a base layer bit stream (302) and outputs reconstructed base layer video to a first inverse scaler (352). The base layer bit stream (302) can be a bit stream in a Windows Media Video or VC-1 format, MPEG-x format (e.g., MPEG-1, MPEG-2, or MPEG-4), H.26x format (e.g., H.261, H.262, H.263, or H.264), or other format. In certain embodiments, the base layer bit stream (302) is encoded using motion compensation, and thus the base layer decoder (320) includes a motion compensation loop. The first inverse scaler (352) is operable to upsample or otherwise inverse scale the reconstructed base layer video to the desired bit depth, spatial resolution, chroma sampling rate and/or other resolution of the output reconstructed video pictures (398).
[064] The system further includes an enhancement layer decoder (340) operable to receive an enhancement layer bit stream (304). The enhancement layer bit stream (304) can be the same format as the base layer bit stream (302), or it may be a different format. The entropy decoder (384) is operable to decode elements of the bit stream that were encoded by entropy encoding methods including arithmetic coding, differential coding, Huffman coding, run length coding, LZ coding, dictionary coding, and combinations of the above. The entropy decoder (384) typically uses different decoding techniques for different kinds of information, and can choose from among multiple code tables within a particular decoding technique. The entropy decoder (384) outputs side information such as motion vector information (360) to a motion compensator (362).
[065] An inverse quantizer (390) applies inverse quantization to some of the output of the entropy decoder (384). In certain embodiments, the inverse quantizer (390) is operable to reverse non-uniform scalar quantization with a step size that varies on a picture-by-picture basis, macroblock-by-macroblock basis, color channel-by-color channel basis, or some other basis. More generally, the inverse quantizer (390) is operable to reverse quantization applied during encoding.
[066] An inverse frequency transformer (392) accepts the output of the inverse quantizer (390). The inverse frequency transformer (392) is operable to produce blocks of spatial domain values by applying an inverse DCT, variant of inverse DCT, or other reverse block transform to the output of the inverse quantizer (390). The inverse frequency transformer (392) may be operable to reverse an 8x8, 8x4, 4x8, 4x4 or some other size frequency transform. The inverse frequency transformer (392) outputs reconstructed values (370) for a prediction residual (in the case of inter-coded enhancement layer video content) or samples (in the case of intra-coded enhancement layer video content).
[067] The motion vector information (360) output from the entropy decoder (384) is input to a motion compensator (362). The motion compensator (362) applies the motion vector information to previously reconstructed enhancement layer video buffered in a picture store (364) and outputs motion-compensation-predicted enhancement layer video (368).
[068] In decoding of inter-coded enhancement layer video, the motion-compensation-predicted enhancement layer video (368) is combined with the prediction residuals (370) to form reconstructed enhancement layer video (366). The reconstructed enhancement layer video (366) is buffered by the picture store (364) (for use in subsequent motion compensation) and output from the enhancement layer decoder (340) to a second inverse scaler (354).
[069] The enhancement layer decoder (340) may be operable to decode 8-bit video, 10-bit video, or video with some other bit depth. If the enhancement layer decoder (340) decodes 8-bit video and output video with a higher bit depth (e.g., 10-bit) is to be reconstructed, then the second inverse scaler (354) upsamples the reconstructed enhancement layer video (366) to the higher bit depth. Or, if the enhancement layer decoder (340) decodes 16-bit video and output video with a lower bit depth (e.g., 8-bit) is to be reconstructed, then the second inverse scaler (354) downsamples the reconstructed enhancement layer video (366) to the lower bit depth. The decoding tool combines the inverse scaled, reconstructed enhancement layer video output from the second inverse scaler (354) with the inverse scaled, reconstructed base layer video output by the first inverse scaler (352), to produce reconstructed video pictures (398) for the output video.
[070] The above description explicitly addresses decoding of inter-coded enhancement layer video. The decoder (340), using intra-decoding, also decodes intra-coded enhancement layer video. In that instance, the entropy decoder (384), inverse quantizer (390), and inverse frequency transformer (392) act as previously mentioned to produce samples of the enhancement layer video, bypassing motion compensation. The reconstructed enhancement layer video (366) is buffered in a picture store (364) for use in future motion compensation.
[071] The relationships shown between modules within the decoder (340) indicate general flows of information in the decoder; other relationships are not shown for the sake of simplicity. In particular, Figure 3 generally does not show side information indicating modes, tables, etc. used for a video sequence, picture, macroblock, block, etc.
[072] Particular embodiments of video decoders typically use a variation or supplemented version of the generalized decoder (340). Depending on implementation and the type of compression desired, modules of the decoder can be added, omitted, split into multiple modules, combined with other modules, and/or replaced with like modules. In alternative embodiments, decoders with different modules and/or other configurations of modules perform one or more of the described techniques.
[073] Although Figure 3 shows a single enhancement layer decoder (340), the enhancement layer video can itself be separated into multiple layers of residual video for encoding with separate residual encoders and signaling as multiple enhancement layer bit streams. A given decoding system includes one or more separate residual decoders for decoding one or more of the multiple enhancement layer bit streams. Generally, the enhancement layer video that is decoded represents differences (but not necessarily all differences) between the reconstructed base layer video and the original input video.
IV. Varying Quantization Spatially and Across Channels.
[074] According to a first set of techniques and tools, an encoder varies quantization of enhancement layer video spatially and/or across color channels of a picture. For example, the encoder varies quantization from unit-to-unit for multiple units (such as macroblocks) of enhancement layer video, potentially using different quantization in different color channels for the units. The encoder signals quantization parameters that parameterize the variable quantization. A corresponding decoder varies inverse quantization of the enhancement layer video spatially and/or across color channels of a picture.
A. Generalized Encoding Technique.
[075] Figure 4 shows a flow chart for a generalized technique (400) for encoding an enhancement layer video with quantization that varies spatially across a picture and/or across color channels of the picture. An encoding tool (200), such as that described with reference to Figure 2, performs the technique (400), or some other tool may be used.
[076] The encoding tool determines (405) whether to vary quantization spatially for a picture of enhancement layer video. This may be indicated by user input or through analysis of the picture or portions of the picture. For example, if a user desires a high degree of rate-distortion efficiency in compression, the user may direct the tool to use spatially varying QPs. Alternatively, if the picture being encoded has a high degree of complexity or spatial variance above a threshold value, then a pre-set threshold in software directs the tool to use spatially varying QPs when encoding the picture.
[077] The tool also determines (410) whether to vary quantization between the plural color channels of the picture of enhancement layer video. The pictures can be images of various color formats (e.g., YUV or YCbCr for color space, with 4:4:4, 4:2:2 or 4:2:0 chroma sampling rate). If it is a YUV or YCbCr image, the image has a luma channel and two chroma channels. The separate channels (also called color planes or components) of the image can have different spatial resolutions. The tool may vary the QP across different color channels of the picture according to a user indication, encoder wizard setting, or through analysis of a picture, a portion of the picture, and/or one or more of the color channels.
[078] Next, the tool encodes (420) the picture of enhancement layer video using the determined QP or QPs. The tool determines one or more QPs for the picture. If the picture's QPs do not vary spatially over the picture, then only a single QP is used for the picture. If the picture's QPs do vary spatially, then a different QP is determined for each unit (e.g., macroblock, block) in the picture. Additionally, if QPs vary across the color channels of the picture, then the tool determines multiple QPs for the multiple channels, and potentially determines different QPs for each unit in the picture. For example, a different QP is determined for the luma channel and each of the chroma channels of a unit in the picture. Generally, the encoding tool applies the QP(s) to each of the units in the picture and produces an enhancement layer bit stream.
[079] The tool outputs (430) the encoded enhancement layer bit stream, which includes information indicating the QP or QPs used. Typically, the information indicating the QP or QPs is interspersed in the bit stream with the other parameterized information for the picture or units. For example, the tool signals one or more QPs for each unit in the picture in the enhancement layer bit stream. The signaling can be done in the bit stream at the picture level or the unit level. In some implementations, the tool signals a single bit at the picture level to indicate whether QP varies spatially, and if QP varies spatially then the tool signals another bit to indicate whether QP varies across the color channels of the picture. If QP varies spatially over the picture or across the color channels of the picture, the tool signals the value(s) of the QP(s) for each of the units in the picture at the unit level of the bit stream. In this case the tool may additionally signal at the picture level how many bits are used to signal QP information for each unit at the unit level of the bit stream. Alternatively, the tool signals a table comprising different possible QP values, and then signals a selection value from the table for each of the units in the picture at the unit level in the bit stream.
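The picture-level and unit-level signaling just described might be sketched as follows. The bit-writer class, the flag polarity (0 meaning QP varies, as with the QP FRAME UNIFORM element described later), the 8-bit picture QP field, and the 3-bit field for the per-unit QP width are illustrative assumptions; the sketch also omits the channel-varying and prediction-based variants.

    class BitWriter:
        # Minimal illustrative bit writer: collects bits as a list of 0/1.
        def __init__(self):
            self.bits = []
        def put_bit(self, bit):
            self.bits.append(bit & 1)
        def put_bits(self, value, count):
            for i in reversed(range(count)):
                self.put_bit((value >> i) & 1)

    def signal_picture_qps(writer, varies_spatially, varies_across_channels,
                           picture_qp, unit_qps, bits_per_unit_qp):
        writer.put_bit(0 if varies_spatially else 1)         # picture-level flag
        if not varies_spatially:
            writer.put_bits(picture_qp, 8)                   # single picture QP
            return
        writer.put_bit(0 if varies_across_channels else 1)   # second flag
        writer.put_bits(bits_per_unit_qp, 3)    # bits used per unit-level QP value
        for qp in unit_qps:                     # one QP per unit (e.g. macroblock)
            writer.put_bits(qp, bits_per_unit_qp)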
[080] The tool performs the technique (400) for a picture of enhancement layer video and repeats the technique (400) on a picture-by-picture basis. Alternatively, the tool performs the technique for a group of pictures, slice, or other section of video, and repeats the technique on that basis.
B. Exemplary Encoding Technique.
[081] Figure 5 shows a flowchart of an exemplary technique (500) of encoding enhancement layer video using QPs that vary spatially or across color channels of an individual frame. An encoding tool (200), such as that described with reference to Figure 2, is used to perform the technique (500), or some other tool may be used. The tool repeats the technique (500) on a frame-by-frame basis.
[082] The tool first determines (505) whether QP varies spatially for the frame. The tool analyzes the frame to determine whether varying QP would be acceptable or desirable according to one or more of a number of criteria such as desired rate-distortion efficiency, compression speed, degree of complexity of the frame, or other criteria. For example, a user indicates through a user interface such as an encoding wizard that a high degree of rate-distortion efficiency is desired. The tool then determines that a spatially variable QP is necessary to achieve the desired degree of rate-distortion efficiency. Alternatively, the tool determines that the complexity of the frame is above a pre-determined or user-defined threshold and thus determines that a spatially variable QP is desired.
[083] If the tool determines that a spatially variable QP is not desired, the tool determines (510) the frame QP according to criteria such as rate constraints of the compressed file, perceptual quality and/or complexity of the input video. The tool signals (515) the frame QP in the enhancement layer bit stream.
[084] If the tool determines that QP does vary spatially, the tool determines (520) whether QP varies across the color channels of the frame. The tool analyzes each color channel separately or together with the other color channels to determine whether varying QP would be acceptable or desirable for each color channel, according to one or more of a number of criteria such as desired rate-distortion efficiency, compression speed, degree of complexity of the frame, complexity of each channel in the frame, amount of variance within channels and between different channels, or some other criteria.
[085] If the tool determines that QP does not vary across the color channels, the tool determines (525) QPs to use within the frame. For example, the tool determines QPs for macroblocks in the frame according to criteria such as rate constraints, perceptual quality and/or complexity of the video for the respective macroblocks.
[086] After the tool has determined (525) QPs within the frame, the tool signals (530) the frame QP. Generally, the frame QP is the "default" QP used when encoding each macroblock in the frame. In one example, the frame QP is an average of the QPs of the macroblocks in the frame. Alternatively, the tool determines the frame QP as the most common QP in the frame to reduce the bit cost for signaling the QPs for macroblocks. For example, the tool signals that QP varies spatially, that QP does not vary across channels, and that the frame QP is signaled using x bits, and then signals the value of the frame QP itself. Alternatively, the tool may signal that the frame QP is one of a number of entries in a given table (e.g., a QP table for a sequence), or the tool may signal the frame QP in some other manner.
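A small sketch of the two frame-QP choices just mentioned (rounded average or most common macroblock QP); the helper name and the rounding are assumptions for illustration.

    from collections import Counter

    def choose_frame_qp(macroblock_qps, rule="most_common"):
        # "Default" QP for the frame: either the most frequently used
        # macroblock QP or the (rounded) average of the macroblock QPs.
        if rule == "most_common":
            return Counter(macroblock_qps).most_common(1)[0][0]
        return int(round(sum(macroblock_qps) / len(macroblock_qps)))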
[087] The tool then signals (535) the QPs for the macroblocks in the frame. In one embodiment, this comprises signaling the QP for each of the macroblocks with respect to a predicted QP which can be either a frame QP or a QP that is predicted based on the QPs of one or more other, spatially adjacent macroblocks in the frame. In another embodiment, this comprises signaling the QP for each of the macroblocks as one of a plurality of values in a table.
[088] If the tool determines that QP does vary both spatially and across color channels, then the tool determines (540) QPs to use within a first color channel of the frame. For example, the tool proceeds to determine QPs for macroblocks in the Y color channel according to criteria such as rate constraints, perceptual quality and/or complexity of the video for the respective macroblocks.
[089] After the tool determines (540) the QPs for macroblocks in the channel, the tool signals (545) the frame QP for the channel. Generally, the frame QP for the channel is the "default" QP used when encoding each macroblock in the channel. In one example, the tool determines the frame QP for the channel by averaging the QPs of each of the macroblocks in the channel. In another example, the tool chooses the frame QP for the channel as the most commonly used QP in the channel. In one embodiment, signaling the frame QP for the channel comprises signaling that QP varies both spatially and across the different color channels in the frame, and then signaling the frame QP for the channel itself. Alternatively, the frame QP for the channel may be signaled as one of several values in a QP table (e.g., a QP table for a sequence).
[090] After the tool has signaled the frame QP for the channel, the tool checks (550) whether there are other color channels in the frame that have not been analyzed, for example, the chroma (U, V) channels. If there are, then the tool performs the determining (540) step and the signaling (545) step for the frame QP for each of the other channels. Alternatively, the tool may perform the determining step (540) for the frame QP for each of the channels before the signaling step (545) for any of the channels, or the steps may be performed in some other order.
[091] The tool next signals (555) the QPs for macroblocks for each of the channels. In one embodiment, this comprises signaling the QP for each of the macroblocks in each of the channels with respect to a predicted QP. The predicted QP can be the channel QP, or the predicted QP can be a QP based on the QPs of one or more neighboring macroblocks in the color channel. In another embodiment, the tool signals the QP of each of the macroblocks in each of the channels as one of a plurality of QP values in a table.
[092] In some cases, QP does not vary spatially within a given color channel. At some point in the encoding process, such as at the signaling step (545) or the signaling step (555), the tool then indicates with a skip bit that the QPs for the macroblocks in that color channel are all equal to the frame QP for the channel.
C. QP Signaling for Macroblocks in Each Color Channel.
[093] Figure 6A is a flowchart showing details (600) of one approach to signaling (555) the QPs for macroblocks in each of plural color channels.
[094] The tool signals (605) frame-level information for QP variation within one or more of the channels. For example, the tool signals at the frame level the number of bits used to define macroblock QPs relative to the frame QP for each of the channels. Alternatively, the tool signals information indicating a QP index table and populates the table with a plurality of values for different QPs, which can include the channel QP. A different table is indicated for each of the color channels or, alternatively, two or more of the color channels can share a table. Additionally, QP may not vary spatially over the frame for one or more of the color channels, and so only a single QP may be indicated for such a channel.
[095] On a macroblock-by-macroblock basis, the tool signals (615) information for the QP of the next macroblock. In one embodiment, the tool signals whether the actual QP of the macroblock is the same as the macroblock's predicted QP, which can be the QP of the frame for the color channel or a spatially predicted value for the QP of the macroblock. Macroblock QP prediction rules vary depending on implementation. If the actual QP is not the same as the predicted QP, the tool then signals a difference value between the QP of the macroblock and the predicted QP. Alternatively, the tool signals whether the actual QP of the macroblock is equal to the macroblock's predicted QP, which again can be the QP of the frame for the color channel or a spatially predicted QP value for the macroblock. If the macroblock QP is not equal to the predicted QP, then the tool signals that the QP of the macroblock is one of a plurality of QP values in a QP index table.
[096] After the tool has signaled information for the QP of the macroblock in the given color channel, the tool checks (620) whether there is another color channel with a spatially varying QP. If there are one or more other color channels whose QPs have not been signaled, then the tool performs the signaling (615) step for the macroblock in the next color channel. If there is not another color channel with a spatially varying QP, the tool checks (625) whether there is another macroblock in the frame. The macroblocks can be checked according to a raster scan order or some other order. If there is another macroblock in the channel whose QPs have not been signaled, then the tool performs the signaling (615) and checking (620) steps for the next macroblock. If there is no other macroblock in the frame, then the tool is done signaling the QPs for macroblocks in each color channel of the frame.
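One way to realize the macroblock-by-macroblock, channel-by-channel loop of Figure 6A is sketched below, using the differential option and the bit-writer interface assumed in the earlier sketch. The per-channel indicator bit, the prediction callback, and the unsigned packing of the difference are assumptions.

    def signal_mb_qps_per_channel(writer, mb_qps, predict_qp, bits_per_channel):
        # mb_qps[channel] is the list of macroblock QPs for that channel;
        # predict_qp(channel, index) returns the predicted QP for a macroblock;
        # bits_per_channel[channel] == 0 means QP does not vary in that channel.
        mb_count = len(mb_qps['Y'])
        for index in range(mb_count):                 # e.g. raster scan order
            for channel in ('Y', 'U', 'V'):
                bits = bits_per_channel[channel]
                if bits == 0:
                    continue                          # only the channel QP is used
                actual = mb_qps[channel][index]
                predicted = predict_qp(channel, index)
                if actual == predicted:
                    writer.put_bit(1)                 # use the predicted QP
                else:
                    writer.put_bit(0)
                    diff = actual - predicted
                    # Pack the (possibly negative) difference into the agreed
                    # number of bits; the signed convention is an assumption.
                    writer.put_bits(diff & ((1 << bits) - 1), bits)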
D. QP Signaling for Macroblocks in the Frame.
[097] Figure 6B is a flowchart showing details (630) of one approach to signaling (535) the spatially varying QPs of the macroblocks in the frame.
[098] As a first step, the tool signals (635) frame-level information for QP spatial variation over the frame. For example, the tool signals at the frame level the number of bits used to define macroblock QPs relative to the frame QP. Alternatively, the tool signals information indicating a QP index table and populates the table with a plurality of values for different QPs.
[099] On a macroblock-by-macroblock basis, the tool signals (645) information for the QP of the next macroblock. The tool signals whether the QP of the macroblock is the same as the macroblock's predicted QP, which can be the QP of the frame or a spatially predicted value for the QP of the macroblock. Macroblock QP prediction rules vary depending on implementation. If the actual QP is not the same as the predicted QP, the tool signals a difference value between the QP of the macroblock and the predicted QP. Alternatively, if the macroblock QP is not equal to the predicted QP, then the tool signals that the QP of the macroblock is one of a plurality of QP values in a QP index table.
[0100] After the tool has signaled information for the QP of the macroblock for the frame, the tool checks (650) whether there is another macroblock in the frame. The macroblocks can be checked according to a raster scan order or some other order. If there is another macroblock in the frame, then the tool performs the signaling (645) step for the next macroblock. If there is not another macroblock in the frame, then the tool finishes.
E. Generalized Decoding Technique.
[0101] Figure 7 shows a general method (700) for decoding enhancement layer video with inverse quantization that varies spatially across a picture or across color channels of the picture. A decoding tool (300), such as the one described with reference to Figure 3, is used to perform the technique (700), or some other tool may be used.
[0102] The decoding tool receives (710) encoded information in a bit stream for enhancement layer video. The encoded information includes information that indicates QPs for units (e.g., macroblocks, blocks) of a picture or its channels. In some embodiments, the tool receives information signaled according to the techniques shown in Figures 5, 6A and 6B, receiving syntax elements that are signaled, evaluating the syntax elements and following the appropriate conditional bit stream paths, to determine QPs that vary spatially and/or between channels of a picture. Alternatively, the tool receives QP information signaled according to another approach.
[0103] The tool then decodes (720) the enhancement layer video. In doing so, the tool varies inverse quantization (according to the signaled QP information) spatially and/or between channels for units of the enhancement layer video.
[0104] The tool performs the technique (700) for a picture of the enhancement layer video and repeats the technique on a picture-by-picture basis. Alternatively, the tool performs the technique for a group of pictures, slice, or other section of video, and repeats the technique on that basis.
V. Predictive Coding and Decoding of Quantization Parameters.
[0105] According to a second set of techniques and tools, an encoder predictively codes quantization parameters using spatial prediction. A corresponding decoder predicts the quantization parameters using spatial prediction during decoding. For example, the encoder and decoder predict a macroblock's QP using a QP prediction rule that considers QPs of spatially adjacent macroblocks within a picture or channel of a picture. Spatial prediction of QPs can be used to encode QPs that vary both spatially and between channels, or it can be used in encoding and decoding of other types of QPs.
A. Generalized Encoding.
[0106] Figure 8 is a flowchart showing a generalized technique (800) for encoding and signaling QPs using spatial prediction. An encoding tool (200), such as that described with reference to Figure 2 may be used to perform the method (800), or some other tool may be used. The technique (800) is described with reference to an entire picture, but the technique may be applied separately to each color channel in the picture.
[0107] The tool gets (805) the QP for the next unit in the picture. The unit can be a macroblock, block or other region of the picture. As the technique (800) addresses encoding and signaling of QP values, the encoder has already determined QPs of the units and the QP of the picture.
[0108] The tool determines (810) the predicted QP for the unit. The value of the predicted QP depends on the QP prediction rule in operation. Although the QP prediction rule depends on implementation, the encoder and decoder use the same QP prediction rule, whatever it happens to be. A first example prediction rule compares QPs of units to the left of the current unit and above the current unit. If the QPs of the two neighboring units are the same, the encoder uses that QP as the predicted QP. Otherwise, the encoder uses the picture QP as the predicted QP for the current unit. According to a second example prediction rule, the encoder uses the median QP among QPs for left, top, and top right neighbors as the predicted QP. Alternatively, the encoder uses another prediction rule, for example, considering a single neighbor's QP to be the predicted QP. For any of these example rules, the QP prediction rule addresses cases where one or more of the neighboring units are outside of a picture or otherwise have no QP, for example, by using the picture QP or other default QP as the predicted QP of the current unit, or by substituting a dummy QP value for the missing neighbor unit.
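The two example prediction rules could look like the following sketch; passing None for an unavailable neighbor and falling back to the picture QP are the fallback behaviors described above, while the function names are illustrative assumptions.

    def predict_qp_left_top(qp_left, qp_top, picture_qp):
        # First example rule: use the shared neighbor QP if the left and top
        # neighbors agree; otherwise fall back to the picture QP.
        if qp_left is not None and qp_top is not None and qp_left == qp_top:
            return qp_left
        return picture_qp

    def predict_qp_median(qp_left, qp_top, qp_top_right, picture_qp):
        # Second example rule: median of the left, top and top-right QPs,
        # substituting the picture QP for any missing neighbor.
        candidates = [qp if qp is not None else picture_qp
                      for qp in (qp_left, qp_top, qp_top_right)]
        return sorted(candidates)[1]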
[0109] The tool signals (825) the QP for the unit with reference to the predicted QP. For example, the tool signals a single bit indicating whether or not the unit uses the predicted QP. If not, the tool also signals information indicating the actual QP for the unit. One approach to signaling the actual QP is to signal the difference between the QP for the unit and the predicted QP. Another approach is to signal a QP index that indicates an alternative QP in a table of QPs available to both the encoder and the decoder. Alternatively, instead of signaling the use/do-not-use selection decision separately from selection refinement information, the tool jointly signals the selection information, using a single code to indicate not to use the predicted QP and also indicating the actual QP to use.
[0110] The tool then checks (830) to see whether there are other units with QPs to be encoded in the picture (or channel). If there are other units, then the tool repeats the steps of getting (805) the QP for the next unit, determining (810) the predicted QP for that unit, and signaling (825) the QP for that unit.
B. Generalized Decoding.
[0111] Figure 10 is a flowchart showing a general technique (1000) for using spatial prediction to decode QPs for units of video. A decoding tool, such as the decoding tool (300) described with reference to Figure 3 or other decoding tool, performs the technique (1000). The technique (1000) is described with reference to an entire picture, but the technique may be applied separately to each color channel in the picture.
[0112] The tool receives (1010) QP selection information for the next unit (e.g., macroblock, block) in the picture. Generally, the selection information indicates whether the QP for the unit is the predicted QP or another QP, in which case the QP selection information also indicates what the other QP is. For example, the tool receives (as part of the QP selection information) a single bit indicating whether or not the unit uses the predicted QP. If not, the tool also receives (as part of the QP selection information) information indicating the actual QP for the unit. In a differential coding approach, the tool receives information indicating the difference between the QP for the unit and the predicted QP. In an alternative QP selection approach, the tool receives a QP index that indicates an alternative QP in a table of QPs available to both the encoder and the decoder. The QP selection information can include a separate decision flag and selection code, or it can include a single code that jointly represents the information.
[0113] The tool predicts (1010) the QP of the unit, and the value of the predicted QP depends on the QP prediction rule in operation. Any of the example QP prediction rules described with reference to Figure 8, when used during encoding, is also used during decoding. Even when the predicted QP is not used as the actual QP for the current unit, the predicted QP is used to determine the actual QP. Alternatively, when the QP selection information indicates that a predicted QP is not used, the decoder skips determination of the predicted QP and decodes an independently signaled QP for the current unit.
[0114] The tool selects (1015) between the predicted QP and another QP, using the QP selection information. For example, the tool interprets part of the QP selection information that indicates whether or not the unit uses the predicted QP. If not, the tool also interprets additional QP selection information that indicates the other QP for the unit. In a differential coding approach, the tool combines a differential value and the predicted QP to determine the other QP. In an alternative QP selection approach, the tool looks up a QP index in a table of available QPs to determine the other QP.
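Decoding one unit's QP from the selection information might be sketched as follows. The bit-reader class, the meaning of the leading bit, and the treatment of the differential as a non-negative offset are assumptions for illustration; an actual bit stream convention could differ (for example, signed differentials or the index adjustment of the second combined implementation described later).

    class BitReader:
        # Minimal illustrative bit reader over a list of 0/1 values.
        def __init__(self, bits):
            self.bits = bits
            self.pos = 0
        def get_bit(self):
            bit = self.bits[self.pos]
            self.pos += 1
            return bit
        def get_bits(self, count):
            value = 0
            for _ in range(count):
                value = (value << 1) | self.get_bit()
            return value

    def decode_unit_qp(reader, predicted_qp, num_bits, qp_table=None):
        if reader.get_bit() == 1:
            return predicted_qp                  # unit uses the predicted QP
        if qp_table is None:
            return predicted_qp + reader.get_bits(num_bits)   # differential
        return qp_table[reader.get_bits(num_bits)]            # alternative QP index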
[0115] The tool then checks (1025) whether there are other units with QPs to be reconstructed in the picture (or channel). If there are, then the tool repeats the steps of receiving QP selection information for the next unit, determining the predicted QP for that unit, and selecting the QP for that unit.
C. Exemplary Prediction Rules.
[0116] Figure 9 is a flowchart illustrating a technique (900) for using an exemplary prediction rule for predicting the QP of a macroblock during encoding. An encoding tool, such as that described with reference to Figure 2, performs the technique (900) when encoding and signaling the QP for a macroblock (QP MB) in a frame or channel of the frame.
[0117] The tool first checks (905) whether the QP of a macroblock immediately to the left of the current macroblock (QP LEFT) is the same as the QP of a macroblock immediately above the current macroblock (QP TOP). QP LEFT being equal to QP TOP indicates a trend for the QPs of that particular section of the frame or color channel such that it is reasonable to assume that QP MB, the QP of the current macroblock, is most likely close to, if not equal to, QP LEFT. Thus, QP PRED is set (910) to be equal to QP LEFT. If QP LEFT is not equal to QP TOP, or if either QP LEFT or QP TOP is unavailable, then QP PRED is set (915) to be equal to QP FRAME, which is the default QP of the frame or color channel. Generally, QP FRAME is equal to the average of the QPs for the frame or color channel, the most common QP in the frame or color channel, or some other value expected to reduce bit rate associated with signaling QPs for macroblocks.
[0118] In alternative QP prediction rules, QP PRED is predicted according to the QPs of different macroblocks, such as QP TOP and QP BOTTOM (the QP of a macroblock directly below the current macroblock), QP LEFT and QP RIGHT (the QP of a macroblock directly to the right of the current macroblock), or some other combination of QPs in the frame or channel, depending on scan order followed in encoding QPs for the macroblocks. Or, QP PRED is predicted with regard to only a single previously decoded QP (such as QP LEFT), three previously decoded QPs, or some other combination of QPs. In some examples, the tool performs multiple checks to determine QP PRED. For example, if QP LEFT is not equal to QP TOP LEFT, the tool checks to determine whether QP TOP LEFT is equal to QP TOP, and if so, sets QP PRED equal to QP LEFT (assuming horizontal continuity in QP values). In still other examples, QP PRED is based on the QPs of other color channels or previously reconstructed macroblocks in other frames.
[0119] Returning to Figure 9, the tool then checks (920) whether QP MB is equal to QP PRED. In areas of the frame or color channel with high levels of redundancy in QP values, QP MB will most likely be equal to QP PRED. In this instance, the tool signals (930) that QP SKIP is 1. QP SKIP is a one-bit indicator which, when set to 1, indicates that the current macroblock uses QP PRED and the bit stream includes no other QP selection information for the current macroblock.
[0120] If QP MB is not equal to QP PRED, then the tool signals (925) that QP SKIP is 0. Setting QP SKIP to 0 indicates during encoding and decoding that QP MB is not equal to the QP PRED and therefore another QP is signaled (935) for QP MB. In a differential coding approach, this other QP is signaled as a difference value relative to QP PRED. In an alternate QP selection approach, QP MB is signaled as one of a number of available QPs in a table of QP values. Or, the other QP is signaled in some other manner.
D. Treatment of Skip Macroblocks.
[0121] A QP prediction rule accounts for the unavailability of a neighbor QP by, for example, assigning a picture QP or other default QP to be the predicted QP for the current unit. In some implementations, an encoder and decoder reduce the frequency of unavailable QPs by buffering dummy QP values for units that otherwise lack QPs. For example, even if QP varies spatially in a frame or channel, some macroblocks may still be encoded and decoded without using a QP. For a skipped macroblock or a macroblock for which no blocks are coded (according to the coded block pattern for the macroblock), the bit stream includes no transform coefficient data and no QP is used. Similarly, when QP varies spatially and between channels, if a macroblock has transform coefficient data in a first channel but not a second channel (e.g., since the coded block status of the block(s) in the second channel is 0 in the coded block pattern), the bit stream includes no QP information for the macroblock in the second channel.
[0122] Thus, in some implementations, if QP is not available for a particular unit, the encoder and decoder infer the QP for the unit to be equal to the predicted QP for the unit, and the inferred value is used for subsequent QP prediction. For example, if a macroblock is skipped, the QP of the macroblock is set to be equal to the predicted QP for the macroblock, and the inferred QP value is buffered along with other actual QPs (and perhaps inferred QP values) for the frame.
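A sketch of that inference for macroblocks that carry no QP (for example, skipped macroblocks): the inferred value is stored so that later spatial prediction can treat it like any other macroblock QP. The buffer layout and function name are assumptions.

    def record_macroblock_qp(qp_buffer, mb_index, has_qp, decoded_qp, predicted_qp):
        # If the bit stream carries no QP for this macroblock, infer its QP to
        # be the predicted QP and buffer the inferred value so that subsequent
        # QP prediction has a neighbor value to use.
        qp_buffer[mb_index] = decoded_qp if has_qp else predicted_qp
        return qp_buffer[mb_index]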
VI. Combined Implementations.
[0123] In first and second combined implementations, an encoder and decoder use QPs that vary spatially and/or between channels of enhancement layer video, and the encoder and decoder use spatial prediction when encoding and decoding values of QP for macroblocks. The encoder and decoder use the same QP prediction rule in the first and second combined implementations, although other QP prediction rules can instead be used. In the first combined implementation, when the predicted QP is not used for a macroblock, the actual QP for the macroblock is signaled differentially relative to the predicted QP. In contrast, in the second combined implementation, when the predicted QP is not used for a macroblock, the actual QP for the macroblock is signaled as an alternative QP index to a table of available QPs for the frame.
A. General Signaling in First and Second Combined Implementations.
[0124] In the first and second combined implementations, QP FRAME UNIFORM is a 1-bit frame-level syntax element. It indicates whether QP varies spatially across the frame. If QP FRAME UNIFORM equals 0, then the QP varies spatially across the frame. If QP FRAME UNIFORM does not equal 0, then the QP does not vary spatially across the frame, and the encoder and decoder use simple frame-level signaling of frame QP.
[0125] Similarly, QP CHANNEL UNIFORM is a 1-bit frame-level syntax element that indicates whether QP varies across the color channels of the frame. If QP CHANNEL UNIFORM equals 0, then QP varies across the color channels (in addition to potentially varying spatially within each channel). If QP CHANNEL UNIFORM does not equal 0, then QP does not vary across the color channels.
[0126] Figure 11 illustrates bit stream syntax and pseudocode for receiving information that indicates frame QP and channel-specific QPs in first and second example combined implementations. Figures 11 through 16 show color channels for the YUV color space, but the pseudocode could be adapted to the RGB space, YCbCr, or some other color space.
[0127] If QP CHANNEL UNIFORM does not equal 0, then QP does not vary across the color channels, and the bit stream includes N bits signaling QP FRAME. If QP CHANNEL UNIFORM equals 0 then the bit stream includes N bits for QP FRAME Y, N bits for QP FRAME U, and N bits for QP FRAME V. The value of N can be pre-defined, set for a sequence, or even set for a frame. Moreover, although Figure 11 shows the same value of N bits for all types of QP, different numbers of bits can be used to signal QP FRAME, QP FRAME Y, QP FRAME U, and/or QP FRAME V.
[0128] Figures 11 and 13 to 16 illustrate decoder-side operations to receive bit stream syntax elements and determine QPs of macroblocks. The corresponding encoder-side encoding and signaling operations mirror the operations shown in Figures 11 and 13 to 16. For example, instead of receiving information for a differential QP value (or alternate QP index) and decoding it, an encoder determines the differential QP value (or alternate QP index) and signals it.
B. Spatial Prediction Rule in First and Second Combined Implementations.
[0129] Figure 12 shows an example QP prediction rule used by the encoder and the decoder in the first and second example combined implementations. The QP prediction rule generally corresponds to the rule explained with reference to steps (905, 910 and 915) of Figure 9. For a current macroblock, if both the left neighboring macroblock and the top neighboring macroblock are available, and the two neighboring macroblocks have equal QPs, then this QP is used as the predicted QP for the current macroblock. If, however, QP TOP is different from QP LEFT, or if either of the neighbors is unavailable, the tool uses QP FRAME (or the appropriate channel-specific QP_FRAME_ value for the Y, U or V channel) as the predicted QP for the current macroblock.
[0130] Alternatively, the encoder and the decoder use a different QP prediction rule. For example, the encoder and decoder set the predicted QP for a current macroblock to be the median of QP values from the left, top and top-right neighbors. Or, the encoder and decoder set the predicted QP for a current macroblock to be QP LEFT if the QP values from top-left and top neighbors are the same (showing a horizontal consistency trend), set the predicted QP for the current macroblock to be QP TOP if the QP values from top-left and left neighbors are the same (showing a vertical consistency trend), and otherwise set the predicted QP for the current macroblock to be QP FRAME.
C. Signaling QP Differentials in First Combined Implementation.
[0131] In the first combined implementation, if QP MB is not the same as QP PRED, the bit stream includes a differential value that indicates QP MB relative to QP PRED. Generally, the differential is signaled as a signed or unsigned integer according to a convention determined by the encoder and decoder.
[0132] Figure 13 illustrates bit stream syntax and pseudocode for receiving information that indicates the number of bits used to differentially signal QP MB for a frame or channels. The syntax elements shown in Figure 13 are signaled at frame level in the bit stream. If QP FRAME UNIFORM == 0, then QP varies spatially over the frame of enhancement layer video and thus QP MB information is signaled at the macroblock level. If QP FRAME UNIFORM does not equal 0, then the QP of the frame (or channels) is signaled at the frame level of the bit stream.
[0133] If QP CHANNEL UNIFORM is not equal to 0, then the tool decodes NUM BITS QP MB (3 bits). NUM BITS QP MB is a 3-bit value that indicates the number of bits used to signal QP MB differentials for macroblocks in a frame. This yields a number from 0 bits to 7 bits for differential QP MB information. When the number of bits is 0, the predicted QP is always used for macroblocks, since no differential bits are allowed. At the other extreme, when the number of bits is 7, differentials within a range of 2^7 = 128 steps relative to QP PRED can be signaled. Depending on convention, the differential values can vary from -64 to 63 in integer QP steps, -32 to 95 in integer QP steps, -32 to 31.5 in half-QP steps, etc. In some implementations, the range is generally centered around QP PRED (or a differential of zero). Setting the number of bits used to signal differential QP MB information trades off the cost of signaling the differential QP MB information at higher resolution versus the quality benefits of using the greater range of QP or resolution of QP.
[0134] If QP CHANNEL UNIFORM is equal to 0, then the tool decodes NUM BITS QP MB Y (3 bits), NUM BITS QP MB U (3 bits), and NUM BITS QP MB V (3 bits), which are 3-bit values that indicate the number of bits used to signal QP MB differentials for macroblocks in the Y channel, the U channel, and the V channel, respectively. This yields a number from 0 bits to 7 bits for differential QP MB information in the respective channels. Different channels do not need to use the same number of differential QP MB bits as each other. For example, the Y channel may be much more complex than either the U channel or the V channel, and thus the Y channel may use 4 bits for differential QP MB values whereas the U channel and the V channel each use 2 bits. By setting the number of differential QP MB bits to zero for a channel, spatially adaptive quantization is effectively disabled for that channel.
[0135] Figure 14 illustrates bit stream syntax and pseudocode for receiving information that indicates QP for each macroblock. Figure 14 shows macroblock-level syntax elements. If QP FRAME UNIFORM is equal to 0, QP varies spatially over the frame. For a current macroblock, the bit stream includes a bit QP SKIP, which is used to indicate whether QP MB is equal to QP PRED. If QP SKIP is equal to 1, then QP MB is set to be equal to QP PRED. QP SKIP = 0 indicates that QP MB is being signaled explicitly. If so, the next bit stream syntax elements depend on whether QP CHANNEL UNIFORM is equal to 0.
[0136] If QP CHANNEL UNIFORM is not equal to 0, then the bit stream includes DIFF QP MB (NUM BITS QP MB bits). In the example of Figure 13, NUM BITS QP MB can be an integer from 0 to 7. For the current macroblock, DIFF QP MB represents the difference between QP MB and QP PRED. QP MB is determined to be: QP MB = DIFF QP MB + QP PRED, where QP PRED is the QP already predicted for the current macroblock.
[0137] If QP CHANNEL UNIFORM is equal to 0, then QP for the current macroblock varies across the different color channels of the frame, and so the bit stream includes DIFF QP MB Y (NUM BITS QP MB Y bits), DIFF QP MB U (NUM BITS QP MB U bits), and DIFF QP MB V (NUM BITS QP MB V bits). In the example of Figure 13, the number of bits for differential QP MB per channel can be an integer from 0 to 7. DIFF QP MB Y represents the difference between QP MB Y and QP PRED Y. QP MB Y = DIFF QP MB Y + QP PRED Y. DIFF QP MB U and DIFF QP MB V represent similar values for the U and V channels, respectively.
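For the first combined implementation, macroblock-level decoding of the channel QPs might look like the sketch below, reusing the assumed bit-reader interface from the earlier sketch. Treating DIFF QP MB as a non-negative value is an assumption; the actual signed or unsigned convention is left to the implementation, as noted above.

    def decode_mb_qps_first_impl(reader, qp_pred, num_bits_qp_mb):
        # qp_pred and num_bits_qp_mb are per-channel dictionaries keyed by
        # 'Y', 'U', 'V'. A single QP_SKIP bit covers all channels.
        if reader.get_bit() == 1:                       # QP_SKIP == 1
            return dict(qp_pred)                        # all channels use QP_PRED
        qps = {}
        for channel in ('Y', 'U', 'V'):
            diff = reader.get_bits(num_bits_qp_mb[channel])   # DIFF_QP_MB_x
            qps[channel] = qp_pred[channel] + diff            # QP_MB_x
        return qps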
[0138] This design allows for a very simple and efficient way to exploit inter-macroblock redundancy in QPs. Even when different color channels use different quantizers for a given macroblock, a 1-bit QP SKIP element for the macroblock is sufficient to indicate that the QPs of the color channels are identical to the QPs of the corresponding color channels of a neighboring macroblock (such as the left or top neighbor). Further, prediction using a simple comparison and selecting a single neighboring macroblock's QP is simpler than blending two or more neighboring macroblocks — it eliminates the need for a median or averaging operation, and provides similar efficiency in compression. More complicated QP prediction rules can provide more accurate prediction at the cost of higher computational complexity.
[0139] In the approach shown in Figures 13 and 14, a simple fixed length coding (FLC) table (with code lengths that can vary from frame to frame or channel to channel) is used. For many distributions of differential QP MB values, performance of such FLCs can be as good as a variable length coding. Alternatively, an encoder and decoder use variable length codes for differential QP MB values.
[0140] Additionally, the ability to send the number of bits used to signal the differential QP provides an additional degree of flexibility in improving compression efficiency. If the macroblock QPs are very close to the frame QP, this proximity can be exploited by using only 1 or 2 bits to signal the differential QP MBs for the macroblocks that do not use predicted QP. If the macroblock QPs are very different (in terms of having a larger range), more bits are used to signal the differential QP MBs for the macroblocks.
[0141] The number of bits used to signal the differential QP MBs for each color channel can also be different based on the characteristics of the respective macroblock QPs for each channel. For example, if the QP of the U and V channels for all of the macroblocks remains the same, and the luma QP varies spatially for the macroblocks, the tool uses zero bits for signaling the differential QP MB for each of the U and V channels, and 1 or more bits for signaling the differential QP MBs of the Y channel.
D. Signaling Alternative QPs in Second Combined Implementation.
[0142] In the second combined implementation, if QP SKIP is not equal to 1, then QP MB is explicitly signaled using a QP index at the macroblock level. The QP index references a QP in a table of available QPs, which is signaled at frame level. Figure 15 illustrates bit stream syntax and pseudocode for receiving information that specifies the QP values in the table for a frame (or tables for channels), then populating the QP table. Figure 15 shows frame-level syntax elements.
[0143] If QP FRAME UNIFORM is equal to 0 (QP varies spatially across the frame) and QP CHANNEL UNIFORM is not equal to 0 (QP does not vary across the color channels in the frame), the bit stream includes syntax elements specifying the values of a QP table for the frame. NUM_QP_INDEX (3 bits) is a 3-bit value regulating the number of different QPs in the table for the frame. NUM_QP_INDEX has 2^3 = 8 possible values, from 0 to 7. In other examples, NUM_QP_INDEX may be signaled using more or fewer bits.
[0144] The internal variable NUM QP, also regulating the number of different QPs in the table, is equal to NUM QP INDEX + 2, for a range of 2 to 9. The first QP in the QP index table, QP MB TABLE[0], is QP FRAME, the default QP value for the frame. The available QPs are generally ordered from most frequent to least frequent, to facilitate effective variable length coding of QP indices at macroblock level. For example, in the tables shown in Figures 17A to 17F, a single bit is used to signal if QP MB is equal to QP MB TABLE[0].
[0145] The remaining rows of the QP table are filled, from position 1 through position NUM QP - 1, by receiving and decoding a QP value for each position. In Figure 15, the bit stream includes 8 bits to signal the QP value of each position in the table, though in other examples more or fewer bits can be used. In Figure 15, the QP index table is produced with QP FRAME at position 0 in the table and signaled QP values at each of the other positions in the table from 1 to NUM_QP_INDEX + 1.
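Populating the frame-level QP table could be sketched as follows with the assumed bit-reader interface; the 3-bit NUM QP INDEX field and the 8-bit alternate QP fields follow the description above, while the function name is illustrative.

    def read_frame_qp_table(reader, qp_frame):
        # Position 0 holds QP_FRAME; the remaining NUM_QP - 1 positions are
        # read as 8-bit alternate QP values.
        num_qp_index = reader.get_bits(3)     # NUM_QP_INDEX, 0..7
        num_qp = num_qp_index + 2             # NUM_QP, 2..9
        table = [qp_frame]
        for _ in range(num_qp - 1):
            table.append(reader.get_bits(8))
        return table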
[0146] If QP CHANNEL UNIFORM is equal to 0 (QP varies across the color channels in the frame), the bit stream includes syntax elements to populate a QP table for each of the Y, U, and V color channels in the frame. For each channel, the positions of the table are filled with the channel-specific QP and alternate QPs.
[0147] Figure 16 illustrates bit stream syntax and pseudocode for receiving information that indicates QP for a macroblock, then determining the QP, in the second combined implementation. Figure 16 shows macroblock-level syntax elements. QP SKIP is used as in the first combined implementation. Again, if QP SKIP is equal to 1 for a current macroblock, then QP MB = QP PRED for that macroblock. If QP SKIP is not equal to 1, then additional information indicating QP MB is signaled explicitly for the macroblock. In the second combined implementation, however, the tool signals the non-predicted QP with reference to the QP table established at the frame level.
[0148] When QP CHANNEL UNIFORM indicates QP does not vary between channels, NUM QP EFFECTIVE, an internal counter, equals NUM QP - 1 (where NUM QP is set from frame-level information in the bit stream, as in Figure 15). This establishes the count of alternate QP values stored in the QP table for the frame. For example, if NUM QP is equal to 9, then the QP table has 8 alternate QP values: the frame QP value at position 0 and 8 alternate QP values at positions 1-8 in the table. Thus, NUM QP EFFECTIVE is equal to 8. QP ID is a value that is used to locate a QP in the QP table. Initially, QP ID is 0.
[0149] If NUM QP EFFECTIVE is greater than 1, the QP table comprises the default value and at least two alternate values at positions 1 and 2, and a variable length code ("VLC") in the bit stream indicates the QP ID (index of position in the QP table) of the QP to use for the macroblock. Figures 17A-F show several examples of VLC tables that may be used for variable length coding and decoding. For example, Figure 17A shows a VLC table (1700) corresponding to NUM QP EFFECTIVE = 2, wherein the VLC table (1700) comprises a QP ID of 0 corresponding to a VLC of 0. The VLC table (1700) further comprises a QP ID of 1 corresponding to a VLC of 1. Similarly, Figure 17B shows a VLC table (1705) corresponding to NUM QP EFFECTIVE = 3, with VLCs for QP IDs of 0, 1, and 2. Figures 17C-F show VLC tables (1710, 1715, 1720, 1730) corresponding to NUM QP EFFECTIVE = 4, 5, 6, and 7, respectively. Typically, the most common QP ID values in the frame or color channel are positioned near the top of the VLC tables, so that the most common QP IDs are signaled using fewer bits. Alternatively, the encoder and decoder use other VLCs to represent QP IDs. Instead of using different VLC tables for different values of NUM QP EFFECTIVE, the encoder and decoder can use a single table, but using multiple tables typically results in slightly more efficient signaling. (For example, compare VLC lengths for QP_ID==1 in the different VLC tables in Figures 17A and 17B.)
[0150] There is no VLC table for NUM_QP_EFFECTIVE = 1 because, if a QP table has only the QP_FRAME (or channel QP) and one alternate QP, the non-predicted QP can be inferred: QP_PRED for the current macroblock is one of the two QP values in the table, so if the macroblock does not use QP_PRED (i.e., QP_SKIP == 0), the only other option for the macroblock is the other QP in the QP table, and no VLC is included in the bit stream for QP_ID.
[0151] If NUM_QP_EFFECTIVE is greater than 1, the bit stream includes a VLC associated with a QP_ID in one of the VLC tables, where NUM_QP_EFFECTIVE indicates the table to use. For example, if NUM_QP_EFFECTIVE is equal to 4 and the tool decodes the Huffman code 110, then the tool determines the corresponding QP_ID of 2 from the table (1710) shown in Figure 17C. When NUM_QP_EFFECTIVE is equal to 4, the number of alternate QP values in the QP table is 4, and the QP table also includes QP_FRAME. Thus, the QP_IDs in the QP table are 0, 1, 2, 3 and 4. The corresponding VLC table includes only four positions, however, because a position is not needed for the predicted QP, which could have an ID of 0, 1, 2, 3 or 4 in the QP table. This helps reduce the overall bit rate associated with signaling QP_IDs.
[0152] Thus, whether or not NUM_QP_EFFECTIVE is greater than 1, the decoding tool determines the ID of QP_PRED, shown as QP_PRED_ID. The tool then checks whether the signaled QP_ID (or initialized QP_ID) is greater than or equal to QP_PRED_ID. If so, the tool increments QP_ID; if not, the tool leaves QP_ID unchanged. Once the tool has determined the appropriate QP_ID, the tool sets QP_MB to the value in the QP table indicated by QP_ID.
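A compact Python sketch of this macroblock-level selection logic, spanning the flow from QP_SKIP through the QP_ID adjustment in paragraphs [0147]-[0152], is shown below; the function and parameter names are hypothetical, and the worked examples in the next two paragraphs can be checked against it.

```python
def select_mb_qp(qp_table, qp_pred, qp_skip, qp_id_signaled=0):
    """Recover QP_MB for one macroblock from the frame/channel QP table.

    qp_table       -- QP_MB_TABLE (position 0 is QP_FRAME or the channel QP)
    qp_pred        -- spatially predicted QP for the macroblock
    qp_skip        -- 1 if the macroblock uses QP_PRED, else 0
    qp_id_signaled -- QP_ID decoded from the VLC; left at its initial value 0
                      when NUM_QP_EFFECTIVE == 1 and no VLC is present
    """
    if qp_skip == 1:
        return qp_pred

    # QP_PRED is always one of the QPs in the table ([0150]).
    qp_pred_id = qp_table.index(qp_pred)   # QP_PRED_ID
    qp_id = qp_id_signaled
    # Signaled IDs never spend a code on the predicted QP, so any ID at or
    # above QP_PRED_ID is shifted up by one to skip over it.
    if qp_id >= qp_pred_id:
        qp_id += 1
    return qp_table[qp_id]
```

For instance, with a two-entry table whose predicted QP sits at position 0, select_mb_qp(table, table[0], qp_skip=0) returns table[1], matching the inference described above for NUM_QP_EFFECTIVE = 1.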
[0153] For example, if the predicted QP for a current macroblock has a QP_PRED_ID of 1 and NUM_QP_EFFECTIVE is 1, QP_ID retains its initial value of 0 and references the other (non-predicted) QP in the QP table of two available QPs. If the QP_PRED_ID of the predicted QP is instead 0, QP_ID is incremented to 1 and again references the other (non-predicted) QP in the table.
[0154] As another example, let QP_PRED_ID be equal to 2 for a current macroblock. If the tool receives a VLC that indicates a QP_ID of 0 in the table (1715) shown in Figure 17D, then since QP_ID < QP_PRED_ID, the tool looks up the value at position 0 in the QP table. In contrast, if the tool receives a VLC that indicates a QP_ID of 4 in the table (1715) shown in Figure 17D, then since QP_ID >= QP_PRED_ID, the tool increments QP_ID and looks up the value at position 5 in the QP table. By exploiting the fact that signaled QP_ID values need not include QP_PRED_ID as a possible choice, the overall bit rate associated with signaling QP_ID values is reduced.
[0155] If QP_CHANNEL_UNIFORM is equal to 0 (QP varies between channels), then this process is performed for the macroblock in each color channel of the frame for which QP_SKIP is not equal to 1.
[0156] The approach of the second combined implementation is particularly useful when a small set of QP choices spread over a wide range is desired for the macroblocks in the frame or color channel. For example, if certain sections of the frame or color channel are very complex spatially or temporally while other sections are relatively uniform, this scheme may help improve overall compression of the frame of enhancement layer video. This technique also exploits inter-macroblock redundancy within sections, allows the most common macroblock QPs to be signaled using the shortest VLCs, and, in certain cases, improves performance by using a VLC for a lower QP_ID to signal a QP_ID that is actually higher.
VII. Alternatives.
[0157] Although many of the examples presented herein relate to encoding and decoding of enhancement layer video, the techniques and tools described herein for spatial prediction of QPs can be applied to other types of video more generally. Similarly, the techniques and tools described herein for varying QP spatially and/or across channels can be applied to other types of video more generally.
[0158] Many of the examples of QP prediction involve spatial prediction of a single predicted QP for a current unit. Alternatively, an encoder and decoder compute multiple predictors for a current unit, and the bit stream includes information indicating a selection of the predicted QP for the current unit from among the multiple predictors. As another alternative, instead of performing spatial prediction of QPs, the encoder and decoder use temporal prediction from co-located macroblocks in other pictures, or use prediction of QPs of macroblocks in one channel from QPs of co-located macroblocks in another color channel.
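For reference, the following minimal Python sketch shows the baseline spatial prediction rule (as recited in claim 14 below) alongside a temporal alternative of the kind mentioned here; representing missing neighbors or co-located values as None, and the fallback behavior in the temporal case, are assumptions of the sketch.

```python
def predict_qp_spatial(qp_left, qp_above, qp_default):
    """Baseline spatial rule: if the left and above neighbors have the same QP,
    predict that QP; otherwise fall back to the default (picture-level) QP."""
    if qp_left is not None and qp_left == qp_above:
        return qp_left
    return qp_default


def predict_qp_temporal(qp_colocated, qp_default):
    """Alternative: predict from the co-located macroblock in another picture,
    falling back to the default QP when no co-located QP is available."""
    return qp_colocated if qp_colocated is not None else qp_default
```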
[0159] In some cases, certain steps in the above-described techniques can be omitted or repeated. In view of the many possible embodiments to which the principles of the disclosed invention may be applied, it should be recognized that the illustrated embodiments are only preferred examples of the invention and should not be taken as limiting the scope of the invention. Rather, the scope of the invention is defined by the following claims. We therefore claim as our invention all that comes within the scope and spirit of these claims.

Claims

We claim:
1. A computer-readable storage medium storing instructions which, when executed, cause a computer to perform a method comprising: encoding (420) enhancement layer video for a picture organized in plural color channels, including selectively varying quantization spatially and between the plural color channels of the enhancement layer video for the picture; and outputting (430) the encoded enhancement layer video for the picture in a bitstream, including signaling quantization parameter information that indicates plural quantization parameters that at least in part parameterize the varied quantization of the enhancement layer video for the picture.
2. The computer-readable storage medium of claim 1 wherein the method further comprises: during the encoding: determining whether to use spatial quantization variation; and determining whether to use quantization variation between channels; and as part of the outputting, signaling information that indicates an on/off decision for the spatial quantization variation and an on/off decision for the quantization variation between channels.
3. The computer-readable storage medium of claim 2 wherein the method includes, on a picture-by-picture basis for each of plural pictures, repeating the determining whether to use spatial quantization variation, the determining whether to use quantization variation between channels, and the signaling information that indicates the on/off decision for the spatial quantization variation and the on/off decision for the quantization variation between channels.
4. The computer-readable storage medium of claim 2 wherein the signaling information that indicates the on/off decision for the spatial quantization variation is a one-bit flag, and wherein the signaling information that indicates the on/off decision for the quantization variation between channels is a one-bit flag.
5. The computer-readable storage medium of claim 1 wherein the encoding includes performing the quantization on transform coefficients of blocks of macroblocks of the enhancement layer video for the picture.
6. The computer-readable storage medium of claim 1 wherein the signaling the quantization parameter information comprises: signaling picture-level information that indicates one or more picture-level quantization parameters for the enhancement layer video for the picture or respective color channels of the enhancement layer video for the picture; and for each of plural macroblocks of the enhancement layer video for the picture, signaling macroblock-level information that indicates a macroblock-level quantization parameter for the macroblock.
7. The computer-readable storage medium of claim 6 wherein the signaling the quantization parameter information further comprises: signaling additional picture-level information that indicates a resolution of the macroblock-level information, the macroblock-level information representing a quantization parameter differential relative to one of the one or more picture-level quantization parameters.
8. The computer-readable storage medium of claim 6 wherein the signaling the quantization parameter information further comprises: signaling additional picture-level information defining one or more alternative quantization parameters for the enhancement layer video for the picture, the macroblock-level information representing a selection of one of the one or more picture-level quantization parameters or one of the one or more defined alternative quantization parameters.
9. The computer-readable storage medium of claim 1 wherein the encoding includes, for a current macroblock of the enhancement layer video for the picture, predicting a macroblock-level quantization parameter for the current macroblock using one or more macroblock-level quantization parameters for spatially neighboring macroblocks.
10. The computer-readable storage medium of claim 9 wherein the signaling includes, for the current macroblock, signaling macroblock-level information to indicate whether or not the current macroblock uses the predicted macroblock-level quantization parameter.
11. A method comprising: from a bitstream, receiving encoded information for video for a picture, including receiving quantization parameter selection information for a current unit of the video for the picture; and decoding the video for the picture, including, for the current unit: predicting (1010) a quantization parameter for the current unit using one or more quantization parameters for spatially neighboring units of the video for the picture; selecting (1015) between the predicted quantization parameter and another quantization parameter using the quantization parameter selection information; and using the selected quantization parameter in reconstruction of the current unit.
12. The method of claim 11 wherein the encoded information further includes information indicating the other quantization parameter, and wherein the other quantization parameter is signaled differentially relative to the predicted quantization parameter.
13. The method of claim 11 wherein the encoded information further includes information indicating the other quantization parameter, and wherein the other quantization parameter is signaled as a selection among a plurality of pre-determined alternative quantization parameters.
14. The method of claim 11 wherein the spatially neighboring units include a left unit that is to the left of the current unit and an above unit that is above the current unit, and wherein the predicting the quantization parameter uses one or more prediction rules according to which: if the quantization parameter for the left unit equals the quantization parameter for the above unit, the predicted quantization parameter for the current unit equals the quantization parameter of the left unit; and otherwise, the predicted quantization parameter for the current unit equals a default quantization parameter.
15. The method of claim 14, wherein the default quantization parameter is a picture-level quantization parameter for the video for the picture.
16. The method of claim 11 wherein the encoded information further includes resolution information indicating a number of bits used to signal the other quantization parameter differentially relative to the predicted quantization parameter.
17. The method of claim 11 wherein the encoded information further includes: for the video for the picture, alternative quantization parameter information for one or more alternative quantization parameters; and for the current unit, a selection of one of the one or more alternative quantization parameters.
18. A decoder system comprising: a memory for storing digital media data; and a digital media processor programmed to decode the digital media data, wherein the decoding includes: receiving picture-level information indicating a default quantization parameter and one or more alternative quantization parameters for a picture; defining the default quantization parameter and the one or more alternative quantization parameters for the picture; and for each of plural macroblocks of the picture: predicting (1010) a quantization parameter for the macroblock using one or more quantization parameters for spatially neighboring macroblocks; receiving (1005) macroblock-level information representing a quantization parameter selection; based at least in part on the macroblock-level information, selecting (1015) between the predicted quantization parameter and another quantization parameter, wherein selection of the other quantization parameter includes selecting between the default quantization parameter and the one or more defined alternative quantization parameters using the macroblock-level information; and using the selected quantization parameter in reconstruction of the current macroblock.
19. The decoder system of claim 18 wherein the macroblock-level information includes a bit indicating a decision whether or not to use the predicted quantization parameter and, if not, further includes a selection of the other quantization parameter.
20. The decoder system of claim 18 wherein the decoding further includes receiving picture-level information indicating a count of available quantization parameters for the picture, wherein interpretation of the macroblock-level information depends at least in part on the count of the available quantization parameters.
PCT/US2009/045659 2008-06-03 2009-05-29 Adaptive quantization for enhancement layer video coding WO2009158113A2 (en)

Priority Applications (10)

Application Number Priority Date Filing Date Title
JP2011512545A JP5706318B2 (en) 2008-06-03 2009-05-29 Adaptive quantization for enhancement layer video coding.
MX2014002291A MX343458B (en) 2008-06-03 2009-05-29 Adaptive quantization for enhancement layer video coding.
CN2009801213483A CN102057677B (en) 2008-06-03 2009-05-29 Adaptive quantization for enhancement layer video coding
MX2016014505A MX356897B (en) 2008-06-03 2009-05-29 Adaptive quantization for enhancement layer video coding.
KR1020107027143A KR101780505B1 (en) 2008-06-03 2009-05-29 Adaptive quantization for enhancement layer video coding
EP18187252.4A EP3416382A1 (en) 2008-06-03 2009-05-29 Adaptive quantization for enhancement layer video coding
EP09770648.5A EP2283655B1 (en) 2008-06-03 2009-05-29 Adaptive quantization for enhancement layer video coding
KR1020167007437A KR101745845B1 (en) 2008-06-03 2009-05-29 Adaptive quantization for enhancement layer video coding
MX2010012818A MX2010012818A (en) 2008-06-03 2009-05-29 Adaptive quantization for enhancement layer video coding.
HK11109267.4A HK1155303A1 (en) 2008-06-03 2011-09-01 Adaptive quantization for enhancement layer video coding

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/156,864 US8897359B2 (en) 2008-06-03 2008-06-03 Adaptive quantization for enhancement layer video coding
US12/156,864 2008-06-03

Related Child Applications (1)

Application Number Title Priority Date Filing Date
EP14000799.8A Previously-Filed-Application EP2770741A1 (en) 2008-06-03 2009-05-29 Adaptive quantization for enhancement layer video coding

Publications (2)

Publication Number Publication Date
WO2009158113A2 true WO2009158113A2 (en) 2009-12-30
WO2009158113A3 WO2009158113A3 (en) 2010-03-04

Family

ID=41379777

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/045659 WO2009158113A2 (en) 2008-06-03 2009-05-29 Adaptive quantization for enhancement layer video coding

Country Status (8)

Country Link
US (12) US8897359B2 (en)
EP (3) EP2283655B1 (en)
JP (2) JP5706318B2 (en)
KR (2) KR101745845B1 (en)
CN (2) CN103428497B (en)
HK (1) HK1155303A1 (en)
MX (3) MX356897B (en)
WO (1) WO2009158113A2 (en)

JP3825615B2 (en) 2000-08-11 2006-09-27 株式会社東芝 Moving picture coding apparatus, moving picture coding method, and medium recording program
JP3561485B2 (en) * 2000-08-18 2004-09-02 株式会社メディアグルー Coded signal separation/synthesis device, difference coded signal generation device, coded signal separation/synthesis method, difference coded signal generation method, medium recording a coded signal separation/synthesis program, and medium recording a difference coded signal generation program
US6678422B1 (en) 2000-08-30 2004-01-13 National Semiconductor Corporation Method and apparatus for image data compression with low memory requirement
US6834080B1 (en) 2000-09-05 2004-12-21 Kabushiki Kaisha Toshiba Video encoding method and video encoding apparatus
US6748020B1 (en) 2000-10-25 2004-06-08 General Instrument Corporation Transcoder-multiplexer (transmux) software architecture
KR100355829B1 (en) 2000-12-13 2002-10-19 엘지전자 주식회사 Dpcm image coder using self-correlated prediction
US7058127B2 (en) 2000-12-27 2006-06-06 International Business Machines Corporation Method and system for video transcoding
WO2002054777A1 (en) 2000-12-28 2002-07-11 Koninklijke Philips Electronics N.V. Mpeg-2 down-sampled video generation
US7072525B1 (en) 2001-02-16 2006-07-04 Yesvideo, Inc. Adaptive filtering of visual image using auxiliary image information
US6757429B2 (en) 2001-02-21 2004-06-29 Boly Media Communications Inc. Method of compressing digital images
US8374237B2 (en) 2001-03-02 2013-02-12 Dolby Laboratories Licensing Corporation High precision encoding and decoding of video images
FR2822284B1 (en) 2001-03-13 2004-01-02 Thomson Multimedia Sa Method for displaying video images on a plasma display panel and corresponding plasma display panels
US6832005B2 (en) 2001-03-23 2004-12-14 Microsoft Corporation Adaptive encoding and decoding of bi-level images
US6831947B2 (en) 2001-03-23 2004-12-14 Sharp Laboratories Of America, Inc. Adaptive quantization based on bit rate prediction and prediction error energy
WO2002080575A1 (en) 2001-03-29 2002-10-10 Sony Corporation Image processing apparatus, image processing method, image processing program, and recording medium
US6687294B2 (en) 2001-04-27 2004-02-03 Koninklijke Philips Electronics N.V. Distortion quantizer model for video encoding
US7206453B2 (en) 2001-05-03 2007-04-17 Microsoft Corporation Dynamic filtering for lossy compression
US6882753B2 (en) 2001-06-04 2005-04-19 Silicon Integrated Systems Corp. Adaptive quantization using code length in image compression
US6704718B2 (en) 2001-06-05 2004-03-09 Microsoft Corporation System and method for trainable nonlinear prediction of transform coefficients in data compression
US6909745B1 (en) 2001-06-05 2005-06-21 At&T Corp. Content adaptive video encoder
US20030189980A1 (en) 2001-07-02 2003-10-09 Moonlight Cordless Ltd. Method and apparatus for motion estimation between video frames
US6975680B2 (en) 2001-07-12 2005-12-13 Dolby Laboratories, Inc. Macroblock mode decision biasing for video compression systems
US20030112863A1 (en) 2001-07-12 2003-06-19 Demos Gary A. Method and system for improving compressed image chroma information
US7042941B1 (en) 2001-07-17 2006-05-09 Vixs, Inc. Method and apparatus for controlling amount of quantization processing in an encoder
US7801215B2 (en) 2001-07-24 2010-09-21 Sasken Communication Technologies Limited Motion estimation technique for digital video encoding applications
US7079692B2 (en) 2001-07-24 2006-07-18 Koninklijke Philips Electronics N.V. Reduced complexity video decoding by reducing the IDCT computation in B-frames
US6987889B1 (en) 2001-08-10 2006-01-17 Polycom, Inc. System and method for dynamic perceptual coding of macroblocks in a video frame
US7110455B2 (en) 2001-08-14 2006-09-19 General Instrument Corporation Noise reduction pre-processor for digital video using previously generated motion vectors and adaptive spatial filtering
JP4392782B2 (en) 2001-08-21 2010-01-06 Kddi株式会社 Quantization control method in low-rate video coding
US6891889B2 (en) * 2001-09-05 2005-05-10 Intel Corporation Signal to noise ratio optimization for video compression bit-rate control
US7440504B2 (en) 2001-09-24 2008-10-21 Broadcom Corporation Method and apparatus for performing deblocking filtering with interlace capability
US6977659B2 (en) 2001-10-11 2005-12-20 At & T Corp. Texture replacement in video sequences and images
US6992725B2 (en) 2001-10-22 2006-01-31 Nec Electronics America, Inc. Video data de-interlacing using perceptually-tuned interpolation scheme
US7107584B2 (en) 2001-10-23 2006-09-12 Microsoft Corporation Data alignment between native and non-native shared data structures
US6810083B2 (en) 2001-11-16 2004-10-26 Koninklijke Philips Electronics N.V. Method and system for estimating objective quality of compressed video data
KR100643453B1 (en) 2001-11-17 2006-11-10 엘지전자 주식회사 Bit rate control based on object
US6993200B2 (en) 2001-11-20 2006-01-31 Sony Corporation System and method for effectively rendering high dynamic range images
CA2435757C (en) 2001-11-29 2013-03-19 Matsushita Electric Industrial Co., Ltd. Video coding distortion removal method and apparatus using a filter
US7295609B2 (en) 2001-11-30 2007-11-13 Sony Corporation Method and apparatus for coding image information, method and apparatus for decoding image information, method and apparatus for coding and decoding image information, and system of coding and transmitting image information
JP4254147B2 (en) 2001-11-30 2009-04-15 ソニー株式会社 Image information encoding method and apparatus, program, and recording medium
CN101448162B (en) 2001-12-17 2013-01-02 微软公司 Method for processing video image
WO2003053066A1 (en) 2001-12-17 2003-06-26 Microsoft Corporation Skip macroblock coding
US6763068B2 (en) 2001-12-28 2004-07-13 Nokia Corporation Method and apparatus for selecting macroblock quantization parameters in a video encoder
WO2003056839A1 (en) 2001-12-31 2003-07-10 Stmicroelectronics Asia Pacific Pte Ltd Video encoding
US6985529B1 (en) 2002-01-07 2006-01-10 Apple Computer, Inc. Generation and use of masks in MPEG video encoding to indicate non-zero entries in transformed macroblocks
US20030128754A1 (en) 2002-01-09 2003-07-10 Hiroshi Akimoto Motion estimation method for control on the basis of scene analysis in video compression systems
US6647152B2 (en) 2002-01-25 2003-11-11 Thomson Licensing S.A. Method and system for contouring reduction
US20050105889A1 (en) 2002-03-22 2005-05-19 Conklin Gregory J. Video picture compression artifacts reduction via filtering and dithering
US7430303B2 (en) 2002-03-29 2008-09-30 Lockheed Martin Corporation Target detection method and system
US7116831B2 (en) 2002-04-10 2006-10-03 Microsoft Corporation Chrominance motion vector rounding
RU2322770C2 (en) 2002-04-23 2008-04-20 Нокиа Корпорейшн Method and device for indication of quantizer parameters in video encoding system
WO2003091850A2 (en) 2002-04-26 2003-11-06 The Trustees Of Columbia University In The City Of New York Method and system for optimal video transcoding based on utility function descriptors
US7242713B2 (en) 2002-05-02 2007-07-10 Microsoft Corporation 2-D transforms for image and video coding
US7609767B2 (en) 2002-05-03 2009-10-27 Microsoft Corporation Signaling for fading compensation
US20030215011A1 (en) 2002-05-17 2003-11-20 General Instrument Corporation Method and apparatus for transcoding compressed video bitstreams
US7145948B2 (en) 2002-05-29 2006-12-05 Koninklijke Philips Electronics N.V. Entropy constrained scalar quantizer for a Laplace-Markov source
JP2004023288A (en) 2002-06-13 2004-01-22 Kddi R & D Laboratories Inc Preprocessing system for moving image encoding
US6961376B2 (en) 2002-06-25 2005-11-01 General Instrument Corporation Methods and apparatus for rate control during dual pass encoding
US7280700B2 (en) 2002-07-05 2007-10-09 Microsoft Corporation Optimization techniques for data compression
US7599579B2 (en) 2002-07-11 2009-10-06 Ge Medical Systems Global Technology Company, Llc Interpolated image filtering method and apparatus
JP2004056249A (en) 2002-07-17 2004-02-19 Sony Corp Coding apparatus and method, decoding apparatus and method, recording medium, and program
US6947045B1 (en) 2002-07-19 2005-09-20 At&T Corporation Coding of animated 3-D wireframe models for internet streaming applications: methods, systems and program products
US6975773B1 (en) 2002-07-30 2005-12-13 Qualcomm, Incorporated Parameter selection in data compression and decompression
US6891548B2 (en) 2002-08-23 2005-05-10 Hewlett-Packard Development Company, L.P. System and method for calculating a texture-mapping gradient
US20060256867A1 (en) 2002-09-06 2006-11-16 Turaga Deepak S Content-adaptive multiple description motion compensation for improved efficiency and error resilience
US6795584B2 (en) 2002-10-03 2004-09-21 Nokia Corporation Context-based adaptive variable length coding for adaptive block transforms
US6807317B2 (en) 2002-10-25 2004-10-19 Motorola, Inc. Method and decoder system for reducing quantization effects of a decoded image
US7139437B2 (en) 2002-11-12 2006-11-21 Eastman Kodak Company Method and system for removing artifacts in compressed images
GB0228556D0 (en) 2002-12-06 2003-01-15 British Telecomm Video quality measurement
US8054880B2 (en) 2004-12-10 2011-11-08 Tut Systems, Inc. Parallel rate control for digital video encoder with multi-processor architecture and picture-based look-ahead window
US7099389B1 (en) 2002-12-10 2006-08-29 Tut Systems, Inc. Rate control with picture-based lookahead window
JP4214771B2 (en) 2002-12-12 2009-01-28 ソニー株式会社 Image processing apparatus and method and encoding apparatus
KR20040058929A (en) 2002-12-27 2004-07-05 삼성전자주식회사 Advanced method for encoding video based on DCT and apparatus thereof
KR100584552B1 (en) 2003-01-14 2006-05-30 삼성전자주식회사 Method for encoding and decoding video and apparatus thereof
US7212571B2 (en) 2003-01-31 2007-05-01 Seiko Epson Corporation Method and apparatus for DCT domain filtering for block based encoding
EP1445958A1 (en) 2003-02-05 2004-08-11 STMicroelectronics S.r.l. Quantization method and system, for instance for video MPEG applications, and computer program product therefor
KR100539923B1 (en) 2003-02-10 2005-12-28 삼성전자주식회사 A video encoder capable of encoding differentially by distinguishing a user's image, and a method for compressing a video signal using the same
JP3984178B2 (en) 2003-02-13 2007-10-03 日本電信電話株式会社 Video encoding method, video encoding device, video encoding program, and recording medium containing the program
US7227587B2 (en) 2003-03-05 2007-06-05 Broadcom Corporation System and method for three dimensional comb filtering
KR100977713B1 (en) 2003-03-15 2010-08-24 삼성전자주식회사 Device and method for pre-processing in order to recognize characters in images
SG140441A1 (en) 2003-03-17 2008-03-28 St Microelectronics Asia Decoder and method of decoding using pseudo two pass decoding and one pass encoding
KR20060105409A (en) 2005-04-01 2006-10-11 엘지전자 주식회사 Method for scalably encoding and decoding video signal
EP1465349A1 (en) 2003-03-31 2004-10-06 Interuniversitair Microelektronica Centrum Vzw Embedded multiple description scalar quantizers for progressive image transmission
CA2427894C (en) * 2003-05-05 2010-08-17 Outokumpu, Oyj Aluminium ingot casting machine
CN1784904A (en) 2003-05-06 2006-06-07 皇家飞利浦电子股份有限公司 Encoding of video information using block based adaptive scan order
GB2401502B (en) 2003-05-07 2007-02-14 British Broadcasting Corp Data processing
WO2005004335A2 (en) 2003-06-25 2005-01-13 Georgia Tech Research Corporation Cauchy-distribution based coding system and method
US7512180B2 (en) 2003-06-25 2009-03-31 Microsoft Corporation Hierarchical data compression system and method for coding video data
US7200277B2 (en) 2003-07-01 2007-04-03 Eastman Kodak Company Method for transcoding a JPEG2000 compressed image
US7194031B2 (en) 2003-07-09 2007-03-20 Silicon Integrated Systems Corp. Rate control method with region of interest support
US7738554B2 (en) 2003-07-18 2010-06-15 Microsoft Corporation DC coefficient signaling at small quantization step sizes
US7343291B2 (en) 2003-07-18 2008-03-11 Microsoft Corporation Multi-pass variable bitrate media encoding
US7580584B2 (en) 2003-07-18 2009-08-25 Microsoft Corporation Adaptive multiple quantization
JP4388771B2 (en) 2003-07-18 2009-12-24 三菱電機株式会社 Moving picture decoding apparatus and moving picture decoding method
US20050013498A1 (en) 2003-07-18 2005-01-20 Microsoft Corporation Coding of motion vector information
US8218624B2 (en) 2003-07-18 2012-07-10 Microsoft Corporation Fractional quantization step sizes for high bit rates
US7609763B2 (en) 2003-07-18 2009-10-27 Microsoft Corporation Advanced bi-directional predictive coding of video frames
US7426308B2 (en) 2003-07-18 2008-09-16 Microsoft Corporation Intraframe and interframe interlace coding and decoding
US7602851B2 (en) 2003-07-18 2009-10-13 Microsoft Corporation Intelligent differential quantization of video coding
US7383180B2 (en) 2003-07-18 2008-06-03 Microsoft Corporation Constant bitrate media encoding techniques
KR100520298B1 (en) 2003-07-26 2005-10-13 삼성전자주식회사 Method of dithering and apparatus of the same
US20050024487A1 (en) 2003-07-31 2005-02-03 William Chen Video codec system with real-time complexity adaptation and region-of-interest coding
US7158668B2 (en) 2003-08-01 2007-01-02 Microsoft Corporation Image processing using linear light values and other image processing improvements
KR100505699B1 (en) 2003-08-12 2005-08-03 삼성전자주식회사 Encoding rate controller of video encoder providing for qualitative display using real time variable bit-rate control, video data transmission system having it and method thereof
TWI232681B (en) 2003-08-27 2005-05-11 Mediatek Inc Method of transforming one video output format into another video output format without degrading display quality
US7924921B2 (en) 2003-09-07 2011-04-12 Microsoft Corporation Signaling coding and display options in entry point headers
US7724827B2 (en) 2003-09-07 2010-05-25 Microsoft Corporation Multi-layer run level encoding and decoding
JP5280003B2 (en) 2003-09-07 2013-09-04 マイクロソフト コーポレーション Slice layer in video codec
US7609762B2 (en) 2003-09-07 2009-10-27 Microsoft Corporation Signaling for entry point frames with predicted first field
WO2005036886A1 (en) 2003-10-13 2005-04-21 Koninklijke Philips Electronics N.V. Two-pass video encoding
US20050084013A1 (en) 2003-10-15 2005-04-21 Limin Wang Frequency coefficient scanning paths
US20050105612A1 (en) 2003-11-14 2005-05-19 Sung Chih-Ta S. Digital video stream decoding method and apparatus
US8223844B2 (en) 2003-11-14 2012-07-17 Intel Corporation High frequency emphasis in decoding of encoded signals
JP4063205B2 (en) 2003-11-20 2008-03-19 セイコーエプソン株式会社 Image data compression apparatus and encoder
EP1536647A1 (en) 2003-11-26 2005-06-01 STMicroelectronics Limited A video decoding device
CN100342728C (en) * 2003-11-28 2007-10-10 联发科技股份有限公司 Method for controlling quantization degree of video signal coding bit-stream and related device
EP1692872A1 (en) 2003-12-03 2006-08-23 Koninklijke Philips Electronics N.V. System and method for improved scalability support in mpeg-2 systems
KR20050061762A (en) 2003-12-18 2005-06-23 학교법인 대양학원 Method of encoding mode determination and motion estimation, and encoding apparatus
US7391809B2 (en) 2003-12-30 2008-06-24 Microsoft Corporation Scalable video transcoding
US7471845B2 (en) 2004-01-06 2008-12-30 Sharp Laboratories Of America, Inc. De-ringing filter
WO2005065030A2 (en) 2004-01-08 2005-07-21 Videocodes, Inc. Video compression device and a method for compressing video
KR100556340B1 (en) 2004-01-13 2006-03-03 (주)씨앤에스 테크놀로지 Image Coding System
EP1665133A4 (en) 2004-01-20 2009-05-13 Panasonic Corp Picture coding method, picture decoding method, picture coding apparatus, picture decoding apparatus, and program thereof
US20050190836A1 (en) 2004-01-30 2005-09-01 Jiuhuai Lu Process for maximizing the effectiveness of quantization matrices in video codec systems
PL1709801T3 (en) 2004-01-30 2013-02-28 Panasonic Ip Corp America Video decoding method using adaptive quantization matrices
US7492820B2 (en) 2004-02-06 2009-02-17 Apple Inc. Rate control for video coder employing adaptive linear regression bits modeling
EP1564997A1 (en) 2004-02-12 2005-08-17 Matsushita Electric Industrial Co., Ltd. Encoding and decoding of video images based on a quantization with an adaptive dead-zone size
EP1718080A4 (en) 2004-02-20 2011-01-12 Nec Corp Image encoding method, device thereof, and control program thereof
JP4273996B2 (en) 2004-02-23 2009-06-03 ソニー株式会社 Image encoding apparatus and method, and image decoding apparatus and method
JP2005260467A (en) 2004-03-10 2005-09-22 Konica Minolta Opto Inc Image processor
US8503542B2 (en) 2004-03-18 2013-08-06 Sony Corporation Methods and apparatus to reduce blocking noise and contouring effect in motion compensated compressed video
US7689051B2 (en) 2004-04-15 2010-03-30 Microsoft Corporation Predictive lossless coding of images and video
JP4476104B2 (en) 2004-04-22 2010-06-09 三洋電機株式会社 Coding circuit
US7801383B2 (en) 2004-05-15 2010-09-21 Microsoft Corporation Embedded scalar quantizers with arbitrary dead-zone ratios
US20050259730A1 (en) * 2004-05-18 2005-11-24 Sharp Laboratories Of America, Inc. Video coding with residual color conversion using reversible YCoCg
US20050259729A1 (en) 2004-05-21 2005-11-24 Shijun Sun Video coding with quality scalability
US20050276493A1 (en) 2004-06-01 2005-12-15 Jun Xin Selecting macroblock coding modes for video encoding
US20070230565A1 (en) 2004-06-18 2007-10-04 Tourapis Alexandros M Method and Apparatus for Video Encoding Optimization
CN102595131B (en) 2004-06-18 2015-02-04 汤姆逊许可公司 Encoder for encoding video signal data for an image block
CN100588257C (en) 2004-06-23 2010-02-03 新加坡科技研究局 Scalable video coding with grid motion estimation and compensation
JP4594688B2 (en) 2004-06-29 2010-12-08 オリンパス株式会社 Image encoding processing method, image decoding processing method, moving image compression processing method, moving image expansion processing method, image encoding processing program, image encoding device, image decoding device, image encoding/decoding system, and extended image compression/decompression processing system
FR2872973A1 (en) 2004-07-06 2006-01-13 Thomson Licensing Sa Method or device for coding a sequence of source images
US7606427B2 (en) 2004-07-08 2009-10-20 Qualcomm Incorporated Efficient rate control techniques for video encoding
KR100678949B1 (en) 2004-07-15 2007-02-06 삼성전자주식회사 Method for video coding and decoding, video encoder and decoder
RU2377737C2 (en) 2004-07-20 2009-12-27 Квэлкомм Инкорпорейтед Method and apparatus for encoder assisted frame rate up conversion (ea-fruc) for video compression
US7474316B2 (en) 2004-08-17 2009-01-06 Sharp Laboratories Of America, Inc. Bit-depth extension of digital displays via the use of models of the impulse response of the visual system
US20060056508A1 (en) 2004-09-03 2006-03-16 Phillippe Lafon Video coding rate control
WO2006031737A2 (en) 2004-09-14 2006-03-23 Gary Demos High quality wide-range multi-layer compression coding system
DE102004059993B4 (en) 2004-10-15 2006-08-31 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for generating a coded video sequence using interlayer motion data prediction, and computer program and computer readable medium
KR100679022B1 (en) 2004-10-18 2007-02-05 삼성전자주식회사 Video coding and decoding method using inter-layer filtering, video encoder and decoder
US20060098733A1 (en) 2004-11-08 2006-05-11 Kabushiki Kaisha Toshiba Variable-length coding device and method of the same
US20060104350A1 (en) 2004-11-12 2006-05-18 Sam Liu Multimedia encoder
JP2006140758A (en) 2004-11-12 2006-06-01 Toshiba Corp Method, apparatus and program for encoding moving image
CN101069432B (en) 2004-12-02 2015-10-21 汤姆逊许可公司 Method and apparatus for determining quantization parameters for video encoder rate control
US7620103B2 (en) 2004-12-10 2009-11-17 Lsi Corporation Programmable quantization dead zone and threshold for standard-based H.264 and/or VC1 video encoding
US8031768B2 (en) 2004-12-15 2011-10-04 Maxim Integrated Products, Inc. System and method for performing optimized quantization via quantization re-scaling
US7136536B2 (en) 2004-12-22 2006-11-14 Telefonaktiebolaget L M Ericsson (Publ) Adaptive filter
EP1675402A1 (en) 2004-12-22 2006-06-28 Thomson Licensing Optimisation of a quantisation matrix for image and video coding
US7653129B2 (en) 2004-12-28 2010-01-26 General Instrument Corporation Method and apparatus for providing intra coding frame bit budget
US8325799B2 (en) 2004-12-28 2012-12-04 Nec Corporation Moving picture encoding method, device using the same, and computer program
US20080187042A1 (en) 2005-01-07 2008-08-07 Koninklijke Philips Electronics, N.V. Method of Processing a Video Signal Using Quantization Step Sizes Dynamically Based on Normal Flow
EP1839265A4 (en) 2005-01-14 2012-10-17 Iucf Hyu Method of encoding and decoding texture coordinates in three-dimensional mesh information for effective texture mapping
CN101111864A (en) 2005-01-31 2008-01-23 皇家飞利浦电子股份有限公司 Pyramidal decomposition for multi-resolution image filtering
US20060188014A1 (en) 2005-02-23 2006-08-24 Civanlar M R Video coding and adaptation by semantics-driven resolution control for transport and storage
US7724972B2 (en) 2005-03-01 2010-05-25 Qualcomm Incorporated Quality metric-biased region-of-interest coding for video telephony
KR100763178B1 (en) 2005-03-04 2007-10-04 삼성전자주식회사 Method for color space scalable video coding and decoding, and apparatus for the same
KR100728222B1 (en) 2005-03-25 2007-06-13 한국전자통신연구원 Hierarchical video encoding/decoding method for complete spatial scalability and apparatus thereof
EP1878247A4 (en) 2005-04-01 2012-11-21 Lg Electronics Inc Method for scalably encoding and decoding video signal
US8325797B2 (en) 2005-04-11 2012-12-04 Maxim Integrated Products, Inc. System and method of reduced-temporal-resolution update for video coding and quality control
US7876833B2 (en) 2005-04-11 2011-01-25 Sharp Laboratories Of America, Inc. Method and apparatus for adaptive up-scaling for spatially scalable coding
CN101120593A (en) 2005-04-13 2008-02-06 诺基亚公司 Coding, storage and signalling of scalability information
KR100746007B1 (en) 2005-04-19 2007-08-06 삼성전자주식회사 Method and apparatus for adaptively selecting context model of entropy coding
KR100763181B1 (en) * 2005-04-19 2007-10-05 삼성전자주식회사 Method and apparatus for improving coding rate by coding prediction information from base layer and enhancement layer
US7620252B2 (en) 2005-04-22 2009-11-17 Hewlett-Packard Development Company, L.P. System and method for compressing an image
US7657098B2 (en) 2005-05-02 2010-02-02 Samsung Electronics Co., Ltd. Method and apparatus for reducing mosquito noise in decoded video sequence
US7684632B2 (en) 2005-05-16 2010-03-23 Hewlett-Packard Development Company, L.P. Estimating image compression quantization parameter values
US8422546B2 (en) 2005-05-25 2013-04-16 Microsoft Corporation Adaptive video encoding using a perceptual model
DE102005025629A1 (en) 2005-06-03 2007-03-22 Micronas Gmbh Image processing method for reducing blocking artifacts
JP5404038B2 (en) 2005-07-01 2014-01-29 ソニック ソリューションズ リミテッド ライアビリティー カンパニー Method, apparatus and system used for multimedia signal encoding
KR100667806B1 (en) 2005-07-07 2007-01-12 삼성전자주식회사 Method and apparatus for video encoding and decoding
US20070009042A1 (en) 2005-07-08 2007-01-11 Robert Craig Video game system using pre-encoded macro-blocks in an I-frame
WO2007008286A1 (en) 2005-07-11 2007-01-18 Thomson Licensing Method and apparatus for macroblock adaptive inter-layer intra texture prediction
US20070147497A1 (en) 2005-07-21 2007-06-28 Nokia Corporation System and method for progressive quantization for scalable image and video coding
MX2008000906A (en) 2005-07-21 2008-03-18 Thomson Licensing Method and apparatus for weighted prediction for scalable video coding.
EP1746839A1 (en) 2005-07-22 2007-01-24 Thomson Licensing Method and apparatus for encoding video data
US20070025441A1 (en) * 2005-07-28 2007-02-01 Nokia Corporation Method, module, device and system for rate control provision for video encoders capable of variable bit rate encoding
US8069466B2 (en) 2005-08-04 2011-11-29 Nds Limited Advanced digital TV system
US7933337B2 (en) * 2005-08-12 2011-04-26 Microsoft Corporation Prediction of transform coefficients for image compression
US20070053603A1 (en) 2005-09-08 2007-03-08 Monro Donald M Low complexity bases matching pursuits data coding and decoding
US8879635B2 (en) 2005-09-27 2014-11-04 Qualcomm Incorporated Methods and device for data alignment with time domain boundary
WO2007044556A2 (en) 2005-10-07 2007-04-19 Innovation Management Sciences, L.L.C. Method and apparatus for scalable video decoder using an enhancement stream
EP1775958A1 (en) 2005-10-14 2007-04-18 Thomson Licensing Method and apparatus for reconstructing the texture of a spatial enhancement-layer video picture
US7778476B2 (en) 2005-10-21 2010-08-17 Maxim Integrated Products, Inc. System and method for transform coding randomization
US8023569B2 (en) 2005-12-15 2011-09-20 Sharp Laboratories Of America, Inc. Methods and systems for block-based residual upsampling
US7889790B2 (en) 2005-12-20 2011-02-15 Sharp Laboratories Of America, Inc. Method and apparatus for dynamically adjusting quantization offset values
KR100867995B1 (en) 2006-01-07 2008-11-10 한국전자통신연구원 Method and apparatus for video data encoding and decoding
SI2192783T1 (en) 2006-01-09 2015-09-30 Matthias Narroschke Adaptive coding of the prediction error in hybrid video coding
JP4795223B2 (en) * 2006-01-31 2011-10-19 キヤノン株式会社 Image processing device
WO2007094100A1 (en) 2006-02-13 2007-08-23 Kabushiki Kaisha Toshiba Moving image encoding/decoding method and device and program
JP4529919B2 (en) 2006-02-28 2010-08-25 日本ビクター株式会社 Adaptive quantization apparatus and adaptive quantization program
US8428136B2 (en) 2006-03-09 2013-04-23 Nec Corporation Dynamic image encoding method and device and program using the same
EP1995967A4 (en) 2006-03-16 2009-11-11 Huawei Tech Co Ltd Method and apparatus for realizing adaptive quantization in encoding process
US8848789B2 (en) * 2006-03-27 2014-09-30 Qualcomm Incorporated Method and system for coding and decoding information associated with video compression
JP2007281949A (en) 2006-04-07 2007-10-25 Matsushita Electric Ind Co Ltd Image encoding device, image encoding/decoding system, image encoding method, and image encoding/decoding method
US8059721B2 (en) 2006-04-07 2011-11-15 Microsoft Corporation Estimating sample-domain distortion in the transform domain with rounding compensation
US20070237237A1 (en) 2006-04-07 2007-10-11 Microsoft Corporation Gradient slope detection for video compression
US7995649B2 (en) 2006-04-07 2011-08-09 Microsoft Corporation Quantization adjustment based on texture level
US8503536B2 (en) 2006-04-07 2013-08-06 Microsoft Corporation Quantization adjustments for DC shift artifacts
US7974340B2 (en) 2006-04-07 2011-07-05 Microsoft Corporation Adaptive B-picture quantization control
US8130828B2 (en) 2006-04-07 2012-03-06 Microsoft Corporation Adjusting quantization to preserve non-zero AC coefficients
JP4062711B2 (en) 2006-04-17 2008-03-19 俊宏 南 Video encoding device
US8711925B2 (en) * 2006-05-05 2014-04-29 Microsoft Corporation Flexible quantization
US20070268964A1 (en) 2006-05-22 2007-11-22 Microsoft Corporation Unit co-location-based motion estimation
EP1871113A1 (en) 2006-06-20 2007-12-26 THOMSON Licensing Method and apparatus for encoding video enhancement layer with multiresolution color scalability
JP4908943B2 (en) 2006-06-23 2012-04-04 キヤノン株式会社 Image coding apparatus and image coding method
US8120660B2 (en) 2006-07-10 2012-02-21 Freescale Semiconductor, Inc. Image data up sampling
US7885471B2 (en) 2006-07-10 2011-02-08 Sharp Laboratories Of America, Inc. Methods and systems for maintenance and use of coded block pattern information
US7840078B2 (en) 2006-07-10 2010-11-23 Sharp Laboratories Of America, Inc. Methods and systems for image processing control based on adjacent block characteristics
US8253752B2 (en) 2006-07-20 2012-08-28 Qualcomm Incorporated Method and apparatus for encoder assisted pre-processing
US8773494B2 (en) 2006-08-29 2014-07-08 Microsoft Corporation Techniques for managing visual compositions for a multimedia conference call
JP2008099045A (en) 2006-10-13 2008-04-24 Nippon Telegraph & Telephone Corp (NTT) Scalable encoding method, decoding method, device therefor, program therefor, and recording medium therefor
US9014280B2 (en) 2006-10-13 2015-04-21 Qualcomm Incorporated Video coding with adaptive filtering for motion compensated prediction
US20080095235A1 (en) 2006-10-20 2008-04-24 Motorola, Inc. Method and apparatus for intra-frame spatial scalable video coding
JP4575344B2 (en) 2006-10-24 2010-11-04 アップル インコーポレイテッド Video coding system with multiple independent coding chains for dynamic and selective playback in reduced or full size
US7885476B2 (en) 2006-12-14 2011-02-08 Sony Corporation System and method for effectively performing an adaptive encoding procedure
US8199812B2 (en) 2007-01-09 2012-06-12 Qualcomm Incorporated Adaptive upsampling for scalable video coding
US8238424B2 (en) 2007-02-09 2012-08-07 Microsoft Corporation Complexity-based adaptive preprocessing for multiple-pass video compression
US20080240257A1 (en) 2007-03-26 2008-10-02 Microsoft Corporation Using quantization bias that accounts for relations between transform bins and quantization bins
US8498335B2 (en) 2007-03-26 2013-07-30 Microsoft Corporation Adaptive deadzone size adjustment in quantization
US8204129B2 (en) * 2007-03-27 2012-06-19 Freescale Semiconductor, Inc. Simplified deblock filtering for reduced memory access and computational complexity
US8243797B2 (en) 2007-03-30 2012-08-14 Microsoft Corporation Regions of interest for quality adjustments
US8442337B2 (en) 2007-04-18 2013-05-14 Microsoft Corporation Encoding adjustments for animation content
TW200845723A (en) * 2007-04-23 2008-11-16 Thomson Licensing Method and apparatus for encoding video data, method and apparatus for decoding encoded video data and encoded video signal
US8331438B2 (en) 2007-06-05 2012-12-11 Microsoft Corporation Adaptive selection of picture-level quantization parameters for predicted video pictures
US7983496B2 (en) 2007-06-26 2011-07-19 Mitsubishi Electric Research Laboratories, Inc. Inverse tone mapping for bit-depth scalable image coding adapted to variable block sizes
US20090161756A1 (en) 2007-12-19 2009-06-25 Micron Technology, Inc. Method and apparatus for motion adaptive pre-filtering
US8160132B2 (en) 2008-02-15 2012-04-17 Microsoft Corporation Reducing key picture popping effects in video
US8542730B2 (en) * 2008-02-22 2013-09-24 Qualcomm, Incorporated Fast macroblock delta QP decision
US8953673B2 (en) 2008-02-29 2015-02-10 Microsoft Corporation Scalable video coding and decoding with sample bit depth and chroma high-pass residual layers
US8189933B2 (en) 2008-03-31 2012-05-29 Microsoft Corporation Classifying and controlling encoding quality for textured, dark smooth and smooth video content
US9338475B2 (en) 2008-04-16 2016-05-10 Intel Corporation Tone mapping for bit-depth scalable video codec
US8897359B2 (en) 2008-06-03 2014-11-25 Microsoft Corporation Adaptive quantization for enhancement layer video coding
CN101779469A (en) * 2008-06-27 2010-07-14 索尼公司 Image processing device and image processing method
CA2807959C (en) 2011-07-29 2018-06-12 Panasonic Corporation Video encoding method, video decoding method, video encoding apparatus, video decoding apparatus, and video encoding/decoding apparatus
US10218976B2 (en) 2016-03-02 2019-02-26 MatrixView, Inc. Quantization matrices for compression of video

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of EP2283655A4 *

Cited By (139)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10602146B2 (en) 2006-05-05 2020-03-24 Microsoft Technology Licensing, Llc Flexible Quantization
US10306227B2 (en) 2008-06-03 2019-05-28 Microsoft Technology Licensing, Llc Adaptive quantization for enhancement layer video coding
US11381818B2 (en) 2010-06-10 2022-07-05 Interdigital Vc Holdings, Inc. Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
JP2013531942A (en) * 2010-06-10 2013-08-08 トムソン ライセンシング Method and apparatus for determining a quantization parameter predictor from a plurality of adjacent quantization parameters
US11722669B2 (en) 2010-06-10 2023-08-08 Interdigital Vc Holdings, Inc. Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
JP7254681B2 (en) 2010-06-10 2023-04-10 インターデジタル ヴイシー ホールディングス, インコーポレイテッド Method and apparatus for determining a quantization parameter predictor from multiple adjacent quantization parameters
JP7228012B2 (en) 2010-06-10 2023-02-22 インターデジタル ヴイシー ホールディングス, インコーポレイテッド Method and apparatus for determining a quantization parameter predictor from multiple adjacent quantization parameters
JP2018026824A (en) * 2010-06-10 2018-02-15 トムソン ライセンシングThomson Licensing Methods and apparatus for determining quantization parameter predictors from plural neighboring quantization parameters
JP2016136760A (en) * 2010-06-10 2016-07-28 トムソン ライセンシングThomson Licensing Methods and apparatus for determining quantization parameter predictors from plural neighboring quantization parameters
US9749631B2 (en) 2010-06-10 2017-08-29 Thomson Licensing Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
US20160105673A1 (en) * 2010-06-10 2016-04-14 Thomson Licensing Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
US10334247B2 (en) 2010-06-10 2019-06-25 Interdigital Vc Holdings, Inc. Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
JP2022003790A (en) * 2010-06-10 2022-01-11 インターデジタル ヴイシー ホールディングス, インコーポレイテッド Method and device for determining quantization parameter predictor from multiple adjacent quantization parameters
US10547840B2 (en) 2010-06-10 2020-01-28 Interdigital Vc Holdings, Inc. Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
US9235774B2 (en) 2010-06-10 2016-01-12 Thomson Licensing Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
US10742981B2 (en) 2010-06-10 2020-08-11 Interdigital Vc Holdings, Inc. Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
JP2020058036A (en) * 2010-06-10 2020-04-09 インターデジタル ヴイシー ホールディングス, インコーポレイテッド Method and apparatus for determining quantization parameter predictors from multiple neighboring quantization parameters
USRE48726E1 (en) 2010-09-29 2021-09-07 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus and integrated circuit for generating a code stream with a hierarchical code structure
USRE47510E1 (en) 2010-09-29 2019-07-09 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus and integrated circuit for generating a code stream with a hierarchical code structure
USRE49991E1 (en) 2010-09-29 2024-05-28 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus and integrated circuit for generating a code stream with a hierarchical code structure
US10616579B2 (en) 2010-09-30 2020-04-07 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, program, and integrated circuit
JP2021132407A (en) 2021-09-09 サン パテント トラスト Decoding method and decoding device
US11310500B2 (en) 2010-09-30 2022-04-19 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, program, and integrated circuit
JP2022097644A (en) * 2010-09-30 2022-06-30 サン パテント トラスト Decoding device, encoding device, and recording medium
JP2019030023A (en) * 2010-09-30 2019-02-21 サン パテント トラスト Decoding method, coding method, decoding device, and coding device
JP7113250B2 (en) 2022-08-05 サン パテント トラスト Decoding method and decoding device
JP2017153137A (en) * 2010-09-30 2017-08-31 サン パテント トラスト Decoding method, encoding method, decoder and encoder
JP7325573B2 (en) 2023-08-14 サン パテント トラスト Decoder and encoder
US11729389B2 (en) 2010-09-30 2023-08-15 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, program, and integrated circuit
CN110602497A (en) * 2011-01-24 2019-12-20 索尼公司 Image decoding device, image decoding method, and non-transitory computer readable medium
CN106878738A (en) * 2011-01-24 2017-06-20 索尼公司 Method for encoding images and message processing device
JP2012170042A (en) * 2011-01-24 2012-09-06 Sony Corp Image encoding device, image decoding device, method thereof, and program
AU2020203010B2 (en) * 2011-01-24 2021-01-28 Sony Corporation Image decoding device, image encoding device, and method thereof
RU2719453C1 (en) * 2011-01-24 2020-04-17 Сони Корпорейшн Image encoding method and information processing device
CN110839154A (en) * 2011-01-24 2020-02-25 索尼公司 Image decoding method, image processing apparatus, and non-transitory computer readable medium
WO2012102088A1 (en) * 2011-01-24 2012-08-02 ソニー株式会社 Image decoding device, image encoding device, and method thereof
US20190327471A1 (en) * 2011-01-24 2019-10-24 Sony Corporation Image decoding device, image encoding device, and method thereof
EP2645717A4 (en) * 2011-01-24 2016-07-27 Sony Corp Image decoding device, image encoding device, and method thereof
TWI495350B (en) * 2011-01-24 2015-08-01 Sony Corp Image decoding method and information processing device
US10419761B2 (en) 2011-01-24 2019-09-17 Sony Corporation Image decoding device, image encoding device, and method thereof
EP3512198A1 (en) * 2011-01-24 2019-07-17 Sony Corporation Image decoding device, image encoding device, and method thereof
KR101965119B1 (en) * 2011-01-24 2019-04-02 소니 주식회사 Image encoding apparatus and image encoding method
JP2018191334A (en) * 2011-01-24 2018-11-29 ソニー株式会社 Image coding device, image coding method, and program
AU2018201382B2 (en) * 2011-01-24 2018-11-08 Sony Corporation Image decoding device, image encoding device, and method thereof
KR20180053425A (en) * 2011-01-24 2018-05-21 소니 주식회사 Image encoding apparatus and image encoding method
US9560348B2 (en) 2011-01-24 2017-01-31 Sony Corporation Image decoding device, image encoding device, and method thereof using a prediction quantization parameter
KR101858289B1 (en) * 2011-01-24 2018-05-15 소니 주식회사 Image decoding apparatus and image decoding method
US20180063530A1 (en) * 2011-01-24 2018-03-01 Sony Corporation Image decoding device, image encoding device, and method thereof
CN107087191A (en) * 2011-01-24 2017-08-22 索尼公司 Picture decoding method and image processing equipment
RU2615675C2 (en) * 2011-01-24 2017-04-06 Сони Корпорейшн Image decoding device, image encoding device and method thereof
US10771790B2 (en) 2011-03-09 2020-09-08 Nec Corporation Video decoding device and method using inverse quantization
RU2695641C1 (en) * 2011-03-09 2019-07-25 Нек Корпорейшн Video encoding device, video decoding device, video encoding method and video decoding method
US9716895B2 (en) 2011-03-09 2017-07-25 Canon Kabushiki Kaisha Image coding apparatus, method for coding image, program therefor, image decoding apparatus, method for decoding image, and program therefor
JP2017139795A (en) * 2011-03-09 2017-08-10 日本電気株式会社 Video decoding device and video decoding method
WO2012120823A1 (en) * 2011-03-09 2012-09-13 Canon Kabushiki Kaisha Image coding apparatus, method for coding image, program therefor, image decoding apparatus, method for decoding image, and program therefor
AU2012226120B2 (en) * 2011-03-09 2015-07-23 Nec Corporation Video encoding device, video decoding device, video encoding method, and video decoding method
WO2012120888A1 (en) * 2011-03-09 2012-09-13 日本電気株式会社 Video encoding device, video decoding device, video encoding method, and video decoding method
JP2012191294A (en) * 2011-03-09 2012-10-04 Canon Inc Image encoding device, image encoding method and program, image decoding device, image decoding method and program
CN107277524A (en) * 2011-03-09 2017-10-20 日本电气株式会社 Video decoding apparatus and video encoding/decoding method
US11509909B2 (en) 2011-03-09 2022-11-22 Nec Corporation Video decoding device and method using inverse quantization
CN107371035A (en) * 2011-03-09 2017-11-21 佳能株式会社 Image encoding apparatus and method and image decoding apparatus and method
CN107371037A (en) * 2011-03-09 2017-11-21 佳能株式会社 Image encoding apparatus and method and image decoding apparatus and method
US9832460B2 (en) 2011-03-09 2017-11-28 Canon Kabushiki Kaisha Image coding apparatus, method for coding image, program therefor, image decoding apparatus, method for decoding image, and program therefor
US11496749B2 (en) 2011-03-09 2022-11-08 Nec Corporation Video decoding device and method using inverse quantization
US11483571B2 (en) 2011-03-09 2022-10-25 Nec Corporation Video decoding device and method using inverse quantization
CN103416057A (en) * 2011-03-09 2013-11-27 佳能株式会社 Image coding apparatus, method for coding image, program therefor, image decoding apparatus, method for decoding image, and program therefor
CN103444180A (en) * 2011-03-09 2013-12-11 日本电气株式会社 Video encoding device, video decoding device, video encoding method, and video decoding method
RU2551800C2 (en) * 2011-03-09 2015-05-27 Кэнон Кабусики Кайся Image coding device, image coding method, software for this, image decoding device, image decoding method and software for this
EP2685720A1 (en) * 2011-03-09 2014-01-15 Nec Corporation Video encoding device, video decoding device, video encoding method, and video decoding method
JP2016086440A (en) * 2011-03-09 2016-05-19 日本電気株式会社 Video encoding device, video decoding device, video encoding method, and video decoding method
RU2654149C1 (en) * 2011-03-09 2018-05-16 Нек Корпорейшн Video encoding device, video decoding device, video encoding method and video decoding method
RU2608446C2 (en) * 2011-03-09 2017-01-18 Нек Корпорейшн Video encoding device, video decoding device, video encoding method and video decoding method
RU2663353C2 (en) * 2011-03-09 2018-08-03 Кэнон Кабусики Кайся Image encoding device, method for image encoding, program for it, image decoding device, method for image decoding and program for it
US9277221B2 (en) 2011-03-09 2016-03-01 Canon Kabushiki Kaisha Image coding apparatus, method for coding image, program therefor, image decoding apparatus, method for decoding image, and program therefor
JP2018164296A (en) * 2011-03-09 2018-10-18 日本電気株式会社 Video coding apparatus and video decoding apparatus
JP2018164301A (en) * 2011-03-09 2018-10-18 日本電気株式会社 Video coding apparatus and video decoding apparatus
JP6024654B2 (en) * 2011-03-09 2016-11-16 日本電気株式会社 Video encoding apparatus, video decoding apparatus, video encoding method, and video decoding method
AU2015200682B2 (en) * 2011-03-09 2016-05-26 Nec Corporation Video encoding device, video decoding device, video encoding method, and video decoding method
JPWO2012120888A1 (en) * 2011-03-09 2014-07-17 日本電気株式会社 Video encoding apparatus, video decoding apparatus, video encoding method, and video decoding method
CN107371037B (en) * 2011-03-09 2020-03-06 佳能株式会社 Image encoding apparatus and method, and image decoding apparatus and method
EP2685720A4 (en) * 2011-03-09 2014-09-10 Nec Corp Video encoding device, video decoding device, video encoding method, and video decoding method
CN107371035B (en) * 2011-03-09 2019-12-20 佳能株式会社 Image encoding apparatus and method, and image decoding apparatus and method
RU2679116C1 (en) * 2011-03-09 2019-02-06 Нек Корпорейшн Video encoding device, video decoding device, video encoding method and video decoding method
AU2015200683B2 (en) * 2011-03-09 2016-06-09 Nec Corporation Video encoding device, video decoding device, video encoding method, and video decoding method
EP2863637A3 (en) * 2011-03-09 2015-05-06 Nec Corporation Video encoding device, video decoding device, video encoding method, and video decoding method
CN103416057B (en) * 2011-03-09 2017-05-24 佳能株式会社 Image coding apparatus and method and image decoding apparatus and method
RU2686027C1 (en) * 2011-03-09 2019-04-23 Кэнон Кабусики Кайся Image encoding device, a method for encoding an image, a program for this, an image decoding device, a method for decoding an image and a program for this
US10284859B2 (en) 2011-03-09 2019-05-07 Nec Corporation Video decoding device and method using inverse quantization
CN107277524B (en) * 2011-03-09 2019-05-17 日本电气株式会社 Video decoding apparatus and video encoding/decoding method
RU2688266C1 (en) * 2011-03-09 2019-05-21 Кэнон Кабусики Кайся Image encoding device, a method for encoding an image, a program for this, an image decoding device, a method for decoding an image and a program for this
EP2866449A1 (en) * 2011-03-09 2015-04-29 Nec Corporation Video encoding device, video decoding device, video encoding method, and video decoding method
JP2016181931A (en) * 2011-03-09 2016-10-13 キヤノン株式会社 Image encoding device, image encoding method and program, image decoding device, and image decoding method and program
JPWO2012121284A1 (en) * 2011-03-10 2014-07-17 シャープ株式会社 Image decoding apparatus, image encoding apparatus, and data structure of encoded data
CN103460694B (en) * 2011-03-10 2017-02-15 夏普株式会社 Image decoding apparatus, image encoding apparatus, and data structure of encoded data
WO2012121284A1 (en) * 2011-03-10 2012-09-13 シャープ株式会社 Image decoding apparatus, image encoding apparatus, and data structure of encoded data
CN103460694A (en) * 2011-03-10 2013-12-18 夏普株式会社 Image decoding apparatus, image encoding apparatus, and data structure of encoded data
US10194152B2 (en) 2011-03-11 2019-01-29 Sony Corporation Image processing apparatus and method
TWI767240B (en) * 2011-03-11 2022-06-11 日商新力股份有限公司 Image processing apparatus and image processing method
US9794566B2 (en) 2011-03-11 2017-10-17 Sony Corporation Image processing apparatus and method
US10212423B2 (en) 2011-03-11 2019-02-19 Sony Corporation Image processing apparatus and method
CN106454381A (en) * 2011-03-11 2017-02-22 索尼公司 Image processing apparatus and method
US9854243B2 (en) 2011-03-11 2017-12-26 Sony Corporation Image processing apparatus and method
US9571829B2 (en) 2011-03-11 2017-02-14 Huawei Technologies Co., Ltd. Method and device for encoding/decoding with quantization parameter, block size and coding unit size
WO2012124461A1 (en) * 2011-03-11 2012-09-20 ソニー株式会社 Image processing device and method
US20180027234A1 (en) 2011-03-11 2018-01-25 Sony Corporation Image processing apparatus and method
CN106454381B (en) * 2011-03-11 2019-05-28 索尼公司 Image processing apparatus and method
US9135724B2 (en) 2011-03-11 2015-09-15 Sony Corporation Image processing apparatus and method
JP2014509150A (en) * 2011-03-11 2014-04-10 華為技術有限公司 Encoding method and apparatus, and decoding method and apparatus
US9495765B2 (en) 2011-03-11 2016-11-15 Sony Corporation Image processing apparatus and method
WO2012140889A1 (en) * 2011-04-15 2012-10-18 Canon Kabushiki Kaisha Image encoding apparatus, image encoding method, image encoding program, image decoding apparatus, image decoding method, and image decoding program
JP2022120012A (en) * 2011-06-21 2022-08-17 ドルビー ラボラトリーズ ライセンシング コーポレイション Adaptive quantization parameter encoding and decoding method and apparatus based on quadtree structure
JP7344343B2 (en) 2011-06-21 2023-09-13 ドルビー ラボラトリーズ ライセンシング コーポレイション Adaptive quantization parameter encoding and decoding method and apparatus based on quadtree structure
JP2020065290A (en) * 2011-06-21 2020-04-23 インテレクチュアル ディスカバリー カンパニー リミテッド Method and apparatus for adaptive quantization parameter coding and decoding based on quad tree structure
JP7507944B2 (en) 2011-06-21 2024-06-28 ドルビー ラボラトリーズ ライセンシング コーポレイション Method and apparatus for adaptive quantization parameter encoding and decoding based on quad-tree structure
JP2018196147A (en) * 2011-06-21 2018-12-06 インテレクチュアル ディスカバリー カンパニー リミテッド Method and apparatus for adaptively encoding and decoding quantization parameter based on quadtree structure
USRE47465E1 (en) 2011-06-21 2019-06-25 Intellectual Discovery Co., Ltd. Method and apparatus for adaptively encoding and decoding a quantization parameter based on a quadtree structure
JP2014520475A (en) * 2011-06-21 2014-08-21 インテレクチュアル ディスカバリー カンパニー リミテッド Adaptive quantization parameter encoding and decoding method and apparatus based on quadtree structure
JP2022032027A (en) * 2011-06-21 2022-02-24 ドルビー ラボラトリーズ ライセンシング コーポレイション Adaptive quantization parameter encoding and decoding method and device based on quadtree structure
USRE49330E1 (en) 2011-06-21 2022-12-06 Dolby Laboratories Licensing Corporation Method and apparatus for adaptively encoding and decoding a quantization parameter based on a quadtree structure
JP2017201807A (en) * 2011-06-21 2017-11-09 インテレクチュアル ディスカバリー カンパニー リミテッド Method and apparatus for adaptively encoding and decoding quantization parameter based on quadtree structure
JP7087169B2 (en) 2011-06-21 2022-06-20 ドルビー ラボラトリーズ ライセンシング コーポレイション Adaptive Quantization Parameter Coding and Decoding Methods and Devices Based on Quad Tree Structure
USRE46678E1 (en) 2011-06-21 2018-01-16 Intellectual Discovery Co., Ltd. Method and apparatus for adaptively encoding and decoding a quantization parameter based on a quadtree structure
JP2016165138A (en) * 2011-06-21 2016-09-08 インテレクチュアル ディスカバリー カンパニー リミテッド Method and apparatus for adaptively encoding and decoding quantization parameter based on quadtree structure
US9066098B2 (en) 2011-06-21 2015-06-23 Intellectual Discovery Co., Ltd. Method and apparatus for adaptively encoding and decoding a quantization parameter based on a quadtree structure
JPWO2013001729A1 (en) * 2011-06-28 2015-02-23 日本電気株式会社 Video encoding device and video decoding device
WO2013001729A1 (en) * 2011-06-28 2013-01-03 日本電気株式会社 Image encoding device and image decoding device
US10432934B2 (en) 2011-06-28 2019-10-01 Nec Corporation Video encoding device and video decoding device
JP2016189635A (en) * 2011-11-25 2016-11-04 インフォブリッジ ピーティーイー. エルティーディー. Method for decoding color difference video
JP2015502098A (en) * 2011-11-25 2015-01-19 インフォブリッジ ピーティーイー. エルティーディー. Color difference video decoding method
JP2018139437A (en) * 2011-11-25 2018-09-06 インフォブリッジ ピーティーイー. エルティーディー. Method for decoding color difference video
CN109068136A (en) * 2012-12-18 2018-12-21 索尼公司 Image processing apparatus and image processing method, computer readable storage medium
CN109068136B (en) * 2012-12-18 2022-07-19 索尼公司 Image processing apparatus, image processing method, and computer-readable storage medium
WO2014120960A1 (en) * 2013-01-30 2014-08-07 Intel Corporation Content adaptive bitrate and quality control by using frame hierarchy sensitive quantization for high efficiency next generation video coding
US10171804B1 (en) 2013-02-21 2019-01-01 Google Llc Video frame encoding scheme selection
US9363421B1 (en) 2015-01-12 2016-06-07 Google Inc. Correcting for artifacts in an encoder and decoder
JP2016103854A (en) * 2016-01-20 2016-06-02 キヤノン株式会社 Image encoder, image encoding method and program, and image decoder, image decoding method and program
WO2017203930A1 (en) * 2016-05-27 2017-11-30 Sharp Kabushiki Kaisha Systems and methods for varying quantization parameters
US11039175B2 (en) 2016-05-27 2021-06-15 Sharp Kabushiki Kaisha Systems and methods for varying quantization parameters
US11689722B2 (en) 2018-04-02 2023-06-27 Sharp Kabushiki Kaisha Systems and methods for deriving quantization parameters for video blocks in video coding

Also Published As

Publication number Publication date
US20210377535A1 (en) 2021-12-02
US20240251082A1 (en) 2024-07-25
MX343458B (en) 2016-11-07
US20240195972A1 (en) 2024-06-13
CN103428497A (en) 2013-12-04
HK1155303A1 (en) 2012-05-11
US9571840B2 (en) 2017-02-14
CN102057677A (en) 2011-05-11
EP2283655A4 (en) 2011-07-27
EP2770741A1 (en) 2014-08-27
US20090296808A1 (en) 2009-12-03
CN102057677B (en) 2013-10-02
US20240163438A1 (en) 2024-05-16
JP5706318B2 (en) 2015-04-22
US20240171745A1 (en) 2024-05-23
MX356897B (en) 2018-06-19
EP2283655A2 (en) 2011-02-16
US20170111640A1 (en) 2017-04-20
US20190313099A1 (en) 2019-10-10
KR101745845B1 (en) 2017-06-12
MX2010012818A (en) 2010-12-21
US20150043633A1 (en) 2015-02-12
KR20110015002A (en) 2011-02-14
US10306227B2 (en) 2019-05-28
EP3416382A1 (en) 2018-12-19
EP2283655B1 (en) 2018-09-19
US20240195973A1 (en) 2024-06-13
KR101780505B1 (en) 2017-09-21
US9185418B2 (en) 2015-11-10
WO2009158113A3 (en) 2010-03-04
US11902525B2 (en) 2024-02-13
KR20160037244A (en) 2016-04-05
US20230055524A1 (en) 2023-02-23
CN103428497B (en) 2016-12-28
US11122268B2 (en) 2021-09-14
JP2011524130A (en) 2011-08-25
JP2013225889A (en) 2013-10-31
US11528481B2 (en) 2022-12-13
US8897359B2 (en) 2014-11-25
JP5932719B2 (en) 2016-06-08
US20140294070A1 (en) 2014-10-02

Similar Documents

Publication Publication Date Title
US11902525B2 (en) Adaptive quantization for enhancement layer video coding
US8213503B2 (en) Skip modes for inter-layer residual video coding and decoding
US7974340B2 (en) Adaptive B-picture quantization control
CN107295337B (en) Method and apparatus for optimizing encoding/decoding of compensation offsets for a set of reconstructed samples of an image
US20220239940A1 (en) Method and apparatus for video coding
US20160366437A1 (en) Search strategies for intra-picture prediction modes
US20160373739A1 (en) Intra/inter decisions using stillness criteria and information from previous pictures
US20240022738A1 (en) Template matching for multiple reference line intra prediction
Schwarz et al., International Organization for Standardization, ISO/IEC JTC1/SC29/WG11, Coding of Moving Pictures and Associated Audio

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase
Ref document number: 200980121348.3
Country of ref document: CN

WWE Wipo information: entry into national phase
Ref document number: 2009770648
Country of ref document: EP

WWE Wipo information: entry into national phase
Ref document number: MX/A/2010/012818
Country of ref document: MX

WWE Wipo information: entry into national phase
Ref document number: 7568/CHENP/2010
Country of ref document: IN

ENP Entry into the national phase
Ref document number: 20107027143
Country of ref document: KR
Kind code of ref document: A

WWE Wipo information: entry into national phase
Ref document number: 2011512545
Country of ref document: JP

NENP Non-entry into the national phase
Ref country code: DE