US8687709B2 - In-loop deblocking for interlaced video - Google Patents

In-loop deblocking for interlaced video

Info

Publication number
US8687709B2
Authority
US
United States
Prior art keywords
block
frame
sub
frequency transform
field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US10/934,116
Other versions
US20050084012A1
Inventor
Pohsiang Hsu
Chih-Lung Lin
Sridhar Srinivasan
Thomas W. Holcomb
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US10/934,116 (US8687709B2)
Application filed by Microsoft Corp
Priority to US10/989,845 (US7924921B2)
Priority to US10/989,596 (US7852919B2)
Priority to US10/989,843 (US7609762B2)
Priority to US10/989,827 (US8213779B2)
Assigned to MICROSOFT CORPORATION. Assignors: SRINIVASAN, SRIDHAR; HOLCOMB, THOMAS W.; HSU, POHSIANG; LIN, CHIH-LUNG
Publication of US20050084012A1
Application granted
Publication of US8687709B2
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC. Assignor: MICROSOFT CORPORATION
Legal status: Active
Adjusted expiration

Classifications

    • H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals, including:
    • H04N 19/93 Run-length coding
    • H04N 19/102 Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/109 Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H04N 19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N 19/112 Selection of coding mode or of prediction mode according to a given display mode, e.g. for interlaced or progressive display mode
    • H04N 19/117 Filters, e.g. for pre-processing or post-processing
    • H04N 19/129 Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
    • H04N 19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N 19/137 Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N 19/146 Data rate or code amount at the encoder output
    • H04N 19/147 Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N 19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N 19/16 Assigned coding mode for a given display mode, e.g. for interlaced or progressive display mode
    • H04N 19/172 Coding unit being an image region, the region being a picture, frame or field
    • H04N 19/176 Coding unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/18 Coding unit being a set of transform coefficients
    • H04N 19/184 Coding unit being bits, e.g. of the compressed video stream
    • H04N 19/186 Coding unit being a colour or a chrominance component
    • H04N 19/196 Adaptation specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N 19/46 Embedding additional information in the video signal during the compression process
    • H04N 19/463 Embedding additional information by compressing encoding parameters before transmission
    • H04N 19/51 Motion estimation or motion compensation
    • H04N 19/52 Processing of motion vectors by predictive encoding
    • H04N 19/523 Motion estimation or motion compensation with sub-pixel accuracy
    • H04N 19/593 Predictive coding involving spatial prediction techniques
    • H04N 19/82 Filtering operations specially adapted for video compression involving filtering within a prediction loop
    • H04N 19/86 Pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • H04N 19/61 Transform coding in combination with predictive coding
    • H04N 19/63 Transform coding using sub-band based transform, e.g. wavelets
    • H04N 19/70 Characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • an encoder/decoder performs in-loop deblocking filtering for interlaced frame coded pictures.
  • a typical raw digital video sequence includes 15 or 30 pictures per second. Each picture can include tens or hundreds of thousands of pixels (also called pels). Each pixel represents a tiny element of the picture. In raw form, a computer commonly represents a pixel with 24 bits or more. Thus, the number of bits per second, or bit rate, of a typical raw digital video sequence can be 5 million bits/second or more.
  • compression (also called coding or encoding) reduces the bit rate of digital video.
  • Compression can be lossless, in which quality of the video does not suffer but decreases in bit rate are limited by the complexity of the video.
  • compression can be lossy, in which quality of the video suffers but decreases in bit rate are more dramatic. Decompression reverses compression.
  • video compression techniques include “intra” compression and “inter” or predictive compression.
  • intra compression techniques compress individual pictures, typically called I-frames or key frames.
  • Inter compression techniques compress frames with reference to preceding and/or following frames, and inter-compressed frames are typically called predicted frames, P-frames, or B-frames.
  • Microsoft Corporation's Windows Media Video, Version 8 [“WMV8”] includes a video encoder and a video decoder.
  • the WMV8 encoder uses intra and inter compression
  • the WMV8 decoder uses intra and inter decompression.
  • Windows Media Video, Version 9 [“WMV9”] uses a similar architecture for many operations.
  • FIG. 1 illustrates block-based intra compression 100 of a block 105 of pixels in a key frame in the WMV8 encoder.
  • a block is a set of pixels, for example, an 8×8 arrangement of pixels.
  • the WMV8 encoder splits a key video frame into 8×8 blocks of pixels and applies an 8×8 Discrete Cosine Transform [“DCT”] 110 to individual blocks such as the block 105.
  • a DCT is a type of frequency transform that converts the 8×8 block of pixels (spatial information) into an 8×8 block of DCT coefficients 115, which are frequency information.
  • the DCT operation itself is lossless or nearly lossless.
  • the DCT coefficients are more efficient for the encoder to compress since most of the significant information is concentrated in low frequency coefficients (conventionally, the upper left of the block 115 ) and many of the high frequency coefficients (conventionally, the lower right of the block 115 ) have values of zero or close to zero.
  • the encoder then quantizes 120 the DCT coefficients, resulting in an 8×8 block of quantized DCT coefficients 125.
  • the encoder applies a uniform, scalar quantization step size to each coefficient.
  • Quantization is lossy. Since low frequency DCT coefficients tend to have higher values, quantization results in loss of precision but not complete loss of the information for the coefficients. On the other hand, since high frequency DCT coefficients tend to have values of zero or close to zero, quantization of the high frequency coefficients typically results in contiguous regions of zero values. In addition, in some cases high frequency DCT coefficients are quantized more coarsely than low frequency DCT coefficients, resulting in greater loss of precision/information for the high frequency DCT coefficients.
  • the encoder then prepares the 8×8 block of quantized DCT coefficients 125 for entropy encoding, which is a form of lossless compression.
  • the exact type of entropy encoding can vary depending on whether a coefficient is a DC coefficient (lowest frequency), an AC coefficient (other frequencies) in the top row or left column, or another AC coefficient.
  • the encoder encodes the DC coefficient 126 as a differential from the DC coefficient 136 of a neighboring 8×8 block, which is a previously encoded neighbor (e.g., top or left) of the block being encoded.
  • FIG. 1 shows a neighbor block 135 that is situated to the left of the block being encoded in the frame.
  • the encoder entropy encodes 140 the differential.
  • the entropy encoder can encode the left column or top row of AC coefficients as a differential from a corresponding column or row of the neighboring 8×8 block.
  • FIG. 1 shows the left column 127 of AC coefficients encoded as a differential 147 from the left column 137 of the neighboring (to the left) block 135 .
  • the differential coding increases the chance that the differential coefficients have zero values.
  • the remaining AC coefficients are from the block 125 of quantized DCT coefficients.
  • the encoder scans 150 the 8×8 block 145 of predicted, quantized AC DCT coefficients into a one-dimensional array 155 and then entropy encodes the scanned AC coefficients using a variation of run length coding 160.
  • the encoder selects an entropy code from one or more run/level/last tables 165 and outputs the entropy code.
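  • As a rough illustration of the intra-block pipeline just described (frequency transform, uniform quantization, zigzag scan, run/level pairs), the following Python sketch mirrors the steps; the function names, the quantization step, and the simplified scan order are illustrative only and are not the WMV8 implementation:

        import numpy as np

        def dct2(block):
            # 8x8 type-II DCT with orthonormal scaling (the frequency transform step)
            n = block.shape[0]
            k = np.arange(n)
            basis = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
            basis[0, :] = np.sqrt(1.0 / n)
            return basis @ block @ basis.T

        def quantize(coeffs, step):
            # uniform scalar quantization applied to every coefficient (the lossy step)
            return np.rint(coeffs / step).astype(int)

        def zigzag(block):
            # scan the 2D block into a 1D list ordered roughly from low to high frequency
            h, w = block.shape
            order = sorted(((r, c) for r in range(h) for c in range(w)),
                           key=lambda rc: (rc[0] + rc[1], rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))
            return [int(block[r, c]) for r, c in order]

        def run_level_pairs(scanned):
            # (zero-run, level) pairs for the AC coefficients, as fed to run/level/last coding
            pairs, run = [], 0
            for v in scanned[1:]:          # DC coefficient is coded separately as a differential
                if v == 0:
                    run += 1
                else:
                    pairs.append((run, v))
                    run = 0
            return pairs

        block = np.random.randint(0, 256, (8, 8)).astype(float)   # stand-in for an 8x8 pixel block
        print(run_level_pairs(zigzag(quantize(dct2(block), step=16.0))))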
  • FIGS. 2 and 3 illustrate the block-based inter compression for a predicted frame in the WMV8 encoder.
  • FIG. 2 illustrates motion estimation for a predicted frame 210
  • FIG. 3 illustrates compression of a prediction residual for a motion-compensated block of a predicted frame.
  • the WMV8 encoder computes a motion vector for a macroblock 215 in the predicted frame 210 .
  • the encoder searches in a search area 235 of a reference frame 230 .
  • the encoder compares the macroblock 215 from the predicted frame 210 to various candidate macroblocks in order to find a candidate macroblock that is a good match.
  • the encoder outputs information specifying the motion vector (entropy coded) for the matching macroblock.
  • the encoder can encode the differential between the motion vector and the motion vector predictor. After reconstructing the motion vector by adding the differential to the predictor, a decoder uses the motion vector to compute a prediction macroblock for the macroblock 215 using information from the reference frame 230 , which is a previously reconstructed frame available at the encoder and the decoder.
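  • A minimal sketch of the search and motion-vector differential coding described above, using a full search with a sum-of-absolute-differences cost; the search range, cost measure, and predictor values are illustrative assumptions, not the WMV8 encoder's actual search:

        import numpy as np

        def motion_search(current_mb, reference, top, left, search_range=7):
            # full search in a window of the reference frame, using SAD as the match cost
            best_cost, best_mv = None, (0, 0)
            h, w = current_mb.shape
            for dy in range(-search_range, search_range + 1):
                for dx in range(-search_range, search_range + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + h > reference.shape[0] or x + w > reference.shape[1]:
                        continue
                    cand = reference[y:y + h, x:x + w].astype(int)
                    cost = int(np.abs(current_mb.astype(int) - cand).sum())
                    if best_cost is None or cost < best_cost:
                        best_cost, best_mv = cost, (dy, dx)
            return best_mv

        # only the differential between the motion vector and its predictor is entropy coded
        mv = (2, -1)            # e.g., the result of motion_search (values illustrative)
        mv_pred = (1, -1)       # predictor derived from neighboring macroblocks' motion vectors
        mv_diff = (mv[0] - mv_pred[0], mv[1] - mv_pred[1])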
  • the prediction is rarely perfect, so the encoder usually encodes blocks of pixel differences (also called the error or residual blocks) between the prediction macroblock and the macroblock 215 itself.
  • FIG. 3 illustrates an example of computation and encoding of an error block 335 in the WMV8 encoder.
  • the error block 335 is the difference between the predicted block 315 and the original current block 325 .
  • the encoder applies a DCT 340 to the error block 335, resulting in an 8×8 block 345 of coefficients.
  • the encoder then quantizes 350 the DCT coefficients, resulting in an 8×8 block of quantized DCT coefficients 355.
  • the encoder scans 360 the 8×8 block 355 into a one-dimensional array 365 such that coefficients are generally ordered from lowest frequency to highest frequency.
  • the encoder entropy encodes the scanned coefficients using a variation of run length coding 370 .
  • the encoder selects an entropy code from one or more run/level/last tables 375 and outputs the entropy code.
  • FIG. 4 shows an example of a corresponding decoding process 400 for an inter-coded block.
  • a decoder decodes ( 410 , 420 ) entropy-coded information representing a prediction residual using variable length decoding 410 with one or more run/level/last tables 415 and run length decoding 420 .
  • the decoder inverse scans 430 a one-dimensional array 425 storing the entropy-decoded information into a two-dimensional block 435 .
  • the decoder inverse quantizes and inverse discrete cosine transforms (together, 440 ) the data, resulting in a reconstructed error block 445 .
  • the decoder computes a predicted block 465 using motion vector information 455 for displacement from a reference frame.
  • the decoder combines 470 the predicted block 465 with the reconstructed error block 445 to form the reconstructed block 475 .
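  • A sketch of the reconstruction path just described: inverse quantization, inverse transform, and addition of the motion-compensated prediction. The orthonormal inverse DCT and the clipping range here are illustrative stand-ins for the decoder's actual inverse transform:

        import numpy as np

        def idct2(coeffs):
            # inverse of an 8x8 orthonormal type-II DCT (stand-in for the decoder's inverse transform)
            n = coeffs.shape[0]
            k = np.arange(n)
            basis = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
            basis[0, :] = np.sqrt(1.0 / n)
            return basis.T @ coeffs @ basis

        def reconstruct_inter_block(quantized, step, predicted):
            # inverse quantize, inverse transform, then add the prediction fetched via the motion vector
            error = idct2(quantized.astype(float) * step)
            return np.clip(np.rint(predicted.astype(float) + error), 0, 255).astype(np.uint8)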
  • the amount of change between the original and reconstructed frames is the distortion, and the number of bits required to code the frame indicates the rate for the frame.
  • the amount of distortion is roughly inversely proportional to the rate.
  • a video frame contains lines of spatial information of a video signal. For progressive video, these lines contain samples starting from one time instant and continuing through successive lines to the bottom of the frame.
  • a progressive I-frame is an intra-coded progressive video frame.
  • a progressive P-frame is a progressive video frame coded using forward prediction, and a progressive B-frame is a progressive video frame coded using bi-directional prediction.
  • a typical interlaced video frame consists of two fields scanned starting at different times.
  • an interlaced video frame 500 includes top field 510 and bottom field 520 .
  • the even-numbered lines (top field) are scanned starting at one time (e.g., time t) and the odd-numbered lines (bottom field) are scanned starting at a different (typically later) time (e.g., time t+1).
  • Because the two fields are scanned starting at different times, jagged tooth-like features can appear in regions of an interlaced video frame where motion is present. For this reason, interlaced video frames can be rearranged according to a field structure, with the odd lines grouped together in one field, and the even lines grouped together in another field.
  • a previous WMV encoder and decoder use macroblocks that are arranged according to a field structure (field-coded macroblocks) or a frame structure (frame-coded macroblocks) in interlaced video frames.
  • FIG. 6 shows how field permuting is used to produce field-coded macroblocks in the encoder and decoder.
  • An interlaced macroblock 610 is permuted such that all the top field lines (e.g., even-numbered lines 0, 2, . . . , 14) are placed in the top half of the field-coded macroblock 620, and all the bottom field lines (e.g., odd-numbered lines 1, 3, . . . , 15) are placed in the bottom half of the field-coded macroblock.
  • in a frame-coded macroblock, the top field lines and bottom field lines alternate throughout the macroblock, as in interlaced macroblock 610.
  • a 4:1:1 macroblock is composed of four 8×8 luminance blocks and two 4×8 blocks of each chrominance channel.
  • the permuted macroblock is subdivided such that the top two 8×8 luminance blocks and the top 4×8 chrominance block in each chrominance channel contain only top field lines, while the bottom two 8×8 luminance blocks and the bottom 4×8 chrominance block in each chrominance channel contain only bottom field lines.
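  • The permutation of FIG. 6 amounts to gathering same-field lines together. A small sketch of the idea (the convention that even-numbered lines belong to the top field is an assumption for illustration):

        import numpy as np

        def field_permute(mb):
            # place the top field (even-numbered) lines in the top half and the
            # bottom field (odd-numbered) lines in the bottom half, as in FIG. 6
            return np.vstack([mb[0::2, :], mb[1::2, :]])

        def field_unpermute(mb):
            # restore the original alternating arrangement of top and bottom field lines
            half = mb.shape[0] // 2
            out = np.empty_like(mb)
            out[0::2, :] = mb[:half, :]
            out[1::2, :] = mb[half:, :]
            return out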
  • a typical progressive video frame consists of one frame of content with non-alternating lines. In contrast to interlaced video, progressive video does not divide video frames into separate fields, and an entire frame is scanned left to right, top to bottom starting at a single time.
  • a previous WMV video encoder and decoder use a deblocking filter to smooth boundary discontinuities between 8×8 blocks in motion estimation/compensation loops. For example, a video encoder processes a reconstructed reference frame to reduce blocking artifacts prior to motion estimation/compensation using the reference frame, and a video decoder processes a reconstructed reference frame to reduce blocking artifacts prior to motion compensation using the reference frame.
  • the deblocking filter improves the quality of motion estimation/compensation, resulting in better prediction and lower bitrate for prediction residuals.
  • the encoder and decoder perform in-loop deblocking filtering for progressive frames prior to using a reconstructed frame as a reference for motion estimation/compensation.
  • the filtering process operates on pixels (or more precisely, on samples at pixel locations) that border neighboring blocks.
  • the locations of block boundaries depend on the size of the inverse transform used.
  • the block boundaries may occur at every 4th or 8th pixel row or column depending on whether an 8×8, 8×4 or 4×8 inverse transform is used.
  • in I-frames, block boundaries occur at every 8th pixel row and column.
  • FIGS. 7 and 8 show the pixels that are filtered along the horizontal and vertical border regions in the upper left corner of a component (luma, Cb or Cr) plane.
  • FIG. 7 shows filtered vertical block boundary pixels in an I-frame.
  • FIG. 8 shows filtered horizontal block boundary pixels in an I-frame.
  • crosses represent pixels (more precisely, samples at pixel locations) and circled crosses represent filtered pixels.
  • the top horizontal line and first vertical line in the frame are not filtered, even though they lie on a block boundary, because these lines lie on the border of the frame.
  • the bottom horizontal line and last vertical line in the frame also are not filtered for the same reason.
  • the following lines are filtered:
  • blocks can be intra or inter-coded.
  • the encoder and decoder use an 8×8 transform to transform the samples in intra-coded blocks, and the 8×8 block boundaries are always adaptively filtered.
  • the encoder and decoder use an 8×8, 8×4, 4×8 or 4×4 transform for inter-coded blocks and a corresponding inverse transform to construct the samples that represent the residual error.
  • the boundary between the current and neighboring blocks may or may not be adaptively filtered.
  • the boundaries between coded (at least one non-zero coefficient) subblocks (8×4, 4×8 or 4×4) within an 8×8 block are always adaptively filtered.
  • the boundary between a block or subblock and a neighboring block or subblock is not filtered only if both blocks are inter-coded, have the same motion vector, and have no residual error (no transform coefficients); otherwise, the boundary is filtered.
  • FIG. 9 shows examples of when filtering between neighboring blocks does and does not occur in progressive P-frames.
  • the shaded blocks or subblocks represent the cases where at least one nonzero coefficient is present. Clear blocks or subblocks represent cases where no transform coefficients are present.
  • Thick lines represent the boundaries that are adaptively filtered. Thin lines represent the boundaries that are not filtered.
  • FIG. 9 illustrates only horizontal macroblock neighbors, but a previous WMV encoder and decoder apply similar rules to vertical neighbors.
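  • The decision rule above can be summarized in a short sketch; the BlockInfo structure and attribute names are purely illustrative and are not a codec data type:

        from dataclasses import dataclass

        @dataclass
        class BlockInfo:
            intra: bool            # intra- or inter-coded
            mv: tuple              # motion vector (ignored for intra blocks)
            has_coeffs: bool       # at least one nonzero transform coefficient

        def filter_boundary(a, b):
            # the boundary is skipped only when both blocks are inter-coded, share the
            # same motion vector, and have no residual error (no transform coefficients)
            skip = (not a.intra and not b.intra
                    and a.mv == b.mv
                    and not a.has_coeffs and not b.has_coeffs)
            return not skip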
  • FIGS. 10 and 11 show an example of pixels that may be filtered in a progressive P-frame.
  • the crosses represent pixel locations and the circled crosses represent the boundary pixels that are adaptively filtered if the conditions specified above are met.
  • FIG. 10 shows pixels filtered along horizontal boundaries. As FIG. 10 shows, the pixels on either side of the block or subblock boundary are candidates to be filtered. For the horizontal boundaries, this could be every 4th and 5th, 8th and 9th, 12th and 13th, etc., pixel row in the frame.
  • FIG. 11 shows candidate pixels to be filtered along vertical boundaries.
  • every 4th and 5th, 8th and 9th, 12th and 13th, etc., pixel column in the frame may be adaptively filtered, as these are the 8×8 and 4×8 vertical boundaries.
  • the first and last row and the first and last column in the frame are not filtered.
  • All the 8×8 block horizontal boundary lines in the frame are adaptively filtered first, starting from the top line.
  • all 8×4 block horizontal boundary lines in the frame are adaptively filtered starting from the top line.
  • all 8×8 block vertical boundary lines are adaptively filtered starting from the leftmost line.
  • all 4×8 block vertical boundary lines are adaptively filtered starting with the leftmost line. The rules specified above are used to determine whether the boundary pixels are actually filtered for each block or subblock.
  • the decision criteria described above determine which vertical and horizontal boundaries are adaptively filtered. Since the minimum number of consecutive pixels that are filtered in a row or column is four and the total number of pixels in a row or column is always a multiple of four, the filtering operation is performed on segments of four pixels.
  • the eight pixels are divided into two 4-pixel segments as shown in FIG. 12 .
  • the third pixel pair is adaptively filtered first as indicated by the Xs in FIG. 12 .
  • the result of this adaptive filter operation determines whether the other three pixel pairs in the segment are also filtered.
  • FIG. 13 shows the pixels that are used in the adaptive filtering operation performed on the 3rd pixel pair.
  • pixels P4 and P5 are the pixel pair that may be changed in the filter operation.
  • the pseudo-code 1400 of FIG. 14 shows the adaptive filtering operation performed on the 3rd pixel pair in each segment.
  • the variable PQUANT represents a quantization step size.
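  • A control-flow sketch of the segment processing described above; filter_pair stands in for the adaptive pixel-pair filter of FIGS. 14 and 15, whose arithmetic is not reproduced here:

        def filter_segment(pixel_pairs, pquant, filter_pair):
            # pixel_pairs: the four pixel pairs straddling the boundary within one 4-pixel segment;
            # filter_pair(pair, pquant) applies the adaptive filter and reports whether it filtered
            filtered_third = filter_pair(pixel_pairs[2], pquant)   # the 3rd pixel pair is filtered first
            if filtered_third:
                for i in (0, 1, 3):                                # then the remaining three pixel pairs
                    filter_pair(pixel_pairs[i], pquant)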
  • the encoder and decoder perform in-loop deblocking filtering across vertical boundaries in interlaced frames having a 4:1:1 macroblock format.
  • adaptive filtering can occur for pixels located immediately on the left and right of a vertical block boundary except for those located on the picture boundaries (i.e., the first and last column of the luminance and chrominance components).
  • pixels (more precisely, samples) that are candidates for filtering in a typical 4:1:1 macroblock in the encoder and decoder are marked M or B, where M denotes boundary pixels located across macroblock boundaries and B denotes boundary pixels located within the macroblock.
  • each block contains eight consecutive alternating lines of the top and bottom fields in the macroblock.
  • a block contains either eight top field lines or eight bottom field lines. The filtering decision is made eight lines at a time.
  • the decision to filter across a vertical block boundary depends on whether the current block and the left neighboring block are frame-coded or field-coded (field/frame type), whether they are intra-coded or inter-coded, and whether they have nonzero transform coefficients.
  • the vertical block boundary pixels are adaptively filtered unless the current block and the left neighboring block have the same field/frame type, neither block is intra-coded, and neither block has nonzero transform coefficients; in that case, the block boundary is not filtered.
  • Chroma block boundaries are adaptively filtered if the corresponding luminance block boundaries are adaptively filtered. Horizontal boundaries are not filtered.
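  • The prior 4:1:1 vertical-boundary decision described above, as a sketch (the dictionary keys are illustrative names, not codec syntax elements):

        def filter_vertical_boundary(cur, left):
            # skip filtering only when the two blocks have the same field/frame type,
            # neither is intra-coded, and neither has nonzero transform coefficients
            skip = (cur["field_coded"] == left["field_coded"]
                    and not cur["intra"] and not left["intra"]
                    and not cur["has_coeffs"] and not left["has_coeffs"])
            return not skip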
  • although the encoder and decoder adaptively filter block boundaries depending in part on the field/frame type of the neighboring blocks, they do not take transform size into account when making filtering decisions in interlaced frames.
  • the H.263 standard includes an optional deblocking filter mode in which a filter is applied across 8×8 block edge boundaries of decoded I- and P-frames (but not B-frames) to reduce blocking artifacts.
  • Annex J of the H.263 standard describes an optional block edge filter within the coding loop in which filtering is performed on 8×8 block edges (referred to in H.263 as a deblocking edge filter). This filter affects the reconstructed pictures used for prediction of other pictures.
  • the deblocking edge filter operates using a set of four clipped pixel values on a horizontal and/or vertical line, where two of the four values are in one block (e.g., the top block among neighboring top and bottom blocks) and the other two values are in another block (e.g., the bottom block among neighboring top and bottom blocks). Filtering across horizontal edges is performed before filtering across vertical edges to reduce rounding effects.
  • This optional filtering mode can be signaled in the bitstream with a single bit in a field of a picture header.
  • in draft JVT-d157 of the JVT/AVC video standard, deblocking filtering is performed on a macroblock basis.
  • macroblocks are grouped into macroblock pairs (top and bottom).
  • Macroblock pairs can be field-coded or frame-coded.
  • if the macroblock pair is frame-coded, it is decoded as two frame-coded macroblocks.
  • if the macroblock pair is field-coded, the top macroblock consists of the top-field lines in the macroblock pair and the bottom macroblock consists of the bottom-field lines in the macroblock pair.
  • Sections 8.7 and 12.4.4 of draft JVT-d157 describe deblocking filtering.
  • for a frame-coded macroblock pair, deblocking is performed on the frame samples, and if a neighboring macroblock pair is a field macroblock pair, the neighboring field macroblock pair is converted into a frame macroblock pair before deblocking.
  • for a field-coded macroblock pair, deblocking is performed on the field samples of the same field parity, and if a neighboring macroblock pair is a frame macroblock pair, it is converted into a field macroblock pair before deblocking.
  • all decoding operations for the deblocking filter are based solely on samples within the current field.
  • H.263 does not describe loop filtering for interlaced video.
  • Draft JVT-d157 of the JVT/AVC video standard describes loop filtering only for macroblock pairs in interlaced video, and does not describe, for example, loop filtering for an individual field-coded macroblock having a top field and a bottom field within the same macroblock, or loop filtering decisions for blocks or sub-blocks larger than 4×4.
  • an encoder/decoder obtains pixel data (e.g., chrominance or luminance samples) from one or more field lines (e.g., top field lines or bottom field lines) associated with a first block of a macroblock in an interlaced frame coded picture (e.g., interlaced I-frame, interlaced P-frame, interlaced B-frame, etc.) comprising plural macroblocks (e.g., 4:2:0 macroblocks). Each of the plural macroblocks has an equal number of top field lines and bottom field lines.
  • the encoder/decoder obtains pixel data from one or more field lines associated with a second block in the picture and performs in-loop deblocking filtering across a boundary (e.g., a horizontal or vertical block boundary comprising at least one four-pixel segment) using the obtained pixel data.
  • the in-loop deblocking filtering comprises filter operations performed on pixel data from field lines of same polarity only and can be described as field-based deblocking.
  • Each of the plural macroblocks can be coded according to a field structure or a frame structure, which can be indicated by a transform type.
  • the first block and the second block can each have a transform size selected from a group consisting of: 8×8, 8×4, 4×8, and 4×4.
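  • The key property of the field-based deblocking described above is that a filtering operation never mixes lines of different field polarity. A sketch of grouping candidate lines by polarity (the convention that even-numbered lines belong to the top field is an assumption for illustration):

        def split_lines_by_field(line_indices):
            # group candidate line indices by field polarity so that each filtering
            # operation uses lines from only one field
            top_field = [i for i in line_indices if i % 2 == 0]      # assumed top field (even lines)
            bottom_field = [i for i in line_indices if i % 2 == 1]   # assumed bottom field (odd lines)
            return top_field, bottom_field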
  • an encoder/decoder obtains field/frame type information for a current macroblock in an interlaced frame coded picture.
  • the encoder/decoder also obtains transform size information for plural blocks in the current macroblock.
  • the encoder/decoder selects one or more boundaries for in-loop deblocking based at least in part on the transform size information and the field/frame type information.
  • the encoder/decoder performs in-loop deblocking (e.g., field-based deblocking) on the selected boundaries.
  • the field/frame transform type information indicates, for example, whether the current macroblock is coded according to a field structure or a frame structure.
  • the selecting of one or more boundaries for in-loop deblocking can be further based on the picture type information (e.g., whether the interlaced frame coded picture is an interlaced I-frame, P-frame or B-frame).
  • an encoder/decoder obtains field/frame type information for a current macroblock, obtains transform size information for plural blocks in the macroblock, selects a boundary between a first block in the macroblock and a second block for in-loop deblocking based at least in part on the transform size information and the field/frame type information, obtains pixel data from one or more field lines associated with the first block and from one or more field lines associated with the second block, and performs in-loop deblocking across the boundary using the obtained pixel data.
  • the in-loop deblocking comprises filtering operations performed on pixel data from field lines of same polarity only.
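  • Transform size determines where block-internal boundaries fall, which is the basis of the boundary selection described above. A sketch of that relationship (offsets and naming are illustrative; the exact boundaries per picture type and field/frame type follow FIGS. 26A through 29):

        def candidate_boundaries(transform_size, block_top, block_left):
            # an 8x8 transform contributes only the block's outer edges; 8x4, 4x8 and
            # 4x4 transforms add internal subblock boundaries inside the 8x8 block
            horizontal = [block_top]
            vertical = [block_left]
            if transform_size in ("8x4", "4x4"):
                horizontal.append(block_top + 4)
            if transform_size in ("4x8", "4x4"):
                vertical.append(block_left + 4)
            return horizontal, vertical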
  • FIG. 1 is a diagram showing block-based intraframe compression of an 8×8 block of pixels according to the prior art.
  • FIG. 2 is a diagram showing motion estimation in a video encoder according to the prior art.
  • FIG. 3 is a diagram showing block-based compression for an 8×8 block of prediction residuals in a video encoder according to the prior art.
  • FIG. 4 is a diagram showing block-based decompression for an 8×8 block of prediction residuals in a video decoder according to the prior art.
  • FIG. 5 is a diagram showing an interlaced frame according to the prior art.
  • FIG. 6 is a diagram showing field permuting of interlaced macroblocks according to the prior art.
  • FIG. 7 is a diagram showing filtered vertical block boundary pixels according to the prior art.
  • FIG. 8 is a diagram showing filtered horizontal block boundary pixels according to the prior art.
  • FIG. 9 is a diagram showing filtering between horizontally neighboring blocks in progressive P-frames according to the prior art.
  • FIG. 10 is a diagram showing filtered horizontal block boundary pixels in progressive P-frames according to the prior art.
  • FIG. 11 is a diagram showing filtered vertical block boundary pixels in progressive P-frames according to the prior art.
  • FIG. 12 is a diagram showing eight pixel pairs divided into two 4-pixel segments on the sides of the vertical boundary between two blocks for filtering in progressive frames according to the prior art.
  • FIG. 13 is a diagram showing pixels used in a filtering operation performed on the 3rd pixel pair of a 4-pixel segment in progressive frames according to the prior art.
  • FIG. 14 is a code diagram showing pseudo-code for a filtering operation performed on the 3rd pixel pair in a 4-pixel segment in progressive frames according to the prior art.
  • FIG. 15 is a code diagram showing pseudo-code for a filtering operation performed on the 1st, 2nd and 4th pixel pair in a 4-pixel segment in progressive frames according to the prior art.
  • FIG. 16 is a diagram showing pixels that are candidates for filtering in a 4:1:1 macroblock according to the prior art.
  • FIG. 17 is a block diagram of a suitable computing environment in conjunction with which several described embodiments may be implemented.
  • FIG. 18 is a block diagram of a generalized video encoder system in conjunction with which several described embodiments may be implemented.
  • FIG. 19 is a block diagram of a generalized video decoder system in conjunction with which several described embodiments may be implemented.
  • FIG. 20 is a diagram of a macroblock format used in several described embodiments.
  • FIG. 21A is a diagram of part of an interlaced video frame, showing alternating lines of a top field and a bottom field.
  • FIG. 21B is a diagram of the interlaced video frame organized for encoding/decoding as a frame.
  • FIG. 21C is a diagram of the interlaced video frame organized for encoding/decoding as fields.
  • FIG. 22 is a diagram showing a motion estimation/compensation loop with an in-loop deblocking filter in a video encoder.
  • FIG. 23 is a diagram showing a motion compensation loop with an in-loop deblocking filter in a video decoder.
  • FIG. 24 is a code diagram showing pseudo-code for performing in-loop deblocking filtering by processing horizontal boundaries followed by vertical boundaries.
  • FIG. 25 is a flow chart showing a technique for performing field-based deblocking filtering.
  • FIG. 26A is a diagram showing field-based filtering for horizontal block boundaries in interlaced I-frames, P-frames and B-frames.
  • FIG. 26B is a diagram showing field-based filtering for vertical block boundaries in interlaced I-frames, P-frames and B-frames.
  • FIGS. 27A-27B are diagrams showing loop filtering of luminance blocks in an interlaced field transform coded macroblock.
  • FIG. 28 is a flow chart showing a technique for using field/frame transform type and transform size to select block boundaries for in-loop deblocking filtering.
  • FIG. 29 is a diagram showing loop filtering of luminance blocks in an interlaced frame transform coded macroblock.
  • FIGS. 30A-30B are code diagrams showing pseudo-code for horizontal filtering and vertical filtering, respectively, in a macroblock in an interlaced I-frame.
  • FIGS. 31A-31C are code diagrams showing pseudo-code for horizontal filtering for luma and chroma blocks in a macroblock in an interlaced P-frame or B-frame.
  • FIGS. 32A-32C are code diagrams showing pseudo-code for vertical filtering for luma and chroma blocks, respectively, in a macroblock in an interlaced P-frame or B-frame.
  • FIG. 33 is a diagram showing an entry point layer bitstream syntax in a combined implementation.
  • FIG. 34 is a diagram showing a frame layer bitstream syntax for interlaced I-frames in a combined implementation.
  • FIG. 35 is a diagram showing a frame layer bitstream syntax for interlaced P-frames in a combined implementation.
  • FIG. 36 is a diagram showing a frame layer bitstream syntax for interlaced B-frames in a combined implementation.
  • FIG. 37 is a diagram showing a macroblock layer bitstream syntax for macroblocks of interlaced P-frames in a combined implementation.
  • a video encoder and decoder incorporate techniques for encoding and decoding interlaced video, and corresponding signaling techniques for use with a bit stream format or syntax comprising different layers or levels (e.g., sequence level, frame level, field level, macroblock level, and/or block level).
  • the various techniques and tools can be used in combination or independently. Different embodiments implement one or more of the described techniques and tools. Some techniques and tools described herein can be used in a video encoder or decoder, or in some other system not specifically limited to video encoding or decoding.
  • FIG. 17 illustrates a generalized example of a suitable computing environment 1700 in which several of the described embodiments may be implemented.
  • the computing environment 1700 is not intended to suggest any limitation as to scope of use or functionality, as the techniques and tools may be implemented in diverse general-purpose or special-purpose computing environments.
  • the computing environment 1700 includes at least one processing unit 1710 and memory 1720 .
  • the processing unit 1710 executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power.
  • the memory 1720 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two.
  • the memory 1720 stores software 1780 implementing a video encoder or decoder with one or more of the described techniques and tools.
  • a computing environment may have additional features.
  • the computing environment 1700 includes storage 1740 , one or more input devices 1750 , one or more output devices 1760 , and one or more communication connections 1770 .
  • An interconnection mechanism such as a bus, controller, or network interconnects the components of the computing environment 1700 .
  • operating system software provides an operating environment for other software executing in the computing environment 1700 , and coordinates activities of the components of the computing environment 1700 .
  • the storage 1740 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment 1700 .
  • the storage 1740 stores instructions for the software 1780 implementing the video encoder or decoder.
  • the input device(s) 1750 may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment 1700 .
  • the input device(s) 1750 may be a sound card, video card, TV tuner card, or similar device that accepts audio or video input in analog or digital form, or a CD-ROM or CD-RW that reads audio or video samples into the computing environment 1700 .
  • the output device(s) 1760 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 1700 .
  • the communication connection(s) 1770 enable communication over a communication medium to another computing entity.
  • the communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal.
  • a modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
  • Computer-readable media are any available media that can be accessed within a computing environment.
  • Computer-readable media include memory 1720 , storage 1740 , communication media, and combinations of any of the above.
  • program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the functionality of the program modules may be combined or split between program modules as desired in various embodiments.
  • Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
  • FIG. 18 is a block diagram of a generalized video encoder 1800 in conjunction with which some described embodiments may be implemented.
  • FIG. 19 is a block diagram of a generalized video decoder 1900 in conjunction with which some described embodiments may be implemented.
  • FIGS. 18 and 19 usually do not show side information indicating the encoder settings, modes, tables, etc. used for a video sequence, picture, macroblock, block, etc.
  • Such side information is sent in the output bitstream, typically after entropy encoding of the side information.
  • the format of the output bitstream can be a Windows Media Video version 9 format or other format.
  • the encoder 1800 and decoder 1900 process video pictures, which may be video frames, video fields or combinations of frames and fields.
  • the bitstream syntax and semantics at the picture and macroblock levels may depend on whether frames or fields are used. There may be changes to macroblock organization and overall timing as well.
  • the encoder 1800 and decoder 1900 are block-based and use a 4:2:0 macroblock format for frames, with each macroblock including four 8×8 luminance blocks (at times treated as one 16×16 macroblock) and two 8×8 chrominance blocks. For fields, the same or a different macroblock organization and format may be used.
  • the 8×8 blocks may be further sub-divided at different stages, e.g., at the frequency transform and entropy encoding stages. Example video frame organizations are described in more detail below.
  • the encoder 1800 and decoder 1900 are object-based, use a different macroblock or block format, or perform operations on sets of pixels of different size or configuration than 8×8 blocks and 16×16 macroblocks.
  • modules of the encoder or decoder can be added, omitted, split into multiple modules, combined with other modules, and/or replaced with like modules.
  • encoders or decoders with different modules and/or other configurations of modules perform one or more of the described techniques.
  • the encoder 1800 and decoder 1900 process video frames organized as follows.
  • a frame contains lines of spatial information of a video signal. For progressive video, these lines contain samples starting from one time instant and continuing through successive lines to the bottom of the frame.
  • a progressive video frame is divided into macroblocks such as the macroblock 2000 shown in FIG. 20 .
  • the macroblock 2000 includes four 8×8 luminance blocks (Y 0 through Y 3 ) and two 8×8 chrominance blocks that are co-located with the four luminance blocks but half resolution horizontally and vertically, following the conventional 4:2:0 macroblock format.
  • the 8×8 blocks may be further sub-divided at different stages, e.g., at the frequency transform (e.g., 8×4, 4×8 or 4×4 DCTs) and entropy encoding stages.
  • a progressive I-frame is an intra-coded progressive video frame.
  • a progressive P-frame is a progressive video frame coded using forward prediction, and a progressive B-frame is a progressive video frame coded using bi-directional prediction.
  • Progressive P- and B-frames may include intra-coded macroblocks as well as different types of predicted macroblocks.
  • An interlaced video frame consists of two scans of a frame—one comprising the even lines of the frame (the top field) and the other comprising the odd lines of the frame (the bottom field).
  • the two fields may represent two different time periods or they may be from the same time period.
  • FIG. 21A shows part of an interlaced video frame 2100 , including the alternating lines of the top field and bottom field at the top left part of the interlaced video frame 2100 .
  • FIG. 21B shows the interlaced video frame 2100 of FIG. 21A organized for encoding/decoding as a frame 2130 .
  • the interlaced video frame 2100 has been partitioned into macroblocks such as the macroblocks 2131 and 2132 , which use a 4:2:0 format as shown in FIG. 20 .
  • each macroblock 2131 , 2132 includes 8 lines from the top field alternating with 8 lines from the bottom field for 16 lines total, and each line is 16 pixels long.
  • top-field information and bottom-field information may be coded jointly or separately at any of various phases.
  • the macroblock itself may be field transform coded or frame transform coded. Field and frame transform coding for macroblocks is described in further detail below.
  • An interlaced I-frame is two intra-coded fields of an interlaced video frame, where a macroblock includes information for the two fields.
  • An interlaced P-frame is two fields of an interlaced video frame coded using forward prediction, and an interlaced B-frame is two fields of an interlaced video frame coded using bi-directional prediction, where a macroblock includes information for the two fields.
  • Interlaced P- and B-frames may include intra-coded macroblocks as well as different types of predicted macroblocks.
  • Interlaced BI-frames are a hybrid of interlaced I-frames and interlaced B-frames; they are intra-coded, but are not used as anchors for other frames.
  • FIG. 21C shows the interlaced video frame 2100 of FIG. 21A organized for encoding/decoding as fields 2160 .
  • Each of the two fields of the interlaced video frame 2100 is partitioned into macroblocks.
  • the top field is partitioned into macroblocks such as the macroblock 2161
  • the bottom field is partitioned into macroblocks such as the macroblock 2162 .
  • the macroblocks use a 4:2:0 format as shown in FIG. 20 , and the organization and placement of luminance blocks and chrominance blocks within the macroblocks are not shown.
  • the macroblock 2161 includes 16 lines from the top field and the macroblock 2162 includes 16 lines from the bottom field, and each line is 16 pixels long.
  • An interlaced I-field is a single, separately represented field of an interlaced video frame.
  • An interlaced P-field is a single, separately represented field of an interlaced video frame coded using forward prediction, and an interlaced B-field is a single, separately represented field of an interlaced video frame coded using bi-directional prediction.
  • Interlaced P- and B-fields may include intra-coded macroblocks as well as different types of predicted macroblocks.
  • Interlaced BI-fields are a hybrid of interlaced I-fields and interlaced B-fields; they are intra-coded, but are not used as anchors for other fields.
  • Interlaced video frames organized for encoding/decoding as fields can include various combinations of different field types.
  • such a frame can have the same field type in both the top and bottom fields or different field types in each field.
  • the possible combinations of field types include I/I, I/P, P/I, P/P, B/B, B/BI, BI/B, and BI/BI.
  • picture generally refers to source, coded or reconstructed image data.
  • a picture is a progressive video frame.
  • a picture may refer to an interlaced video frame, the top field of the frame, or the bottom field of the frame, depending on the context.
  • the encoder 1800 and decoder 1900 are object-based, use a different macroblock or block format, or perform operations on sets of pixels of different size or configuration than 8×8 blocks and 16×16 macroblocks.
  • FIG. 18 is a block diagram of a generalized video encoder system 1800 .
  • the encoder system 1800 receives a sequence of video pictures including a current picture 1805 (e.g., progressive video frame, interlaced video frame, or field of an interlaced video frame), and produces compressed video information 1895 as output.
  • Particular embodiments of video encoders typically use a variation or supplemented version of the generalized encoder 1800 .
  • the encoder system 1800 compresses predicted pictures and key pictures.
  • FIG. 18 shows a path for key pictures through the encoder system 1800 and a path for predicted pictures.
  • Many of the components of the encoder system 1800 are used for compressing both key pictures and predicted pictures. The exact operations performed by those components can vary depending on the type of information being compressed.
  • a predicted picture (e.g., progressive P-frame or B-frame, interlaced P-field or B-field, or interlaced P-frame or B-frame) is represented in terms of prediction (or difference) from one or more other pictures (which are typically referred to as reference pictures or anchors).
  • a prediction residual is the difference between what was predicted and the original picture.
  • a key picture (e.g., progressive I-frame, interlaced I-field, or interlaced I-frame) is compressed without reference to other pictures.
  • a motion estimator 1810 estimates motion of macroblocks or other sets of pixels of the current picture 1805 with respect to one or more reference pictures, for example, the reconstructed previous picture 1825 buffered in the picture store 1820 . If the current picture 1805 is a bi-directionally-predicted picture, a motion estimator 1810 estimates motion in the current picture 1805 with respect to up to four reconstructed reference pictures (for an interlaced B-field, for example). Typically, a motion estimator estimates motion in a B-picture with respect to one or more temporally previous reference pictures and one or more temporally future reference pictures. Accordingly, the encoder system 1800 can use the separate stores 1820 and 1822 for multiple reference pictures.
  • the motion estimator 1810 can estimate motion by pixel, ½ pixel, ¼ pixel, or other increments, and can switch the precision of the motion estimation on a picture-by-picture basis or other basis.
  • the motion estimator 1810 (and compensator 1830 ) also can switch between types of reference picture pixel interpolation (e.g., between bicubic and bilinear) on a per-frame or other basis.
  • the precision of the motion estimation can be the same or different horizontally and vertically.
  • the motion estimator 1810 outputs as side information motion information 1815 such as differential motion vector information.
  • the encoder 1800 encodes the motion information 1815 by, for example, computing one or more predictors for motion vectors, computing differentials between the motion vectors and predictors, and entropy coding the differentials.
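  • As a rough illustration of this differential coding step (the median-of-neighbors predictor and the entropy-coder callback below are assumptions for the sketch, not the encoder 1800's actual predictor rules), the flow can look like the following.

```c
typedef struct { int x, y; } MV;

/* Hypothetical median-of-three predictor; real predictor rules are
 * more elaborate and depend on coding context. */
static int median3(int a, int b, int c) {
    int lo = a < b ? a : b;
    int hi = a > b ? a : b;
    return c < lo ? lo : (c > hi ? hi : c);   /* clamp c into [lo, hi] */
}

static MV predict_mv(MV left, MV top, MV topright) {
    MV pred = { median3(left.x, top.x, topright.x),
                median3(left.y, top.y, topright.y) };
    return pred;
}

/* Encoder side: compute the differential (actual minus predictor) and
 * hand it to an entropy coder supplied by the caller. */
void encode_mv(MV actual, MV left, MV top, MV topright,
               void (*entropy_code_differential)(int dx, int dy)) {
    MV pred = predict_mv(left, top, topright);
    entropy_code_differential(actual.x - pred.x, actual.y - pred.y);
}
```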
  • a motion compensator 1830 combines a predictor with differential motion vector information.
  • the motion compensator 1830 applies the reconstructed motion vector to the reconstructed picture(s) 1825 to form a motion-compensated current picture 1835 .
  • the prediction is rarely perfect, however, and the difference between the motion-compensated current picture 1835 and the original current picture 1805 is the prediction residual 1845 .
  • the prediction residual 1845 is added to the motion compensated current picture 1835 to obtain a reconstructed picture that is closer to the original current picture 1805 . In lossy compression, however, some information is still lost from the original current picture 1805 .
  • a motion estimator and motion compensator apply another type of motion estimation/compensation.
  • a frequency transformer 1860 converts the spatial domain video information into frequency domain (i.e., spectral) data.
  • the frequency transformer 1860 applies a DCT, variant of DCT, or other block transform to blocks of the pixel data or prediction residual data, producing blocks of frequency transform coefficients.
  • the frequency transformer 1860 applies another conventional frequency transform such as a Fourier transform or uses wavelet or sub-band analysis.
  • the frequency transformer 1860 may apply an 8×8, 8×4, 4×8, 4×4 or other size frequency transform.
  • a quantizer 1870 then quantizes the blocks of spectral data coefficients.
  • the quantizer applies uniform, scalar quantization to the spectral data with a step-size that varies on a picture-by-picture basis or other basis.
  • the quantizer applies another type of quantization to the spectral data coefficients, for example, a non-uniform, vector, or non-adaptive quantization, or directly quantizes spatial domain data in an encoder system that does not use frequency transformations.
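  • A minimal sketch of uniform, scalar quantization with a varying step size (the rounding rule and the simple reconstruction below are illustrative assumptions, not the codec's exact quantizer):

```c
#include <stdlib.h>

/* Quantize a block of transform coefficients with a uniform scalar
 * quantizer; step is the step size chosen for the picture (or other
 * level).  Magnitudes are rounded to the nearest multiple of step. */
void quantize_block(const int *coeffs, int *levels, int n, int step) {
    for (int i = 0; i < n; i++) {
        int mag = (abs(coeffs[i]) + step / 2) / step;
        levels[i] = (coeffs[i] < 0) ? -mag : mag;
    }
}

/* Inverse quantization, as used by the decoder and by the encoder's
 * local reconstruction path. */
void dequantize_block(const int *levels, int *coeffs, int n, int step) {
    for (int i = 0; i < n; i++)
        coeffs[i] = levels[i] * step;
}
```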
  • the encoder 1800 can use frame dropping, adaptive filtering, or other techniques for rate control.
  • the encoder 1800 may use special signaling for a skipped macroblock, which is a macroblock that has no information of certain types (e.g., no motion information for the macroblock and no residual information).
  • When a reconstructed current picture is needed for subsequent motion estimation/compensation, an inverse quantizer 1876 performs inverse quantization on the quantized spectral data coefficients. An inverse frequency transformer 1866 then performs the inverse of the operations of the frequency transformer 1860 , producing a reconstructed prediction residual (for a predicted picture) or a reconstructed key picture. If the current picture 1805 was a key picture, the reconstructed key picture is taken as the reconstructed current picture (not shown). If the current picture 1805 was a predicted picture, the reconstructed prediction residual is added to the motion-compensated current picture 1835 to form the reconstructed current picture. One or both of the picture stores 1820 , 1822 buffers the reconstructed current picture for use in motion compensated prediction. In some embodiments, the encoder applies a de-blocking filter to the reconstructed frame to adaptively smooth discontinuities and other artifacts in the picture.
  • the entropy coder 1880 compresses the output of the quantizer 1870 as well as certain side information (e.g., motion information 1815 , quantization step size).
  • Typical entropy coding techniques include arithmetic coding, differential coding, Huffman coding, run length coding, LZ coding, dictionary coding, and combinations of the above.
  • the entropy coder 1880 typically uses different coding techniques for different kinds of information (e.g., DC coefficients, AC coefficients, different kinds of side information), and can choose from among multiple code tables within a particular coding technique.
  • the entropy coder 1880 provides compressed video information 1895 to the multiplexer [“MUX”] 1890 .
  • the MUX 1890 may include a buffer, and a buffer level indicator may be fed back to bit rate adaptive modules for rate control.
  • the compressed video information 1895 can be channel coded for transmission over the network. The channel coding can apply error detection and correction data to the compressed video information 1895 .
  • FIG. 19 is a block diagram of a general video decoder system 1900 .
  • the decoder system 1900 receives information 1995 for a compressed sequence of video pictures and produces output including a reconstructed picture 1905 (e.g., progressive video frame, interlaced video frame, or field of an interlaced video frame).
  • Particular embodiments of video decoders typically use a variation or supplemented version of the generalized decoder 1900 .
  • the decoder system 1900 decompresses predicted pictures and key pictures.
  • FIG. 19 shows a path for key pictures through the decoder system 1900 and a path for forward-predicted pictures.
  • Many of the components of the decoder system 1900 are used for decompressing both key pictures and predicted pictures. The exact operations performed by those components can vary depending on the type of information being decompressed.
  • a DEMUX 1990 receives the information 1995 for the compressed video sequence and makes the received information available to the entropy decoder 1980 .
  • the DEMUX 1990 may include a jitter buffer and other buffers as well.
  • the compressed video information can be channel decoded and processed for error detection and correction.
  • the entropy decoder 1980 entropy decodes entropy-coded quantized data as well as entropy-coded side information (e.g., motion information 1915 , quantization step size), typically applying the inverse of the entropy encoding performed in the encoder.
  • Entropy decoding techniques include arithmetic decoding, differential decoding, Huffman decoding, run length decoding, LZ decoding, dictionary decoding, and combinations of the above.
  • the entropy decoder 1980 typically uses different decoding techniques for different kinds of information (e.g., DC coefficients, AC coefficients, different kinds of side information), and can choose from among multiple code tables within a particular decoding technique.
  • the decoder 1900 decodes the motion information 1915 by, for example, computing one or more predictors for motion vectors, entropy decoding differential motion vectors, and combining decoded differential motion vectors with predictors to reconstruct motion vectors.
  • a motion compensator 1930 applies motion information 1915 to one or more reference pictures 1925 to form a prediction 1935 of the picture 1905 being reconstructed.
  • the motion compensator 1930 uses one or more macroblock motion vectors to find macroblock(s) in the reference picture(s) 1925 .
  • One or more picture stores (e.g., picture stores 1920 , 1922 ) store previously reconstructed pictures for use as reference pictures.
  • B-pictures have more than one reference picture (e.g., at least one temporally previous reference picture and at least one temporally future reference picture). Accordingly, the decoder system 1900 can use separate picture stores 1920 and 1922 for multiple reference pictures.
  • the motion compensator 1930 can compensate for motion at pixel, ½ pixel, ¼ pixel, or other increments, and can switch the precision of the motion compensation on a picture-by-picture basis or other basis.
  • the motion compensator 1930 also can switch between types of reference picture pixel interpolation (e.g., between bicubic and bilinear) on a per-frame or other basis.
  • the precision of the motion compensation can be the same or different horizontally and vertically.
  • a motion compensator applies another type of motion compensation.
  • the prediction by the motion compensator is rarely perfect, so the decoder 1900 also reconstructs prediction residuals.
  • An inverse quantizer 1970 inverse quantizes entropy-decoded data.
  • the inverse quantizer applies uniform, scalar inverse quantization to the entropy-decoded data with a step-size that varies on a picture-by-picture basis or other basis.
  • the inverse quantizer applies another type of inverse quantization to the data, for example, to reconstruct after a non-uniform, vector, or non-adaptive quantization, or directly inverse quantizes spatial domain data in a decoder system that does not use inverse frequency transformations.
  • An inverse frequency transformer 1960 converts the quantized, frequency domain data into spatial domain video information.
  • the inverse frequency transformer 1960 applies an inverse DCT [“IDCT”], variant of IDCT, or other inverse block transform to blocks of the frequency transform coefficients, producing pixel data or prediction residual data for key pictures or predicted pictures, respectively.
  • the inverse frequency transformer 1960 applies another conventional inverse frequency transform such as an inverse Fourier transform or uses wavelet or sub-band synthesis.
  • the inverse frequency transformer 1960 may apply an 8×8, 8×4, 4×8, 4×4, or other size inverse frequency transform.
  • For a predicted picture, the decoder 1900 combines the reconstructed prediction residual 1945 with the motion compensated prediction 1935 to form the reconstructed picture 1905 .
  • When the decoder needs a reconstructed picture 1905 for subsequent motion compensation, one or both of the picture stores (e.g., picture store 1920 ) buffers the reconstructed picture 1905 for use in predicting the next picture.
  • the decoder 1900 applies a de-blocking filter to the reconstructed picture to adaptively smooth discontinuities and other artifacts in the picture.
  • Various techniques for in-loop deblocking filtering are described below.
  • a video encoder/decoder can use a deblocking filter to perform in-loop filtering across boundary rows and/or columns in the frame. For example, a video encoder/decoder processes a reconstructed reference frame to reduce blocking artifacts prior to motion estimation/compensation using the reference frame. With in-loop deblocking, a reference frame becomes a better reference candidate to encode the following frame. The deblocking filter improves the quality of motion estimation/compensation, resulting in better prediction and lower bitrate for prediction residuals.
  • FIG. 22 shows a motion estimation/compensation loop 2200 in a video encoder that includes a deblocking filter.
  • Motion estimation/compensation loop 2200 includes motion estimation 2210 and motion compensation 2220 of an input picture 2205 .
  • Motion estimation 2210 finds motion information for the input picture 2205 with respect to a reference picture 2295 (or pictures), which is typically a previously reconstructed intra- or inter-coded picture. Alternatively, the loop filter is applied to backward-predicted or bi-directionally-predicted pictures.
  • Motion estimation 2210 produces motion information such as a set of one or more motion vectors for the input picture 2205 .
  • Motion compensation 2220 applies the motion information to the reference picture(s) 2295 to produce a predicted picture 2225 . The prediction is rarely perfect, so the encoder computes 2230 the error or residual 2235 as the difference between the original input picture 2205 and the predicted picture 2225 .
  • Frequency transformer 2240 frequency transforms the prediction residual 2235 , and quantizer 2250 quantizes the frequency coefficients for the prediction residual 2235 before passing them to downstream components of the encoder.
  • Inverse quantizer 2260 inverse quantizes the frequency coefficients of the prediction residual 2235
  • inverse frequency transformer 2270 changes the prediction residual 2235 back to the spatial domain, producing a reconstructed error 2275 for the input picture 2205 .
  • the encoder combines 2280 the reconstructed error 2275 with the predicted picture 2225 to produce a reconstructed picture.
  • the encoder applies the deblocking loop filter 2290 to the reconstructed picture and stores it in a picture buffer 2292 for use as a possible reference picture 2295 for the next input picture.
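  • In code form, the loop 2200 can be summarized as below; the types and stage functions are placeholders standing in for the stages 2210 through 2292 described above (assumptions for the sketch, not an actual API), and would have to be supplied by a real implementation.

```c
typedef struct Picture Picture;          /* opaque picture type (assumed)  */
typedef struct MotionInfo MotionInfo;    /* motion vectors etc. (assumed)  */

/* Placeholders for the numbered stages of FIG. 22. */
MotionInfo *motion_estimate(const Picture *cur, const Picture *ref);      /* 2210 */
Picture    *motion_compensate(const MotionInfo *mi, const Picture *ref);  /* 2220 */
Picture    *picture_subtract(const Picture *a, const Picture *b);         /* 2230 */
Picture    *picture_add(const Picture *a, const Picture *b);              /* 2280 */
void        transform_and_quantize(Picture *residual);                    /* 2240, 2250 */
void        dequantize_and_inverse_transform(Picture *residual);          /* 2260, 2270 */
void        deblock_loop_filter(Picture *pic);                            /* 2290 */
void        store_reference(Picture *pic);                                /* 2292 */

/* One pass through the motion estimation/compensation loop 2200. */
void encode_inter_picture(const Picture *input, const Picture *ref) {
    MotionInfo *mi    = motion_estimate(input, ref);
    Picture *pred     = motion_compensate(mi, ref);
    Picture *residual = picture_subtract(input, pred);

    transform_and_quantize(residual);
    /* ...quantized coefficients and motion info go to the entropy coder... */

    dequantize_and_inverse_transform(residual);
    Picture *recon = picture_add(pred, residual);

    deblock_loop_filter(recon);   /* filter before the picture is used as a reference */
    store_reference(recon);
}
```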
  • FIG. 23 shows a motion compensation loop 2300 in a video decoder that includes a deblocking filter.
  • Motion compensation loop 2300 includes motion compensation 2320 , which applies motion information 2315 received from the encoder to a reference picture 2395 (or pictures) to produce a predicted picture 2325 .
  • inverse quantizer 2360 inverse quantizes the frequency coefficients of a prediction residual, and inverse frequency transformer 2370 changes the prediction residual back to the spatial domain, producing a reconstructed error 2375 .
  • the decoder combines 2380 the reconstructed error 2375 with the predicted picture 2325 to produce reconstructed picture 2385 , which is output from the decoder.
  • the decoder applies a deblocking loop filter 2390 to the reconstructed picture 2385 and stores the reconstructed picture in a picture buffer 2392 for use as a possible reference picture 2395 for the next input picture.
  • the motion estimation/compensation loop 2200 or the motion compensation loop 2300 can be changed, but the encoder/decoder still applies the deblocking loop filter.
  • Described embodiments include techniques and tools for performing in-loop deblocking filtering in interlace frame coded pictures (e.g., interlaced P-frames, interlaced B-frames, interlaced I-frames, etc.) to reduce blocking artifacts.
  • Overall use/non-use of in-loop deblocking can be signaled, for example, at entry point level or sequence level in a bitstream, so as to indicate whether or not in-loop deblocking is enabled from the entry point or in the sequence.
  • 16×16 macroblocks are subdivided into 8×8 blocks, and each inter-coded block can be transform coded using an 8×8 transform, two 4×8 transforms, two 8×4 transforms, or four 4×4 transforms.
  • an encoder/decoder can permute the macroblock in such a way that all the even lines (top field lines) of the macroblock are grouped at the top of the macroblock and all the odd lines (bottom field lines) are grouped at the bottom of the macroblock.
  • the effect of the permutation on the macroblock is to make each 8×8 block inside the macroblock contain only information from one particular field. If the macroblock is permuted in this way, the macroblock is deemed to be field coded. If the macroblock is not permuted in this way, the macroblock is deemed to be frame coded.
  • Field coding shifts the location of the horizontal block boundaries on the final re-interlaced macroblock/frame. For example, when a macroblock is field coded with all 8×8 blocks, the internal 8×8 block boundary of the macroblock will be shifted to the top and bottom macroblock boundaries.
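  • A small sketch of this permutation for the 16-line luma part of a macroblock (the array layout and function name are assumptions for illustration; the inverse permutation, i.e., re-interlacing, simply reverses the copies):

```c
#include <stdint.h>
#include <string.h>

#define MB_HEIGHT 16
#define MB_WIDTH  16

/* Permute a frame-ordered macroblock so the 8 even (top field) lines
 * come first and the 8 odd (bottom field) lines come last.  After the
 * permutation, each 8x8 block holds samples from a single field. */
void field_permute_macroblock(uint8_t mb[MB_HEIGHT][MB_WIDTH]) {
    uint8_t tmp[MB_HEIGHT][MB_WIDTH];
    for (int i = 0; i < MB_HEIGHT / 2; i++) {
        memcpy(tmp[i],                 mb[2 * i],     MB_WIDTH);  /* top field    */
        memcpy(tmp[i + MB_HEIGHT / 2], mb[2 * i + 1], MB_WIDTH);  /* bottom field */
    }
    memcpy(mb, tmp, sizeof(tmp));
}
```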
  • filtering lines of different fields together can lead to blurring and distortion due to the fact that different fields are scanned at different times.
  • described embodiments implement one or more techniques and tools for performing in-loop deblocking filtering in interlaced video including, but not limited to, the following:
  • an encoder/decoder performs in-loop deblocking filtering by processing horizontal boundaries first, followed by vertical boundaries.
  • the horizontal boundaries are processed one macroblock at a time in raster scan order.
  • the vertical edges are processed one macroblock at a time in raster scan order.
  • Pseudo-code 2400 in FIG. 24 describes this ordered filtering process.
  • Other valid implementations of the filtering process are possible but are not shown for the sake of simplicity.
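  • The pseudo-code 2400 itself is in FIG. 24 and is not reproduced here; the sketch below only captures the ordering rule stated above (all horizontal boundaries first, then all vertical boundaries, each pass visiting macroblocks in raster scan order), with placeholder per-macroblock filter functions.

```c
/* Placeholder per-macroblock edge filters; their internals depend on
 * transform size and field/frame type, as described below. */
void filter_horizontal_edges_of_mb(int mb_row, int mb_col);
void filter_vertical_edges_of_mb(int mb_row, int mb_col);

/* Deblocking order: all horizontal boundaries first, then all vertical
 * boundaries, each pass in macroblock raster scan order. */
void deblock_frame(int mb_rows, int mb_cols) {
    for (int r = 0; r < mb_rows; r++)
        for (int c = 0; c < mb_cols; c++)
            filter_horizontal_edges_of_mb(r, c);

    for (int r = 0; r < mb_rows; r++)
        for (int c = 0; c < mb_cols; c++)
            filter_vertical_edges_of_mb(r, c);
}
```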
  • in some implementations, an encoder/decoder performs adaptive filtering operations on segments of four pixels.
  • when the eight pixel pairs that make up the vertical boundary between two blocks are adaptively filtered, the eight pixel pairs are divided into two 4-pixel segments as shown in FIG. 12 .
  • the third pixel pair is adaptively filtered first as indicated by the Xs in FIG. 12 .
  • the result of this filter operation determines whether the other three pixels in the segment are also adaptively filtered.
  • FIG. 13 shows the pixels that are used in the filtering operation performed on the 3 rd pixel pair.
  • pixels P 4 and P 5 are the pixels that may be changed in the filter operation.
  • the pseudo-code 1400 of FIG. 14 shows the adaptive filtering operation performed on the 3 rd pixel pair in each segment.
  • the encoder/decoder determines whether to filter the other three pixels based on the pixel values in the line of pixels containing the 3 rd pixel pair.
  • variable PQUANT represents a quantization step size.
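  • The sketch below follows only the control flow stated above (the 3rd pixel pair of each 4-pixel segment is filtered first, and its result decides whether the other three pairs are filtered); the threshold test against PQUANT and the smoothing step are simplified stand-ins, not the operations of the pseudo-code 1400 in FIG. 14.

```c
#include <stdlib.h>

/* Simplified per-line filter across a vertical boundary.  row points at
 * the 8 pixels of one line straddling the boundary (4 on each side);
 * returns nonzero if the pair P4/P5 was modified.  Both the threshold
 * test and the smoothing are illustrative only. */
static int filter_line(unsigned char *row, int pquant) {
    int d = row[3] - row[4];            /* step across the boundary (P4 - P5) */
    if (abs(d) >= 2 * pquant)           /* assumed "leave real edges alone" test */
        return 0;
    row[3] = (unsigned char)(row[3] - d / 4);
    row[4] = (unsigned char)(row[4] + d / 4);
    return 1;
}

/* Filter one 4-pixel segment of a vertical block boundary.  rows[i]
 * points at the 8 pixels of the i-th line of the segment.  The 3rd
 * line is filtered first and gates the remaining three lines. */
void filter_segment(unsigned char *rows[4], int pquant) {
    if (filter_line(rows[2], pquant)) {
        filter_line(rows[0], pquant);
        filter_line(rows[1], pquant);
        filter_line(rows[3], pquant);
    }
}
```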
  • an encoder/decoder performs field-based in-loop deblocking filtering. For example, an encoder/decoder filters top field lines and bottom field lines separately during in-loop deblocking filtering.
  • FIGS. 12 , 13 , 14 and 15 depict the loop filtering decision process for progressive frames, which involves deciding whether to perform loop filtering for four adjacent rows (for filtering across a vertical boundary, as shown in FIG. 12 ) or columns (for filtering across a horizontal boundary) of samples at a time, on the four samples on each side of the vertical or horizontal boundary.
  • the filter operations described above with reference to FIGS. 12 , 13 , 14 and 15 are modified such that the filtering is always done using the same field lines (i.e., without mixing samples of different field polarities).
  • FIG. 25 shows a technique 2500 for performing field-based deblocking filtering.
  • an encoder/decoder gets pixel data from field lines having the same polarity (e.g., top or bottom) in a current block and/or neighboring block(s).
  • the encoder/decoder performs in-loop deblocking across a boundary within the current block or between the current block and a neighboring block.
  • an encoder/decoder makes a loop filtering decision for a vertical block boundary using four alternating rows of same-polarity samples instead of adjacent rows of mixed-polarity samples.
  • the encoder/decoder makes a loop filtering decision for the two even field lines closest to the horizontal block boundary using the four even field lines on each side of the boundary.
  • the encoder/decoder makes the decision for the two odd field lines closest to the boundary using the four odd field lines on each side of the boundary.
  • FIGS. 26A-26B show examples of field-based filtering for horizontal and vertical block boundaries, respectively.
  • in FIG. 26A , for a horizontal block boundary between a current block 2610 and a neighboring block 2620 below the current block, the two top field lines are filtered across the block boundary using top field lines only, and the two bottom field lines are filtered across the block boundary using bottom field lines only.
  • in FIG. 26B , for a vertical block boundary between the current block 2610 and a neighboring block 2630 to the right of the current block, the top field and the bottom field are filtered separately across the block boundary.
  • FIG. 26B shows filtering of the top field lines across the vertical block boundary.
  • an encoder/decoder performs filtering of pixels in a different way (for example, using different combinations of pixels for filtering, or by performing different filtering operations), but still filters only lines of the same fields together.
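  • A sketch of how the same-polarity lines for one such decision might be gathered from a frame-organized (interleaved) plane; the plane layout, parameter names, and the assumption that the boundary column is at least four samples from the left edge are all illustrative.

```c
#include <stdint.h>
#include <stddef.h>

/* For a vertical block boundary in a frame-organized (interleaved)
 * picture, collect the four alternating same-polarity rows that form
 * one filtering segment.  polarity is 0 for the top field (even lines)
 * or 1 for the bottom field (odd lines).  Each returned pointer
 * addresses the 4 pixels left of the boundary; the 4 pixels to the
 * right follow contiguously in the same row. */
void gather_same_field_rows(uint8_t *plane, int stride,
                            int first_row, int boundary_col,
                            int polarity, uint8_t *rows[4]) {
    for (int i = 0; i < 4; i++) {
        int row = first_row + polarity + 2 * i;   /* skip lines of the other field */
        rows[i] = plane + (size_t)row * stride + (boundary_col - 4);
    }
}
```

  • The four gathered rows can then be handed to a segment filter such as the filter_segment sketch above, so that top field lines and bottom field lines are never mixed.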
  • FIGS. 27A-27B show loop filtering of luminance blocks in an interlaced field-coded macroblock in some implementations.
  • FIG. 27A shows field coding of luminance blocks of an interlaced macroblock. Field coding is applied to the four 8×8 luminance blocks 2710 of a 16×16 interlaced macroblock yielding field-coded luminance blocks 2720 , shown with horizontal and vertical block boundaries (in bold). Each of the four field transform coded luminance blocks 2720 contains only information from the top field (even numbered lines) or the bottom field (odd numbered lines).
  • FIG. 27B shows reconstruction and loop filtering of the field-coded luminance blocks 2720 .
  • Field coding shifts the location of the horizontal block boundaries on the final re-interlaced macroblock/frame. As shown in FIG. 27B , if a macroblock is field coded with all 8×8 blocks, the internal 8×8 block boundary of the macroblock will be shifted to the top and bottom macroblock boundaries, since there is effectively no boundary between lines 14 and 1 , as they are from different fields. The location of block boundaries also depends on transform size.
  • an encoder/decoder uses field/frame type and transform size to determine block boundaries for in-loop deblocking filtering.
  • FIG. 28 shows a technique 2800 for using field/frame transform type and transform size to select block boundaries for in-loop deblocking filtering.
  • an encoder/decoder gets transform size and field/frame type information for a current macroblock.
  • the encoder/decoder selects block boundary lines for in-loop deblocking based at least in part on the transform size and field/frame type information.
  • the encoder/decoder performs in-loop deblocking on the selected boundary lines.
  • an encoder/decoder takes into account block/subblock transform size (e.g., 4×4, 4×8, 8×4, or 8×8) and field/frame transform type to determine the block boundaries to be filtered in a current macroblock.
  • the encoder/decoder then performs in-loop deblocking on those boundaries using a field-based deblocking filter.
  • the encoder/decoder performs an inverse permutation (re-interlacing) to form the final reconstructed frame.
  • FIGS. 27B and 29 show examples of how the boundaries to be filtered can depend on field/frame type for macroblocks within 8×8 transform size blocks.
  • FIG. 27B shows loop filtering of field-coded 8×8 luminance blocks 2720 .
  • in the reconstructed luminance blocks 2730 , there is in effect no internal horizontal boundary between blocks (no boundary between lines 7 and 8 ). Instead, the block boundaries coincide with the macroblock boundaries, which are already being filtered. No internal horizontal boundary is filtered.
  • Filtered horizontal block boundary 2740 is a block boundary at the bottom of the macroblock and is filtered using top field lines 2750 and bottom field lines 2760 . In field-based filtering, top field lines are filtered together and bottom field lines are filtered together without mixing fields.
  • FIG. 29 shows loop filtering of frame-coded 8×8 luminance blocks 2910 .
  • An internal horizontal block boundary lies between bottom field line 7 and top field line 8 .
  • the internal block boundary (shown as filtered horizontal block boundary 2930 ) is filtered using top field lines 2940 and bottom field lines 2942 .
  • Filtered horizontal block boundary 2932 is a block boundary at the bottom of the macroblock and is filtered using top field lines 2950 and bottom field lines 2952 . Again, top field lines are filtered together and bottom field lines are filtered together without mixing fields.
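  • The two 8×8 cases just described (FIGS. 27B and 29) reduce to a small decision, sketched below; smaller transform sizes (8×4, 4×4) add further boundaries whose positions follow from the same re-interlacing mapping and are omitted here for brevity.

```c
#include <stdbool.h>

/* For a 16-line luma macroblock whose blocks are 8x8 transform coded,
 * decide whether an internal horizontal block boundary is filtered and,
 * if so, report the frame rows that straddle it. */
bool internal_horizontal_boundary_8x8(bool fieldtx, int *row_above, int *row_below) {
    if (fieldtx) {
        /* Field coded: after re-interlacing, the coded internal boundary
         * maps onto the macroblock's own top/bottom edges, which are
         * handled as macroblock boundaries, so nothing internal is filtered. */
        return false;
    }
    /* Frame coded: the internal boundary lies between frame rows 7 and 8. */
    *row_above = 7;
    *row_below = 8;
    return true;
}
```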
  • FIGS. 30A-32B show examples of how an encoder/decoder determines block boundaries to be filtered in one implementation. Other implementations are possible.
  • row and column numbers represent rows and columns in current macroblocks and neighboring macroblocks. Row/column numbers 0 - 15 are in a current macroblock, and row/column numbers greater than 15 are in a neighboring macroblock. Block index numbers (Y 0 , Y 1 , etc.) follow the convention shown in FIG. 20 , after field/frame coding. Field/frame transform type is indicated by the variable FIELDTX.
  • FIELDTX is a macroblock-level bitstream element that is explicitly signaled in intra-coded macroblocks and inferred from another macroblock-level bitstream element (MBMODE) in inter-coded macroblocks. FIELDTX and MBMODE are explained in further detail in Section V, below.
  • each macroblock is 8×8 transform coded.
  • the horizontal block boundary filtering starts by filtering the intra-macroblock horizontal boundary only if the current macroblock is frame-coded. Next, the horizontal block boundary between the current macroblock and the macroblock directly below it (if available) is filtered.
  • the pseudo-code 3000 in FIG. 30A describes the process of horizontal filtering for a macroblock in an interlaced I-frame.
  • Vertical block boundary filtering starts by filtering the internal vertical boundary and then filtering the boundary between the current macroblock and the right neighboring macroblock (if available).
  • the pseudo-code 3010 in FIG. 30B describes the process of the vertical filtering for a macroblock in an interlaced I-frame.
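  • The pseudo-code 3000 and 3010 are in FIGS. 30A-30B; the sketch below only restates the ordering rules above for one interlaced I-frame macroblock, with placeholder boundary filters.

```c
#include <stdbool.h>

enum { INTERNAL_BOUNDARY, BOUNDARY_WITH_NEIGHBOR };

/* Placeholder edge filters; each call smooths one boundary with the
 * field-based filter described earlier. */
void filter_horizontal_boundary(int mb_row, int mb_col, int which);
void filter_vertical_boundary(int mb_row, int mb_col, int which);

/* Horizontal filtering for one macroblock of an interlaced I-frame
 * (approximating the pseudo-code 3000 of FIG. 30A). */
void iframe_filter_horizontal(int mb_row, int mb_col, int mb_rows, bool frame_coded) {
    if (frame_coded)              /* internal boundary exists only when frame coded */
        filter_horizontal_boundary(mb_row, mb_col, INTERNAL_BOUNDARY);
    if (mb_row + 1 < mb_rows)     /* boundary with the macroblock directly below */
        filter_horizontal_boundary(mb_row, mb_col, BOUNDARY_WITH_NEIGHBOR);
}

/* Vertical filtering (approximating the pseudo-code 3010 of FIG. 30B). */
void iframe_filter_vertical(int mb_row, int mb_col, int mb_cols) {
    filter_vertical_boundary(mb_row, mb_col, INTERNAL_BOUNDARY);
    if (mb_col + 1 < mb_cols)     /* boundary with the right neighboring macroblock */
        filter_vertical_boundary(mb_row, mb_col, BOUNDARY_WITH_NEIGHBOR);
}
```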
  • each macroblock may be 4×4, 4×8, 8×4, or 8×8 transform coded.
  • the horizontal block boundary filtering occurs in the following order of blocks: Y 0 , Y 1 , Y 2 , Y 3 , C b , C r .
  • the processing of the luma blocks depends on field/frame coding type.
  • the pseudo-code 3100 in FIGS. 31A-B and pseudo-code 3110 in FIG. 31C describe the process of horizontal filtering for luma and chroma blocks, respectively, for macroblocks in interlaced P-frames or B-frames.
  • the vertical block boundary filtering occurs in the same order of blocks: Y 0 , Y 1 , Y 2 , Y 3 , C b , C r .
  • the processing of the luma blocks depends on field/frame coding type.
  • the pseudo-code 3200 in FIGS. 32A-B and pseudo-code 3210 in FIG. 32C describe the process of vertical filtering for luma and chroma blocks, respectively, for macroblocks in interlaced P-frames or B-frames.
  • an encoder/decoder uses different rules to determine which block and/or subblock boundaries are filtered or the order in which they are filtered, but still uses field/frame coding type and transform size to determine which boundaries are filtered.
  • an encoder/decoder performs filtering operations in a different way (for example, using different combinations of pixels for filtering, or by performing different filtering operations).
  • data for interlaced frame coded pictures (e.g., interlaced P-frames, interlaced B-frames, interlaced I-frames, etc.) is presented in the form of a bitstream having plural layers (e.g., sequence, entry point, frame, field, macroblock, block and/or sub-block layers).
  • arrow paths show the possible flows of syntax elements.
  • Syntax elements shown with square-edged boundaries indicate fixed-length syntax elements; those with rounded boundaries indicate variable-length syntax elements and those with a rounded boundary within an outer rounded boundary indicate a syntax element (e.g., a bitplane) made up of simpler syntax elements.
  • a fixed-length syntax element is defined to be a syntax element for which the length of the syntax element is not dependent on data in the syntax element itself; the length of a fixed-length syntax element is either constant or determined by prior data in the syntax flow.
  • a lower layer in a layer diagram (e.g., a macroblock layer in a frame-layer diagram) is indicated by a rectangle within a rectangle.
  • Entry-point-level bitstream elements are shown in FIG. 33 .
  • an entry point marks a position in a bitstream (e.g., an I-frame or other key frame) at which a decoder can begin decoding. In other words, no pictures before the entry point in the bitstream are needed to decode pictures after the entry point.
  • An entry point header can be used to signal changes in coding control parameters (e.g., enabling or disabling compression tools, such as in-loop deblocking filtering, for frames following an entry point).
  • frame-level bitstream elements for interlaced I-frames, P-frames, and B-frames are shown in FIGS. 34 , 35 , and 36 , respectively.
  • Frame-level bitstream elements for interlaced BI-frames are identical to those for interlaced I-frames.
  • Data for each frame consists of a frame header followed by data for the macroblock layer (whether for intra or various inter type macroblocks).
  • bitstream elements that make up the macroblock layer for interlaced P-frames are shown in FIG. 37 .
  • Bitstream elements in the macroblock layer for interlaced P-frames (e.g., FIELDTX) may be present for macroblocks in other interlaced pictures (e.g., interlaced B-frames, interlaced I-frames, etc.).
  • The following describes bitstream elements in the frame and macroblock layers that are related to signaling for interlaced pictures. Although the selected bitstream elements are described in the context of a particular layer, some bitstream elements can be used in more than one layer.
  • VSTRANSFORM (1 Bit)
  • VSTRANSFORM is a 1-bit syntax element that indicates whether variable-sized transform coding is enabled.
  • FIGS. 34 , 35 , and 36 are diagrams showing frame-level bitstream syntaxes for interlaced I-frames, P-frames, and B-frames, respectively. Specific bitstream elements are described below.
  • FCM Frame Coding Mode
  • FCM is a variable length codeword [“VLC”] used to indicate the picture coding type.
  • FCM takes on values for frame coding modes as shown in Table 1 below:
  • PTYPE is a variable size syntax element present in the frame header for interlaced P-frames and interlaced B-frames (or other kinds of interlaced frames such as interlaced I-frames). PTYPE takes on values for different frame types according to Table 2 below.
  • TTFRM Frame-Level Transform Type
  • TTFRM signals the transform type used to transform the 8×8 pixel error signal in predicted blocks.
  • the 8×8 error blocks may be transformed using an 8×8 transform, two 8×4 transforms, two 4×8 transforms or four 4×4 transforms.
  • FIELDTX is a bitplane indicating whether macroblocks in an interlaced I-frame are frame-coded or field-coded. FIELDTX is explained in further detail below.
  • FIG. 37 is a diagram showing a macroblock-level bitstream syntax for macroblocks of interlaced P-frames in the combined implementation. Specific bitstream elements are described below. Data for a macroblock consists of a macroblock header followed by block layer data. Bitstream elements in the macroblock layer for interlaced P-frames (e.g., FIELDTX) may potentially be present for macroblocks in other interlaced pictures (e.g., interlaced B-frames, etc.).
  • MBMODE Macroblock Mode
  • MBMODE is a variable-size syntax element that jointly specifies macroblock type (e.g., 1 MV, 2 Field MV, 4 Field MV, 4 Frame MV or Intra), transform type (e.g., field, frame, or no coded blocks), and the presence of differential motion vector data for 1 MV macroblocks.
  • TTMB MB-Level Transform Type
  • TTMB specifies a transform type, transform type signal level, and subblock pattern.
  • TTBLK indicates the transform type used for the block. TTBLK is not present for the first coded block since transform type for that block is joint coded in TTMB. TTBLK is present for all the remaining coded blocks and indicates the transform type. If the transform type is 8×4 or 4×8, the subblock pattern is decoded as part of TTMB (for the first coded block) or TTBLK (for each remaining coded block after the first one). If the transform type is 4×4, the subblock pattern is encoded in SUBBLKPAT at the block level for each coded block.
  • the decoder may still need information about which subblocks have non-zero coefficients. If the transform type is 8×4 or 4×8, the subblock pattern is decoded as part of TTMB (for the first coded block) or SUBBLKPAT (for each remaining coded block). If the transform type is 4×4, the subblock pattern is encoded in SUBBLKPAT at the block level for each coded block.
  • the decoder needs information about which subblocks have non-zero coefficients.
  • the subblock pattern is encoded in SUBBLKPAT at the block level for each coded block.
  • a subblock pattern indicates no non-zero coefficients are present for a subblock, then no additional coefficient information for that subblock is present in the bitstream.
  • data for the top subblock (if present) is coded first followed by data for the bottom subblock (if present).
  • data for the left subblock (if present) is coded first followed by data for the right subblock (if present).
  • data for the upper left subblock is coded first (if present) followed, in order, by data for the upper right, lower left and lower right subblocks (if present).
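  • The ordering rules above can be captured as a small enumeration, sketched below; only subblocks whose subblock pattern indicates non-zero coefficients actually carry data, and the type and struct names are illustrative.

```c
typedef enum { TT_8x8, TT_8x4, TT_4x8, TT_4x4 } TransformType;

typedef struct { int x, y, w, h; } SubBlock;

/* Enumerate the subblocks of an 8x8 block in the order their
 * coefficient data appears in the bitstream (per the rules above).
 * Returns the number of subblocks written to out (at most 4). */
int subblocks_in_coding_order(TransformType tt, SubBlock out[4]) {
    switch (tt) {
    case TT_8x4:   /* top subblock first, then bottom */
        out[0] = (SubBlock){0, 0, 8, 4};
        out[1] = (SubBlock){0, 4, 8, 4};
        return 2;
    case TT_4x8:   /* left subblock first, then right */
        out[0] = (SubBlock){0, 0, 4, 8};
        out[1] = (SubBlock){4, 0, 4, 8};
        return 2;
    case TT_4x4:   /* upper left, upper right, lower left, lower right */
        out[0] = (SubBlock){0, 0, 4, 4};
        out[1] = (SubBlock){4, 0, 4, 4};
        out[2] = (SubBlock){0, 4, 4, 4};
        out[3] = (SubBlock){4, 4, 4, 4};
        return 4;
    case TT_8x8:
    default:       /* whole 8x8 block, no subblocks */
        out[0] = (SubBlock){0, 0, 8, 8};
        return 1;
    }
}
```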
  • each macroblock may be motion compensated in frame mode using one or four motion vectors or in field mode using two or four motion vectors.
  • a macroblock that is inter-coded does not contain any intra blocks.
  • the residual after motion compensation may be coded in frame transform mode or field transform mode. More specifically, the luma component of the residual is re-arranged according to fields if it is coded in field transform mode but remains unchanged in frame transform mode, while the chroma component remains the same.
  • a macroblock may also be coded as intra.
  • Motion compensation may be restricted to not include four (both field/frame) motion vectors.
  • the type of motion compensation and residual coding is jointly indicated for each macroblock through MBMODE and a skipped macroblock signal (SKIPMB).
  • Macroblocks in interlaced P-frames are classified into five types: 1 MV, 2 Field MV, 4 Frame MV, 4 Field MV, and Intra.
  • the first four types of macroblock are inter-coded while the last type indicates that the macroblock is intra-coded.
  • the macroblock type is signaled by the MBMODE syntax element in the macroblock layer along with the skip bit. (A skip condition for the macroblock also can be signaled at frame level in a compressed bit plane.)
  • MBMODE jointly encodes macroblock types along with various pieces of information regarding the macroblock for different types of macroblock.
  • MBMODE jointly specifies the type of macroblock (1 MV, 4 Frame MV, 2 Field MV, 4 Field MV, or intra), types of transform for inter-coded macroblock (i.e. field or frame or no coded blocks), and whether there is a differential motion vector for a 1 MV macroblock.
  • MBMODE can take one of 15 possible values:
  • Let <MVP> denote the signaling of whether a nonzero 1 MV differential motion vector is present or absent.
  • MBMODE signals the following information jointly:
  • the CBPCY syntax element is not decoded when <Field/frame Transform> in MBMODE indicates no coded blocks.
  • If <Field/frame Transform> in MBMODE indicates field or frame transform, then CBPCY is decoded.
  • the decoded <Field/frame Transform> is used to set the flag FIELDTX. If it indicates that the macroblock is field transform coded, FIELDTX is set to 1. If it indicates that the macroblock is frame transform coded, FIELDTX is set to 0. If it indicates a zero-coded block, FIELDTX is set to the same type as the motion vector, i.e., FIELDTX is set to 1 if it is a field motion vector and to 0 if it is a frame motion vector.
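  • The FIELDTX rule just described maps directly to code; the enum names below are illustrative, but the logic follows the three cases above.

```c
#include <stdbool.h>

typedef enum { XFORM_FIELD, XFORM_FRAME, XFORM_NO_CODED_BLOCKS } MbTransform;

/* Derive FIELDTX for an inter-coded macroblock from the decoded
 * <Field/frame Transform> component of MBMODE.  mv_is_field indicates
 * whether the macroblock uses field motion vectors. */
bool derive_fieldtx(MbTransform transform, bool mv_is_field) {
    switch (transform) {
    case XFORM_FIELD: return true;        /* field transform coded -> FIELDTX = 1 */
    case XFORM_FRAME: return false;       /* frame transform coded -> FIELDTX = 0 */
    default:          return mv_is_field; /* no coded blocks: follow the MV type  */
    }
}
```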
  • an additional field is sent to indicate which of the differential motion vectors is non-zero.
  • the 2 MVBP field is sent to indicate which of the two motion vectors contain nonzero differential motion vectors.
  • the 4 MVBP field is sent to indicate which of the four motion vectors contain nonzero differential motion vectors.
  • the Field/Frame transform and zero coded blocks are coded in separate fields.
  • an adaptive filtering operation is performed on each reconstructed frame in the entry point segment. This filtering operation is performed prior to using the reconstructed frame as a reference for motion compensation. When there are multiple slices in a picture, the filtering for each slice is performed independently.
  • the filtering process operates on pixels that border neighboring blocks.
  • the locations of block boundaries depend on the size of the inverse transform used.
  • the block boundaries may occur at every 4th or 8th pixel row or column depending on whether an 8×8, 8×4 or 4×8 inverse transform is used.
  • block boundaries occur at every 8th pixel row and column.
  • FIGS. 7 and 8 show the pixels that are filtered along the horizontal and vertical border regions in the upper left corner of a component (luma, C b or C r ) plane.
  • FIG. 7 shows filtered vertical block boundary pixels in an I-frame.
  • FIG. 8 shows filtered horizontal block boundary pixels in an I-frame.
  • crosses represent pixels (or, more precisely, samples) and circled crosses represent filtered pixels.
  • the top horizontal line and first vertical line in the frame are not filtered, even though they lie on a block boundary, because these lines lie on the border of the frame.
  • the bottom horizontal line and last vertical line in the frame also are not filtered for the same reason.
  • the following lines are filtered:
  • progressive B-frame in-loop deblocking is the same as progressive I-frame deblocking.
  • in progressive I-frame in-loop deblocking, 8×8 block boundaries are filtered, and motion vectors and 4×8/8×4 transforms are not considered.
  • blocks can be intra or inter-coded.
  • an encoder/decoder uses an 8×8 transform to transform the samples in intra-coded blocks.
  • the 8×8 block boundaries are always adaptively filtered.
  • An encoder/decoder uses an 8×8, 8×4, 4×8 or 4×4 transform for inter-coded blocks and uses a corresponding inverse transform to construct the samples that represent the residual error.
  • the boundary between the current and neighboring blocks may or may not be filtered.
  • the decision of whether to adaptively filter a block or subblock border is as follows:
  • FIG. 9 shows examples of when filtering between neighboring blocks does and does not occur in progressive P-frames.
  • the shaded blocks or subblocks represent the cases where at least one nonzero coefficient is present. Clear blocks or subblocks represent cases where no transform coefficients are present.
  • Thick lines represent the boundaries that are adaptively filtered.
  • Thin lines represent the boundaries that are not filtered.
  • FIGS. 10 and 11 show an example of pixels that may be filtered in a progressive P-frame.
  • the crosses represent pixel locations and the circled crosses represent the boundary pixels that are filtered if the conditions specified above are met.
  • FIG. 10 shows pixels adaptively filtered along horizontal boundaries.
  • the pixels on either side of the block or subblock boundary are candidates to be filtered.
  • this could be every 4th and 5th, 8th and 9th, 12th and 13th, etc., pixel row in the frame, as these are the 8×8 and 8×4 horizontal boundaries.
  • FIG. 11 shows pixels adaptively filtered along vertical boundaries. For the vertical boundaries, every 4th and 5th, 8th and 9th, 12th and 13th, etc., pixel column in the frame may be adaptively filtered, as these are the 8×8 and 4×8 vertical boundaries.
  • the first and last row and the first and last column in the frame are not filtered.
  • the order in which pixels are filtered is important.
  • all the 8×8 block horizontal boundary lines in the frame are adaptively filtered starting from the top line.
  • all 8×4 block horizontal boundary lines in the frame are adaptively filtered starting from the top line.
  • all 8×8 block vertical boundary lines are adaptively filtered starting from the leftmost line.
  • all 4×8 block vertical boundary lines are adaptively filtered starting with the leftmost line.
  • the rules specified above are used to determine whether the boundary pixels are adaptively filtered for each block or subblock.
  • This section describes an adaptive filtering operation that is performed on the boundary pixels in progressive I-, B- and P-frames in the combined implementation.
  • the decision criteria described above determine which vertical and horizontal boundaries are adaptively filtered.
  • all the 8×8 vertical and horizontal boundaries are adaptively filtered. Since the minimum number of consecutive pixels that are filtered in a row or column is four and the total number of pixels in a row or column is always a multiple of four, the adaptive filtering operation is performed on segments of four pixels.
  • the eight pixel pairs that make up the boundary are divided into two 4-pixel segments as shown in FIG. 12 .
  • the third pixel pair is adaptively filtered first as indicated by the Xs in FIG. 12 .
  • the result of this filter operation determines whether the other three pixels in the segment are also adaptively filtered, as described below.
  • FIG. 13 shows the pixels that are used in the filtering operation performed on the 3 rd pixel pair.
  • pixels P 4 and P 5 are the pixels that may be changed in the filter operation.
  • the pseudo-code 1400 of FIG. 14 shows the filtering operation performed on the 3 rd pixel pair in each segment.
  • This section describes the process for in-loop deblocking filtering of interlaced frames in the combined implementation, with reference to concepts discussed in the previous section.
  • Whether this filtering is performed is signaled by LOOPFILTER, a sequence layer syntax element. The filtering operation is performed prior to using the reconstructed frame as a reference for motion predictive coding.
  • the adaptive filtering process operates on the pixels that border neighboring blocks.
  • the block boundaries may occur at every 4th, 8th, 12th, etc., pixel row or column, depending on whether an 8×8, 8×4, 4×8 or 4×4 inverse transform is used.
  • adaptive filtering occurs at every 8th, 16th, 24th, etc., pixel row and column.
  • each macroblock may be frame transform coded or field transform coded according to its FIELDTX flag.
  • the state of the FIELDTX flag along with the size of the transform used (4×4, 4×8, 8×4, 8×8) has an effect on where the in-loop deblocking takes place in the macroblock.
  • FIGS. 26A-26B illustrate field-based filtering for horizontal and vertical block boundaries.
  • the two top field lines are filtered across the block boundary using top field lines only and the two bottom field lines across the block boundary are filtered using bottom field lines only.
  • the top field block boundary and the bottom field block boundary are filtered separately.
  • the in-loop deblocking process starts by processing all the horizontal edges first, followed by all the vertical edges.
  • the pseudo-code 2400 in FIG. 24 describes this filtering process in the combined implementation one macroblock at a time for the sake of simplicity, but alternate valid implementations of the filtering process may not follow this macroblock processing order.
  • each macroblock is 8×8 transform coded.
  • the horizontal block boundary filtering starts by filtering the intra-macroblock horizontal boundary only if the current macroblock is frame transform coded. Next, the horizontal block boundary between the current macroblock and the macroblock directly below it (if available) is filtered.
  • the pseudo-code 3000 in FIG. 30A describes the process of horizontal filtering for a macroblock in an interlaced I-frame.
  • the vertical block boundary filtering starts by filtering the intra-macroblock vertical boundary and then filters the inter-macroblock boundary between the current macroblock and the macroblock to its immediate right (if available).
  • the pseudo-code 3010 in FIG. 30B describes the process of the vertical filtering for a macroblock in an interlaced I-frame.
  • each inter-coded macroblock may be 4×4, 4×8, 8×4, or 8×8 transform coded.
  • the horizontal block boundary filtering occurs in the order of block Y 0 , Y 1 , Y 2 , Y 3 , C b , and then C r .
  • the luma blocks are processed differently according to field/frame coding status. This status (the FIELDTX flag) is explicitly signaled in intra-coded macroblocks, and it is inferred from MBMODE in inter-coded macroblocks.
  • the pseudo-code 3100 in FIG. 31A and pseudo-code 3110 in FIG. 31B describe the process of horizontal filtering for luma and chroma blocks, respectively, for a macroblock in an interlaced P-frame or B-frame.
  • the vertical block boundary filtering occurs in the order of block Y 0 , Y 1 , Y 2 , Y 3 , C b , and then C r .
  • the luma blocks are processed differently according to field/frame coding status.
  • the pseudo-code 3200 in FIG. 32A and pseudo-code 3210 in FIG. 32B describe the process of vertical filtering for luma and chroma blocks, respectively, for a macroblock in an interlaced P-frame or B-frame.

Abstract

An encoder/decoder obtains pixel data from one or more field lines associated with a first block in an interlaced frame coded picture comprising plural macroblocks each having an equal number of top and bottom field lines. The encoder/decoder obtains pixel data from one or more field lines associated with a second block and performs in-loop deblocking filtering across a boundary. The in-loop deblocking filtering comprises filter operations performed on pixel data from field lines of same polarity only. In another aspect, an encoder/decoder obtains transform size information for plural blocks of a macroblock, obtains field/frame type information for the macroblock, and selects one or more boundaries for in-loop deblocking based at least in part on the transform size information and the field/frame type information. In-loop deblocking can be performed on horizontal block boundaries prior to vertical block boundaries.

Description

RELATED APPLICATION INFORMATION
This application claims the benefit of U.S. Provisional Patent Application No. 60/501,081, entitled “Video Encoding and Decoding Tools and Techniques,” filed Sep. 7, 2003, which is hereby incorporated by reference.
TECHNICAL FIELD
Techniques and tools for interlaced video coding and decoding are described. For example, an encoder/decoder performs in-loop deblocking filtering for interlaced frame coded pictures.
BACKGROUND
Digital video consumes large amounts of storage and transmission capacity. A typical raw digital video sequence includes 15 or 30 pictures per second. Each picture can include tens or hundreds of thousands of pixels (also called pels). Each pixel represents a tiny element of the picture. In raw form, a computer commonly represents a pixel with 24 bits or more. Thus, the number of bits per second, or bit rate, of a typical raw digital video sequence can be 5 million bits/second or more.
Most computers and computer networks lack the resources to process raw digital video. For this reason, engineers use compression (also called coding or encoding) to reduce the bit rate of digital video. Compression can be lossless, in which quality of the video does not suffer but decreases in bit rate are limited by the complexity of the video. Or, compression can be lossy, in which quality of the video suffers but decreases in bit rate are more dramatic. Decompression reverses compression.
In general, video compression techniques include “intra” compression and “inter” or predictive compression. For progressively scanned video frames, intra compression techniques compress individual pictures, typically called I-frames or key frames. Inter compression techniques compress frames with reference to preceding and/or following frames, and inter-compressed frames are typically called predicted frames, P-frames, or B-frames.
I. Inter and Intra Compression in Windows Media Video, Versions 8 and 9
Microsoft Corporation's Windows Media Video, Version 8 [“WMV8”] includes a video encoder and a video decoder. The WMV8 encoder uses intra and inter compression, and the WMV8 decoder uses intra and inter decompression. Windows Media Video, Version 9 [“WMV9”] uses a similar architecture for many operations.
A. Intra Compression
FIG. 1 illustrates block-based intra compression 100 of a block 105 of pixels in a key frame in the WMV8 encoder. A block is a set of pixels, for example, an 8×8 arrangement of pixels. The WMV8 encoder splits a key video frame into 8×8 blocks of pixels and applies an 8×8 Discrete Cosine Transform [“DCT”] 110 to individual blocks such as the block 105. A DCT is a type of frequency transform that converts the 8×8 block of pixels (spatial information) into an 8×8 block of DCT coefficients 115, which are frequency information. The DCT operation itself is lossless or nearly lossless. Compared to the original pixel values, however, the DCT coefficients are more efficient for the encoder to compress since most of the significant information is concentrated in low frequency coefficients (conventionally, the upper left of the block 115) and many of the high frequency coefficients (conventionally, the lower right of the block 115) have values of zero or close to zero.
The encoder then quantizes 120 the DCT coefficients, resulting in an 8×8 block of quantized DCT coefficients 125. For example, the encoder applies a uniform, scalar quantization step size to each coefficient. Quantization is lossy. Since low frequency DCT coefficients tend to have higher values, quantization results in loss of precision but not complete loss of the information for the coefficients. On the other hand, since high frequency DCT coefficients tend to have values of zero or close to zero, quantization of the high frequency coefficients typically results in contiguous regions of zero values. In addition, in some cases high frequency DCT coefficients are quantized more coarsely than low frequency DCT coefficients, resulting in greater loss of precision/information for the high frequency DCT coefficients.
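As a rough illustration of this step, the following sketch applies a uniform scalar quantizer to an 8×8 block of DCT coefficients and reconstructs the coefficients by multiplying back by the step size. The rounding and reconstruction rules shown are illustrative assumptions, not the encoder's exact rules.
    /* Sketch of uniform scalar quantization of an 8x8 coefficient block;
       the rounding and reconstruction rules are illustrative only. */
    #include <stdlib.h>
    void quantize_block(const int coeff[64], int quant[64], int step)
    {
        for (int i = 0; i < 64; i++) {
            int sign = coeff[i] < 0 ? -1 : 1;
            quant[i] = sign * ((abs(coeff[i]) + step / 2) / step);   /* round to nearest level */
        }
    }
    void dequantize_block(const int quant[64], int recon[64], int step)
    {
        for (int i = 0; i < 64; i++)
            recon[i] = quant[i] * step;   /* precision lost in quantization stays lost */
    }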
The encoder then prepares the 8×8 block of quantized DCT coefficients 125 for entropy encoding, which is a form of lossless compression. The exact type of entropy encoding can vary depending on whether a coefficient is a DC coefficient (lowest frequency), an AC coefficient (other frequencies) in the top row or left column, or another AC coefficient.
The encoder encodes the DC coefficient 126 as a differential from the DC coefficient 136 of a neighboring 8×8 block, which is a previously encoded neighbor (e.g., top or left) of the block being encoded. (FIG. 1 shows a neighbor block 135 that is situated to the left of the block being encoded in the frame.) The encoder entropy encodes 140 the differential.
The entropy encoder can encode the left column or top row of AC coefficients as a differential from a corresponding column or row of the neighboring 8×8 block. FIG. 1 shows the left column 127 of AC coefficients encoded as a differential 147 from the left column 137 of the neighboring (to the left) block 135. The differential coding increases the chance that the differential coefficients have zero values. The remaining AC coefficients are from the block 125 of quantized DCT coefficients.
The encoder scans 150 the 8×8 block 145 of predicted, quantized AC DCT coefficients into a one-dimensional array 155 and then entropy encodes the scanned AC coefficients using a variation of run length coding 160. The encoder selects an entropy code from one or more run/level/last tables 165 and outputs the entropy code.
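The scan-and-run-length step can be sketched as below. The scan order is passed in as a table because the actual scan arrays and run/level/last code tables are codec-specific; the triple structure is only an illustration of the intermediate representation that is then mapped to entropy codes.
    /* Sketch: scan the AC coefficients of a quantized 8x8 block into 1-D order
       and group them into run/level/last triples. Scan table and triple layout
       are illustrative assumptions. */
    typedef struct { int run; int level; int last; } RunLevelLast;
    int block_to_triples(const int block[64], const int scan[64], RunLevelLast out[63])
    {
        int n = 0, run = 0;
        for (int i = 1; i < 64; i++) {        /* skip index 0: DC is coded separately */
            int level = block[scan[i]];
            if (level == 0) { run++; continue; }
            out[n++] = (RunLevelLast){ run, level, 0 };
            run = 0;
        }
        if (n > 0)
            out[n - 1].last = 1;              /* flag the final nonzero coefficient */
        return n;
    }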
B. Inter Compression
Inter compression in the WMV8 encoder uses block-based motion compensated prediction coding followed by transform coding of the residual error. FIGS. 2 and 3 illustrate the block-based inter compression for a predicted frame in the WMV8 encoder. In particular, FIG. 2 illustrates motion estimation for a predicted frame 210 and FIG. 3 illustrates compression of a prediction residual for a motion-compensated block of a predicted frame.
For example, in FIG. 2, the WMV8 encoder computes a motion vector for a macroblock 215 in the predicted frame 210. To compute the motion vector, the encoder searches in a search area 235 of a reference frame 230. Within the search area 235, the encoder compares the macroblock 215 from the predicted frame 210 to various candidate macroblocks in order to find a candidate macroblock that is a good match. The encoder outputs information specifying the motion vector (entropy coded) for the matching macroblock.
The encoder can encode the differential between the motion vector and the motion vector predictor. After reconstructing the motion vector by adding the differential to the predictor, a decoder uses the motion vector to compute a prediction macroblock for the macroblock 215 using information from the reference frame 230, which is a previously reconstructed frame available at the encoder and the decoder. The prediction is rarely perfect, so the encoder usually encodes blocks of pixel differences (also called the error or residual blocks) between the prediction macroblock and the macroblock 215 itself.
FIG. 3 illustrates an example of computation and encoding of an error block 335 in the WMV8 encoder. The error block 335 is the difference between the predicted block 315 and the original current block 325. The encoder applies a DCT 340 to the error block 335, resulting in an 8×8 block 345 of coefficients. The encoder then quantizes 350 the DCT coefficients, resulting in an 8×8 block of quantized DCT coefficients 355. The encoder scans 360 the 8×8 block 355 into a one-dimensional array 365 such that coefficients are generally ordered from lowest frequency to highest frequency. The encoder entropy encodes the scanned coefficients using a variation of run length coding 370. The encoder selects an entropy code from one or more run/level/last tables 375 and outputs the entropy code.
FIG. 4 shows an example of a corresponding decoding process 400 for an inter-coded block. In summary of FIG. 4, a decoder decodes (410, 420) entropy-coded information representing a prediction residual using variable length decoding 410 with one or more run/level/last tables 415 and run length decoding 420. The decoder inverse scans 430 a one-dimensional array 425 storing the entropy-decoded information into a two-dimensional block 435. The decoder inverse quantizes and inverse discrete cosine transforms (together, 440) the data, resulting in a reconstructed error block 445. In a separate motion compensation path, the decoder computes a predicted block 465 using motion vector information 455 for displacement from a reference frame. The decoder combines 470 the predicted block 465 with the reconstructed error block 445 to form the reconstructed block 475.
The amount of change between the original and reconstructed frames is the distortion and the number of bits required to code the frame indicates the rate for the frame. The amount of distortion is roughly inversely proportional to the rate.
II. Interlaced Video and Progressive Video
A video frame contains lines of spatial information of a video signal. For progressive video, these lines contain samples starting from one time instant and continuing through successive lines to the bottom of the frame. A progressive I-frame is an intra-coded progressive video frame. A progressive P-frame is a progressive video frame coded using forward prediction, and a progressive B-frame is a progressive video frame coded using bi-directional prediction.
A typical interlaced video frame consists of two fields scanned starting at different times. For example, referring to FIG. 5, an interlaced video frame 500 includes top field 510 and bottom field 520. Typically, the even-numbered lines (top field) are scanned starting at one time (e.g., time t) and the odd-numbered lines (bottom field) are scanned starting at a different (typically later) time (e.g., time t+1). Because the two fields are scanned starting at different times, this timing can create jagged tooth-like features in regions of an interlaced video frame where motion is present. For this reason, interlaced video frames can be rearranged according to a field structure, with the odd lines grouped together in one field, and the even lines grouped together in another field. This arrangement, known as field coding, is useful in high-motion pictures for reduction of such jagged edge artifacts. On the other hand, in stationary regions, image detail in the interlaced video frame may be more efficiently preserved without such a rearrangement. Accordingly, frame coding is often used in stationary or low-motion interlaced video frames, in which the original alternating field line arrangement is preserved.
A previous WMV encoder and decoder use macroblocks that are arranged according to a field structure (field-coded macroblocks) or a frame structure (frame-coded macroblocks) in interlaced video frames. FIG. 6 shows how field permuting is used to produce field-coded macroblocks in the encoder and decoder. An interlaced macroblock 610 is permuted such that all the top field lines (e.g., even-numbered lines 0, 2, . . . 14) are placed in the top half of the field-coded macroblock 620, and all the bottom field lines (e.g., odd-numbered lines 1, 3, . . . 15) are placed in the bottom half of the field-coded macroblock. For a frame-coded macroblock, the top field lines and bottom field lines alternate throughout the macroblock, as in interlaced macroblock 610.
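The permutation of FIG. 6 amounts to moving the even-numbered (top-field) lines into the top half of the macroblock and the odd-numbered (bottom-field) lines into the bottom half. A minimal sketch for the 16 luma rows follows; the function name is illustrative.
    /* Sketch: field-permute the 16 luma rows of a macroblock so even rows
       (top field) land in rows 0-7 and odd rows (bottom field) in rows 8-15. */
    #include <string.h>
    #define MB_SIZE 16
    void field_permute(const unsigned char src[MB_SIZE][MB_SIZE],
                       unsigned char dst[MB_SIZE][MB_SIZE])
    {
        for (int row = 0; row < MB_SIZE; row++) {
            int bottom  = row & 1;                             /* 0 = top field, 1 = bottom field */
            int new_row = (bottom ? MB_SIZE / 2 : 0) + row / 2;
            memcpy(dst[new_row], src[row], MB_SIZE);
        }
    }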
The encoder and decoder use a 4:1:1 macroblock format in interlaced frames. A 4:1:1 macroblock is composed of four 8×8 luminance blocks and two 4×8 blocks of each chrominance channel. In a field-coded 4:1:1 macroblock, the permuted macroblock is subdivided such that the top two 8×8 luminance blocks and the top 4×8 chrominance block in each chrominance channel contain only top field lines, while the bottom two 8×8 luminance blocks and the bottom 4×8 chrominance block in each chrominance channel contain only bottom field lines.
A typical progressive video frame consists of one frame of content with non-alternating lines. In contrast to interlaced video, progressive video does not divide video frames into separate fields, and an entire frame is scanned left to right, top to bottom starting at a single time.
III. Loop Filtering in a Previous WMV Encoder and Decoder
Quantization and other lossy processing of prediction residuals can cause blocking artifacts at block boundaries. Blocking artifacts can be especially troublesome in reference frames that are used for motion estimation and compensation of subsequent predicted frames. To reduce blocking artifacts, a previous WMV video encoder and decoder use a deblocking filter to smooth boundary discontinuities between 8×8 blocks in motion estimation/compensation loops. For example, a video encoder processes a reconstructed reference frame to reduce blocking artifacts prior to motion estimation/compensation using the reference frame, and a video decoder processes a reconstructed reference frame to reduce blocking artifacts prior to motion compensation using the reference frame. The deblocking filter improves the quality of motion estimation/compensation, resulting in better prediction and lower bitrate for prediction residuals.
A. In-loop Deblocking Filtering for Progressive Frames
The encoder and decoder perform in-loop deblocking filtering for progressive frames prior to using a reconstructed frame as a reference for motion estimation/compensation. The filtering process operates on pixels (or more precisely, on samples at pixel locations) that border neighboring blocks. The locations of block boundaries depend on the size of the inverse transform used. For progressive P-frames the block boundaries may occur at every 4th or 8th pixel row or column depending on whether an 8×8, 8×4 or 4×8 inverse transform is used. For progressive I-frames, where an 8×8 transform is used, block boundaries occur at every 8th pixel row and column.
1. Progressive I-Frame In-Loop Deblocking Filtering
For progressive I-frames, deblocking filtering is performed adaptively at all 8×8 block boundaries. FIGS. 7 and 8 show the pixels that are filtered along the horizontal and vertical border regions in the upper left corner of a component (luma, Cb or Cr) plane. FIG. 7 shows filtered vertical block boundary pixels in an I-frame. FIG. 8 shows filtered horizontal block boundary pixels in an I-frame.
In FIGS. 7 and 8, crosses represent pixels (actually samples for pixels) and circled crosses represent filtered pixels. As these figures show, the top horizontal line and first vertical line in the frame are not filtered, even though they lie on a block boundary, because these lines lie on the border of the frame. Although not depicted, the bottom horizontal line and last vertical line in the frame also are not filtered for the same reason. In more formal terms, the following lines are filtered:
    • Horizontal lines: (7, 8), (15, 16) . . . ((N−1)*8−1, (N−1)*8)
    • Vertical lines: (7, 8), (15, 16) . . . ((M−1)*8−1, (M−1)*8)
    • (N=number of horizontal 8×8 blocks in the plane (N*8=horizontal frame size))
    • (M=number of vertical 8×8 blocks in the frame (M*8=vertical frame size))
      For progressive I-frames, all horizontal boundary lines in the frame are filtered first, followed by the vertical boundary lines.
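The line pairs given by the formulas above can be enumerated with a short loop. The sketch below uses an example block count and assumes rows and columns are indexed from 0.
    /* Sketch: list the boundary line pairs filtered in a progressive I-frame.
       'nblocks' is the number of 8x8 blocks along the dimension being
       traversed, so frame-border lines are never filtered. */
    #include <stdio.h>
    int main(void)
    {
        int nblocks = 4;                              /* example: 32 pixels in this dimension */
        for (int i = 1; i <= nblocks - 1; i++)
            printf("filter line pair (%d, %d)\n", i * 8 - 1, i * 8);
        return 0;
    }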
2. Progressive P-frame In-loop Deblocking Filtering
For progressive P-frames, blocks can be intra-coded or inter-coded. The encoder and decoder use an 8×8 transform to transform the samples in intra-coded blocks, and the 8×8 block boundaries are always adaptively filtered. The encoder and decoder use an 8×8, 8×4, 4×8 or 4×4 transform for inter-coded blocks and a corresponding inverse transform to construct the samples that represent the residual error. Depending on the status of the neighboring blocks, the boundary between the current and neighboring blocks may or may not be adaptively filtered. The boundaries between coded (at least one non-zero coefficient) subblocks (8×4, 4×8 or 4×4) within an 8×8 block are always adaptively filtered. The boundary between a block or subblock and a neighboring block or subblock is not filtered only if both blocks are inter-coded, have the same motion vector, and have no residual error (no transform coefficients); otherwise, the boundary is filtered.
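This decision reduces to a simple predicate: the boundary is skipped only when both blocks are inter-coded, share the same motion vector, and have no nonzero coefficients. The sketch below captures it with illustrative structure and field names.
    /* Sketch of the progressive P-frame boundary decision described above. */
    #include <stdbool.h>
    typedef struct {
        bool intra;        /* intra-coded block or subblock? */
        int  mvx, mvy;     /* motion vector (meaningful for inter blocks) */
        bool has_coeffs;   /* any nonzero transform coefficients? */
    } BlockInfo;
    bool should_filter_boundary(const BlockInfo *a, const BlockInfo *b)
    {
        bool skip = !a->intra && !b->intra &&
                    a->mvx == b->mvx && a->mvy == b->mvy &&
                    !a->has_coeffs && !b->has_coeffs;
        return !skip;
    }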
FIG. 9 shows examples of when filtering between neighboring blocks does and does not occur in progressive P-frames. In FIG. 9, it is assumed that the motion vectors for both blocks are the same (if the motion vectors are different, the boundary is always filtered). The shaded blocks or subblocks represent the cases where at least one nonzero coefficient is present. Clear blocks or subblocks represent cases where no transform coefficients are present. Thick lines represent the boundaries that are adaptively filtered. Thin lines represent the boundaries that are not filtered. FIG. 9 illustrates only horizontal macroblock neighbors, but a previous WMV encoder and decoder apply similar rules to vertical neighbors.
FIGS. 10 and 11 show an example of pixels that may be filtered in a progressive P-frame. The crosses represent pixel locations and the circled crosses represent the boundary pixels that are adaptively filtered if the conditions specified above are met. FIG. 10 shows pixels filtered along horizontal boundaries. As FIG. 10 shows, the pixels on either side of the block or subblock boundary are candidates to be filtered. For the horizontal boundaries, this could be every 4th and 5th, 8th and 9th, 12th and 13th, etc., pixel row in the frame. FIG. 11 shows candidate pixels to be filtered along vertical boundaries. For the vertical boundaries, every 4th and 5th, 8th and 9th, 12th and 13th, etc., pixel column in the frame may be adaptively filtered as these are the 8×8 and 4×8 vertical boundaries. The first and last row and the first and last column in the frame are not filtered.
All the 8×8 block horizontal boundary lines in the frame are adaptively filtered first, starting from the top line. Next, all 8×4 block horizontal boundary lines in the frame are adaptively filtered starting from the top line. Next, all 8×8 block vertical boundary lines are adaptively filtered starting from the leftmost line. Lastly, all 4×8 block vertical boundary lines are adaptively filtered starting with the leftmost line. The rules specified above are used to determine whether the boundary pixels are actually filtered for each block or subblock.
3. Filtering Operations
For progressive P-frames the decision criteria described above determine which vertical and horizontal boundaries are adaptively filtered. Since the minimum number of consecutive pixels that are filtered in a row or column is four and the total number of pixels in a row or column is always a multiple of four, the filtering operation is performed on segments of four pixels.
For example, if the eight pixel pairs that make up the vertical boundary between two blocks are adaptively filtered, then the eight pixels are divided into two 4-pixel segments as shown in FIG. 12. In each 4-pixel segment, the third pixel pair is adaptively filtered first as indicated by the Xs in FIG. 12. The result of this adaptive filter operation determines whether the other three pixels in the segment are also filtered.
FIG. 13 shows the pixels that are used in the adaptive filtering operation performed on the 3rd pixel pair. In FIG. 13, pixels P4 and P5 are the pixel pair that may be changed in the filter operation.
The pseudo-code 1400 of FIG. 14 shows the adaptive filtering operation performed on the 3rd pixel pair in each segment. The value filter_other3_pixels indicates whether the remaining three pixel pairs in the segment are also filtered. If filter_other3_pixels=TRUE, then the other three pixel pairs are adaptively filtered. If filter_other3_pixels=FALSE, then they are not filtered, and the adaptive filtering operation proceeds to the next 4-pixel segment. The pseudo-code 1500 of FIG. 15 shows the adaptive filtering operation that is performed on the 1st, 2nd and 4th pixel pair if filter_other3_pixels=TRUE. In pseudo-code 1400 and pseudo-code 1500, the variable PQUANT represents a quantization step size.
The filtering operations described above are similarly used for filtering horizontal boundary pixels.
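The segment structure of this operation can be sketched as follows. The actual pixel arithmetic of pseudo-code 1400 and 1500 is not reproduced here (the figures are not shown), so both filter operations appear only as stubs with assumed interfaces.
    /* Sketch of the 4-pixel-segment control flow for one vertical block
       boundary: each of the two segments filters its 3rd pixel pair first,
       and that result decides whether the 1st, 2nd and 4th pairs are filtered.
       Both filters are stubs standing in for FIGS. 14 and 15. */
    #include <stdbool.h>
    static bool filter_third_pair(unsigned char *p4, unsigned char *p5, int pquant)
    {
        (void)p4; (void)p5; (void)pquant;
        return true;                     /* stub: would return filter_other3_pixels */
    }
    static void filter_other_pairs(unsigned char *seg_p4, int stride, int pquant)
    {
        (void)seg_p4; (void)stride; (void)pquant;   /* stub for the FIG. 15 operation */
    }
    /* 'p4_top' points to the P4 pixel of the topmost boundary row (last column
       of the left block); P5 is the pixel immediately to its right */
    void filter_vertical_boundary_pairs(unsigned char *p4_top, int stride, int pquant)
    {
        for (int seg = 0; seg < 2; seg++) {
            unsigned char *seg_p4 = p4_top + seg * 4 * stride;
            unsigned char *third  = seg_p4 + 2 * stride;        /* 3rd pixel pair */
            if (filter_third_pair(third, third + 1, pquant))
                filter_other_pairs(seg_p4, stride, pquant);
        }
    }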
B. In-loop Deblocking Filtering for Interlaced Frames
The encoder and decoder perform in-loop deblocking filtering across vertical boundaries in interlaced frames having a 4:1:1 macroblock format. For interlaced I- and P-frames, adaptive filtering can occur for pixels located immediately on the left and right of a vertical block boundary except for those located on the picture boundaries (i.e., the first and last column of the luminance and chrominance components). In FIG. 16, pixels (more precisely, samples) that are candidates for filtering in a typical 4:1:1 macroblock in the encoder and decoder are marked M or B, where M denotes boundary pixels located across macroblock boundaries and B denotes boundary pixels located within the macroblock.
The decision on whether to filter across a vertical boundary is made on a block-by-block basis. In a 4:1:1 frame-coded macroblock, each block contains eight consecutive alternating lines of the top and bottom fields in the macroblock. In a 4:1:1 field-coded macroblock, a block contains either eight top field lines or eight bottom field lines. The filtering decision is made eight lines at a time.
The decision to filter across a vertical block boundary depends on whether the current block and the left neighboring block are frame-coded or field-coded (field/frame type), whether they are intra-coded or inter-coded, and whether they have nonzero transform coefficients. In general, the vertical block boundary pixels are adaptively filtered unless the current block's field/frame type is the same as the left neighboring block's field/frame type, both blocks are not intra-coded, and both have no nonzero transform coefficients, in which case the block boundary is not filtered. Chroma block boundaries are adaptively filtered if the corresponding luminance block boundaries are adaptively filtered. Horizontal boundaries are not filtered.
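A compact way to express this prior decision rule is sketched below, with illustrative structure and field names; chroma boundaries simply reuse the decision made for the corresponding luminance boundary.
    /* Sketch of the prior codec's vertical-boundary decision for interlaced
       frames: the boundary is left unfiltered only when both blocks have the
       same field/frame type, neither is intra-coded, and neither has nonzero
       transform coefficients. */
    #include <stdbool.h>
    typedef struct {
        bool field_coded;    /* field/frame type of the block */
        bool intra;
        bool has_coeffs;
    } InterlacedBlockInfo;
    bool should_filter_vertical(const InterlacedBlockInfo *cur,
                                const InterlacedBlockInfo *left)
    {
        bool skip = (cur->field_coded == left->field_coded) &&
                    !cur->intra && !left->intra &&
                    !cur->has_coeffs && !left->has_coeffs;
        return !skip;
    }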
Although the encoder and decoder adaptively filter block boundaries depending in part on the field/frame type of the neighboring blocks, they do not take transform size into account when making filtering decisions in interlaced frames.
IV. Standards for Video Compression and Decompression
Several international standards relate to video compression and decompression. These standards include the Motion Picture Experts Group [“MPEG”] 1, 2, and 4 standards and the H.261, H.262 (another title for MPEG 2), H.263 and H.264 (also called JVT/AVC) standards from the International Telecommunication Union [“ITU”]. These standards specify aspects of video decoders and formats for compressed video information. Directly or by implication, they also specify certain encoder details, but other encoder details are not specified. These standards use (or support the use of) different combinations of intraframe and interframe decompression and compression.
A. Loop Filtering in the Standards
As in the previous WMV encoders and decoders, some international standards use deblocking filters to reduce the effect of blocking artifacts in reconstructed frames. The H.263 standard includes an optional deblocking filter mode in which a filter is applied across 8×8 block edge boundaries of decoded I- and P-frames (but not B-frames) to reduce blocking artifacts. Annex J of the H.263 standard describes an optional block edge filter within the coding loop in which filtering is performed on 8×8 block edges (referred to in H.263 as a deblocking edge filter). This filter affects the reconstructed pictures used for prediction of other pictures. The deblocking edge filter operates using a set of four clipped pixel values on a horizontal and/or vertical line, where two of the four values are in one block (e.g., the top block among neighboring top and bottom blocks) and the other two values are in another block (e.g., the bottom block among neighboring top and bottom blocks). Filtering across horizontal edges is performed before filtering across vertical edges to reduce rounding effects. This optional filtering mode can be signaled in the bitstream with a single bit in a field of a picture header.
According to draft JVT-d157 of the JVT/AVC video standard, deblocking filtering is performed on a macroblock basis. In interlaced frames, macroblocks are grouped into macroblock pairs (top and bottom). Macroblock pairs can be field-coded or frame-coded. In a frame-coded macroblock pair, the macroblock pair is decoded as two frame-coded macroblocks. In a field-coded macroblock pair, the top macroblock consists of the top-field lines in the macroblock pair, and the bottom macroblock consists of the bottom-field lines in the macroblock pair.
Sections 8.7 and 12.4.4 of draft JVT-d157 describe deblocking filtering. For frame-coded macroblock pairs, deblocking is performed on the frame samples, and if a neighboring macroblock pair is a field macroblock pair, the neighboring field macroblock pair is converted into a frame macroblock pair before deblocking. For field-coded macroblock pairs, deblocking is performed on the field samples of the same field parity, and if a neighboring macroblock pair is a frame macroblock pair, it is converted into a field macroblock pair before deblocking. For field-coded pictures, all decoding operations for the deblocking filter are based solely on samples within the current field. For luma filtering in a 16×16 macroblock with 16 4×4 blocks, the 16 samples of the four vertical edges of the 4×4 raster scan pattern are filtered beginning with the left edge, and the four horizontal edges are filtered beginning with the top edge. For chroma filtering, two edges of eight samples each are filtered in each direction. For additional detail, see JVT-d157.
B. Limitations of the Standards
These international standards are limited in several important ways. For example, H.263 does not describe loop filtering for interlaced video. Draft JVT-d157 of the JVT/AVC video standard describes loop filtering only for macroblock pairs in interlaced video, and does not describe, for example, loop filtering for an individual field-coded macroblock having a top field and a bottom field within the same macroblock, or loop filtering decisions for blocks or sub-blocks larger than 4×4.
Given the critical importance of video compression and decompression to digital video, it is not surprising that video compression and decompression are richly developed fields. Whatever the benefits of previous video compression and decompression techniques, however, they do not have the advantages of the following techniques and tools.
SUMMARY
In summary, the detailed description is directed to various techniques and tools for encoding and decoding interlaced video frames. Described embodiments implement one or more of the described techniques and tools including, but not limited to, the following:
In one aspect, an encoder/decoder obtains pixel data (e.g., chrominance or luminance samples) from one or more field lines (e.g., top field lines or bottom field lines) associated with a first block of a macroblock in an interlaced frame coded picture (e.g., interlaced I-frame, interlaced P-frame, interlaced B-frame, etc.) comprising plural macroblocks (e.g., 4:2:0 macroblocks). Each of the plural macroblocks has an equal number of top field lines and bottom field lines. The encoder/decoder obtains pixel data from one or more field lines associated with a second block in the picture and performs in-loop deblocking filtering across a boundary (e.g., a horizontal or vertical block boundary comprising at least one four-pixel segment) using the obtained pixel data. The in-loop deblocking filtering comprises filter operations performed on pixel data from field lines of same polarity only and can be described as field-based deblocking. Each of the plural macroblocks can be coded according to a field structure or a frame structure, which can be indicated by a transform type. The first block and the second block can each have a transform size selected from a group consisting of: 8×8, 8×4, 4×8, and 4×4.
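To make the same-polarity constraint concrete, the sketch below gathers, for each field polarity, only rows of that polarity on both sides of a horizontal boundary in a frame-coded picture and hands them to a per-column filter. The number of rows used per side (two here) and the filter interface are illustrative assumptions, not the exact adaptive filter described elsewhere in this application.
    /* Sketch of field-based deblocking across a horizontal boundary: top-field
       rows are filtered only against top-field rows, and bottom-field rows only
       against bottom-field rows, so no filter tap mixes field polarities. */
    static void filter_column(unsigned char *a2, unsigned char *a1,
                              unsigned char *b1, unsigned char *b2)
    {
        (void)a2; (void)a1; (void)b1; (void)b2;   /* stub for the adaptive filter */
    }
    /* 'boundary' points to the first frame row below the horizontal boundary
       (an even row for 8x8-aligned boundaries); 'stride' is the row pitch */
    void field_based_horizontal_deblock(unsigned char *boundary, int stride, int width)
    {
        for (int polarity = 0; polarity < 2; polarity++) {   /* 0 = top, 1 = bottom field */
            unsigned char *above2 = boundary + (polarity - 4) * stride;
            unsigned char *above1 = boundary + (polarity - 2) * stride;
            unsigned char *below1 = boundary + (polarity + 0) * stride;
            unsigned char *below2 = boundary + (polarity + 2) * stride;
            for (int x = 0; x < width; x++)
                filter_column(above2 + x, above1 + x, below1 + x, below2 + x);
        }
    }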
In another aspect, an encoder/decoder obtains field/frame type information for a current macroblock in an interlaced frame coded picture. The encoder/decoder also obtains transform size information for plural blocks in the current macroblock. The encoder/decoder selects one or more boundaries for in-loop deblocking based at least in part on the transform size information and the field/frame type information. The encoder/decoder performs in-loop deblocking (e.g., field-based deblocking) on the selected boundaries. The field/frame transform type information indicates, for example, whether the current macroblock is coded according to a field structure or a frame structure. The selecting of one or more boundaries for in-loop deblocking can be further based on the picture type information (e.g., whether the interlaced frame coded picture is an interlaced I-frame, P-frame or B-frame).
In another aspect, an encoder/decoder obtains field/frame type information for a current macroblock, obtains transform size information for plural blocks in the macroblock, selects a boundary between a first block in the macroblock and a second block for in-loop deblocking based at least in part on the transform size information and the field/frame type information, obtains pixel data from one or more field lines associated with the first block and from one or more field lines associated with the second block, and performs in-loop deblocking across the boundary using the obtained pixel data. The in-loop deblocking comprises filtering operations performed on pixel data from field lines of same polarity only.
The various techniques and tools can be used in combination or independently.
Additional features and advantages will be made apparent from the following detailed description of different embodiments that proceeds with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram showing block-based intraframe compression of an 8×8 block of pixels according to the prior art.
FIG. 2 is a diagram showing motion estimation in a video encoder according to the prior art.
FIG. 3 is a diagram showing block-based compression for an 8×8 block of prediction residuals in a video encoder according to the prior art.
FIG. 4 is a diagram showing block-based decompression for an 8×8 block of prediction residuals in a video decoder according to the prior art.
FIG. 5 is a diagram showing an interlaced frame according to the prior art.
FIG. 6 is a diagram showing field permuting of interlaced macroblocks according to the prior art.
FIG. 7 is a diagram showing filtered vertical block boundary pixels according to the prior art.
FIG. 8 is a diagram showing filtered horizontal block boundary pixels according to the prior art.
FIG. 9 is a diagram showing filtering between horizontally neighboring blocks in progressive P-frames according to the prior art.
FIG. 10 is a diagram showing filtered horizontal block boundary pixels in progressive P-frames according to the prior art.
FIG. 11 is a diagram showing filtered vertical block boundary pixels in progressive P-frames according to the prior art.
FIG. 12 is a diagram showing eight pixel pairs divided into two 4-pixel segments on the sides of the vertical boundary between two blocks for filtering in progressive frames according to the prior art.
FIG. 13 is a diagram showing pixels used in a filtering operation performed on the 3rd pixel pair of a 4-pixel segment in progressive frames according to the prior art.
FIG. 14 is a code diagram showing pseudo-code for a filtering operation performed on the 3rd pixel pair in a 4-pixel segment in progressive frames according to the prior art.
FIG. 15 is a code diagram showing pseudo-code for a filtering operation performed on the 1st, 2nd and 4th pixel pair in a 4-pixel segment in progressive frames according to the prior art.
FIG. 16 is a diagram showing pixels that are candidates for filtering in a 4:1:1 macroblock according to the prior art.
FIG. 17 is a block diagram of a suitable computing environment in conjunction with which several described embodiments may be implemented.
FIG. 18 is a block diagram of a generalized video encoder system in conjunction with which several described embodiments may be implemented.
FIG. 19 is a block diagram of a generalized video decoder system in conjunction with which several described embodiments may be implemented.
FIG. 20 is a diagram of a macroblock format used in several described embodiments.
FIG. 21A is a diagram of part of an interlaced video frame, showing alternating lines of a top field and a bottom field. FIG. 21B is a diagram of the interlaced video frame organized for encoding/decoding as a frame, and FIG. 21C is a diagram of the interlaced video frame organized for encoding/decoding as fields.
FIG. 22 is a diagram showing a motion estimation/compensation loop with an in-loop deblocking filter in a video encoder.
FIG. 23 is a diagram showing a motion compensation loop with an in-loop deblocking filter in a video decoder.
FIG. 24 is a code diagram showing pseudo-code for performing in-loop deblocking filtering by processing horizontal boundaries followed by vertical boundaries.
FIG. 25 is a flow chart showing a technique for performing field-based deblocking filtering.
FIG. 26A is a diagram showing field-based filtering for horizontal block boundaries in interlaced I-frames, P-frames and B-frames. FIG. 26B is a diagram showing field-based filtering for vertical block boundaries in interlaced I-frames, P-frames and B-frames.
FIGS. 27A-27B are diagrams showing loop filtering of luminance blocks in an interlaced field transform coded macroblock.
FIG. 28 is a flow chart showing a technique for using field/frame transform type and transform size to select block boundaries for in-loop deblocking filtering.
FIG. 29 is a diagram showing loop filtering of luminance blocks in an interlaced frame transform coded macroblock.
FIGS. 30A-30B are code diagrams showing pseudo-code for horizontal filtering and vertical filtering, respectively, in a macroblock in an interlaced I-frame.
FIGS. 31A-31C are code diagrams showing pseudo-code for horizontal filtering for luma and chroma blocks in a macroblock in an interlaced P-frame or B-frame.
FIGS. 32A-32C are code diagrams showing pseudo-code for vertical filtering for luma and chroma blocks, respectively, in a macroblock in an interlaced P-frame or B-frame.
FIG. 33 is a diagram showing an entry point layer bitstream syntax in a combined implementation.
FIG. 34 is a diagram showing a frame layer bitstream syntax for interlaced I-frames in a combined implementation.
FIG. 35 is a diagram showing a frame layer bitstream syntax for interlaced P-frames in a combined implementation.
FIG. 36 is a diagram showing a frame layer bitstream syntax for interlaced B-frames in a combined implementation.
FIG. 37 is a diagram showing a macroblock layer bitstream syntax for macroblocks of interlaced P-frames in a combined implementation.
DETAILED DESCRIPTION
The present application relates to techniques and tools for efficient compression and decompression of interlaced video. In various described embodiments, a video encoder and decoder incorporate techniques for encoding and decoding interlaced video, and corresponding signaling techniques for use with a bit stream format or syntax comprising different layers or levels (e.g., sequence level, frame level, field level, macroblock level, and/or block level).
Various alternatives to the implementations described herein are possible. For example, techniques described with reference to flowchart diagrams can be altered by changing the ordering of stages shown in the flowcharts, by repeating or omitting certain stages, etc. As another example, although some implementations are described with reference to specific macroblock formats, other formats also can be used. Further, techniques and tools described with reference to forward prediction may also be applicable to other types of prediction.
The various techniques and tools can be used in combination or independently. Different embodiments implement one or more of the described techniques and tools. Some techniques and tools described herein can be used in a video encoder or decoder, or in some other system not specifically limited to video encoding or decoding.
I. Computing Environment
FIG. 17 illustrates a generalized example of a suitable computing environment 1700 in which several of the described embodiments may be implemented. The computing environment 1700 is not intended to suggest any limitation as to scope of use or functionality, as the techniques and tools may be implemented in diverse general-purpose or special-purpose computing environments.
With reference to FIG. 17, the computing environment 1700 includes at least one processing unit 1710 and memory 1720. In FIG. 17, this most basic configuration 1730 is included within a dashed line. The processing unit 1710 executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. The memory 1720 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. The memory 1720 stores software 1780 implementing a video encoder or decoder with one or more of the described techniques and tools.
A computing environment may have additional features. For example, the computing environment 1700 includes storage 1740, one or more input devices 1750, one or more output devices 1760, and one or more communication connections 1770. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 1700. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 1700, and coordinates activities of the components of the computing environment 1700.
The storage 1740 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment 1700. The storage 1740 stores instructions for the software 1780 implementing the video encoder or decoder.
The input device(s) 1750 may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment 1700. For audio or video encoding, the input device(s) 1750 may be a sound card, video card, TV tuner card, or similar device that accepts audio or video input in analog or digital form, or a CD-ROM or CD-RW that reads audio or video samples into the computing environment 1700. The output device(s) 1760 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 1700.
The communication connection(s) 1770 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
The techniques and tools can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment 1700, computer-readable media include memory 1720, storage 1740, communication media, and combinations of any of the above.
The techniques and tools can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
For the sake of presentation, the detailed description uses terms like “determine,” “decide,” and “apply” to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
II. Generalized Video Encoder and Decoder
FIG. 18 is a block diagram of a generalized video encoder 1800 in conjunction with which some described embodiments may be implemented. FIG. 19 is a block diagram of a generalized video decoder 1900 in conjunction with which some described embodiments may be implemented.
The relationships shown between modules within the encoder 1800 and decoder 1900 indicate general flows of information in the encoder and decoder; other relationships are not shown for the sake of simplicity. In particular, FIGS. 18 and 19 usually do not show side information indicating the encoder settings, modes, tables, etc. used for a video sequence, picture, macroblock, block, etc. Such side information is sent in the output bitstream, typically after entropy encoding of the side information. The format of the output bitstream can be a Windows Media Video version 9 format or other format.
The encoder 1800 and decoder 1900 process video pictures, which may be video frames, video fields or combinations of frames and fields. The bitstream syntax and semantics at the picture and macroblock levels may depend on whether frames or fields are used. There may be changes to macroblock organization and overall timing as well. The encoder 1800 and decoder 1900 are block-based and use a 4:2:0 macroblock format for frames, with each macroblock including four 8×8 luminance blocks (at times treated as one 16×16 macroblock) and two 8×8 chrominance blocks. For fields, the same or a different macroblock organization and format may be used. The 8×8 blocks may be further sub-divided at different stages, e.g., at the frequency transform and entropy encoding stages. Example video frame organizations are described in more detail below. Alternatively, the encoder 1800 and decoder 1900 are object-based, use a different macroblock or block format, or perform operations on sets of pixels of different size or configuration than 8×8 blocks and 16×16 macroblocks.
Depending on implementation and the type of compression desired, modules of the encoder or decoder can be added, omitted, split into multiple modules, combined with other modules, and/or replaced with like modules. In alternative embodiments, encoders or decoders with different modules and/or other configurations of modules perform one or more of the described techniques.
A. Video Frame Organizations
In some implementations, the encoder 1800 and decoder 1900 process video frames organized as follows. A frame contains lines of spatial information of a video signal. For progressive video, these lines contain samples starting from one time instant and continuing through successive lines to the bottom of the frame. A progressive video frame is divided into macroblocks such as the macroblock 2000 shown in FIG. 20. The macroblock 2000 includes four 8×8 luminance blocks (Y0 through Y3) and two 8×8 chrominance blocks that are co-located with the four luminance blocks but half resolution horizontally and vertically, following the conventional 4:2:0 macroblock format. The 8×8 blocks may be further sub-divided at different stages, e.g., at the frequency transform (e.g., 8×4, 4×8 or 4×4 DCTs) and entropy encoding stages. A progressive I-frame is an intra-coded progressive video frame. A progressive P-frame is a progressive video frame coded using forward prediction, and a progressive B-frame is a progressive video frame coded using bi-directional prediction. Progressive P- and B-frames may include intra-coded macroblocks as well as different types of predicted macroblocks.
An interlaced video frame consists of two scans of a frame—one comprising the even lines of the frame (the top field) and the other comprising the odd lines of the frame (the bottom field). The two fields may represent two different time periods or they may be from the same time period. FIG. 21A shows part of an interlaced video frame 2100, including the alternating lines of the top field and bottom field at the top left part of the interlaced video frame 2100.
FIG. 21B shows the interlaced video frame 2100 of FIG. 21A organized for encoding/decoding as a frame 2130. The interlaced video frame 2100 has been partitioned into macroblocks such as the macroblocks 2131 and 2132, which use a 4:2:0 format as shown in FIG. 20. In the luminance plane, each macroblock 2131, 2132 includes 8 lines from the top field alternating with 8 lines from the bottom field for 16 lines total, and each line is 16 pixels long. (The actual organization and placement of luminance blocks and chrominance blocks within the macroblocks 2131, 2132 are not shown, and in fact may vary for different encoding decisions.) Within a given macroblock, the top-field information and bottom-field information may be coded jointly or separately at any of various phases. The macroblock itself may be field transform coded or frame transform coded. Field and frame transform coding for macroblocks is described in further detail below.
An interlaced I-frame is two intra-coded fields of an interlaced video frame, where a macroblock includes information for the two fields. An interlaced P-frame is two fields of an interlaced video frame coded using forward prediction, and an interlaced B-frame is two fields of an interlaced video frame coded using bi-directional prediction, where a macroblock includes information for the two fields. Interlaced P- and B-frames may include intra-coded macroblocks as well as different types of predicted macroblocks. Interlaced BI-frames are a hybrid of interlaced I-frames and interlaced B-frames; they are intra-coded, but are not used as anchors for other frames.
FIG. 21C shows the interlaced video frame 2100 of FIG. 21A organized for encoding/decoding as fields 2160. Each of the two fields of the interlaced video frame 2100 is partitioned into macroblocks. The top field is partitioned into macroblocks such as the macroblock 2161, and the bottom field is partitioned into macroblocks such as the macroblock 2162. (Again, the macroblocks use a 4:2:0 format as shown in FIG. 20, and the organization and placement of luminance blocks and chrominance blocks within the macroblocks are not shown.) In the luminance plane, the macroblock 2161 includes 16 lines from the top field and the macroblock 2162 includes 16 lines from the bottom field, and each line is 16 pixels long. An interlaced I-field is a single, separately represented field of an interlaced video frame. An interlaced P-field is a single, separately represented field of an interlaced video frame coded using forward prediction, and an interlaced B-field is a single, separately represented field of an interlaced video frame coded using bi-directional prediction. Interlaced P- and B-fields may include intra-coded macroblocks as well as different types of predicted macroblocks. Interlaced BI-fields are a hybrid of interlaced I-fields and interlaced B-fields; they are intra-coded, but are not used as anchors for other fields.
Interlaced video frames organized for encoding/decoding as fields can include various combinations of different field types. For example, such a frame can have the same field type in both the top and bottom fields or different field types in each field. In one implementation, the possible combinations of field types include I/I, I/P, P/I, P/P, B/B, B/BI, BI/B, and BI/BI.
The term picture generally refers to source, coded or reconstructed image data. For progressive video, a picture is a progressive video frame. For interlaced video, a picture may refer to an interlaced video frame, the top field of the frame, or the bottom field of the frame, depending on the context.
Alternatively, the encoder 1800 and decoder 1900 are object-based, use a different macroblock or block format, or perform operations on sets of pixels of different size or configuration than 8×8 blocks and 16×16 macroblocks.
B. Video Encoder
FIG. 18 is a block diagram of a generalized video encoder system 1800. The encoder system 1800 receives a sequence of video pictures including a current picture 1805 (e.g., progressive video frame, interlaced video frame, or field of an interlaced video frame), and produces compressed video information 1895 as output. Particular embodiments of video encoders typically use a variation or supplemented version of the generalized encoder 1800.
The encoder system 1800 compresses predicted pictures and key pictures. For the sake of presentation, FIG. 18 shows a path for key pictures through the encoder system 1800 and a path for predicted pictures. Many of the components of the encoder system 1800 are used for compressing both key pictures and predicted pictures. The exact operations performed by those components can vary depending on the type of information being compressed.
A predicted picture (e.g., progressive P-frame or B-frame, interlaced P-field or B-field, or interlaced P-frame or B-frame) is represented in terms of prediction (or difference) from one or more other pictures (which are typically referred to as reference pictures or anchors). A prediction residual is the difference between what was predicted and the original picture. In contrast, a key picture (e.g., progressive I-frame, interlaced I-field, or interlaced I-frame) is compressed without reference to other pictures.
If the current picture 1805 is a forward-predicted picture, a motion estimator 1810 estimates motion of macroblocks or other sets of pixels of the current picture 1805 with respect to one or more reference pictures, for example, the reconstructed previous picture 1825 buffered in the picture store 1820. If the current picture 1805 is a bi-directionally-predicted picture, a motion estimator 1810 estimates motion in the current picture 1805 with respect to up to four reconstructed reference pictures (for an interlaced B-field, for example). Typically, a motion estimator estimates motion in a B-picture with respect to one or more temporally previous reference pictures and one or more temporally future reference pictures. Accordingly, the encoder system 1800 can use the separate stores 1820 and 1822 for multiple reference pictures. For more information on progressive B-frames and interlaced B-frames and B-fields, see U.S. patent application Ser. No. 10/622,378, entitled, “Advanced Bi-Directional Predictive Coding of Video Frames,” filed Jul. 18, 2003, and U.S. patent application Ser. No. 10/882,135, entitled, “Advanced Bi-Directional Predictive Coding of Interlaced Video,” filed Jun. 29, 2004, which is hereby incorporated herein by reference.
The motion estimator 1810 can estimate motion by pixel, ½ pixel, ¼ pixel, or other increments, and can switch the precision of the motion estimation on a picture-by-picture basis or other basis. The motion estimator 1810 (and compensator 1830) also can switch between types of reference picture pixel interpolation (e.g., between bicubic and bilinear) on a per-frame or other basis. The precision of the motion estimation can be the same or different horizontally and vertically. The motion estimator 1810 outputs as side information motion information 1815 such as differential motion vector information. The encoder 1800 encodes the motion information 1815 by, for example, computing one or more predictors for motion vectors, computing differentials between the motion vectors and predictors, and entropy coding the differentials. To reconstruct a motion vector, a motion compensator 1830 combines a predictor with differential motion vector information.
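The differential coding of motion vectors can be sketched as follows. The component-wise median-of-neighbors predictor shown is a common choice used here only for illustration; the actual predictor derivation is defined elsewhere in the codec.
    /* Sketch: differential motion vector coding. The encoder sends mv - pred;
       the decoder adds the decoded differential back to the same predictor. */
    typedef struct { int x, y; } MotionVector;
    static int median3(int a, int b, int c)
    {
        if (a > b) { int t = a; a = b; b = t; }   /* ensure a <= b */
        if (b > c) b = c;                         /* b = min(b, c) */
        return a > b ? a : b;                     /* median = max(a, min(b, c)) */
    }
    MotionVector predict_mv(MotionVector left, MotionVector top, MotionVector topright)
    {
        MotionVector p = { median3(left.x, top.x, topright.x),
                           median3(left.y, top.y, topright.y) };
        return p;
    }
    MotionVector mv_differential(MotionVector mv, MotionVector pred)     /* encoder side */
    {
        return (MotionVector){ mv.x - pred.x, mv.y - pred.y };
    }
    MotionVector mv_reconstruct(MotionVector diff, MotionVector pred)    /* decoder side */
    {
        return (MotionVector){ diff.x + pred.x, diff.y + pred.y };
    }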
The motion compensator 1830 applies the reconstructed motion vector to the reconstructed picture(s) 1825 to form a motion-compensated current picture 1835. The prediction is rarely perfect, however, and the difference between the motion-compensated current picture 1835 and the original current picture 1805 is the prediction residual 1845. During later reconstruction of the picture, the prediction residual 1845 is added to the motion compensated current picture 1835 to obtain a reconstructed picture that is closer to the original current picture 1805. In lossy compression, however, some information is still lost from the original current picture 1805. Alternatively, a motion estimator and motion compensator apply another type of motion estimation/compensation.
A frequency transformer 1860 converts the spatial domain video information into frequency domain (i.e., spectral) data. For block-based video pictures, the frequency transformer 1860 applies a DCT, variant of DCT, or other block transform to blocks of the pixel data or prediction residual data, producing blocks of frequency transform coefficients. Alternatively, the frequency transformer 1860 applies another conventional frequency transform such as a Fourier transform or uses wavelet or sub-band analysis. The frequency transformer 1860 may apply an 8×8, 8×4, 4×8, 4×4 or other size frequency transform.
A quantizer 1870 then quantizes the blocks of spectral data coefficients. The quantizer applies uniform, scalar quantization to the spectral data with a step-size that varies on a picture-by-picture basis or other basis. Alternatively, the quantizer applies another type of quantization to the spectral data coefficients, for example, a non-uniform, vector, or non-adaptive quantization, or directly quantizes spatial domain data in an encoder system that does not use frequency transformations. In addition to adaptive quantization, the encoder 1800 can use frame dropping, adaptive filtering, or other techniques for rate control.
The encoder 1800 may use special signaling for a skipped macroblock, which is a macroblock that has no information of certain types (e.g., no motion information for the macroblock and no residual information).
When a reconstructed current picture is needed for subsequent motion estimation/compensation, an inverse quantizer 1876 performs inverse quantization on the quantized spectral data coefficients. An inverse frequency transformer 1866 then performs the inverse of the operations of the frequency transformer 1860, producing a reconstructed prediction residual (for a predicted picture) or a reconstructed key picture. If the current picture 1805 was a key picture, the reconstructed key picture is taken as the reconstructed current picture (not shown). If the current picture 1805 was a predicted picture, the reconstructed prediction residual is added to the motion-compensated current picture 1835 to form the reconstructed current picture. One or both of the picture stores 1820, 1822 buffers the reconstructed current picture for use in motion compensated prediction. In some embodiments, the encoder applies a de-blocking filter to the reconstructed frame to adaptively smooth discontinuities and other artifacts in the picture.
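The reconstruction path just described can be sketched for one inter-coded 8×8 block as follows. The inverse transform is stubbed and the 0-255 clamping is a conventional assumption, so this is only an outline of where inverse quantization, residual addition, and the optional de-blocking sit in the loop.
    /* Sketch: reconstruct one inter-coded 8x8 block by inverse quantizing,
       inverse transforming (stubbed), and adding the residual to the
       motion-compensated prediction with clamping. In-loop deblocking of the
       full reconstructed picture would follow before it is used as a reference. */
    static void inverse_transform_8x8(const int coeff[64], int residual[64])
    {
        for (int i = 0; i < 64; i++)
            residual[i] = coeff[i];               /* stand-in for the inverse DCT */
    }
    static unsigned char clamp_pixel(int v)
    {
        return (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
    }
    void reconstruct_block(const int quant[64], int step,
                           const unsigned char pred[64], unsigned char recon[64])
    {
        int coeff[64], residual[64];
        for (int i = 0; i < 64; i++)
            coeff[i] = quant[i] * step;           /* inverse quantization */
        inverse_transform_8x8(coeff, residual);
        for (int i = 0; i < 64; i++)
            recon[i] = clamp_pixel(pred[i] + residual[i]);
    }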
The entropy coder 1880 compresses the output of the quantizer 1870 as well as certain side information (e.g., motion information 1815, quantization step size). Typical entropy coding techniques include arithmetic coding, differential coding, Huffman coding, run length coding, LZ coding, dictionary coding, and combinations of the above. The entropy coder 1880 typically uses different coding techniques for different kinds of information (e.g., DC coefficients, AC coefficients, different kinds of side information), and can choose from among multiple code tables within a particular coding technique.
The entropy coder 1880 provides compressed video information 1895 to the multiplexer [“MUX”] 1890. The MUX 1890 may include a buffer, and a buffer level indicator may be fed back to bit rate adaptive modules for rate control. Before or after the MUX 1890, the compressed video information 1895 can be channel coded for transmission over the network. The channel coding can apply error detection and correction data to the compressed video information 1895.
C. Video Decoder
FIG. 19 is a block diagram of a general video decoder system 1900. The decoder system 1900 receives information 1995 for a compressed sequence of video pictures and produces output including a reconstructed picture 1905 (e.g., progressive video frame, interlaced video frame, or field of an interlaced video frame). Particular embodiments of video decoders typically use a variation or supplemented version of the generalized decoder 1900.
The decoder system 1900 decompresses predicted pictures and key pictures. For the sake of presentation, FIG. 19 shows a path for key pictures through the decoder system 1900 and a path for forward-predicted pictures. Many of the components of the decoder system 1900 are used for decompressing both key pictures and predicted pictures. The exact operations performed by those components can vary depending on the type of information being decompressed.
A DEMUX 1990 receives the information 1995 for the compressed video sequence and makes the received information available to the entropy decoder 1980. The DEMUX 1990 may include a jitter buffer and other buffers as well. Before or after the DEMUX 1990, the compressed video information can be channel decoded and processed for error detection and correction.
The entropy decoder 1980 entropy decodes entropy-coded quantized data as well as entropy-coded side information (e.g., motion information 1915, quantization step size), typically applying the inverse of the entropy encoding performed in the encoder. Entropy decoding techniques include arithmetic decoding, differential decoding, Huffman decoding, run length decoding, LZ decoding, dictionary decoding, and combinations of the above. The entropy decoder 1980 typically uses different decoding techniques for different kinds of information (e.g., DC coefficients, AC coefficients, different kinds of side information), and can choose from among multiple code tables within a particular decoding technique.
The decoder 1900 decodes the motion information 1915 by, for example, computing one or more predictors for motion vectors, entropy decoding differential motion vectors, and combining decoded differential motion vectors with predictors to reconstruct motion vectors.
A motion compensator 1930 applies motion information 1915 to one or more reference pictures 1925 to form a prediction 1935 of the picture 1905 being reconstructed. For example, the motion compensator 1930 uses one or more macroblock motion vectors to find macroblock(s) in the reference picture(s) 1925. One or more picture stores (e.g., picture stores 1920, 1922) store previous reconstructed pictures for use as reference pictures. Typically, B-pictures have more than one reference picture (e.g., at least one temporally previous reference picture and at least one temporally future reference picture). Accordingly, the decoder system 1900 can use separate picture stores 1920 and 1922 for multiple reference pictures. The motion compensator 1930 can compensate for motion at pixel, ½ pixel, ¼ pixel, or other increments, and can switch the precision of the motion compensation on a picture-by-picture basis or other basis. The motion compensator 1930 also can switch between types of reference picture pixel interpolation (e.g., between bicubic and bilinear) on a per-frame or other basis. The precision of the motion compensation can be the same or different horizontally and vertically. Alternatively, a motion compensator applies another type of motion compensation. The prediction by the motion compensator is rarely perfect, so the decoder 1900 also reconstructs prediction residuals.
An inverse quantizer 1970 inverse quantizes entropy-decoded data. In general, the inverse quantizer applies uniform, scalar inverse quantization to the entropy-decoded data with a step-size that varies on a picture-by-picture basis or other basis. Alternatively, the inverse quantizer applies another type of inverse quantization to the data, for example, to reconstruct after a non-uniform, vector, or non-adaptive quantization, or directly inverse quantizes spatial domain data in a decoder system that does not use inverse frequency transformations.
An inverse frequency transformer 1960 converts the quantized, frequency domain data into spatial domain video information. For block-based video pictures, the inverse frequency transformer 1960 applies an inverse DCT [“IDCT”], variant of IDCT, or other inverse block transform to blocks of the frequency transform coefficients, producing pixel data or prediction residual data for key pictures or predicted pictures, respectively. Alternatively, the inverse frequency transformer 1960 applies another conventional inverse frequency transform such as an inverse Fourier transform or uses wavelet or sub-band synthesis. The inverse frequency transformer 1960 may apply an 8×8, 8×4, 4×8, 4×4, or other size inverse frequency transform.
For a predicted picture, the decoder 1900 combines the reconstructed prediction residual 1945 with the motion compensated prediction 1935 to form the reconstructed picture 1905. When the decoder needs a reconstructed picture 1905 for subsequent motion compensation, one or both of the picture stores (e.g., picture store 1920) buffers the reconstructed picture 1905 for use in predicting the next picture. In some embodiments, the decoder 1900 applies a de-blocking filter to the reconstructed picture to adaptively smooth discontinuities and other artifacts in the picture. Various techniques for in-loop deblocking filtering are described below.
III. Loop Filtering
Quantization and other lossy processing of prediction residuals can cause blocking artifacts at block boundaries. Blocking artifacts can be especially troublesome in reference frames that are used for motion estimation and compensation of subsequent predicted frames. To reduce blocking artifacts, a video encoder/decoder can use a deblocking filter to perform in-loop filtering across boundary rows and/or columns in the frame. For example, a video encoder/decoder processes a reconstructed reference frame to reduce blocking artifacts prior to motion estimation/compensation using the reference frame. With in-loop deblocking, a reference frame becomes a better reference candidate to encode the following frame. The deblocking filter improves the quality of motion estimation/compensation, resulting in better prediction and lower bitrate for prediction residuals.
FIG. 22 shows a motion estimation/compensation loop 2200 in a video encoder that includes a deblocking filter. Motion estimation/compensation loop 2200 includes motion estimation 2210 and motion compensation 2220 of an input picture 2205. Motion estimation 2210 finds motion information for the input picture 2205 with respect to a reference picture 2295 (or pictures), which is typically a previously reconstructed intra- or inter-coded picture. Alternatively, the loop filter is applied to backward-predicted or bi-directionally-predicted pictures. Motion estimation 2210 produces motion information such as a set of one or more motion vectors for the input picture 2205. Motion compensation 2220 applies the motion information to the reference picture(s) 2295 to produce a predicted picture 2225. The prediction is rarely perfect, so the encoder computes 2230 the error or residual 2235 as the difference between the original input picture 2205 and the predicted picture 2225.
Frequency transformer 2240 frequency transforms the prediction residual 2235, and quantizer 2250 quantizes the frequency coefficients for the prediction residual 2235 before passing them to downstream components of the encoder. Inverse quantizer 2260 inverse quantizes the frequency coefficients of the prediction residual 2235, and inverse frequency transformer 2270 changes the prediction residual 2235 back to the spatial domain, producing a reconstructed error 2275 for the input picture 2205. The encoder combines 2280 the reconstructed error 2275 with the predicted picture 2225 to produce a reconstructed picture. The encoder applies the deblocking loop filter 2290 to the reconstructed picture and stores it in a picture buffer 2292 for use as a possible reference picture 2295 for the next input picture.
FIG. 23 shows a motion compensation loop 2300 in a video decoder that includes a deblocking filter. Motion compensation loop 2300 includes motion compensation 2320, which applies motion information 2315 received from the encoder to a reference picture 2395 (or pictures) to produce a predicted picture 2325. In a separate path, inverse quantizer 2360 inverse quantizes the frequency coefficients of a prediction residual, and inverse frequency transformer 2370 changes the prediction residual back to the spatial domain, producing a reconstructed error 2375.
The decoder combines 2380 the reconstructed error 2375 with the predicted picture 2325 to produce reconstructed picture 2385, which is output from the decoder. The decoder applies a deblocking loop filter 2390 to the reconstructed picture 2385 and stores the reconstructed picture in a picture buffer 2392 for use as a possible reference picture 2395 for the next input picture.
Alternatively, the arrangement or constituents of motion estimation/compensation loop 2200 or motion compensation loop 2300 can be changed, but the encoder/decoder still applies the deblocking loop filter.
IV. Innovations in In-Loop Deblocking Filtering for Interlaced Video
Described embodiments include techniques and tools for performing in-loop deblocking filtering in interlace frame coded pictures (e.g., interlaced P-frames, interlaced B-frames, interlaced I-frames, etc.) to reduce blocking artifacts. Overall use/non-use of in-loop deblocking can be signaled, for example, at entry point level or sequence level in a bitstream, so as to indicate whether or not in-loop deblocking is enabled from the entry point or in the sequence.
In some implementations, 16×16 macroblocks are subdivided into 8×8 blocks, and each inter-coded block can be transform coded using an 8×8 transform, two 4×8 transforms, two 8×4 transforms, or four 4×4 transforms. Prior to block transform coding, an encoder/decoder can permute the macroblock in such a way that all the even lines (top field lines) of the macroblock are grouped at the top of the macroblock and all the odd lines (bottom field lines) are grouped at the bottom of the macroblock. The effect of the permutation on the macroblock is to make each 8×8 block inside the macroblock contain only information from one particular field. If the macroblock is permuted in this way, the macroblock is deemed to be field coded. If the macroblock is not permuted in this way, the macroblock is deemed to be frame coded.
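The field permutation described above can be sketched as a simple row shuffle of the 16×16 luma macroblock. The function name and the use of a temporary buffer are illustrative assumptions; the inverse permutation (re-interlacing) simply reverses the row mapping.

```c
#include <stdint.h>
#include <string.h>

#define MB_SIZE 16

/* Field-transform permutation of a 16x16 luma macroblock: even (top field)
 * lines are grouped into rows 0..7 and odd (bottom field) lines into rows
 * 8..15, so that each 8x8 block holds samples from a single field.  'stride'
 * is the distance between vertically adjacent lines in the plane. */
void permute_macroblock_to_field(uint8_t *mb, int stride)
{
    uint8_t tmp[MB_SIZE][MB_SIZE];

    for (int y = 0; y < MB_SIZE; y++) {
        int dst = (y & 1) ? (8 + y / 2) : (y / 2);
        memcpy(tmp[dst], mb + y * stride, MB_SIZE);
    }
    for (int y = 0; y < MB_SIZE; y++)
        memcpy(mb + y * stride, tmp[y], MB_SIZE);
}
```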
Field coding shifts the location of the horizontal block boundaries on the final re-interlaced macroblock/frame. For example, when a macroblock is field coded with all 8×8 blocks, the internal 8×8 block boundary of the macroblock will be shifted to the top and bottom macroblock boundaries.
Furthermore, filtering lines of different fields together can lead to blurring and distortion due to the fact that different fields are scanned at different times.
Accordingly, described embodiments implement one or more techniques and tools for performing in-loop deblocking filtering in interlaced video including, but not limited to, the following:
    • 1. Field-based in-loop deblocking filtering without filtering across field boundaries.
    • 2. Identification of correct horizontal block boundaries according to field/frame coding type and transform size to ensure filtering of correct block transform boundaries.
      The described techniques and tools can be used in combination with one another or with other techniques and tools, or can be used independently.
      Ordering of Filtering Operations
For both inter and intra interlaced frames, an encoder/decoder performs in-loop deblocking filtering by processing horizontal boundaries first, followed by vertical boundaries. In some implementations, the horizontal boundaries are processed one macroblock at a time in raster scan order. Similarly, the vertical edges are processed one macroblock at a time in raster scan order. Pseudo-code 2400 in FIG. 24 describes this ordered filtering process. Other valid implementations of the filtering process are possible but are not shown for the sake of simplicity.
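A minimal sketch of that ordering is shown below. The per-macroblock edge-filtering routines are placeholders standing in for the boundary filtering described in this section, not elements of the pseudo-code 2400.

```c
/* Placeholders for the per-macroblock boundary filtering described here. */
void filter_horizontal_edges_mb(int mb_row, int mb_col);
void filter_vertical_edges_mb(int mb_row, int mb_col);

/* Ordered in-loop deblocking: all horizontal boundaries are processed first,
 * one macroblock at a time in raster-scan order, then all vertical
 * boundaries in the same order. */
void deblock_picture(int mb_rows, int mb_cols)
{
    for (int r = 0; r < mb_rows; r++)
        for (int c = 0; c < mb_cols; c++)
            filter_horizontal_edges_mb(r, c);

    for (int r = 0; r < mb_rows; r++)
        for (int c = 0; c < mb_cols; c++)
            filter_vertical_edges_mb(r, c);
}
```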
Filter Operations
Since the minimum number of consecutive pixels that are filtered in a row or column is four and the total number of pixels in a row or column is always a multiple of four, an encoder/decoder performs adaptive filtering operations on segments of four pixels in some implementations.
For example, if the eight pixel pairs that make up the vertical boundary between two blocks are adaptively filtered, the eight pixel pairs are divided into two 4-pixel segments as shown in FIG. 12. In each 4-pixel segment, the third pixel pair is adaptively filtered first as indicated by the Xs in FIG. 12. The result of this filter operation determines whether the other three pixel pairs in the segment are also adaptively filtered.
FIG. 13 shows the pixels that are used in the filtering operation performed on the 3rd pixel pair. In FIG. 13, pixels P4 and P5 are the pixels that may be changed in the filter operation.
The pseudo-code 1400 of FIG. 14 shows the adaptive filtering operation performed on the 3rd pixel pair in each segment. The encoder/decoder determines whether to filter the other three pixels based on the pixel values in the line of pixels containing the 3rd pixel pair. The value filter_other3_pixels indicates whether the remaining three pixel pairs in the segment are also filtered. If filter_other3_pixels=TRUE, then the other three pixel pairs are adaptively filtered. If filter_other3_pixels=FALSE, then they are not filtered, and the adaptive filtering operation proceeds to the next 4-pixel segment. The pseudo-code 1500 of FIG. 15 shows the adaptive filtering operation that is performed on the 1st, 2nd and 4th pixel pair if filter_other3_pixels=TRUE. In pseudo-code 1400 and pseudo-code 1500, the variable PQUANT represents a quantization step size.
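The referenced figures are not reproduced here. As a hedged illustration, the sketch below shows the general shape of the decision on the 3rd pixel pair; the specific coefficients follow a widely published VC-1-style loop filter and should be treated as an assumption rather than a transcription of pseudo-code 1400 and 1500.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Sketch of the adaptive filter applied to the 3rd pixel pair of a 4-pixel
 * segment.  p[1]..p[8] are the samples straddling the boundary (p[4] and p[5]
 * are the pair that may be modified); pquant is the quantization step size.
 * Returns filter_other3_pixels.  The arithmetic is an assumed VC-1-style
 * filter, not a transcription of the pseudo-code in FIG. 14. */
bool filter_third_pixel_pair(int p[9], int pquant)
{
    int a0 = (2 * (p[3] - p[6]) - 5 * (p[4] - p[5]) + 4) >> 3;
    if (abs(a0) >= pquant)
        return false;

    int a1 = (2 * (p[1] - p[4]) - 5 * (p[2] - p[3]) + 4) >> 3;
    int a2 = (2 * (p[5] - p[8]) - 5 * (p[6] - p[7]) + 4) >> 3;
    int a3 = abs(a1) < abs(a2) ? abs(a1) : abs(a2);
    if (a3 >= abs(a0))
        return false;

    int clip = (p[4] - p[5]) / 2;
    if (clip == 0)
        return false;                       /* nothing to correct */

    int sign = (a0 < 0) ? -1 : 1;
    int d = 5 * (sign * a3 - a0) / 8;
    if (clip > 0) { if (d < 0) d = 0; if (d > clip) d = clip; }
    else          { if (d > 0) d = 0; if (d < clip) d = clip; }

    p[4] -= d;                              /* pull the boundary pair together */
    p[5] += d;
    return true;                            /* also filter the other 3 pairs */
}
```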
A. Field-based In-Loop Deblocking Filtering
In some implementations, an encoder/decoder performs field-based in-loop deblocking filtering. For example, an encoder/decoder filters top field lines and bottom field lines separately during in-loop deblocking filtering.
FIGS. 12, 13, 14 and 15 depict the loop filtering decision process for progressive frames, which involves deciding whether to perform loop filtering for four adjacent rows (for filtering across a vertical boundary, as shown in FIG. 12) or columns (for filtering across a horizontal boundary) of samples at a time, on the four samples on each side of the vertical or horizontal boundary.
In one implementation, the filter operations described above with reference to FIGS. 12, 13, 14 and 15 are modified such that the filtering is always done using the same field lines (i.e., without mixing samples of different field polarities).
FIG. 25 shows a technique 2500 for performing field-based deblocking filtering. At 2510, an encoder/decoder gets pixel data from field lines having the same polarity (e.g., top or bottom) in a current block and/or neighboring block(s). At 2520, the encoder/decoder performs in-loop deblocking across a boundary within the current block or between the current block and a neighboring block.
For example, for interlaced frame coded pictures, an encoder/decoder makes a loop filtering decision for a vertical block boundary using four alternating rows of same-polarity samples instead of adjacent rows of mixed-polarity samples. The encoder/decoder makes a loop filtering decision for the two even field lines closest to the horizontal block boundary using the four even field lines on each side of the boundary. The encoder/decoder makes the decision for the two odd field lines closest to the boundary using the four odd field lines on each side of the boundary.
FIGS. 26A-26B show examples of field-based filtering for horizontal and vertical block boundaries, respectively. In FIG. 26A, for a horizontal block boundary between a current block 2610 and a neighboring block 2620 below the current block, the two top field lines are filtered across the block boundary using top field lines only and the two bottom field lines across the block boundary are filtered using bottom field lines only. In FIG. 26B, for a vertical block boundary between the current block 2610 and a neighboring block 2630 to the right of the current block, the top field and the bottom field are filtered separately across the block boundary. For example, FIG. 26B shows filtering of the top field lines across the vertical block boundary.
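The same-polarity constraint can be sketched as follows for a vertical block boundary. The segment filter called here is a placeholder for the 4-pixel-segment operation described earlier, invoked with a row step of 2 so that only lines of one field contribute; the names are assumptions.

```c
#include <stdint.h>

/* Placeholder: adaptive filtering of one 4-line segment across a vertical
 * edge, where consecutive filtered lines are 'row_step' apart. */
void filter_segment_across_vertical_edge(uint8_t *plane, int stride,
                                         int edge_x, int first_row,
                                         int row_step, int pquant);

/* Field-based filtering across a vertical block boundary in an interlaced
 * frame: the top-field (even) rows and bottom-field (odd) rows are filtered
 * as two separate segments so that samples of different fields never mix. */
void filter_vertical_edge_field_based(uint8_t *plane, int stride,
                                      int edge_x, int block_top_row, int pquant)
{
    /* Top field rows: block_top_row, +2, +4, +6. */
    filter_segment_across_vertical_edge(plane, stride, edge_x,
                                        block_top_row, 2, pquant);
    /* Bottom field rows: block_top_row+1, +3, +5, +7. */
    filter_segment_across_vertical_edge(plane, stride, edge_x,
                                        block_top_row + 1, 2, pquant);
}
```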
Alternatively, an encoder/decoder performs filtering of pixels in a different way (for example, using different combinations of pixels for filtering, or by performing different filtering operations), but still filters only lines of the same fields together.
B. Determining Block Boundaries for Filtering in Interlaced Frames
FIGS. 27A-27B show loop filtering of luminance blocks in an interlaced field-coded macroblock in some implementations. FIG. 27A shows field coding of luminance blocks of an interlaced macroblock. Field coding is applied to the four 8×8 luminance blocks 2710 of a 16×16 interlaced macroblock yielding field-coded luminance blocks 2720, shown with horizontal and vertical block boundaries (in bold). Each of the four field transform coded luminance blocks 2720 contains only information from the top field (even numbered lines) or the bottom field (odd numbered lines).
FIG. 27B shows reconstruction and loop filtering of the field-coded luminance blocks 2720.
Field coding shifts the location of the horizontal block boundaries on the final re-interlaced macroblock/frame. As shown in FIG. 27B, if a macroblock is field coded with all 8×8 blocks, the internal 8×8 block boundary of the macroblock will be shifted to the top and bottom macroblock boundaries, since there is effectively no boundary between lines 14 and 1, as they are from different fields. The location of block boundaries also depends on transform size.
Accordingly, in some implementations, an encoder/decoder uses field/frame type and transform size to determine block boundaries for in-loop deblocking filtering.
FIG. 28 shows a technique 2800 for using field/frame transform type and transform size to select block boundaries for in-loop deblocking filtering. At 2810, an encoder/decoder gets transform size and field/frame type information for a current macroblock. At 2820, the encoder/decoder selects block boundary lines for in-loop deblocking based at least in part on the transform size and field/frame type information. At 2830, the encoder/decoder performs in-loop deblocking on the selected boundary lines.
For example, after a frame has been reconstructed in a motion estimation/compensation loop, an encoder/decoder takes into account block/subblock transform size (e.g., 4×4, 4×8, 8×4, or 8×8) and field/frame transform type to determine the block boundaries to be filtered in a current macroblock. The encoder/decoder then performs in-loop deblocking on those boundaries using a field-based deblocking filter. The encoder/decoder performs an inverse permutation (re-interlacing) to form the final reconstructed frame.
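For the common case of 8×8 transforms, the boundary selection can be sketched as below. The row numbering relative to the top of the macroblock and the function name are assumptions for illustration; additional internal boundaries arise for 8×4, 4×8, and 4×4 transform sizes.

```c
#include <stdbool.h>

/* For 8x8-transformed luma blocks of a 16x16 macroblock, choose which
 * horizontal boundary rows to deblock, based on field/frame coding status
 * (FIELDTX).  Rows are offsets from the top of the macroblock.  A field-coded
 * macroblock has no filtered internal horizontal boundary; a frame-coded
 * macroblock is filtered between rows 7 and 8; both filter the boundary at
 * the bottom of the macroblock.  Returns the number of selected boundaries. */
int select_horizontal_boundaries_8x8(bool fieldtx, int rows[2])
{
    int n = 0;
    if (!fieldtx)
        rows[n++] = 8;    /* internal 8x8 boundary of a frame-coded macroblock */
    rows[n++] = 16;       /* boundary with the macroblock below */
    return n;
}
```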
FIGS. 27B and 29 show examples of how the boundaries to be filtered can depend on field/frame type for macroblocks within 8×8 transform size blocks. FIG. 27B shows loop filtering of field-coded 8×8 luminance blocks 2720. In the reconstructed luminance blocks 2730, there is in effect no internal horizontal boundary between blocks (no boundary between lines 7 and 8). Instead, the block boundaries coincide with the macroblock boundaries, which are already being filtered. No internal horizontal boundary is filtered. Filtered horizontal block boundary 2740 is a block boundary at the bottom of the macroblock and is filtered using top field lines 2750 and bottom field lines 2760. In field-based filtering, top field lines are filtered together and bottom field lines are filtered together without mixing fields.
FIG. 29 shows loop filtering of frame-coded 8×8 luminance blocks 2910. An internal horizontal block boundary lies between bottom field line 7 and top field line 8. When the frame transform coded luminance blocks 2910 are reconstructed to form reconstructed luminance blocks 2920, the position of the internal horizontal boundary remains the same. In the example shown in FIG. 29, the internal block boundary (shown as filtered horizontal block boundary 2930) is filtered using top field lines 2940 and bottom field lines 2942. Filtered horizontal block boundary 2932 is a block boundary at the bottom of the macroblock and is filtered using top field lines 2950 and bottom field lines 2952. Again, top field lines are filtered together and bottom field lines are filtered together without mixing fields.
The following paragraphs describe pseudo-code in FIGS. 30A-32B for filtering block boundaries in interlaced I-frames, P-frames and B-frames depending on field/frame type and transform size. The pseudo-code in FIGS. 30A-32B shows examples of how an encoder/decoder determines block boundaries to be filtered in one implementation. Other implementations are possible.
In the pseudo-code in FIGS. 30A-32B, row and column numbers represent rows and columns in current macroblocks and neighboring macroblocks. Row/column numbers 0-15 are in a current macroblock, and row/column numbers greater than 15 are in a neighboring macroblock. Block index numbers (Y0, Y1, etc.) follow the convention shown in FIG. 20, after field/frame coding. Field/frame transform type is indicated by the variable FIELDTX. In one implementation, FIELDTX is a macroblock-level bitstream element that is explicitly signaled in intra-coded macroblocks and inferred from another macroblock-level bitstream element (MBMODE) in inter-coded macroblocks. FIELDTX and MBMODE are explained in further detail in Section V, below.
In interlaced I-frames, each macroblock is 8×8 transform coded. For each macroblock, the horizontal block boundary filtering starts by filtering the intra-macroblock horizontal boundary only if the current macroblock is frame-coded. Next, the horizontal block boundary between the current macroblock and the macroblock directly below it (if available) is filtered. The pseudo-code 3000 in FIG. 30A describes the process of horizontal filtering for a macroblock in an interlaced I-frame. Vertical block boundary filtering starts by filtering the internal vertical boundary and then filtering the boundary between the current macroblock and the right neighboring macroblock (if available). The pseudo-code 3010 in FIG. 30B describes the process of the vertical filtering for a macroblock in an interlaced I-frame.
In interlaced P-frames and B-frames, each macroblock may be 4×4, 4×8, 8×4, or 8×8 transform coded. In one implementation, for each macroblock, the horizontal block boundary filtering occurs in the following order of blocks: Y0, Y1, Y2, Y3, Cb, Cr. The processing of the luma blocks depends on field/frame coding type. The pseudo-code 3100 in FIGS. 31A-B and pseudo-code 3110 in FIG. 31C describe the process of horizontal filtering for luma and chroma blocks, respectively, for macroblocks in interlaced P-frames or B-frames.
Similarly, for each macroblock in one implementation, the vertical block boundary filtering occurs in the same order of blocks: Y0, Y1, Y2, Y3, Cb, Cr. As with horizontal filtering, the processing of the luma blocks depends on field/frame coding type. The pseudo-code 3200 in FIGS. 32A-B and pseudo-code 3210 in FIG. 32C describe the process of vertical filtering for luma and chroma blocks, respectively, for macroblocks in interlaced P-frames or B-frames.
Alternatively, an encoder/decoder uses different rules to determine which block and/or subblock boundaries are filtered or the order in which they are filtered, but still uses field/frame coding type and transform size to determine which boundaries are filtered. As another alternative, an encoder/decoder performs filtering operations in a different way (for example, using different combinations of pixels for filtering, or by performing different filtering operations).
V. Combined Implementations
A detailed combined implementation for a bitstream syntax, semantics, and decoder is now described, in addition to an alternative combined implementation with minor differences from the main combined implementation.
A. Bitstream Syntax
In various combined implementations, data for interlaced frame coded pictures (e.g., interlaced P-frames, interlaced B-frames, interlaced I-frames, etc.) is presented in the form of a bitstream having plural layers (e.g., sequence, entry point, frame, field, macroblock, block and/or sub-block layers).
In the syntax diagrams, arrow paths show the possible flows of syntax elements. Syntax elements shown with square-edged boundaries indicate fixed-length syntax elements; those with rounded boundaries indicate variable-length syntax elements and those with a rounded boundary within an outer rounded boundary indicate a syntax element (e.g., a bitplane) made up of simpler syntax elements. A fixed-length syntax element is defined to be a syntax element for which the length of the syntax element is not dependent on data in the syntax element itself; the length of a fixed-length syntax element is either constant or determined by prior data in the syntax flow. A lower layer in a layer diagram (e.g., a macroblock layer in a frame-layer diagram) is indicated by a rectangle within a rectangle.
Entry-point-level bitstream elements are shown in FIG. 33. In general, an entry point marks a position in a bitstream (e.g., an I-frame or other key frame) at which a decoder can begin decoding. In other words, no pictures before the entry point in the bitstream are needed to decode pictures after the entry point. An entry point header can be used to signal changes in coding control parameters (e.g., enabling or disabling compression tools, such as in-loop deblocking filtering, for frames following an entry point).
For interlaced I-frames, P-frames, and B-frames, frame-level bitstream elements are shown in FIGS. 34, 35, and 36, respectively. (Frame-level bitstream elements for interlaced BI-frames are identical to those for interlaced I-frames.) Data for each frame consists of a frame header followed by data for the macroblock layer (whether for intra or various inter type macroblocks).
The bitstream elements that make up the macroblock layer for interlaced P-frames (whether for intra or various inter type macroblocks) are shown in FIG. 37. Bitstream elements in the macroblock layer for interlaced P-frames (e.g., FIELDTX) may be present for macroblocks in other interlaced pictures (e.g., interlaced B-frames, interlaced I-frames, etc.).
The following sections describe selected bitstream elements in the frame and macroblock layers that are related to signaling for interlaced pictures. Although the selected bitstream elements are described in the context of a particular layer, some bitstream elements can be used in more than one layer.
1. Selected Entry Point Layer Elements
Loop Filter (LOOPFILTER) (1 Bit)
LOOPFILTER is a Boolean flag that indicates whether loop filtering is enabled for the entry point segment. If LOOPFILTER=0, then loop filtering is not enabled. If LOOPFILTER=1, then loop filtering is enabled. In an alternative combined implementation, LOOPFILTER is a sequence level element.
Variable Sized Transform (VSTRANSFORM) (1 Bit)
VSTRANSFORM is a Boolean flag that indicates whether variable-sized transform coding is enabled for the sequence. If VSTRANSFORM=0, then variable-sized transform coding is not enabled. If VSTRANSFORM=1, then variable-sized transform coding is enabled.
2. Selected Frame Layer Elements
FIGS. 34, 35, and 36, are diagrams showing frame-level bitstream syntaxes for interlaced I-frames, P-frames, and B-frames, respectively. Specific bitstream elements are described below.
Frame Coding Mode (FCM) (Variable Size)
FCM is a variable length codeword [“VLC”] used to indicate the picture coding type. FCM takes on values for frame coding modes as shown in Table 1 below:
TABLE 1
Frame Coding Mode VLC

FCM value   Frame Coding Mode
0           Progressive
10          Frame-Interlace
11          Field-Interlace
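Decoding FCM from Table 1 amounts to reading one or two bits of the prefix code; the bitstream-reader helper below is assumed for illustration.

```c
typedef enum { FCM_PROGRESSIVE, FCM_FRAME_INTERLACE, FCM_FIELD_INTERLACE } FrameCodingMode;

extern int read_bit(void);   /* assumed bitstream reader, returns 0 or 1 */

/* '0' => progressive, '10' => frame-interlace, '11' => field-interlace. */
FrameCodingMode decode_fcm(void)
{
    if (read_bit() == 0)
        return FCM_PROGRESSIVE;
    return (read_bit() == 0) ? FCM_FRAME_INTERLACE : FCM_FIELD_INTERLACE;
}
```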

Picture Type (PTYPE) (Variable Size)
PTYPE is a variable size syntax element present in the frame header for interlaced P-frames and interlaced B-frames (or other kinds of interlaced frames such as interlaced I-frames). PTYPE takes on values for different frame types according to Table 2 below.
TABLE 2
Picture Type VLC

PTYPE VLC   Picture Type
110         I
0           P
10          B
1110        BI
1111        Skipped

If PTYPE indicates that the frame is skipped then the frame is treated as a P-frame which is identical to its reference frame. The reconstruction of the skipped frame is equivalent conceptually to copying the reference frame. A skipped frame means that no further data is transmitted for this frame.
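PTYPE can be decoded in the same way as FCM; the sketch below walks the prefix code of Table 2, again assuming a read_bit() helper.

```c
typedef enum { PIC_I, PIC_P, PIC_B, PIC_BI, PIC_SKIPPED } PictureType;

extern int read_bit(void);   /* assumed bitstream reader, returns 0 or 1 */

/* Table 2 prefix code: 0 => P, 10 => B, 110 => I, 1110 => BI, 1111 => Skipped. */
PictureType decode_ptype(void)
{
    if (read_bit() == 0) return PIC_P;
    if (read_bit() == 0) return PIC_B;
    if (read_bit() == 0) return PIC_I;
    return (read_bit() == 0) ? PIC_BI : PIC_SKIPPED;
}
```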
Macroblock-Level Transform Type Flag (TTMBF) (1 bit)
This syntax element is present in P-frames and B-frames if the sequence-level syntax element VSTRANSFORM=1. TTMBF is a one-bit syntax element that signals whether transform type coding is enabled at the frame or macroblock level. If TTMBF=1, the same transform type is used for all blocks in the frame. In this case, the transform type is signaled in the Frame-level Transform Type (TTFRM) syntax element that follows. If TTMBF=0, the transform type may vary throughout the frame and is signaled at the macroblock or block levels.
Frame-Level Transform Type (TTFRM) (2 bits)
This syntax element is present in P-frames and B-frames if VSTRANSFORM=1 and TTMBF=1. TTFRM signals the transform type used to transform the 8×8 pixel error signal in predicted blocks. The 8×8 error blocks may be transformed using an 8×8 transform, two 8×4 transforms, two 4×8 transforms or four 4×4 transforms.
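The interaction of VSTRANSFORM, TTMBF, and TTFRM can be sketched as follows. The enum ordering assigned to the two TTFRM bits and the helper names are assumptions; only the control flow (frame-level type versus per-macroblock/block signaling) is taken from the text above.

```c
typedef enum { TT_8X8, TT_8X4, TT_4X8, TT_4X4 } TransformType;

extern int read_bit(void);        /* assumed bitstream helpers */
extern int read_bits(int n);

/* Returns 1 and sets *frame_tt when a single transform type applies to the
 * whole frame; returns 0 when the type will instead be signaled per
 * macroblock or block via TTMB/TTBLK.  The mapping of the 2-bit TTFRM value
 * onto the enum is illustrative only. */
int parse_transform_type_signaling(int vstransform, TransformType *frame_tt)
{
    if (!vstransform) {
        *frame_tt = TT_8X8;                        /* variable-size transforms disabled */
        return 1;
    }
    if (read_bit()) {                              /* TTMBF = 1: frame-level signaling */
        *frame_tt = (TransformType)read_bits(2);   /* TTFRM */
        return 1;
    }
    return 0;                                      /* TTMBF = 0: TTMB per macroblock */
}
```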
Field Transform Bitplane (FIELDTX) (Variable Size)
At frame level or field level, FIELDTX is a bitplane indicating whether macroblocks in an interlaced I-frame are frame-coded or field-coded. FIELDTX is explained in further detail below.
3. Selected Macroblock Layer Elements
FIG. 37 is a diagram showing a macroblock-level bitstream syntax for macroblocks in interlaced P-frames in the combined implementation. Specific bitstream elements are described below. Data for a macroblock consists of a macroblock header followed by block layer data. Bitstream elements in the macroblock layer for interlaced P-frames (e.g., FIELDTX) may potentially be present for macroblocks in other interlaced pictures (e.g., interlaced B-frames, etc.).
Macroblock Mode (MBMODE) (Variable Size)
MBMODE is a variable-size syntax element that jointly specifies macroblock type (e.g., 1 MV, 2 Field MV, 4 Field MV, 4 Frame MV or Intra), transform type (e.g., field, frame, or no coded blocks), and the presence of differential motion vector data for 1 MV macroblocks. MBMODE is explained in detail below.
Field Transform Flag (FIELDTX) (1 Bit)
FIELDTX is a 1-bit syntax element present in interlaced B-frame intra-coded macroblocks. This syntax element indicates whether a macroblock is frame or field coded (basically, the internal organization of the macroblock). FIELDTX=1 indicates that the macroblock is field-coded. Otherwise, the macroblock is frame-coded. In inter-coded macroblocks, this syntax element can be inferred from MBMODE as explained in detail below.
MB-Level Transform Type (TTMB) (Variable Size)
TTMB is a variable-size syntax element in P-frame and B-frame macroblocks when the picture layer syntax element TTMBF=0. TTMB specifies a transform type, transform type signal level, and subblock pattern.
If TTMB indicates that the signaling level is block level, then the transform type is signaled at block level. At block level, for a block that contains residual information, TTBLK indicates the transform type used for the block. TTBLK is not present for the first coded block since the transform type for that block is jointly coded in TTMB. TTBLK is present for all the remaining coded blocks and indicates the transform type. If the transform type is 8×4 or 4×8, the subblock pattern is decoded as part of TTMB (for the first coded block) or TTBLK (for each remaining coded block after the first one). If the transform type is 4×4, the subblock pattern is encoded in SUBBLKPAT at the block level for each coded block.
If the transform type signaling level is macroblock level and the transform type is 8×4, 4×8, or 4×4, the decoder may still need information about which subblocks have non-zero coefficients. If the transform type is 8×4 or 4×8, the subblock pattern is decoded as part of TTMB (for the first coded block) or SUBBLKPAT (for each remaining coded block). If the transform type is 4×4, the subblock pattern is encoded in SUBBLKPAT at the block level for each coded block.
Finally, if the transform type signaling level is frame level and the transform type is 8×4, 4×8, or 4×4, the decoder needs information about which subblocks have non-zero coefficients. The subblock pattern is encoded in SUBBLKPAT at the block level for each coded block.
If a subblock pattern indicates no non-zero coefficients are present for a subblock, then no additional coefficient information for that subblock is present in the bitstream. For an 8×4 transform type, data for the top subblock (if present) is coded first followed by data for the bottom subblock (if present). For a 4×8 transform type, data for the left subblock (if present) is coded first followed by data for the right subblock (if present). For a 4×4 transform type, data for the upper left subblock is coded first (if present) followed, in order, by data for the upper right, lower left and lower right subblocks (if present).
B. Decoding Aspects of Interlaced P-Frames
In an interlaced P-frame, each macroblock may be motion compensated in frame mode using one or four motion vectors or in field mode using two or four motion vectors. A macroblock that is inter-coded does not contain any intra blocks. In addition, the residual after motion compensation may be coded in frame transform mode or field transform mode. More specifically, the luma component of the residual is re-arranged according to fields if it is coded in field transform mode but remains unchanged in frame transform mode, while the chroma component remains the same. A macroblock may also be coded as intra.
Motion compensation may be restricted to exclude the four-motion-vector modes (both field and frame). The type of motion compensation and residual coding is jointly indicated for each macroblock through MBMODE and a skipped macroblock signal (SKIPMB).
Macroblocks in interlaced P-frames are classified into five types: 1 MV, 2 Field MV, 4 Frame MV, 4 Field MV, and Intra. The first four types of macroblock are inter-coded while the last type indicates that the macroblock is intra-coded. The macroblock type is signaled by the MBMODE syntax element in the macroblock layer along with the skip bit. (A skip condition for the macroblock also can be signaled at frame level in a compressed bit plane.) MBMODE jointly encodes macroblock types along with various pieces of information regarding the macroblock for different types of macroblock.
Macroblock Mode Signaling
MBMODE jointly specifies the type of macroblock (1 MV, 4 Frame MV, 2 Field MV, 4 Field MV, or intra), the type of transform for inter-coded macroblocks (i.e., field or frame or no coded blocks), and whether there is a differential motion vector for a 1 MV macroblock. MBMODE can take one of 15 possible values:
Let <MVP> denote the signaling of whether a nonzero 1 MV differential motion vector is present or absent. Let <Field/Frame transform> denote the signaling of whether the residual of the macroblock is (1) frame transform coded; (2) field transform coded; or (3) zero coded blocks (i.e. CBP=0). MBMODE signals the following information jointly:
    • MBMODE={<1 MV, MVP, Field/Frame transform>, <2 Field MV, Field/Frame transform>, <4 Frame MV, Field/Frame transform>, <4 Field MV, Field/Frame transform>, <INTRA>};
      The case <1 MV, MVP=0, CBP=0> is not signaled by MBMODE, but is signaled by the skip condition.
For inter-coded macroblocks, the CBPCY syntax element is not decoded when <Field/frame Transform> in MBMODE indicates no coded blocks. On the other hand, if <Field/frame Transform> in MBMODE indicates field or frame transform, then CBPCY is decoded.
The decoded <Field/frame Transform> is used to set the flag FIELDTX. If it indicates that the macroblock is field transform coded, FIELDTX is set to 1. If it indicates that the macroblock is frame transform coded, FIELDTX is set to 0. If it indicates a zero-coded block, FIELDTX is set to the same type as the motion vector, i.e., FIELDTX is set to 1 if it is a field motion vector and to 0 if it is a frame motion vector.
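That rule maps directly onto a small helper; the enum and parameter names below are illustrative assumptions.

```c
typedef enum { FF_FRAME_CODED, FF_FIELD_CODED, FF_NO_CODED_BLOCKS } FieldFrameTransform;

/* Derive FIELDTX for an inter-coded macroblock from the decoded
 * <Field/frame transform> element of MBMODE: field => 1, frame => 0, and for
 * zero coded blocks follow the motion vector type (field MV => 1, frame MV => 0). */
int infer_fieldtx(FieldFrameTransform tx, int motion_vector_is_field)
{
    switch (tx) {
    case FF_FIELD_CODED:     return 1;
    case FF_FRAME_CODED:     return 0;
    case FF_NO_CODED_BLOCKS: return motion_vector_is_field ? 1 : 0;
    }
    return 0;   /* unreachable for valid input */
}
```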
For non-1 MV inter-coded macroblocks, an additional field is sent to indicate which of the differential motion vectors is non-zero. In the case of 2 Field MV macroblocks, the 2 MVBP field is sent to indicate which of the two motion vectors contain nonzero differential motion vectors. Similarly, the 4 MVBP field is sent to indicate which of the four motion vectors contain nonzero differential motion vectors.
For intra-coded macroblocks, the Field/Frame transform and zero coded blocks are coded in separate fields.
C. In-Loop Deblocking Filtering for Progressive Frames
Before describing the process for in-loop deblocking filtering for interlaced frames in the combined implementation, a process for in-loop deblocking filtering for progressive frames is described. The section describing the process for in-loop deblocking filtering for interlaced frames will proceed with reference to concepts discussed in this section.
In the combined implementation, if the entry-point-layer syntax element LOOPFILTER=1, an adaptive filtering operation is performed on each reconstructed frame in the entry point segment. This filtering operation is performed prior to using the reconstructed frame as a reference for motion compensation. When there are multiple slices in a picture, the filtering for each slice is performed independently.
Since the intent of loop filtering is to smooth discontinuities at block boundaries, the filtering process operates on pixels that border neighboring blocks. The locations of block boundaries depend on the size of the inverse transform used. For P-frames the block boundaries may occur at every 4th or 8th pixel row or column depending on whether an 8×8, 8×4 or 4×8 inverse transform is used. For I-frames, where an 8×8 transform is used, block boundaries occur at every 8th pixel row and column.
1. Progressive I-Frame In-Loop Deblocking Filtering
For progressive I-frames, adaptive deblocking filtering is performed at all 8×8 block boundaries. FIGS. 7 and 8 show the pixels that are filtered along the horizontal and vertical border regions in the upper left corner of a component (luma, Cb or Cr) plane. FIG. 7 shows filtered vertical block boundary pixels in an I-frame. FIG. 8 shows filtered horizontal block boundary pixels in an I-frame.
In FIGS. 7 and 8, crosses represent pixels (or, more precisely, samples) and circled crosses represent filtered pixels. As these figures show, the top horizontal line and first vertical line in the frame are not filtered, even though they lie on a block boundary, because these lines lie on the border of the frame. Although not depicted, the bottom horizontal line and last vertical line in the frame also are not filtered for the same reason. In more formal terms, the following lines are filtered:
    • Horizontal lines: (7, 8), (15, 16) . . . ((M−1)*8−1, (M−1)*8)
    • Vertical lines: (7, 8), (15, 16) . . . ((N−1)*8−1, (N−1)*8)
    • (N=number of horizontal 8×8 blocks in the plane (N*8=horizontal frame size))
    • (M=number of vertical 8×8 blocks in the plane (M*8=vertical frame size))
      The order in which the pixels are filtered is important. For progressive frames in this combined implementation, all horizontal boundary lines in the frame are filtered first, followed by the vertical boundary lines.
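The filtered lines listed above can also be enumerated programmatically. The sketch below generates the horizontal boundary line pairs for one component plane; the vertical column pairs follow the same pattern using the horizontal size. The callback style is an illustrative choice.

```c
/* Enumerate the horizontal boundary line pairs filtered in a progressive
 * I-frame component plane: (7, 8), (15, 16), ..., ((M-1)*8-1, (M-1)*8),
 * where M*8 is the vertical size of the plane.  Pairs on the frame border
 * itself are never emitted. */
void filtered_horizontal_line_pairs(int plane_height,
                                    void (*visit)(int upper_line, int lower_line))
{
    int m = plane_height / 8;            /* number of 8x8 blocks vertically */
    for (int b = 1; b < m; b++)
        visit(b * 8 - 1, b * 8);
}
```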
2. Progressive B-Frame In-Loop Deblocking Filtering
In the combined implementation, progressive B-frame in-loop deblocking is the same as progressive I-frame deblocking. As in progressive I-frame in-loop deblocking, 8×8 block boundaries are filtered, and motion vectors and 4×8/8×4 transforms are not considered.
3. Progressive P-Frame In-Loop Deblocking Filtering
For progressive P-frames, blocks can be intra or inter-coded. In the combined implementation, an encoder/decoder uses an 8×8 transform to transform the samples in intra-coded blocks. When at least one of the neighboring blocks is intra-coded, the 8×8 block boundaries are always adaptively filtered. An encoder/decoder uses an 8×8, 8×4, 4×8 or 4×4 transform for inter-coded blocks and uses a corresponding inverse transform to construct the samples that represent the residual error. Depending on the status of the neighboring blocks, the boundary between the current and neighboring blocks may or may not be filtered. The decision of whether to adaptively filter a block or subblock border is as follows:
    • 1) The boundaries between coded (at least one non-zero coefficient) subblocks (8×4, 4×8 or 4×4) within an 8×8 block are always adaptively filtered.
    • 2) The boundary between a block or subblock and a neighboring block or subblock shall not be filtered only if both blocks are inter-coded, have the same motion vector, and have no residual error (no transform coefficients). Otherwise the boundary shall be adaptively filtered.
FIG. 9 shows examples of when filtering between neighboring blocks does and does not occur in progressive P-frames. In this example, it is assumed that the motion vectors for both blocks are the same (if the motion vectors are different, then the boundary is always adaptively filtered). The shaded blocks or subblocks represent the cases where at least one nonzero coefficient is present. Clear blocks or subblocks represent cases where no transform coefficients are present. Thick lines represent the boundaries that are adaptively filtered. Thin lines represent the boundaries that are not filtered. These examples illustrate only horizontal neighbors, but the same applies for vertical neighbors.
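The skip condition in rule 2 above reduces to a small predicate; the struct and field names below are assumptions used only to express the conditions.

```c
#include <stdbool.h>

typedef struct {
    bool intra;         /* intra-coded block or subblock */
    bool coded;         /* has at least one non-zero transform coefficient */
    int  mv_x, mv_y;    /* motion vector (meaningful when !intra) */
} BlockInfo;

/* Returns true when the boundary between two neighboring blocks/subblocks is
 * adaptively filtered in a progressive P-frame.  Filtering is skipped only
 * when both are inter-coded, share the same motion vector, and carry no
 * residual; every other case is adaptively filtered. */
bool boundary_is_filtered(const BlockInfo *a, const BlockInfo *b)
{
    if (!a->intra && !b->intra &&
        a->mv_x == b->mv_x && a->mv_y == b->mv_y &&
        !a->coded && !b->coded)
        return false;
    return true;
}
```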
FIGS. 10 and 11 show an example of pixels that may be filtered in a progressive P-frame. The crosses represent pixel locations and the circled crosses represent the boundary pixels that are filtered if the conditions specified above are met.
FIG. 10 shows pixels adaptively filtered along horizontal boundaries. As the figure shows, the pixels on either side of the block or subblock boundary are candidates to be filtered. For the horizontal boundaries this could be every 4th and 5th, 8th and 9th, 12th and 13th, etc., pixel row in the frame as these are the 8×8 and 8×4 horizontal boundaries. FIG. 11 shows pixels adaptively filtered along vertical boundaries. For the vertical boundaries, every 4th and 5th, 8th and 9th, 12th and 13th, etc., pixel column in the frame may be adaptively filtered as these are the 8×8 and 4×8 vertical boundaries.
In this combined implementation, the first and last row and the first and last column in the frame are not filtered. The order in which pixels are filtered is important. First, in this combined implementation, all the 8×8 block horizontal boundary lines in the frame are adaptively filtered starting from the top line. Next, all 8×4 block horizontal boundary lines in the frame are adaptively filtered starting from the top line. Next, all 8×8 block vertical boundary lines are adaptively filtered starting from the leftmost line. Lastly, all 4×8 block vertical boundary lines are adaptively filtered starting with the leftmost line. In all cases in this combined implementation, the rules specified above are used to determine whether the boundary pixels are adaptively filtered for each block or subblock.
4. Filter Operation
This section describes an adaptive filtering operation that is performed on the boundary pixels in progressive I-, B- and P-frames in the combined implementation.
For progressive P-frames the decision criteria described above determine which vertical and horizontal boundaries are adaptively filtered. For progressive I-frames, all the 8×8 vertical and horizontal boundaries are adaptively filtered. Since the minimum number of consecutive pixels that are filtered in a row or column is four and the total number of pixels in a row or column is always a multiple of four, the adaptive filtering operation is performed on segments of four pixels.
For example, if the eight pixel pairs that make up the vertical boundary between two blocks are adaptively filtered, then the eight pixel pairs are divided into two 4-pixel segments as shown in FIG. 12. In each 4-pixel segment, the third pixel pair is adaptively filtered first as indicated by the Xs in FIG. 12. The result of this filter operation determines whether the other three pixel pairs in the segment are also adaptively filtered, as described below.
FIG. 13 shows the pixels that are used in the filtering operation performed on the 3rd pixel pair. In FIG. 13, pixels P4 and P5 are the pixel pair that may be changed in the filter operation.
The pseudo-code 1400 of FIG. 14 shows the filtering operation performed on the 3rd pixel pair in each segment. The value filter_other3_pixels indicates whether the remaining three pixel pairs in the segment are also adaptively filtered. If filter_other3_pixels=TRUE, then the other three pixel pairs are adaptively filtered. If filter_other3_pixels=FALSE, then they are not adaptively filtered, and the filtering operation proceeds to the next 4-pixel segment. The pseudo-code 1500 of FIG. 15 shows the filtering operation that is performed on the 1st, 2nd and 4th pixel pair if filter_other3_pixels=TRUE.
The filtering operations described above are similarly used for adaptively filtering horizontal boundary pixels.
D. In-Loop Deblocking Filtering for Interlaced Frames
This section describes the process for in-loop deblocking filtering of interlaced frames in the combined implementation, with reference to concepts discussed in the previous section.
If the entry point layer syntax element LOOPFILTER=1, a filtering operation is performed on each reconstructed frame. (In an alternative combined implementation, LOOPFILTER is a sequence layer syntax element.) This filtering operation is performed prior to using the reconstructed frame as a reference for motion predictive coding.
Since the intent of loop filtering is to smooth out the discontinuities at block boundaries, the adaptive filtering process operates on the pixels that border neighboring blocks. For interlaced P-frames, the block boundaries may occur at every 4th, 8th, 12th, etc., pixel row or column, depending on whether an 8×8, 8×4, 4×8 or 4×4 inverse transform is used. For interlaced I-frames, adaptive filtering occurs at every 8th, 16th, 24th, etc., pixel row and column.
In interlace frame coded pictures, each macroblock may be frame transform coded or field transform coded according to its FIELDTX flag. The state of the FIELDTX flag along with the size of the transform used (4×4, 4×8, 8×4, 8×8) has an effect on where the in-loop deblocking takes place in the macroblock.
Field-Based Filtering
The adaptive filtering process is the same as described above with regard to progressive frames, with one important difference: the filtering is always done using the same field lines, never mixing different fields. FIGS. 26A-26B illustrate field-based filtering for horizontal and vertical block boundaries.
For a horizontal block boundary, the two top field lines are filtered across the block boundary using top field lines only and the two bottom field lines across the block boundary are filtered using bottom field lines only. For a vertical block boundary, the top field block boundary and the bottom field block boundary are filtered separately.
Filtering Order
For both inter (P, B) and intra (I) frame coded pictures, the in-loop deblocking process starts by processing all the horizontal edges first, followed by all the vertical edges. The pseudo-code 2400 in FIG. 24 describes this filtering process in the combined implementation one macroblock at a time for the sake of simplicity, but alternative valid implementations of the filtering process may not follow this macroblock processing order.
Interlaced I-Frames
In interlaced I-frames, each macroblock is 8×8 transform coded. For each macroblock, the horizontal block boundary filtering starts by filtering the intra-macroblock horizontal boundary only if the current macroblock is frame transform coded. Next, the horizontal block boundary between the current macroblock and the macroblock directly below it (if available) is filtered. The pseudo-code 3000 in FIG. 30A describes the process of horizontal filtering for a macroblock in an interlaced I-frame.
For each macroblock, the vertical block boundary filtering starts by filtering the intra-macroblock vertical boundary, followed by filtering of the inter-macroblock boundary between the current macroblock and the macroblock to its immediate right (if available). The pseudo-code 3010 in FIG. 30B describes the process of vertical filtering for a macroblock in an interlaced I-frame.
Interlaced P-Frames and Interlaced B-Frames
In interlaced P-frames and B-frames, each inter-coded macroblock may be 4×4, 4×8, 8×4, or 8×8 transform coded. For each macroblock, the horizontal block boundary filtering occurs in the order of block Y0, Y1, Y2, Y3, Cb, and then Cr. In this combined implementation, the luma blocks are processed differently according to field/frame coding status (FIELDTX), which is explicitly signaled in intra-coded macroblocks and inferred from MBMODE in inter-coded macroblocks. The pseudo-code 3100 in FIG. 31A and pseudo-code 3110 in FIG. 31B describe the processes of horizontal filtering for luma and chroma blocks, respectively, for a macroblock in an interlaced P-frame or B-frame.
Similarly, for each macroblock, the vertical block boundary filtering occurs in the order of block Y0, Y1, Y2, Y3, Cb, and then Cr. In this combined implementation, as with horizontal filtering, the luma blocks are processed differently according to field/frame coding status. The pseudo-code 3200 in FIG. 32A and pseudo-code 3210 in FIG. 32B describe the processes of vertical filtering for luma and chroma blocks, respectively, for a macroblock in an interlaced P-frame or B-frame.
Having described and illustrated the principles of our invention with reference to various embodiments, it will be recognized that the various embodiments can be modified in arrangement and detail without departing from such principles. It should be understood that the programs, processes, or methods described herein are not related or limited to any particular type of computing environment, unless indicated otherwise. Various types of general purpose or specialized computing environments may be used with or perform operations in accordance with the teachings described herein. Elements of embodiments shown in software may be implemented in hardware and vice versa.
In view of the many possible embodiments to which the principles of our invention may be applied, we claim as our invention all such embodiments as may come within the scope and spirit of the following claims and equivalents thereto.

Claims (35)

We claim:
1. In a computing device that implements a video decoder, the computing device including a processor and memory, a method comprising:
receiving, at the computing device that implements the video decoder, encoded data for video in a bit stream, wherein bit stream syntax for the bit stream includes frame level, macroblock level and block level;
with the computing device that implements the video decoder, decoding an interlaced frame coded picture of the video using the received encoded data, including:
using one or more transform level syntax elements to select the frame level, the macroblock level, or the block level of bit stream syntax as including frequency transform block/sub-block size information, wherein the video decoder is configurable, depending on the one or more transform level syntax elements, to set frequency transform block/sub-block size for the interlaced frame coded picture, to switch the frequency transform block/sub-block size between macroblocks in the interlaced frame coded picture, and to switch the frequency transform block/sub-block size between blocks in the interlaced frame coded picture;
obtaining field/frame type information for a current macroblock in the interlaced frame coded picture;
obtaining the frequency transform block/sub-block size information for plural blocks in the current macroblock, the frequency transform block/sub-block size information indicating the frequency transform block/sub-block size from among plural possible frequency transform block/sub-block sizes;
selecting one or more block boundaries for in-loop deblocking, wherein the selecting is based at least in part on the frequency transform block/sub-block size information and the field/frame type information; and
performing in-loop deblocking on the selected block boundaries.
2. The method of claim 1 wherein the field/frame transform type information indicates whether the current macroblock is coded according to a field structure or a frame structure.
3. The method of claim 1 wherein the in-loop deblocking is field-based.
4. The method of claim 1 wherein the decoding further comprises obtaining picture type information for the interlaced frame coded picture, wherein the selecting is further based on the picture type information.
5. The method of claim 1 wherein the interlaced frame coded picture is an interlaced P-frame.
6. The method of claim 1 wherein the interlaced frame coded picture is an interlaced B-frame.
7. The method of claim 1 wherein the current macroblock is a 4:2:0 macroblock.
8. The method of claim 1 wherein at least one of the one or more block boundaries is a horizontal block boundary.
9. The method of claim 1 wherein at least one of the one or more block boundaries is a vertical block boundary.
10. The method of claim 1 wherein the performing in-loop deblocking comprises performing in-loop deblocking on horizontal block boundaries prior to performing in-loop deblocking on vertical block boundaries.
11. The method of claim 1 wherein the frequency transform block/sub-block size information indicates the frequency transform block/sub-block size from a group consisting of: 8×8, 8×4, 4×8, and 4×4.
12. A computer system including a processor, memory, speaker, voice input device, display, and storage medium, wherein the computer system is adapted to perform a method comprising:
receiving encoded data for video in a bit stream, wherein bit stream syntax for the bit stream includes frame level, macroblock level and block level; and
decoding an interlaced frame coded picture of the video using the received encoded data, including:
using one or more transform level syntax elements to select the frame level, the macroblock level, or the block level of bit stream syntax as including frequency transform block/sub-block size information, wherein the video decoder is configurable, depending on the one or more transform level syntax elements, to set frequency transform block/sub-block size for the interlaced frame coded picture, to switch the frequency transform block/sub-block size between macroblocks in the interlaced frame coded picture, and to switch the frequency transform block/sub-block size between blocks in the interlaced frame coded picture;
obtaining field/frame type information for a current macroblock in the interlaced frame coded picture;
obtaining the frequency transform block/sub-block size information for plural blocks in the current macroblock, the frequency transform block/sub-block size information indicating the frequency transform block/sub-block size from among plural possible frequency transform block/sub-block sizes;
selecting one or more block boundaries for in-loop deblocking, wherein the selecting is based at least in part on the frequency transform block/sub-block size information and the field/frame type information; and
performing in-loop deblocking on the selected block boundaries.
13. In a computing device that implements a video encoder, the computing device including a processor and memory, a method comprising:
with the computing device that implements the video encoder, encoding data for video that includes an interlaced frame coded picture, wherein the encoding includes:
obtaining field/frame type information for a current macroblock in the interlaced frame coded picture;
obtaining frequency transform block/sub-block size information for plural blocks in the current macroblock, the frequency transform block/sub-block size information indicating which frequency transform block/sub-block size of plural possible frequency transform block/sub-block sizes applies to each of the plural blocks;
selecting a block boundary between a first block in the current macroblock and a second block for in-loop deblocking, the selecting based at least in part on the frequency transform block/sub-block size information and the field/frame type information;
obtaining pixel data from one or more field lines associated with the first block;
obtaining pixel data from one or more field lines associated with the second block; and
performing in-loop deblocking across the selected block boundary using the obtained pixel data;
wherein the in-loop deblocking comprises filtering operations performed on pixel data from field lines of same polarity only; and
with the computing device that implements the video encoder, outputting the encoded data as part of a bit stream, wherein bit stream syntax for the bit stream includes frame level, macroblock level and block level, wherein one or more transform level syntax elements indicate whether the frequency transform block/sub-block size information is signaled in the bit stream as part of the frame level, the macroblock level, or the block level of bit stream syntax, and wherein the video encoder is configurable, as indicated by the one or more transform level syntax elements, to set the frequency transform block/sub-block size for the interlaced frame coded picture, to switch the frequency transform block/sub-block size between macroblocks in the interlaced frame coded picture, and to switch the frequency transform block/sub-block size between blocks in the interlaced frame coded picture.
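[Editorial note, not part of the claims] A minimal sketch of the same-polarity constraint in claims 13 and 14 (filtering operations performed on pixel data from field lines of the same polarity only), in Python. The helper smooth_near_edge and the adjustment it applies are assumptions made for illustration, not the patent's filter definition.

def deblock_horizontal_edge_field_based(above, below):
    # above/below: the 4 frame lines just above/below the selected horizontal
    # boundary (one sample per line, for a single pixel column).
    col = list(above) + list(below)
    for parity in (0, 1):                 # 0: one field's lines, 1: the other's
        field = col[parity::2]            # same-polarity field lines only
        col[parity::2] = smooth_near_edge(field)
    return col

def smooth_near_edge(line):
    # Placeholder adjustment of the two same-field samples nearest the edge;
    # illustrates where a field-based filter operates, not how it is computed.
    mid = len(line) // 2
    d = (line[mid] - line[mid - 1]) // 2
    line[mid - 1] += d // 2
    line[mid] -= d // 2
    return line

Filtering each pixel column this way keeps top-field and bottom-field samples from being mixed across the boundary, which matters for interlaced content because the two fields may have been captured at different times.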
14. A computer system including a processor, memory, speaker, voice input device, display, and storage medium, wherein the computer system is adapted to perform a method comprising:
encoding data for video that includes an interlaced frame coded picture, wherein the encoding includes:
obtaining field/frame type information for a current macroblock in the interlaced frame coded picture;
obtaining frequency transform block/sub-block size information for plural blocks in the current macroblock, the frequency transform block/sub-block size information indicating which frequency transform block/sub-block size of plural possible frequency transform block/sub-block sizes applies to each of the plural blocks;
selecting a block boundary between a first block in the current macroblock and a second block for in-loop deblocking, the selecting based at least in part on the frequency transform block/sub-block size information and the field/frame type information;
obtaining pixel data from one or more field lines associated with the first block;
obtaining pixel data from one or more field lines associated with the second block; and
performing in-loop deblocking across the selected block boundary using the obtained pixel data;
wherein the in-loop deblocking comprises filtering operations performed on pixel data from field lines of same polarity only; and
outputting the encoded data as part of a bit stream, wherein bit stream syntax for the bit stream includes frame level, macroblock level and block level, wherein one or more transform level syntax elements indicate whether the frequency transform block/sub-block size information is signaled in the bit stream as part of the frame level, the macroblock level, or the block level of bit stream syntax, and wherein the video encoder is configurable, as indicated by the one or more transform level syntax elements, to set the frequency transform block/sub-block size for the interlaced frame coded picture, to switch the frequency transform block/sub-block size between macroblocks in the interlaced frame coded picture, and to switch the frequency transform block/sub-block size between blocks in the interlaced frame coded picture.
15. The method of claim 1 wherein the frequency transform block/sub-block size information is signaled in the bit stream separately from the field/frame type information, and wherein the field/frame type information is signaled at any of frame level or macroblock level in the bit stream.
16. The method of claim 1 wherein the frequency transform block/sub-block size information is signaled in the bit stream using one or more transform size syntax elements, and wherein the field/frame type information is signaled in the bit stream using one or more field/frame type syntax elements different than the one or more transform size syntax elements.
17. The method of claim 1 wherein the plural blocks include plural luma blocks and plural chroma blocks, wherein the one or more block boundaries selected based at least in part on the frequency transform block/sub-block size information and the field/frame type information are selected for in-loop deblocking of the plural luma blocks, and wherein the decoding further includes:
selecting one or more block boundaries for in-loop deblocking of the plural chroma blocks, wherein the selecting the one or more block boundaries for in-loop deblocking of the plural chroma blocks is based at least in part on the frequency transform block/sub-block size information but independent of the field/frame type information.
18. The method of claim 13 wherein, as part of the outputting, the frequency transform block/sub-block size information is signaled in the bit stream separately from the field/frame type information, and wherein the field/frame type information is signaled at any of frame level or macroblock level in the bit stream.
19. The method of claim 13 wherein the plural blocks include plural luma blocks and plural chroma blocks, wherein the one or more block boundaries selected based at least in part on the frequency transform block/sub-block size information and the field/frame type information are selected for in-loop deblocking of the plural luma blocks, and wherein the encoding further includes:
selecting one or more block boundaries for in-loop deblocking of the plural chroma blocks, wherein the selecting the one or more block boundaries for in-loop deblocking of the plural chroma blocks is based at least in part on the frequency transform block/sub-block size information but independent of the field/frame type information.
20. In a computing device that implements a video encoder, the computing device including a processor and memory, a method comprising:
with the computing device that implements the video encoder, encoding data for video that includes an interlaced frame coded picture, wherein the encoding includes:
obtaining field/frame type information for a current macroblock in the interlaced frame coded picture;
obtaining frequency transform block/sub-block size information for plural blocks in the current macroblock, the frequency transform block/sub-block size information indicating which frequency transform block/sub-block size of plural possible frequency transform block/sub-block sizes applies to each of the plural blocks, wherein the plural blocks include plural luma blocks and plural chroma blocks;
selecting one or more block boundaries for in-loop deblocking of the plural chroma blocks, wherein the selecting the one or more block boundaries for in-loop deblocking of the plural chroma blocks is based at least in part on the frequency transform block/sub-block size information but is independent of the field/frame type information;
selecting one or more block boundaries for in-loop deblocking of the plural luma blocks, wherein the selecting the one or more block boundaries for in-loop deblocking of the plural luma blocks is based at least in part on the frequency transform block/sub-block size information and based at least in part on the field/frame type information; and
performing in-loop deblocking on the selected block boundaries; and
with the computing device that implements the video encoder, outputting the encoded data as part of a bit stream, wherein the frequency transform block/sub-block size information is signaled in the bit stream using one or more transform size syntax elements, and wherein one or more transform level syntax elements indicate whether the frequency transform block/sub-block size information is signaled in the bit stream as part of frame level, macroblock level, or block level of bit stream syntax.
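[Editorial note, not part of the claims] As an illustration of the luma/chroma distinction in claim 20 (and claims 17 and 19), the following Python sketch selects chroma boundaries from the transform block/sub-block size alone, while luma boundary selection also consults the macroblock's field/frame type. The dictionary keys are hypothetical placeholders, not syntax element names from the patent.

def select_macroblock_boundaries(mb):
    # mb: dict with hypothetical keys 'field_coded', 'luma_transform_sizes'
    # and 'chroma_transform_sizes' (one (width, height) size per block).
    luma, chroma = [], []
    for w, h in mb['luma_transform_sizes']:
        luma.append({
            'horizontal_edges': [4] if h == 4 else [],
            'vertical_edges':   [4] if w == 4 else [],
            'field_based':      mb['field_coded'],   # field/frame type used for luma
        })
    for w, h in mb['chroma_transform_sizes']:
        chroma.append({
            'horizontal_edges': [4] if h == 4 else [],
            'vertical_edges':   [4] if w == 4 else [],
            # note: no reference to mb['field_coded'] for chroma boundaries
        })
    return luma, chroma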
21. The method of claim 20 wherein the field/frame type information indicates whether the current macroblock is coded according to a field structure or a frame structure.
22. The method of claim 20 wherein the in-loop deblocking is field-based.
23. The method of claim 20 wherein the interlaced frame coded picture is an interlaced P-frame.
24. The method of claim 20 wherein the interlaced frame coded picture is an interlaced B-frame.
25. The method of claim 20 wherein the current macroblock is a 4:2:0 macroblock.
26. The method of claim 20 wherein at least one of the one or more block boundaries is a horizontal block boundary.
27. The method of claim 20 wherein at least one of the one or more block boundaries is a vertical block boundary.
28. The method of claim 20 wherein the performing in-loop deblocking comprises performing in-loop deblocking on horizontal block boundaries prior to performing in-loop deblocking on vertical block boundaries.
29. The method of claim 20 wherein the frequency transform block/sub-block size information indicates the frequency transform block/sub-block size from a group consisting of: 8×8, 8×4, 4×8, and 4×4.
30. A computer system including a processor, memory, speaker, voice input device, display, and storage medium, wherein the computer system is adapted to perform a method comprising:
encoding data for video that includes an interlaced frame coded picture, wherein the encoding includes:
obtaining field/frame type information for a current macroblock in the interlaced frame coded picture;
obtaining frequency transform block/sub-block size information for plural blocks in the current macroblock, the frequency transform block/sub-block size information indicating which frequency transform block/sub-block size of plural possible frequency transform block/sub-block sizes applies to each of the plural blocks, wherein the plural blocks include plural luma blocks and plural chroma blocks;
selecting one or more block boundaries for in-loop deblocking of the plural chroma blocks, wherein the selecting the one or more block boundaries for in-loop deblocking of the plural chroma blocks is based at least in part on the frequency transform block/sub-block size information but is independent of the field/frame type information;
selecting one or more block boundaries for in-loop deblocking of the plural luma blocks, wherein the selecting the one or more block boundaries for in-loop deblocking of the plural luma blocks is based at least in part on the frequency transform block/sub-block size information and based at least in part on the field/frame type information; and
performing in-loop deblocking on the selected block boundaries; and
outputting the encoded data as part of a bit stream, wherein the frequency transform block/sub-block size information is signaled in the bit stream using one or more transform size syntax elements, and wherein one or more transform level syntax elements indicate whether the frequency transform block/sub-block size information is signaled in the bit stream as part of frame level, macroblock level, or block level of bit stream syntax.
31. The method of claim 20 wherein the frequency transform block/sub-block size information is signaled at frame level in the bit stream.
32. The method of claim 20 wherein the frequency transform block/sub-block size information is signaled at macroblock level in the bit stream.
33. The method of claim 20 wherein the frequency transform block/sub-block size information is signaled at block level in the bit stream.
34. The method of claim 20 wherein the field/frame type information is signaled at macroblock level in the bit stream.
35. The method of claim 20 wherein the field/frame type information is signaled at frame level in the bit stream.
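[Editorial note, not part of the claims] A hypothetical sketch of the level-selection behavior recited in claims 1, 20 and 31-35: the transform block/sub-block size may be signaled at frame, macroblock, or block level, and the field/frame type information at frame or macroblock level. The dictionary keys and function names below are assumptions for illustration, not the bit stream syntax element names used by the patent.

def resolve_transform_size(frame_hdr, mb, blk):
    # Frame-level signaling fixes one size for the picture; macroblock-level
    # signaling lets the size switch between macroblocks; block-level signaling
    # lets it switch between blocks (claims 1, 20 and 31-33).
    if frame_hdr.get('transform_size') is not None:
        return frame_hdr['transform_size']
    if mb.get('transform_size') is not None:
        return mb['transform_size']
    return blk['transform_size']

def resolve_field_frame_type(frame_hdr, mb):
    # Field/frame type information may be signaled at frame level or at
    # macroblock level (claims 15, 34 and 35).
    if frame_hdr.get('field_coded') is not None:
        return frame_hdr['field_coded']
    return mb['field_coded']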
US10/934,116 2003-09-07 2004-09-04 In-loop deblocking for interlaced video Active 2027-12-03 US8687709B2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US10/934,116 US8687709B2 (en) 2003-09-07 2004-09-04 In-loop deblocking for interlaced video
US10/989,596 US7852919B2 (en) 2003-09-07 2004-11-15 Field start code for entry point frames with predicted first field
US10/989,843 US7609762B2 (en) 2003-09-07 2004-11-15 Signaling for entry point frames with predicted first field
US10/989,827 US8213779B2 (en) 2003-09-07 2004-11-15 Trick mode elementary stream and receiver system
US10/989,845 US7924921B2 (en) 2003-09-07 2004-11-15 Signaling coding and display options in entry point headers

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US50108103P 2003-09-07 2003-09-07
US10/934,116 US8687709B2 (en) 2003-09-07 2004-09-04 In-loop deblocking for interlaced video

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/882,739 Continuation-In-Part US7839930B2 (en) 2003-09-07 2004-06-30 Signaling valid entry points in a video stream

Related Child Applications (5)

Application Number Title Priority Date Filing Date
US10/882,739 Continuation-In-Part US7839930B2 (en) 2003-09-07 2004-06-30 Signaling valid entry points in a video stream
US10/989,845 Continuation-In-Part US7924921B2 (en) 2003-09-07 2004-11-15 Signaling coding and display options in entry point headers
US10/989,827 Continuation-In-Part US8213779B2 (en) 2003-09-07 2004-11-15 Trick mode elementary stream and receiver system
US10/989,843 Continuation-In-Part US7609762B2 (en) 2003-09-07 2004-11-15 Signaling for entry point frames with predicted first field
US10/989,596 Continuation-In-Part US7852919B2 (en) 2003-09-07 2004-11-15 Field start code for entry point frames with predicted first field

Publications (2)

Publication Number Publication Date
US20050084012A1 US20050084012A1 (en) 2005-04-21
US8687709B2 true US8687709B2 (en) 2014-04-01

Family

ID=37064688

Family Applications (9)

Application Number Title Priority Date Filing Date
US10/826,971 Active 2027-10-24 US7724827B2 (en) 2003-09-07 2004-04-15 Multi-layer run level encoding and decoding
US10/931,695 Active 2026-07-19 US7412102B2 (en) 2003-09-07 2004-08-31 Interlace frame lapped transform
US10/933,882 Active 2027-12-29 US7924920B2 (en) 2003-09-07 2004-09-02 Motion vector coding and decoding in interlaced frame coded pictures
US10/933,910 Active 2026-05-10 US7469011B2 (en) 2003-09-07 2004-09-02 Escape mode code resizing for fields and slices
US10/934,929 Active 2028-06-19 US7606311B2 (en) 2003-09-07 2004-09-02 Macroblock information signaling for interlaced frames
US10/933,883 Expired - Lifetime US7099515B2 (en) 2003-09-07 2004-09-02 Bitplane coding and decoding for AC prediction status information
US10/933,908 Active 2026-07-13 US7352905B2 (en) 2003-09-07 2004-09-02 Chroma motion vector derivation
US10/934,116 Active 2027-12-03 US8687709B2 (en) 2003-09-07 2004-09-04 In-loop deblocking for interlaced video
US10/934,117 Active 2026-10-19 US8116380B2 (en) 2003-09-07 2004-09-04 Signaling for field ordering and field/frame display repetition

Family Applications Before (7)

Application Number Title Priority Date Filing Date
US10/826,971 Active 2027-10-24 US7724827B2 (en) 2003-09-07 2004-04-15 Multi-layer run level encoding and decoding
US10/931,695 Active 2026-07-19 US7412102B2 (en) 2003-09-07 2004-08-31 Interlace frame lapped transform
US10/933,882 Active 2027-12-29 US7924920B2 (en) 2003-09-07 2004-09-02 Motion vector coding and decoding in interlaced frame coded pictures
US10/933,910 Active 2026-05-10 US7469011B2 (en) 2003-09-07 2004-09-02 Escape mode code resizing for fields and slices
US10/934,929 Active 2028-06-19 US7606311B2 (en) 2003-09-07 2004-09-02 Macroblock information signaling for interlaced frames
US10/933,883 Expired - Lifetime US7099515B2 (en) 2003-09-07 2004-09-02 Bitplane coding and decoding for AC prediction status information
US10/933,908 Active 2026-07-13 US7352905B2 (en) 2003-09-07 2004-09-02 Chroma motion vector derivation

Family Applications After (1)

Application Number Title Priority Date Filing Date
US10/934,117 Active 2026-10-19 US8116380B2 (en) 2003-09-07 2004-09-04 Signaling for field ordering and field/frame display repetition

Country Status (3)

Country Link
US (9) US7724827B2 (en)
EP (2) EP1658726B1 (en)
CN (5) CN100534164C (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130202052A1 (en) * 2012-02-06 2013-08-08 Nokia Corporation Method for coding and an apparatus
US9118933B1 (en) 2010-09-30 2015-08-25 Samsung Electronics Co., Ltd. Method and device for interpolating images by using a smoothing interpolation filter
US9225987B2 (en) 2010-01-14 2015-12-29 Samsung Electronics Co., Ltd. Method and apparatus for encoding video and method and apparatus for decoding video by considering skip and split order
US20160112706A1 (en) * 2011-01-12 2016-04-21 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method, and image decoding method for generating a prediction image
US9560385B2 (en) 2011-10-17 2017-01-31 Kt Corporation Method and apparatus for encoding/decoding image
US20190075317A1 (en) * 2001-12-17 2019-03-07 Microsoft Technology Licensing, Llc Video coding / decoding with sub-block transform sizes and adaptive deblock filtering
US10397607B2 (en) 2013-11-01 2019-08-27 Qualcomm Incorporated Color residual prediction for video coding
WO2020150347A1 (en) * 2019-01-15 2020-07-23 Tencent America LLC Chroma deblock filters for intra picture block compensation
US10958917B2 (en) 2003-07-18 2021-03-23 Microsoft Technology Licensing, Llc Decoding jointly coded transform type and subblock pattern information

Families Citing this family (375)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6563953B2 (en) 1998-11-30 2003-05-13 Microsoft Corporation Predictive image compression using a single variable length code for both the luminance and chrominance blocks for each macroblock
US7295509B2 (en) 2000-09-13 2007-11-13 Qualcomm, Incorporated Signaling method in an OFDM multiple access system
US9130810B2 (en) 2000-09-13 2015-09-08 Qualcomm Incorporated OFDM communications methods and apparatus
MXPA04000912A (en) * 2001-11-22 2004-04-02 Matsushita Electric Ind Co Ltd Variable length coding method and variable length decoding method.
WO2003053066A1 (en) 2001-12-17 2003-06-26 Microsoft Corporation Skip macroblock coding
US7016547B1 (en) * 2002-06-28 2006-03-21 Microsoft Corporation Adaptive entropy encoding/decoding for screen capture content
US7433824B2 (en) * 2002-09-04 2008-10-07 Microsoft Corporation Entropy coding by adapting coding between level and run-length/level modes
ES2378462T3 (en) 2002-09-04 2012-04-12 Microsoft Corporation Entropic coding by coding adaptation between modalities of level and length / cadence level
WO2004112400A1 (en) * 2003-06-16 2004-12-23 Matsushita Electric Industrial Co., Ltd. Coding apparatus, coding method, and codebook
US7580584B2 (en) * 2003-07-18 2009-08-25 Microsoft Corporation Adaptive multiple quantization
US7602851B2 (en) * 2003-07-18 2009-10-13 Microsoft Corporation Intelligent differential quantization of video coding
US7738554B2 (en) 2003-07-18 2010-06-15 Microsoft Corporation DC coefficient signaling at small quantization step sizes
US7609763B2 (en) * 2003-07-18 2009-10-27 Microsoft Corporation Advanced bi-directional predictive coding of video frames
US7426308B2 (en) * 2003-07-18 2008-09-16 Microsoft Corporation Intraframe and interframe interlace coding and decoding
US8218624B2 (en) 2003-07-18 2012-07-10 Microsoft Corporation Fractional quantization step sizes for high bit rates
US7092576B2 (en) * 2003-09-07 2006-08-15 Microsoft Corporation Bitplane coding for macroblock field/frame coding type information
US7961786B2 (en) 2003-09-07 2011-06-14 Microsoft Corporation Signaling field type information
US8107531B2 (en) * 2003-09-07 2012-01-31 Microsoft Corporation Signaling and repeat padding for skip frames
US8064520B2 (en) * 2003-09-07 2011-11-22 Microsoft Corporation Advanced bi-directional predictive coding of interlaced video
US20050058203A1 (en) * 2003-09-17 2005-03-17 Fernandes Felix C. Transcoders and methods
US20070206682A1 (en) * 2003-09-29 2007-09-06 Eric Hamilton Method And Apparatus For Coding Information
US8077778B2 (en) * 2003-10-31 2011-12-13 Broadcom Corporation Video display and decode utilizing off-chip processor and DRAM
JP4118232B2 (en) * 2003-12-19 2008-07-16 三菱電機株式会社 Video data processing method and video data processing apparatus
US8427494B2 (en) * 2004-01-30 2013-04-23 Nvidia Corporation Variable-length coding data transfer interface
US7801383B2 (en) 2004-05-15 2010-09-21 Microsoft Corporation Embedded scalar quantizers with arbitrary dead-zone ratios
US9208824B2 (en) 2004-05-18 2015-12-08 Broadcom Corporation Index table generation in PVR applications for AVC video streams
US20060029135A1 (en) * 2004-06-22 2006-02-09 Minhua Zhou In-loop deblocking filter
ATE532270T1 (en) 2004-07-14 2011-11-15 Slipstream Data Inc METHOD, SYSTEM AND COMPUTER PROGRAM FOR OPTIMIZING DATA COMPRESSION
US7570827B2 (en) 2004-07-14 2009-08-04 Slipstream Data Inc. Method, system and computer program product for optimization of data compression with cost function
US9137822B2 (en) 2004-07-21 2015-09-15 Qualcomm Incorporated Efficient signaling over access channel
US9148256B2 (en) 2004-07-21 2015-09-29 Qualcomm Incorporated Performance based rank prediction for MIMO design
JP3919115B2 (en) * 2004-08-18 2007-05-23 ソニー株式会社 DECODING DEVICE, DECODING METHOD, DECODING PROGRAM, RECORDING MEDIUM CONTAINING DECODING PROGRAM, AND REVERSE REPRODUCTION DEVICE, REVERSE REPRODUCTION METHOD, REVERSE REPRODUCTION PROGRAM, AND RECORDING MEDIUM CONTAINING REVERSE REPRODUCTION PROGRAM
US20070195887A1 (en) * 2004-09-29 2007-08-23 Comer Mary L Method and apparatus for reduced resolution update video coding and decoding
JP4533081B2 (en) * 2004-10-12 2010-08-25 キヤノン株式会社 Image encoding apparatus and method
US7574060B2 (en) * 2004-11-22 2009-08-11 Broadcom Corporation Deblocker for postprocess deblocking
JP4755093B2 (en) * 2005-02-01 2011-08-24 パナソニック株式会社 Image encoding method and image encoding apparatus
US9246560B2 (en) 2005-03-10 2016-01-26 Qualcomm Incorporated Systems and methods for beamforming and rate control in a multi-input multi-output communication systems
US9154211B2 (en) 2005-03-11 2015-10-06 Qualcomm Incorporated Systems and methods for beamforming feedback in multi antenna communication systems
US8446892B2 (en) 2005-03-16 2013-05-21 Qualcomm Incorporated Channel structures for a quasi-orthogonal multiple-access communication system
US9461859B2 (en) 2005-03-17 2016-10-04 Qualcomm Incorporated Pilot signal transmission for an orthogonal frequency division wireless communication system
US9520972B2 (en) 2005-03-17 2016-12-13 Qualcomm Incorporated Pilot signal transmission for an orthogonal frequency division wireless communication system
US9143305B2 (en) 2005-03-17 2015-09-22 Qualcomm Incorporated Pilot signal transmission for an orthogonal frequency division wireless communication system
US9184870B2 (en) 2005-04-01 2015-11-10 Qualcomm Incorporated Systems and methods for control channel signaling
US8149926B2 (en) * 2005-04-11 2012-04-03 Intel Corporation Generating edge masks for a deblocking filter
US9036538B2 (en) 2005-04-19 2015-05-19 Qualcomm Incorporated Frequency hopping design for single carrier FDMA systems
US9408220B2 (en) 2005-04-19 2016-08-02 Qualcomm Incorporated Channel quality reporting for adaptive sectorization
US20060248163A1 (en) * 2005-04-28 2006-11-02 Macinnis Alexander Systems, methods, and apparatus for video frame repeat indication & processing
US7768538B2 (en) * 2005-05-09 2010-08-03 Hewlett-Packard Development Company, L.P. Hybrid data planes
WO2006126148A1 (en) * 2005-05-25 2006-11-30 Nxp B.V. Multiple instance video decoder for macroblocks coded in a progressive and an interlaced way
US8422546B2 (en) * 2005-05-25 2013-04-16 Microsoft Corporation Adaptive video encoding using a perceptual model
US8565194B2 (en) 2005-10-27 2013-10-22 Qualcomm Incorporated Puncturing signaling channel for a wireless communication system
US8879511B2 (en) 2005-10-27 2014-11-04 Qualcomm Incorporated Assignment acknowledgement for a wireless communication system
US8611284B2 (en) 2005-05-31 2013-12-17 Qualcomm Incorporated Use of supplemental assignments to decrement resources
US8462859B2 (en) 2005-06-01 2013-06-11 Qualcomm Incorporated Sphere decoding apparatus
JP2008543209A (en) * 2005-06-03 2008-11-27 エヌエックスピー ビー ヴィ Video decoder with hybrid reference texture
US8599945B2 (en) 2005-06-16 2013-12-03 Qualcomm Incorporated Robust rank prediction for a MIMO system
US9179319B2 (en) 2005-06-16 2015-11-03 Qualcomm Incorporated Adaptive sectorization in cellular systems
KR100667806B1 (en) * 2005-07-07 2007-01-12 삼성전자주식회사 Method and apparatus for video encoding and decoding
US7599840B2 (en) * 2005-07-15 2009-10-06 Microsoft Corporation Selectively using multiple entropy models in adaptive coding and decoding
US7693709B2 (en) 2005-07-15 2010-04-06 Microsoft Corporation Reordering coefficients for waveform coding or decoding
US7684981B2 (en) * 2005-07-15 2010-03-23 Microsoft Corporation Prediction of spectral coefficients in waveform coding and decoding
WO2007010374A1 (en) * 2005-07-21 2007-01-25 Nokia Corporation Variable length codes for scalable video coding
US8625914B2 (en) 2013-02-04 2014-01-07 Sony Corporation Image processing system, image processing method and program
US8885628B2 (en) 2005-08-08 2014-11-11 Qualcomm Incorporated Code division multiplexing in a single-carrier frequency division multiple access system
US7565018B2 (en) * 2005-08-12 2009-07-21 Microsoft Corporation Adaptive coding and decoding of wide-range coefficients
US7933337B2 (en) 2005-08-12 2011-04-26 Microsoft Corporation Prediction of transform coefficients for image compression
US8599925B2 (en) * 2005-08-12 2013-12-03 Microsoft Corporation Efficient coding and decoding of transform blocks
US8036274B2 (en) * 2005-08-12 2011-10-11 Microsoft Corporation SIMD lapped transform-based digital media encoding/decoding
US9077960B2 (en) * 2005-08-12 2015-07-07 Microsoft Corporation Non-zero coefficient block pattern coding
US9209956B2 (en) 2005-08-22 2015-12-08 Qualcomm Incorporated Segment sensitive scheduling
US20070041457A1 (en) * 2005-08-22 2007-02-22 Tamer Kadous Method and apparatus for providing antenna diversity in a wireless communication system
US8644292B2 (en) * 2005-08-24 2014-02-04 Qualcomm Incorporated Varied transmission time intervals for wireless communication system
WO2007025160A2 (en) * 2005-08-24 2007-03-01 Qualcomm Incorporated Varied transmission time intervals for wireless communication system
US9136974B2 (en) 2005-08-30 2015-09-15 Qualcomm Incorporated Precoding and SDMA support
EP1989876A2 (en) * 2005-08-31 2008-11-12 Micronas USA, Inc. Systems and methods for video transformation and in loop filtering
KR100668346B1 (en) * 2005-10-04 2007-01-12 삼성전자주식회사 Filtering apparatus and method for a multi-codec
US8681867B2 (en) * 2005-10-18 2014-03-25 Qualcomm Incorporated Selective deblock filtering techniques for video coding based on motion compensation resulting in a coded block pattern value
US20070094035A1 (en) * 2005-10-21 2007-04-26 Nokia Corporation Audio coding
US7505069B2 (en) * 2005-10-26 2009-03-17 Hewlett-Packard Development Company, L.P. Method and apparatus for maintaining consistent white balance in successive digital images
US8693405B2 (en) 2005-10-27 2014-04-08 Qualcomm Incorporated SDMA resource management
US9225416B2 (en) 2005-10-27 2015-12-29 Qualcomm Incorporated Varied signaling channels for a reverse link in a wireless communication system
US9225488B2 (en) 2005-10-27 2015-12-29 Qualcomm Incorporated Shared signaling channel
US9144060B2 (en) 2005-10-27 2015-09-22 Qualcomm Incorporated Resource allocation for shared signaling channels
US8477684B2 (en) 2005-10-27 2013-07-02 Qualcomm Incorporated Acknowledgement of control messages in a wireless communication system
US9088384B2 (en) 2005-10-27 2015-07-21 Qualcomm Incorporated Pilot symbol transmission in wireless communication systems
US8582509B2 (en) 2005-10-27 2013-11-12 Qualcomm Incorporated Scalable frequency band operation in wireless communication systems
US8045512B2 (en) 2005-10-27 2011-10-25 Qualcomm Incorporated Scalable frequency band operation in wireless communication systems
US9172453B2 (en) 2005-10-27 2015-10-27 Qualcomm Incorporated Method and apparatus for pre-coding frequency division duplexing system
US9210651B2 (en) 2005-10-27 2015-12-08 Qualcomm Incorporated Method and apparatus for bootstraping information in a communication system
KR100873636B1 (en) 2005-11-14 2008-12-12 삼성전자주식회사 Method and apparatus for encoding/decoding image using single coding mode
US8582548B2 (en) 2005-11-18 2013-11-12 Qualcomm Incorporated Frequency division multiple access schemes for wireless communication
JP2007180723A (en) * 2005-12-27 2007-07-12 Toshiba Corp Image processor and image processing method
EP1977608B1 (en) 2006-01-09 2020-01-01 LG Electronics, Inc. Inter-layer prediction method for video signal
KR100791295B1 (en) * 2006-01-12 2008-01-04 삼성전자주식회사 Flag encoding method, flag decoding method, and apparatus thereof
JP2007195117A (en) * 2006-01-23 2007-08-02 Toshiba Corp Moving image decoding device
KR100775104B1 (en) * 2006-02-27 2007-11-08 삼성전자주식회사 Image stabilizer and system having the same and method thereof
US8116371B2 (en) * 2006-03-08 2012-02-14 Texas Instruments Incorporated VLC technique for layered video coding using distinct element grouping
KR101330630B1 (en) * 2006-03-13 2013-11-22 삼성전자주식회사 Method and apparatus for encoding moving picture, method and apparatus for decoding moving picture, applying adaptively an optimal prediction mode
US8130828B2 (en) 2006-04-07 2012-03-06 Microsoft Corporation Adjusting quantization to preserve non-zero AC coefficients
US8059721B2 (en) 2006-04-07 2011-11-15 Microsoft Corporation Estimating sample-domain distortion in the transform domain with rounding compensation
US8503536B2 (en) * 2006-04-07 2013-08-06 Microsoft Corporation Quantization adjustments for DC shift artifacts
US7995649B2 (en) 2006-04-07 2011-08-09 Microsoft Corporation Quantization adjustment based on texture level
US7974340B2 (en) * 2006-04-07 2011-07-05 Microsoft Corporation Adaptive B-picture quantization control
US8711925B2 (en) * 2006-05-05 2014-04-29 Microsoft Corporation Flexible quantization
WO2008042023A2 (en) * 2006-05-18 2008-04-10 Florida Atlantic University Methods for encrypting and compressing video
US8379723B2 (en) * 2006-06-27 2013-02-19 Intel Corporation Chroma motion vector processing apparatus, system, and method
JP2008048240A (en) * 2006-08-18 2008-02-28 Nec Electronics Corp Bit plane decoding device and its method
US7529416B2 (en) * 2006-08-18 2009-05-05 Terayon Communication Systems, Inc. Method and apparatus for transferring digital data between circuits
US7760960B2 (en) * 2006-09-15 2010-07-20 Freescale Semiconductor, Inc. Localized content adaptive filter for low power scalable image processing
US7327289B1 (en) * 2006-09-20 2008-02-05 Intel Corporation Data-modifying run length encoder to avoid data expansion
US20080084932A1 (en) * 2006-10-06 2008-04-10 Microsoft Corporation Controlling loop filtering for interlaced video frames
BRPI0719239A2 (en) * 2006-10-10 2014-10-07 Nippon Telegraph & Telephone CODING METHOD AND VIDEO DECODING METHOD, SAME DEVICES, SAME PROGRAMS, AND PROGRAM RECORDING STORAGE
KR100819289B1 (en) * 2006-10-20 2008-04-02 삼성전자주식회사 Deblocking filtering method and deblocking filter for video data
JP2008109389A (en) * 2006-10-25 2008-05-08 Canon Inc Image processing device and control method of image processing device
US7756348B2 (en) * 2006-10-30 2010-07-13 Hewlett-Packard Development Company, L.P. Method for decomposing a video sequence frame
US8711929B2 (en) * 2006-11-01 2014-04-29 Skyfire Labs, Inc. Network-based dynamic encoding
US9247260B1 (en) 2006-11-01 2016-01-26 Opera Software Ireland Limited Hybrid bitmap-mode encoding
US8443398B2 (en) * 2006-11-01 2013-05-14 Skyfire Labs, Inc. Architecture for delivery of video content responsive to remote interaction
US8375304B2 (en) * 2006-11-01 2013-02-12 Skyfire Labs, Inc. Maintaining state of a web page
US7460725B2 (en) * 2006-11-09 2008-12-02 Calista Technologies, Inc. System and method for effectively encoding and decoding electronic information
US20080159637A1 (en) * 2006-12-27 2008-07-03 Ricardo Citro Deblocking filter hardware accelerator with interlace frame support
US20080159407A1 (en) * 2006-12-28 2008-07-03 Yang Nick Y Mechanism for a parallel processing in-loop deblock filter
US7907789B2 (en) * 2007-01-05 2011-03-15 Freescale Semiconductor, Inc. Reduction of block effects in spatially re-sampled image information for block-based image coding
WO2008092104A2 (en) * 2007-01-25 2008-07-31 Skyfire Labs, Inc. Dynamic client-server video tiling streaming
US8238424B2 (en) * 2007-02-09 2012-08-07 Microsoft Corporation Complexity-based adaptive preprocessing for multiple-pass video compression
US8184710B2 (en) * 2007-02-21 2012-05-22 Microsoft Corporation Adaptive truncation of transform coefficient data in a transform-based digital media codec
US20080225947A1 (en) * 2007-03-13 2008-09-18 Matthias Narroschke Quantization for hybrid video coding
US8111750B2 (en) * 2007-03-20 2012-02-07 Himax Technologies Limited System and method for 3-D recursive search motion estimation
US8498335B2 (en) * 2007-03-26 2013-07-30 Microsoft Corporation Adaptive deadzone size adjustment in quantization
US8243797B2 (en) * 2007-03-30 2012-08-14 Microsoft Corporation Regions of interest for quality adjustments
JP5686594B2 (en) 2007-04-12 2015-03-18 トムソン ライセンシングThomson Licensing Method and apparatus for video usability information (VUI) for scalable video coding
US8442337B2 (en) * 2007-04-18 2013-05-14 Microsoft Corporation Encoding adjustments for animation content
US8331438B2 (en) 2007-06-05 2012-12-11 Microsoft Corporation Adaptive selection of picture-level quantization parameters for predicted video pictures
US8725504B1 (en) 2007-06-06 2014-05-13 Nvidia Corporation Inverse quantization in audio decoding
US8726125B1 (en) 2007-06-06 2014-05-13 Nvidia Corporation Reducing interpolation error
US7774205B2 (en) * 2007-06-15 2010-08-10 Microsoft Corporation Coding of sparse digital media spectral data
US8477852B2 (en) * 2007-06-20 2013-07-02 Nvidia Corporation Uniform video decoding and display
US8254455B2 (en) * 2007-06-30 2012-08-28 Microsoft Corporation Computing collocated macroblock information for direct mode macroblocks
TWI375470B (en) * 2007-08-03 2012-10-21 Via Tech Inc Method for determining boundary strength
US8605786B2 (en) * 2007-09-04 2013-12-10 The Regents Of The University Of California Hierarchical motion vector processing method, software and devices
WO2009034486A2 (en) * 2007-09-10 2009-03-19 Nxp B.V. Method and apparatus for line-based motion estimation in video image data
US8849051B2 (en) * 2007-09-17 2014-09-30 Nvidia Corporation Decoding variable length codes in JPEG applications
US8502709B2 (en) * 2007-09-17 2013-08-06 Nvidia Corporation Decoding variable length codes in media applications
JP5414684B2 (en) 2007-11-12 2014-02-12 ザ ニールセン カンパニー (ユー エス) エルエルシー Method and apparatus for performing audio watermarking, watermark detection, and watermark extraction
CN101179720B (en) * 2007-11-16 2010-09-01 海信集团有限公司 Video decoding method
CN101453651B (en) * 2007-11-30 2012-02-01 华为技术有限公司 A deblocking filtering method and apparatus
US8934539B2 (en) 2007-12-03 2015-01-13 Nvidia Corporation Vector processor acceleration for media quantization
US8704834B2 (en) 2007-12-03 2014-04-22 Nvidia Corporation Synchronization of video input data streams and video output data streams
US8687875B2 (en) 2007-12-03 2014-04-01 Nvidia Corporation Comparator based acceleration for media quantization
US8743972B2 (en) * 2007-12-20 2014-06-03 Vixs Systems, Inc. Coding adaptive deblocking filter and method for use therewith
US20090161757A1 (en) * 2007-12-21 2009-06-25 General Instrument Corporation Method and Apparatus for Selecting a Coding Mode for a Block
US8457951B2 (en) * 2008-01-29 2013-06-04 The Nielsen Company (Us), Llc Methods and apparatus for performing variable black length watermarking of media
JP5109707B2 (en) * 2008-02-19 2012-12-26 コニカミノルタビジネステクノロジーズ株式会社 Fixing apparatus and image forming apparatus
US8145794B2 (en) 2008-03-14 2012-03-27 Microsoft Corporation Encoding/decoding while allowing varying message formats per message
KR101431545B1 (en) * 2008-03-17 2014-08-20 삼성전자주식회사 Method and apparatus for Video encoding and decoding
EP2266318B1 (en) * 2008-03-19 2020-04-22 Nokia Technologies Oy Combined motion vector and reference index prediction for video coding
US20090238263A1 (en) * 2008-03-20 2009-09-24 Pawan Jaggi Flexible field based energy efficient multimedia processor architecture and method
US20090238479A1 (en) * 2008-03-20 2009-09-24 Pawan Jaggi Flexible frame based energy efficient multimedia processor architecture and method
TWI370690B (en) 2008-03-21 2012-08-11 Novatek Microelectronics Corp Method and apparatus for generating coded block pattern for highpass coeffecients
CN101552918B (en) * 2008-03-31 2011-05-11 联咏科技股份有限公司 Generation method of block type information with high-pass coefficient and generation circuit thereof
US8189933B2 (en) * 2008-03-31 2012-05-29 Microsoft Corporation Classifying and controlling encoding quality for textured, dark smooth and smooth video content
US8179974B2 (en) * 2008-05-02 2012-05-15 Microsoft Corporation Multi-level representation of reordered transform coefficients
US8369638B2 (en) 2008-05-27 2013-02-05 Microsoft Corporation Reducing DC leakage in HD photo transform
US8447591B2 (en) * 2008-05-30 2013-05-21 Microsoft Corporation Factorization of overlapping tranforms into two block transforms
US8897359B2 (en) * 2008-06-03 2014-11-25 Microsoft Corporation Adaptive quantization for enhancement layer video coding
US20090304086A1 (en) * 2008-06-06 2009-12-10 Apple Inc. Method and system for video coder and decoder joint optimization
KR101379187B1 (en) * 2008-06-23 2014-04-15 에스케이 텔레콤주식회사 Image Encoding/Decoding Method and Apparatus Using Block Transformation
US8406307B2 (en) 2008-08-22 2013-03-26 Microsoft Corporation Entropy coding/decoding of hierarchically organized data
US8326075B2 (en) 2008-09-11 2012-12-04 Google Inc. System and method for video encoding using adaptive loop filter
US8180166B2 (en) * 2008-09-23 2012-05-15 Mediatek Inc. Transcoding method
CA2679509C (en) * 2008-09-25 2014-08-05 Research In Motion Limited A method and apparatus for configuring compressed mode
US8275209B2 (en) * 2008-10-10 2012-09-25 Microsoft Corporation Reduced DC gain mismatch and DC leakage in overlap transform processing
KR101279573B1 (en) 2008-10-31 2013-06-27 에스케이텔레콤 주식회사 Motion Vector Encoding/Decoding Method and Apparatus and Video Encoding/Decoding Method and Apparatus
US9307267B2 (en) 2008-12-11 2016-04-05 Nvidia Corporation Techniques for scalable dynamic data encoding and decoding
FR2940736B1 (en) * 2008-12-30 2011-04-08 Sagem Comm SYSTEM AND METHOD FOR VIDEO CODING
US8189666B2 (en) 2009-02-02 2012-05-29 Microsoft Corporation Local picture identifier and computation of co-located information
WO2010092740A1 (en) * 2009-02-10 2010-08-19 パナソニック株式会社 Image processing apparatus, image processing method, program and integrated circuit
KR20100095992A (en) * 2009-02-23 2010-09-01 한국과학기술원 Method for encoding partitioned block in video encoding, method for decoding partitioned block in video decoding and recording medium implementing the same
JP5115498B2 (en) * 2009-03-05 2013-01-09 富士通株式会社 Image coding apparatus, image coding control method, and program
JP5800396B2 (en) * 2009-04-14 2015-10-28 トムソン ライセンシングThomson Licensing Method and apparatus for determining and selecting filter parameters in response to variable transformation in sparsity-based artifact removal filtering
US9076239B2 (en) * 2009-04-30 2015-07-07 Stmicroelectronics S.R.L. Method and systems for thumbnail generation, and corresponding computer program product
TWI343192B (en) * 2009-06-12 2011-06-01 Ind Tech Res Inst Decoding method
EP2449781B1 (en) * 2009-06-29 2016-12-28 Thomson Licensing Methods and apparatus for adaptive probability update for non-coded syntax
US9161057B2 (en) * 2009-07-09 2015-10-13 Qualcomm Incorporated Non-zero rounding and prediction mode selection techniques in video encoding
CN105120265B (en) * 2009-08-12 2019-01-29 汤姆森特许公司 For chroma coder in improved frame and decoded method and device
KR101452859B1 (en) * 2009-08-13 2014-10-23 삼성전자주식회사 Method and apparatus for encoding and decoding motion vector
US8654838B2 (en) * 2009-08-31 2014-02-18 Nxp B.V. System and method for video and graphic compression using multiple different compression techniques and compression error feedback
JP5234368B2 (en) * 2009-09-30 2013-07-10 ソニー株式会社 Image processing apparatus and method
USRE47243E1 (en) * 2009-12-09 2019-02-12 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
KR101700358B1 (en) * 2009-12-09 2017-01-26 삼성전자주식회사 Method and apparatus for encoding video, and method and apparatus for decoding video
WO2011075071A1 (en) 2009-12-17 2011-06-23 Telefonaktiebolaget Lm Ericsson (Publ) Method and arrangement for video coding
KR101703327B1 (en) * 2010-01-14 2017-02-06 삼성전자 주식회사 Method and apparatus for video encoding using pattern information of hierarchical data unit, and method and apparatus for video decoding using pattern information of hierarchical data unit
US20110176611A1 (en) * 2010-01-15 2011-07-21 Yu-Wen Huang Methods for decoder-side motion vector derivation
CA2785036A1 (en) * 2010-02-05 2011-08-11 Telefonaktiebolaget L M Ericsson (Publ) De-blocking filtering control
JP5020391B2 (en) * 2010-02-22 2012-09-05 パナソニック株式会社 Decoding device and decoding method
US8527649B2 (en) 2010-03-09 2013-09-03 Mobixell Networks Ltd. Multi-stream bit rate adaptation
CN106454371B (en) 2010-04-13 2020-03-20 Ge视频压缩有限责任公司 Decoder, array reconstruction method, encoder, encoding method, and storage medium
KR102166520B1 (en) 2010-04-13 2020-10-16 지이 비디오 컴프레션, 엘엘씨 Sample region merging
KR101584480B1 (en) 2010-04-13 2016-01-14 지이 비디오 컴프레션, 엘엘씨 Inter-plane prediction
ES2549734T3 (en) 2010-04-13 2015-11-02 Ge Video Compression, Llc Video encoding using multi-tree image subdivisions
US20110261070A1 (en) * 2010-04-23 2011-10-27 Peter Francis Chevalley De Rivaz Method and system for reducing remote display latency
WO2011138900A1 (en) 2010-05-06 2011-11-10 日本電信電話株式会社 Video encoding control method and apparatus
BR112012028184A2 (en) * 2010-05-07 2016-08-02 Nippon Telegraph & Telephone Video coding control method, video coding device and video coding program
CA2798354C (en) * 2010-05-12 2016-01-26 Nippon Telegraph And Telephone Corporation A video encoding bit rate control technique using a quantization statistic threshold to determine whether re-encoding of an encoding-order picture group is required
JP5625512B2 (en) * 2010-06-09 2014-11-19 ソニー株式会社 Encoding device, encoding method, program, and recording medium
CN101883286B (en) * 2010-06-25 2012-12-05 无锡中星微电子有限公司 Calibration method and device, and motion estimation method and device in motion estimation
US8832709B2 (en) 2010-07-19 2014-09-09 Flash Networks Ltd. Network optimization
KR101914018B1 (en) * 2010-09-30 2018-10-31 미쓰비시덴키 가부시키가이샤 Dynamic image decoding device, dynamic image decoding method, dynamic image encoding device, dynamic image encoding method, and recoding medium
US8885704B2 (en) * 2010-10-01 2014-11-11 Qualcomm Incorporated Coding prediction modes in video coding
US8787443B2 (en) 2010-10-05 2014-07-22 Microsoft Corporation Content adaptive deblocking during video encoding and decoding
KR102034004B1 (en) 2010-10-08 2019-10-18 지이 비디오 컴프레션, 엘엘씨 Picture coding supporting block partitioning and block merging
SI3595303T1 (en) 2010-11-25 2022-01-31 Lg Electronics Inc. Method for decoding image information, decoding apparatus, method for encoding image information, encoding apparatus and storage medium
US11284081B2 (en) 2010-11-25 2022-03-22 Lg Electronics Inc. Method for signaling image information, and method for decoding image information using same
US9137544B2 (en) * 2010-11-29 2015-09-15 Mediatek Inc. Method and apparatus for derivation of mv/mvp candidate for inter/skip/merge modes
US9060174B2 (en) 2010-12-28 2015-06-16 Fish Dive, Inc. Method and system for selectively breaking prediction in video coding
US8914534B2 (en) 2011-01-05 2014-12-16 Sonic Ip, Inc. Systems and methods for adaptive bitrate streaming of media stored in matroska container files using hypertext transfer protocol
US9635382B2 (en) 2011-01-07 2017-04-25 Texas Instruments Incorporated Method, system and computer program product for determining a motion vector
KR101824241B1 (en) * 2011-01-11 2018-03-14 에스케이 텔레콤주식회사 Intra Additional Information Encoding/Decoding Apparatus and Method
WO2012096164A1 (en) * 2011-01-12 2012-07-19 パナソニック株式会社 Image encoding method, image decoding method, image encoding device, and image decoding device
EP3668095B1 (en) * 2011-01-13 2021-07-07 Canon Kabushiki Kaisha Image coding apparatus, image coding method, and program, and image decoding apparatus, image decoding method, and program
JP6056122B2 (en) * 2011-01-24 2017-01-11 ソニー株式会社 Image encoding apparatus, image decoding apparatus, method and program thereof
US9380319B2 (en) 2011-02-04 2016-06-28 Google Technology Holdings LLC Implicit transform unit representation
CN107181958B (en) 2011-02-09 2020-04-28 Lg 电子株式会社 Method of encoding and decoding image and apparatus using the same
US8688074B2 (en) 2011-02-28 2014-04-01 Moisixell Networks Ltd. Service classification of web traffic
GB2488815C (en) 2011-03-09 2018-03-28 Canon Kk Video decoding
JP5982734B2 (en) * 2011-03-11 2016-08-31 ソニー株式会社 Image processing apparatus and method
JP5842357B2 (en) * 2011-03-25 2016-01-13 富士ゼロックス株式会社 Image processing apparatus and image processing program
US9042458B2 (en) * 2011-04-01 2015-05-26 Microsoft Technology Licensing, Llc Multi-threaded implementations of deblock filtering
US8780971B1 (en) 2011-04-07 2014-07-15 Google, Inc. System and method of encoding using selectable loop filters
US8781004B1 (en) 2011-04-07 2014-07-15 Google Inc. System and method for encoding video using variable loop filter
US8780996B2 (en) 2011-04-07 2014-07-15 Google, Inc. System and method for encoding and decoding video data
EP2700230A4 (en) * 2011-04-21 2014-08-06 Mediatek Inc Method and apparatus for improved in-loop filtering
US9058223B2 (en) * 2011-04-22 2015-06-16 Microsoft Technology Licensing Llc Parallel entropy encoding on GPU
JP5689563B2 (en) * 2011-05-10 2015-03-25 メディアテック インコーポレイテッド Method and apparatus for reducing in-loop filter buffer
PL3879831T3 (en) * 2011-05-31 2024-07-29 Jvckenwood Corporation Moving image encoding device, moving image encoding method and moving image encoding program, as well as moving image decoding device, moving image decoding method and moving image decoding program
KR20240042116A (en) 2011-06-15 2024-04-01 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 Decoding method and device, and encoding method and device
JP5336004B2 (en) * 2011-06-17 2013-11-06 パナソニック株式会社 Video decoding device
USRE47366E1 (en) 2011-06-23 2019-04-23 Sun Patent Trust Image decoding method and apparatus based on a signal type of the control parameter of the current block
KR102008030B1 (en) 2011-06-23 2019-08-06 선 페이턴트 트러스트 Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device
RU2603552C2 (en) 2011-06-24 2016-11-27 Сан Пэтент Траст Image decoding method, image encoding method, image decoding device, image encoding device and image encoding and decoding device
WO2012176464A1 (en) 2011-06-24 2012-12-27 パナソニック株式会社 Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device
BR112013030347B1 (en) 2011-06-27 2022-06-28 Sun Patent Trust IMAGE DECODING METHOD, IMAGE ENCODING METHOD, IMAGE DECODING APPARATUS, IMAGE ENCODING APPARATUS AND IMAGE ENCODING AND DECODING APPARATUS
MY165469A (en) 2011-06-28 2018-03-23 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
MX2013010892A (en) 2011-06-29 2013-12-06 Panasonic Corp Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device.
CN103583048B (en) 2011-06-30 2017-05-17 太阳专利托管公司 Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device
KR101955374B1 (en) * 2011-06-30 2019-05-31 에스케이 텔레콤주식회사 Method and Apparatus for Image Encoding/Decoding By Fast Coding Unit Mode Decision
KR102060619B1 (en) 2011-06-30 2019-12-30 선 페이턴트 트러스트 Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device
US10536701B2 (en) 2011-07-01 2020-01-14 Qualcomm Incorporated Video coding using adaptive motion vector resolution
US8767824B2 (en) 2011-07-11 2014-07-01 Sharp Kabushiki Kaisha Video decoder parallelization for tiles
CN103765885B (en) 2011-07-11 2017-04-12 太阳专利托管公司 Image decoding method, image encoding method, image decoding apparatus, image encoding apparatus, and image encoding/decoding apparatus
GB2493755B (en) 2011-08-17 2016-10-19 Canon Kk Method and device for encoding a sequence of images and method and device for decoding a sequence of images
CN108989847B (en) 2011-08-30 2021-03-09 帝威视有限公司 System and method for encoding and streaming video
US9467708B2 (en) 2011-08-30 2016-10-11 Sonic Ip, Inc. Selection of resolutions for seamless resolution switching of multimedia content
JP2014526818A (en) * 2011-09-09 2014-10-06 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Low-complexity deblocking filter decision
US8885706B2 (en) 2011-09-16 2014-11-11 Google Inc. Apparatus and methodology for a video codec system with noise reduction capability
RU2646308C1 (en) * 2011-10-17 2018-03-02 Кт Корпорейшен Method of video signal decoding
US8891630B2 (en) * 2011-10-24 2014-11-18 Blackberry Limited Significance map encoding and decoding using partition set based context assignment
JPWO2013065263A1 (en) * 2011-11-02 2015-04-02 日本電気株式会社 Video encoding apparatus, video decoding apparatus, video encoding method, video decoding method, and program
KR20130050149A (en) * 2011-11-07 2013-05-15 오수미 Method for generating prediction block in inter prediction mode
JP2013102297A (en) * 2011-11-07 2013-05-23 Canon Inc Image encoding method, image encoder and program, image decoding method, and image decoder and program
TWI523497B (en) * 2011-11-10 2016-02-21 Sony Corp Image processing apparatus and method
CN108900839B (en) * 2011-12-28 2022-05-31 夏普株式会社 Image decoding device and method, image encoding device and method
BR122020018114B1 (en) * 2012-01-17 2023-11-21 Gensquare Llc METHOD FOR APPLYING AN EDGE OFFSET
US9013760B1 (en) 2012-02-15 2015-04-21 Marvell International Ltd. Method and apparatus for using data compression techniques to increase a speed at which documents are scanned through a scanning device
CN102595164A (en) * 2012-02-27 2012-07-18 中兴通讯股份有限公司 Method, device and system for sending video image
US9131073B1 (en) 2012-03-02 2015-09-08 Google Inc. Motion estimation aided noise reduction
EP2642755B1 (en) 2012-03-20 2018-01-03 Dolby Laboratories Licensing Corporation Complexity scalable multilayer video coding
US9432666B2 (en) * 2012-03-29 2016-08-30 Intel Corporation CAVLC decoder with multi-symbol run before parallel decode
GB2502047B (en) * 2012-04-04 2019-06-05 Snell Advanced Media Ltd Video sequence processing
US9621921B2 (en) 2012-04-16 2017-04-11 Qualcomm Incorporated Coefficient groups and coefficient coding for coefficient scans
GB2501535A (en) 2012-04-26 2013-10-30 Sony Corp Chrominance Processing in High Efficiency Video Codecs
EP2858353B1 (en) * 2012-06-01 2019-03-20 Velos Media International Limited Arithmetic decoding device, image decoding device, arithmetic encoding device, and image encoding device
GB2503875B (en) * 2012-06-29 2015-06-10 Canon Kk Method and device for encoding or decoding an image
RS57336B1 (en) * 2012-07-02 2018-08-31 Samsung Electronics Co Ltd Method for entropy decoding of a video
WO2014007515A1 (en) 2012-07-02 2014-01-09 엘지전자 주식회사 Method for decoding image and apparatus using same
US9344729B1 (en) 2012-07-11 2016-05-17 Google Inc. Selective prediction signal filtering
CN103634606B (en) * 2012-08-21 2015-04-08 腾讯科技(深圳)有限公司 Video encoding method and apparatus
KR101654814B1 (en) * 2012-09-28 2016-09-22 브이아이디 스케일, 인크. Cross-plane filtering for chroma signal enhancement in video coding
KR101661436B1 (en) 2012-09-29 2016-09-29 후아웨이 테크놀러지 컴퍼니 리미티드 Method, apparatus and system for encoding and decoding video
US20140092992A1 (en) 2012-09-30 2014-04-03 Microsoft Corporation Supplemental enhancement information including confidence level and mixed content information
US9979960B2 (en) * 2012-10-01 2018-05-22 Microsoft Technology Licensing, Llc Frame packing and unpacking between frames of chroma sampling formats with different chroma resolutions
US9998755B2 (en) 2012-10-03 2018-06-12 Mediatek Inc. Method and apparatus for motion information inheritance in three-dimensional video coding
CN103841425B (en) * 2012-10-08 2017-04-05 华为技术有限公司 For the method for the motion vector list foundation of motion-vector prediction, device
CN102883163B (en) 2012-10-08 2014-05-28 华为技术有限公司 Method and device for building motion vector lists for prediction of motion vectors
CN102946504B (en) * 2012-11-22 2015-02-18 四川虹微技术有限公司 Self-adaptive moving detection method based on edge detection
US9560361B2 (en) * 2012-12-05 2017-01-31 Vixs Systems Inc. Adaptive single-field/dual-field video encoding
US9191457B2 (en) 2012-12-31 2015-11-17 Sonic Ip, Inc. Systems, methods, and media for controlling delivery of content
US9008363B1 (en) 2013-01-02 2015-04-14 Google Inc. System and method for computing optical flow
CN108259900B (en) * 2013-01-16 2021-01-01 黑莓有限公司 Transform coefficient coding for context adaptive binary entropy coding of video
US9219915B1 (en) * 2013-01-17 2015-12-22 Google Inc. Selection of transform size in video coding
CN103051857B (en) * 2013-01-25 2015-07-15 西安电子科技大学 Motion compensation-based 1/4 pixel precision video image deinterlacing method
US9544597B1 (en) 2013-02-11 2017-01-10 Google Inc. Hybrid transform in video encoding and decoding
US9967559B1 (en) 2013-02-11 2018-05-08 Google Llc Motion vector dependent spatial transformation in video coding
WO2014146079A1 (en) * 2013-03-15 2014-09-18 Zenkich Raymond System and method for non-uniform video coding
US9749627B2 (en) * 2013-04-08 2017-08-29 Microsoft Technology Licensing, Llc Control data for motion-constrained tile set
US9674530B1 (en) 2013-04-30 2017-06-06 Google Inc. Hybrid transforms in video coding
JP6003803B2 (en) * 2013-05-22 2016-10-05 株式会社Jvcケンウッド Moving picture coding apparatus, moving picture coding method, and moving picture coding program
US10003792B2 (en) 2013-05-27 2018-06-19 Microsoft Technology Licensing, Llc Video encoder for images
JP6022060B2 (en) * 2013-06-12 2016-11-09 Mitsubishi Electric Corporation Image coding apparatus and image coding method
US9215464B2 (en) 2013-09-19 2015-12-15 Blackberry Limited Coding position data for the last non-zero transform coefficient in a coefficient group
US9813737B2 (en) 2013-09-19 2017-11-07 Blackberry Limited Transposing a block of transform coefficients, based upon an intra-prediction mode
FR3011429A1 (en) * 2013-09-27 2015-04-03 Orange VIDEO CODING AND DECODING BY INHERITANCE OF A FIELD OF MOTION VECTORS
US9473778B2 (en) 2013-09-27 2016-10-18 Apple Inc. Skip thresholding in pipelined video encoders
KR20160065860A (en) * 2013-10-07 2016-06-09 LG Electronics Inc. Method for encoding and decoding a media signal and apparatus using the same
JP6336058B2 (en) 2013-10-14 2018-06-06 Microsoft Technology Licensing, LLC Features of base color index map mode for video and image encoding and decoding
WO2015054813A1 (en) 2013-10-14 2015-04-23 Microsoft Technology Licensing, Llc Encoder-side options for intra block copy prediction mode for video and image coding
WO2015054811A1 (en) 2013-10-14 2015-04-23 Microsoft Corporation Features of intra block copy prediction mode for video and image coding and decoding
US9330171B1 (en) * 2013-10-17 2016-05-03 Google Inc. Video annotation using deep network architectures
CN105659320B (en) 2013-10-21 2019-07-12 Dolby International AB Audio coder and decoder
BR112016015080A2 (en) 2014-01-03 2017-08-08 Microsoft Technology Licensing Llc BLOCK VECTOR PREDICTION IN VIDEO AND IMAGE ENCODING / DECODING
US10390034B2 (en) 2014-01-03 2019-08-20 Microsoft Technology Licensing, Llc Innovations in block vector prediction and estimation of reconstructed sample values within an overlap area
US9794558B2 (en) * 2014-01-08 2017-10-17 Qualcomm Incorporated Support of non-HEVC base layer in HEVC multi-layer extensions
US11284103B2 (en) 2014-01-17 2022-03-22 Microsoft Technology Licensing, Llc Intra block copy prediction with asymmetric partitions and encoder-side search patterns, search ranges and approaches to partitioning
US10542274B2 (en) 2014-02-21 2020-01-21 Microsoft Technology Licensing, Llc Dictionary encoding and decoding of screen content
MX361228B (en) 2014-03-04 2018-11-29 Microsoft Technology Licensing Llc Block flipping and skip mode in intra block copy prediction.
US10237575B2 (en) 2014-03-14 2019-03-19 Vid Scale, Inc. Palette coding for screen content coding
US10136140B2 (en) 2014-03-17 2018-11-20 Microsoft Technology Licensing, Llc Encoder-side decisions for screen content encoding
US9877048B2 (en) * 2014-06-09 2018-01-23 Qualcomm Incorporated Entropy coding techniques for display stream compression (DSC)
EP3158734A1 (en) 2014-06-19 2017-04-26 Microsoft Technology Licensing, LLC Unified intra block copy and inter prediction modes
US9807410B2 (en) 2014-07-02 2017-10-31 Apple Inc. Late-stage mode conversions in pipelined video encoders
US10102613B2 (en) 2014-09-25 2018-10-16 Google Llc Frequency-domain denoising
EP3917146A1 (en) 2014-09-30 2021-12-01 Microsoft Technology Licensing, LLC Rules for intra-picture prediction modes when wavefront parallel processing is enabled
US9294782B1 (en) 2014-10-28 2016-03-22 Sony Corporation Image processing system with artifact reduction mechanism and method of operation thereof
US9674554B2 (en) 2014-10-28 2017-06-06 Sony Corporation Image processing system with coding mode and method of operation thereof
US9357232B2 (en) 2014-10-28 2016-05-31 Sony Corporation Image processing system with binary decomposition and method of operation thereof
US10063889B2 (en) 2014-10-28 2018-08-28 Sony Corporation Image processing system with conditional coding and method of operation thereof
US9357237B2 (en) 2014-10-28 2016-05-31 Sony Corporation Image processing system with bitstream reduction and method of operation thereof
US10356410B2 (en) 2014-10-28 2019-07-16 Sony Corporation Image processing system with joint encoding and method of operation thereof
US9854201B2 (en) 2015-01-16 2017-12-26 Microsoft Technology Licensing, Llc Dynamically updating quality to higher chroma sampling rate
US9749646B2 (en) 2015-01-16 2017-08-29 Microsoft Technology Licensing, Llc Encoding/decoding of high chroma resolution details
US9591325B2 (en) 2015-01-27 2017-03-07 Microsoft Technology Licensing, Llc Special case handling for merged chroma blocks in intra block copy prediction mode
US10924743B2 (en) 2015-02-06 2021-02-16 Microsoft Technology Licensing, Llc Skipping evaluation stages during media encoding
WO2016133504A1 (en) * 2015-02-18 2016-08-25 Hewlett Packard Enterprise Development Lp Continuous viewing media
US10958927B2 (en) 2015-03-27 2021-03-23 Qualcomm Incorporated Motion information derivation mode determination in video coding
US10659783B2 (en) 2015-06-09 2020-05-19 Microsoft Technology Licensing, Llc Robust encoding/decoding of escape-coded pixels in palette mode
US10038917B2 (en) 2015-06-12 2018-07-31 Microsoft Technology Licensing, Llc Search strategies for intra-picture prediction modes
US10136132B2 (en) * 2015-07-21 2018-11-20 Microsoft Technology Licensing, Llc Adaptive skip or zero block detection combined with transform size decision
US9769499B2 (en) 2015-08-11 2017-09-19 Google Inc. Super-transform video coding
US10277905B2 (en) 2015-09-14 2019-04-30 Google Llc Transform selection for non-baseband signal coding
US9807423B1 (en) 2015-11-24 2017-10-31 Google Inc. Hybrid transform scheme for video coding
US10756755B2 (en) * 2016-05-10 2020-08-25 Immersion Networks, Inc. Adaptive audio codec system, method and article
US10368080B2 (en) 2016-10-21 2019-07-30 Microsoft Technology Licensing, Llc Selective upsampling or refresh of chroma sample values
US10235763B2 (en) 2016-12-01 2019-03-19 Google Llc Determining optical flow
EP3349451A1 (en) 2017-01-11 2018-07-18 Thomson Licensing Method and apparatus for selecting a coding mode used for encoding/decoding a residual block
BR112019022007A2 (en) 2017-04-21 2020-05-12 Zenimax Media Inc. SYSTEMS AND METHODS FOR MOTION VECTORS GENERATED IN GAMES
US11070818B2 (en) * 2017-07-05 2021-07-20 Telefonaktiebolaget Lm Ericsson (Publ) Decoding a block of video samples
US10986349B2 (en) 2017-12-29 2021-04-20 Microsoft Technology Licensing, Llc Constraints on locations of reference blocks for intra block copy prediction
US11012715B2 (en) 2018-02-08 2021-05-18 Qualcomm Incorporated Intra block copy for video coding
US10735025B2 (en) * 2018-03-02 2020-08-04 Microsoft Technology Licensing, Llc Use of data prefixes to increase compression ratios
KR20230141952A (en) * 2018-03-29 2023-10-10 Nippon Hoso Kyokai (NHK) Image encoding device, image decoding device, and program
CN110324627B (en) * 2018-03-30 2022-04-05 Hangzhou Hikvision Digital Technology Co., Ltd. Chroma intra-frame prediction method and device
US10469869B1 (en) * 2018-06-01 2019-11-05 Tencent America LLC Method and apparatus for video coding
WO2019235849A1 (en) * 2018-06-06 2019-12-12 LG Electronics Inc. Method for processing overlay media in 360 video system, and device therefor
WO2019234669A1 (en) * 2018-06-07 2019-12-12 Beijing Bytedance Network Technology Co., Ltd. Signaled mv precision
US11025946B2 (en) * 2018-06-14 2021-06-01 Tencent America LLC Method and apparatus for video coding
WO2020040623A1 (en) * 2018-08-24 2020-02-27 Samsung Electronics Co., Ltd. Method and apparatus for image encoding, and method and apparatus for image decoding
US11477476B2 (en) * 2018-10-04 2022-10-18 Qualcomm Incorporated Affine restrictions for the worst-case bandwidth reduction in video coding
US11140403B2 (en) * 2018-12-20 2021-10-05 Tencent America LLC Identifying tile from network abstraction unit header
WO2020140948A1 (en) * 2019-01-02 2020-07-09 Beijing Bytedance Network Technology Co., Ltd. Motion vector derivation between dividing patterns
US11051035B2 (en) * 2019-02-08 2021-06-29 Qualcomm Incorporated Processing of illegal motion vectors for intra block copy mode in video coding
US11632563B2 (en) 2019-02-22 2023-04-18 Qualcomm Incorporated Motion vector derivation in video coding
US10687062B1 (en) * 2019-02-22 2020-06-16 Google Llc Compression across multiple images
KR20210134391A (en) * 2019-03-12 2021-11-09 Huawei Technologies Co., Ltd. Coding and decoding patch data units for point cloud coding
CN110175185B (en) * 2019-04-17 2023-04-07 Shanghai Tianshu Zhixin Semiconductor Co., Ltd. Self-adaptive lossless compression method based on time sequence data distribution characteristics
US11122297B2 (en) 2019-05-03 2021-09-14 Google Llc Using border-aligned block functions for image compression
WO2021003447A1 (en) * 2019-07-03 2021-01-07 Futurewei Technologies, Inc. Types of reference pictures in reference picture lists
EP4000267A4 (en) 2019-08-23 2023-02-22 Beijing Bytedance Network Technology Co., Ltd. Clipping in reference picture resampling
US11380343B2 (en) 2019-09-12 2022-07-05 Immersion Networks, Inc. Systems and methods for processing high frequency audio signal
JP7395727B2 (en) 2019-10-23 2023-12-11 Beijing Bytedance Network Technology Co., Ltd. Methods, devices and storage media for processing video data
KR102708041B1 (en) 2019-10-23 2024-09-19 Beijing Bytedance Network Technology Co., Ltd. Signaling for reference picture resampling
US11418792B2 (en) * 2020-03-27 2022-08-16 Tencent America LLC Estimating attributes for the classification of adaptive loop filtering based on projection-slice theorem
EP4298795A1 (en) * 2021-02-25 2024-01-03 Qualcomm Incorporated Machine learning based flow determination for video coding
US12003734B2 (en) 2021-02-25 2024-06-04 Qualcomm Incorporated Machine learning based flow determination for video coding
US12015801B2 (en) * 2021-09-13 2024-06-18 Apple Inc. Systems and methods for streaming extensions for video encoding
CN115348456B (en) * 2022-08-11 2023-06-06 Shanghai Jiuchi Network Technology Co., Ltd. Video image processing method, device, equipment and storage medium

Citations (149)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4691329A (en) 1985-07-02 1987-09-01 Matsushita Electric Industrial Co., Ltd. Block encoder
US4796087A (en) 1986-05-29 1989-01-03 Jacques Guichard Process for coding by transformation for the transmission of picture signals
US5089889A (en) 1989-04-28 1992-02-18 Victor Company Of Japan, Ltd. Apparatus for inter-frame predictive encoding of video signal
US5117287A (en) 1990-03-02 1992-05-26 Kokusai Denshin Denwa Co., Ltd. Hybrid coding system for moving image
US5220616A (en) 1991-02-27 1993-06-15 Northern Telecom Limited Image processing
US5367385A (en) 1992-05-07 1994-11-22 Picturetel Corporation Method and apparatus for processing block coded image data to reduce boundary artifacts between adjacent image blocks
US5422676A (en) 1991-04-25 1995-06-06 Deutsche Thomson-Brandt Gmbh System for coding an image representative signal
US5467134A (en) 1992-12-22 1995-11-14 Microsoft Corporation Method and system for compressing video data
US5473384A (en) 1993-12-16 1995-12-05 At&T Corp. Method of and system for enhancing distorted graphical information
US5477272A (en) 1993-07-22 1995-12-19 Gte Laboratories Incorporated Variable-block size multi-resolution motion estimation scheme for pyramid coding
US5544286A (en) 1993-01-29 1996-08-06 Microsoft Corporation Digital video data compression technique
US5590064A (en) 1994-10-26 1996-12-31 Intel Corporation Post-filtering for decoded video signals
US5598483A (en) 1993-04-13 1997-01-28 C-Cube Microsystems, Inc. MPEG video decompression processor
US5719958A (en) 1993-11-30 1998-02-17 Polaroid Corporation System and method for image edge detection using discrete cosine transforms
US5737455A (en) 1994-12-12 1998-04-07 Xerox Corporation Antialiasing with grey masking techniques
US5737019A (en) 1996-01-29 1998-04-07 Matsushita Electric Corporation Of America Method and apparatus for changing resolution by direct DCT mapping
US5748789A (en) 1996-10-31 1998-05-05 Microsoft Corporation Transparent block skipping in object-based video coding systems
US5757982A (en) 1994-10-18 1998-05-26 Hewlett-Packard Company Quadrantal scaling of dot matrix data
US5771318A (en) 1996-06-27 1998-06-23 Siemens Corporate Research, Inc. Adaptive edge-preserving smoothing filter
US5787203A (en) 1996-01-19 1998-07-28 Microsoft Corporation Method and system for filtering compressed video images
US5793897A (en) 1993-12-16 1998-08-11 Samsung Electronics Co., Ltd. Adaptive variable-length coding and decoding methods for image data
US5796875A (en) 1996-08-13 1998-08-18 Sony Electronics, Inc. Selective de-blocking filter for DCT compressed images
US5799113A (en) 1996-01-19 1998-08-25 Microsoft Corporation Method for expanding contracted video images
US5825929A (en) 1995-10-05 1998-10-20 Microsoft Corporation Transformation block optimization method
US5835618A (en) 1996-09-27 1998-11-10 Siemens Corporate Research, Inc. Uniform and non-uniform dynamic range remapping for optimum image display
US5844613A (en) 1997-03-17 1998-12-01 Microsoft Corporation Global motion estimator for motion video signal encoding
US5874995A (en) 1994-10-28 1999-02-23 Matsushita Electric Corporation Of America MPEG video decoder having a high bandwidth memory for use in decoding interlaced and progressive signals
US5905815A (en) 1994-09-09 1999-05-18 Intel Corporation Decoding encoded image signals encoded by further transforming transformed DC signals
US5937095A (en) 1995-01-31 1999-08-10 Matsushita Electric Industrial Co., Ltd. Method for encoding and decoding moving picture signals
US5946043A (en) 1997-12-31 1999-08-31 Microsoft Corporation Video coding using adaptive coding of block parameters for coded/uncoded blocks
US5982459A (en) 1995-05-31 1999-11-09 8×8, Inc. Integrated multimedia communications processor and codec
TW379509B (en) 1998-09-15 2000-01-11 Acer Inc Adaptive post-filtering of compressed video images to remove artifacts
US6016365A (en) 1997-10-16 2000-01-18 Samsung Electro-Mechanics Co., Ltd. Decoder having adaptive function of eliminating block effect
US6028967A (en) 1997-07-30 2000-02-22 Lg Electronics Inc. Method of reducing a blocking artifact when coding moving picture
US6038256A (en) 1996-12-31 2000-03-14 C-Cube Microsystems Inc. Statistical multiplexed video encoding using pre-encoding a priori statistics and a priori and a posteriori statistics
US6067322A (en) 1997-06-04 2000-05-23 Microsoft Corporation Half pixel motion estimation in motion video signal encoding
US6160503A (en) 1992-02-19 2000-12-12 8×8, Inc. Deblocking filter for encoder/decoder arrangement and method with divergence reduction
US6167164A (en) 1997-03-10 2000-12-26 Samsung Electronics Co., Ltd. One-dimensional signal adaptive filter for reducing blocking effect and filtering method
US6178205B1 (en) 1997-12-12 2001-01-23 Vtel Corporation Video postfiltering with motion-compensated temporal filtering and/or spatial-adaptive filtering
US6188799B1 (en) 1997-02-07 2001-02-13 Matsushita Electric Industrial Co., Ltd. Method and apparatus for removing noise in still and moving pictures
US6215910B1 (en) 1996-03-28 2001-04-10 Microsoft Corporation Table-based compression with embedded coding
US6233017B1 (en) 1996-09-16 2001-05-15 Microsoft Corporation Multimedia compression system with adaptive block sizes
US6236764B1 (en) 1998-11-30 2001-05-22 Equator Technologies, Inc. Image processing circuit and method for reducing a difference between pixel values across an image boundary
US6240135B1 (en) 1997-09-09 2001-05-29 Lg Electronics Inc Method of removing blocking artifacts in a coding system of a moving picture
US6249610B1 (en) 1996-06-19 2001-06-19 Matsushita Electric Industrial Co., Ltd. Apparatus and method for coding a picture and apparatus and method for decoding a picture
US6281942B1 (en) 1997-08-11 2001-08-28 Microsoft Corporation Spatial and temporal filtering mechanism for digital motion video signals
US20010017944A1 (en) 2000-01-20 2001-08-30 Nokia Mobile Phones Ltd. Method and associated device for filtering digital video images
US6285801B1 (en) 1998-05-29 2001-09-04 Stmicroelectronics, Inc. Non-linear adaptive image filter for filtering noise such as blocking artifacts
US6320905B1 (en) 1998-07-08 2001-11-20 Stream Machine Company Postprocessing system for removing blocking artifacts in block-based codecs
US20020009146A1 (en) 1998-03-20 2002-01-24 Barbara A. Hall Adaptively encoding a picture of contrasted complexity having normal video and noisy video portions
GB2365647A (en) 2000-08-04 2002-02-20 Snell & Wilcox Ltd Deriving parameters for post-processing from an encoded signal
US20020027954A1 (en) 1998-06-30 2002-03-07 Kenneth S. Singh Method and device for gathering block statistics during inverse quantization and iscan
US6380985B1 (en) 1998-09-14 2002-04-30 Webtv Networks, Inc. Resizing and anti-flicker filtering in reduced-size video images
US20020067369A1 (en) 2000-04-21 2002-06-06 Sullivan Gary J. Application program interface (API) facilitating decoder control of accelerator resources
US20020097802A1 (en) 1998-11-30 2002-07-25 Chih-Lung (Bruce) Lin "Coding techniques for coded block parameters of blocks of macroblocks"
US20020110284A1 (en) 1996-07-02 2002-08-15 Ke-Chiang Chu System and method using edge processing to remove blocking artifacts from decompressed images
US20020136303A1 (en) 2001-03-26 2002-09-26 Shijun Sun Method and apparatus for controlling loop filtering or post filtering in block-based motion compensated video coding
US6466624B1 (en) 1998-10-28 2002-10-15 Pixonics, Llc Video decoder with bit stream based enhancements
US20020150166A1 (en) 2001-03-02 2002-10-17 Johnson Andrew W. Edge adaptive texture discriminating filtering
US20020154227A1 (en) 2001-04-18 2002-10-24 Koninklijke Philips Electronics N.V. Dynamic complexity prediction and regulation of MPEG2 decoding in a media processor
US6473409B1 (en) 1999-02-26 2002-10-29 Microsoft Corp. Adaptive filtering system and method for adaptively canceling echoes and reducing noise in digital signals
US20020186890A1 (en) 2001-05-03 2002-12-12 Ming-Chieh Lee Dynamic filtering for lossy compression
US6501798B1 (en) 1998-01-22 2002-12-31 International Business Machines Corporation Device for generating multiple quality level bit-rates in a video encoder
US6504873B1 (en) 1997-06-13 2003-01-07 Nokia Mobile Phones Ltd. Filtering based on activities inside the video blocks and at their boundary
EP1085763B1 (en) 1996-05-28 2003-01-22 Matsushita Electric Industrial Co., Ltd. Image predictive coding apparatus and method.
US20030021489A1 (en) 2001-07-24 2003-01-30 Seiko Epson Corporation Image processor and image processing program, and image processing method
US6529638B1 (en) 1999-02-01 2003-03-04 Sharp Laboratories Of America, Inc. Block boundary artifact reduction for block-based image compression
US20030044080A1 (en) 2001-09-05 2003-03-06 Emblaze Systems Ltd Method for reducing blocking artifacts
US20030053708A1 (en) 2001-07-02 2003-03-20 Jasc Software Removal of block encoding artifacts
US20030053711A1 (en) 2001-09-20 2003-03-20 Changick Kim Reducing blocking and ringing artifacts in low-bit-rate coding
US20030053541A1 (en) 2001-09-14 2003-03-20 Shijun Sun Adaptive filtering based upon boundary strength
US20030058944A1 (en) * 2001-09-24 2003-03-27 Macinnis Alexander G. Method and apparatus for performing deblocking filtering with interlace capability
US6571016B1 (en) 1997-05-05 2003-05-27 Microsoft Corporation Intra compression of pixel blocks using predicted mean
US20030099292A1 (en) 2001-11-27 2003-05-29 Limin Wang Macroblock level adaptive frame/field coding for digital video content
US6597860B2 (en) 1997-08-14 2003-07-22 Samsung Electronics Digital camcorder apparatus with MPEG-2 compatible video compression
US20030138154A1 (en) 2001-12-28 2003-07-24 Tooru Suino Image-processing apparatus, image-processing method, program and computer readable information recording medium
US20030152146A1 (en) 2001-12-17 2003-08-14 Microsoft Corporation Motion compensation loop with filtering
US20030185306A1 (en) 2002-04-01 2003-10-02 Macinnis Alexander G. Video decoding system supporting multiple standards
US20030202608A1 (en) * 2001-09-24 2003-10-30 Macinnis Alexander G. Method for deblocking field-frame video
US6646578B1 (en) 2002-11-22 2003-11-11 Ub Video Inc. Context adaptive variable length decoding system and method
US20030219074A1 (en) 2002-01-31 2003-11-27 Samsung Electronics Co., Ltd. Filtering method for removing block artifacts and/or ringing noise and apparatus therefor
US6665346B1 (en) 1998-08-01 2003-12-16 Samsung Electronics Co., Ltd. Loop-filtering method for image data and apparatus therefor
US20030235248A1 (en) 2002-06-21 2003-12-25 Changick Kim Hybrid technique for reducing blocking and ringing artifacts in low-bit-rate coding
US20030235250A1 (en) * 2002-06-24 2003-12-25 Ankur Varma Video deblocking
US20040005096A1 (en) 1995-10-26 2004-01-08 Jae-Kyoon Kim Apparatus and method of encoding/decoding a coded block pattern
US6704718B2 (en) 2001-06-05 2004-03-09 Microsoft Corporation System and method for trainable nonlinear prediction of transform coefficients in data compression
US20040057517A1 (en) 2002-09-25 2004-03-25 Aaron Wells Content adaptive video processor using motion compensation
US20040062310A1 (en) * 2002-01-17 2004-04-01 Zhong Xue Coding distortion removal method, video encoding method, video decoding method, and apparatus and program for the same
US20040062309A1 (en) 2000-05-10 2004-04-01 Alexander Romanowski Method for transformation-coding full motion image sequences
US6724944B1 (en) 1997-03-13 2004-04-20 Nokia Mobile Phones, Ltd. Adaptive filter
US20040076338A1 (en) 2002-10-21 2004-04-22 Sharp Laboratories Of America, Inc. JPEG artifact removal
US6741752B1 (en) 1999-04-16 2004-05-25 Samsung Electronics Co., Ltd. Method of removing block boundary noise components in block-coded images
US20040101059A1 (en) * 2002-11-21 2004-05-27 Anthony Joch Low-complexity deblocking filter
US6748113B1 (en) 1999-08-25 2004-06-08 Matsushita Electric Industrial Co., Ltd. Noise detecting method, noise detector and image decoding apparatus
US20040120597A1 (en) 2001-06-12 2004-06-24 Le Dinh Chon Tam Apparatus and method for adaptive spatial segmentation-based noise reducing for encoded image signal
US6766063B2 (en) 2001-02-02 2004-07-20 Avid Technology, Inc. Generation adaptive filtering for subsampling component video as input to a nonlinear editing system
US20040141557A1 (en) 2003-01-16 2004-07-22 Samsung Electronics Co. Ltd. Methods and apparatus for removing blocking artifacts of MPEG signals in real-time video reception
US6768774B1 (en) 1998-11-09 2004-07-27 Broadcom Corporation Video and graphics system with video scaling
US6795584B2 (en) 2002-10-03 2004-09-21 Nokia Corporation Context-based adaptive variable length coding for adaptive block transforms
US20040208392A1 (en) 2003-03-17 2004-10-21 Raveendran Vijayalakshmi R. Method and apparatus for improving video quality of low bit-rate video
US20040252768A1 (en) 2003-06-10 2004-12-16 Yoshinori Suzuki Computing apparatus and encoding program
US20050008251A1 (en) 2003-05-17 2005-01-13 Stmicroelectronics Asia Pacific Pte Ltd. Edge enhancement process and system
US20050013494A1 (en) * 2003-07-18 2005-01-20 Microsoft Corporation In-loop deblocking filter
US20050025246A1 (en) 2003-07-18 2005-02-03 Microsoft Corporation Decoding jointly coded transform type and subblock pattern information
US20050036759A1 (en) 1998-11-30 2005-02-17 Microsoft Corporation Efficient motion vector coding for video compression
US20050105889A1 (en) * 2002-03-22 2005-05-19 Conklin Gregory J. Video picture compression artifacts reduction via filtering and dithering
US20050117651A1 (en) * 2001-11-27 2005-06-02 Limin Wang Picture level adaptive frame/field coding for digital video content
US20050135484A1 (en) 2003-12-18 2005-06-23 Daeyang Foundation (Sejong University) Method of encoding mode determination, method of motion estimation and encoding apparatus
US20050196063A1 (en) 2004-01-14 2005-09-08 Samsung Electronics Co., Ltd. Loop filtering method and apparatus
US20050207492A1 (en) 2004-03-18 2005-09-22 Sony Corporation And Sony Electronics Inc. Methods and apparatus to reduce blocking noise and contouring effect in motion compensated compressed video
US20050237433A1 (en) 1999-07-30 2005-10-27 Roy Van Dijk System and method for motion compensation of image planes in color sequential displays
US20050243916A1 (en) 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20050243915A1 (en) 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20050243914A1 (en) 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20050243912A1 (en) 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20050243911A1 (en) 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20050244063A1 (en) 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20050243913A1 (en) 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20050276505A1 (en) 2004-05-06 2005-12-15 Qualcomm Incorporated Method and apparatus for image enhancement for low bit rate video compression
US7003035B2 (en) 2002-01-25 2006-02-21 Microsoft Corporation Video coding methods and apparatuses
US20060050783A1 (en) 2004-07-30 2006-03-09 Le Dinh Chon T Apparatus and method for adaptive 3D artifact reducing for encoded image signal
US20060072668A1 (en) 2004-10-06 2006-04-06 Microsoft Corporation Adaptive vertical macroblock alignment for mixed frame video sequences
US20060072669A1 (en) 2004-10-06 2006-04-06 Microsoft Corporation Efficient repeat padding for hybrid video sequence with arbitrary video resolution
US20060078052A1 (en) 2004-10-08 2006-04-13 Dang Philip P Method and apparatus for parallel processing of in-loop deblocking filter for H.264 video compression standard
US20060110062A1 (en) 2004-11-23 2006-05-25 Stmicroelectronics Asia Pacific Pte. Ltd. Edge adaptive filtering system for reducing artifacts and method
US20060126962A1 (en) 2001-03-26 2006-06-15 Sharp Laboratories Of America, Inc. Methods and systems for reducing blocking artifacts with reduced complexity for spatially-scalable video coding
US20060181740A1 (en) 2004-12-08 2006-08-17 Byung-Gyu Kim Block artifact phenomenon eliminating device and eliminating method thereof
US20060209962A1 (en) 2003-02-06 2006-09-21 Hyun-Sang Park Video encoding method and video encoder for improving performance
US20060215754A1 (en) 2005-03-24 2006-09-28 Intel Corporation Method and apparatus for performing video decoding in a multi-thread environment
US20060274959A1 (en) 2005-06-03 2006-12-07 Patrick Piastowski Image processing to reduce blocking artifacts
US20070237241A1 (en) 2006-04-06 2007-10-11 Samsung Electronics Co., Ltd. Estimation of block artifact strength based on edge statistics
US20070280552A1 (en) 2006-06-06 2007-12-06 Samsung Electronics Co., Ltd. Method and device for measuring MPEG noise strength of compressed digital image
US20070291141A1 (en) 2003-11-05 2007-12-20 Per Thorell Methods of processing digital image and/or video data including luminance filtering based on chrominance data and related systems and computer program products
US20070291858A1 (en) 2006-06-16 2007-12-20 Via Technologies, Inc. Systems and Methods of Video Compression Deblocking
US20080084932A1 (en) 2006-10-06 2008-04-10 Microsoft Corporation Controlling loop filtering for interlaced video frames
US20080159407A1 (en) 2006-12-28 2008-07-03 Yang Nick Y Mechanism for a parallel processing in-loop deblock filter
US20080187053A1 (en) 2007-02-06 2008-08-07 Microsoft Corporation Scalable multi-thread video decoding
US20080266398A1 (en) 2007-04-09 2008-10-30 Tektronix, Inc. Systems and methods for spatially isolated artifact dissection, classification and measurement
US20090003446A1 (en) 2007-06-30 2009-01-01 Microsoft Corporation Computing collocated macroblock information for direct mode macroblocks
US20090148062A1 (en) 2007-12-07 2009-06-11 Guy Gabso System and method for detecting edges in a video signal
US7616829B1 (en) 2003-10-29 2009-11-10 Apple Inc. Reducing undesirable block based image processing artifacts by DC image filtering
US20090327386A1 (en) 2008-06-25 2009-12-31 Joel Warren Schoenblum Combined deblocking and denoising filter
US20100033633A1 (en) 2006-12-28 2010-02-11 Gokce Dane Detecting block artifacts in coded images and video
US20100128803A1 (en) 2007-06-08 2010-05-27 Oscar Divorra Escoda Methods and apparatus for in-loop de-artifacting filtering based on multi-lattice sparsity-based filtering
US20100183068A1 (en) 2007-01-04 2010-07-22 Thomson Licensing Methods and apparatus for reducing coding artifacts for illumination compensation and/or color compensation in multi-view coded video
US20110200103A1 (en) 2008-10-23 2011-08-18 Sk Telecom. Co., Ltd. Video encoding/decoding apparatus, de-blocking filter and filtering method based on intra-prediction directions for same, and recording media
US20110200100A1 (en) 2008-10-27 2011-08-18 Sk Telecom. Co., Ltd. Motion picture encoding/decoding apparatus, adaptive deblocking filtering apparatus and filtering method for same, and recording medium
US20110222597A1 (en) 2008-11-25 2011-09-15 Thomson Licensing Method and apparatus for sparsity-based de-artifact filtering for video encoding and decoding
US20120082219A1 (en) 2010-10-05 2012-04-05 Microsoft Corporation Content adaptive deblocking during video encoding and decoding

Family Cites Families (486)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US519451A (en) * 1894-05-08 Paper box
JPS56128070A (en) * 1980-03-13 1981-10-07 Fuji Photo Film Co Ltd Band compressing equipment of variable density picture
US4420771A (en) 1981-02-09 1983-12-13 Bell Telephone Laboratories, Incorporated Technique for encoding multi-level signals
JPS60158786A (en) 1984-01-30 1985-08-20 Kokusai Denshin Denwa Co Ltd (KDD) Detection system of picture moving quantity
JPS61205086A (en) 1985-03-08 1986-09-11 Mitsubishi Electric Corp Picture encoding and decoding device
US4754492A (en) 1985-06-03 1988-06-28 Picturetel Corporation Method and system for adapting a digitized signal processing system for block processing with minimal blocking artifacts
US4661849A (en) 1985-06-03 1987-04-28 Pictel Corporation Method and apparatus for providing motion estimation signals for communicating image sequences
JPH0669145B2 (en) 1985-08-05 1994-08-31 Nippon Telegraph and Telephone Corporation Predictive coding
US4661853A (en) 1985-11-01 1987-04-28 Rca Corporation Interfield image motion detector for video signals
ATE108587T1 (en) * 1986-09-13 1994-07-15 Philips Nv METHOD AND CIRCUIT ARRANGEMENT FOR BIT RATE REDUCTION.
US4730348A (en) * 1986-09-19 1988-03-08 Adaptive Computer Technologies Adaptive data compression system
US4800432A (en) * 1986-10-24 1989-01-24 The Grass Valley Group, Inc. Video Difference key generator
US4698672A (en) * 1986-10-27 1987-10-06 Compression Labs, Inc. Coding system for reducing redundancy
US4706260A (en) * 1986-11-07 1987-11-10 Rca Corporation DPCM system with rate-of-fill control of buffer occupancy
DE3704777C1 (en) 1987-02-16 1988-04-07 Ant Nachrichtentech Method of transmitting and playing back television picture sequences
NL8700565A (en) * 1987-03-10 1988-10-03 Philips Nv TV SYSTEM IN WHICH TRANSFORMED CODING TRANSFERS DIGITIZED IMAGES FROM A CODING STATION TO A DECODING STATION.
DE3855114D1 (en) * 1987-05-06 1996-04-25 Philips Patentverwaltung System for the transmission of video images
DE3854171T2 (en) 1987-06-09 1995-12-21 Sony Corp Evaluation of motion vectors in television pictures.
DE3854337T2 (en) 1987-06-09 1996-02-01 Sony Corp Motion compensated interpolation of digital television pictures.
US4968135A (en) 1987-08-17 1990-11-06 Digital Equipment Corporation System for producing pixel image data from CCITT encoded pixel data
JP2577745B2 (en) 1987-08-19 1997-02-05 Mitsubishi Electric Corporation Receiver
US4792981A (en) * 1987-09-21 1988-12-20 Am International, Inc. Manipulation of run-length encoded images
US4813056A (en) * 1987-12-08 1989-03-14 General Electric Company Modified statistical coding of digital signals
EP0339589A3 (en) 1988-04-28 1992-01-02 Sharp Kabushiki Kaisha Orthogonal transform coding system for image data
DE68925011T2 (en) 1988-09-16 1996-06-27 Philips Electronics Nv High definition television system.
FR2648254B2 (en) * 1988-09-23 1991-08-30 Thomson Csf METHOD AND DEVICE FOR ESTIMATING MOTION IN A SEQUENCE OF MOVED IMAGES
US5043919A (en) * 1988-12-19 1991-08-27 International Business Machines Corporation Method of and system for updating a display unit
US4985768A (en) 1989-01-20 1991-01-15 Victor Company Of Japan, Ltd. Inter-frame predictive encoding system with encoded and transmitted prediction error
US5297236A (en) 1989-01-27 1994-03-22 Dolby Laboratories Licensing Corporation Low computational-complexity digital filter bank for encoder, decoder, and encoder/decoder
US5379351A (en) * 1992-02-19 1995-01-03 Integrated Information Technology, Inc. Video compression/decompression processing and processors
US4954892A (en) * 1989-02-14 1990-09-04 Mitsubishi Denki Kabushiki Kaisha Buffer controlled picture signal encoding and decoding system
DE3943881B4 (en) * 1989-04-17 2008-07-17 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Digital coding method
JPH07109990B2 (en) 1989-04-27 1995-11-22 Victor Company of Japan, Ltd. Adaptive interframe predictive coding method and decoding method
USRE35910E (en) 1989-05-11 1998-09-29 Matsushita Electric Industrial Co., Ltd. Moving image signal encoding apparatus and decoding apparatus
AU612543B2 (en) 1989-05-11 1991-07-11 Panasonic Corporation Moving image signal encoding apparatus and decoding apparatus
FR2646978B1 (en) * 1989-05-11 1991-08-23 France Etat METHOD AND INSTALLATION FOR ENCODING SOUND SIGNALS
JP2562499B2 (en) 1989-05-29 1996-12-11 Nippon Telegraph and Telephone Corporation High-efficiency image encoding device and its decoding device
US5179442A (en) * 1989-06-02 1993-01-12 North American Philips Corporation Method and apparatus for digitally processing a high definition television augmentation signal
US5128758A (en) * 1989-06-02 1992-07-07 North American Philips Corporation Method and apparatus for digitally processing a high definition television augmentation signal
JPH0832039B2 (en) * 1989-08-19 1996-03-27 Victor Company of Japan, Ltd. Variable length coding method and apparatus thereof
JPH03117991A (en) 1989-09-29 1991-05-20 Victor Co Of Japan Ltd Encoding and decoder device for movement vector
US5144426A (en) 1989-10-13 1992-09-01 Matsushita Electric Industrial Co., Ltd. Motion compensated prediction interframe coding system
EP0424026B1 (en) 1989-10-14 1997-07-23 Sony Corporation Video signal transmitting system and method
US5040217A (en) * 1989-10-18 1991-08-13 At&T Bell Laboratories Perceptual coding of audio signals
JP2787599B2 (en) * 1989-11-06 1998-08-20 Fujitsu Limited Image signal coding control method
NL9000424A (en) 1990-02-22 1991-09-16 Philips Nv TRANSFER SYSTEM FOR DIGITALIZED TELEVISION IMAGES.
US5270832A (en) 1990-03-14 1993-12-14 C-Cube Microsystems System for compression and decompression of video data using discrete cosine transform and coding techniques
JPH03265290A (en) 1990-03-14 1991-11-26 Toshiba Corp Television signal scanning line converter
US5103306A (en) 1990-03-28 1992-04-07 Transitions Research Corporation Digital image compression employing a resolution gradient
US5091782A (en) 1990-04-09 1992-02-25 General Instrument Corporation Apparatus and method for adaptively compressing successive blocks of digital video
US4999705A (en) 1990-05-03 1991-03-12 At&T Bell Laboratories Three dimensional motion compensated video coding
JP2969782B2 (en) 1990-05-09 1999-11-02 Sony Corporation Encoded data editing method and encoded data editing device
US5155594A (en) 1990-05-11 1992-10-13 Picturetel Corporation Hierarchical encoding method and apparatus employing background references for efficiently communicating image sequences
CA2043670C (en) 1990-06-05 2002-01-08 Wiebe De Haan Method of transmitting a picture sequence of a full-motion video scene, and a medium for said transmission
GB9012538D0 (en) * 1990-06-05 1990-07-25 Philips Nv Coding of video signals
US5068724A (en) 1990-06-15 1991-11-26 General Instrument Corporation Adaptive motion compensation for digital television
US5146324A (en) 1990-07-31 1992-09-08 Ampex Corporation Data compression using a feedforward quantization estimator
JP3037383B2 (en) 1990-09-03 2000-04-24 Canon Inc. Image processing system and method
KR950011200B1 (en) 1990-10-31 1995-09-29 Victor Company of Japan, Ltd. Compression method of interlace moving image signals
JPH04199981A (en) * 1990-11-29 1992-07-21 Nec Corp Prompt processing type one-dimensional coder
JP3303869B2 (en) 1990-11-30 2002-07-22 Hitachi, Ltd. Image encoding method, image encoding device, image decoding method
JP3191935B2 (en) 1990-11-30 2001-07-23 Hitachi, Ltd. Image encoding method, image encoding device, image decoding method
US5193004A (en) 1990-12-03 1993-03-09 The Trustees Of Columbia University In The City Of New York Systems and methods for coding even fields of interlaced video sequences
USRE35093E (en) 1990-12-03 1995-11-21 The Trustees Of Columbia University In The City Of New York Systems and methods for coding even fields of interlaced video sequences
US5266941A (en) * 1991-02-15 1993-11-30 Silicon Graphics, Inc. Apparatus and method for controlling storage of display information in a computer system
US5111292A (en) 1991-02-27 1992-05-05 General Electric Company Priority selection apparatus as for a video signal processor
JPH04297179A (en) 1991-03-15 1992-10-21 Mitsubishi Electric Corp Data communication system
JPH0630280A (en) * 1991-03-19 1994-02-04 Nec Eng Ltd Selective coding preprocessing system by blocks for binary image data
JP3119888B2 (en) 1991-04-18 2000-12-25 Matsushita Electric Industrial Co., Ltd. Signal processing method and recording / reproducing device
US5212549A (en) * 1991-04-29 1993-05-18 Rca Thomson Licensing Corporation Error concealment apparatus for a compressed video signal processing system
JPH04334188A (en) 1991-05-08 1992-11-20 Nec Corp Coding system for moving picture signal
EP0514663A3 (en) * 1991-05-24 1993-07-14 International Business Machines Corporation An apparatus and method for motion video encoding employing an adaptive quantizer
HU9300005D0 (en) * 1991-05-24 1993-04-28 British Broadcasting Corp Method for processing video picture
US5467136A (en) 1991-05-31 1995-11-14 Kabushiki Kaisha Toshiba Video decoder for determining a motion vector from a scaled vector and a difference vector
US5317397A (en) 1991-05-31 1994-05-31 Kabushiki Kaisha Toshiba Predictive coding using spatial-temporal filtering and plural motion vectors
JP2684941B2 (en) 1992-11-25 1997-12-03 Matsushita Electric Industrial Co., Ltd. Image encoding method and image encoding device
US5784107A (en) 1991-06-17 1998-07-21 Matsushita Electric Industrial Co., Ltd. Method and apparatus for picture coding and method and apparatus for picture decoding
JP2977104B2 (en) * 1991-07-26 1999-11-10 Sony Corporation Moving image data encoding method and apparatus, and moving image data decoding method and apparatus
US5539466A (en) * 1991-07-30 1996-07-23 Sony Corporation Efficient coding apparatus for picture signal and decoding apparatus therefor
JP2699703B2 (en) 1991-07-31 1998-01-19 Matsushita Electric Industrial Co., Ltd. Motion compensation prediction method and image signal encoding method using the same
US5428396A (en) 1991-08-03 1995-06-27 Sony Corporation Variable length coding/decoding method for motion vectors
JPH0541862A (en) 1991-08-03 1993-02-19 Sony Corp Variable length coding system for motion vector
JP3001688B2 (en) 1991-08-05 2000-01-24 Daiichi Shokai Co., Ltd. Pachinko ball circulation controller
US5291486A (en) * 1991-08-19 1994-03-01 Sony Corporation Data multiplexing apparatus and multiplexed data demultiplexing apparatus
EP0535746B1 (en) * 1991-09-30 1997-01-29 Philips Electronics Uk Limited Motion vector estimation, motion picture encoding and storage
JP2991833B2 (en) 1991-10-11 1999-12-20 Matsushita Electric Industrial Co., Ltd. Interlace scanning digital video signal encoding apparatus and method
JP2586260B2 (en) 1991-10-22 1997-02-26 Mitsubishi Electric Corporation Adaptive blocking image coding device
JP3134424B2 (en) 1991-10-31 2001-02-13 Sony Corporation Variable length encoding method and apparatus
JP2962012B2 (en) 1991-11-08 1999-10-12 Victor Company of Japan, Ltd. Video encoding device and decoding device therefor
JPH05137131A (en) 1991-11-13 1993-06-01 Sony Corp Inter-frame motion predicting method
US5227878A (en) 1991-11-15 1993-07-13 At&T Bell Laboratories Adaptive coding and decoding of frames and fields of video
JP2549479B2 (en) 1991-12-06 1996-10-30 Nippon Telegraph and Telephone Corporation Motion compensation inter-frame band division coding processing method
DE69228983T2 (en) * 1991-12-18 1999-10-28 Koninklijke Philips Electronics N.V., Eindhoven System for transmitting and / or storing signals from textured images
US5510840A (en) * 1991-12-27 1996-04-23 Sony Corporation Methods and devices for encoding and decoding frame signals and recording medium therefor
JP2524044B2 (en) 1992-01-22 1996-08-14 Matsushita Electric Industrial Co., Ltd. Image coding method and image coding apparatus
US5745789A (en) * 1992-01-23 1998-04-28 Hitachi, Ltd. Disc system for holding data in a form of a plurality of data blocks dispersed in a plurality of disc units connected by a common data bus
US5594813A (en) 1992-02-19 1997-01-14 Integrated Information Technology, Inc. Programmable architecture and methods for motion estimation
US6441842B1 (en) 1992-02-19 2002-08-27 8×8, Inc. Video compression/decompression processing and processors
JP2882161B2 (en) 1992-02-20 1999-04-12 Matsushita Electric Industrial Co., Ltd. Video signal recording / reproducing device, video signal transmitting device, video signal encoding device, and video signal reproducing device
US5227788A (en) * 1992-03-02 1993-07-13 At&T Bell Laboratories Method and apparatus for two-component signal compression
US5293229A (en) 1992-03-27 1994-03-08 Matsushita Electric Corporation Of America Apparatus and method for processing groups of fields in a video data compression system
US5287420A (en) 1992-04-08 1994-02-15 Supermac Technology Method for image compression on a personal computer
KR0148130B1 (en) 1992-05-18 1998-09-15 Kang Jin-gu Apparatus and method for encoding/decoding to restrain blocking artifacts
KR0166716B1 (en) 1992-06-18 1999-03-20 Kang Jin-gu Encoding and decoding method and apparatus by using block DPCM
JP3443867B2 (en) 1992-06-26 2003-09-08 Sony Corporation Image signal encoding / decoding method and image signal recording medium
JP2899478B2 (en) 1992-06-25 1999-06-02 Matsushita Electric Industrial Co., Ltd. Image encoding method and image encoding device
US6160849A (en) 1992-06-29 2000-12-12 Sony Corporation Selectable field and frame based predictive video coding
TW241416B (en) * 1992-06-29 1995-02-21 Sony Co Ltd
US6226327B1 (en) 1992-06-29 2001-05-01 Sony Corporation Video coding method and apparatus which select between frame-based and field-based predictive modes
JPH0621830A (en) * 1992-06-30 1994-01-28 Sony Corp Two-dimension Huffman coding method
JP3201079B2 (en) 1992-07-03 2001-08-20 KDDI Corporation Motion compensated prediction method, coding method and apparatus for interlaced video signal
US5412435A (en) 1992-07-03 1995-05-02 Kokusai Denshin Denwa Kabushiki Kaisha Interlaced video signal motion compensation prediction system
KR950010913B1 (en) * 1992-07-23 1995-09-25 Samsung Electronics Co., Ltd. VLC & VLD system
JPH06153180A (en) 1992-09-16 1994-05-31 Fujitsu Ltd Picture data coding method and device
US5461420A (en) 1992-09-18 1995-10-24 Sony Corporation Apparatus for coding and decoding a digital video signal derived from a motion picture film source
JP3348310B2 (en) * 1992-09-28 2002-11-20 Sony Corporation Moving picture coding method and moving picture coding apparatus
JPH06113287A (en) 1992-09-30 1994-04-22 Matsushita Electric Ind Co Ltd Picture coder and picture decoder
CA2107727C (en) 1992-10-07 1999-06-01 Hiroaki Ueda Synchronous compression and reconstruction system
US5982437A (en) * 1992-10-26 1999-11-09 Sony Corporation Coding method and system, and decoding method and system
JP2959916B2 (en) * 1992-10-28 1999-10-06 Matsushita Electric Industrial Co., Ltd. Versatile escape run level coder for digital video coder
US5365552A (en) * 1992-11-16 1994-11-15 Intel Corporation Buffer fullness indicator
KR0166722B1 (en) * 1992-11-30 1999-03-20 Yun Jong-yong Encoding and decoding method and apparatus thereof
JP3358835B2 (en) 1992-12-14 2002-12-24 Sony Corporation Image coding method and apparatus
US5535305A (en) 1992-12-31 1996-07-09 Apple Computer, Inc. Sub-partitioned vector quantization of probability density functions
US5400075A (en) * 1993-01-13 1995-03-21 Thomson Consumer Electronics, Inc. Adaptive variable length encoder/decoder
US5491516A (en) 1993-01-14 1996-02-13 Rca Thomson Licensing Corporation Field elimination apparatus for a video compression/decompression system
TW224553B (en) * 1993-03-01 1994-06-01 Sony Co Ltd Method and apparatus for inverse discrete cosine transform and coding/decoding of moving picture
US5592228A (en) 1993-03-04 1997-01-07 Kabushiki Kaisha Toshiba Video encoder using global motion estimation and polygonal patch motion estimation
US5376968A (en) 1993-03-11 1994-12-27 General Instrument Corporation Adaptive compression of digital video data using different modes such as PCM and DPCM
WO1994022269A1 (en) 1993-03-24 1994-09-29 Sony Corporation Method and apparatus for coding/decoding motion vector, and method and apparatus for coding/decoding image signal
US5621481A (en) 1993-04-08 1997-04-15 Sony Corporation Motion vector detecting apparatus for determining interframe, predictive error as a function of interfield predictive errors
US5442400A (en) 1993-04-29 1995-08-15 Rca Thomson Licensing Corporation Error concealment apparatus for MPEG-like video data
DE69416717T2 (en) 1993-05-21 1999-10-07 Nippon Telegraph And Telephone Corp., Tokio/Tokyo Moving picture encoders and decoders
KR100458969B1 (en) * 1993-05-31 2005-04-06 Sony Corporation Signal encoding or decoding apparatus, and signal encoding or decoding method
JPH06343172A (en) 1993-06-01 1994-12-13 Matsushita Electric Ind Co Ltd Motion vector detection method and motion vector encoding method
US5448297A (en) 1993-06-16 1995-09-05 Intel Corporation Method and system for encoding images using skip blocks
JPH0730896A (en) 1993-06-25 1995-01-31 Matsushita Electric Ind Co Ltd Moving vector coding and decoding method
US5517327A (en) 1993-06-30 1996-05-14 Minolta Camera Kabushiki Kaisha Data processor for image data using orthogonal transformation
US5453799A (en) * 1993-11-05 1995-09-26 Comsat Corporation Unified motion estimation architecture
JP3050736B2 (en) 1993-12-13 2000-06-12 Sharp Corporation Video encoding device
US5465118A (en) 1993-12-17 1995-11-07 International Business Machines Corporation Luminance transition coding method for software motion video compression/decompression
EP0665688A3 (en) 1993-12-29 1995-11-02 Toshiba Kk Video data arranging method and video data encoding/decoding apparatus.
US5566208A (en) * 1994-03-17 1996-10-15 Philips Electronics North America Corp. Encoder buffer having an effective size which varies automatically with the channel bit-rate
EP0675652B1 (en) 1994-03-30 2009-05-13 Nxp B.V. Method and circuit for estimating motion between images of two interlaced fields, and digital signal coding devices comprising such a circuit
US5550541A (en) 1994-04-01 1996-08-27 Dolby Laboratories Licensing Corporation Compact source coding tables for encoder/decoder system
TW283289B (en) 1994-04-11 1996-08-11 Gen Instrument Corp
US5541852A (en) * 1994-04-14 1996-07-30 Motorola, Inc. Device, method and system for variable bit-rate packet video communications
US5650829A (en) 1994-04-21 1997-07-22 Sanyo Electric Co., Ltd. Motion video coding systems with motion vector detection
US5933451A (en) * 1994-04-22 1999-08-03 Thomson Consumer Electronics, Inc. Complexity determining apparatus
US5504591A (en) * 1994-04-25 1996-04-02 Microsoft Corporation System and method for compressing graphic images
US5457495A (en) * 1994-05-25 1995-10-10 At&T Ipm Corp. Adaptive video coder with dynamic bit allocation
US5767898A (en) 1994-06-23 1998-06-16 Sanyo Electric Co., Ltd. Three-dimensional image coding by merger of left and right images
US5796438A (en) * 1994-07-05 1998-08-18 Sony Corporation Methods and apparatus for interpolating picture information
US5594504A (en) 1994-07-06 1997-01-14 Lucent Technologies Inc. Predictive video coding using a motion vector updating routine
JP3237089B2 (en) * 1994-07-28 2001-12-10 Hitachi, Ltd. Acoustic signal encoding / decoding method
KR0126871B1 (en) 1994-07-30 1997-12-29 Shim Sang-chul HIGH SPEED BMA FOR BI-DIRECTIONAL MOVING VECTOR ESTIMATION
US5684538A (en) 1994-08-18 1997-11-04 Hitachi, Ltd. System and method for performing video coding/decoding using motion compensation
US6141446A (en) * 1994-09-21 2000-10-31 Ricoh Company, Ltd. Compression and decompression system with reversible wavelets and lossy reconstruction
US5568167A (en) 1994-09-23 1996-10-22 C-Cube Microsystems, Inc. System for providing antialiased video overlays
FR2725577B1 (en) * 1994-10-10 1996-11-29 Thomson Consumer Electronics CODING OR DECODING METHOD OF MOTION VECTORS AND CODING OR DECODING DEVICE USING THE SAME
US5550847A (en) 1994-10-11 1996-08-27 Motorola, Inc. Device and method of signal loss recovery for realtime and/or interactive communications
JP3474005B2 (en) * 1994-10-13 2003-12-08 Oki Electric Industry Co., Ltd. Video coding method and video decoding method
US5552832A (en) * 1994-10-26 1996-09-03 Intel Corporation Run-length encoding sequence for video signals
US5623311A (en) 1994-10-28 1997-04-22 Matsushita Electric Corporation Of America MPEG video decoder having a high bandwidth memory
BR9506449A (en) * 1994-11-04 1997-09-02 Philips Electronics Nv Apparatus for encoding a digital broadband information signal and for decoding an encoded digital signal and process for encoding a digital broadband information signal
KR0141875B1 (en) * 1994-11-30 1998-06-15 Bae Soon-hoon Run length decoder
KR100254402B1 (en) * 1994-12-19 2000-05-01 Jeon Ju-beom A method and a device for encoding picture signals by run-length coding
JP3371590B2 (en) 1994-12-28 2003-01-27 Sony Corporation High efficiency coding method and high efficiency decoding method
JP2951861B2 (en) 1994-12-28 1999-09-20 Sharp Corporation Image encoding device and image decoding device
MY113223A (en) * 1994-12-29 2001-12-31 Sony Corp Processing of redundant fields in a moving picture to achieve synchronized system operation
EP0720383B1 (en) 1994-12-30 2000-09-13 Daewoo Electronics Co., Ltd Method and apparatus for detecting motion vectors in a frame decimating video encoder
EP0721287A1 (en) 1995-01-09 1996-07-10 Daewoo Electronics Co., Ltd Method and apparatus for encoding a video signal
JP3674072B2 (en) 1995-02-16 2005-07-20 Fuji Xerox Co., Ltd. Facsimile communication method and facsimile apparatus
US5574449A (en) * 1995-02-24 1996-11-12 Intel Corporation Signal processing with hybrid variable-length and entropy encoding
DE69619002T2 (en) * 1995-03-10 2002-11-21 Kabushiki Kaisha Toshiba, Kawasaki Image coding/decoding device
US6104754A (en) * 1995-03-15 2000-08-15 Kabushiki Kaisha Toshiba Moving picture coding and/or decoding systems, and variable-length coding and/or decoding system
KR0171118B1 (en) 1995-03-20 1999-03-20 Bae Soon-hoon Apparatus for encoding video signal
KR0181027B1 (en) 1995-03-20 1999-05-01 Bae Soon-hoon An image processing system using pixel-by-pixel motion estimation
US5991451A (en) 1995-03-23 1999-11-23 Intel Corporation Variable-length encoding using code swapping
KR100209410B1 (en) * 1995-03-28 1999-07-15 Jeon Ju-beom Apparatus for encoding an image signal
US5884269A (en) 1995-04-17 1999-03-16 Merging Technologies Lossless compression/decompression of digital audio data
KR0181063B1 (en) * 1995-04-29 1999-05-01 Bae Soon-hoon Method and apparatus for forming grid in motion compensation technique using feature point
JP3803122B2 (en) 1995-05-02 2006-08-02 Matsushita Electric Industrial Co., Ltd. Image memory device and motion vector detection circuit
US5654771A (en) 1995-05-23 1997-08-05 The University Of Rochester Video compression system using a dense motion vector field and a triangular patch mesh overlay model
US5835149A (en) 1995-06-06 1998-11-10 Intel Corporation Bit allocation in a coded video sequence
GB2301971B (en) 1995-06-06 1999-10-06 Sony Uk Ltd Video compression
GB2301972B (en) 1995-06-06 1999-10-20 Sony Uk Ltd Video compression
US5731850A (en) 1995-06-07 1998-03-24 Maturi; Gregory V. Hybrid hierarchical/full-search MPEG encoder motion estimation
US5864711A (en) * 1995-07-05 1999-01-26 Microsoft Corporation System for determining more accurate translation between first and second translator, and providing translated data to second computer if first translator is more accurate
US6208761B1 (en) * 1995-07-11 2001-03-27 Telefonaktiebolaget Lm Ericsson (Publ) Video coding
US5687097A (en) 1995-07-13 1997-11-11 Zapex Technologies, Inc. Method and apparatus for efficiently determining a frame motion vector in a video encoder
US5668608A (en) * 1995-07-26 1997-09-16 Daewoo Electronics Co., Ltd. Motion vector estimation method and apparatus for use in an image signal encoding system
FR2737931B1 (en) 1995-08-17 1998-10-02 Siemens Ag METHOD FOR PROCESSING DECODED IMAGE BLOCKS OF A BLOCK-BASED IMAGE CODING METHOD
US5825830A (en) * 1995-08-17 1998-10-20 Kopf; David A. Method and apparatus for the compression of audio, video or other data
GB2305797B (en) * 1995-09-27 2000-03-01 Sony Uk Ltd Video data compression
US6307967B1 (en) * 1995-09-29 2001-10-23 Kabushiki Kaisha Toshiba Video coding and video decoding apparatus
US5883678A (en) 1995-09-29 1999-03-16 Kabushiki Kaisha Toshiba Video coding and video decoding apparatus for reducing an alpha-map signal at a controlled reduction ratio
US5819215A (en) 1995-10-13 1998-10-06 Dobson; Kurt Method and apparatus for wavelet based data compression having adaptive bit rate control for compression of digital audio or other sensory data
US5929940A (en) 1995-10-25 1999-07-27 U.S. Philips Corporation Method and device for estimating motion between images, system for encoding segmented images
US6192081B1 (en) * 1995-10-26 2001-02-20 Sarnoff Corporation Apparatus and method for selecting a coding mode in a block-based coding system
KR100211917B1 (en) 1995-10-26 1999-08-02 Kim Young-hwan Object shape information coding method
US6064776A (en) 1995-10-27 2000-05-16 Kabushiki Kaisha Toshiba Image processing apparatus
US5991463A (en) * 1995-11-08 1999-11-23 Genesis Microchip Inc. Source data interpolation method and apparatus
US5889891A (en) 1995-11-21 1999-03-30 Regents Of The University Of California Universal codebook vector quantization with constrained storage
US5956674A (en) 1995-12-01 1999-09-21 Digital Theater Systems, Inc. Multi-channel predictive subband audio coder using psychoacoustic adaptive bit allocation in frequency, time and over the multiple channels
US5850294A (en) 1995-12-18 1998-12-15 Lucent Technologies Inc. Method and apparatus for post-processing images
US5963673A (en) 1995-12-20 1999-10-05 Sanyo Electric Co., Ltd. Method and apparatus for adaptively selecting a coding mode for video encoding
JP2798035B2 (en) * 1996-01-17 1998-09-17 NEC Corporation Motion compensated inter-frame prediction method using adaptive motion vector interpolation
US5692063A (en) 1996-01-19 1997-11-25 Microsoft Corporation Method and system for unrestricted motion estimation for video
US5831559A (en) * 1996-01-24 1998-11-03 Intel Corporation Encoding/decoding video signals using multiple run-val mapping tables
US6957350B1 (en) * 1996-01-30 2005-10-18 Dolby Laboratories Licensing Corporation Encrypted and watermarked temporal and resolution layering in advanced television
JP3130464B2 (en) * 1996-02-02 2001-01-31 Rohm Co., Ltd. Data decryption device
DE69614500T2 (en) 1996-02-27 2001-11-22 Stmicroelectronics S.R.L., Agrate Brianza Memory reduction for the basic profile and the main level of an MPEG-2 decoder
US5682152A (en) * 1996-03-19 1997-10-28 Johnson-Grace Company Data compression using adaptive bit allocation and hybrid lossless entropy encoding
US5982438A (en) 1996-03-22 1999-11-09 Microsoft Corporation Overlapped motion compensation for object coding
US5764814A (en) 1996-03-22 1998-06-09 Microsoft Corporation Representation and encoding of general arbitrary shapes
JPH09261266A (en) 1996-03-26 1997-10-03 Matsushita Electric Ind Co Ltd Service information communication system
US5805739A (en) 1996-04-02 1998-09-08 Picturetel Corporation Lapped orthogonal vector quantization
US5847776A (en) 1996-06-24 1998-12-08 Vdonet Corporation Ltd. Method for entropy constrained motion estimation and coding of motion vectors with increased search range
JP3628810B2 (en) * 1996-06-28 2005-03-16 Mitsubishi Electric Corporation Image encoding device
DE19628293C1 (en) * 1996-07-12 1997-12-11 Fraunhofer Ges Forschung Encoding and decoding audio signals using intensity stereo and prediction
DE19628292B4 (en) 1996-07-12 2007-08-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method for coding and decoding stereo audio spectral values
US5828426A (en) 1996-08-20 1998-10-27 Samsung Electronics Co., Ltd. Apparatus for decoding variable length coded data of both MPEG-1 and MPEG-2 standards
EP0825778A3 (en) 1996-08-22 1998-06-10 Cirrus Logic, Inc. Method for motion estimation
JP2907146B2 (en) * 1996-09-11 1999-06-21 日本電気株式会社 Method and apparatus for searching for specific part of memory LSI
DE19637522A1 (en) 1996-09-13 1998-03-19 Bosch Gmbh Robert Process for reducing data in video signals
JP4049280B2 (en) * 1996-09-24 2008-02-20 株式会社ハイニックスセミコンダクター Grayscale shape information encoding / decoding apparatus and method
KR100303685B1 (en) * 1996-09-30 2001-09-24 송문섭 Image prediction encoding device and method thereof
US5952943A (en) * 1996-10-11 1999-09-14 Intel Corporation Encoding image data for decode rate control
JP4034380B2 (en) * 1996-10-31 2008-01-16 株式会社東芝 Image encoding / decoding method and apparatus
JPH10145779A (en) 1996-11-06 1998-05-29 Sony Corp Field detection device and method, image encoding device and method, and recording medium and its recording method
KR100318057B1 (en) * 1996-11-06 2001-12-24 모리시타 요이찌 Image decoding method
ATE371298T1 (en) 1996-11-07 2007-09-15 Koninkl Philips Electronics Nv TRANSMISSION OF A BIT STREAM SIGNAL
EP0876709B1 (en) 1996-11-11 2003-08-06 Koninklijke Philips Electronics N.V. Data compression/expansion using a rice encoder/decoder
US6130963A (en) 1996-11-22 2000-10-10 C-Cube Semiconductor Ii, Inc. Memory efficient decoding of video frame chroma
US5905542A (en) 1996-12-04 1999-05-18 C-Cube Microsystems, Inc. Simplified dual prime video motion estimation
KR100355324B1 (en) 1996-12-12 2002-11-18 마쯔시다덴기산교 가부시키가이샤 Picture encoder and picture decoder
US6377628B1 (en) * 1996-12-18 2002-04-23 Thomson Licensing S.A. System for maintaining datastream continuity in the presence of disrupted source data
US6167090A (en) 1996-12-26 2000-12-26 Nippon Steel Corporation Motion vector detecting apparatus
US6141053A (en) * 1997-01-03 2000-10-31 Saukkonen; Jukka I. Method of optimizing bandwidth for transmitting compressed video data streams
JP3484310B2 (en) 1997-01-17 2004-01-06 松下電器産業株式会社 Variable length encoder
EP0786907A3 (en) 1997-01-24 2001-06-13 Texas Instruments Incorporated Video encoder
NL1005084C2 (en) * 1997-01-24 1998-07-27 Oce Tech Bv A method for performing an image editing operation on run-length encoded bitmaps.
ES2162411T3 (en) * 1997-01-30 2001-12-16 Matsushita Electric Ind Co Ltd DIGITAL IMAGE FILLING PROCEDURE, IMAGE PROCESSING DEVICE AND DATA RECORDING MEDIA.
US6038536A (en) 1997-01-31 2000-03-14 Texas Instruments Incorporated Data compression using bit change statistics
US6272175B1 (en) 1997-02-13 2001-08-07 Conexant Systems, Inc. Video signal coding systems and processes using adaptive quantization
DE69838639T2 (en) * 1997-02-14 2008-08-28 Nippon Telegraph And Telephone Corp. PREDICTIVE CODING AND DECODING METHOD FOR DYNAMIC PICTURES
US6201927B1 (en) 1997-02-18 2001-03-13 Mary Lafuze Comer Trick play reproduction of MPEG encoded signals
US5974184A (en) * 1997-03-07 1999-10-26 General Instrument Corporation Intra-macroblock DC and AC coefficient prediction for interlaced digital video
US6005980A (en) 1997-03-07 1999-12-21 General Instrument Corporation Motion estimation and compensation of video object planes for interlaced digital video
US5991447A (en) * 1997-03-07 1999-11-23 General Instrument Corporation Prediction and coding of bi-directionally predicted video object planes for interlaced digital video
FI114248B (en) 1997-03-14 2004-09-15 Nokia Corp Method and apparatus for audio coding and audio decoding
US6728775B1 (en) * 1997-03-17 2004-04-27 Microsoft Corporation Multiple multicasting of multimedia streams
US6263065B1 (en) 1997-03-18 2001-07-17 At&T Corp. Method and apparatus for simulating central queue for distributing call in distributed arrangement of automatic call distributors
US6404813B1 (en) 1997-03-27 2002-06-11 At&T Corp. Bidirectionally predicted pictures or video object planes for efficient and flexible video coding
JP3217987B2 (en) 1997-03-31 2001-10-15 松下電器産業株式会社 Video signal decoding method and encoding method
CN1253652A (en) 1997-03-31 2000-05-17 松下电器产业株式会社 Dynamic image display method and device therefor
US5973755A (en) 1997-04-04 1999-10-26 Microsoft Corporation Video encoder and decoder using bilinear motion compensation and lapped orthogonal transforms
SG65064A1 (en) 1997-04-09 1999-05-25 Matsushita Electric Ind Co Ltd Image predictive decoding method image predictive decoding apparatus image predictive coding method image predictive coding apparatus and data storage media
US6259810B1 (en) * 1997-04-15 2001-07-10 Microsoft Corporation Method and system of decoding compressed image data
US5883633A (en) * 1997-04-15 1999-03-16 Microsoft Corporation Method and system of variable run length image encoding using sub-palette
US6441813B1 (en) * 1997-05-16 2002-08-27 Kabushiki Kaisha Toshiba Computer system, and video decoder used in the system
US6101195A (en) * 1997-05-28 2000-08-08 Sarnoff Corporation Timing correction method and apparatus
US6580834B2 (en) 1997-05-30 2003-06-17 Competitive Technologies Of Pa, Inc. Method and apparatus for encoding and decoding signals
JP2002507339A (en) 1997-05-30 2002-03-05 サーノフ コーポレイション Hierarchical motion estimation execution method and apparatus using nonlinear pyramid
JP3164031B2 (en) 1997-05-30 2001-05-08 日本ビクター株式会社 Moving image encoding / decoding device, moving image encoding / decoding method, and moving image encoded recording medium
AU8055798A (en) * 1997-06-05 1998-12-21 Wisconsin Alumni Research Foundation Image compression system using block transforms and tree-type coefficient truncation
US6057884A (en) 1997-06-05 2000-05-02 General Instrument Corporation Temporal and spatial scaleable coding for video object planes
ES2545066T3 (en) * 1997-06-09 2015-09-08 Hitachi, Ltd. Recording medium for image information
US6574371B2 (en) 1997-06-09 2003-06-03 Hitachi, Ltd. Image decoding method
SE512719C2 (en) * 1997-06-10 2000-05-02 Lars Gustaf Liljeryd A method and apparatus for reducing data flow based on harmonic bandwidth expansion
JPH1169345A (en) 1997-06-11 1999-03-09 Fujitsu Ltd Inter-frame predictive dynamic image encoding device and decoding device, inter-frame predictive dynamic image encoding method and decoding method
GB9712651D0 (en) 1997-06-18 1997-08-20 Nds Ltd Improvements in or relating to encoding digital signals
US6064771A (en) 1997-06-23 2000-05-16 Real-Time Geometry Corp. System and method for asynchronous, adaptive moving picture compression, and decompression
US6351563B1 (en) * 1997-07-09 2002-02-26 Hyundai Electronics Ind. Co., Ltd. Apparatus and method for coding/decoding scalable shape binary image using mode of lower and current layers
DE19730129C2 (en) * 1997-07-14 2002-03-07 Fraunhofer Ges Forschung Method for signaling noise substitution when encoding an audio signal
US6421738B1 (en) * 1997-07-15 2002-07-16 Microsoft Corporation Method and system for capturing and encoding full-screen video graphics
JP2897763B2 (en) 1997-07-28 1999-05-31 日本ビクター株式会社 Motion compensation coding device, decoding device, coding method and decoding method
KR100244291B1 (en) 1997-07-30 2000-02-01 구본준 Method for motion vector coding of moving picture
US6266091B1 (en) 1997-07-31 2001-07-24 Lsi Logic Corporation System and method for low delay mode operation video decoding
US6310918B1 (en) 1997-07-31 2001-10-30 Lsi Logic Corporation System and method for motion vector extraction and computation meeting 2-frame store and letterboxing requirements
FR2766946B1 (en) 1997-08-04 2000-08-11 Thomson Multimedia Sa PRETREATMENT METHOD AND DEVICE FOR MOTION ESTIMATION
KR100252342B1 (en) 1997-08-12 2000-04-15 전주범 Motion vector coding method and apparatus
US5859788A (en) 1997-08-15 1999-01-12 The Aerospace Corporation Modulated lapped transform method
DE69838869T2 (en) * 1997-10-03 2008-12-04 Sony Corp. Device and method for splicing coded data streams and device and method for generating coded data streams
US6493385B1 (en) * 1997-10-23 2002-12-10 Mitsubishi Denki Kabushiki Kaisha Image encoding method, image encoder, image decoding method, and image decoder
SG116400A1 (en) * 1997-10-24 2005-11-28 Matsushita Electric Ind Co Ltd A method for computational graceful degradation in an audiovisual compression system.
US6060997A (en) * 1997-10-27 2000-05-09 Motorola, Inc. Selective call device and method for providing a stream of information
US6148033A (en) 1997-11-20 2000-11-14 Hitachi America, Ltd. Methods and apparatus for improving picture quality in reduced resolution video decoders
JPH11161782A (en) * 1997-11-27 1999-06-18 Seiko Epson Corp Method and device for encoding color picture, and method and device for decoding color picture
CN1668111A (en) 1997-12-01 2005-09-14 三星电子株式会社 Motion vector prediction method
US6111914A (en) * 1997-12-01 2000-08-29 Conexant Systems, Inc. Adaptive entropy coding in adaptive quantization framework for video signal coding systems and processes
EP0921683B1 (en) * 1997-12-02 2010-09-08 Daewoo Electronics Corporation Method and apparatus for encoding mode signals for use in a binary shape coder
US5973743A (en) * 1997-12-02 1999-10-26 Daewoo Electronics Co., Ltd. Mode coding method and apparatus for use in an interlaced shape coder
KR100523908B1 (en) * 1997-12-12 2006-01-27 주식회사 팬택앤큐리텔 Apparatus and method for encoding video signal for progressive scan image
JP3740813B2 (en) 1997-12-12 2006-02-01 ソニー株式会社 Image encoding method and image encoding apparatus
US6198773B1 (en) * 1997-12-18 2001-03-06 Zoran Corporation Video memory management for MPEG video decode and display system
US6775840B1 (en) * 1997-12-19 2004-08-10 Cisco Technology, Inc. Method and apparatus for using a spectrum analyzer for locating ingress noise gaps
KR100252108B1 (en) * 1997-12-20 2000-04-15 윤종용 Apparatus and method for digital recording and reproducing using mpeg compression codec
US6339656B1 (en) 1997-12-25 2002-01-15 Matsushita Electric Industrial Co., Ltd. Moving picture encoding decoding processing apparatus
KR100301826B1 (en) * 1997-12-29 2001-10-27 구자홍 Video decoder
US6393156B1 (en) 1998-01-07 2002-05-21 Truong Q. Nguyen Enhanced transform compatibility for standardized data compression
US6122017A (en) * 1998-01-22 2000-09-19 Hewlett-Packard Company Method for providing motion-compensated multi-field enhancement of still images from video
JPH11275592A (en) 1998-01-22 1999-10-08 Victor Co Of Japan Ltd Moving image code stream converter and its method
CA2320177A1 (en) 1998-02-13 1999-08-19 Quvis, Inc. Apparatus and method for optimized compression of interlaced motion images
KR100328417B1 (en) * 1998-03-05 2002-03-16 마츠시타 덴끼 산교 가부시키가이샤 Image encoding/decoding apparatus, image encoding/decoding method, and data recording medium
US6226407B1 (en) * 1998-03-18 2001-05-01 Microsoft Corporation Method and apparatus for analyzing computer screens
EP0944245B1 (en) 1998-03-20 2001-07-25 SGS-THOMSON MICROELECTRONICS S.r.l. Hierarchical recursive motion estimator for video images encoder
US6054943A (en) * 1998-03-25 2000-04-25 Lawrence; John Clifton Multilevel digital information compression based on Lawrence algorithm
KR100281462B1 (en) * 1998-03-30 2001-02-01 전주범 Method for encoding motion vector of binary shape signals in interlaced shape coding technique
EP1075762A1 (en) * 1998-04-02 2001-02-14 Sarnoff Corporation Bursty data transmission of compressed video data
US6408029B1 (en) 1998-04-02 2002-06-18 Intel Corporation Method and apparatus for simplifying real-time data encoding
US6393061B1 (en) 1998-05-15 2002-05-21 Hughes Electronics Corporation Method for reducing blocking artifacts in digital images
US6115689A (en) 1998-05-27 2000-09-05 Microsoft Corporation Scalable audio coder and decoder
US6029126A (en) 1998-06-30 2000-02-22 Microsoft Corporation Scalable audio coder and decoder
US6073153A (en) 1998-06-03 2000-06-06 Microsoft Corporation Fast system and method for computing modulated lapped transforms
US6154762A (en) 1998-06-03 2000-11-28 Microsoft Corporation Fast system and method for computing modulated lapped transforms
JP3097665B2 (en) * 1998-06-19 2000-10-10 日本電気株式会社 Time-lapse recorder with anomaly detection function
JP2002518916A (en) * 1998-06-19 2002-06-25 イクエーター テクノロジーズ インコーポレイテッド Circuit and method for directly decoding an encoded format image having a first resolution into a decoded format image having a second resolution
JP3888597B2 (en) * 1998-06-24 2007-03-07 日本ビクター株式会社 Motion compensation coding apparatus and motion compensation coding / decoding method
JP3413720B2 (en) 1998-06-26 2003-06-09 ソニー株式会社 Image encoding method and apparatus, and image decoding method and apparatus
DE69934939T2 (en) * 1998-06-29 2007-10-18 Xerox Corp. Compression of boundaries between images
US6253165B1 (en) * 1998-06-30 2001-06-26 Microsoft Corporation System and method for modeling probability distribution functions of transform coefficients of encoded signal
US6519287B1 (en) 1998-07-13 2003-02-11 Motorola, Inc. Method and apparatus for encoding and decoding video signals by using storage and retrieval of motion vectors
US6275531B1 (en) 1998-07-23 2001-08-14 Optivision, Inc. Scalable video coding method and apparatus
JP4026238B2 (en) * 1998-07-23 2007-12-26 ソニー株式会社 Image decoding apparatus and image decoding method
US6704705B1 (en) * 1998-09-04 2004-03-09 Nortel Networks Limited Perceptual audio coding
DE19840835C2 (en) 1998-09-07 2003-01-09 Fraunhofer Ges Forschung Apparatus and method for entropy coding information words and apparatus and method for decoding entropy coded information words
US6219070B1 (en) * 1998-09-30 2001-04-17 Webtv Networks, Inc. System and method for adjusting pixel parameters by subpixel positioning
JP3723740B2 (en) 1998-10-06 2005-12-07 松下電器産業株式会社 Lossless compression coding method and apparatus, and lossless compression decoding method and apparatus
GB2343579A (en) 1998-11-07 2000-05-10 Ibm Hybrid-linear-bicubic interpolation method and apparatus
US6573905B1 (en) * 1999-11-09 2003-06-03 Broadcom Corporation Video and graphics system with parallel processing of graphics windows
US6081209A (en) * 1998-11-12 2000-06-27 Hewlett-Packard Company Search system for use in compression
US6629318B1 (en) 1998-11-18 2003-09-30 Koninklijke Philips Electronics N.V. Decoder buffer for streaming video receiver and method of operation
US6418166B1 (en) 1998-11-30 2002-07-09 Microsoft Corporation Motion estimation and block matching pattern
US6404931B1 (en) * 1998-12-14 2002-06-11 Microsoft Corporation Code book construction for variable to variable length entropy encoding
US6233226B1 (en) * 1998-12-14 2001-05-15 Verizon Laboratories Inc. System and method for analyzing and transmitting video over a switched network
US6300888B1 (en) * 1998-12-14 2001-10-09 Microsoft Corporation Entropy code mode switching for frequency-domain audio coding
US6377930B1 (en) * 1998-12-14 2002-04-23 Microsoft Corporation Variable to variable length entropy encoding
US6223162B1 (en) * 1998-12-14 2001-04-24 Microsoft Corporation Multi-level run length coding for frequency-domain audio coding
US6421464B1 (en) * 1998-12-16 2002-07-16 Fastvdo Llc Fast lapped image transforms using lifting steps
JP3580777B2 (en) 1998-12-28 2004-10-27 フラウンホーファー−ゲゼルシャフト・ツール・フェルデルング・デル・アンゲヴァンテン・フォルシュング・アインゲトラーゲネル・フェライン Method and apparatus for encoding or decoding an audio signal or bit stream
US6100825A (en) * 1998-12-31 2000-08-08 Microsoft Corporation Cluster-based data compression system and method
US6496608B1 (en) 1999-01-15 2002-12-17 Picsurf, Inc. Image data interpolation system and method
KR100420740B1 (en) 1999-02-05 2004-03-02 소니 가부시끼 가이샤 Encoding device, encoding method, decoding device, decoding method, coding system and coding method
US6259741B1 (en) * 1999-02-18 2001-07-10 General Instrument Corporation Method of architecture for converting MPEG-2 4:2:2-profile bitstreams into main-profile bitstreams
US6496795B1 (en) 1999-05-05 2002-12-17 Microsoft Corporation Modulated complex lapped transform for integrated signal enhancement and coding
US6487574B1 (en) 1999-02-26 2002-11-26 Microsoft Corp. System and method for producing modulated complex lapped transforms
US6499060B1 (en) 1999-03-12 2002-12-24 Microsoft Corporation Media coding for loss recovery with remotely predicted data units
JP3778721B2 (en) 1999-03-18 2006-05-24 富士通株式会社 Video coding method and apparatus
JP2000278692A (en) * 1999-03-25 2000-10-06 Victor Co Of Japan Ltd Compressed data processing method, processor and recording and reproducing system
US6477280B1 (en) * 1999-03-26 2002-11-05 Microsoft Corporation Lossless adaptive encoding of finite alphabet data
US6678419B1 (en) * 1999-03-26 2004-01-13 Microsoft Corporation Reordering wavelet coefficients for improved encoding
JP2000286865A (en) 1999-03-31 2000-10-13 Toshiba Corp Continuous media data transmission system
US6320593B1 (en) 1999-04-20 2001-11-20 Agilent Technologies, Inc. Method of fast bi-cubic interpolation of image information
JP2002543714A (en) 1999-04-30 2002-12-17 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Video encoding method with B-frame encoding mode
US6519005B2 (en) 1999-04-30 2003-02-11 Koninklijke Philips Electronics N.V. Method of concurrent multiple-mode motion estimation for digital video
US6370502B1 (en) 1999-05-27 2002-04-09 America Online, Inc. Method and system for reduction of quantization-induced block-discontinuities and general purpose audio codec
US6968008B1 (en) 1999-07-27 2005-11-22 Sharp Laboratories Of America, Inc. Methods for motion estimation with adaptive motion accuracy
US6735249B1 (en) * 1999-08-11 2004-05-11 Nokia Corporation Apparatus, and associated method, for forming a compressed motion vector field utilizing predictive motion coding
JP4283950B2 (en) 1999-10-06 2009-06-24 パナソニック株式会社 Network management system
US6771829B1 (en) * 1999-10-23 2004-08-03 Fastvdo Llc Method for local zerotree image coding
KR100636110B1 (en) 1999-10-29 2006-10-18 삼성전자주식회사 Terminal supporting signaling for MPEG-4 transceiving
CN1182726C (en) * 1999-10-29 2004-12-29 皇家菲利浦电子有限公司 Video encoding-method
GB9928022D0 (en) 1999-11-26 2000-01-26 British Telecomm Video coding and decoding
JP3694888B2 (en) * 1999-12-03 2005-09-14 ソニー株式会社 Decoding device and method, encoding device and method, information processing device and method, and recording medium
US6573915B1 (en) * 1999-12-08 2003-06-03 International Business Machines Corporation Efficient capture of computer screens
US6865229B1 (en) 1999-12-14 2005-03-08 Koninklijke Philips Electronics N.V. Method and apparatus for reducing the “blocky picture” effect in MPEG decoded images
US6493392B1 (en) * 1999-12-27 2002-12-10 Hyundai Electronics Industries Co., Ltd. Method for coding digital interlaced moving video
US6567781B1 (en) 1999-12-30 2003-05-20 Quikcat.Com, Inc. Method and apparatus for compressing audio data using a dynamical system having a multi-state dynamical rule set and associated transform basis function
GB9930788D0 (en) * 1999-12-30 2000-02-16 Koninkl Philips Electronics Nv Method and apparatus for converting data streams
US6499010B1 (en) 2000-01-04 2002-12-24 Agere Systems Inc. Perceptual audio coder bit allocation scheme providing improved perceptual quality consistency
JP2001218172A (en) * 2000-01-31 2001-08-10 Nec Corp Device and method for converting frame rate in moving picture decoder, its recording medium and integrated circuit device
KR100739281B1 (en) * 2000-02-21 2007-07-12 주식회사 팬택앤큐리텔 Motion estimation method and apparatus
JP4378824B2 (en) * 2000-02-22 2009-12-09 ソニー株式会社 Image processing apparatus and method
KR100619377B1 (en) 2000-02-22 2006-09-08 주식회사 팬택앤큐리텔 Motion estimation method and device
US6771828B1 (en) 2000-03-03 2004-08-03 Microsoft Corporation System and method for progressively transform coding digital data
TW526666B (en) * 2000-03-29 2003-04-01 Matsushita Electric Ind Co Ltd Reproducing method for compression coded data and device for the same
KR100796085B1 (en) * 2000-04-14 2008-01-21 소니 가부시끼 가이샤 Decoder, decoding method, and recorded medium
CN1322759C (en) * 2000-04-27 2007-06-20 三菱电机株式会社 Coding apparatus and coding method
WO2001091470A1 (en) * 2000-05-23 2001-11-29 Matsushita Electric Industrial Co., Ltd. Variable length encoding method and variable length encoder
JP3662171B2 (en) 2000-06-05 2005-06-22 三菱電機株式会社 Encoding apparatus and encoding method
US6449312B1 (en) 2000-06-08 2002-09-10 Motorola, Inc. Method of estimating motion in interlaced video
US6647061B1 (en) 2000-06-09 2003-11-11 General Instrument Corporation Video size conversion and transcoding from MPEG-2 to MPEG-4
US6542863B1 (en) 2000-06-14 2003-04-01 Intervideo, Inc. Fast codebook search method for MPEG audio encoding
JP3846771B2 (en) * 2000-06-26 2006-11-15 三菱電機株式会社 Decoder and playback device
US6614442B1 (en) * 2000-06-26 2003-09-02 S3 Graphics Co., Ltd. Macroblock tiling format for motion compensation
KR100353851B1 (en) 2000-07-07 2002-09-28 한국전자통신연구원 Water ring scan apparatus and method, video coding/decoding apparatus and method using that
AU2001273510A1 (en) 2000-07-17 2002-01-30 Trustees Of Boston University Generalized lapped biorthogonal transform embedded inverse discrete cosine transform and low bit rate video sequence coding artifact removal
WO2002009425A1 (en) * 2000-07-25 2002-01-31 Agilevision, L.L.C. Splicing compressed, local video segments into fixed time slots in a network feed
EP1320831A2 (en) 2000-09-12 2003-06-25 Koninklijke Philips Electronics N.V. Video coding method
EP1199812A1 (en) * 2000-10-20 2002-04-24 Telefonaktiebolaget Lm Ericsson Perceptually improved encoding of acoustic signals
US6735339B1 (en) 2000-10-27 2004-05-11 Dolby Laboratories Licensing Corporation Multi-stage encoding of signal components that are classified according to component value
US7454222B2 (en) * 2000-11-22 2008-11-18 Dragonwave, Inc. Apparatus and method for controlling wireless communication signals
KR100355831B1 (en) * 2000-12-06 2002-10-19 엘지전자 주식회사 Motion vector coding method based on 2-demension least bits prediction
US7227895B1 (en) * 2000-12-12 2007-06-05 Sony Corporation System and method for generating decoded digital video image data
US6757439B2 (en) * 2000-12-15 2004-06-29 International Business Machines Corporation JPEG packed block structure
US6765963B2 (en) * 2001-01-03 2004-07-20 Nokia Corporation Video decoder architecture and method for using same
US6920175B2 (en) * 2001-01-03 2005-07-19 Nokia Corporation Video coding architecture and methods for using same
US20020168066A1 (en) 2001-01-22 2002-11-14 Weiping Li Video encoding and decoding techniques and apparatus
CN1248509C (en) * 2001-02-13 2006-03-29 皇家菲利浦电子有限公司 Motion information coding and decoding method
US6778610B2 (en) 2001-03-02 2004-08-17 Redrock Semiconductor, Ltd. Simultaneous search for different resync-marker patterns to recover from corrupted MPEG-4 bitstreams
US20030012287A1 (en) * 2001-03-05 2003-01-16 Ioannis Katsavounidis Systems and methods for decoding of systematic forward error correction (FEC) codes of selected data in a video bitstream
US7675994B2 (en) * 2001-04-02 2010-03-09 Koninklijke Philips Electronics N.V. Packet identification mechanism at the transmitter and receiver for an enhanced ATSC 8-VSB system
WO2002089369A1 (en) * 2001-05-02 2002-11-07 Strix Systems, Inc. Method and system for indicating link quality among neighboring wireless base stations
US6859235B2 (en) * 2001-05-14 2005-02-22 Webtv Networks Inc. Adaptively deinterlacing video on a per pixel basis
JP4458714B2 (en) * 2001-06-20 2010-04-28 富士通マイクロエレクトロニクス株式会社 Image decoding apparatus, image decoding method, and program
US6593392B2 (en) * 2001-06-22 2003-07-15 Corning Incorporated Curable halogenated compositions
US6650784B2 (en) 2001-07-02 2003-11-18 Qualcomm, Incorporated Lossless intraframe encoding using Golomb-Rice
US20030033143A1 (en) * 2001-08-13 2003-02-13 Hagai Aronowitz Decreasing noise sensitivity in speech processing under adverse conditions
US6950469B2 (en) * 2001-09-17 2005-09-27 Nokia Corporation Method for sub-pixel value interpolation
US6968091B2 (en) * 2001-09-18 2005-11-22 Emc Corporation Insertion of noise for reduction in the number of bits for variable-length coding of (run, level) pairs
US7646816B2 (en) * 2001-09-19 2010-01-12 Microsoft Corporation Generalized reference decoder for image or video processing
JP3834495B2 (en) 2001-09-27 2006-10-18 株式会社東芝 Fine pattern inspection apparatus, CD-SEM apparatus management apparatus, fine pattern inspection method, CD-SEM apparatus management method, program, and computer-readable recording medium
EP1445956A4 (en) * 2001-11-16 2009-09-02 Ntt Docomo Inc Image encoding method, image decoding method, image encoder, image decoder, program, computer data signal and image transmission system
US20030095603A1 (en) 2001-11-16 2003-05-22 Koninklijke Philips Electronics N.V. Reduced-complexity video decoding using larger pixel-grid motion compensation
US6825847B1 (en) 2001-11-30 2004-11-30 Nvidia Corporation System and method for real-time compression of pixel colors
US7165028B2 (en) * 2001-12-12 2007-01-16 Texas Instruments Incorporated Method of speech recognition resistant to convolutive distortion and additive distortion
US6934677B2 (en) 2001-12-14 2005-08-23 Microsoft Corporation Quantization matrices based on critical band pattern information for digital audio wherein quantization bands differ from critical bands
WO2003053066A1 (en) * 2001-12-17 2003-06-26 Microsoft Corporation Skip macroblock coding
WO2003061284A1 (en) 2001-12-21 2003-07-24 Polycom, Inc. Motion wake identification and control mechanism
US6763068B2 (en) 2001-12-28 2004-07-13 Nokia Corporation Method and apparatus for selecting macroblock quantization parameters in a video encoder
SG152047A1 (en) 2002-01-18 2009-05-29 Toshiba Kk Video encoding method and apparatus and video decoding method and apparatus
US6690307B2 (en) * 2002-01-22 2004-02-10 Nokia Corporation Adaptive variable length coding of digital video
WO2003063498A1 (en) * 2002-01-22 2003-07-31 Koninklijke Philips Electronics N.V. Reducing bit rate of already compressed multimedia
US7236207B2 (en) * 2002-01-22 2007-06-26 Broadcom Corporation System and method of transmission and reception of progressive content with isolated fields for conversion to interlaced display
KR100846769B1 (en) * 2002-02-19 2008-07-16 삼성전자주식회사 Method for encoding motion image having fixed computational complexity and apparatus thereof
US6947886B2 (en) * 2002-02-21 2005-09-20 The Regents Of The University Of California Scalable compression of audio and other signals
EP1347649A1 (en) 2002-03-18 2003-09-24 Lg Electronics Inc. B picture mode determining method and apparatus in video coding system
US7099387B2 (en) * 2002-03-22 2006-08-29 RealNetworks, Inc. Context-adaptive VLC video transform coefficients encoding/decoding methods and apparatuses
US7155065B1 (en) 2002-03-27 2006-12-26 Microsoft Corporation System and method for progressively transforming and coding digital data
US7006699B2 (en) 2002-03-27 2006-02-28 Microsoft Corporation System and method for progressively transforming and coding digital data
US7034897B2 (en) * 2002-04-01 2006-04-25 Broadcom Corporation Method of operating a video decoding system
KR100931750B1 (en) 2002-04-19 2009-12-14 파나소닉 주식회사 Motion vector calculating method
TWI232682B (en) * 2002-04-26 2005-05-11 Ntt Docomo Inc Signal encoding method, signal decoding method, signal encoding device, signal decoding device, signal encoding program, and signal decoding program
US7277587B2 (en) 2002-04-26 2007-10-02 Sharp Laboratories Of America, Inc. System and method for lossless video coding
US20030202590A1 (en) * 2002-04-30 2003-10-30 Qunshan Gu Video encoding using direct mode predicted frames
US7242713B2 (en) 2002-05-02 2007-07-10 Microsoft Corporation 2-D transforms for image and video coding
US7010046B2 (en) * 2002-05-02 2006-03-07 Lsi Logic Corporation Method and/or architecture for implementing MPEG frame display using four frame stores
JP2004048711A (en) 2002-05-22 2004-02-12 Matsushita Electric Ind Co Ltd Method for coding and decoding moving picture and data recording medium
US7474668B2 (en) * 2002-06-04 2009-01-06 Alcatel-Lucent Usa Inc. Flexible multilevel output traffic control
US7302387B2 (en) 2002-06-04 2007-11-27 Texas Instruments Incorporated Modification of fixed codebook search in G.729 Annex E audio coding
US7016547B1 (en) 2002-06-28 2006-03-21 Microsoft Corporation Adaptive entropy encoding/decoding for screen capture content
US7136417B2 (en) * 2002-07-15 2006-11-14 Scientific-Atlanta, Inc. Chroma conversion optimization
US6728315B2 (en) 2002-07-24 2004-04-27 Apple Computer, Inc. Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding with reduced requirements for division operations
US7020200B2 (en) 2002-08-13 2006-03-28 Lsi Logic Corporation System and method for direct motion vector prediction in bi-predictive video frames and fields
US7072394B2 (en) * 2002-08-27 2006-07-04 National Chiao Tung University Architecture and method for fine granularity scalable video coding
US7424434B2 (en) * 2002-09-04 2008-09-09 Microsoft Corporation Unified lossy and lossless audio compression
US7328150B2 (en) * 2002-09-04 2008-02-05 Microsoft Corporation Innovations in pure lossless audio compression
US7433824B2 (en) * 2002-09-04 2008-10-07 Microsoft Corporation Entropy coding by adapting coding between level and run-length/level modes
US7502743B2 (en) * 2002-09-04 2009-03-10 Microsoft Corporation Multi-channel audio encoding and decoding with multi-channel transform selection
KR100506864B1 (en) 2002-10-04 2005-08-05 엘지전자 주식회사 Method of determining motion vector
US6729316B1 (en) * 2002-10-12 2004-05-04 Vortex Automotive Corporation Method and apparatus for treating crankcase emissions
US20040136457A1 (en) * 2002-10-23 2004-07-15 John Funnell Method and system for supercompression of compressed digital video
JP4093405B2 (en) * 2002-10-25 2008-06-04 株式会社リコー Image processing apparatus, program, and storage medium
JP3878591B2 (en) 2002-11-01 2007-02-07 松下電器産業株式会社 Video encoding method and video decoding method
US6957157B2 (en) * 2002-11-12 2005-10-18 Flow Metrix, Inc. Tracking vibrations in a pipeline network
US7050088B2 (en) * 2003-01-06 2006-05-23 Silicon Integrated Systems Corp. Method for 3:2 pull-down film source detection
US7167522B2 (en) 2003-02-27 2007-01-23 Texas Instruments Incorporated Video deblocking filter
US7380028B2 (en) * 2003-06-13 2008-05-27 Microsoft Corporation Robust delivery of video data
JP4207684B2 (en) 2003-06-27 2009-01-14 富士電機デバイステクノロジー株式会社 Magnetic recording medium manufacturing method and manufacturing apparatus
US7471726B2 (en) 2003-07-15 2008-12-30 Microsoft Corporation Spatial-domain lapped transform in digital media compression
US7426308B2 (en) * 2003-07-18 2008-09-16 Microsoft Corporation Intraframe and interframe interlace coding and decoding
US20050013498A1 (en) 2003-07-18 2005-01-20 Microsoft Corporation Coding of motion vector information
US7567617B2 (en) * 2003-09-07 2009-07-28 Microsoft Corporation Predicting motion vectors for fields of forward-predicted interlaced video frames
US7577200B2 (en) 2003-09-07 2009-08-18 Microsoft Corporation Extended range variable length coding/decoding of differential motion vector information
US8107531B2 (en) * 2003-09-07 2012-01-31 Microsoft Corporation Signaling and repeat padding for skip frames
US7616692B2 (en) * 2003-09-07 2009-11-10 Microsoft Corporation Hybrid motion vector prediction for interlaced forward-predicted fields
US7961786B2 (en) * 2003-09-07 2011-06-14 Microsoft Corporation Signaling field type information
US7620106B2 (en) * 2003-09-07 2009-11-17 Microsoft Corporation Joint coding and decoding of a reference field selection and differential motion vector information
US7317839B2 (en) 2003-09-07 2008-01-08 Microsoft Corporation Chroma motion vector derivation for interlaced forward-predicted fields
US7623574B2 (en) * 2003-09-07 2009-11-24 Microsoft Corporation Selecting between dominant and non-dominant motion vector predictor polarities
US7609762B2 (en) * 2003-09-07 2009-10-27 Microsoft Corporation Signaling for entry point frames with predicted first field
US8064520B2 (en) 2003-09-07 2011-11-22 Microsoft Corporation Advanced bi-directional predictive coding of interlaced video
US8345754B2 (en) * 2003-09-07 2013-01-01 Microsoft Corporation Signaling buffer fullness
US7295616B2 (en) 2003-11-17 2007-11-13 Eastman Kodak Company Method and system for video filtering with joint motion and noise estimation
US7283176B2 (en) * 2004-03-12 2007-10-16 Broadcom Corporation Method and system for detecting field ID
KR100586882B1 (en) 2004-04-13 2006-06-08 삼성전자주식회사 Method and Apparatus for supporting motion scalability
FR2872973A1 (en) * 2004-07-06 2006-01-13 Thomson Licensing Sa METHOD OR DEVICE FOR CODING A SEQUENCE OF SOURCE IMAGES
US8600217B2 (en) * 2004-07-14 2013-12-03 Arturo A. Rodriguez System and method for improving quality of displayed picture during trick modes
US20060143678A1 (en) * 2004-12-10 2006-06-29 Microsoft Corporation System and process for controlling the coding bit rate of streaming media data employing a linear quadratic control technique and leaky bucket model
US7305139B2 (en) 2004-12-17 2007-12-04 Microsoft Corporation Reversible 2-dimensional pre-/post-filtering for lapped biorthogonal transform
CN1293868C (en) 2004-12-29 2007-01-10 朱旭祥 Application of alpha cyclo-alanine in the process for preparing medicine to treat cerebrovascular and cardiovascular disease
US8190425B2 (en) * 2006-01-20 2012-05-29 Microsoft Corporation Complex cross-correlation parameters for multi-channel audio
US7831434B2 (en) * 2006-01-20 2010-11-09 Microsoft Corporation Complex-transform channel coding with extended-band frequency coding
JP3129986U (en) 2006-12-26 2007-03-08 ライオン株式会社 Plate cushioning material

Patent Citations (172)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4691329A (en) 1985-07-02 1987-09-01 Matsushita Electric Industrial Co., Ltd. Block encoder
US4796087A (en) 1986-05-29 1989-01-03 Jacques Guichard Process for coding by transformation for the transmission of picture signals
US5089889A (en) 1989-04-28 1992-02-18 Victor Company Of Japan, Ltd. Apparatus for inter-frame predictive encoding of video signal
US5117287A (en) 1990-03-02 1992-05-26 Kokusai Denshin Denwa Co., Ltd. Hybrid coding system for moving image
US5220616A (en) 1991-02-27 1993-06-15 Northern Telecom Limited Image processing
US5422676A (en) 1991-04-25 1995-06-06 Deutsche Thomson-Brandt Gmbh System for coding an image representative signal
US6160503A (en) 1992-02-19 2000-12-12 8×8, Inc. Deblocking filter for encoder/decoder arrangement and method with divergence reduction
US5367385A (en) 1992-05-07 1994-11-22 Picturetel Corporation Method and apparatus for processing block coded image data to reduce boundary artifacts between adjacent image blocks
US5467134A (en) 1992-12-22 1995-11-14 Microsoft Corporation Method and system for compressing video data
US5544286A (en) 1993-01-29 1996-08-06 Microsoft Corporation Digital video data compression technique
US5598483A (en) 1993-04-13 1997-01-28 C-Cube Microsystems, Inc. MPEG video decompression processor
US5477272A (en) 1993-07-22 1995-12-19 Gte Laboratories Incorporated Variable-block size multi-resolution motion estimation scheme for pyramid coding
US5719958A (en) 1993-11-30 1998-02-17 Polaroid Corporation System and method for image edge detection using discrete cosine transforms
US5793897A (en) 1993-12-16 1998-08-11 Samsung Electronics Co., Ltd. Adaptive variable-length coding and decoding methods for image data
US5473384A (en) 1993-12-16 1995-12-05 At&T Corp. Method of and system for enhancing distorted graphical information
US5905815A (en) 1994-09-09 1999-05-18 Intel Corporation Decoding encoded image signals encoded by further transforming transformed DC signals
US5757982A (en) 1994-10-18 1998-05-26 Hewlett-Packard Company Quadrantal scaling of dot matrix data
US5590064A (en) 1994-10-26 1996-12-31 Intel Corporation Post-filtering for decoded video signals
US5874995A (en) 1994-10-28 1999-02-23 Matsushita Electric Corporation Of America MPEG video decoder having a high bandwidth memory for use in decoding interlaced and progressive signals
US5737455A (en) 1994-12-12 1998-04-07 Xerox Corporation Antialiasing with grey masking techniques
US5937095A (en) 1995-01-31 1999-08-10 Matsushita Electric Industrial Co., Ltd. Method for encoding and decoding moving picture signals
US5982459A (en) 1995-05-31 1999-11-09 8×8, Inc. Integrated multimedia communications processor and codec
US5825929A (en) 1995-10-05 1998-10-20 Microsoft Corporation Transformation block optimization method
US5970173A (en) 1995-10-05 1999-10-19 Microsoft Corporation Image compression and affine transformation for image motion compensation
US20050254583A1 (en) 1995-10-26 2005-11-17 Jae-Kyoon Kim Apparatus and method of encoding/decoding a coded block pattern
US20040005096A1 (en) 1995-10-26 2004-01-08 Jae-Kyoon Kim Apparatus and method of encoding/decoding a coded block pattern
US5787203A (en) 1996-01-19 1998-07-28 Microsoft Corporation Method and system for filtering compressed video images
US5799113A (en) 1996-01-19 1998-08-25 Microsoft Corporation Method for expanding contracted video images
US5737019A (en) 1996-01-29 1998-04-07 Matsushita Electric Corporation Of America Method and apparatus for changing resolution by direct DCT mapping
US6215910B1 (en) 1996-03-28 2001-04-10 Microsoft Corporation Table-based compression with embedded coding
EP1085763B1 (en) 1996-05-28 2003-01-22 Matsushita Electric Industrial Co., Ltd. Image predictive coding apparatus and method.
US6249610B1 (en) 1996-06-19 2001-06-19 Matsushita Electric Industrial Co., Ltd. Apparatus and method for coding a picture and apparatus and method for decoding a picture
US5771318A (en) 1996-06-27 1998-06-23 Siemens Corporate Research, Inc. Adaptive edge-preserving smoothing filter
US20020110284A1 (en) 1996-07-02 2002-08-15 Ke-Chiang Chu System and method using edge processing to remove blocking artifacts from decompressed images
US5796875A (en) 1996-08-13 1998-08-18 Sony Electronics, Inc. Selective de-blocking filter for DCT compressed images
US6233017B1 (en) 1996-09-16 2001-05-15 Microsoft Corporation Multimedia compression system with adaptive block sizes
US6337881B1 (en) 1996-09-16 2002-01-08 Microsoft Corporation Multimedia compression system with adaptive block sizes
US5835618A (en) 1996-09-27 1998-11-10 Siemens Corporate Research, Inc. Uniform and non-uniform dynamic range remapping for optimum image display
US5748789A (en) 1996-10-31 1998-05-05 Microsoft Corporation Transparent block skipping in object-based video coding systems
US6038256A (en) 1996-12-31 2000-03-14 C-Cube Microsystems Inc. Statistical multiplexed video encoding using pre-encoding a priori statistics and a priori and a posteriori statistics
US6188799B1 (en) 1997-02-07 2001-02-13 Matsushita Electric Industrial Co., Ltd. Method and apparatus for removing noise in still and moving pictures
US6167164A (en) 1997-03-10 2000-12-26 Samsung Electronics Co., Ltd. One-dimensional signal adaptive filter for reducing blocking effect and filtering method
US6724944B1 (en) 1997-03-13 2004-04-20 Nokia Mobile Phones, Ltd. Adaptive filter
US20040146210A1 (en) 1997-03-13 2004-07-29 Ossi Kalevo Adaptive filter
US5844613A (en) 1997-03-17 1998-12-01 Microsoft Corporation Global motion estimator for motion video signal encoding
US6571016B1 (en) 1997-05-05 2003-05-27 Microsoft Corporation Intra compression of pixel blocks using predicted mean
US6067322A (en) 1997-06-04 2000-05-23 Microsoft Corporation Half pixel motion estimation in motion video signal encoding
US6504873B1 (en) 1997-06-13 2003-01-07 Nokia Mobile Phones Ltd. Filtering based on activities inside the video blocks and at their boundary
US6028967A (en) 1997-07-30 2000-02-22 Lg Electronics Inc. Method of reducing a blocking artifact when coding moving picture
US6281942B1 (en) 1997-08-11 2001-08-28 Microsoft Corporation Spatial and temporal filtering mechanism for digital motion video signals
US6597860B2 (en) 1997-08-14 2003-07-22 Samsung Electronics Digital camcorder apparatus with MPEG-2 compatible video compression
US6240135B1 (en) 1997-09-09 2001-05-29 Lg Electronics Inc Method of removing blocking artifacts in a coding system of a moving picture
US6016365A (en) 1997-10-16 2000-01-18 Samsung Electro-Mechanics Co., Ltd. Decoder having adaptive function of eliminating block effect
US6178205B1 (en) 1997-12-12 2001-01-23 Vtel Corporation Video postfiltering with motion-compensated temporal filtering and/or spatial-adaptive filtering
US5946043A (en) 1997-12-31 1999-08-31 Microsoft Corporation Video coding using adaptive coding of block parameters for coded/uncoded blocks
US6501798B1 (en) 1998-01-22 2002-12-31 International Business Machines Corporation Device for generating multiple quality level bit-rates in a video encoder
US20020009146A1 (en) 1998-03-20 2002-01-24 Barbara A. Hall Adaptively encoding a picture of contrasted complexity having normal video and noisy video portions
US6285801B1 (en) 1998-05-29 2001-09-04 Stmicroelectronics, Inc. Non-linear adaptive image filter for filtering noise such as blocking artifacts
US6600839B2 (en) 1998-05-29 2003-07-29 Stmicroelectronics, Inc. Non-linear adaptive image filter for filtering noise such as blocking artifacts
US20020027954A1 (en) 1998-06-30 2002-03-07 Kenneth S. Singh Method and device for gathering block statistics during inverse quantization and iscan
US6320905B1 (en) 1998-07-08 2001-11-20 Stream Machine Company Postprocessing system for removing blocking artifacts in block-based codecs
US6665346B1 (en) 1998-08-01 2003-12-16 Samsung Electronics Co., Ltd. Loop-filtering method for image data and apparatus therefor
US6380985B1 (en) 1998-09-14 2002-04-30 Webtv Networks, Inc. Resizing and anti-flicker filtering in reduced-size video images
TW379509B (en) 1998-09-15 2000-01-11 Acer Inc Adaptive post-filtering of compressed video images to remove artifacts
US6466624B1 (en) 1998-10-28 2002-10-15 Pixonics, Llc Video decoder with bit stream based enhancements
US6768774B1 (en) 1998-11-09 2004-07-27 Broadcom Corporation Video and graphics system with video scaling
US20020097802A1 (en) 1998-11-30 2002-07-25 Chih-Lung (Bruce) Lin Coding techniques for coded block parameters of blocks of macroblocks
US6983018B1 (en) 1998-11-30 2006-01-03 Microsoft Corporation Efficient motion vector coding for video compression
US6236764B1 (en) 1998-11-30 2001-05-22 Equator Technologies, Inc. Image processing circuit and method for reducing a difference between pixel values across an image boundary
US6690838B2 (en) 1998-11-30 2004-02-10 Equator Technologies, Inc. Image processing circuit and method for reducing a difference between pixel values across an image boundary
US20050036759A1 (en) 1998-11-30 2005-02-17 Microsoft Corporation Efficient motion vector coding for video compression
US6529638B1 (en) 1999-02-01 2003-03-04 Sharp Laboratories Of America, Inc. Block boundary artifact reduction for block-based image compression
US20030103680A1 (en) 1999-02-01 2003-06-05 Westerman Larry Alan Block boundary artifact reduction for block-based image compression
US6473409B1 (en) 1999-02-26 2002-10-29 Microsoft Corp. Adaptive filtering system and method for adaptively canceling echoes and reducing noise in digital signals
US6741752B1 (en) 1999-04-16 2004-05-25 Samsung Electronics Co., Ltd. Method of removing block boundary noise components in block-coded images
US20050237433A1 (en) 1999-07-30 2005-10-27 Roy Van Dijk System and method for motion compensation of image planes in color sequential displays
US6748113B1 (en) 1999-08-25 2004-06-08 Matsushita Electric Industrial Co., Ltd. Noise detecting method, noise detector and image decoding apparatus
US20010017944A1 (en) 2000-01-20 2001-08-30 Nokia Mobile Phones Ltd. Method and associated device for filtering digital video images
US20020067369A1 (en) 2000-04-21 2002-06-06 Sullivan Gary J. Application program interface (API) facilitating decoder control of accelerator resources
US20040062309A1 (en) 2000-05-10 2004-04-01 Alexander Romanowski Method for transformation-coding full motion image sequences
GB2365647A (en) 2000-08-04 2002-02-20 Snell & Wilcox Ltd Deriving parameters for post-processing from an encoded signal
US6766063B2 (en) 2001-02-02 2004-07-20 Avid Technology, Inc. Generation adaptive filtering for subsampling component video as input to a nonlinear editing system
US20020150166A1 (en) 2001-03-02 2002-10-17 Johnson Andrew W. Edge adaptive texture discriminating filtering
US20020136303A1 (en) 2001-03-26 2002-09-26 Shijun Sun Method and apparatus for controlling loop filtering or post filtering in block based motion compensated video coding
EP1727373B1 (en) 2001-03-26 2009-12-23 Sharp Kabushiki Kaisha Method and apparatus for controlling loop filtering or post filtering in block based motion compensated video coding
US20060126962A1 (en) 2001-03-26 2006-06-15 Sharp Laboratories Of America, Inc. Methods and systems for reducing blocking artifacts with reduced complexity for spatially-scalable video coding
US20050175103A1 (en) 2001-03-26 2005-08-11 Sharp Laboratories Of America, Inc. Method and apparatus for controlling loop filtering or post filtering in block based motion compensated video coding
US20020146072A1 (en) 2001-03-26 2002-10-10 Shijun Sun Method and apparatus for controlling loop filtering or post filtering in block based motion compensated video coding
EP1246131B1 (en) 2001-03-26 2006-10-11 Sharp Kabushiki Kaisha Method and apparatus for the reduction of artifact in decompressed images using post-filtering
US6931063B2 (en) 2001-03-26 2005-08-16 Sharp Laboratories Of America, Inc. Method and apparatus for controlling loop filtering or post filtering in block based motion compensated video coding
US20020154227A1 (en) 2001-04-18 2002-10-24 Koninklijke Philips Electronics N.V. Dynamic complexity prediction and regulation of MPEG2 decoding in a media processor
US20020186890A1 (en) 2001-05-03 2002-12-12 Ming-Chieh Lee Dynamic filtering for lossy compression
US6704718B2 (en) 2001-06-05 2004-03-09 Microsoft Corporation System and method for trainable nonlinear prediction of transform coefficients in data compression
US20040120597A1 (en) 2001-06-12 2004-06-24 Le Dinh Chon Tam Apparatus and method for adaptive spatial segmentation-based noise reducing for encoded image signal
US20030053708A1 (en) 2001-07-02 2003-03-20 Jasc Software Removal of block encoding artifacts
US20030021489A1 (en) 2001-07-24 2003-01-30 Seiko Epson Corporation Image processor and image processing program, and image processing method
US7426315B2 (en) 2001-09-05 2008-09-16 Zoran Microelectronics Ltd. Method for reducing blocking artifacts
US20030044080A1 (en) 2001-09-05 2003-03-06 Emblaze Systems Ltd Method for reducing blocking artifacts
US20040190626A1 (en) 2001-09-14 2004-09-30 Shijun Sun Adaptive filtering based upon boundary strength
US20030053541A1 (en) 2001-09-14 2003-03-20 Shijun Sun Adaptive filtering based upon boundary strength
US20060268988A1 (en) 2001-09-14 2006-11-30 Shijun Sun Adaptive filtering based upon boundary strength
EP1562384B1 (en) 2001-09-14 2012-05-02 Sharp Kabushiki Kaisha Adaptive filtering based upon boundary strength
US20060171472A1 (en) 2001-09-14 2006-08-03 Shijun Sun Adaptive filtering based upon boundary strength
US20030053711A1 (en) 2001-09-20 2003-03-20 Changick Kim Reducing blocking and ringing artifacts in low-bit-rate coding
US6983079B2 (en) 2001-09-20 2006-01-03 Seiko Epson Corporation Reducing blocking and ringing artifacts in low-bit-rate coding
US20030202608A1 (en) * 2001-09-24 2003-10-30 Macinnis Alexander G. Method for deblocking field-frame video
US20030058944A1 (en) * 2001-09-24 2003-03-27 Macinnis Alexander G. Method and apparatus for performing deblocking filtering with interlace capability
US20030099292A1 (en) 2001-11-27 2003-05-29 Limin Wang Macroblock level adaptive frame/field coding for digital video content
US20050117651A1 (en) * 2001-11-27 2005-06-02 Limin Wang Picture level adaptive frame/field coding for digital video content
US20030152146A1 (en) 2001-12-17 2003-08-14 Microsoft Corporation Motion compensation loop with filtering
US20030156648A1 (en) 2001-12-17 2003-08-21 Microsoft Corporation Sub-block transform coding of prediction residuals
US20080049834A1 (en) 2001-12-17 2008-02-28 Microsoft Corporation Sub-block transform coding of prediction residuals
US7266149B2 (en) 2001-12-17 2007-09-04 Microsoft Corporation Sub-block transform coding of prediction residuals
US20030138154A1 (en) 2001-12-28 2003-07-24 Tooru Suino Image-processing apparatus, image-processing method, program and computer readable information recording medium
US20040062310A1 (en) * 2002-01-17 2004-04-01 Zhong Xue Coding distortion removal method, video encoding method, video decoding method, and apparatus and program for the same
US7003035B2 (en) 2002-01-25 2006-02-21 Microsoft Corporation Video coding methods and apparatuses
US20030219074A1 (en) 2002-01-31 2003-11-27 Samsung Electronics Co., Ltd. Filtering method for removing block artifacts and/or ringing noise and apparatus therefor
US20050105889A1 (en) * 2002-03-22 2005-05-19 Conklin Gregory J. Video picture compression artifacts reduction via filtering and dithering
US20030185306A1 (en) 2002-04-01 2003-10-02 Macinnis Alexander G. Video decoding system supporting multiple standards
US20030235248A1 (en) 2002-06-21 2003-12-25 Changick Kim Hybrid technique for reducing blocking and ringing artifacts in low-bit-rate coding
US20030235250A1 (en) * 2002-06-24 2003-12-25 Ankur Varma Video deblocking
US20040057517A1 (en) 2002-09-25 2004-03-25 Aaron Wells Content adaptive video processor using motion compensation
US6795584B2 (en) 2002-10-03 2004-09-21 Nokia Corporation Context-based adaptive variable length coding for adaptive block transforms
US20040076338A1 (en) 2002-10-21 2004-04-22 Sharp Laboratories Of America, Inc. JPEG artifact removal
US20040101059A1 (en) * 2002-11-21 2004-05-27 Anthony Joch Low-complexity deblocking filter
US6646578B1 (en) 2002-11-22 2003-11-11 Ub Video Inc. Context adaptive variable length decoding system and method
US20040141557A1 (en) 2003-01-16 2004-07-22 Samsung Electronics Co. Ltd. Methods and apparatus for removing blocking artifacts of MPEG signals in real-time video reception
US20060209962A1 (en) 2003-02-06 2006-09-21 Hyun-Sang Park Video encoding method and video encoder for improving performance
US20040208392A1 (en) 2003-03-17 2004-10-21 Raveendran Vijayalakshmi R. Method and apparatus for improving video quality of low bit-rate video
US20050008251A1 (en) 2003-05-17 2005-01-13 Stmicroelectronics Asia Pacific Pte Ltd. Edge enhancement process and system
US20040252768A1 (en) 2003-06-10 2004-12-16 Yoshinori Suzuki Computing apparatus and encoding program
US20050013494A1 (en) * 2003-07-18 2005-01-20 Microsoft Corporation In-loop deblocking filter
US20050025246A1 (en) 2003-07-18 2005-02-03 Microsoft Corporation Decoding jointly coded transform type and subblock pattern information
US7616829B1 (en) 2003-10-29 2009-11-10 Apple Inc. Reducing undesirable block based image processing artifacts by DC image filtering
US20070291141A1 (en) 2003-11-05 2007-12-20 Per Thorell Methods of processing digital image and/or video data including luminance filtering based on chrominance data and related systems and computer program products
US20050135484A1 (en) 2003-12-18 2005-06-23 Daeyang Foundation (Sejong University) Method of encoding mode determination, method of motion estimation and encoding apparatus
US20050196063A1 (en) 2004-01-14 2005-09-08 Samsung Electronics Co., Ltd. Loop filtering method and apparatus
US20050207492A1 (en) 2004-03-18 2005-09-22 Sony Corporation And Sony Electronics Inc. Methods and apparatus to reduce blocking noise and contouring effect in motion compensated compressed video
US20050243911A1 (en) 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20050243916A1 (en) 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20050244063A1 (en) 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20050243913A1 (en) 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20050243912A1 (en) 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20050243914A1 (en) 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20050243915A1 (en) 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20050276505A1 (en) 2004-05-06 2005-12-15 Qualcomm Incorporated Method and apparatus for image enhancement for low bit rate video compression
US7430336B2 (en) 2004-05-06 2008-09-30 Qualcomm Incorporated Method and apparatus for image enhancement for low bit rate video compression
US20060050783A1 (en) 2004-07-30 2006-03-09 Le Dinh Chon T Apparatus and method for adaptive 3D artifact reducing for encoded image signal
US20060072669A1 (en) 2004-10-06 2006-04-06 Microsoft Corporation Efficient repeat padding for hybrid video sequence with arbitrary video resolution
US20060072668A1 (en) 2004-10-06 2006-04-06 Microsoft Corporation Adaptive vertical macroblock alignment for mixed frame video sequences
US20060078052A1 (en) 2004-10-08 2006-04-13 Dang Philip P Method and apparatus for parallel processing of in-loop deblocking filter for H.264 video compression standard
US20060110062A1 (en) 2004-11-23 2006-05-25 Stmicroelectronics Asia Pacific Pte. Ltd. Edge adaptive filtering system for reducing artifacts and method
US20060181740A1 (en) 2004-12-08 2006-08-17 Byung-Gyu Kim Block artifact phenomenon eliminating device and eliminating method thereof
US20060215754A1 (en) 2005-03-24 2006-09-28 Intel Corporation Method and apparatus for performing video decoding in a multi-thread environment
US20060274959A1 (en) 2005-06-03 2006-12-07 Patrick Piastowski Image processing to reduce blocking artifacts
US20070237241A1 (en) 2006-04-06 2007-10-11 Samsung Electronics Co., Ltd. Estimation of block artifact strength based on edge statistics
US20070280552A1 (en) 2006-06-06 2007-12-06 Samsung Electronics Co., Ltd. Method and device for measuring MPEG noise strength of compressed digital image
US20070291858A1 (en) 2006-06-16 2007-12-20 Via Technologies, Inc. Systems and Methods of Video Compression Deblocking
US20080084932A1 (en) 2006-10-06 2008-04-10 Microsoft Corporation Controlling loop filtering for interlaced video frames
US20080159407A1 (en) 2006-12-28 2008-07-03 Yang Nick Y Mechanism for a parallel processing in-loop deblock filter
US20100033633A1 (en) 2006-12-28 2010-02-11 Gokce Dane Detecting block artifacts in coded images and video
US20100183068A1 (en) 2007-01-04 2010-07-22 Thomson Licensing Methods and apparatus for reducing coding artifacts for illumination compensation and/or color compensation in multi-view coded video
US20080187053A1 (en) 2007-02-06 2008-08-07 Microsoft Corporation Scalable multi-thread video decoding
US20080266398A1 (en) 2007-04-09 2008-10-30 Tektronix, Inc. Systems and methods for spatially isolated artifact dissection, classification and measurement
US20100128803A1 (en) 2007-06-08 2010-05-27 Oscar Divorra Escoda Methods and apparatus for in-loop de-artifacting filtering based on multi-lattice sparsity-based filtering
US20090003446A1 (en) 2007-06-30 2009-01-01 Microsoft Corporation Computing collocated macroblock information for direct mode macroblocks
US20090148062A1 (en) 2007-12-07 2009-06-11 Guy Gabso System and method for detecting edges in a video signal
US20090327386A1 (en) 2008-06-25 2009-12-31 Joel Warren Schoenblum Combined deblocking and denoising filter
US20110200103A1 (en) 2008-10-23 2011-08-18 Sk Telecom. Co., Ltd. Video encoding/decoding apparatus, de-blocking filter and filtering method based on intra-prediction directions for same, and recording media
US20110200100A1 (en) 2008-10-27 2011-08-18 Sk Telecom. Co., Ltd. Motion picture encoding/decoding apparatus, adaptive deblocking filtering apparatus and filtering method for same, and recording medium
US20110222597A1 (en) 2008-11-25 2011-09-15 Thomson Licensing Method and apparatus for sparsity-based de-artifact filtering for video encoding and decoding
US20120082219A1 (en) 2010-10-05 2012-04-05 Microsoft Corporation Content adaptive deblocking during video encoding and decoding

Non-Patent Citations (69)

* Cited by examiner, † Cited by third party
Title
Ati Avivo, "ATI AVIVO. Part 0: Introduction and Description of Video Technologies," 8 pp. (downloaded from the World Wide Web on Jun. 28, 2006).
Chen et al., "Adaptive post-filtering of transform coefficients for the reduction of blocking artifacts," IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, No. 5, pp. 594-602, 2001.
Chen et al., "Variable Block-size Image Coding by Resource Planning," Proc. Int'l Conf. on Image Science, Systems, and Technology, Las Vegas, 10 pp. (1997).
Cheung et al., "Video Coding on Multi-Core Graphics Processors," IEEE Signal Processing Magazine-Special Issue on Signal Processing on Platforms with Multiple Cores: Design and Applications, vol. 27, No. 2, pp. 79-89 (Mar. 2010).
Chien et al., "A High Throughput Deblocking Filter Design Supporting Multiple Video Coding Standards," ISCAS, pp. 2377-2380 (2009).
Choy et al., "Reduction of coding artifacts in transform image coding by using local statistics of transform coefficients," IEEE International Symposium on Circuits and Systems, pp. 1089-1092, 1997.
Citro et al., "A Multi-Standard Micro-Programmable Deblocking Filter Architecture and its Application to VC-1 Video Decoder," IEEE Int'l SOC Conf., pp. 225-228 (2008).
Citro et al., "Programmable Deblocking Filter Architecture for a VC-1 Video Decoder," IEEE Trans. on Circuits and Systems for Video Technology, vol. 19, pp. 1227-1233 (2009).
Elecard Ltd., "AVC/H.264 Decoder with DXVA Support," 2 pp., (downloaded from the World Wide Web on Aug. 27, 2006).
Fong et al., "Integer Lapped Transforms and Their Applications to Image Coding," IEEE Trans. Image Processing, vol. 11, No. 10, pp. 1152-1159 (Oct. 2002).
Hallapuro et al., "Performance Analysis of Low Bit Rate H.26L Video Encoder," Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 2, pp. 1129-1132 (May 2001).
Horn et al., "Bit allocation methods for closed-loop coding of oversampled pyramid decompositions," Proc. of IEEE International Conference on Image Processing, 4 pp. (1997).
Huang et al., "A Post Deblocking Filter for H.264 Video," IEEE Proc. Int'l Conf. on Computer Communications and Networks, pp. 1137-1142 (Aug. 2007).
ISO/IEC, "Information Technology-Coding of Audio-Visual Objects: Visual, ISO/IEC 14496-2, Committee Draft," 330 pp. (1998).
ISO/IEC, "ISO/IEC 11172-2: Information Technology-Coding of Moving Pictures and Associated Audio for Storage Media at up to About 1,5 Mbit/s," 122 pp. (1993).
ITU-T, "ITU-T Recommendation H.261: Video Codec for Audiovisual Services at p ×64 kbits," 28 pp. (1993).
ITU-T, "ITU-T Recommendation H.262: Information Technology-Generic Coding of Moving Pictures and Associated Audio Information: Video," 218 pp. (1995).
ITU-T, "ITU-T Recommendation H.263: Video Coding for Low Bit Rate Communication," 167 pp. (1998).
Jacobs et al., "Thread-Parallel MPEG-2, MPEG-4 and H.264 Video Encoders for SoC Multi-Processor Architectures," IEEE Trans. on Consumer Electronics, vol. 52, No. 1, pp. 269-275 (2006).
Jeong et al., "A directional deblocking filter based on intra prediction for H.264 AVC," IEICE Electronics Express, vol. 6, No. 12, pp. 864-869 (Jun. 2009).
Joch et al., "A Performance Analysis of the ITU-T Draft H.26L Video Coding Standard," http://pv2002.ece.cmu.edu/papers (current Aug. 2002).
Joint Collaborative Team on Video Coding (JCT-VC), "Description of video coding technology proposal by Microsoft," JCTVC-A118, 15 pp. (Apr. 2010).
Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, "Joint Committee Draft (CD), JVT-C167," 3rd Meeting: Fairfax, Virginia, USA, 142 pp. (May 2002).
Joint Video Team of ISO/IEC MPEG and ITU-T VCEG, "Final Joint Committee Draft of Joint Video Specification (ITU-T Recommendation H.264, ISO/IEC 14496-10 AVC)," 206 pp. (Aug. 2002).
Kaup, "Reduction of Ringing Noise in Transform Image Coding Using a Simple Adaptive Filter," Electronics Letters, vol. 34, No. 22, 8 pp. (Oct. 1998).
Kong et al., "Edge Map Guided Adaptive Post-Filter for Blocking and Ringing Artifacts Removal," Mitsubishi Electric Research Technical Report TR-2004-003, 6 pp. (Feb. 2004).
Kotropoulos et al., "Adaptive LMS L-filters for Noise Suppression in Images," IEEE Transactions on Image Processing, vol. 5, No. 12, pp. 1596-1609 (1996). [48 pp. as downloaded from the World Wide Web on Apr. 30, 2001].
Lee et al. "Analysis and Efficient Architecture Design for VC-1 Overlap Smoothing and In-Loop Deblocking Filter," IEEE Trans. on Circuits and Systems for Video Technology, vol. 18, pp. 1786-1796 (2008).
Lee et al., "Analysis and Integrated Architecture Design for Overlap Smooth and In-Loop Deblocking Filter in VC-1," ICIP, vol. 5, pp. 169-172 (2007).
Lee et al., "Blocking Effect Reduction of JPEG Images by Signal Adaptive Filtering," IEEE Trans. on Image Processing, vol. 7, pp. 229-234, Feb. 1998.
Lee et al., "Loop filtering and post-filtering for low-bit-rates moving picture coding," Signal Processing: Image Communication 16, pp. 871-890 (2001).
Lee et al., "Variable Block Size Techniques for Motion Sequence Coding," Proc. First Korea-Japan Joint Workshop on Multi-media Communications, 12 pp. (1994).
Linares et al., "JPEG Estimated Spectrum Adaptive Postfiltering Using Image-Adaptive Q-Tables and Canny Edge Detectors," Proc. ISCAS'96, Atlanta GA, May 1996.
List et al., "Adaptive Deblocking Filter," IEEE Trans. Circuits Syst. Video Technol., vol. 13, No. 7, pp. 614-619 (Jul. 2003).
Liu et al., "An In/Post-Loop Deblocking Filter with Hybrid Filtering Schedule," IEEE Trans. on Circuits and Systems for Video Technology, vol. 17, pp. 937-943 (2007).
Malvar, "A pre- and post-filtering technique for the reduction of blocking effects," in Proc. Picture Coding Symp., Stockholm, Sweden, Jun. 1987.
Malvar, "Biorthogonal and Nonuniform Lapped Transforms for Transform Coding with Reduced Blocking and Ringing Artifacts," IEEE Transactions on Signal Processing, vol. 46, No. 4, pp. 1043-1053, Apr. 1998.
Mehrotra et al., "Adaptive Coding Using Finite State Hierarchical Table Lookup Vector Quantization with Variable Block Sizes," 5 pp. (1996).
Meier et al., "Reduction of Blocking Artifacts in Image and Video Coding," IEEE Trans. on Circuits and Systems for Video Technology, vol. 9, No. 3, pp. 490-500, Apr. 1999.
Microsoft Corporation, "Microsoft Debuts New Windows Media Player 9 Series, Redefining Digital Media on the PC," 4 pp. (Sep. 4, 2002) [Downloaded from the World Wide Web on May 14, 2004].
Minami et al., "An optimization approach for removing blocking effects in transform coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 5, No. 2, pp. 74-82, 1995.
Mook, "Next-Gen Windows Media Player Leaks to the Web," BetaNews, 17 pp. (Jul. 19, 2002) [Downloaded from the World Wide Web on Aug. 8, 2003].
O'Rourke et al., "Improved Image Decompression for Reduced Transform Coding Artifacts," IEEE Trans. on Circuits and Systems for Video Technology, vol. 5, No. 6, (Dec. 1995).
Ostermann et al., "Video Coding with H.264/AVC: Tools, Performance, and Complexity," IEEE Circuits and Systems Magazine, pp. 7-28 (2004).
Panis et al., "A method for reducing block artifacts by interpolating block borders," available at http://www.cs.mcgill.ca/~gstamm/Siemens1/paper1.html.
Panis et al., "A method for reducing block artifacts by interpolating block borders," available at http://www.cs.mcgill.ca/˜gstamm/Siemens1/paper1.html.
Panis et al., "Reduction of block artifacts by selective removal and reconstruction of the block borders," Picture Coding Symposium 97, Berlin, Sep. 10-12, 1997.
Printouts of FTP directories from http://ftp3.itu.ch, 8 pp. (downloaded from the World Wide Web on Sep. 20, 2005).
Reader, "History of MPEG Video Compression-Ver. 4.0," 99 pp., document marked Dec. 16, 2003.
Ren et al., "Computationally Efficient Mode Selection in H.264/AVC Video Coding," IEEE Trans. on Consumer Electronics, vol. 54, No. 2, pp. 877-886 (May 2008).
Ribas-Corbera et al., "On the Optimal Block Size for Block-based Motion-Compensated Video Coders," SPIE Proc. of Visual Communications and Image Processing, vol. 3024, 12 pp. (1997).
Ribas-Corbera et al., "On the Optimal Motion Vector Accuracy for Block-based Motion-Compensated Video Coders," Proc. SPIE Digital Video Compression, San Jose, CA, 13 pp. (1996).
Richardson, H.264 and MPEG-4 Video Compression: Video Coding for Next-generation Multimedia, pp. 184-187 (2003).
SMPTE, "VC-1 Compressed Video Bitstream Format and Decoding Process," SMPTE 421M-2006, 493 pp. (2006).
Sullivan et al., "Microsoft DirectX VA: Video Acceleration API/DDI," DirectX® VA Version 1.01, 88 pp. (Jan. 23, 2001).
Sullivan et al., "The H.264/AVC Advanced Video Coding Standard: Overview and Introduction to the Fidelity Range Extensions," 21 pp. (Aug. 2004).
Sun et al., "Loop Filter with Skip Mode," Study Group 16, Video Coding Experts Group, 8 pp. (2001).
U.S. Appl. No. 60/341,674, filed Dec. 17, 2001, Lee et al.
U.S. Appl. No. 60/488,710, filed Jul. 18, 2003, Srinivasan et al.
Wang et al., "A Multi-Core Architecture Based Parallel Framework for H.264/AVC Deblocking Filters," J. Sign. Process. Syst., vol. 57, No. 2, 17 pp. (document marked "published online: Dec. 4, 2008").
Wang et al., "Interlace Coding Tools for H.26L Video Coding," ITU, VCEG-O37, 20 pp. (Nov. 2001).
Wang et al., "Objective Video Quality Assessment," Ch. 41 in The Handbook of Video Databases: Design and Applications, pp. 1041-1078 (Sep. 2003).
Wang, "H.264 Baseline Video Implementation on the CT3400 Multi-core DSP," Cradle Technologies, 15 pp.
Wiegand, "Joint Model No. 1, Revision 1 (JM1-rl)," JVT-A003r1, 80 pp. (document marked "Generated: Jan. 18, 2002").
Wien et al., "16 Bit Adaptive Block size Transforms," JVT-C107r1, 54 pp.
Wien, "H.26L Core Experiment on Adaptive Block Transforms," International Telecommunications Union, 2 pp. [Downloaded from the World Wide Web on Nov. 11, 2002].
Wien, "Variable Block-Size Transforms for Hybrid Video Coding," Dissertion, 182 pp. (Feb. 2004).
Wu et al., "Joint estimation of forward and backward motion vectors for interpolative prediction of video," IEEE Transactions on Image Processing, vol. 3, No. 5, pp. 684-687, Sep. 1994.
Zhang et al., "A new approach to reduce the "blocking effect" of transform coding," IEEE Transactions on Communications, vol. 41, No. 2, pp. 299-302, 1993.

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190075317A1 (en) * 2001-12-17 2019-03-07 Microsoft Technology Licensing, Llc Video coding / decoding with sub-block transform sizes and adaptive deblock filtering
US10931967B2 (en) * 2001-12-17 2021-02-23 Microsoft Technology Licensing, Llc Video coding/decoding with sub-block transform sizes and adaptive deblock filtering
US10567791B2 (en) * 2001-12-17 2020-02-18 Microsoft Technology Licensing, Llc Video coding / decoding with sub-block transform sizes and adaptive deblock filtering
US10531117B2 (en) 2001-12-17 2020-01-07 Microsoft Technology Licensing, Llc Sub-block transform coding of prediction residuals
US10390037B2 (en) * 2001-12-17 2019-08-20 Microsoft Technology Licensing, Llc Video coding/decoding with sub-block transform sizes and adaptive deblock filtering
US10958917B2 (en) 2003-07-18 2021-03-23 Microsoft Technology Licensing, Llc Decoding jointly coded transform type and subblock pattern information
US9894356B2 (en) 2010-01-14 2018-02-13 Samsung Electronics Co., Ltd. Method and apparatus for encoding video and method and apparatus for decoding video by considering skip and split order
US10110894B2 (en) 2010-01-14 2018-10-23 Samsung Electronics Co., Ltd. Method and apparatus for encoding video and method and apparatus for decoding video by considering skip and split order
US11128856B2 (en) 2010-01-14 2021-09-21 Samsung Electronics Co., Ltd. Method and apparatus for encoding video and method and apparatus for decoding video by considering skip and split order
US10582194B2 (en) 2010-01-14 2020-03-03 Samsung Electronics Co., Ltd. Method and apparatus for encoding video and method and apparatus for decoding video by considering skip and split order
US9225987B2 (en) 2010-01-14 2015-12-29 Samsung Electronics Co., Ltd. Method and apparatus for encoding video and method and apparatus for decoding video by considering skip and split order
US9253507B2 (en) 2010-09-30 2016-02-02 Samsung Electronics Co., Ltd. Method and device for interpolating images by using a smoothing interpolation filter
US9179167B2 (en) 2010-09-30 2015-11-03 Samsung Electronics Co., Ltd. Method and device for interpolating images by using a smoothing interpolation filter
US9118933B1 (en) 2010-09-30 2015-08-25 Samsung Electronics Co., Ltd. Method and device for interpolating images by using a smoothing interpolation filter
US9124902B2 (en) 2010-09-30 2015-09-01 Samsung Electronics Co., Ltd. Method and device for interpolating images by using a smoothing interpolation filter
US9277247B2 (en) 2010-09-30 2016-03-01 Samsung Electronics Co., Ltd. Method and device for interpolating images by using a smoothing interpolation filter
US20160112706A1 (en) * 2011-01-12 2016-04-21 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method, and image decoding method for generating a prediction image
US10931946B2 (en) 2011-01-12 2021-02-23 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method, and image decoding method for generating a prediction image
US9414073B2 (en) * 2011-01-12 2016-08-09 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method, and image decoding method for generating a prediction image
US10205944B2 (en) 2011-01-12 2019-02-12 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method, and image decoding method for generating a prediction image
US9560384B2 (en) 2011-10-17 2017-01-31 Kt Corporation Method and apparatus for encoding/decoding image
US9560385B2 (en) 2011-10-17 2017-01-31 Kt Corporation Method and apparatus for encoding/decoding image
US9661352B2 (en) 2011-10-17 2017-05-23 Kt Corporation Method and apparatus for encoding/decoding image
US9826251B2 (en) 2011-10-17 2017-11-21 Kt Corporation Method and apparatus for encoding/decoding image
US9661346B2 (en) 2011-10-17 2017-05-23 Kt Corporation Method and apparatus for encoding/decoding image
US9661354B2 (en) 2011-10-17 2017-05-23 Kt Corporation Method and apparatus for encoding/decoding image
US9210423B2 (en) * 2012-02-06 2015-12-08 Nokia Technologies Oy Method for coding and an apparatus
US10349052B2 (en) 2012-02-06 2019-07-09 Nokia Technologies Oy Method for coding and an apparatus
US20130202052A1 (en) * 2012-02-06 2013-08-08 Nokia Corporation Method for coding and an apparatus
US10397607B2 (en) 2013-11-01 2019-08-27 Qualcomm Incorporated Color residual prediction for video coding
WO2020150347A1 (en) * 2019-01-15 2020-07-23 Tencent America LLC Chroma deblock filters for intra picture block compensation
US11019359B2 (en) 2019-01-15 2021-05-25 Tencent America LLC Chroma deblock filters for intra picture block compensation

Also Published As

Publication number Publication date
CN100534164C (en) 2009-08-26
US7412102B2 (en) 2008-08-12
CN101001374B (en) 2011-08-10
CN1846437A (en) 2006-10-11
US20050083218A1 (en) 2005-04-21
US20050053294A1 (en) 2005-03-10
US7469011B2 (en) 2008-12-23
US20050053145A1 (en) 2005-03-10
CN100586183C (en) 2010-01-27
CN101155306A (en) 2008-04-02
EP1658726A4 (en) 2011-11-23
US20050053302A1 (en) 2005-03-10
EP1658726B1 (en) 2020-09-16
US20050053156A1 (en) 2005-03-10
US20050052294A1 (en) 2005-03-10
US7352905B2 (en) 2008-04-01
US20050084012A1 (en) 2005-04-21
CN1950832A (en) 2007-04-18
CN100456833C (en) 2009-01-28
CN101001374A (en) 2007-07-18
CN1965321A (en) 2007-05-16
US7724827B2 (en) 2010-05-25
EP1658726A2 (en) 2006-05-24
CN100407224C (en) 2008-07-30
EP2285113B1 (en) 2020-05-06
US20050053293A1 (en) 2005-03-10
US8116380B2 (en) 2012-02-14
US7924920B2 (en) 2011-04-12
US20050053151A1 (en) 2005-03-10
EP2285113A2 (en) 2011-02-16
EP2285113A3 (en) 2011-08-10
US7606311B2 (en) 2009-10-20
US7099515B2 (en) 2006-08-29

Similar Documents

Publication Publication Date Title
US8687709B2 (en) In-loop deblocking for interlaced video
US7092576B2 (en) Bitplane coding for macroblock field/frame coding type information
US8625669B2 (en) Predicting motion vectors for fields of forward-predicted interlaced video frames
US7426308B2 (en) Intraframe and interframe interlace coding and decoding
US7688894B2 (en) Scan patterns for interlaced video content
US7782954B2 (en) Scan patterns for progressive video content
US8107531B2 (en) Signaling and repeat padding for skip frames

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HSU, POHSIANG;LIN, CHIH-LUNG;SRINIVASAN, SRIDHAR;AND OTHERS;SIGNING DATES FROM 20041208 TO 20041213;REEL/FRAME:015467/0076

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HSU, POHSIANG;LIN, CHIH-LUNG;SRINIVASAN, SRIDHAR;AND OTHERS;REEL/FRAME:015467/0076;SIGNING DATES FROM 20041208 TO 20041213

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034541/0477

Effective date: 20141014

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8