WO2010016820A1 - Method and apparatus for banding artifact detection - Google Patents

Info

Publication number
WO2010016820A1
WO2010016820A1 (PCT/US2008/009525)
Authority
WO
WIPO (PCT)
Prior art keywords
area
artifact
banding artifact
banding
pixel
Prior art date
Application number
PCT/US2008/009525
Other languages
French (fr)
Inventor
Zhen Li
Adeel Abbas
Xiaoan Lu
Christina Gomila
Original Assignee
Thomson Licensing
Priority date
Filing date
Publication date
Application filed by Thomson Licensing filed Critical Thomson Licensing
Priority to JP2011522034A priority Critical patent/JP5276170B2/en
Priority to CN2008801307005A priority patent/CN102119401B/en
Priority to PCT/US2008/009525 priority patent/WO2010016820A1/en
Priority to US12/737,662 priority patent/US20110129020A1/en
Priority to EP08795142.2A priority patent/EP2311007B1/en
Priority to BRPI0822999A priority patent/BRPI0822999A2/en
Priority to KR1020117002878A priority patent/KR101441175B1/en
Publication of WO2010016820A1 publication Critical patent/WO2010016820A1/en

Classifications

    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • H04N19/115 Selection of the code volume for a coding unit prior to coding
    • H04N19/124 Quantisation
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/146 Data rate or code amount at the encoder output
    • H04N19/152 Data rate or code amount at the encoder output by measuring the fullness of the transmission buffer
    • H04N19/154 Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H04N19/17 Adaptive coding in which the coding unit is an image region, e.g. an object
    • H04N19/176 Adaptive coding in which the coding unit is an image region, the region being a block, e.g. a macroblock
    • H04N19/61 Transform coding in combination with predictive coding
    • H04N19/86 Pre-/post-processing involving reduction of coding artifacts, e.g. of blockiness
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20032 Median filtering

Definitions

  • Non-real time image and video processing applications, such as DVD authoring, aim at achieving the best possible visual quality from an image and video processor.
  • the processed images or video contents are reviewed to identify pictures with artifacts. This is often a manual and subjective process that requires substantial experience, time, and effort, affecting the production time and budget. It is also subject to inconsistency due to the different visual evaluation standards imposed by different evaluators.
  • detected pictures are post-processed or re-encoded with fine-tuned parameters and subject to further review.
  • the post-processing or re-encoding algorithms can tune their parameters based on the artifact strength and locations in order to get better picture quality.
  • the term banding artifact describes a particular type of visual artifact that appears as a visually continuous band or a false contour in an otherwise smooth transition area. It is generally the result of inadequate bit depth representation caused by bit depth conversion. It may also be introduced by other image/video processors, such as a video compressor. Banding artifacts are typically observed in animated content, but can also be observed in film content.
  • Bit depth describes the number of bits used to represent the color of a single pixel in a bitmapped image or video frame buffer. This concept is also known as color depth or bits per pixel (bpp), particularly when specified along with the number of bits used. Higher color depth gives a broader range of distinct colors.
  • Bit depth conversion, or color depth conversion, is the process of converting from one bit depth to another, such as from 64 bits to 8 bits per pixel.
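The effect described above can be illustrated with a short sketch (not part of the patent; the helper name and the 8-to-4-bit choice are illustrative assumptions): reducing a smooth 8-bit ramp to 4 bits collapses nearby shades into flat bands, which is the root cause of the banding artifact.

```python
import numpy as np

def reduce_bit_depth(channel, bits):
    """Quantize an 8-bit channel to `bits` bits, scaled back to the 8-bit range.

    Hypothetical helper for illustration only: coarser quantization maps
    runs of nearby shades to a single level, producing visible bands.
    """
    levels = 2 ** bits
    step = 256 // levels
    return ((channel // step) * step).astype(np.uint8)

gradient = np.arange(256, dtype=np.uint8)    # smooth 0..255 ramp
banded = reduce_bit_depth(gradient, bits=4)  # only 16 distinct values remain

print(len(np.unique(gradient)))  # 256
print(len(np.unique(banded)))    # 16
```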
  • a banding artifact detection algorithm needs to provide a strength metric that represents the severity of the artifact such that the re-encoding or post-processing algorithm can automatically identify or prioritize the allocation of resources within the project constraints. Furthermore, a banding artifact detection algorithm needs to provide the strength metric not only on a global level such as a group of pictures or one picture, but also on a local level such as a macroblock or block inside a picture. By locating the banding artifact to the local level, an encoding or processing module can adjust the encoding or processing parameters in the artifact areas only, which can be particularly useful when the overall bit budgets or computational resources are limited. Consequently, there is a strong need for the ability to automatically detect banding artifacts and determine the strength of the banding artifact per block and per picture.
  • the method for detecting banding artifacts includes screening candidate banding artifact areas in a digital image based on at least one feature of the areas, filtering the screened candidate banding artifact areas to eliminate artifact areas that are less noticeable to the human eyes, determining whether a pixel is a banding artifact pixel based on its local spatial or temporal information, and computing a banding artifact strength metric for a set of pixels in the banding artifact areas.
  • the video encoder includes a banding artifact detector configured to: 1) screen candidate banding artifact areas of a digital image based on at least one feature of the area; 2) eliminate artifact areas that are less noticeable to the human eyes; 3) identify a pixel as a banding artifact pixel; and 4) calculate a banding artifact strength metric for a set of identified pixels.
  • the filtering can be performed using a median filter, and the various steps performed by the method and apparatus can be performed on a pixel or transform domain.
  • the various steps performed by the method and apparatus can be part of a pre-processing step prior to encoding, or can be part of a post-processing step after decoding.
  • Figure 1 is a flow diagram of the method for detecting banding artifacts according to an implementation of the present principles
  • Figures 2 and 3 together form a flow diagram of the method for detecting banding artifacts according to an implementation of the present principles
  • Figure 4 is a flow diagram of the method for detecting banding artifacts at the pixel level according to an implementation of the present principles
  • FIG. 5 is a block diagram of a rate control system implementing the methods of the present principles.
  • Figure 6 is a block diagram of a predictive encoder implementing the method of the present principles.
  • the present principles provide a method and apparatus to (i) find the locations of the banding artifacts, (ii) determine the strength of the banding artifact per block, and (iii) determine the overall banding artifact strength per picture.
  • Figure 1 shows a high level flow diagram of the banding artifact detection method 10 according to an implementation of the present principles.
  • the banding artifact detection is done by first screening (12) the targeted picture or pictures and locating the candidate banding artifact areas.
  • the candidate banding artifact areas are then filtered (14) to eliminate the isolated areas.
  • Each pixel in the candidate areas is then subject to a local spatial or temporal context evaluation to reduce false detection.
  • the pixel level decision can be further transformed or computed (18) to determine a banding artifact metric that represents the banding artifact strength level for a group of pixels, such as a block, a picture, or a group of pictures.
  • the metric can then be compared against a threshold automatically by the video encoder, or the metric can be presented to a compressionist who will determine the need for re-encoding on a case-by-case basis.
  • the screening step (12) is used to eliminate the areas where typical banding artifacts are unlikely to occur and hence speed-up the artifact detection.
  • the screening can be done on a pixel level or a group of pixels level.
  • a number of features in the pixel domain or the transform domain can be used to eliminate unlikely candidates.
  • an exemplary implementation is shown at the 16 x 16 macroblock level using the following features:
  • 1. the mean of the luminance component of this macroblock in the YUV color space is greater than a pre-determined value;
  • 2. the mean of the R component of this macroblock in the RGB color space is within a pre-determined range;
  • 3. the mean of the B component of this macroblock in the RGB color space is within a pre-determined range;
  • 4. the mean of the G component of this macroblock in the RGB color space is within a pre-determined range;
  • 5. the difference between the mean of the U component and the mean of the V component in the YUV color space is greater than a pre-determined value;
  • 6. the variance of the luminance component of the macroblock is within a pre-determined range;
  • a macroblock that satisfies all the above criteria is classified as a candidate banding artifact area.
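The six screening criteria above can be sketched as follows. This is an illustrative reading of the text, and every threshold constant is a placeholder assumption, since the patent only says the values and ranges are pre-determined.

```python
import numpy as np

# Hypothetical thresholds: the patent states only that each value or
# range is "pre-determined", so all constants below are assumptions.
LUMA_MEAN_MIN = 32.0
RGB_MEAN_RANGE = (16.0, 240.0)
UV_MEAN_DIFF_MIN = 2.0
LUMA_VAR_RANGE = (0.0, 25.0)

def is_candidate_macroblock(yuv_mb, rgb_mb):
    """Screen one 16x16 macroblock, given as (16, 16, 3) YUV and RGB arrays.

    A candidate banding artifact macroblock must satisfy all six criteria.
    """
    y, u, v = yuv_mb[:, :, 0], yuv_mb[:, :, 1], yuv_mb[:, :, 2]
    checks = [
        y.mean() > LUMA_MEAN_MIN,                                    # 1. luma mean
        *(RGB_MEAN_RANGE[0] <= rgb_mb[:, :, c].mean() <= RGB_MEAN_RANGE[1]
          for c in range(3)),                                        # 2-4. R, B, G means
        abs(u.mean() - v.mean()) > UV_MEAN_DIFF_MIN,                 # 5. U/V mean gap
        LUMA_VAR_RANGE[0] <= y.var() <= LUMA_VAR_RANGE[1],           # 6. luma variance
    ]
    return all(checks)
```

A flat, mid-gray macroblock (low luma variance, slight U/V offset) passes all six checks, while a noisy one fails the variance criterion.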
  • a temporal and/or spatial filter (14) can be used on these areas to eliminate the isolated areas.
  • a spatial median filter can be used to filter out the isolated candidate banding artifact macroblocks inside a video frame.
  • Other filters such as a temporal median filter, can also be used to eliminate the isolated areas.
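A minimal version of the spatial median filtering of the candidate mask might look like the sketch below. The 3x3 window size is an assumption, since the text does not fix the filter support; on a binary mask the median reduces to a majority vote.

```python
import numpy as np

def median_filter_mask(mask):
    """3x3 spatial median filter on a binary macroblock mask map.

    For a binary mask the median is a majority vote over each 3x3
    neighborhood, so isolated candidate macroblocks are removed and
    small holes inside solid candidate regions are filled.
    """
    padded = np.pad(mask, 1, mode="edge")
    out = np.empty_like(mask)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]
            out[i, j] = 1 if window.sum() >= 5 else 0
    return out
```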
  • Banding artifact pixel level detection: for each pixel in the remaining candidate banding artifact areas, we further consider its local spatial or temporal context information to reduce false detection (step 16). As an exemplary implementation, a pixel is determined to be a banding artifact pixel when at least one of the following conditions is satisfied:
  • the maximum difference between this pixel and its neighboring pixels is within a pre-determined range for all three components in the YUV color space.
  • One example of the neighboring pixels can be every pixel (except the target pixel) in a 5x5 block centered at the targeted pixel;
  • the total number of candidate banding artifact pixels in the macroblock is within a pre-determined range.
  • One example can be that more than half of the pixels in the macroblock are considered as candidate banding artifact pixels.
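The per-pixel neighborhood test could be sketched as below. DIFF_RANGE is a placeholder assumption, while the 5x5 neighborhood follows the example given in the text.

```python
import numpy as np

# Placeholder per-component range for the maximum neighbour difference;
# the patent only says a pre-determined range is used.
DIFF_RANGE = (0, 4)

def is_candidate_pixel(yuv, row, col):
    """Check a pixel's 5x5 neighbourhood in a YUV image of shape (H, W, 3).

    The pixel is a candidate when, for each of the Y, U and V components,
    the maximum absolute difference between the pixel and its neighbours
    in the 5x5 block centred on it lies within DIFF_RANGE.
    """
    h, w, _ = yuv.shape
    r0, r1 = max(row - 2, 0), min(row + 3, h)
    c0, c1 = max(col - 2, 0), min(col + 3, w)
    centre = yuv[row, col].astype(np.int32)
    block = yuv[r0:r1, c0:c1].astype(np.int32)
    for comp in range(3):
        # the centre contributes a difference of 0, so max() is the
        # largest pixel-to-neighbour difference for this component
        max_diff = np.abs(block[:, :, comp] - centre[comp]).max()
        if not (DIFF_RANGE[0] <= max_diff <= DIFF_RANGE[1]):
            return False
    return True
```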
  • the banding artifact strength can be computed (18) for a group of pixels.
  • One example of such metric can be the percentage of pixels being detected with banding artifact inside a picture.
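A sketch of that picture-level metric is shown below. The re-encoding threshold and the function names are placeholder assumptions, since the text leaves the decision to the encoder or to a compressionist.

```python
import numpy as np

def banding_strength(artifact_map):
    """Strength metric for a picture: the percentage of pixels flagged
    as banding artifact pixels (artifact_map is a binary H x W array)."""
    return 100.0 * artifact_map.mean()

def needs_reencoding(artifact_map, threshold_pct=5.0):
    """Flag a picture for re-encoding when the metric exceeds a threshold.

    The 5% default is an assumption; the patent only says the metric is
    compared against a threshold or shown to an operator.
    """
    return banding_strength(artifact_map) > threshold_pct
```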
  • a rate control algorithm 500 can be used to adjust the encoding parameters for re-encoding (See FIG. 5).
  • a simple example of such rate control would be to allocate more bits to areas or pictures with banding artifacts using bits from areas or pictures without banding artifacts.
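The bit-shifting idea could be sketched as follows. The function, the shift fraction, and the strength-proportional split are all assumptions; the text only says that bits are moved from artifact-free areas to areas with banding artifacts.

```python
def reallocate_bits(base_bits, strengths, shift_fraction=0.2):
    """Move a fraction of the bit budget from artifact-free areas to
    banded areas, splitting the pooled bits in proportion to artifact
    strength. The total bit budget is preserved.

    base_bits  -- per-area bit allocations
    strengths  -- per-area banding artifact strengths (0 = artifact-free)
    """
    banded = [i for i, s in enumerate(strengths) if s > 0]
    clean = [i for i, s in enumerate(strengths) if s == 0]
    if not banded or not clean:
        return list(base_bits)  # nothing to shift
    pool = sum(base_bits[i] * shift_fraction for i in clean)
    total_strength = sum(strengths[i] for i in banded)
    out = list(base_bits)
    for i in clean:
        out[i] -= base_bits[i] * shift_fraction
    for i in banded:
        out[i] += pool * strengths[i] / total_strength
    return out
```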
  • the banding artifact threshold can be presented as an indicator, after which an operator can determine whether re-encoding is required and/or the degree of re-encoding required.
  • FIGs. 2-3 illustrate the block diagram of a banding artifact detection module 100 according to an implementation of the present principles.
  • a mask map is created to indicate whether one macroblock will be a candidate banding artifact macroblock.
  • For each macroblock in a picture (Block 110), the banding artifact detection method first screens and eliminates the unlikely artifact candidate areas using the different features described above (Block 120). Depending on whether the considered macroblock is a candidate banding artifact area (Block 130), the detected banding artifact candidate is marked as 1 in the mask map (Block 150), otherwise it is marked as 0 (Block 140). The loop then ends for that group of macroblocks.
  • the median filtering is done on the artifact mask map to eliminate the isolated areas (Block 170).
  • each macroblock is cycled through again (loop 180), and a determination is made whether the macroblock has been marked as 1 on the banding artifact map (Block 190). Every pixel outside of the candidate artifact area is classified as a non-banding artifact pixel (Block 200), while for pixels inside the candidate artifact area, a pixel level classification that considers the neighborhood information is performed to further reduce false detection (Block 210).
  • the loop then ends (Block 220). Based on the pixel level detection results, banding artifact strength for a group of pixels such as a block or a picture can be formed or calculated (Block 230).
  • FIG. 4 illustrates the block diagram of a pixel level banding artifact detection module 300 that can be used in FIG. 3 (e.g., for block 210).
  • the pixel level banding artifact detection method calculates the temporal and spatial features based on the neighborhood information to determine whether the pixel is a candidate banding artifact pixel (Block 320).
  • the pixels are then identified as either a candidate banding artifact pixel (Block 340), or not a banding artifact pixel (Block 330).
  • the loop then ends (Block 350).
  • the algorithm counts the total number of candidate banding artifact pixels to determine whether the total falls within the pre-determined range (Block 360). If it does, every candidate banding artifact pixel in the area is classified as a banding artifact pixel (Block 380). Otherwise, every pixel in the area is classified as a non-banding artifact pixel (Block 370).
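The area-level decision of Blocks 360-380 can be sketched as below. The "more than half" fraction follows the example given earlier in the text, but the exact range is a placeholder assumption.

```python
import numpy as np

# Example range from the text: more than half of the pixels in the area
# must be candidates. The exact bounds are pre-determined in the patent
# and are assumed here.
COUNT_RANGE_FRACTION = (0.5, 1.0)

def classify_area(candidate_map):
    """Final per-area decision: if the fraction of candidate pixels falls
    in the pre-determined range, every candidate pixel in the area becomes
    a banding artifact pixel; otherwise every pixel is non-banding."""
    frac = candidate_map.mean()
    if COUNT_RANGE_FRACTION[0] < frac <= COUNT_RANGE_FRACTION[1]:
        return candidate_map.copy()
    return np.zeros_like(candidate_map)
```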
  • FIG. 5 illustrates the block diagram of a rate control algorithm 500 that could apply the banding artifact detection method 10 shown and described in Figures 1-3.
  • an exemplary apparatus for rate control to which the present principles may be applied is indicated generally by the reference numeral 500.
  • the apparatus 500 is configured to apply banding artifact parameters estimation described herein in accordance with various embodiments of the present principles.
  • the apparatus 500 comprises a banding artifact detector 510, a rate constraint memory 520, a rate controller 530, and a video encoder 540.
  • An output of the banding artifact detector 510 is connected in signal communication with a first input of the rate controller 530.
  • the rate constraint memory 520 is connected in signal communication with a second input of the rate controller 530.
  • An output of the rate controller 530 is connected in signal communication with a first input of the video encoder 540.
  • An input of the banding artifact detector 510 and a second input of the video encoder 540 are available as inputs of the apparatus 500, for receiving input video and/or image(s).
  • An output of the video encoder 540 is available as an output of the apparatus 500, for outputting a bitstream.
  • the banding artifact detector 510 generates a banding artifact strength metric according to the methods described according to Figs. 1-3 and passes said metric to the rate controller 530.
  • the rate controller 530 uses this banding artifact strength metric along with additional rate constraints stored in the rate constraint memory 520 to generate a rate control parameter for controlling the video encoder 540.
  • the artifact strength metric can be stored in a memory, where said banding artifact strength metric can later be retrieved and a decision can be made as to whether re-encoding is required.
  • an exemplary predictive video encoder to which the present principles may be applied is indicated generally by the reference numeral 600 that could apply the rate control algorithm in FIG. 5 with an integrated banding artifact detection module 695 implementing the banding artifact detection method of the present principles.
  • the encoder 600 may be used, for example, as the encoder 540 in FIG. 5. In such a case, the encoder 600 is configured to apply the rate control (as per the rate controller 530) corresponding to the apparatus 500 of FIG. 5.
  • the video encoder 600 includes a frame ordering buffer 610 having an output in signal communication with a first input of a combiner 685.
  • An output of the combiner 685 is connected in signal communication with a first input of a transformer and quantizer 625.
  • An output of the transformer and quantizer 625 is connected in signal communication with a first input of an entropy coder 645 and an input of an inverse transformer and inverse quantizer 650.
  • An output of the entropy coder 645 is connected in signal communication with a first input of a combiner 690.
  • An output of the combiner 690 is connected in signal communication with an input of an output buffer 635.
  • a first output of the output buffer 635 is connected in signal communication with an input of the rate controller 605.
  • An output of the rate controller 605 is connected in signal communication with an input of a picture-type decision module 615, a first input of a macroblock-type (MB-type) decision module 620, a second input of the transformer and quantizer 625, and an input of a Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 640.
  • a first output of the picture-type decision module 615 is connected in signal communication with a second input of a frame ordering buffer 610.
  • a second output of the picture-type decision module 615 is connected in signal communication with a second input of a macroblock-type decision module 620.
  • An output of the inverse transformer and inverse quantizer 650 is connected in signal communication with a first input of a combiner 627.
  • An output of the combiner 627 is connected in signal communication with an input of an intra prediction module 660 and an input of the deblocking filter 665.
  • An output of the deblocking filter 665 is connected in signal communication with an input of a reference picture buffer 680.
  • An output of the reference picture buffer 680 is connected in signal communication with an input of the motion estimator 675 and a first input of a motion compensator 670.
  • a first output of the motion estimator 675 is connected in signal communication with a second input of the motion compensator 670.
  • a second output of the motion estimator 675 is connected in signal communication with a second input of the entropy coder 645.
  • An output of the motion compensator 670 is connected in signal communication with a first input of a switch 697.
  • An output of the intra prediction module 660 is connected in signal communication with a second input of the switch 697.
  • An output of the macroblock-type decision module 620 is connected in signal communication with a third input of the switch 697.
  • An output of the switch 697 is connected in signal communication with a second input of the combiner 627.
  • An input of the frame ordering buffer 610 is available as input of the encoder 600, for receiving an input picture.
  • an input of the Supplemental Enhancement Information (SEI) inserter 630 is available as an input of the encoder 600, for receiving metadata.
  • a second output of the output buffer 635 is available as an output of the encoder 600.
  • the methods may be implemented by instructions being performed by a processor, and such instructions may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier, or another storage device such as a hard disk, a compact diskette, a random access memory ("RAM"), or a read-only memory ("ROM").
  • the instructions may form an application program tangibly embodied on a processor-readable medium.
  • a processor may include a processor- readable medium having, for example, instructions for carrying out a process.
  • implementations may also produce a signal formatted to carry information that may be, for example, stored or transmitted.
  • the information may include, for example, instructions for performing a method, or data produced by one of the described implementations.
  • a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
  • the formatting may include, for example, encoding a data stream, packetizing the encoded stream, and modulating a carrier with the packetized stream.
  • the information that the signal carries may be, for example, analog or digital information.
  • the signal may be transmitted over a variety of different wired or wireless links, as is known.

Abstract

A method and apparatus for detecting banding artifacts in digital images and video content. The method operates to (i) find the locations of the banding artifacts, (ii) determine the strength of the banding artifact per block, and (iii) determine the overall banding artifact strength per picture. Banding artifact detection and strength assignment are done by first finding areas that are prone to banding artifacts and then considering the local characteristics of each area to reduce false detection. The banding artifact strength of a picture is determined by considering the size and strength of the artifact areas in the picture as well as the artifact strength in neighboring pictures.

Description

METHOD AND APPARATUS FOR BANDING ARTIFACT DETECTION
BACKGROUND
Technical Field
Principles of the present invention relate to processing digital images and video content.
More particularly, they relate to detecting banding artifacts in digital images and video content.
Description of the Related Art
Non-real time image and video processing applications, such as DVD authoring, aim at achieving the best possible visual quality from an image and video processor. To that goal, the processed images or video contents are reviewed to identify pictures with artifacts. This is often a manual and subjective process that requires substantial experience, time, and effort, affecting the production time and budget. It is also subject to inconsistency due to the different visual evaluation standards imposed by different evaluators. In common practice, detected pictures are post-processed or re-encoded with fine-tuned parameters and subjected to further review. The post-processing or re-encoding algorithms can tune their parameters based on the artifact strength and locations in order to get better picture quality.
In this context, automatic artifact detection is needed to facilitate the process. In order to automatically identify a problematic scene or segment, it is essential to find objective metrics that detect the presence of visual artifacts. Detection of common artifacts caused by MPEG-2 encoding, such as blockiness, blurriness and "mosquito" noise, has been extensively studied in the past. However, this remains a difficult problem not properly handled by conventional and widely-accepted objective metrics such as the Peak Signal-to-Noise Ratio (PSNR). Furthermore, the use of new compression standards such as MPEG-4 AVC or VC-1, jointly with the fact that the new High Definition DVD formats operate at higher bit-rates, has brought into play new types of visual artifacts.
The term banding artifact describes a particular type of visual artifact that appears as a visually continuous band, or a false contour, in an otherwise smooth transition area. It is generally the result of inadequate bit depth representation caused by bit depth conversion. It may also be introduced by other image/video processors, such as a video compressor. The banding artifact is typically observed in animated content, but can also be observed in film content. Bit depth describes the number of bits used to represent the color of a single pixel in a bitmapped image or video frame buffer. This concept is also known as color depth or bits per pixel (bpp), particularly when specified along with the number of bits used. A higher color depth gives a broader range of distinct colors. Bit depth conversion, or color depth conversion, is the process of converting from one bit depth to another, such as from 64 bits to 8 bits per pixel.
To effectively prevent or reduce the banding artifact, a banding artifact detection algorithm needs to provide a strength metric that represents the severity of the artifact such that the re-encoding or post-processing algorithm can automatically identify or prioritize the allocation of resources within the project constraints. Furthermore, a banding artifact detection algorithm needs to provide the strength metric not only on a global level such as a group of pictures or one picture, but also on a local level such as a macroblock or block inside a picture. By locating the banding artifact at the local level, an encoding or processing module can adjust the encoding or processing parameters in the artifact areas only, which can be particularly useful when the overall bit budgets or computational resources are limited. Consequently, there is a strong need for the ability to automatically detect banding artifacts and determine the strength of the banding artifact per block and per picture.
SUMMARY
According to one aspect of the present invention, the method for detecting banding artifacts includes screening candidate banding artifact areas in a digital image based on at least one feature of the areas, filtering the screened candidate banding artifact areas to eliminate artifact areas that are less noticeable to the human eye, determining a pixel as a banding artifact based on its local spatial or temporal information, and computing a banding artifact strength metric for a set of pixels in the banding artifact areas.
According to another aspect of the present invention, the video encoder includes a banding artifact detector configured to: 1) screen candidate banding artifact areas of a digital image based on at least one feature of the area; 2) eliminate artifact areas that are less noticeable to the human eye; 3) identify a pixel as a banding artifact pixel; and 4) calculate a banding artifact strength metric for a set of identified pixels.
The filtering can be performed using a median filter, and the various steps performed by the method and apparatus can be performed in the pixel domain or the transform domain. In addition, the various steps performed by the method and apparatus can be part of a pre-processing step prior to encoding, or can be part of a post-processing step after decoding.
Other aspects and features of the present principles will become apparent from the following detailed description considered in conjunction with the accompanying drawings. It is to be understood, however, that the drawings are designed solely for purposes of illustration and not as a definition of the limits of the present invention, for which reference should be made to the appended claims. It should be further understood that the drawings are not necessarily drawn to scale and that, unless otherwise indicated, they are merely intended to conceptually illustrate the structures and procedures described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings, wherein like reference numerals denote similar components throughout the views:
Figure 1 is a flow diagram of the method for detecting banding artifacts according to an implementation of the present principles;
Figures 2 and 3 are a flow diagram of the method for detecting banding artifacts according to an implementation of the present principles;
Figure 4 is a flow diagram of the method for detecting banding artifacts at the pixel level according to an implementation of the present principles;
Figure 5 is a block diagram of a rate control system implementing the methods of the present principles; and
Figure 6 is a block diagram of a predictive encoder implementing the method of the present principles.
DETAILED DESCRIPTION
The present principles provide a method and apparatus to (i) find the locations of the banding artifacts, (ii) determine the strength of the banding artifact per block, and (iii) determine the overall banding artifact strength per picture. Figure 1 shows a high-level flow diagram of the banding artifact detection method 10 according to an implementation of the present principles. In this implementation, the banding artifact detection is done by first screening (12) the targeted picture or pictures and locating the candidate banding artifact areas. The candidate banding artifact areas are then filtered (14) to eliminate the isolated areas. Each pixel in the candidate areas is then subject to a local spatial or temporal context evaluation to reduce false detection. A decision is then made (16) on a pixel level regarding whether a pixel is part of a banding artifact area. The pixel level decision can be further transformed or computed (18) to determine a banding artifact metric that represents the banding artifact strength level for a group of pixels, such as a block, a picture, or a group of pictures. The metric can then be compared against a threshold automatically by the video encoder, or the metric can be presented to a compressionist who will determine the need for re-encoding on an individual case basis.
Banding artifact area screening
The screening step (12) is used to eliminate the areas where typical banding artifacts are unlikely to occur and hence speed up the artifact detection. The screening can be done on a pixel level or on a group-of-pixels level. A number of features in the pixel domain or the transform domain can be used to eliminate unlikely candidates. For purposes of this description, an exemplary implementation is shown at the 16 x 16 macroblock level using the following features:
1. The mean of the luminance component of this macroblock in the YUV color space is greater than a pre-determined value;
2. The mean of the R component of this macroblock in the RGB color space is within a pre-determined range;
3. The mean of the B component of this macroblock in the RGB color space is within a pre-determined range;
4. The mean of the G component of this macroblock in the RGB color space is within a pre-determined range;
5. The difference between the mean of the U component and the mean of the V component in the YUV color space is greater than a pre-determined value;
6. The variance of the luminance component of the macroblock is within a pre-determined range;
7. Divide the macroblock into four sub-blocks of size 8x8, where the maximum variance of the luminance component in the YUV color space for all four sub-blocks is within a pre-determined range; and
8. Divide the macroblock into four sub-blocks of size 8x8, where the minimum variance of the luminance component in the YUV color space for all four sub-blocks is within a pre-determined range.
In this example, a macroblock that satisfies all the above criteria is classified as a candidate banding artifact area.
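As an illustration only (not part of the claimed method), the macroblock screening above can be sketched as follows. All thresholds and ranges are hypothetical placeholders for the "pre-determined" values, and only a subset of the eight criteria is shown:

```python
import statistics

def is_candidate_macroblock(y_block, u_mean, v_mean,
                            y_min=32, uv_diff_min=8,
                            var_range=(0.1, 20.0)):
    """Screen one 16x16 macroblock using a subset of the listed criteria.

    y_block is a 16x16 list of lists of luma samples; the thresholds are
    illustrative placeholders, not values taken from the patent.
    """
    flat = [p for row in y_block for p in row]
    # Criterion 1: mean luminance above a pre-determined value.
    if statistics.mean(flat) <= y_min:
        return False
    # Criterion 5: |mean(U) - mean(V)| greater than a pre-determined value.
    if abs(u_mean - v_mean) <= uv_diff_min:
        return False
    # Criterion 6: luma variance within a pre-determined range
    # (flat-but-not-perfectly-flat areas are where banding lives).
    var = statistics.pvariance(flat)
    if not (var_range[0] <= var <= var_range[1]):
        return False
    # Criteria 7-8: per-8x8 sub-block variance bounds.
    sub_vars = []
    for by in (0, 8):
        for bx in (0, 8):
            sub = [y_block[by + r][bx + c] for r in range(8) for c in range(8)]
            sub_vars.append(statistics.pvariance(sub))
    return var_range[0] <= min(sub_vars) and max(sub_vars) <= var_range[1]
```

A perfectly flat macroblock is rejected (zero variance falls below the range), while a nearly flat one with a slight gradient passes, matching the intuition that banding appears in smooth transition areas.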
Candidate banding artifact area filtering
Once the candidate banding artifact areas are identified in step 12, a temporal and/or spatial filter (14) can be used on these areas to eliminate the isolated areas. As an exemplary implementation, a spatial median filter can be used to filter out the isolated candidate banding artifact macroblocks inside a video frame. Other filters, such as a temporal median filter, can also be used to eliminate the isolated areas.
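A minimal sketch of such a spatial median filter applied to the binary macroblock mask map follows; a 3x3 window is assumed (the text does not fix a window size), with edge replication at picture borders:

```python
def median_filter_mask(mask):
    """3x3 spatial median filter on a binary macroblock mask map.

    For a binary map the median equals a majority vote over the 3x3
    neighborhood, so isolated candidate macroblocks are removed and
    small holes inside artifact regions are filled. Border samples
    are handled by clamping (edge replication).
    """
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            votes = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    votes.append(mask[yy][xx])
            votes.sort()
            out[y][x] = votes[4]  # median of the 9 samples
    return out
```

A single candidate macroblock surrounded by non-candidates is filtered out, while the interior of a solid candidate region survives.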
Banding artifact pixel level detection
For each pixel in the remaining candidate banding artifact areas, we further consider its local spatial or temporal context information to reduce false detection (step 16). As an exemplary implementation, a determination that a pixel is a banding artifact pixel is made when at least one of the following conditions is satisfied:
1) The maximum difference between this pixel and its neighboring pixels is within a pre-determined range for all three components in the YUV color space. One example of the neighboring pixels can be every pixel (except the target pixel) in a 5x5 block centered at the target pixel; and
2) The total number of candidate banding artifact pixels in the macroblock is within a pre-determined range. One example can be that more than half of the pixels in the macroblock are considered as candidate banding artifact pixels.
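Condition 1 above can be sketched as follows; the 5x5 neighborhood matches the example in the text, while the difference range is a hypothetical placeholder:

```python
def max_neighbor_difference(plane, x, y, radius=2):
    """Largest absolute difference between the target pixel and its
    neighbors in a (2*radius+1)^2 window (5x5 for radius=2, as in the
    example above). plane is a 2-D list of samples; the window is
    clipped at the picture borders."""
    h, w = len(plane), len(plane[0])
    center = plane[y][x]
    best = 0
    for yy in range(max(0, y - radius), min(h, y + radius + 1)):
        for xx in range(max(0, x - radius), min(w, x + radius + 1)):
            if (yy, xx) != (y, x):
                best = max(best, abs(plane[yy][xx] - center))
    return best

def is_candidate_pixel(y_plane, u_plane, v_plane, x, y, diff_range=(1, 4)):
    """Condition 1: the maximum neighbor difference must lie within a
    pre-determined range for all three YUV components. The range
    (1, 4) is an illustrative placeholder, not a value from the patent."""
    lo, hi = diff_range
    return all(lo <= max_neighbor_difference(p, x, y) <= hi
               for p in (y_plane, u_plane, v_plane))
```

The lower bound of the range rejects perfectly flat neighborhoods (no visible step), and the upper bound rejects strong edges, leaving only the small luminance steps typical of false contours.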
Banding artifact metric for a group of pixels
Based on the pixel level banding artifact detection results, the banding artifact strength can be computed (18) for a group of pixels. One example of such a metric is the percentage of pixels detected as banding artifact pixels inside a picture.
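The example metric can be computed directly; this sketch simply assumes a per-pixel 0/1 decision map for the picture:

```python
def banding_strength(pixel_mask):
    """Picture-level banding strength as the percentage of pixels
    detected as banding artifact pixels (one example metric from the
    text). pixel_mask is a 2-D list of 0/1 per-pixel decisions."""
    total = sum(len(row) for row in pixel_mask)
    detected = sum(sum(row) for row in pixel_mask)
    return 100.0 * detected / total if total else 0.0
```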
For areas or pictures that are identified with banding artifact strength above a desired threshold, a rate control algorithm 500 can be used to adjust the encoding parameters for re-encoding (See FIG. 5). A simple example of such rate control would be to allocate more bits to areas or pictures with banding artifacts using bits from areas or pictures without banding artifacts. Alternatively, the banding artifact threshold can be presented as an indicator, after which an operator can determine whether re-encoding is required and/or the degree of re-encoding required.
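A toy illustration of such a bit-reallocation rule, expressed as quantization-parameter offsets (a lower QP spends more bits); the threshold and offset values are invented for the example and are not from the patent:

```python
def qp_offsets(strengths, base_qp, threshold=5.0, delta=2):
    """Toy rate-control rule in the spirit of the text: lower the QP
    (spend more bits) on pictures whose banding strength exceeds a
    threshold, and raise it on clean pictures to keep the overall
    bit budget roughly balanced. All numbers are illustrative."""
    flagged = [s > threshold for s in strengths]
    n_flag = sum(flagged)
    if n_flag == 0 or n_flag == len(strengths):
        return [base_qp] * len(strengths)
    # Bits "borrowed" from clean pictures, spread evenly among them.
    give_back = max(1, delta * n_flag // (len(strengths) - n_flag))
    return [base_qp - delta if f else base_qp + give_back
            for f in flagged]
```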
FIGs. 2-3 illustrate the block diagram of a banding artifact detection module 100 according to an implementation of the present principles. A mask map is created to indicate whether one macroblock will be a candidate banding artifact macroblock. For each macroblock in a picture (block 110), the banding artifact detection method first screens and eliminates the unlikely artifact candidate areas using the different features described above (Block 120). Depending on whether the considered macroblock is a candidate banding artifact area (Block 130), the detected banding artifact candidate is marked as 1 in the mask map (Block 150), or otherwise marked as 0 (Block 140). The loop then ends at that point for that group of macroblocks.
The median filtering is done on the artifact mask map to eliminate the isolated areas (Block 170). After the median filtering, each macroblock is cycled through again (loop 180), and a determination is made whether the macroblock has been marked as 1 on the banding artifact map (Block 190). Every pixel outside of the candidate artifact area is classified as a non-banding artifact pixel (Block 200), while for pixels inside the candidate artifact area, a pixel level classification that considers the neighborhood information is done to further reduce the false detection (Block 210). The loop then ends (Block 220). Based on the pixel level detection results, the banding artifact strength for a group of pixels such as a block or a picture can be formed or calculated (Block 230).
FIG. 4 illustrates the block diagram of a pixel level banding artifact detection module 300 that can be used in FIG. 3 (e.g., for block 210). For every pixel inside the candidate banding artifact area (Block 310), the pixel level banding artifact detection method calculates the temporal and spatial feature based on the neighborhood information to determine if the pixel is a candidate banding artifact pixel (Block 320). The pixels are then identified as either a candidate banding artifact pixel (Block 340), or not a banding artifact pixel (Block 330). The loop then ends (Block 350).
After each pixel in the candidate banding artifact area is classified, the algorithm counts the total number of the candidate banding artifact pixels to determine if the total number of banding artifact pixels falls in the pre-determined range (Block 360). If the total number falls in the pre-determined range, every candidate banding artifact pixel in the area is classified as a banding artifact pixel (Block 380). Otherwise, every pixel in the area is classified as a non-banding artifact pixel (Block 370).
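The count-based decision of blocks 360-380 can be sketched as follows, with an illustrative count range for a 16x16 area (the patent leaves the range pre-determined):

```python
def classify_area(candidate_flags, count_range=(128, 256)):
    """Final per-area decision (blocks 360-380): if the number of
    candidate pixels in the area falls within a pre-determined range,
    every candidate pixel in the area becomes a banding artifact
    pixel; otherwise none do. candidate_flags is a 2-D list of 0/1
    per-pixel candidate decisions; the range shown is a placeholder
    (here: at least half of a 16x16 area)."""
    n = sum(sum(row) for row in candidate_flags)
    keep = count_range[0] <= n <= count_range[1]
    return [[flag if keep else 0 for flag in row] for row in candidate_flags]
```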
FIG. 5 illustrates the block diagram of a rate control algorithm 500 that could apply the banding artifact detection method 10 shown and described in Figures 1-3. Turning to FIG. 5, an exemplary apparatus for rate control to which the present principles may be applied is indicated generally by the reference numeral 500. The apparatus 500 is configured to apply the banding artifact parameter estimation described herein in accordance with various embodiments of the present principles. The apparatus 500 comprises a banding artifact detector 510, a rate constraint memory 520, a rate controller 530, and a video encoder 540. An output of the banding artifact detector 510 is connected in signal communication with a first input of the rate controller 530. The rate constraint memory 520 is connected in signal communication with a second input of the rate controller 530. An output of the rate controller 530 is connected in signal communication with a first input of the video encoder 540.
An input of the banding artifact detector 510 and a second input of the video encoder 540 are available as inputs of the apparatus 500, for receiving input video and/or image(s). An output of the video encoder 540 is available as an output of the apparatus 500, for outputting a bitstream. In one exemplary embodiment, the banding artifact detector 510 generates a banding artifact strength metric according to the methods described with respect to Figs. 1-3 and passes said metric to the rate controller 530. The rate controller 530 uses this banding artifact strength metric along with additional rate constraints stored in the rate constraint memory 520 to generate a rate control parameter for controlling the video encoder 540. Alternatively, the artifact strength metric can be stored in a memory, where said banding artifact strength metric can later be retrieved and a decision can be made as to whether re-encoding is required.
Turning to FIG. 6, an exemplary predictive video encoder to which the present principles may be applied is indicated generally by the reference numeral 600; it could apply the rate control algorithm of FIG. 5 with an integrated banding artifact detection module 695 implementing the banding artifact detection method of the present principles. The encoder 600 may be used, for example, as the encoder 540 in FIG. 5. In such a case, the encoder 600 is configured to apply the rate control (as per the rate controller 530) corresponding to the apparatus 500 of FIG. 5.
The video encoder 600 includes a frame ordering buffer 610 having an output in signal communication with a first input of a combiner 685. An output of the combiner 685 is connected in signal communication with a first input of a transformer and quantizer 625. An output of the transformer and quantizer 625 is connected in signal communication with a first input of an entropy coder 645 and an input of an inverse transformer and inverse quantizer 650. An output of the entropy coder 645 is connected in signal communication with a first input of a combiner 690. An output of the combiner 690 is connected in signal communication with an input of an output buffer 635. A first output of the output buffer 635 is connected in signal communication with an input of a rate controller 605. An output of the rate controller 605 is connected in signal communication with an input of a picture-type decision module 615, a first input of a macroblock-type (MB-type) decision module 620, a second input of the transformer and quantizer 625, and an input of a Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 640.
A first output of the picture-type decision module 615 is connected in signal communication with a second input of a frame ordering buffer 610. A second output of the picture-type decision module 615 is connected in signal communication with a second input of a macroblock-type decision module 620.
An output of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 640 is connected in signal communication with a third input of the combiner 690.
An output of the inverse quantizer and inverse transformer 650 is connected in signal communication with a first input of a combiner 627. An output of the combiner 627 is connected in signal communication with an input of an intra prediction module 660 and an input of the deblocking filter 665. An output of the deblocking filter 665 is connected in signal communication with an input of a reference picture buffer 680. An output of the reference picture buffer 680 is connected in signal communication with an input of the motion estimator 675 and a first input of a motion compensator 670. A first output of the motion estimator 675 is connected in signal communication with a second input of the motion compensator 670. A second output of the motion estimator 675 is connected in signal communication with a second input of the entropy coder 645.
An output of the motion compensator 670 is connected in signal communication with a first input of a switch 697. An output of the intra prediction module 660 is connected in signal communication with a second input of the switch 697. An output of the macroblock-type decision module 620 is connected in signal communication with a third input of the switch 697. An output of the switch 697 is connected in signal communication with a second input of the combiner 627.
An input of the frame ordering buffer 610 is available as an input of the encoder 600, for receiving an input picture. Moreover, an input of the Supplemental Enhancement Information (SEI) inserter 630 is available as an input of the encoder 600, for receiving metadata. A second output of the output buffer 635 is available as an output of the encoder 600, for outputting a bitstream.
Additionally, the methods may be implemented by instructions being performed by a processor, and such instructions may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier or other storage device such as, for example, a hard disk, a compact diskette, a random access memory ("RAM"), or a read-only memory ("ROM"). The instructions may form an application program tangibly embodied on a processor-readable medium. As should be clear, a processor may include a processor-readable medium having, for example, instructions for carrying out a process. As should be evident to one of skill in the art, implementations may also produce a signal formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described implementations. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream, packetizing the encoded stream, and modulating a carrier with the packetized stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired or wireless links, as is known.

A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, elements of different implementations may be combined, supplemented, modified, or removed to produce other implementations.
Additionally, one of ordinary skill will understand that other structures and processes may be substituted for those disclosed and the resulting implementations will perform at least substantially the same function(s), in at least substantially the same way(s), to achieve at least substantially the same result(s) as the implementations disclosed. Accordingly, these and other implementations are within the scope of the following claims.

Claims

1. A method comprising the steps of: filtering (14) an area of a digital image to eliminate an artifact within said area; determining (16) a pixel within the area as a banding artifact based on at least one of local, spatial, and temporal information; and computing (18) a banding artifact strength metric for a set of pixels in the area.
2. The method of claim 1 further comprising the step of screening an area in said digital image based on luminance information relating to the area.
3. The method of claim 1 further comprising the step of screening an area in said digital image based on spatial activity information relating to the area.
4. The method of claim 1 further comprising the step of screening an area in said digital image based on texture information relating to the area.
5. The method of claim 1 further comprising the step of screening an area in said digital image based on temporal information relating to the area.
6. The method of claim 1, wherein said filtering comprises median filtering.
7. The method of claim 6, wherein said median filtering comprises considering neighborhood information relating to an identified artifact area to further reduce false detections.
8. The method of claim 1, wherein the steps of filtering, determining and computing are performed on a pixel domain.
9. The method of claim 1, wherein the steps of filtering, determining and computing are performed on a transform domain.
10. The method of claim 1, wherein the steps of filtering, determining and computing are performed as part of a pre-processing step prior to encoding of a picture or set of pictures.
11. The method of claim 1, wherein the steps of filtering, determining and computing are performed as part of a post-processing step after the decoding of a picture or set of pictures.
12. The method of claim 1, wherein the digital image is one of a series of digital images in digital video content.
13. The method of claim 1, wherein said banding artifact strength metric is compared to a threshold, wherein if said banding artifact strength metric exceeds said threshold, said set of pixels in the banding artifact areas is re-encoded.
14. The method of claim 1, wherein said banding artifact strength metric is provided as a system output.
15. A video encoder comprising: a detector (10, 610) configured to eliminate a banding artifact within an area, identify a pixel as a banding artifact pixel, and calculate a banding artifact strength metric for a set of identified pixels.
16. The video encoder of claim 15, wherein said detector further comprises a median filter configured to eliminate the artifact areas that are less noticeable to the human eye.
17. The video encoder of claim 15, wherein said detector identifies the pixel as a banding artifact area based on its local spatial information.
18. The video encoder of claim 15, wherein said detector identifies the pixel as a banding artifact area based on its local temporal information.
19. The video encoder of claim 15, wherein the encoder is compliant with at least one standard selected from a group consisting of MPEG-4 AVC, VC-1 and MPEG-2.
20. The video encoder of claim 15, wherein the digital image is part of a series of digital images making up video content.
21. The video encoder of claim 16, wherein the median filter is configured to consider neighborhood information relating to an identified artifact area to further reduce false detections.
22. The video encoder of claim 15 wherein said banding artifact strength metric is compared to a threshold, wherein if said banding artifact strength metric exceeds said threshold, said set of pixels in the banding artifact areas is re-encoded.
23. The video encoder of claim 15 wherein said banding artifact strength metric is provided as a system output.
PCT/US2008/009525 2008-08-08 2008-08-08 Method and apparatus for banding artifact detection WO2010016820A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
JP2011522034A JP5276170B2 (en) 2008-08-08 2008-08-08 Method and apparatus for detecting banding artifacts
CN2008801307005A CN102119401B (en) 2008-08-08 2008-08-08 Method and apparatus for banding artifact detection
PCT/US2008/009525 WO2010016820A1 (en) 2008-08-08 2008-08-08 Method and apparatus for banding artifact detection
US12/737,662 US20110129020A1 (en) 2008-08-08 2008-08-08 Method and apparatus for banding artifact detection
EP08795142.2A EP2311007B1 (en) 2008-08-08 2008-08-08 Method and apparatus for banding artifact detection
BRPI0822999A BRPI0822999A2 (en) 2008-08-08 2008-08-08 method and apparatus for detecting band forming artifacts
KR1020117002878A KR101441175B1 (en) 2008-08-08 2008-08-08 Method and apparatus for banding artifact detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2008/009525 WO2010016820A1 (en) 2008-08-08 2008-08-08 Method and apparatus for banding artifact detection

Publications (1)

Publication Number Publication Date
WO2010016820A1 true WO2010016820A1 (en) 2010-02-11

Family

ID=40282245

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/009525 WO2010016820A1 (en) 2008-08-08 2008-08-08 Method and apparatus for banding artifact detection

Country Status (7)

Country Link
US (1) US20110129020A1 (en)
EP (1) EP2311007B1 (en)
JP (1) JP5276170B2 (en)
KR (1) KR101441175B1 (en)
CN (1) CN102119401B (en)
BR (1) BRPI0822999A2 (en)
WO (1) WO2010016820A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012024089A2 (en) * 2010-08-20 2012-02-23 Intel Corporation Techniques for identifying block artifacts
JPWO2014091984A1 (en) * 2012-12-13 2017-01-12 ソニー株式会社 Transmitting apparatus, transmitting method, receiving apparatus, and receiving method
US11032446B2 (en) 2016-11-17 2021-06-08 Sony Interactive Entertainment Inc. Image processing device, image processing method, and program for correcting color in an image
WO2023235730A1 (en) * 2022-05-31 2023-12-07 Netflix, Inc. Banding artifact detector

Families Citing this family (12)

Publication number Priority date Publication date Assignee Title
KR20100021235A (en) * 2008-08-14 2010-02-24 엘지디스플레이 주식회사 Edge revision method for image
JP5570310B2 (en) * 2010-06-07 2014-08-13 キヤノン株式会社 Image forming apparatus
CN103918274B (en) 2011-11-01 2015-08-26 杜比实验室特许公司 To the self adaptation false contouring prevention had in the hierarchical coding of dynamic range expanded image
US9432694B2 (en) * 2012-03-06 2016-08-30 Apple Inc. Signal shaping techniques for video data that is susceptible to banding artifacts
US9565404B2 (en) * 2012-07-30 2017-02-07 Apple Inc. Encoding techniques for banding reduction
US9183453B2 (en) * 2013-10-31 2015-11-10 Stmicroelectronics Asia Pacific Pte. Ltd. Banding noise detector for digital images
CN107329259B (en) * 2013-11-27 2019-10-11 奇跃公司 Virtual and augmented reality System and method for
US9911179B2 (en) 2014-07-18 2018-03-06 Dolby Laboratories Licensing Corporation Image decontouring in high dynamic range video processing
US9747673B2 (en) 2014-11-05 2017-08-29 Dolby Laboratories Licensing Corporation Systems and methods for rectifying image artifacts
JP6962165B2 (en) * 2017-12-11 2021-11-05 株式会社島津製作所 X-ray fluoroscopy equipment
US11477351B2 (en) * 2020-04-10 2022-10-18 Ssimwave, Inc. Image and video banding assessment
US11778240B2 (en) * 2021-01-26 2023-10-03 Netflix, Inc. Banding artifact detection in images and videos

Citations (2)

Publication number Priority date Publication date Assignee Title
US20080123989A1 (en) * 2006-11-29 2008-05-29 Chih Jung Lin Image processing method and image processing apparatus
WO2008088482A1 (en) * 2006-12-28 2008-07-24 Thomson Licensing Method and apparatus for automatic visual artifact analysis and artifact reduction

Family Cites Families (28)

Publication number Priority date Publication date Assignee Title
JPH06153167A (en) * 1992-11-13 1994-05-31 Oki Electric Ind Co Ltd Motion vector detection circuit
US6757438B2 (en) * 2000-02-28 2004-06-29 Next Software, Inc. Method and apparatus for video compression using microwavelets
US6031937A (en) * 1994-05-19 2000-02-29 Next Software, Inc. Method and apparatus for video compression using block and wavelet techniques
US5832135A (en) * 1996-03-06 1998-11-03 Hewlett-Packard Company Fast method and apparatus for filtering compressed images in the DCT domain
US6865291B1 (en) * 1996-06-24 2005-03-08 Andrew Michael Zador Method apparatus and system for compressing data that wavelet decomposes by color plane and then divides by magnitude range non-dc terms between a scalar quantizer and a vector quantizer
JP3855349B2 (en) * 1997-03-31 2006-12-06 株式会社デンソー Image recognition method and image information encoding method
US5990955A (en) * 1997-10-03 1999-11-23 Innovacom Inc. Dual encoding/compression method and system for picture quality/data density enhancement
US6327307B1 (en) * 1998-08-07 2001-12-04 Motorola, Inc. Device, article of manufacture, method, memory, and computer-readable memory for removing video coding errors
US6310982B1 (en) * 1998-11-12 2001-10-30 Oec Medical Systems, Inc. Method and apparatus for reducing motion artifacts and noise in video image processing
US6493023B1 (en) * 1999-03-12 2002-12-10 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Method and apparatus for evaluating the visual quality of processed digital video sequences
US20020063807A1 (en) * 1999-04-19 2002-05-30 Neal Margulis Method for Performing Image Transforms in a Digital Display System
US6263022B1 (en) * 1999-07-06 2001-07-17 Philips Electronics North America Corp. System and method for fine granular scalable video with selective quality enhancement
WO2001020912A1 (en) * 1999-09-14 2001-03-22 Koninklijke Philips Electronics N.V. Method and device for identifying block artifacts in digital video pictures
US7203234B1 (en) * 2000-03-31 2007-04-10 Sharp Laboratories Of America, Inc. Method of directional filtering for post-processing compressed video
US6993191B2 (en) * 2001-05-04 2006-01-31 Pts Corporation Methods and apparatus for removing compression artifacts in video sequences
US7003174B2 (en) * 2001-07-02 2006-02-21 Corel Corporation Removal of block encoding artifacts
US6895121B2 (en) * 2001-07-03 2005-05-17 Eastman Kodak Company Method for utilizing subject content analysis for producing a compressed bit stream from a digital image
WO2004049243A1 (en) * 2002-11-25 2004-06-10 Sarnoff Corporation Method and apparatus for measuring quality of compressed video sequences without references
KR100504824B1 (en) * 2003-04-08 2005-07-29 엘지전자 주식회사 A device and a method of revising image signal with block error
EP1634458B1 (en) * 2003-06-16 2011-08-17 Thomson Licensing Decoding method and apparatus enabling fast channel change of compressed video
US7346226B2 (en) * 2003-12-16 2008-03-18 Genesis Microchip Inc. Method and apparatus for MPEG artifacts reduction
FI116959B (en) * 2004-03-17 2006-04-13 Nokia Corp An electronic device and a method for processing image data in an electronic device
US7255500B2 (en) * 2004-11-05 2007-08-14 Konica Minolta Medical & Graphic, Inc. Heat developing method and heat developing apparatus
US7848408B2 (en) * 2005-01-28 2010-12-07 Broadcom Corporation Method and system for parameter generation for digital noise reduction based on bitstream properties
US7933328B2 (en) * 2005-02-02 2011-04-26 Broadcom Corporation Rate control for digital video compression processing
WO2008085425A2 (en) * 2006-12-28 2008-07-17 Thomson Licensing Detecting block artifacts in coded images and video
CA2674149A1 (en) * 2006-12-28 2008-07-17 Thomson Licensing Banding artifact detection in digital video content
CA2675758C (en) * 2007-01-19 2015-05-19 Thomson Licensing Reducing contours in digital images

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080123989A1 (en) * 2006-11-29 2008-05-29 Chih Jung Lin Image processing method and image processing apparatus
WO2008088482A1 (en) * 2006-12-28 2008-07-24 Thomson Licensing Method and apparatus for automatic visual artifact analysis and artifact reduction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GABBOUJ M ET AL: "AN OVERVIEW OF MEDIAN AND STACK FILTERING", CIRCUITS, SYSTEMS AND SIGNAL PROCESSING, CAMBRIDGE, MS, US, vol. 11, no. 1, 1 January 1992 (1992-01-01), pages 7 - 45, XP000613176, ISSN: 0278-081X *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012024089A2 (en) * 2010-08-20 2012-02-23 Intel Corporation Techniques for identifying block artifacts
WO2012024089A3 (en) * 2010-08-20 2012-04-26 Intel Corporation Techniques for identifying block artifacts
CN103119939A (en) * 2010-08-20 2013-05-22 英特尔公司 Techniques for identifying block artifacts
US8542751B2 (en) 2010-08-20 2013-09-24 Intel Corporation Techniques for identifying and reducing block artifacts
CN103119939B (en) * 2010-08-20 2016-06-08 英特尔公司 For identifying the technology of blocking effect
JPWO2014091984A1 (en) * 2012-12-13 2017-01-12 ソニー株式会社 Transmitting apparatus, transmitting method, receiving apparatus, and receiving method
US11032446B2 (en) 2016-11-17 2021-06-08 Sony Interactive Entertainment Inc. Image processing device, image processing method, and program for correcting color in an image
WO2023235730A1 (en) * 2022-05-31 2023-12-07 Netflix, Inc. Banding artifact detector

Also Published As

Publication number Publication date
US20110129020A1 (en) 2011-06-02
KR101441175B1 (en) 2014-09-18
CN102119401A (en) 2011-07-06
CN102119401B (en) 2013-12-04
JP2011530857A (en) 2011-12-22
EP2311007B1 (en) 2016-12-28
JP5276170B2 (en) 2013-08-28
BRPI0822999A2 (en) 2019-05-07
KR20110043649A (en) 2011-04-27
EP2311007A1 (en) 2011-04-20

Similar Documents

Publication Publication Date Title
EP2311007B1 (en) Method and apparatus for banding artifact detection
US9967556B2 (en) Video coding method using at least evaluated visual quality and related video coding apparatus
US8204334B2 (en) Adaptive pixel-based filtering
US8218082B2 (en) Content adaptive noise reduction filtering for image signals
EP2321796B1 (en) Method and apparatus for detecting dark noise artifacts
KR20070116717A (en) Method and device for measuring mpeg noise strength of compressed digital image
KR20090101911A (en) Detecting block artifacts in coded image and video
US7031388B2 (en) System for and method of sharpness enhancement for coded digital video
KR100683060B1 (en) Device and method for deblocking of video frame
Nadernejad et al. Adaptive deblocking and deringing of H.264/AVC video sequences
Casali et al. Adaptive quantisation in HEVC for contouring artefacts removal in UHD content
WO2010021039A1 (en) Image processing device, image processing method, and image processing program
Hwang et al. Enhanced film grain noise removal and synthesis for high fidelity video coding
JP2008079148A (en) Encoder
JP2007538451A (en) Algorithms to reduce artifacts in decoded video
Engelke et al. Quality Assessment of an Adaptive Filter for Artifact Reduction in Mobile Video Sequences
Boroczky et al. Post-processing of compressed video using a unified metric for digital video processing

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase
Ref document number: 200880130700.5
Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 08795142
Country of ref document: EP
Kind code of ref document: A1

WWE Wipo information: entry into national phase
Ref document number: 12737662
Country of ref document: US

ENP Entry into the national phase
Ref document number: 2011522034
Country of ref document: JP
Kind code of ref document: A
Ref document number: 20117002878
Country of ref document: KR
Kind code of ref document: A

NENP Non-entry into the national phase
Ref country code: DE

REEP Request for entry into the european phase
Ref document number: 2008795142
Country of ref document: EP

WWE Wipo information: entry into national phase
Ref document number: 2008795142
Country of ref document: EP

ENP Entry into the national phase
Ref document number: PI0822999
Country of ref document: BR
Kind code of ref document: A2
Effective date: 20110202