WO2006072913A1 - Image processor comprising a sharpness enhancement device (Processeur d'images comportant un dispositif d'amélioration de netteté) - Google Patents

Image processor comprising a sharpness enhancement device (Processeur d'images comportant un dispositif d'amélioration de netteté)

Info

Publication number
WO2006072913A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
filter
texture
quantization parameter
Prior art date
Application number
PCT/IB2006/050039
Other languages
English (en)
Inventor
Antoine Chouly
Estelle Lesellier
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2006072913A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the region being a block, e.g. a macroblock
    • H04N19/189 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding

Definitions

  • An aspect of the invention relates to an image processor that comprises a sharpness enhancer.
  • the image processor may process, for example, a series of successive images that form a video.
  • the image processor may be implemented in the form of, for example, a suitably programmed multi-purpose microprocessor.
  • Other aspects of the invention relate to a method of processing an image, a computer program product for an image processor, and an image-rendering apparatus.
  • the image-rendering apparatus may be, for example, a cellular phone or a personal digital assistant (PDA).
  • US patent number 4,571,635 describes a method of enhancing images.
  • a point-by-point record of an image is made with successive pixels in a logical array. The standard deviation of the pixels is determined.
  • an effective central pixel value is determined.
  • An image is displayed or recorded using the determined central pixel values. The image will show enhanced detail relative to an original image.
  • an image processor has the following characteristics.
  • the image processor processes an image that has been compressed and subsequently decompressed.
  • the image processor comprises a sharpness enhancer.
  • the sharpness enhancer applies a peaking function to at least a portion of the image.
  • the peaking function depends on a quantization parameter that represents an extent to which the image has been compressed.
  • the invention takes the following aspects into consideration.
  • Many image and video encoding techniques compress an image in the sense that an encoded image comprises less data than the original image.
  • Image compression generally introduces a loss of information.
  • a decompressed image will have certain artifacts due to the loss of information. The greater the extent to which the image is compressed, the greater the loss of information will be, and, consequently, the stronger the artifacts will be.
  • MPEG2 and MPEG4 are examples of video encoding techniques that may introduce a loss of information.
  • An MPEG encoder typically divides an image into blocks of pixels. The MPEG encoder establishes a set of frequency coefficients for each block.
  • the MPEG encoder quantizes the set of frequency coefficients. Each frequency coefficient is rounded off to a nearest value in a limited set of different possible values. This is a form of data compression that introduces a loss of information.
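  • The rounding-off described above can be sketched as follows (Python with NumPy; the step size and coefficient values are illustrative assumptions, not the MPEG-specified quantizer, which also applies per-frequency weighting matrices):

```python
import numpy as np

def quantize(coeffs, step):
    """Round each frequency coefficient to the nearest multiple of the
    quantization step size. Illustrative only: a real MPEG quantizer also
    applies per-frequency weighting matrices and dead zones."""
    coeffs = np.asarray(coeffs, dtype=float)
    return np.round(coeffs / step) * step

# A coarser step size leaves fewer possible values, so more information is lost.
block_coeffs = np.array([13.0, -7.0, 2.0, 0.4])
mild = quantize(block_coeffs, 1)    # -> [13., -7., 2., 0.]
strong = quantize(block_coeffs, 8)  # -> [16., -8., 0., 0.]
```

The difference between `block_coeffs` and its quantized version is exactly the rounding error that gives rise to the artifacts discussed next.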
  • An MPEG-decoded image will have certain artifacts due to this loss of information. For example, sufficiently visible blocks may appear in the MPEG-decoded image. These so-called block effects degrade image quality as perceived by human beings.
  • a sharpness enhancer typically applies a peaking function, which enhances differences between a pixel and neighboring pixels. Such differences may originate from an original image as captured by a camera, for example. However, such differences may also be due to a loss of information, which results in artifacts, as described hereinbefore.
  • a sharpness enhancer may amplify artifacts, such as block effects, so that these become more visible. Let it be assumed, for example, that the prior-art sharpness enhancer, which has been identified hereinbefore, is used for enhancing an MPEG-decoded image. There is a serious risk that the enhanced image will be perceived as having a lesser quality compared with the decoded image that has not been enhanced. In popular terms, the medicine may be worse than the illness. This is particularly true in cases where high video compression rates are applied because the loss of information will be significant in such cases.
  • the sharpness enhancer applies a peaking function that depends on a quantization parameter, which represents an extent to which the image has been compressed.
  • the invention allows that differences between a pixel and neighboring pixels are amplified with a factor that depends on a compression rate.
  • the factor of amplification can be relatively high when the compression rate is relatively modest. Details in the original image will be enhanced to a relatively great extent. Any artifacts will generally be relatively weak so that, even when amplified to a relatively great extent, the amplified artifacts will be acceptable.
  • the factor of amplification is preferably relatively low when the compression rate is relatively high. Artifacts will be relatively strong. Too much amplification will generally degrade image quality.
  • the invention allows a satisfactory, adaptive compromise between enhancement of details in the original image, on the one hand, and artifact amplification, on the other hand. For those reasons, the invention allows a satisfactory image quality, in particular in cases where variable video compression rates are applied.
  • FIG. 1 is a block diagram that illustrates a portable video apparatus.
  • FIG. 2 is a block diagram that illustrates a video processor, which forms part of the portable video apparatus.
  • FIG. 3 is a functional diagram that illustrates operations that the video processor carries out.
  • FIG. 4 is a diagram that illustrates an image comprising blocks of pixels.
  • FIG. 5 is a functional diagram that illustrates a sharpness enhancer that forms part of the video processor.
  • FIG. 6 is a state diagram that illustrates the three-state module, which forms part of the sharpness enhancer.
  • FIGS. 7A and 7B are tables that illustrate a control module, which forms part of the sharpness enhancer.
  • FIG. 8 is a functional diagram that illustrates a peaking filter module that forms part of the sharpness enhancer.
  • FIGS. 9A, 9B, and 9C are diagrams that illustrate a standard filter window of the peaking filter module.
  • FIGS. 10A, 10B, and 10C are diagrams that illustrate a vertical-boundary filter window of the peaking filter module.
  • FIGS. 11A, 11B, and 11C are diagrams that illustrate a horizontal-boundary filter window of the peaking filter module.
  • FIG. 12 is a graph that illustrates a clipping operation within the sharpness enhancer.
  • FIG. 13 is a graph that illustrates a filter window of a smoothing function within the sharpness enhancer.
  • FIG. 1 illustrates a portable video apparatus PVA, which may be, for example, a cellular phone or a personal digital assistant (PDA).
  • the portable video apparatus PVA comprises a receiver REC, a video processor VPR, and a display device DPL.
  • the receiver REC retrieves a coded video signal VC from a received input signal INP.
  • the coded video signal VC results from an encoding step performed at a transmitting end on a sequence of images.
  • the coded video signal VC may be, for example, an MPEG4 transport stream.
  • the coded video signal VC may also result from an encoding of a single image, a so-called still picture.
  • the video processor VPR decodes the coded video signal VC.
  • the video processor VPR carries out other video processing so as to obtain a video signal VID suitable for display.
  • the display device DPL displays the video display signal VID.
  • FIG. 2 illustrates the video processor VPR.
  • the video processor VPR comprises an input buffer IBU, a processing circuit CPU, a program memory PMEM, a data memory DMEM, an output buffer OBU, and a bus BS, which couples the aforementioned elements to each other.
  • the video processor VPR carries out various different video-processing operations.
  • the program memory PMEM comprises a set of instructions, i.e. software, which causes the processing circuit CPU to effect these various different video-processing operations.
  • a video-processing operation typically results from an execution of a software module, which may be in the form of, for example, a subroutine.
  • the data memory DMEM stores intermediate results of video-processing operations.
  • An operation may be defined by a software module, such as, for example, a subroutine.
  • FIG. 3 is a functional diagram of the video processor VPR, which illustrates operations that the video processor VPR carries out.
  • operations, or functions, are represented as blocks.
  • a block may thus correspond to a software module in the form of, for example, a subroutine.
  • the various blocks will be described hereinafter as if they were functional entities for reasons of ease of description.
  • FIG. 3 illustrates that the video processor VPR comprises the following functional entities: a video decoder DEC, a decoding postprocessor DPP, a sharpness enhancer ENH, and a video driver DRV.
  • the video decoder DEC decodes the coded video signal VC so as to obtain a decoded video signal VD.
  • the video decoder DEC may be, for example, compliant with the MPEG4 standard so as to decode the aforementioned MPEG4 transport stream.
  • the decoding postprocessor DPP processes the decoded video signal VD so as to attenuate certain artifacts that are related to the video coding technique by means of which the coded video signal VC has been obtained.
  • such artifacts may include so-called blocking and ringing effects that degrade image quality as perceived by human beings.
  • the decoding postprocessor DPP provides a post-processed decoded video signal VDP in which such blocking and ringing effects are attenuated.
  • the sharpness enhancer ENH processes the post-processed decoded video signal VDP so as to enhance the sharpness of images that the coded video signal VC represents.
  • the decoding postprocessor DPP and the sharpness enhancer ENH thus improve the subjective quality of images displayed on the display device DPL illustrated in FIG. 1.
  • the video driver DRV receives an enhanced post-processed decoded video signal VDPE from the sharpness enhancer ENH and processes this signal for delivering the video signal VID, for the purpose of display on the display device DPL.
  • This processing may include, for example, video format conversion, amplification, and contrast, brightness and color adjustments.
  • FIG. 4 illustrates an image IM in the video signal VID for display on the display device DPL.
  • the image is formed by various blocks of pixels B.
  • a block can be regarded as a matrix of 64 pixels, the matrix having 8 rows and 8 columns. This block-wise composition of an image is typical for many video encoding techniques.
  • MPEG2 and MPEG4 are examples.
  • An MPEG encoder typically divides an image into blocks of pixels.
  • the MPEG encoder establishes a set of frequency coefficients for each block. Subsequently, the MPEG encoder quantizes the set of frequency coefficients, which is a form of data compression. Each frequency coefficient is rounded off to a nearest value in a limited set of different possible values.
  • a quantization step size determines the number of different possible values that a frequency coefficient may have. The quantization step size may vary from one block of pixels to another.
  • the decoded video signal VD can be regarded as a stream of blocks of pixels.
  • the decoding postprocessor DPP may comprise, for example, a memory for temporarily storing a block of pixels and blocks of pixels adjacent thereto. This memory will physically form part of the data memory DMEM, which is illustrated in FIG. 2.
  • a set of memory locations, defined by addresses, within the data memory DMEM is statically or dynamically assigned to the decoding postprocessor DPP. The same applies to the sharpness enhancer ENH.
  • FIG. 5 is a functional diagram that illustrates the sharpness enhancer ENH.
  • the sharpness enhancer ENH comprises a filter FIL and a filter controller FCTRL.
  • the filter controller FCTRL comprises a variance calculator VAC, an activity detector ADT, a counter CNT, a texture detector TDT, a three-state module TSM, and a control module CTM.
  • the filter controller FCTRL receives a quantization parameter Qp from the video decoder DEC illustrated in FIG. 3.
  • the quantization parameter Qp is associated with a current image that the sharpness enhancer ENH processes.
  • the quantization parameter Qp indicates the extent to which the current image has been compressed in the encoding at the transmitting end.
  • the quantization parameter Qp may be, for example, an average quantization step size for the blocks of pixels that form the current image.
  • the quantization parameter Qp may have a value between 0 and 31, for example. The higher the value, the greater the extent to which the current image has been compressed in the encoding at the transmitting end.
  • the value 1 indicates that no rounding-off has taken place in the encoding at the transmitting end.
  • the value 0 is a special case: it indicates that no encoding has taken place at the transmitting end.
  • the video decoder DEC which is illustrated in FIG. 3, is idle.
  • the sharpness enhancer ENH processes pixels within a block on a pixel-by-pixel basis.
  • the filter FIL provides an output pixel Yo for each input pixel Yi.
  • the output pixel Yo results from a filter function that the filter FIL applies to the input pixel Yi and neighboring pixels.
  • the filter controller FCTRL determines the filter function.
  • the filter function may be a first peaking function PK1, a second peaking function PK2, a third peaking function PK3, a fourth peaking function PK4, a neutral function NEU, or a smoothing function SMT.
  • the peaking functions PK can be associated with a high-pass filter, the neutral function NEU with an all-pass filter, and the smoothing function SMT with a low-pass filter.
  • the filter controller FCTRL determines the filter function in the following manner.
  • the variance calculator VAC calculates a variance value W for a set of pixels that comprises the input pixel Yi and neighboring pixels. For example, a window of 3-by-3 pixels may define a set of pixels. The input pixel Yi typically has a center position in the window. The pixels within the window have a particular statistical variance.
  • the variance value W represents that statistical variance.
  • a relatively low variance value indicates that the window, of which the input pixel Yi forms part, is rather uniform. There are few details. Conversely, a relatively high variance value indicates that the window comprises relatively many details.
  • the activity detector ADT provides an activity indication AI on the basis of the variance value W. To that end, the activity detector ADT compares the variance value W with an activity threshold value ATH.
  • the activity indication AI has a low-activity value LA if the variance value W is below the activity threshold value ATH.
  • the window, of which the input pixel Yi forms part, is substantially uniform. There is little activity, as it were.
  • the activity indication AI has a high-activity value HA if the variance value W is above the activity threshold value ATH, or equal thereto. In that case, the window comprises relatively many details. There is much activity, as it were.
  • the counter CNT has a counter value CV that increments by one unit if the activity indication AI has the high-activity value HA.
  • the counter value CV is zero (0) when the sharpness enhancer ENH starts to process an image. Let it be assumed that the sharpness enhancer ENH has finished processing the image.
  • the counter value CV then represents a number of pixels in the image for which the activity indication AI had the high-activity value HA.
  • the counter CNT is reset when the sharpness enhancer ENH starts to process a new image.
  • the counter value CV is set to zero (0) again. For each image, the counter CNT thus counts the number of pixels in the image for which the activity indication AI has the high-activity value HA. A count cycle corresponds with an image.
  • the texture detector TDT provides a texture indication TI on the basis of the counter value CV. To that end, the texture detector TDT compares the counter value CV with a texture threshold value TTH. This comparison is made when the sharpness enhancer ENH has finished processing an image.
  • the texture indication TI has a little-texture value LT if the counter value CV is below the texture threshold value TTH. There are relatively few pixels for which there is much activity. This implies that there is little texture within the image. In contradistinction, the texture indication TI has a much-texture value MT if the counter value CV is above the texture threshold value TTH, or equal thereto. There are relatively many pixels for which there is much activity. This implies that there is much texture within the image.
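  • The chain of variance calculator, activity detector, counter, and texture detector can be sketched as follows (Python with NumPy; the threshold values ATH and TTH are illustrative assumptions, since the text does not specify them):

```python
import numpy as np

ATH = 25.0  # activity threshold ATH (assumed illustrative value)
TTH = 4     # texture threshold TTH (assumed illustrative value)

def activity_indication(window):
    """'HA' if the variance of the 3-by-3 window reaches ATH, else 'LA'."""
    return "HA" if np.var(window) >= ATH else "LA"

def texture_indication(image):
    """Count the pixels whose window has high activity; 'MT' (much texture)
    if the count reaches TTH once the image is processed, else 'LT'."""
    h, w = image.shape
    count = 0  # the counter CNT, reset at the start of each image
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if activity_indication(image[y - 1:y + 2, x - 1:x + 2]) == "HA":
                count += 1
    return "MT" if count >= TTH else "LT"
```

A uniform image yields 'LT' (every window variance is 0), while a high-contrast pattern yields 'MT'.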
  • the three-state module TSM can have any of the following states: an active texture state SATX, a passive texture state SPTX, and a no-texture state SNTX. Only one state applies to a particular image. That state can be any one of the three aforementioned states.
  • the state that applies to an image depends on the texture indication TI of another image, which the sharpness enhancer ENH has previously processed, and also on the quantization parameter Qp of the image concerned.
  • the state that applies to a current image "n" depends on the quantization parameter Qp of the current image "n".
  • the state that applies further depends on the texture indication TI of a preceding image "n-1", and also on the state that applied to the preceding image "n-1", in other words, the previous state.
  • the three-state module TSM provides an inhibit parameter INH and a state indication SI.
  • the inhibit parameter INH determines whether the variance calculator VAC, the activity detector ADT, the counter CNT, and the texture detector TDT are active or not. It has been mentioned hereinbefore that the aforementioned functions can be implemented in the form of software modules. In such an implementation, the inhibit parameter INH determines whether these software modules are executed, or not, for a particular image.
  • the state indication SI indicates whether the three-state module TSM is in the active texture state SATX, the passive texture state SPTX, or the no-texture state SNTX.
  • FIG. 6 illustrates the three-state module TSM. Furthermore, FIG. 6 illustrates the filter function that the filter FIL applies.
  • the filter function that the filter FIL applies depends on the quantization parameter Qp only.
  • the activity indication AI plays no role.
  • the inhibit parameter INH has a No value (N) in the active texture state SATX.
  • the No value means that the activity detector ADT, the counter CNT, and the texture detector TDT are active.
  • in the passive texture state SPTX, the filter FIL applies the second peaking function PK2 regardless of the quantization parameter Qp and the activity indication AI.
  • the inhibit parameter INH has a Yes value (Y) in the passive texture state SPTX.
  • the Yes value means that the activity detector ADT, the counter CNT, and the texture detector TDT are idle.
  • the three-state module TSM inhibits the aforementioned functions.
  • in the no-texture state SNTX, the filter function that the filter FIL applies depends on the quantization parameter Qp and the activity indication AI.
  • the inhibit parameter INH has the No value. Consequently, the activity detector ADT, the counter CNT, and the texture detector TDT are active.
  • the three-state module TSM is in the active texture state SATX, which applies to a current image "n". Any of the three aforementioned states can apply to a subsequent image "n+1". That is, the three-state module TSM can either jump, as it were, to the passive texture state SPTX, remain in the active texture state SATX, or jump to the no-texture state SNTX.
  • the three-state module TSM jumps to the passive texture state SPTX if the texture indication TI of the current image "n" has the much-texture value MT and the quantization parameter Qp of the subsequent image "n+1" is greater than 1.
  • the three-state module TSM remains in the active texture state SATX if the texture indication TI of the current image "n" has the much-texture value MT and the quantization parameter Qp of the subsequent image "n+1" is 0 or 1.
  • the three-state module TSM jumps to the no-texture state SNTX if the texture indication TI of the current image "n" has the little-texture value LT.
  • the three-state module TSM is in the passive texture state SPTX, which applies to a current image "m".
  • the active texture state SATX will systematically apply to a subsequent image "m+1".
  • the three-state module TSM systematically jumps from the passive texture state SPTX to the active texture state SATX.
  • the texture indication TI plays no role. Neither does the quantization parameter Qp.
  • the texture indication TI can play no role because, in the passive texture state SPTX, the activity detector ADT, the counter CNT, and the texture detector TDT are idle. Consequently, the filter controller FCTRL cannot establish the texture indication TI in the passive texture state SPTX.
  • the three-state module TSM is in the no-texture state SNTX, which applies to a particular image "k".
  • the active texture state SATX will apply to a subsequent image "k+1" if the texture indication TI has the much-texture value MT.
  • the much-texture value MT causes the three-state module TSM to jump from the no-texture state SNTX to the active texture state SATX.
  • the no-texture state SNTX will newly apply to the subsequent image "k+1" if the texture indication TI has the little-texture value LT.
  • the little-texture value LT causes the three-state module TSM to remain in the no- texture state SNTX.
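  • The transitions described above can be summarized in a small sketch (Python; the state and value names follow the text, and the SPTX behavior of ignoring the texture indication reflects the idle detectors):

```python
SATX, SPTX, SNTX = "active-texture", "passive-texture", "no-texture"

def next_state(state, ti, qp_next):
    """Next state of the three-state module TSM.

    ti      -- texture indication of the current image ('MT' or 'LT');
               ignored in SPTX, where the texture detector is idle
    qp_next -- quantization parameter Qp of the subsequent image
    """
    if state == SPTX:
        return SATX                        # systematic jump back to SATX
    if state == SNTX:
        return SATX if ti == "MT" else SNTX
    # active texture state SATX
    if ti == "LT":
        return SNTX
    return SATX if qp_next <= 1 else SPTX  # MT: stay if Qp is 0 or 1
```
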
  • FIGS. 7A and 7B illustrate the control module CTM.
  • the control module CTM determines the filter function that the filter FIL applies as a function of the quantization parameter Qp, the activity indication AI, and the state indication SI.
  • FIG. 6 has already illustrated that the filter FIL applies the second peaking function PK2 if the state indication SI indicates that the three-state module TSM is in the passive texture state SPTX. In that particular case, the activity indication AI plays no role. Neither does the quantization parameter Qp.
  • FIG. 7A illustrates the filter function that the filter FIL applies when the three-state module TSM is in the active texture state SATX. As mentioned hereinbefore, the filter function depends on the quantization parameter Qp in that case.
  • FIG. 7A illustrates that the filter FIL applies the first peaking function PK1 when the quantization parameter Qp is 0 or 1.
  • the filter FIL applies the second peaking function PK2 when the quantization parameter Qp is between 2 and 6, or between 7 and 31.
  • FIG. 7B illustrates the filter function that the filter FIL applies when the three-state module TSM is in the no-texture state SNTX.
  • the filter function depends on the quantization parameter Qp and the activity indication AI.
  • FIG. 7B is a table that comprises three columns that represent the quantization parameter Qp. Each column represents a different range of values.
  • the table further comprises two rows that represent the activity indication AI. One row represents that the activity indication AI has the low-activity value LA. The other row represents that the activity indication AI has the high-activity value HA.
  • FIG. 7B illustrates that the filter FIL applies the neutral function NEU when the activity indication AI has the low-activity value LA, and the quantization parameter Qp is 0 or 1, or between 2 and 6.
  • the filter FIL applies the smoothing function SMT when the activity indication AI has the low-activity value LA, and the quantization parameter Qp is between 7 and 31.
  • the filter FIL applies the third peaking function PK3 when the activity indication AI has the high-activity value HA, and when the quantization parameter Qp is 0 or 1.
  • the filter FIL applies the fourth peaking function PK4 when the activity indication AI has the high-activity value HA, and when the quantization parameter Qp is between 2 and 6, or between 7 and 31.
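  • Combining FIG. 6 with the tables of FIGS. 7A and 7B, the selection made by the control module CTM can be sketched as follows (Python; the Qp ranges are taken from the text):

```python
def select_filter(state, qp, ai):
    """Filter function chosen by the control module CTM.

    state -- 'active-texture', 'passive-texture' or 'no-texture'
    qp    -- quantization parameter Qp (0..31)
    ai    -- activity indication ('LA' or 'HA'); only used in the
             no-texture state
    """
    if state == "passive-texture":
        return "PK2"                       # regardless of qp and ai
    if state == "active-texture":
        return "PK1" if qp <= 1 else "PK2"
    # no-texture state: FIG. 7B
    if ai == "LA":
        return "NEU" if qp <= 6 else "SMT"
    return "PK3" if qp <= 1 else "PK4"
```
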
  • FIG. 8 illustrates a peaking filter module PKF, which forms part of the filter FIL illustrated in FIG. 5.
  • the peaking filter module PKF can provide any of the aforementioned four peaking functions PK1, PK2, PK3, PK4.
  • the peaking filter module PKF comprises a high-pass filter HPF, a clipper CLP, a scaler SCL, and an adder ADD.
  • the peaking filter module PKF globally operates as follows.
  • the high-pass filter HPF provides a high-pass filtered pixel L on the basis of the input pixel Yi and neighboring pixels.
  • the clipper CLP, which receives the high-pass filtered pixel L, establishes a clipped high-pass filtered pixel Lc.
  • the scaler SCL scales the clipped high-pass filtered pixel Lc so as to obtain a clipped-and-scaled high-pass filtered pixel KpLc.
  • the adder ADD adds the clipped-and-scaled high-pass filtered pixel KpLc to the input pixel Yi.
  • the high-pass filter HPF makes a weighted combination of pixels that lie within a filter window.
  • the filter window comprises the input pixel Yi and neighboring pixels.
  • the filter window provides a filter coefficient for each pixel within the filter window.
  • the high-pass filter HPF multiplies each pixel with the filter coefficient that the filter window provides for that pixel. Accordingly, a set of weighted pixels is obtained.
  • the respective filter coefficients constitute respective weighting factors.
  • the high-pass filtered pixel L is the sum of the weighted pixels.
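  • One output pixel of the chain HPF, clipper, scaler, and adder can be sketched as follows (Python with NumPy; the 3-by-3 Laplacian-style window, the gain Kp, and the clip limit are illustrative assumptions, since the actual coefficients are those of FIGS. 9A-9C):

```python
import numpy as np

# Assumed high-pass filter window; the patent's coefficients are in FIGS. 9A-9C.
WST = np.array([[ 0, -1,  0],
                [-1,  4, -1],
                [ 0, -1,  0]], dtype=float)

def peaking_pixel(window, kp=0.5, clip_limit=25.0):
    """Output pixel Yo for the 3-by-3 window centered on the input pixel Yi.
    kp and clip_limit are illustrative values for the scaler SCL and the
    clipper CLP."""
    yi = window[1, 1]                              # input pixel Yi
    l = float(np.sum(window * WST))                # high-pass filtered pixel L
    lc = max(-clip_limit, min(clip_limit, l))      # clipped pixel Lc
    return yi + kp * lc                            # adder ADD: Yi + Kp*Lc

flat = np.full((3, 3), 50.0)    # uniform window: L = 0, pixel unchanged
spike = np.array([[10, 10, 10],
                  [10, 20, 10],
                  [10, 10, 10]], dtype=float)
```

For the uniform window the output equals the input; for the spike, L = 40 is clipped to 25 and the center pixel 20 becomes 20 + 0.5 * 25 = 32.5.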
  • the filter window is fixed when the peaking filter module PKF provides the first peaking function PK1 or the third peaking function PK3. That is, the filter window is the same for all input pixels Yi when either of the aforementioned filter functions applies.
  • the filter window is adapted when the peaking filter module PKF provides the second peaking function PK2 or the fourth peaking function PK4.
  • the filter window is adapted near a block boundary so that the filter window remains within the block concerned.
  • Such a filter window adaptation avoids enhancement of certain coding artifacts, such as, for example, so-called block effects. Coding artifacts are particularly strong if the quantization parameter Qp has a relatively high value, because rounding errors are significant in that case, as explained hereinbefore.
  • the peaking filter module PKF provides the first peaking function PK1 or the third peaking function PK3 when the quantization parameter Qp is equal to 0 or 1.
  • the peaking filter module PKF provides the second peaking function PK2 or the fourth peaking function PK4 when the quantization parameter Qp is equal to 2 or a higher value (see FIGS. 7 A and 7B). Consequently, the sharpness enhancement ENH adapts the filter window near block boundaries when the quantization parameter Qp has a relatively high value, but not when the quantization parameter has a relatively low value. This contributes to a satisfactory image quality.
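The selection just described can be summarised in a short sketch. The Qp thresholds follow the text above; collapsing the texture/activity indication into a single boolean flag is a simplification of the state machine discussed elsewhere in the description:

```python
def select_peaking_function(qp, textured):
    """Choose a peaking function from the quantization parameter Qp and a
    texture/activity indication (True for textured or high-activity areas)."""
    if qp <= 1:
        # Qp equal to 0 or 1: fixed filter window for all input pixels
        return "PK3" if textured else "PK1"
    # Qp equal to 2 or higher: filter window adapted near block boundaries
    return "PK4" if textured else "PK2"
```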
  • FIGS. 9A-9C, 10A-10C, and 11A-11C illustrate various different filter windows of the high-pass filter HPF.
  • Each of the aforementioned figures shows a block of 8 by 8 pixels.
  • the rows and columns of pixels are numbered from 0 to 7. This numbering allows identification of each individual pixel by means of coordinates. For example, a pixel that is in column number 5 and in row number 2 is designated as Y(5,2).
  • FIG. 9A illustrates a standard filter window Wst.
  • the standard filter window Wst applies to all pixels for the first peaking function PK1 and the third peaking function PK3. However, the standard filter window Wst applies only to inner pixels of a block for the second peaking function PK2 and the fourth peaking function PK4.
  • FIG. 9A illustrates a case in which pixel Y(5,2) is the input pixel Yi for which the high-pass filter HPF establishes an output pixel Yo.
  • FIG. 9A shows numerals in the standard filter window Wst. These numerals represent filter coefficients.
  • the standard filter window Wst comprises a center pixel for which the filter coefficient is 4.
  • the center pixel is pixel Y(5,2), which is the input pixel Yi.
  • the standard filter window Wst further comprises an upper-center pixel, a lower-center pixel, a left-center pixel, and a right-center pixel for which the filter coefficient is -1. These are pixels Y(5,1), Y(5,3), Y(4,2), and Y(6,2), respectively. These are neighbors of the input pixel Yi.
  • FIG. 9B illustrates a horizontal filter window Wh.
  • the horizontal filter window Wh comprises a center pixel, a left adjacent pixel, and a right adjacent pixel. These are pixels Y(5,2), Y(4,2), and Y(6,2), respectively.
  • the center pixel coincides with the input pixel Yi.
  • the filter coefficient for the center pixel is 2.
  • the filter coefficient for the left adjacent pixel and the right adjacent pixel is -1.
  • FIG. 9C illustrates a vertical filter window Wv.
  • the vertical filter window Wv comprises a center pixel, an upper adjacent pixel, and a lower adjacent pixel. These are pixels Y(5,2), Y(5,1), and Y(5,3), respectively.
  • the center pixel coincides with the input pixel Yi.
  • the filter coefficient for the center pixel is 2.
  • the filter coefficient for the upper adjacent pixel and the lower adjacent pixel is -1.
  • the standard filter window Wst, which is illustrated in FIG. 9A, results from a combination of the horizontal filter window Wh illustrated in FIG. 9B and the vertical filter window Wv illustrated in FIG. 9C.
  • the respective filter coefficients of the horizontal filter window Wh and of the vertical filter window Wv are added.
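This coefficient addition can be checked with a short sketch (the dictionary representation of the windows is illustrative):

```python
wh = {(4, 2): -1, (5, 2): 2, (6, 2): -1}   # horizontal window Wh (FIG. 9B)
wv = {(5, 1): -1, (5, 2): 2, (5, 3): -1}   # vertical window Wv (FIG. 9C)

# Add the coefficients of Wh and Wv position by position.
wst = {}
for window in (wh, wv):
    for pos, coef in window.items():
        wst[pos] = wst.get(pos, 0) + coef

# wst now matches FIG. 9A: coefficient 4 at the centre pixel Y(5,2)
# and -1 at the four adjacent pixels.
```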
  • FIG. 10A illustrates a vertical-boundary filter window Wbv, which the high-pass filter HPF may have for the second peaking function PK2 and the fourth peaking function PK4.
  • the vertical-boundary filter window Wbv applies to pixels that form part of a vertical boundary of a block.
  • FIG. 10A illustrates a case in which pixel Y(0,3) is the input pixel Yi for which the high-pass filter HPF establishes an output pixel Yo.
  • FIG. 10A shows numerals in the vertical-boundary filter window Wbv. These numerals represent filter coefficients.
  • the vertical-boundary filter window Wbv comprises a left-center pixel for which the filter coefficient is 1.
  • the left-center pixel is pixel Y(0,3), which is the input pixel Yi.
  • the vertical-boundary filter window Wbv comprises a center pixel for which the filter coefficient is 2.
  • the center pixel is pixel Y(1,3).
  • the center pixel is not the input pixel Yi but a neighbor thereof.
  • the vertical-boundary filter window Wbv further comprises an upper-left pixel, a lower-left pixel, and a right-center pixel for which the filter coefficient is -1. These are pixels Y(0,2), Y(0,4), and Y(2,3), respectively.
  • FIGS. 10B and 10C illustrate a horizontal filter window Wh and a vertical filter window Wv. These filter windows are identical to the horizontal filter window Wh and the vertical filter window Wv illustrated in FIGS. 9B and 9C, respectively. Only the respective positions are different.
  • the vertical-boundary filter window Wbv, which is illustrated in FIG. 10A, results from a combination of the horizontal filter window Wh illustrated in FIG. 10B and the vertical filter window Wv illustrated in FIG. 10C.
  • the horizontal filter window Wh is positioned so that this filter window remains within the block of interest.
  • the horizontal filter window Wh would leave the block if the center pixel of this window were the input pixel Yi, which is pixel Y(0,3).
  • the horizontal filter window Wh is stopped against the vertical block boundary of interest.
  • the left-adjacent pixel of the horizontal filter window Wh is the input pixel Yi, instead of the center pixel.
  • the center pixel, which is pixel Y(1,3), is now a neighbor of the input pixel Yi.
  • Coefficients of the vertical-boundary filter window Wbv are obtained by adding coefficients of the horizontal filter window Wh and those of the vertical filter window Wv.
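The same addition, with Wh shifted one pixel into the block, reproduces the coefficients of FIG. 10A (sketch; the dictionary representation is illustrative):

```python
# Input pixel Yi is Y(0,3), on the vertical block boundary.
wh = {(0, 3): -1, (1, 3): 2, (2, 3): -1}   # Wh stopped against the boundary
wv = {(0, 2): -1, (0, 3): 2, (0, 4): -1}   # Wv still centred on Yi

# Add the coefficients of the repositioned Wh and of Wv.
wbv = {}
for window in (wh, wv):
    for pos, coef in window.items():
        wbv[pos] = wbv.get(pos, 0) + coef

# wbv matches FIG. 10A: 1 at Y(0,3), 2 at Y(1,3),
# and -1 at Y(0,2), Y(0,4), and Y(2,3).
```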
  • FIG. 11A illustrates a horizontal-boundary filter window Wbh, which the high-pass filter HPF may have for the second peaking function PK2 and the fourth peaking function PK4.
  • the horizontal-boundary filter window Wbh applies to pixels that form part of a horizontal boundary of a block.
  • FIG. 11A illustrates a case in which pixel Y(4,7) is the input pixel Yi for which the high-pass filter HPF establishes an output pixel Yo.
  • FIG. 11A shows numerals in the horizontal-boundary filter window Wbh. These numerals represent filter coefficients.
  • the horizontal-boundary filter window Wbh comprises a lower-center pixel for which the filter coefficient is 1.
  • the lower-center pixel is pixel Y(4,7), which is the input pixel Yi.
  • the horizontal-boundary filter window Wbh comprises a center pixel for which the filter coefficient is 2.
  • the center pixel is pixel Y(4,6).
  • the center pixel is not the input pixel Yi but a neighbor thereof.
  • the horizontal-boundary filter window Wbh further comprises a lower-left pixel, a lower-right pixel, and an upper-center pixel for which the filter coefficient is -1. These are pixels Y(3,7), Y(5,7), and Y(4,5), respectively. These are also neighbors of the input pixel Yi.
  • FIGS. 11B and 11C illustrate a horizontal filter window Wh and a vertical filter window Wv. Again, these filter windows are identical to the horizontal filter window Wh and the vertical filter window Wv illustrated in FIGS. 9B and 9C, respectively. Only the respective positions are different.
  • the horizontal-boundary filter window Wbh, which is illustrated in FIG. 11A, results from a combination of the horizontal filter window Wh illustrated in FIG. 11B and the vertical filter window Wv illustrated in FIG. 11C.
  • the vertical filter window Wv is positioned so that this filter window remains within the block of interest.
  • the vertical filter window Wv would leave the block if the center pixel of this window were the input pixel Yi, which is pixel Y(4,7).
  • the vertical filter window Wv is stopped against the horizontal block boundary of interest.
  • the lower-adjacent pixel of the horizontal filter window Wh is the input pixel Yi, instead of the center pixel.
  • the center pixel, which is pixel Y(4,6), is now a neighbor of the input pixel Yi.
  • Coefficients of the horizontal-boundary filter window Wbh are obtained by adding coefficients of the horizontal filter window Wh and those of the vertical filter window Wv.
  • in each of the combinations described hereinbefore, the horizontal filter window Wh and the vertical filter window Wv have only one pixel in common, which is the input pixel Yi.
  • FIG. 12 illustrates a transfer function of the clipper CLP.
  • the horizontal axis represents the value of the high-pass filtered pixel L that the clipper CLP receives.
  • the vertical axis represents the value of the clipped high-pass filtered pixel Lc that the clipper CLP provides.
  • the adder ADD, which is illustrated in FIG. 8, adds the clipped high-pass filtered pixel Lc to the input pixel Yi. Accordingly, a negative value of the high-pass filtered pixel L causes the peaked pixel Yp to be darker than the input pixel Yi. This can be regarded as a dark shift. Conversely, a positive value of the high-pass filtered pixel L causes the peaked pixel Yp to be brighter than the input pixel Yi. This corresponds to a bright shift.
  • FIG. 12 illustrates that the clipper CLP defines a desired range of values for the high-pass filtered pixel L.
  • the desired range lies between a negative clipping value NCL and a positive clipping value PCL.
  • the clipper CLP provides a clipped high-pass filtered pixel Lc whose value is identical to that of the high-pass filtered pixel L if the value of this pixel lies within the desired range.
  • the clipped high-pass filtered pixel Lc has the negative clipping value NCL if the high-pass filtered pixel L is below the negative clipping value NCL or equal thereto. This limits the dark shift.
  • the clipped high-pass filtered pixel Lc has the positive clipping value PCL if the high-pass filtered pixel L is above the positive clipping value PCL or equal thereto. This limits the bright shift. Too much dark shift or too much bright shift, or both, can cause an image to be perceived as unnatural.
  • the clipper CLP, which limits the dark shift and the bright shift, accounts for this.
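A minimal sketch of such an asymmetric hard clipper; the default values are the example clipping values that the text gives for the first and second peaking functions:

```python
def clip(l, ncl=-20, pcl=15):
    """Hard clipper CLP with the asymmetric transfer function of FIG. 12."""
    if l <= ncl:
        return ncl   # limits the dark shift
    if l >= pcl:
        return pcl   # limits the bright shift
    return l         # within the desired range: pass through unchanged
```

Because |NCL| exceeds PCL, the bright shift is limited to a greater extent than the dark shift, matching the asymmetry of FIG. 12.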
  • the negative clipping value NCL or the positive clipping value PCL, or both, may differ from one peaking function to another.
  • the negative clipping value NCL and the positive clipping value PCL may be -20 and +15, respectively, for the first peaking function PK1 and the second peaking function PK2.
  • the negative clipping value NCL and the positive clipping value PCL may be -50 and +15, respectively, for the third peaking function PK3 and the fourth peaking function PK4.
  • the negative clipping value NCL has a higher magnitude than the positive clipping value PCL.
  • FIG. 12 illustrates this: the transfer function of the clipper CLP is asymmetrical with respect to zero.
  • the clipper CLP limits the bright shift to a greater extent than the dark shift.
  • the clipper CLP allows a greater dark shift for the third peaking function PK3 and the fourth peaking function PK4, which apply in the active texture state SATX or the passive texture state SPTX, than for the first peaking function PK1 and the second peaking function PK2, which apply in the no-texture state SNTX.
  • the aforementioned values have empirically been established for an 8-bit pixel resolution and a particular display device. The values were found to provide a satisfactory image quality. Other values may be preferred for other pixel resolutions or other display devices. For example, a different display device may require an asymmetric clipping transfer function that is opposite to the transfer function illustrated in FIG. 12.
  • FIG. 13 illustrates a filter window for the smoothing function SMT, which the filter FIL illustrated in FIG. 5 may provide.
  • FIG. 13 shows numerals in the filter window. These numerals represent filter coefficients.
  • the filter window has a center pixel, which is the input pixel Yi. Other pixels are neighboring pixels.
  • the smoothing function SMT provides a weighted average of the input pixel Yi and the neighboring pixels. The weighted average constitutes the output pixel Yo.
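The text does not reproduce the numerals of FIG. 13, so the sketch below assumes a typical 3-by-3 Gaussian-like kernel for illustration only:

```python
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]   # assumed coefficients; they sum to 16

def smooth(patch):
    """Weighted average of a 3x3 patch centred on the input pixel Yi."""
    total = sum(KERNEL[r][c] * patch[r][c]
                for r in range(3) for c in range(3))
    return total / 16   # normalisation by the coefficient sum -> output Yo
```

Unlike the peaking windows, whose coefficients sum to zero, a smoothing kernel's coefficients sum to one after normalisation, so a flat area passes through unchanged.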
  • An image processor processes an image that has been compressed and subsequently decompressed (the image is comprised in the decoded video signal VD).
  • the image processor comprises a sharpness enhancer (ENH) that applies a peaking function (PK) to at least a portion of the image.
  • PK peaking function
  • Qp quantization parameter
  • the sharpness enhancer (ENH) applies another function (NEU, SMT) to another portion of the image, the other function depending on the quantization parameter (Qp).
  • the sharpness enhancer comprises an image-texture detecting arrangement (VAC, ADT, CNT, TDT, TSM) that establishes an image-texture indication (SI), which indicates an extent to which the image (VD) comprises details.
  • the sharpness enhancer applies the peaking function depending on the image-texture indication and the quantization parameter (Qp).
  • the sharpness enhancer comprises a variance calculator (VAC) that calculates a variance (VV) within a pixel area (or window) that comprises an input pixel (Yi) and neighboring pixels.
  • VAC variance calculator
  • the sharpness enhancer applies a filter function (PK1, PK2, PK3, PK4, NEU, SMT) to the input pixel (Yi) and neighboring pixels in dependence on the quantization parameter (Qp) and the variance (VV).
  • the image (VD) forms part of a series of successive images that form a video.
  • the image-texture detecting arrangement (VAC, ADT, CNT, TDT, TSM) establishes the image-texture indication (SI) on the basis of respective variances (VV) that the variance calculator (VAC) has calculated for respective pixels in another image of the video.
  • the sharpness enhancer comprises a high-pass filter (HPF) which provides a high-pass filtered pixel (L) in response to an input pixel (Yi), and a combiner (CLP, SCL, ADD) for combining the high-pass filtered pixel (L) and the input pixel (Yi) so as to obtain a peaked pixel (Yp).
  • HPF high-pass filter
  • CLP, SCL, ADD combiner
  • the combiner (CLP, SCL, ADD) comprises a clipper (CLP) which limits a dark shift and a bright shift that the high-pass filtered pixel (L) may cause in the output pixel (Yp) with respect to the input pixel (Yi).
  • CLP clipper
  • the image that is processed may be a so-called still image, such as a photo.
  • the image may have been compressed and decompressed by means of an encoding technique other than MPEG2 or MPEG4.
  • H.263 is an example.
  • the quantization parameter may be derived from an encoded video data stream in a direct or an indirect manner. For example, it is possible to estimate the quantization parameter on the basis of the amount of encoded data that represents the image of interest. This is an example of an indirect manner.
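A crude sketch of such an indirect estimate; the bits-per-pixel thresholds below are purely illustrative assumptions, not values from the patent:

```python
def estimate_qp(encoded_bits, width, height):
    """Guess a quantization parameter from the amount of encoded data:
    fewer bits per pixel suggests coarser quantization (higher Qp)."""
    bpp = encoded_bits / (width * height)
    if bpp > 1.0:
        return 1     # lightly compressed
    if bpp > 0.25:
        return 8     # moderately compressed
    return 24        # heavily compressed
```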
  • FIG. 5 is an example of a sophisticated implementation, which takes into account parameters other than the quantization parameter Qp.
  • simpler implementations, which take into account the quantization parameter Qp only, are also possible.
  • the sharpness enhancer ENH illustrated in FIG. 5 may be modified as follows. All elements are omitted except for the filter module PKF and the control module CTM, which remain. The control module can then be simplified because it only has to take into account the quantization parameter Qp.
  • the peaking filter module PKF illustrated in FIG. 8 is merely an example of such an implementation.
  • the peaking filter module PKF may be modified as follows. All elements are omitted except for the high-pass filter HPF. This is an example of a basic implementation.
  • the clipper CLP, which is illustrated in FIG. 8, may have a transfer function that provides a so-called soft clipping rather than the hard clipping illustrated in FIG. 12.
  • filter windows may comprise 2-by-2 pixels, 3-by-3 pixels, or any other size. The filter window may adapt in various different manners.
  • the sharpness enhancer in accordance with the invention may comprise a table that defines a suitable filter window and the coefficients therein, for each pixel within a block.
  • the filter window for pixels at the boundary of the block may be different from the filter window for other pixels.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)

Abstract

An image processor processes an image that has been compressed and subsequently decompressed. This image may be, for example, an MPEG-decoded image. Said video processor comprises a sharpness enhancer (ENH) that applies a peaking function (PK) to at least a portion of the image. The peaking function depends on a quantization parameter (Qp) that represents the degree of compression of the image.
PCT/IB2006/050039 2005-01-10 2006-01-05 Processeur d'images comportant un dispositif d'amelioration de nettete WO2006072913A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP05300014 2005-01-10
EP05300014.7 2005-01-10

Publications (1)

Publication Number Publication Date
WO2006072913A1 true WO2006072913A1 (fr) 2006-07-13

Family

ID=36177980

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/050039 WO2006072913A1 (fr) 2005-01-10 2006-01-05 Processeur d'images comportant un dispositif d'amelioration de nettete

Country Status (1)

Country Link
WO (1) WO2006072913A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003061295A2 (fr) * 2001-12-27 2003-07-24 Koninklijke Philips Electronics N.V. Systeme et procede permettant d'ameliorer la nettete au moyen d'informations de codage et de caracteristiques spatiales locales
WO2004054270A1 (fr) * 2002-12-10 2004-06-24 Koninklijke Philips Electronics N.V. Mesure metrique unifiee pour traitement de video numerique (umdvp)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003061295A2 (fr) * 2001-12-27 2003-07-24 Koninklijke Philips Electronics N.V. Systeme et procede permettant d'ameliorer la nettete au moyen d'informations de codage et de caracteristiques spatiales locales
WO2004054270A1 (fr) * 2002-12-10 2004-06-24 Koninklijke Philips Electronics N.V. Mesure metrique unifiee pour traitement de video numerique (umdvp)

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BOROCZKY L ET AL: "Post-processing of compressed video using a Unified Metric for Digital Video Processing", PROCEEDINGS OF THE SPIE, SPIE, BELLINGHAM, VA, US, vol. 5308, January 2004 (2004-01-01), pages 124 - 131, XP002328982, ISSN: 0277-786X *
JASPERS E G T ET AL: "A generic 2D sharpness enhancement algorithm for luminance signals", IMAGE PROCESSING AND ITS APPLICATIONS, 1997., SIXTH INTERNATIONAL CONFERENCE ON DUBLIN, IRELAND 14-17 JULY 1997, LONDON, UK,IEE, UK, vol. 1, 14 July 1997 (1997-07-14), pages 269 - 273, XP006508295, ISBN: 0-85296-692-X *
YIBIN YANG ET AL: "A new enhancement method for digital video applications", IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 48, no. 3, 24 June 2002 (2002-06-24), pages 435 - 443, XP002272081, ISSN: 0098-3063 *

Similar Documents

Publication Publication Date Title
JP5233014B2 (ja) Method and apparatus
CA2547954C (fr) Filtres video directionnels pour la reduction adaptative locale de bruit spatial
US9414066B2 (en) Deblocking filtering
US20020191699A1 (en) Detection system and method for enhancing digital video
US7706446B2 (en) Image-data processing apparatus
US20060233456A1 (en) Apparatus for removing false contour and method thereof
US20040208392A1 (en) Method and apparatus for improving video quality of low bit-rate video
US8422800B2 (en) Deblock method and image processing apparatus
US20100315558A1 (en) Content adaptive noise reduction filtering for image signals
US20080123979A1 (en) Method and system for digital image contour removal (dcr)
US20080013849A1 (en) Video Processor Comprising a Sharpness Enhancer
KR20090009232A (ko) 시각 처리 장치, 시각 처리 방법, 프로그램, 기록 매체, 표시 장치 및 집적 회로
JPH08251422A (ja) ブロック歪み補正器及び画像信号伸張装置
KR101052102B1 (ko) 화상 신호 처리 장치
US20070285729A1 (en) Image processing apparatus and image processing method
US8116584B2 (en) Adaptively de-blocking circuit and associated method
US20120314969A1 (en) Image processing apparatus and display device including the same, and image processing method
JP4380498B2 (ja) ブロック歪み低減装置
JP2018019239A (ja) 撮像装置及びその制御方法及びプログラム
KR20060127158A (ko) 압축된 비디오 애플리케이션들을 위한 링잉 아티팩트 감소
WO2006003102A1 (fr) Dispositif et procede de pretraitement avant le codage d'une sequence d'images video
KR20050099256A (ko) 디블록킹을 이용한 영상처리 장치와 영상처리 방법
CN102316249B (zh) 图像处理器、显示设备及图像处理方法
WO2006072913A1 (fr) Processeur d'images comportant un dispositif d'amelioration de nettete
WO2006077550A2 (fr) Processeur d'image comprenant un filtre

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE