WO2002056583A2 - Method and system for sharpness enhancement for coded video - Google Patents

Method and system for sharpness enhancement for coded video

Info

Publication number
WO2002056583A2
WO2002056583A2 (Application PCT/IB2001/002550)
Authority
WO
WIPO (PCT)
Prior art keywords
video
coding
sharpness
usefulness metric
gain
Prior art date
Application number
PCT/IB2001/002550
Other languages
French (fr)
Other versions
WO2002056583A3 (en)
Inventor
Lilla Boroczky
Johan Janssen
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to JP2002557116A priority Critical patent/JP2004518338A/en
Priority to KR1020027011857A priority patent/KR20020081428A/en
Priority to EP01273149A priority patent/EP1352516A2/en
Publication of WO2002056583A2 publication Critical patent/WO2002056583A2/en
Publication of WO2002056583A3 publication Critical patent/WO2002056583A3/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/20Circuitry for controlling amplitude response
    • H04N5/205Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic
    • H04N5/208Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic for compensating for attenuation of high frequency components, e.g. crispening, aperture distortion correction

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Picture Signal Circuits (AREA)

Abstract

A system (i.e., a method, an apparatus, and computer-executable process steps) provides sharpness enhancement for coded video, in which a usefulness metric calculates how much a pixel can be enhanced without increasing coding artifacts. The usefulness metric is separate from the enhancement algorithm, such that a variety of different enhancement algorithms can be used in conjunction with the metric.

Description

Method and system for sharpness enhancement for coded video
This invention uses the UME of co-pending application, Apparatus and Method for Providing a Usefulness Metric based on Coding Information for Video Enhancement, inventors Lilla Boroczky and Johan Janssen, filed concurrently herewith. The present invention is entitled to the benefit of Provisional Patent Application Serial Number 60/260,845 filed January 10, 2001.
The present invention is directed to a system and method for enhancing the sharpness of encoded/transcoded digital video, without enhancing encoding artifacts, which has particular utility in connection with spatial domain sharpness enhancement algorithms used in multimedia devices. The development of high-quality multi-media devices, such as set-top boxes, high-end TV's, Digital TV's, Personal TV's, storage products, PDA's, wireless internet devices, etc., is leading to a variety of architectures and to more openness towards new features for these devices. Moreover, the development of these new products and their ability to display video data in any format has resulted in new requirements and opportunities with respect to video processing and video enhancement algorithms. Most of these devices receive and/or store video in the MPEG-2 format and in the future they may receive/store in the MPEG-4 format. The picture quality of these MPEG sources can vary between very good and extremely bad.
Next generation storage devices, such as the blue-laser-based Digital Video Recorder (DVR), will have to some extent HD (ATSC) capability and are an example of the type of device for which a new method of picture enhancement would be advantageous. An HD program is typically broadcast at 20 Mb/s and encoded according to the MPEG-2 video standard. Taking into account the approximately 25 GB storage capacity of the DVR, this represents about a two-hour recording time of HD video per disc. To increase the record time, several long-play modes can be defined, such as Long-Play (LP) and Extended-Long-Play (ELP) modes.
For LP-mode, the average storage bitrate is assumed to be approximately 10 Mb/s, which allows double record time for HD. As a consequence, transcoding is an integral part of the video processing chain, which reduces the broadcast bitrate of 20 Mb/s to the storage bitrate of 10 Mb/s. During the MPEG-2 transcoding, the picture quality (e.g., sharpness) of the video is most likely reduced. However, especially for the LP mode, the picture quality should not be compromised too much. Therefore, for the LP mode, postprocessing plays an important role in improving the perceived picture quality. To date, most of the state-of-the-art sharpness enhancement algorithms were developed and optimized for analog video transmission standards like NTSC, PAL and SECAM. Traditionally, image enhancement algorithms either reduce certain unwanted aspects in a picture (e.g., noise reduction) or improve certain desired characteristics of an image (e.g., sharpness enhancement). For these emerging storage devices, the traditional sharpness enhancement algorithms may perform sub-optimally on MPEG encoded or transcoded video due to the different characteristics of these sources. In the closed video processing chain of the storage system, information which allows for determining the quality of the encoded source can be derived from the MPEG stream. This information can potentially be used to increase the performance of image enhancement algorithms. Because image quality will remain a distinguishing factor for high-end video products, new approaches for performing image enhancement, specifically adapted for use with these sources, will be beneficial.
In C-J Tsai, P. Karunaratne, N. P. Galatsanos and A. K. Katsaggelos, "A Compressed Video Enhancement Algorithm", Proc. of IEEE ICIP'99, Kobe, Japan, Oct. 25-28, 1999, the authors propose an iterative algorithm for enhancing video sequences that are encoded at low bit rates. For MPEG sources, the degradation of the picture quality originates mostly from the quantization function. Thus, the iterative gradient-projection algorithm employed by the authors uses coding information such as quantization step size, macroblock types and forward motion vectors in its cost function. The algorithm shows promising results for low bit rate video; however, its main disadvantage is its high computational complexity.
In B. Martins and S. Forchammer, "Improved Decoding of MPEG-2 Coded Video", Proc. of IBC'2000, Amsterdam, The Netherlands, Sept. 7-12, 2000, pp. 109-115, the authors describe a new concept for improving the decoding of MPEG-2 coded video. Specifically, a unified approach for deinterlacing and format conversion, integrated in the decoding process, is proposed. The technique results in considerably higher picture quality than that obtained by ordinary decoding. However, to date, its computational complexity prevents its implementation in consumer applications.
Both papers describe video enhancement algorithms using MPEG coding information and a cost function. However, both of these scenarios, in addition to being impractical, combine the enhancement and the cost function. A cost function determines how much, and at which locations in a picture, enhancement can be applied. The problem which results from this combination of cost and enhancement functions is that only one algorithm can be used with the cost function. The present invention addresses the foregoing needs by providing a system (i.e., a method, an apparatus, and computer-executable process steps) in which a usefulness metric calculates how much a pixel can be enhanced without increasing coding artifacts. It is an object of this invention to provide a system in which the usefulness metric is separate from the enhancement algorithm such that a variety of different enhancement algorithms can be used in conjunction with the metric.
It is a further object of the invention to provide a usefulness metric which can be tuned towards the constraints of the system such that an optimal trade-off between performance and complexity is assured.
It is a further object of the invention to provide a system of image enhancement which will perform optimally with encoded and transcoded video sources.
This brief summary has been provided so that the nature of the invention may be understood quickly. A more complete understanding of the invention can be obtained by reference to the following detailed description of the preferred embodiments thereof in connection with the attached drawings.
For a better understanding of the invention, reference is made to the following drawings:
Fig. 1 is a block diagram of the invention. Fig. 2 is a flowchart of the invention using only the coding gain.
Fig. 1 shows a system in which the present invention can be implemented, for example in a video receiver 56. Figure 1 illustrates how a usefulness metric (UME) can be applied to a sharpness enhancement algorithm, for example adaptive peaking. (Other sharpness enhancement algorithms, besides adaptive peaking, can also be used.) The adaptive peaking algorithm, directed at increasing the amplitude of the transients of a luminance signal 2, does not always provide optimal video quality for an a priori encoded/transcoded video source. This is mainly a result of the fact that the characteristics of the MPEG source are not taken into account. In the present invention, a UME is generated which does take into account the characteristics of the MPEG source. The example algorithm, adaptive peaking, is extended to use this UME, thereby increasing the performance of the algorithm significantly. The adaptive peaking algorithm and the principle of adaptive peaking are well known in the prior art. An example is shown in Fig. 1. The algorithm includes four control blocks 6, 8, 10, 12. These pixel-based control blocks 6, 8, 10, 12 operate in parallel and each calculate a maximum allowable gain factor g1, g2, g3, g4, respectively, to achieve a target image quality. These control blocks 6, 8, 10, 12 take into account particular local characteristics of the video signal, such as contrast, dynamic range, and noise level, but not coding properties. The coding gain block 14 uses the usefulness metric (UME) 18 to determine the allowable amount of peaking g_coding 36. A dynamic gain control 16 selects the minimum of the gains g1 28, g2 30, g3 32, g4 34, which is added to g_coding, generating a final gain g 38. The multiplier 22 multiplies the final gain 38 by the high-pass signal 20, which has been filtered by the 2D peaking filter 4. The adder 24 adds this product to the original luminance value of a pixel 2. In this manner, the enhanced luminance signal 26 is generated.
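The following Python sketch is a rough, illustrative rendering of the adaptive peaking structure just described. The function and variable names, the simple Laplacian high-pass kernel, and the clipping to an 8-bit range are assumptions made purely for illustration; the patent does not specify the 2D peaking filter 4 or the formulas of the individual gain-control blocks.
```python
import numpy as np
from scipy.ndimage import convolve

def adaptive_peaking(luma, gain_limits, g_coding):
    """Illustrative sketch of the adaptive peaking structure of Fig. 1.

    luma        : 2-D float array, decoded luminance signal (2), range 0..255
    gain_limits : list of 2-D arrays g1..g4 from the pixel-based control
                  blocks (6, 8, 10, 12) -- their formulas are not given here
    g_coding    : 2-D array, coding gain (36) derived from the UME (18)
    """
    # 2-D peaking filter (4): a simple Laplacian high-pass kernel stands in
    # for the filter left unspecified by the patent.
    hp_kernel = np.array([[ 0., -1.,  0.],
                          [-1.,  4., -1.],
                          [ 0., -1.,  0.]])
    high_pass = convolve(luma, hp_kernel, mode='nearest')   # high-pass signal (20)

    # Dynamic gain control (16): minimum of the pixel-based gain limits,
    # combined with g_coding to form the final gain g (38).
    g = np.minimum.reduce(gain_limits) + g_coding

    # Multiplier (22) and adder (24): scale the high-pass signal and add it
    # back to the original luminance, yielding the enhanced signal (26).
    return np.clip(luma + g * high_pass, 0.0, 255.0)
```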
The UME 18 calculates, on a pixel-by-pixel basis, how much a pixel or region can be enhanced without increasing coding artifacts. The UME 18 is derived from the MPEG coding information present in the bitstream. Choosing the MPEG information to be used with the UME 18 is far from trivial. The information must provide an indication of the spatio-temporal characteristics or picture quality of the video.
The finest granularity of MPEG information that can be directly obtained during decoding is either block-based or macroblock-based. However, for spatial (pixel) domain video enhancement, the UME 18 must be calculated for each pixel of a picture in order to ensure the highest picture quality.
One parameter easily extracted from MPEG information is the quantization parameter, as it is present for every coded macroblock (MB). The higher the quantization parameter, the coarser the quantization, and therefore, the higher the quantization error. A high quantization error results in coding artifacts and consequently, enhancement of pixels in a MB with a high quantization parameter must be suppressed more.
Another parameter that can easily be extracted from the MPEG stream is the number of bits spent in coding a MB or block. The value of the aforementioned coding information is dependent upon other factors including: scene content, bitrate, picture type, and motion estimation/compensation.
Both the quantization parameter and the number of bits spent are widely used in rate control calculations of MPEG encoding and are commonly used to calculate the coding complexity. Coding complexity is defined as the product of the quantization parameter and the number of bits spent to encode a MB or block. Coding complexity is therefore described by the following equation: compl_MB/block(k,l) = mquant(k,l) * bits_MB/block(k,l), where mquant is the quantization parameter and bits_MB/block is the number of bits of DCT coefficients used to encode the MB or block (k,l). The underlying assumption is that the higher the complexity of a MB or block with respect to the average complexity of a frame, the higher the probability of having coding artifacts in that MB or block. Thus, enhancement should be suppressed for the pixels of the blocks with relatively high coding complexity. Accordingly, the UME 18 of pixel (i,j) can be defined by the following equation:
UME(i,j) = 1 - compl_pixel(i,j) / (2 * compl), where compl_pixel(i,j) is the coding complexity of pixel (i,j) and compl is the average coding complexity of a picture. In the present invention, compl_pixel(i,j) is estimated from the MB or block complexity map (Figure 2, 48) by means of bilinear interpolation (Figure 2, 58). In one aspect of the invention, UME(i,j) can range from 0 to 1. In this aspect, zero means that no sharpness enhancement is allowed for a particular pixel, while 1 means that the pixel can be freely enhanced without the risk of enhancing any coding artifacts.
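As an illustration of the complexity and UME equations above, the sketch below builds the per-pixel UME map by bilinearly interpolating a macroblock complexity map. The 16x16 macroblock grid, the border handling, and the clipping to [0, 1] are assumptions chosen for illustration and are not stated in the text.
```python
import numpy as np

def ume_map(mquant, bits, block_size=16):
    """Per-pixel usefulness metric built from per-macroblock coding data.

    mquant, bits : 2-D arrays holding, per macroblock, the quantization
                   parameter and the number of DCT-coefficient bits.
    """
    # Coding complexity per MB: compl(k,l) = mquant(k,l) * bits(k,l)
    compl_mb = mquant.astype(float) * bits.astype(float)
    compl_avg = max(compl_mb.mean(), 1e-12)       # average picture complexity

    # Bilinear interpolation of the MB complexity map up to pixel resolution
    # (stand-in for block 58 of Fig. 2).
    h, w = compl_mb.shape
    ys = np.linspace(0.0, h - 1, h * block_size)
    xs = np.linspace(0.0, w - 1, w * block_size)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    compl_pixel = ((1 - wy) * (1 - wx) * compl_mb[np.ix_(y0, x0)]
                   + (1 - wy) * wx * compl_mb[np.ix_(y0, x1)]
                   + wy * (1 - wx) * compl_mb[np.ix_(y1, x0)]
                   + wy * wx * compl_mb[np.ix_(y1, x1)])

    # UME(i,j) = 1 - compl_pixel(i,j) / (2 * compl_avg), limited to [0, 1]
    return np.clip(1.0 - compl_pixel / (2.0 * compl_avg), 0.0, 1.0)
```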
The UME equation can be extended, by the addition of a term directly related to the quantization parameter, to incorporate a stronger bitrate dependency. This can be especially advantageous for video that has been encoded at a low bitrate.
For skipped or uncoded MBs/blocks, the UME is estimated (Figure 2, 50) from surrounding values.
Because the UME 18 is calculated to account for coding characteristics, it only prevents the enhancement of coding artifacts such as blocking and ringing. Thus, the prevention or reduction of artifacts of non-coding origin, which might result from applying too much enhancement, is addressed by other parts of the sharpness enhancement algorithm. The aforementioned UME 18 can be combined with any peaking algorithm, or it can be adapted to any spatial domain sharpness enhancement algorithm. It is also possible to utilize coding information (Figure 2, 46) and incorporate scene content related information (Figure 2, 44), in combination with an adaptive peaking algorithm.
In this embodiment, shown in Figure 2, the four control blocks 6, 8, 10, 12 shown in Figure 1 are eliminated. Scene content information, such as edge information 44, is incorporated into the coding gain calculation via the edge detection 42. The scene-content related information 44 compensates for the uncertainty of the UME calculation (Fig. 1, 18), the uncertainty resulting from assumptions made and interpolations applied in its calculation (Fig. 2, 58, 36).
In this embodiment, the coding gain of a pixel (i,j) 36 is determined by summing the UME, which is embedded in the coding gain calculation 36, with an Edge Map 44 related term according to the equation below: g_coding(i,j) = UME(i,j) + g_edge(i,j), where UME is defined above and g_edge is based on edge-related pixel information.
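A minimal sketch of this combined coding gain follows. The Sobel-magnitude edge map (standing in for the edge detection 42) and the edge_weight normalisation are illustrative stand-ins, since the text only states that g_edge is based on edge-related pixel information 44.
```python
import numpy as np
from scipy.ndimage import sobel

def coding_gain(ume, luma, edge_weight=0.5):
    """g_coding(i,j) = UME(i,j) + g_edge(i,j), per the equation above."""
    # Edge detection (42): Sobel gradient magnitude as an illustrative choice.
    gx = sobel(luma.astype(float), axis=1)
    gy = sobel(luma.astype(float), axis=0)
    edges = np.hypot(gx, gy)
    # Edge-related term g_edge, normalised to [0, edge_weight] (assumption).
    g_edge = edge_weight * edges / (edges.max() + 1e-9)
    return ume + g_edge
```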
It should be noted that the complexity map 56 of the MB/block has an inherited block structure. To decrease this non-desirable characteristic of the complexity map 56, a spatial low-pass filtering 52 is applied. An example filter kernel which can be used for low-pass filtering is given in the drawing of the original application.
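Since the example kernel figure is not reproduced above, the following sketch uses a hypothetical 3x3 averaging kernel purely to illustrate how the spatial low-pass filtering 52 of the complexity map might be performed.
```python
import numpy as np
from scipy.ndimage import convolve

def smooth_complexity(compl_map):
    """Spatial low-pass filtering (52) of the MB/block complexity map to
    soften its block structure.  The 3x3 kernel below is hypothetical;
    the kernel shown in the patent figure is not reproduced here."""
    kernel = np.array([[1., 2., 1.],
                       [2., 4., 2.],
                       [1., 2., 1.]])
    kernel /= kernel.sum()                     # normalise to unity gain
    return convolve(compl_map.astype(float), kernel, mode='nearest')
```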
Another problem is that abrupt frame to frame changes in the coding gain for any given pixel can result in temporally inconsistent sharpness enhancement, which is undesirable. Such changes can also intensify temporally visible and annoying artifacts such as mosquito noise.
To remedy this effect, temporal filtering 54 is applied to the coding gain using the gain of the previous frame. To reduce the high computational complexity and memory requirement, instead of filtering the gain-map, the MB or block-based complexity map 48 is filtered temporally using an IIR filter 54. The following equation represents this processing: compl_MB/block(r,s,t) = k * compl_MB/block(r,s,t) + scal * (1-k) * compl_MB/block(r,s,t-1), where (r,s) is the spatial coordinate of a MB or block, t represents the current picture, k is the IIR filter coefficient and scal is a scaling term taking into account the complexity differences among different picture types. The coding gain 36 is then applied to the adaptive peaking algorithm using the frame 160 to produce an enhanced frame 160.
The invention can also be applied to HD and SD sequences such as would be present in a video storage application having HD capabilities and allowing long-play mode. The majority of such video sequences are transcoded to a lower storage bitrate from broadcast MPEG-2 bitstreams. For the long play mode of this application, format change can also take place during transcoding. Well-known SD video sequences encoded, decoded, and then processed with the sharpness enhancement algorithm according to the present invention provide superior video quality for a priori encoded or transcoded video sequences as compared to algorithms that do not use coding information.
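A minimal sketch of the temporal IIR filtering 54 of the complexity map described in the preceding paragraphs is given below; the default values of k and scal are illustrative assumptions, as the patent leaves them implementation-dependent.
```python
import numpy as np

def temporal_filter(compl_curr, compl_prev_filtered, k=0.7, scal=1.0):
    """Temporal IIR filtering (54) of the MB/block complexity map:

        compl(r,s,t) = k * compl(r,s,t) + scal * (1 - k) * compl(r,s,t-1)

    k is the IIR filter coefficient; scal compensates for complexity
    differences between picture types.  The default values are illustrative.
    """
    compl_curr = np.asarray(compl_curr, dtype=float)
    if compl_prev_filtered is None:            # first picture: no recursion yet
        return compl_curr
    prev = np.asarray(compl_prev_filtered, dtype=float)
    return k * compl_curr + scal * (1.0 - k) * prev
```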
The present invention has been described with respect to particular illustrative embodiments. It is to be understood that the invention is not limited to the above-described embodiments and modifications thereto, and that various changes and modifications may be made by those of ordinary skill in the art without departing from the spirit and scope of the appended claims.

Claims

CLAIMS:
1. A method for enhancing image quality comprising:
- developing a usefulness metric 18 which identifies a limit to sharpness enhancement that can be applied to decoded video without enhancing coding artifacts; and
- applying the usefulness metric 18 to at least one sharpness enhancement algorithm 4, the usefulness metric 18 and the sharpness enhancement algorithm 4 being separate such that the usefulness metric 18 can be used with a variety of algorithms.
2. A method for enhancing the sharpness of a coded digital video, comprising the steps of: - selecting and extracting statistical information 46 from a coded video bit stream in order to identify the video's coding complexity 48;
- based upon the coding complexity 48, developing a usefulness metric 18 for the coded video, which identifies a limit to sharpness enhancement that can be applied to the coded video after it is decoded, without enhancing coding artifacts; and - applying a sharpness enhancement algorithm 4 to the decoded video to increase sharpness within the limit prescribed by the usefulness metric 18.
3. The method as claimed in claim 2 wherein the sharpness enhancement algorithm is a peaking algorithm 4.
4. The method as claimed in claim 2 wherein the sharpness enhancement algorithm is a spatial-domain algorithm 4.
5. The method as claimed in claim 2 wherein the usefulness metric 18 is calculated on a pixel-by-pixel basis.
6. The method as claimed in claim 2 wherein the coding complexity 48 is defined as the product of a quantization parameter and a number of bits used to code a macro block.
7. The method as claimed in claim 2 wherein the coding complexity 48 is defined as the product of a quantization parameter and a number of bits used to code a block.
8. The method as claimed in claim 2, wherein the usefulness metric 18 occupies a range, a first terminus of the range meaning no sharpness enhancement is allowed for a particular pixel and a second terminus of the range meaning that the pixel can be freely enhanced.
9. The method as claimed in claim 2, wherein the method is also applied to skipped macroblocks, the usefulness metric 18 being estimated based upon the coding complexity 48 of surrounding macro blocks or the coding complexity 48 of a previous frame.
10. The method as claimed in claim 2, wherein the method is also applied to uncoded blocks, the usefulness metric 18 being estimated based upon the coding complexity 48 of surrounding blocks or the coding complexity 48 of a previous frame.
11. The method as claimed in claim 2, wherein in addition to the usefulness metric 18, scene-content related information 42 is incorporated into a coding gain calculation.
12. The method as claimed in claim 2, wherein the scene-content related information 44 is derived from edge 42 information.
13. The method as claimed in claim 5, wherein coding gain 14 of a pixel is determined by the equation: g_coding(i,j) = UME(i,j) + g_edge(i,j), and wherein i and j are pixel coordinates, g_coding 36 is the pixel coding gain, UME 18 is the usefulness metric and g_edge is based upon edge-related information 44 derived from the image.
14. The method as claimed in claim 13, wherein spatial low-pass filtering 52 is applied to a complexity map 48 calculated from the coded digital video.
15. The method as claimed in claim 13 , wherein temporal filtering 54 is applied to the coding gain 36 using the coding 36 gain of a previous frame.
16. The method as claimed in claim 13, wherein the equation 14 can be extended to include an additional term directly related to the quantization parameter.
17. The method as claimed in claim 6, wherein a block-based complexity map 48 is filtered temporally using an IIR filter 54.
18. The method as claimed in claim 6, wherein a macro block-based complexity map 48 is filtered temporally using an IIR filter 54.
19. The method as claimed in claim 17 or 18, wherein the temporal filtering 54 is in accordance with the following equation: compl_MB/block(r,s,t) = k * compl_MB/block(r,s,t) + scal * (1-k) * compl_MB/block(r,s,t-1), and wherein (r,s) is the spatial coordinate of a macro block or block, t represents the current picture, k is the IIR filter coefficient and scal is a scaling term taking into account picture complexity 48 determined by the image's picture type.
20. A device for image quality enhancement comprising:
- a peaking filter 4 which filters a decoded luminance 2 signal, generating a high pass signal 20;
- a plurality of pixel based control blocks 6, 8, 10, 12, 14, operating in parallel on the decoded luminance signal 2, each calculating a maximum allowable gain factor, based upon a characteristic of the luminance signal, wherein at least one control block is a coding gain block 14 which implements a usefulness metric 18 which determines the allowable amount of peaking;
- a dynamic gain control 32 for selecting a minimum gain based upon the calculated maximum gain factors;
- a multiplier 22 for multiplying the high pass signal 20 by the minimum gain 38 generating a multiplied signal; and - an adder 24 for combining the decoded luminance signal 2 with the multiplied signal, generating an enhanced signal 26.
21. A device as claimed in claim 20, wherein the control blocks comprise:
- a contrast control block 6;
- a dynamic range control block 8;
- a clipping prevention control block 10;
- an adaptive coring control block 12; and
- a coding gain block 14, all of the blocks being connected in parallel.
22. A device for enhancing the image quality of a digital video comprising:
- a usefulness metric 18 generator which identifies a limit to sharpness enhancement that can be applied, without enhancing coding artifacts, to decoded digital video; - a controller which applies the usefulness metric to at least one sharpness enhancement algorithm 4, the usefulness metric 18 and the sharpness enhancement algorithm 4 being separate such that the usefulness metric can be used with a variety of algorithms.
23. A system which enhances sharpness of a coded digital video, comprising: - a selector which selects and extracts statistical information from a coded video bit stream in order to identify the video's coding complexity 48;
- a usefulness metric generator that, based upon the coding complexity 50, develops a usefulness metric 18 for the coded digital video after decoding, which identifies a limit to sharpness enhancement that can be applied to a decoded video without enhancing coding artifacts; and
- a sharpness enhancer 4 which applies a sharpness enhancement algorithm to the decoded video to increase sharpness within the limit prescribed by the usefulness metric.
24. Computer-executable process steps to enhance image quality, the computer- executable process steps being stored on a computer-readable medium and comprising:
- an extracting step to extract statistical information 46 from a coded video bit stream in order to identify a video's coding complexity;
- a generating step to generate a usefulness metric 18 for a coded video based upon the coding complexity 48, which identifies a limit to sharpness enhancement that can be applied to the coded video after decoding without enhancing coding artifacts; and
- an enhancement step to enhance the sharpness of the image by applying a sharpness enhancement algorithm 4 to a decoded video to increase sharpness within the limit prescribed by the usefulness metric 18.
25. Means for enhancing the sharpness of a coded digital video, comprising:
- extracting means for extracting statistical information 46 from a coded video bit stream in order to identify the coded digital video's coding complexity 48;
- generating means for developing a usefulness metric 18 for the coded digital video, based upon the coding complexity 48, which identifies a limit to sharpness enhancement that can be applied to the coded digital video after decoding without enhancing coding artifacts; and
- enhancement means for applying a sharpness enhancement algorithm 4 to a decoded video to increase sharpness within the limit prescribed by the usefulness metric.
26. A signal, embodied in a carrier wave, representing data for enhancing sharpness of a decoded digital video, comprising:
- statistical information 46 selected from a coded video bit stream to be used in identifying the complexity 48 of a video; - a usefulness metric 18, based upon the complexity 48 of the video, which identifies a limit to sharpness enhancement which can be applied to the decoded video without enhancing coding artifacts; and
- a sharpness enhancement algorithm 4 to be used for increasing the sharpness of the decoded video within the limit prescribed by the usefulness metric.
27. A method for enhancing image quality comprising the steps of:
- peaking filtering 4 a coded luminance signal 2, increasing the amplitude of the luminance signal 2 and generating a high pass signal 20;
- calculating at least one maximum gain factor 928, 930, 932, 934, 936 for the luminance signal, based on a characteristic of the luminance signal, wherein at least one gain factor calculation 14 implements a usefulness metric 18 which determines an allowable amount of peaking which will not intensify coding artifacts;
- selecting a minimum gain 938 from the maximum gain factors 928,930,932,934,936; - multiplying the high pass signal 20 by the minimum gain 938 generating a multiplied signal 22; and
- adding 24 a decoded luminance signal with the multiplied signal, generating an enhanced signal 26.
28. A video receiving device 56 comprising:
- a peaking filter 4 which filters a decoded luminance signal, generating a high pass signal 20;
- a plurality of pixel based control blocks 6,8,10,12,14, operating in parallel on the decoded luminance signal 2, each calculating a maximum allowable gain factor
928,930,932,934,936, based upon a characteristic of the luminance signal 2, wherein at least one control block is a coding gain block 14 which implements a usefulness metric 18 which determines the allowable amount of peaking;
- a dynamic gain control 16 for selecting a minimum gain 938 based upon the calculated maximum gain factors;
- a multiplier 22 for multiplying the high pass signal 20 by the minimum gain 938 generating a multiplied signal 22; and
- an adder 24 for combining the decoded luminance signal 2 with the multiplied signal, generating an enhanced signal 26.
PCT/IB2001/002550 2001-01-10 2001-12-14 Method and system for sharpness enhancement for coded video WO2002056583A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2002557116A JP2004518338A (en) 2001-01-10 2001-12-14 Method and system for enhancing the sharpness of an encoded video
KR1020027011857A KR20020081428A (en) 2001-01-10 2001-12-14 Method and system for sharpness enhancement for coded video
EP01273149A EP1352516A2 (en) 2001-01-10 2001-12-14 Method and system for sharpness enhancement for coded video

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US26084501P 2001-01-10 2001-01-10
US60/260,845 2001-01-10
US09/976,340 2001-10-12
US09/976,340 US6950561B2 (en) 2001-01-10 2001-10-12 Method and system for sharpness enhancement for coded video

Publications (2)

Publication Number Publication Date
WO2002056583A2 true WO2002056583A2 (en) 2002-07-18
WO2002056583A3 WO2002056583A3 (en) 2002-10-31

Family

ID=26948213

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2001/002550 WO2002056583A2 (en) 2001-01-10 2001-12-14 Method and system for sharpness enhancement for coded video

Country Status (6)

Country Link
US (1) US6950561B2 (en)
EP (1) EP1352516A2 (en)
JP (1) JP2004518338A (en)
KR (1) KR20020081428A (en)
CN (1) CN1218560C (en)
WO (1) WO2002056583A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1439717A2 (en) * 2003-01-16 2004-07-21 Samsung Electronics Co., Ltd. Colour transient improvement
EP2051524A1 (en) * 2007-10-15 2009-04-22 Panasonic Corporation Image enhancement considering the prediction error
US7825992B2 (en) 2005-07-04 2010-11-02 Samsung Electronics Co., Ltd. Video processing apparatus and video processing method

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7161633B2 (en) * 2001-01-10 2007-01-09 Koninklijke Philips Electronics N.V. Apparatus and method for providing a usefulness metric based on coding information for video enhancement
US6933953B2 (en) * 2002-04-22 2005-08-23 Koninklijke Philips Electronics N.V. Cost function to measure objective quality for sharpness enhancement functions
US7031388B2 (en) * 2002-05-06 2006-04-18 Koninklijke Philips Electronics N.V. System for and method of sharpness enhancement for coded digital video
JP4333150B2 (en) * 2003-01-31 2009-09-16 ソニー株式会社 Signal processing apparatus and method, recording medium, and program
TWI234398B (en) * 2003-11-20 2005-06-11 Sunplus Technology Co Ltd Automatic contrast limiting circuit by spatial domain infinite impulse response filter and method thereof
US20080266307A1 (en) * 2004-05-25 2008-10-30 Koninklijke Philips Electronics, N.V. Method and System for Enhancing the Sharpness of a Video Signal
FI20045201A (en) * 2004-05-31 2005-12-01 Nokia Corp A method and system for viewing and enhancing images
US7620263B2 (en) * 2005-10-06 2009-11-17 Samsung Electronics Co., Ltd. Anti-clipping method for image sharpness enhancement
US20080025390A1 (en) * 2006-07-25 2008-01-31 Fang Shi Adaptive video frame interpolation
KR101244679B1 (en) * 2006-07-27 2013-03-18 삼성전자주식회사 Dynamic gain adjusting method according to brightness, and apparatus thereof
US7983501B2 (en) * 2007-03-29 2011-07-19 Intel Corporation Noise detection and estimation techniques for picture enhancement
WO2010107411A1 (en) 2009-03-17 2010-09-23 Utc Fire & Security Corporation Region-of-interest video quality enhancement for object recognition
US8718145B1 (en) * 2009-08-24 2014-05-06 Google Inc. Relative quality score for video transcoding
US8718395B2 (en) * 2010-02-26 2014-05-06 Sharp Kabushiki Kaisha Image processing apparatus, display apparatus provided with same, and image processing method
US11019349B2 (en) 2017-01-20 2021-05-25 Snap Inc. Content-based client side video transcoding

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6072538A (en) * 1997-07-22 2000-06-06 Sony Corporation Digital image enhancement
WO2000042778A1 (en) * 1999-01-15 2000-07-20 Koninklijke Philips Electronics N.V. Sharpness enhancement

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5038388A (en) * 1989-05-15 1991-08-06 Polaroid Corporation Method for adaptively sharpening electronic images
US6285801B1 (en) * 1998-05-29 2001-09-04 Stmicroelectronics, Inc. Non-linear adaptive image filter for filtering noise such as blocking artifacts
US6466624B1 (en) * 1998-10-28 2002-10-15 Pixonics, Llc Video decoder with bit stream based enhancements
US6580835B1 (en) * 1999-06-02 2003-06-17 Eastman Kodak Company Method for enhancing the edge contrast of a digital image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6072538A (en) * 1997-07-22 2000-06-06 Sony Corporation Digital image enhancement
WO2000042778A1 (en) * 1999-01-15 2000-07-20 Koninklijke Philips Electronics N.V. Sharpness enhancement

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHOY S O ET AL.: "Reduction of Block-Transform Image Coding Artifacts by Using Local Statistics of Transform Coefficients", IEEE SIGNAL PROCESSING LETTERS, vol. 4, no. 1, January 1997, XP002206800 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1439717A2 (en) * 2003-01-16 2004-07-21 Samsung Electronics Co., Ltd. Colour transient improvement
EP1439717A3 (en) * 2003-01-16 2007-08-29 Samsung Electronics Co., Ltd. Colour transient improvement
US7808558B2 (en) 2003-01-16 2010-10-05 Samsung Electronics Co., Ltd. Adaptive color transient improvement
US7825992B2 (en) 2005-07-04 2010-11-02 Samsung Electronics Co., Ltd. Video processing apparatus and video processing method
EP2051524A1 (en) * 2007-10-15 2009-04-22 Panasonic Corporation Image enhancement considering the prediction error
EP2207358A1 (en) * 2007-10-15 2010-07-14 Panasonic Corporation Video decoding method and video encoding method
EP2207358A4 (en) * 2007-10-15 2011-08-24 Panasonic Corp Video decoding method and video encoding method

Also Published As

Publication number Publication date
JP2004518338A (en) 2004-06-17
CN1218560C (en) 2005-09-07
EP1352516A2 (en) 2003-10-15
WO2002056583A3 (en) 2002-10-31
KR20020081428A (en) 2002-10-26
CN1428042A (en) 2003-07-02
US20020122603A1 (en) 2002-09-05
US6950561B2 (en) 2005-09-27

Similar Documents

Publication Publication Date Title
US6862372B2 (en) System for and method of sharpness enhancement using coding information and local spatial features
US6950561B2 (en) Method and system for sharpness enhancement for coded video
US7620261B2 (en) Edge adaptive filtering system for reducing artifacts and method
JP4334768B2 (en) Method and apparatus for reducing breathing artifacts in compressed video
EP1506525B1 (en) System for and method of sharpness enhancement for coded digital video
US6873657B2 (en) Method of and system for improving temporal consistency in sharpness enhancement for a video signal
US20140247890A1 (en) Encoding device, encoding method, decoding device, and decoding method
US20060093232A1 (en) Unified metric for digital video processing (umdvp)
EP1352515B1 (en) Apparatus and method for providing a usefulness metric based on coding information for video enhancement
Boroczyky et al. Sharpness enhancement for MPEG-2 encoded/transcoded video sources
Yang et al. UMDVP-controlled post-processing system for compressed video

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): CN JP KR

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

WWE Wipo information: entry into national phase

Ref document number: 2001273149

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 1020027011857

Country of ref document: KR

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWP Wipo information: published in national office

Ref document number: 1020027011857

Country of ref document: KR

AK Designated states

Kind code of ref document: A3

Designated state(s): CN JP KR

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

WWE Wipo information: entry into national phase

Ref document number: 018090370

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 2002557116

Country of ref document: JP

WWP Wipo information: published in national office

Ref document number: 2001273149

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 2001273149

Country of ref document: EP