WO2010134973A1 - Methods and apparatus for a generalized filtering structure for video coding and decoding - Google Patents

Methods and apparatus for a generalized filtering structure for video coding and decoding

Info

Publication number
WO2010134973A1
Authority
WO
WIPO (PCT)
Prior art keywords
input
loop
output
signal communication
filtering structure
Prior art date
Application number
PCT/US2010/001458
Other languages
French (fr)
Inventor
Yunfei Zheng
Peng Yin
Joel Sole
Original Assignee
Thomson Licensing
Priority date
Filing date
Publication date
Application filed by Thomson Licensing filed Critical Thomson Licensing
Publication of WO2010134973A1 publication Critical patent/WO2010134973A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146 Data rate or code amount at the encoder output
    • H04N19/147 Data rate or code amount at the encoder output according to rate distortion criteria
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/174 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop

Definitions

  • the present principles relate generally to video encoding and decoding and, more particularly, to methods and apparatus for a generalized filtering structure for video coding and decoding.
  • MPEG-4 AVC Standard: International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) Standard/International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 Recommendation
  • AVC: Advanced Video Coding
  • ITU-T: International Telecommunication Union, Telecommunication Sector
  • filtering processes are involved in compressing a video source.
  • the deblocking filters are used in-loop to remove blocky artifacts in order to better display the current picture and/or to provide a better reference frame for frames that are coded thereafter.
  • a post filter can also be applied before the picture is displayed. Turning to FIG. 1, an exemplary filtering structure in which both in-loop and out-loop filters exist is indicated generally by the reference numeral 100.
  • the filtering structure 100 includes an in-loop filter 110 and an out-loop filter 120.
  • An output of the in-loop filter 110 is connected in signal communication with an input of the out-loop filter 120.
  • An input of the in-loop filter 110 is available as an input of the filtering structure 100, for receiving a decoded picture.
  • the output of the in-loop filter 110 is also available as an output of the filtering structure 100, for outputting a reference picture.
  • An output of the out-loop filter 120 is available as an output of the filtering structure 100, for outputting a display picture.
  • both filters are directly additively applied to the pictures that will be displayed.
  • the additive structure can improve the coding performance by reducing the bit rate and increasing the objective or visual qualities.
  • if the in-loop and out-loop filters have conflicting characteristics, however, such a filtering structure can worsen the coding performance.
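  • For clarity, the FIG. 1 cascade can be summarized in a few lines of code. This is only an illustrative sketch, not part of the patent text: the filter bodies below are placeholders for whatever concrete in-loop and out-loop (post) filters an implementation uses.

```python
import numpy as np

def in_loop_filter(picture: np.ndarray) -> np.ndarray:
    """Placeholder in-loop filter (e.g., a deblocking filter)."""
    return picture  # a real filter would, e.g., smooth block boundaries

def out_loop_filter(picture: np.ndarray) -> np.ndarray:
    """Placeholder out-loop (post) filter."""
    return picture

def fig1_structure(decoded: np.ndarray):
    """Conventional FIG. 1 cascade: the out-loop filter is fed by the
    in-loop filter, so the display picture carries the impact of both."""
    reference = in_loop_filter(decoded)   # stored for motion compensation
    display = out_loop_filter(reference)  # shown to the viewer
    return reference, display
```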
  • a de-artifacting filter, as described with respect to a first prior art approach, exemplifies other types of filters that can remove more general compression artifacts beyond blockiness. With so many filters, it is critical to design a reasonable structure that makes the filters work together. As shown in FIG. 1, an in-loop filter is used to improve the quality of the current picture for display or to provide a better reference picture.
  • the MPEG-4 AVC Standard uses a deblocking filter scheme to remove the blocky artifacts of the reconstructed picture.
  • BALF: block-based adaptive loop filter
  • MSE: mean square error
  • the filter function can be switched on and off on a block-wise basis.
  • the block size ranges from 16x16 to 128x128.
  • the BALF is designed to work with the MPEG-4 AVC Standard deblocking filter as shown in FIG. 2.
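  • As a rough illustration of the block-wise on/off switching mentioned above, the sketch below enables the loop filter for a block only when doing so lowers the mean square error against the original picture. The 16x16 block size, the stand-in smoothing filter, and the helper names are assumptions for illustration only; the actual BALF proposal derives its filter coefficients and block sizes differently.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def balf_block_switch(reconstructed, original, block=16):
    """Turn an adaptive loop filter on/off per block based on MSE."""
    filtered = uniform_filter(reconstructed, size=3)  # stand-in for the adaptive filter
    out = reconstructed.copy()
    on_off_flags = []                                 # one flag per block, signaled to the decoder
    height, width = reconstructed.shape
    for y in range(0, height, block):
        for x in range(0, width, block):
            sl = np.s_[y:y + block, x:x + block]
            mse_off = np.mean((reconstructed[sl].astype(float) - original[sl]) ** 2)
            mse_on = np.mean((filtered[sl].astype(float) - original[sl]) ** 2)
            use_filter = bool(mse_on < mse_off)
            on_off_flags.append(use_filter)
            if use_filter:
                out[sl] = filtered[sl]
    return out, on_off_flags
```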
  • an exemplary block-based adaptive loop filter (BALF) filtering structure is indicated generally by the reference numeral 200.
  • the filtering structure 200 includes an in-loop filter 210 and a BALF 220.
  • An output of the in-loop filter 210 is connected in signal communication with an input of the BALF 220.
  • An input of the in-loop filter 210 is available as an input of the filtering structure 200, for receiving a decoded picture.
  • An output of the BALF 220 is available as an output of the filtering structure 200, for outputting a display picture.
  • the output of the BALF 220 is also available as an output of the filtering structure 200, for outputting a reference picture.
  • both the display picture and the reference picture will be impacted by the in-loop filter. Since the reference picture is prepared for use in the coding of subsequent pictures, increasing the quality of the reference picture may reduce the number of bits needed to code the subsequent pictures. However, higher quality for reference picture purposes does not necessarily mean higher quality for display purposes, and the converse is also true. This means that we need to decouple the impact of the in-loop filter on the reference picture from its impact on the display picture.
  • deblocking filter display preference supplemental enhancement information (SEI) message
  • By using the deblocking filter display preference supplemental enhancement information message, the preservation of picture details in output pictures is maintained, while allowing motion compensation to use less noisy reference pictures (the latter improving coding efficiency when pictures include a significant amount of detail such as, for example, film grain).
  • the deblocking filter display preference supplemental enhancement information message is defined just for the deblocking filter, so the message does not cover other filter deployments and is not suitable when multiple filters are used in the coding process.
  • an apparatus includes a video decoder for decoding a picture.
  • the video decoder includes an in-loop filtering structure for performing in-loop filtering of a reconstructed version of the picture. An impact of the in-loop filtering is decoupled from a display picture formed from the reconstructed version of the picture data.
  • a method in a video decoder includes decoding a picture using an in-loop filtering structure included in the video decoder.
  • the in-loop filtering structure is for performing in-loop filtering of a reconstructed version of the picture. An impact of the in-loop filtering is decoupled from a display picture formed from the reconstructed version of the picture data.
  • an apparatus includes a video decoder having an in-loop filtering structure and an out-loop filtering structure.
  • the in-loop filtering structure includes one or more in-loop filters and the out-loop filtering structure includes one or more out-loop filters. Multiple filters from among at least one of the one or more in-loop filters and the one or more out-loop filters are selected and combined to generate at least one of a reference picture and a display picture from an original version of a picture.
  • a method in a video decoder is provided.
  • the method includes decoding a picture using at least one filter from among an in-loop filtering structure and an out-loop filtering structure included in the video decoder.
  • the in-loop filtering structure has one or more in-loop filters and the out-loop filtering structure has one or more out-loop filters. Multiple filters from among at least one of the one or more in-loop filters and the one or more out-loop filters are selected and combined to generate at least one of a reference picture and a display picture from an original version of a picture.
  • FIG. 1 is a block diagram showing an exemplary filtering structure, wherein both in-loop and out-loop filters exist, in accordance with an embodiment of the present principles
  • FIG. 2 is a block diagram showing an exemplary block-based adaptive loop filter (BALF) filtering structure, in accordance with an embodiment of the present principles
  • FIG. 3 is a block diagram showing an exemplary MPEG-4 AVC Standard based video encoder to which the present principles may be applied, in accordance with an embodiment of the present principles;
  • FIG. 4 is a block diagram showing an exemplary MPEG-4 AVC Standard based video decoder to which the present principles may be applied, in accordance with an embodiment of the present principles
  • FIG. 5 is a block diagram showing an exemplary filtering structure that decouples the impacts of the in-loop filter on a reference picture and a display picture, in accordance with an embodiment of the present principles
  • FIG. 6 is a block diagram showing an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 500 of FIG. 5, in accordance with an embodiment of the present principles;
  • FIG. 7 is a block diagram showing an exemplary filtering structure for decoupling the impacts of an in-loop filter on reference pictures and display pictures, in accordance with an embodiment of the present principles
  • FIG. 8 is a block diagram showing an exemplary MPEG-4 AVC Standard based video encoder corresponding to the filtering structure 700 of FIG. 7, in accordance with an embodiment of the present principles;
  • FIG. 9 is a block diagram showing an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 700 of FIG. 7, in accordance with an embodiment of the present principles
  • FIG. 10 is a block diagram showing an exemplary filtering structure involving multiple out-loop filters, in accordance with an embodiment of the present principles
  • FIG. 11 is a block diagram showing an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 1000 of FIG. 10, in accordance with an embodiment of the present principles
  • FIG. 12 is a block diagram showing an exemplary filtering structure involving multiple in-loop filters and multiple out-loop filters, in accordance with an embodiment of the present principles
  • FIG. 13 is a block diagram showing an exemplary MPEG-4 AVC Standard based video encoder corresponding to the filtering structure 1200 of FIG. 12, in accordance with an embodiment of the present principles;
  • FIG. 14 is a block diagram showing an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 1200 of FIG. 12, in accordance with an embodiment of the present principles;
  • FIG. 15 is a block diagram showing another exemplary filtering structure involving multiple in-loop filters and multiple out-loop filters, in accordance with an embodiment of the present principles
  • FIG. 16 is a block diagram showing an exemplary MPEG-4 AVC Standard based video encoder corresponding to the filtering structure 1500 of FIG. 15, in accordance with an embodiment of the present principles
  • FIG. 17 is a block diagram showing an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 1500 of FIG. 15, in accordance with an embodiment of the present principles;
  • FIG. 18 is a flow diagram showing an exemplary method for encoding picture data by selecting from a plurality of filtering structures, in accordance with an embodiment of the present principles.
  • FIG. 19 is a flow diagram showing an exemplary method for decoding picture data by determining a particular filtering structure from a plurality of filtering structures, in accordance with an embodiment of the present principles.
  • the present principles are directed to methods and apparatus for a generalized filtering structure for video coding and decoding.
  • the terms “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, read-only memory (“ROM”) for storing software, random access memory (“RAM”), and non-volatile storage.
  • DSP: digital signal processor
  • ROM: read-only memory
  • RAM: random access memory
  • any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
  • any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function.
  • the present principles as defined by such claims reside in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
  • the use of “and/or” and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B).
  • such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C).
  • This may be extended, as is readily apparent to one of ordinary skill in this and related arts, for as many items as are listed.
  • the words “picture” and “image” are used interchangeably and refer to a still image or a picture from a video sequence. As is known, a picture may be a frame or a field. Additionally, as used herein, the word “signal” refers to indicating something to a corresponding decoder. For example, the encoder may signal a particular filter deployment from among a plurality of available filter deployments in order to make the decoder aware of which particular filter deployment was used on the encoder side. In this way, the same filter deployment may be used at both the encoder side and the decoder side.
  • signaling may be used (without transmitting) to simply allow the decoder to know and select the particular filter deployment. By avoiding transmission of any actual filter deployments, a bit savings may be realized. It is to be appreciated that signaling may be accomplished in a variety of ways. For example, one or more syntax elements, flags, and so forth may be used to signal information to a corresponding decoder.
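  • By way of illustration only, the sketch below writes and parses a hypothetical high-level syntax element, here called filter_deployment_idc, that tells the decoder which of the available filter deployments the encoder used. The element name, its two-bit width, and the candidate list are assumptions for this example, not syntax defined by the MPEG-4 AVC Standard or by the present principles.

```python
FILTER_DEPLOYMENTS = {
    0: "in-loop filter followed by out-loop filter (FIG. 1)",
    1: "decoupled in-loop / out-loop paths (FIG. 5)",
    2: "multiple out-loop filters with a combiner (FIG. 10)",
    3: "multiple in-loop and out-loop filters (FIG. 12)",
}

def write_filter_deployment(bits: list, idc: int) -> None:
    """Encoder side: append a hypothetical 2-bit filter_deployment_idc."""
    assert 0 <= idc <= 3
    bits.extend([(idc >> 1) & 1, idc & 1])

def read_filter_deployment(bits: list, pos: int = 0):
    """Decoder side: parse the same 2-bit field and look up the deployment."""
    idc = (bits[pos] << 1) | bits[pos + 1]
    return idc, FILTER_DEPLOYMENTS[idc], pos + 2

bitstream = []
write_filter_deployment(bitstream, 1)
assert read_filter_deployment(bitstream)[1].startswith("decoupled")
```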
  • the present principles are not limited to solely this standard and, thus, may be utilized with respect to other video coding standards, recommendations, and extensions thereof, including extensions of the MPEG-4 AVC standard, while maintaining the spirit of the present principles.
  • filtering processes are involved in compressing a video source.
  • the filters can be in-loop or out-loop (post filter). If both in-loop and out-loop filters exist during the coding process, their positions in the coding pipeline are generally as shown in FIG. 1.
  • Such a configuration means that both filters are directly additively applied to the pictures that will be displayed. In some cases, neither the in-loop nor the out-loop filter improves the coding performance for the current display pictures, nor has a good impact on subsequently coded pictures.
  • Generally, an out-loop filter is simpler than an in-loop filter, since the out-loop filter operates only on the display picture. That means out-loop filters will not impact subsequently coded pictures.
  • To improve the coding performance, it is possible to apply multiple post filters to the pictures that will be displayed. However, how to deploy these filters is also a problem that needs to be solved.
  • Turning to FIG. 3, an exemplary MPEG-4 AVC Standard based video encoder to which the present principles may be applied is indicated generally by the reference numeral 300.
  • the video encoder 300 includes a frame ordering buffer 310 having an output in signal communication with a non-inverting input of a combiner 385.
  • An output of the combiner 385 is connected in signal communication with a first input of a transformer and quantizer 325.
  • An output of the transformer and quantizer 325 is connected in signal communication with a first input of an entropy coder 345 and a first input of an inverse transformer and inverse quantizer 350.
  • An output of the entropy coder 345 is connected in signal communication with a first non-inverting input of a combiner 390.
  • An output of the combiner 390 is connected in signal communication with a first input of an output buffer 335.
  • a first output of an encoder controller 305 is connected in signal communication with a second input of the frame ordering buffer 310, a second input of the inverse transformer and inverse quantizer 350, an input of a picture-type decision module 315, a first input of a macroblock-type (MB-type) decision module 320, a second input of an intra prediction module 360, a second input of a deblocking filter 365, a first input of a motion compensator 370, a first input of a motion estimator 375, and a second input of a reference picture buffer 380.
  • MB-type: macroblock-type
  • a second output of the encoder controller 305 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 330, a second input of the transformer and quantizer 325, a second input of the entropy coder 345, a second input of the output buffer 335, and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 340.
  • SEI: Supplemental Enhancement Information
  • An output of the SEI inserter 330 is connected in signal communication with a second non-inverting input of the combiner 390.
  • a first output of the picture-type decision module 315 is connected in signal communication with a third input of the frame ordering buffer 310.
  • a second output of the picture-type decision module 315 is connected in signal communication with a second input of a macroblock-type decision module 320.
  • SPS: Sequence Parameter Set
  • PPS: Picture Parameter Set
  • An output of the inverse quantizer and inverse transformer 350 is connected in signal communication with a first non-inverting input of a combiner 319.
  • An output of the combiner 319 is connected in signal communication with a first input of the intra prediction module 360 and a first input of the deblocking filter 365.
  • An output of the deblocking filter 365 is connected in signal communication with a first input of a reference picture buffer 380.
  • An output of the reference picture buffer 380 is connected in signal communication with a second input of the motion estimator 375 and a third input of the motion compensator 370.
  • a first output of the motion estimator 375 is connected in signal communication with a second input of the motion compensator 370.
  • a second output of the motion estimator 375 is connected in signal communication with a third input of the entropy coder 345.
  • An output of the motion compensator 370 is connected in signal communication with a first input of a switch 397.
  • An output of the intra prediction module 360 is connected in signal communication with a second input of the switch 397.
  • An output of the macroblock-type decision module 320 is connected in signal communication with a third input of the switch 397.
  • the third input of the switch 397 determines whether or not the "data" input of the switch (as compared to the control input, i.e., the third input) is to be provided by the motion compensator 370 or the intra prediction module 360.
  • the output of the switch 397 is connected in signal communication with a second non-inverting input of the combiner 319 and an inverting input of the combiner 385.
  • a first input of the frame ordering buffer 310 and an input of the encoder controller 305 are available as inputs of the encoder 300, for receiving an input picture 303.
  • a second input of the Supplemental Enhancement Information (SEI) inserter 330 is available as an input of the encoder 300, for receiving metadata.
  • An output of the output buffer 335 is available as an output of the encoder 300, for outputting a bitstream.
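  • The signal-flow prose above can be condensed into a short procedural sketch of one pass through a FIG. 3 style hybrid encoder. This is an illustrative simplification in which prediction, transform, quantization, and entropy coding are reduced to placeholders; it is not the MPEG-4 AVC encoding process itself.

```python
import numpy as np

def deblocking_filter(picture: np.ndarray) -> np.ndarray:
    """Placeholder for the in-loop deblocking filter 365."""
    return picture

def encode_picture(picture: np.ndarray, reference_buffer: list, qp: int = 28) -> bytes:
    """One simplified coding pass following the FIG. 3 signal flow."""
    prediction = reference_buffer[-1] if reference_buffer else np.zeros_like(picture)  # motion compensator 370 / intra prediction 360
    residual = picture - prediction                  # combiner 385
    coeffs = np.round(residual / qp)                 # transformer and quantizer 325 (toy model)
    bitstream = coeffs.astype(np.int32).tobytes()    # entropy coder 345 (toy model)
    reconstruction = prediction + coeffs * qp        # inverse transform/quantize 350 + combiner 319
    reconstruction = deblocking_filter(reconstruction)
    reference_buffer.append(reconstruction)          # reference picture buffer 380
    return bitstream
```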
  • the video decoder 400 includes an input buffer 410 having an output connected in signal communication with a first input of an entropy decoder 445.
  • a first output of the entropy decoder 445 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 450.
  • An output of the inverse transformer and inverse quantizer 450 is connected in signal communication with a second non-inverting input of a combiner 425.
  • An output of the combiner 425 is connected in signal communication with a second input of a deblocking filter 465 and a first input of an intra prediction module 460.
  • a second output of the deblocking filter 465 is connected in signal communication with a first input of a reference picture buffer 480.
  • An output of the reference picture buffer 480 is connected in signal communication with a second input of a motion compensator 470.
  • a second output of the entropy decoder 445 is connected in signal communication with a third input of the motion compensator 470, a first input of the deblocking filter 465, and a third input of the intra predictor 460.
  • a third output of the entropy decoder 445 is connected in signal communication with an input of a decoder controller 405.
  • a first output of the decoder controller 405 is connected in signal communication with a second input of the entropy decoder 445.
  • a second output of the decoder controller 405 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 450.
  • a third output of the decoder controller 405 is connected in signal communication with a third input of the deblocking filter 465.
  • a fourth output of the decoder controller 405 is connected in signal communication with a second input of the intra prediction module 460, a first input of the motion compensator 470, and a second input of the reference picture buffer 480.
  • An output of the motion compensator 470 is connected in signal communication with a first input of a switch 497.
  • An output of the intra prediction module 460 is connected in signal communication with a second input of the switch 497.
  • An output of the switch 497 is connected in signal communication with a first non-inverting input of the combiner 425.
  • An input of the input buffer 410 is available as an input of the decoder 400, for receiving an input bitstream.
  • a first output of the deblocking filter 465 is available as an output of the decoder 400, for outputting an output picture.
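  • Mirroring the encoder sketch above, the FIG. 4 decoding loop can be summarized as follows, again with placeholder parsing and inverse transform matching the toy quantizer used earlier. Note that in FIG. 4 the deblocked picture serves as both the output picture and the reference picture.

```python
import numpy as np

def deblocking_filter(picture: np.ndarray) -> np.ndarray:
    """Placeholder for the in-loop deblocking filter 465."""
    return picture

def decode_picture(bitstream: bytes, reference_buffer: list, shape, qp: int = 28) -> np.ndarray:
    """One simplified decoding pass following the FIG. 4 signal flow."""
    coeffs = np.frombuffer(bitstream, dtype=np.int32).reshape(shape).astype(float)  # entropy decoder 445 (toy model)
    residual = coeffs * qp                                                          # inverse transformer and quantizer 450
    prediction = reference_buffer[-1] if reference_buffer else np.zeros(shape)      # motion compensator 470 / intra prediction 460
    reconstruction = prediction + residual                                          # combiner 425
    output = deblocking_filter(reconstruction)
    reference_buffer.append(output)                                                 # reference picture buffer 480
    return output  # the same filtered picture is displayed and used as a reference
```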
  • some filters are good for reference picture filtering only, some filters are good for displaying pictures only, and some filters work relatively well for both purposes.
  • When multiple filters exist in the coding process, either in-loop or out-loop filters, a precise deployment of these filters is desired in order to improve the coding performance.
  • the position of one or more filters in the whole coding pipeline can be designed based on the filter characteristics and the statistics of the signals that need to be coded.
  • the purpose of such strategies is to improve the compression performance, i.e., reducing the bit rate while improving objective and/or subjective gains.
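  • One plausible way for an encoder to choose among candidate filter deployments is a rate-distortion comparison, in the spirit of the method of FIG. 18. The cost model and the numbers below are purely illustrative assumptions, not part of the patent.

```python
def rd_cost(distortion: float, rate_bits: float, lam: float = 0.85) -> float:
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate_bits

def select_filter_deployment(candidates: dict) -> int:
    """Pick the deployment id with the lowest RD cost.

    `candidates` maps a deployment id to the (distortion, rate) pair measured
    by actually encoding with that deployment (the encoding itself is not shown).
    """
    return min(candidates, key=lambda k: rd_cost(*candidates[k]))

# Hypothetical measurements, for illustration only:
trials = {0: (120.0, 950), 1: (118.5, 940), 3: (117.0, 990)}
assert select_filter_deployment(trials) == 1  # deployment 1 would then be signaled to the decoder
```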
  • new filter deployments are proposed to optimize the filtering performance for compression.
  • the impact of in-loop filtering can be decoupled from picture display.
  • One or more in-loop filters can be applied only to the reference picture and may not be applied to the display picture.
  • we can decouple the impact of the in-loop filter from the display picture by setting the reconstructed picture to be the direct input of the out-loop filter.
  • The new filter deployments can be applied to the single in-loop and/or out-loop filter case and to the multiple in-loop and/or out-loop filter case.
  • Embodiment 1:
  • the filtering structure 500 includes an in-loop filter 510 and an out-loop filter 520.
  • An input of the in-loop filter 510 and an input of the out-loop filter 520 are available as inputs of the filtering structure 500, for receiving a decoded picture.
  • An output of the in-loop filter 510 is available as an output of the filtering structure 500, for outputting a reference picture.
  • An output of the out-loop filter 520 is available as an output of the filtering structure 500, for outputting a display picture.
  • It is to be appreciated that an encoder employing the filtering structure 100 of FIG. 1 may be used to encode a bitstream that can be decoded using the filtering structure 500 of FIG. 5.
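  • A minimal sketch of this decoupled structure, under the same placeholder-filter assumptions as the FIG. 1 sketch: the out-loop filter now takes the reconstructed (decoded) picture directly, so the in-loop filter no longer influences the display picture.

```python
import numpy as np

def in_loop_filter(picture: np.ndarray) -> np.ndarray:
    """Placeholder in-loop filter (e.g., deblocking or de-artifacting)."""
    return picture

def out_loop_filter(picture: np.ndarray) -> np.ndarray:
    """Placeholder out-loop (post) filter, e.g. one tuned for subjective quality."""
    return picture

def fig5_structure(decoded: np.ndarray):
    """Embodiment 1 (FIG. 5): reference and display paths are independent."""
    reference = in_loop_filter(decoded)  # used only for motion compensation
    display = out_loop_filter(decoded)   # fed directly from the reconstructed picture
    return reference, display
```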
  • Turning to FIG. 6, an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 500 of FIG. 5 is indicated generally by the reference numeral 600.
  • the video decoder 600 includes an input buffer 610 having an output connected in signal communication with a first input of an entropy decoder 645.
  • a first output of the entropy decoder 645 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 650.
  • An output of the inverse transformer and inverse quantizer 650 is connected in signal communication with a second non-inverting input of a combiner 625.
  • An output of the combiner 625 is connected in signal communication with a second input of a deblocking filter 665, a first input of an intra prediction module 660, and an input of an out-loop filter 688.
  • An output of the deblocking filter 665 is connected in signal communication with a first input of a reference picture buffer 680.
  • An output of the reference picture buffer 680 is connected in signal communication with a second input of a motion compensator 670.
  • a second output of the entropy decoder 645 is connected in signal communication with a third input of the motion compensator 670, a first input of the deblocking filter 665, and a third input of the intra predictor 660.
  • a third output of the entropy decoder 645 is connected in signal communication with an input of a decoder controller 605.
  • a first output of the decoder controller 605 is connected in signal communication with a second input of the entropy decoder 645.
  • a second output of the decoder controller 605 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 650.
  • a third output of the decoder controller 605 is connected in signal communication with a third input of the deblocking filter 665.
  • a fourth output of the decoder controller 605 is connected in signal communication with a second input of the intra prediction module 660, a first input of the motion compensator 670, and a second input of the reference picture buffer 680.
  • An output of the motion compensator 670 is connected in signal communication with a first input of a switch 697.
  • An output of the intra prediction module 660 is connected in signal communication with a second input of the switch 697.
  • An output of the switch 697 is connected in signal communication with a first non-inverting input of the combiner 625.
  • An input of the input buffer 610 is available as an input of the decoder 600, for receiving an input bitstream.
  • An output of the out-loop filter 688 is available as an output of the decoder 600, for outputting an output picture.
  • the video decoder 600 includes an in-loop filtering structure 601 that, in turn, includes deblocking filter 665.
  • the video decoder 600 includes an out-loop filtering structure 602 that, in turn, includes out-loop filter 688.
  • Embodiment 2:
  • FIG. 7 shows a more general embodiment, in which multiple in-loop filters are sequentially connected. We can retain the impact of some in-loop filters on the display picture while removing that of others, using the structure shown in FIG. 7.
  • an exemplary filtering structure for decoupling the impacts of an in-loop filter on reference pictures and display pictures is indicated generally by the reference numeral 700.
  • the filtering structure 700 includes an in-loop filter-1 710, an in-loop filter-2 720, an in-loop filter-N 730, and an out-loop filter 740.
  • An output of the in-loop filter-1 710 is connected in signal communication with an input of the in-loop filter-2 720 and an input of the out-loop filter 740.
  • An output of the in-loop filter-2 720 is connected in signal communication with an input of the in-loop filter-N 730.
  • An input of the in-loop filter-1 710 is available as an input of the filtering structure 700, for receiving a decoded picture.
  • An output of the in-loop filter-N 730 is available as an output of the filtering structure 700, for outputting a reference picture.
  • An output of the out-loop filter 740 is available as an output of the filtering structure 700, for outputting a display picture.
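  • As an illustrative sketch of the FIG. 7 wiring (placeholder filters, tap point taken from the figure): the N in-loop filters run in sequence to build the reference picture, while the display path taps the chain after in-loop filter-1 and then applies the out-loop filter.

```python
def fig7_structure(decoded, in_loop_filters, out_loop_filter, tap_index=1):
    """FIG. 7: keep the impact of the first `tap_index` in-loop filters on the
    display picture and drop the remaining in-loop filters from that path."""
    x = decoded
    tapped = decoded
    for i, f in enumerate(in_loop_filters, start=1):
        x = f(x)                # in-loop filter-1 ... in-loop filter-N in sequence
        if i == tap_index:
            tapped = x          # output of in-loop filter-1 feeds the out-loop filter
    reference = x               # output of in-loop filter-N
    display = out_loop_filter(tapped)
    return reference, display
```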
  • Turning to FIG. 8, an exemplary MPEG-4 AVC Standard based video encoder corresponding to the filtering structure 700 of FIG. 7 is indicated generally by the reference numeral 800.
  • the video encoder 800 includes a frame ordering buffer 810 having an output in signal communication with a non-inverting input of a combiner 885.
  • An output of the combiner 885 is connected in signal communication with a first input of a transformer and quantizer 825.
  • An output of the transformer and quantizer 825 is connected in signal communication with a first input of an entropy coder 845 and a first input of an inverse transformer and inverse quantizer 850.
  • An output of the entropy coder 845 is connected in signal communication with a first non-inverting input of a combiner 890.
  • An output of the combiner 890 is connected in signal communication with a first input of an output buffer 835.
  • a first output of an encoder controller 805 is connected in signal communication with a second input of the frame ordering buffer 810, a second input of the inverse transformer and inverse quantizer 850, an input of a picture-type decision module 815, a first input of a macroblock-type (MB-type) decision module 820, a second input of an intra prediction module 860, a second input of an in-loop filter-1 866, a second input of an in-loop filter-2 867, a second input of an in-loop filter-N 868, a first input of a motion compensator 870, a first input of a motion estimator 875, and a second input of a reference picture buffer 880.
  • a second output of the encoder controller 805 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 830, a second input of the transformer and quantizer 825, a second input of the entropy coder 845, a second input of the output buffer 835, and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 840.
  • An output of the SEI inserter 830 is connected in signal communication with a second non-inverting input of the combiner 890.
  • a first output of the picture-type decision module 815 is connected in signal communication with a third input of the frame ordering buffer 810.
  • a second output of the picture-type decision module 815 is connected in signal communication with a second input of a macroblock-type decision module 820.
  • An output of the inverse quantizer and inverse transformer 850 is connected in signal communication with a first non-inverting input of a combiner 819.
  • An output of the combiner 819 is connected in signal communication with a first input of the intra prediction module 860 and a first input of the in-loop filter-1 866.
  • An output of the in-loop filter-1 866 is connected in signal communication with a first input of the in-loop filter-2 867.
  • An output of the in-loop filter-2 867 is connected in signal communication with a first input of the in-loop filter-N 868.
  • An output of the in-loop filter-N 868 is connected in signal communication with a first input of a reference picture buffer 880.
  • An output of the reference picture buffer 880 is connected in signal communication with a second input of the motion estimator 875 and a third input of the motion compensator 870.
  • a first output of the motion estimator 875 is connected in signal communication with a second input of the motion compensator 870.
  • a second output of the motion estimator 875 is connected in signal communication with a third input of the entropy coder 845.
  • An output of the motion compensator 870 is connected in signal communication with a first input of a switch 897.
  • An output of the intra prediction module 860 is connected in signal communication with a second input of the switch 897.
  • An output of the macroblock-type decision module 820 is connected in signal communication with a third input of the switch 897.
  • the third input of the switch 897 determines whether or not the "data" input of the switch (as compared to the control input, i.e., the third input) is to be provided by the motion compensator 870 or the intra prediction module 860.
  • the output of the switch 897 is connected in signal communication with a second non-inverting input of the combiner 819 and an inverting input of the combiner 885.
  • a first input of the frame ordering buffer 810 and an input of the encoder controller 805 are available as inputs of the encoder 800, for receiving an input picture 803.
  • a second input of the Supplemental Enhancement Information (SEI) inserter 830 is available as an input of the encoder 800, for receiving metadata.
  • An output of the output buffer 835 is available as an output of the encoder 800, for outputting a bitstream.
  • the video encoder 800 includes an in-loop filtering structure 801 that, in turn, includes in-loop filter-1 866, in-loop filter-2 867, and in-loop filter-N 868.
  • Turning to FIG. 9, an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 700 of FIG. 7 is indicated generally by the reference numeral 900.
  • the video decoder 900 includes an input buffer 910 having an output connected in signal communication with a first input of an entropy decoder 945.
  • a first output of the entropy decoder 945 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 950.
  • An output of the inverse transformer and inverse quantizer 950 is connected in signal communication with a second non-inverting input of a combiner 925.
  • An output of the combiner 925 is connected in signal communication with a second input of an in-loop filter-1 966 and a first input of an intra prediction module 960.
  • An output of the in-loop filter-1 966 is connected in signal communication with a second input of an out-loop filter 969 and a second input of an in-loop filter-2 967.
  • An output of the in-loop filter-2 967 is connected in signal communication with a second input of an in-loop filter-N 968.
  • An output of the in-loop filter-N 968 is connected in signal communication with a first input of a reference picture buffer 980.
  • An output of the reference picture buffer 980 is connected in signal communication with a second input of a motion compensator 970.
  • a second output of the entropy decoder 945 is connected in signal communication with a third input of the motion compensator 970, a first input of the in-loop filter-1 966, and a third input of the intra predictor 960.
  • a third output of the entropy decoder 945 is connected in signal communication with an input of a decoder controller 905.
  • a first output of the decoder controller 905 is connected in signal communication with a second input of the entropy decoder 945.
  • a second output of the decoder controller 905 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 950.
  • a third output of the decoder controller 905 is connected in signal communication with a third input of the in-loop filter-1 966, a first input of the in-loop filter-2 967, a first input of the in-loop filter-N 968, and a first input of the out-loop filter 969.
  • a fourth output of the decoder controller 905 is connected in signal communication with a second input of the intra prediction module 960, a first input of the motion compensator 970, and a second input of the reference picture buffer 980.
  • An output of the motion compensator 970 is connected in signal communication with a first input of a switch 997.
  • An output of the intra prediction module 960 is connected in signal communication with a second input of the switch 997.
  • An output of the switch 997 is connected in signal communication with a first non-inverting input of the combiner 925.
  • An input of the input buffer 910 is available as an input of the decoder 900, for receiving an input bitstream.
  • An output of the out-loop filter 969 is available as an output of the decoder 900, for outputting an output picture.
  • the video decoder 900 includes an in-loop filtering structure 901 that, in turn, includes in-loop filter-1 966, in-loop filter-2 967, and in-loop filter-N 968. Moreover, the video decoder 900 includes an out-loop filtering structure 902 that, in turn, includes out-loop filter 969.
  • While out-loop filter 969 is shown as a single filter, in other embodiments the out-loop filter can be a set of multiple filters sequentially connected.
  • Embodiment 3:
  • the final output of the in-loop/out-loop filters can be optimized by on/off switching or combination schemes, for example, involving linear or nonlinear combinations.
  • an exemplary filtering structure involving multiple out-loop filters is indicated generally by the reference numeral 1000.
  • the display picture can be the output of the combination function F(.), which is fed with the output of multiple out-loop filters.
  • the combination function F(.) can be an on/off switch, a linear combination, or a nonlinear combination.
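  • As a sketch of one possible realization, the combination function F(.) can be modeled as a weighted sum of the out-loop filter outputs: weights restricted to {0, 1} act as an on/off switch, fractional weights give a linear combination, and replacing the sum (for example, with a per-pixel median) gives a nonlinear rule. The normalization and default weights below are illustrative assumptions.

```python
import numpy as np

def combine_outputs(filter_outputs, weights=None):
    """Combination function F(.) applied to the out-loop filter outputs."""
    stacked = np.stack(filter_outputs, axis=0)          # shape: (N, height, width)
    if weights is None:
        weights = np.full(len(filter_outputs), 1.0 / len(filter_outputs))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                   # keep the output in the input range
    return np.tensordot(weights, stacked, axes=1)       # weighted sum over the N filters
```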
  • the filtering structure 1000 includes an in-loop filter 1005, an out-loop filter-1 1010, an out-loop filter-2 1020, an out-loop filter-N 1030, and a combiner 1040.
  • the combiner 1040 implements the combination function F(.).
  • An output of the in-loop filter 1005 is connected in signal communication with an input of the out-loop filter-1 1010, an input of the out-loop filter-2 1020, and an input of the out-loop filter-N 1030.
  • An output of the out-loop filter-1 1010, an output of the out-loop filter-2 1020, and an output of the out-loop filter-N 1030 are connected in signal communication with a first input, a second input, and a third input, respectively, of the combiner 1040.
  • An input of the in-loop filter 1005 is available as an input of the filtering structure 1000, for receiving a decoded picture.
  • the output of the in-loop filter 1005 is also available as an output of the filtering structure 1000, for outputting a reference picture.
  • An output of the combiner 1040 is available as an output of the filtering structure 1000, for outputting a display picture. It is to be appreciated that an encoder employing the filtering structure 100 of FIG. 1 may be used to encode a bitstream that can be decoded using the filtering structure 1000 of FIG. 10.
  • an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 1000 of FIG. 10 is indicated generally by the reference numeral 1100.
  • the video decoder 1100 includes an input buffer 1110 having an output connected in signal communication with a first input of an entropy decoder 1145.
  • a first output of the entropy decoder 1145 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 1150.
  • An output of the inverse transformer and inverse quantizer 1150 is connected in signal communication with a second non-inverting input of a combiner 1125.
  • An output of the combiner 1125 is connected in signal communication with a second input of an in-loop filter 1165 and a first input of an intra prediction module 1160.
  • An output of the in-loop filter 1165 is connected in signal communication with a first input of a reference picture buffer 1180, a first input of an out-loop filter-1 1166, a first input of an out-loop filter-2 1167, and a first input of an out-loop filter-N 1168.
  • An output of the reference picture buffer 1180 is connected in signal communication with a second input of a motion compensator 1170.
  • a second output of the entropy decoder 1145 is connected in signal communication with a third input of the motion compensator 1170, a first input of the in-loop filter 1165, and a third input of the intra predictor 1160.
  • a third output of the entropy decoder 1145 is connected in signal communication with an input of a decoder controller 1105.
  • a first output of the decoder controller 1105 is connected in signal communication with a second input of the entropy decoder 1145.
  • a second output of the decoder controller 1105 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 1150.
  • a third output of the decoder controller 1105 is connected in signal communication with a third input of the in-loop filter 1165, a second input of the out-loop filter-1 1166, a second input of the out-loop filter-2 1167, and a second input of the out-loop filter-N 1168.
  • a fourth output of the decoder controller 1105 is connected in signal communication with a second input of the intra prediction module 1160, a first input of the motion compensator 1170, and a second input of the reference picture buffer 1180.
  • An output of the out-loop filter-1 1166, an output of the out-loop filter-2 1167, and an output of the out-loop filter-N 1168 are connected in signal communication with a first input, a second input, and a third input, respectively, of a combiner 1169.
  • An output of the motion compensator 1170 is connected in signal communication with a first input of a switch 1197.
  • An output of the intra prediction module 1160 is connected in signal communication with a second input of the switch 1197.
  • An output of the switch 1197 is connected in signal communication with a first non-inverting input of the combiner 1125.
  • An input of the input buffer 1110 is available as an input of the decoder 1100, for receiving an input bitstream.
  • An output of the combiner 1169 is available as an output of the decoder 1100, for outputting an output picture.
  • the video decoder 1100 includes an in-loop filtering structure 1101 that, in turn, includes in-loop filter 1165. Moreover, the video decoder 1100 includes an out-loop filtering structure 1102 that, in turn, includes out-loop filter-1 1166, out-loop filter-2 1167, out-loop filter-N 1168, and combiner 1169.
  • Embodiment 4: Combining the above ideas for in-loop and out-loop filters, we obtain more general filter deployment strategies for in-loop and out-loop filters.
  • an exemplary filtering structure involving multiple in-loop filters and multiple out-loop filters is indicated generally by the reference numeral 1200.
  • the filtering structure 1200 includes an in-loop filter-1 1210, an in-loop filter-2 1220, an in-loop filter-N 1230, an out-loop filter-1 1240, an out-loop filter-2 1250, an out-loop filter-N 1260, and a combiner 1270.
  • An output of the in-loop filter-1 1210 is connected in signal communication with an input of the in-loop filter-2 1220 and an input of the out-loop filter-2 1250.
  • An output of the in-loop filter-2 1220 is connected in signal communication with an input of the in-loop filter-N 1230 and an input of the out-loop filter-N 1260.
  • An output of the out-loop filter-1 1240, an output of the out-loop filter-2 1250, and an output of the out-loop filter-N 1260 are connected in signal communication with a first input, a second input, and a third input, respectively, of the combiner 1270.
  • An input of the in-loop filter-1 1210 and an input of the out-loop filter-1 1240 are available as inputs of the filtering structure 1200, for receiving a decoded picture.
  • An output of the in-loop filter-N 1230 is available as an output of the filtering structure 1200, for outputting a reference picture.
  • An output of the combiner 1270 is available as an output of the filtering structure 1200, for outputting a display picture.
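  • Putting the previous two ideas together, the FIG. 12 wiring taps the decoded picture and the output of each in-loop stage, passes each tap through its own out-loop filter, and combines the results into the display picture, while the full in-loop chain still produces the reference picture. The following sketch follows that wiring with placeholder filters and an illustrative combiner such as the one above.

```python
def fig12_structure(decoded, in_loop_filters, out_loop_filters, combine):
    """FIG. 12: out-loop filter-k is fed by the signal entering in-loop filter-k;
    the combiner merges all out-loop outputs into the display picture."""
    taps = [decoded]                      # input of in-loop filter-1 also feeds out-loop filter-1
    x = decoded
    for f in in_loop_filters[:-1]:
        x = f(x)
        taps.append(x)                    # output of in-loop filter-k feeds out-loop filter-(k+1)
    reference = in_loop_filters[-1](x)    # output of in-loop filter-N
    display = combine([g(t) for g, t in zip(out_loop_filters, taps)])
    return reference, display
```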
  • Turning to FIG. 13, an exemplary MPEG-4 AVC Standard based video encoder corresponding to the filtering structure 1200 of FIG. 12 is indicated generally by the reference numeral 1300.
  • the video encoder 1300 includes a frame ordering buffer 1310 having an output in signal communication with a non-inverting input of a combiner 1385.
  • An output of the combiner 1385 is connected in signal communication with a first input of a transformer and quantizer 1325.
  • An output of the transformer and quantizer 1325 is connected in signal communication with a first input of an entropy coder 1345 and a first input of an inverse transformer and inverse quantizer 1350.
  • An output of the entropy coder 1345 is connected in signal communication with a first non-inverting input of a combiner 1390.
  • An output of the combiner 1390 is connected in signal communication with a first input of an output buffer 1335.
  • a first output of an encoder controller 1305 is connected in signal communication with a second input of the frame ordering buffer 1310, a second input of the inverse transformer and inverse quantizer 1350, an input of a picture-type decision module 1315, a first input of a macroblock-type (MB-type) decision module 1320, a second input of an intra prediction module 1360, a second input of an in-loop filter-1 1366, a first input of a motion compensator 1370, a first input of a motion estimator 1375, and a second input of a reference picture buffer 1380.
  • a second output of the encoder controller 1305 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 1330, a second input of the transformer and quantizer 1325, a second input of the entropy coder 1345, a second input of the output buffer 1335, and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 1340.
  • An output of the SEI inserter 1330 is connected in signal communication with a second non-inverting input of the combiner 1390.
  • a first output of the picture-type decision module 1315 is connected in signal communication with a third input of the frame ordering buffer 1310.
  • a second output of the picture-type decision module 1315 is connected in signal communication with a second input of a macroblock-type decision module 1320.
  • An output of the inverse quantizer and inverse transformer 1350 is connected in signal communication with a first non-inverting input of a combiner 1319.
  • An output of the combiner 1319 is connected in signal communication with a first input of the intra prediction module 1360 and a first input of the in-loop filter-1 1366.
  • An output of the in-loop filter-1 1366 is connected in signal communication with an input of an in-loop filter-2 1367.
  • An output of the in-loop filter-2 1367 is connected in signal communication with an input of an in-loop filter-N 1368.
  • An output of the in-loop filter-N 1368 is connected in signal communication with a first input of a reference picture buffer 1380.
  • An output of the reference picture buffer 1380 is connected in signal communication with a second input of the motion estimator 1375 and a third input of the motion compensator 1370.
  • a first output of the motion estimator 1375 is connected in signal communication with a second input of the motion compensator 1370.
  • a second output of the motion estimator 1375 is connected in signal communication with a third input of the entropy coder 1345.
  • An output of the motion compensator 1370 is connected in signal communication with a first input of a switch 1397.
  • An output of the intra prediction module 1360 is connected in signal communication with a second input of the switch 1397.
  • An output of the macroblock-type decision module 1320 is connected in signal communication with a third input of the switch 1397.
  • the third input of the switch 1397 determines whether or not the "data" input of the switch (as compared to the control input, i.e., the third input) is to be provided by the motion compensator 1370 or the intra prediction module 1360.
  • the output of the switch 1397 is connected in signal communication with a second non-inverting input of the combiner 1319 and an inverting input of the combiner 1385.
  • a first input of the frame ordering buffer 1310 and an input of the encoder controller 1305 are available as inputs of the encoder 1300, for receiving an input picture 1303.
  • a second input of the Supplemental Enhancement Information (SEI) inserter 1330 is available as an input of the encoder 1300, for receiving metadata.
  • An output of the output buffer 1335 is available as an output of the encoder 1300, for outputting a bitstream.
  • the video encoder 1300 includes an in-loop filtering structure 1301 that, in turn, includes in-loop filter-1 1366, in-loop filter-2 1367, and in-loop filter-N 1368.
• an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 1200 of FIG. 12 is indicated generally by the reference numeral 1400.
  • the video decoder 1400 includes an input buffer 1410 having an output connected in signal communication with a first input of an entropy decoder 1445.
  • a first output of the entropy decoder 1445 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 1450.
  • An output of the inverse transformer and inverse quantizer 1450 is connected in signal communication with a second non-inverting input of a combiner 1425.
• An output of the combiner 1425 is connected in signal communication with a second input of an in-loop filter-1 1466, a first input of an out-loop filter-1 1476, and a first input of an intra prediction module 1460.
• An output of the in-loop filter-1 1466 is connected in signal communication with an input of an in-loop filter-2 1467 and a first input of an out-loop filter-2 1477.
• An output of the in-loop filter-2 1467 is connected in signal communication with an input of an in-loop filter-N 1468 and a first input of an out-loop filter-N 1478.
• An output of the out-loop filter-1 1476, an output of the out-loop filter-2 1477, and an output of the out-loop filter-N 1478 are connected in signal communication with a first input, a second input, and a third input, respectively, of a combiner 1479.
  • An output of the in-loop filter-N 1468 is connected in signal communication with a first input of a reference picture buffer 1480.
  • An output of the reference picture buffer 1480 is connected in signal communication with a second input of a motion compensator 1470.
  • a second output of the entropy decoder 1445 is connected in signal communication with a third input of the motion compensator 1470, a first input of the in-loop filter-1 1466, and a third input of the intra predictor 1460.
  • a third output of the entropy decoder 1445 is connected in signal communication with an input of a decoder controller 1405.
  • a first output of the decoder controller 1405 is connected in signal communication with a second input of the entropy decoder 1445.
  • a second output of the decoder controller 1405 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 1450.
• a third output of the decoder controller 1405 is connected in signal communication with a second input of the out-loop filter-1 1476, a second input of the out-loop filter-2 1477, and a second input of the out-loop filter-N 1478.
  • a fourth output of the decoder controller 1405 is connected in signal communication with a second input of the intra prediction module 1460, a first input of the motion compensator 1470, and a second input of the reference picture buffer 1480.
  • An output of the motion compensator 1470 is connected in signal communication with a first input of a switch 1497.
  • An output of the intra prediction module 1460 is connected in signal communication with a second input of the switch 1497.
  • An output of the switch 1497 is connected in signal communication with a first non-inverting input of the combiner 1425.
  • An input of the input buffer 1410 is available as an input of the decoder 1400, for receiving an input bitstream.
  • An output of the combiner 1479 is available as an output of the decoder 1400, for outputting an output picture.
  • the video decoder 1400 includes an in-loop filtering structure 1401 that, in turn, includes in-loop filter-1 1466, in-loop filter-2 1467, and in-loop filter-N 1468. Moreover, the video decoder 1400 includes an out-loop filtering structure 1402 that, in turn, includes out-loop filter-1 1476, out-loop filter-2 1477, out-loop filter-N 1478, and combiner 1479.
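The wiring of the decoder 1400 reduces to a simple data flow: the in-loop filters run in series to build the reference picture, while each out-loop filter taps the picture entering the corresponding in-loop stage and the tapped outputs are merged into the display picture. The sketch below is only an illustration of that flow; the filter callables, the combining rule, and the function name are placeholders, not elements defined by this disclosure.

```python
import numpy as np

def decode_filtering_stage(rec, in_loop, out_loop, combine):
    """Illustrative data flow of the FIG. 12/FIG. 14 structure.

    rec      : reconstructed picture (2-D array)
    in_loop  : [f1, f2, ..., fN] applied in series; the last output feeds
               the reference picture buffer
    out_loop : [g1, g2, ..., gN]; g_k filters the picture entering in-loop
               stage k (so g_1 sees the unfiltered reconstruction)
    combine  : merges the out-loop outputs into the display picture
    """
    stage_inputs, x = [], rec
    for f in in_loop:
        stage_inputs.append(x)
        x = f(x)                      # serial in-loop chain
    reference = x                     # goes to the reference picture buffer 1480

    display = combine([g(s) for g, s in zip(out_loop, stage_inputs)])
    return reference, display

# Toy usage with identity stand-ins for the real filters and a plain average
# as the combiner (both are assumptions, not the patent's actual operations).
ident = lambda p: p
average = lambda outs: np.mean(outs, axis=0)
ref, disp = decode_filtering_stage(np.zeros((16, 16)), [ident] * 3, [ident] * 3, average)
```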
• Embodiment 5:
• Turning to FIG. 15, another exemplary filtering structure involving multiple in-loop filters and multiple out-loop filters is indicated generally by the reference numeral 1500.
  • the filtering structure 1500 includes an in-loop filter-1 1510, an in-loop filter-2 1520, an in-loop filter-N 1530, a combiner 1535, an out-loop filter-1 1540, an out-loop filter-2 1550, an out-loop filter-N 1560, and a combiner 1565.
  • An output of the in-loop filter-1 1510 is connected in signal communication with an input of the out-loop filter-1 1540 and a first input of the combiner 1535.
• An output of the in-loop filter-2 1520 is connected in signal communication with a second input of the combiner 1535.
  • An output of the in-loop filter-N 1530 is connected in signal communication with a third input of the combiner 1535.
• An output of the out-loop filter-1 1540, an output of the out-loop filter-2 1550, and an output of the out-loop filter-N 1560 are connected in signal communication with a first input, a second input, and a third input, respectively, of the combiner 1565.
  • An output of the combiner 1535 is available as an output of the filtering structure 1500, for outputting a reference picture.
  • An output of the combiner 1565 is available as an output of the filtering structure 1500, for outputting a display picture.
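In other words, the structure 1500 runs its in-loop filters in parallel on the same reconstructed picture and merges their outputs into the reference picture, while the display picture is merged from out-loop filters, only the first of which reuses an in-loop result. A minimal sketch of that flow follows; the filter functions and the combining rule are assumed placeholders (for instance a simple or weighted average), since they are not fixed by the text above.

```python
def filtering_structure_1500(rec, in_loop, out_loop, combine):
    """Illustrative data flow of FIG. 15 (and the decoder of FIG. 17).

    rec      : reconstructed picture
    in_loop  : parallel in-loop filters whose outputs feed combiner 1535/1769
    out_loop : out-loop filter-1 refines the first in-loop output; the other
               out-loop filters work directly on the reconstructed picture
    combine  : placeholder for the combiner (e.g., an average)
    """
    in_outs = [f(rec) for f in in_loop]                 # parallel in-loop stage
    reference = combine(in_outs)                        # reference picture

    out_inputs = [in_outs[0]] + [rec] * (len(out_loop) - 1)
    display = combine([g(x) for g, x in zip(out_loop, out_inputs)])
    return reference, display
```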
• Turning to FIG. 16, an exemplary MPEG-4 AVC Standard based video encoder corresponding to the filtering structure 1500 of FIG. 15 is indicated generally by the reference numeral 1600. The video encoder 1600 includes a frame ordering buffer 1610 having an output in signal communication with a non-inverting input of a combiner 1685.
  • An output of the combiner 1685 is connected in signal communication with a first input of a transformer and quantizer 1625.
  • An output of the transformer and quantizer 1625 is connected in signal communication with a first input of an entropy coder 1645 and a first input of an inverse transformer and inverse quantizer 1650.
  • An output of the entropy coder 1645 is connected in signal communication with a first non-inverting input of a combiner 1690.
  • An output of the combiner 1690 is connected in signal communication with a first input of an output buffer 1635.
  • a first output of an encoder controller 1605 is connected in signal communication with a second input of the frame ordering buffer 1610, a second input of the inverse transformer and inverse quantizer 1650, an input of a picture- type decision module 1615, a first input of a macroblock-type (MB-type) decision module 1620, a second input of an intra prediction module 1660, a second input of an in-loop filter-1 1666, a second input of an in-loop filter-2 1667, a second input of an in-loop filter-N 1668, a first input of a motion compensator 1670, a first input of a motion estimator 1675, and a second input of a reference picture buffer 1680.
  • a second output of the encoder controller 1605 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 1630, a second input of the transformer and quantizer 1625, a second input of the entropy coder 1645, a second input of the output buffer 1635, and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 1640.
  • An output of the SEI inserter 1630 is connected in signal communication with a second non-inverting input of the combiner 1690.
  • a first output of the picture-type decision module 1615 is connected in signal communication with a third input of the frame ordering buffer 1610.
  • a second output of the picture-type decision module 1615 is connected in signal communication with a second input of a macroblock-type decision module 1620.
  • An output of the inverse quantizer and inverse transformer 1650 is connected in signal communication with a first non-inverting input of a combiner 1619.
  • An output of the combiner 1619 is connected in signal communication with a first input of the intra prediction module 1660, a first input of the in-loop filter-1 1666, a first input of the in-loop filter-2 1667, and a first input of the in-loop filter-N 1668.
• An output of the in-loop filter-1 1666, an output of the in-loop filter-2 1667, and an output of the in-loop filter-N 1668 are connected in signal communication with a first input, a second input, and a third input, respectively, of a combiner 1669.
  • An output of the combiner 1669 is connected in signal communication with a first input of a reference picture buffer 1680.
  • An output of the reference picture buffer 1680 is connected in signal communication with a second input of the motion estimator 1675 and a third input of the motion compensator 1670.
  • a first output of the motion estimator 1675 is connected in signal communication with a second input of the motion compensator 1670.
  • a second output of the motion estimator 1675 is connected in signal communication with a third input of the entropy coder 1645.
  • An output of the motion compensator 1670 is connected in signal communication with a first input of a switch 1697.
  • An output of the intra prediction module 1660 is connected in signal communication with a second input of the switch 1697.
  • An output of the macroblock-type decision module 1620 is connected in signal communication with a third input of the switch 1697.
  • the third input of the switch 1697 determines whether or not the "data" input of the switch (as compared to the control input, i.e., the third input) is to be provided by the motion compensator 1670 or the intra prediction module 1660.
  • the output of the switch 1697 is connected in signal communication with a second non-inverting input of the combiner 1619 and an inverting input of the combiner 1685.
  • a first input of the frame ordering buffer 1610 and an input of the encoder controller 1605 are available as inputs of the encoder 1600, for receiving an input picture 1603.
  • a second input of the Supplemental Enhancement Information (SEI) inserter 1630 is available as an input of the encoder 1600, for receiving metadata.
  • An output of the output buffer 1635 is available as an output of the encoder 1600, for outputting a bitstream.
• the video encoder 1600 includes an in-loop filtering structure 1601 that, in turn, includes in-loop filter-1 1666, in-loop filter-2 1667, in-loop filter-N 1668, and a combiner 1669. Turning to FIG. 17, an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 1500 of FIG. 15 is indicated generally by the reference numeral 1700.
  • the video decoder 1700 includes an input buffer 1710 having an output connected in signal communication with a first input of an entropy decoder 1745.
  • a first output of the entropy decoder 1745 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 1750.
  • An output of the inverse transformer and inverse quantizer 1750 is connected in signal communication with a second non-inverting input of a combiner 1725.
  • An output of the combiner 1725 is connected in signal communication with an input of an in-loop filter-1 1766, an input of an in-loop filter-2 1767, a first input of an in-loop filter-N 1768, an input of an out-loop filter-N 1778, an input of an out-loop filter-2 1777, and a first input of an intra prediction module 1760.
  • An output of the in-loop filter-N 1768 is connected in signal communication with a third input of a combiner 1769.
  • An output of the in-loop filter-2 1767 is connected in signal communication with a second input of the combiner 1769.
• An output of the in-loop filter-1 1766 is connected in signal communication with a first input of the combiner 1769 and an input of an out-loop filter-1 1776.
  • An output of the combiner 1769 is connected in signal communication with a first input of a reference picture buffer 1780.
  • An output of the reference picture buffer 1780 is connected in signal communication with a second input of a motion compensator 1770.
• An output of the out-loop filter-1 1776, an output of the out-loop filter-2 1777, and an output of the out-loop filter-N 1778 are connected in signal communication with a first input, a second input, and a third input, respectively, of a combiner 1779.
  • a second output of the entropy decoder 1745 is connected in signal communication with a third input of the motion compensator 1770, a first input of the deblocking filter 1765, and a third input of the intra predictor 1760.
  • a third output of the entropy decoder 1745 is connected in signal communication with an input of a decoder controller 1705.
  • a first output of the decoder controller 1705 is connected in signal communication with a second input of the entropy decoder 1745.
  • a second output of the decoder controller 1705 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 1750.
  • a third output of the decoder controller 1705 is connected in signal communication with a third input of the deblocking filter 1765.
  • a fourth output of the decoder controller 1705 is connected in signal communication with a second input of the intra prediction module 1760, a first input of the motion compensator 1770, and a second input of the reference picture buffer 1780.
  • An output of the motion compensator 1770 is connected in signal communication with a first input of a switch 1797.
  • An output of the intra prediction module 1760 is connected in signal communication with a second input of the switch 1797.
  • An output of the switch 1797 is connected in signal communication with a first non-inverting input of the combiner 1725.
  • An input of the input buffer 1710 is available as an input of the decoder 1700, for receiving an input bitstream.
  • An output of the combiner 1779 is available as an output of the decoder 1700, for outputting an output picture.
  • the video decoder 1700 includes an in-loop filtering structure 1701 that, in turn, includes in-loop filter-1 1766, in-loop filter-2 1767, in-loop filter-N 1768, and a combiner 1769.
  • the video decoder 1700 includes an out-loop filtering structure 1702 that, in turn, includes out-loop filter-1 1776, out-loop filter-2 1777, out-loop filter-N 1778, and combiner 1779.
  • the in-loop filters and out-loop filters can be implemented as respective single filters or respective groups of filters, and so forth, arranged in series, in parallel, and so forth, while maintaining the scope and spirit of the present principles.
  • the selection of the filter deployment structure can be specified, for example, using one or more high level syntax elements.
• filter_deploy1: the filtering structure of FIG. 1
• filter_deploy2: the filtering structure of FIG. 5
• filter_deploy3: the filtering structure of FIG. 10
• For filter_deploy3 (FIG. 10), we presume that there are three out-loop filters, i.e., out-loop filter-1, out-loop filter-2, and out-loop filter-N.
  • TABLE 1 shows exemplary slice header syntax, in accordance with an embodiment of the present principles.
• new_filter_deployment_available equal to 1 specifies that there are new filter deployments available for encoding the slice.
• new_filter_deployment_available equal to 0 specifies that there is no new filter deployment available for encoding the slice. The default filter deployment is used.
• filter_deployment_idc indicates the filter deployment that will be used for generating the current slice. The value is selected from 0, 1, and 2.
• filter_deployment_ext_idc indicates the filter deployment extension information. If filter_deploy3 is selected, additional extension information is sent. In this example, filter_deployment_ext_idc indicates which of the three out-loop filter strategies is used.
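A possible reading of these slice-header elements is sketched below. The bitstream-reading helpers and the mapping of filter_deployment_idc values 0, 1, and 2 onto filter_deploy1, filter_deploy2, and filter_deploy3 are assumptions made for illustration; TABLE 1 itself defines the actual coding of the elements.

```python
def parse_filter_deployment_syntax(read_flag, read_uint):
    """Hypothetical parse of the TABLE 1 slice-header elements.

    read_flag / read_uint are placeholder callables that pull the next
    syntax element off the bitstream (their entropy coding is not shown).
    """
    syntax = {"new_filter_deployment_available": read_flag()}
    if syntax["new_filter_deployment_available"] == 1:
        # 0, 1 or 2, selecting filter_deploy1, filter_deploy2 or filter_deploy3
        syntax["filter_deployment_idc"] = read_uint()
        if syntax["filter_deployment_idc"] == 2:        # assumed: 2 = filter_deploy3
            # extension info, e.g. which of the three out-loop filters to apply
            syntax["filter_deployment_ext_idc"] = read_uint()
    return syntax
```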
• Turning to FIG. 18, an exemplary method for encoding picture data by selecting from a plurality of filtering structures is indicated generally by the reference numeral 1800. The method 1800 includes a start block 1805 that passes control to a function block 1810.
  • the function block 1810 encodes a slice, and passes control to a decision block 1815.
  • the decision block 1815 determines whether or not new_filter_deployment_available is equal to one. If so, then control is passed to a function block 1820, a function block 1825, a function block 1830, and a function block 1835. Otherwise, control is passed to a function block 1875.
• the function block 1820 performs filtering using a filter_deployment1, and passes control to a function block 1840.
  • the function block 1825 performs filtering using a filter_deployment2, and passes control to the function block 1840.
  • the function block 1830 performs filtering using a filter_deployment3, and passes control to the function block 1840.
  • the function block 1835 performs filtering using a default filtering process, and passes control to the function block 1840.
  • the function block 1840 selects the best filter deployment based on rate-distortion (RD) cost, sets filter_deployment_idc to the selected filter deployment, and passes control to a decision block 1845.
• the decision block 1845 determines whether or not filter_deploy3 is selected. If so, then control is passed to a function block 1850, a function block 1855, and a function block 1860. Otherwise, control is passed to a function block 1870.
  • the function block 1850 performs filtering using an out-loop filter-1 deployment, and passes control to a function block 1865.
  • the function block 1855 performs filtering using an out-loop filter-2 deployment, and passes control to the function block 1865.
  • the function block 1860 performs filtering using an out-loop filter-3 deployment, and passes control to the function block 1865.
  • the function block 1865 selects the best filter deployment based on the RD cost, sets Filter_deployment_ext_idc to the selected filter deployment, and passes control to the function block 1870.
  • the function block 1870 encodes the corresponding syntax for the selected filter deployments, and passes control to an end block 1899.
  • the function block 1875 performs filtering using a default filtering process, and passes control to the function block 1870.
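The decision logic of the method 1800 amounts to an exhaustive rate-distortion comparison, first over the candidate deployments and then, if filter_deploy3 wins, over its out-loop variants. The sketch below captures only that selection step; the candidate tables, the rate-distortion cost function, and the function names are illustrative assumptions.

```python
def select_filter_deployment(slice_rec, deployments, out_loop_variants, rd_cost):
    """Illustrative encoder-side selection (blocks 1820-1870 of FIG. 18).

    deployments       : e.g. {"filter_deploy1": f1, "filter_deploy2": f2,
                              "filter_deploy3": f3, "default": f_default}
    out_loop_variants : the out-loop deployments tried when filter_deploy3 wins
    rd_cost           : callable returning a rate-distortion cost for a result
    """
    results = {name: f(slice_rec) for name, f in deployments.items()}
    best = min(results, key=lambda n: rd_cost(results[n]))          # block 1840

    syntax = {"new_filter_deployment_available": 1,
              "filter_deployment_idc": best}
    if best == "filter_deploy3":                                     # block 1845
        ext = {name: g(slice_rec) for name, g in out_loop_variants.items()}
        syntax["filter_deployment_ext_idc"] = min(
            ext, key=lambda n: rd_cost(ext[n]))                      # block 1865
    return syntax            # block 1870 then encodes these syntax elements
```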
• Turning to FIG. 19, an exemplary method for decoding picture data by determining a particular filtering structure from a plurality of filtering structures is indicated generally by the reference numeral 1900.
  • the method 1900 includes a start block 1905 that passes control to a function block 1910.
  • the function block 1910 parses syntax, and passes control to a function block 1915.
  • the function block 1915 decodes a slice, and passes control to a decision block 1920.
  • the decision block 1920 determines whether or not new_filter_deployment_available is equal to one. If so, then control is passed to a function block 1925. Otherwise, control is passed to a function block 1930.
  • the function block 1925 filters the picture with the filter deployment indicated by filter_deployment_idc and filter_deployment_ext_idc, and passes control to an end block 1999.
  • the function block 1930 filters the picture with a default filter deployment, and passes control to the end block 1999.
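On the decoder side the method 1900 is purely table-driven: the parsed indices select a filter deployment that the decoder already holds, so no filter itself needs to be transmitted. A minimal sketch follows; the deployment table and its keying are assumptions made for illustration.

```python
def apply_signalled_deployment(picture, syntax, deployments, default_filter):
    """Illustrative decoder-side behaviour of FIG. 19 (blocks 1920-1930).

    deployments maps (filter_deployment_idc, filter_deployment_ext_idc) to a
    filtering function already present at the decoder.
    """
    if syntax.get("new_filter_deployment_available") == 1:           # block 1920
        key = (syntax["filter_deployment_idc"],
               syntax.get("filter_deployment_ext_idc"))
        return deployments[key](picture)                              # block 1925
    return default_filter(picture)                                    # block 1930
```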
  • one advantage/feature is an apparatus having a video decoder for decoding a picture, wherein the video decoder includes an in-loop filtering structure for performing in-loop filtering of a reconstructed version of the picture, wherein an impact of the in-loop filtering is decoupled from a display picture formed from the reconstructed version of the picture data.
  • Another advantage/feature is the apparatus having the video decoder as described above, wherein the display picture is not filtered by the in-loop filtering structure.
• Still another advantage/feature is the apparatus having the video decoder as described above, wherein the video decoder further includes an out-loop filtering structure, and the display picture is obtained by processing the reconstructed version of the picture using only the out-loop filtering structure from among the out-loop filtering structure and the in-loop filtering structure.
  • Yet another advantage/feature is the apparatus having the video decoder wherein the video decoder further includes an out-loop filtering structure, and the display picture is obtained by processing the reconstructed version of the picture using only the out-loop filtering structure from among the out-loop filtering structure and the in-loop filtering structure as described above, wherein the out-loop filtering structure is used to generate the display picture from the reconstructed version of the picture.
• Still yet another advantage/feature is the apparatus having the video decoder as described above, wherein the in-loop filtering structure comprises a plurality of in-loop filters.
• Moreover, another advantage/feature is the apparatus having the video decoder wherein the in-loop filtering structure comprises a plurality of in-loop filters as described above, wherein the plurality of in-loop filters included in the in-loop filtering structure are arranged such that an impact of only some of the plurality of in-loop filters is decoupled from the display picture.
• another advantage/feature is the apparatus having a video decoder that, in turn, has an in-loop filtering structure and an out-loop filtering structure, the in-loop filtering structure including one or more in-loop filters and the out-loop filtering structure including one or more out-loop filters, and wherein multiple filters from among at least one of the one or more in-loop filters and the one or more out-loop filters are selected and combined to generate at least one of a reference picture and a display picture from an original version of a picture.
  • another advantage/feature is the apparatus having the video decoder as described above, wherein at least one of a number of the multiple filters and one or more types of the multiple filters are determined from one or more high level syntax elements.
  • another advantage/feature is the apparatus having the video decoder as described above, wherein the multiple filters include at least one in-loop filter and at least one out-loop filter, and wherein the at least one in-loop filter is arranged such that an impact of the at least one in-loop filter is decoupled from the display picture.
• Further, another advantage/feature is the apparatus having the video decoder as described above, wherein the multiple filters include a plurality of in-loop filters, and wherein the plurality of in-loop filters are arranged such that an impact of only some of the plurality of in-loop filters is decoupled from the display picture.
  • the teachings of the present principles are implemented as a combination of hardware and software.
  • the software may be implemented as an application program tangibly embodied on a program storage unit.
  • the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
• the machine is implemented on a computer platform having hardware such as one or more central processing units ("CPU"), a random access memory ("RAM"), and input/output ("I/O") interfaces.
  • the computer platform may also include an operating system and microinstruction code.
  • the various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU.
  • various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.

Abstract

Methods and apparatus are provided for a generalized filtering structure for video coding and decoding. An apparatus includes a video decoder (600) for decoding a picture. The video decoder includes an in-loop filtering structure (601) for performing in-loop filtering of a reconstructed version of the picture. An impact of the in-loop filtering is decoupled from a display picture formed from the reconstructed version of the picture data.

Description

METHODS AND APPARATUS FOR A GENERALIZED FILTERING STRUCTURE
FOR VIDEO CODING AND DECODING
CROSS-REFERENCE TO RELATED APPLICATIONS This application claims the benefit of U.S. Provisional Application Serial No.
61/179,269, filed May 18, 2009 (Attorney Docket No. PU090037), which is incorporated by reference herein in its entirety.
TECHNICAL FIELD The present principles relate generally to video encoding and decoding and, more particularly, to methods and apparatus for a generalized filtering structure for video coding and decoding.
BACKGROUND In many coding standards, such as the International Organization for
Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) standard/International Telecommunication Union, Telecommunication Sector (ITU- T) H.264 Recommendation (hereinafter the "MPEG-4 AVC Standard"), filtering processes are involved in compressing a video source. For example, in the MPEG- 4 AVC Standard, the deblocking filters are used in-loop to remove blocky artifacts in order to better display the current picture and/or to provide a better reference frame for frames that are coded thereafter. To further improve the objective and/or subjective performance, a post filter can also be applied before the picture is displayed. Turning to FIG. 1 , an exemplary filtering structure, wherein both in-loop and out-loop filters exist, is indicated generally by the reference numeral 100. The filtering structure 100 includes an in-loop filter 110 and an out-loop filter 120. An output of the in-loop filter 110 is connected in signal communication with an input of the out-loop filter 120. An input of the in-loop filter 110 is available as an input of the filtering structure 100, for receiving a decoded picture. The output of the in-loop filter 110 is also available as an output of the filtering structure 100, for outputting a reference picture. An output of the out-loop filter 120 is available as an output of the filtering structure 100, for outputting a display picture. With respect to the filtering structure 100, both filters are directly additively applied to the pictures that will be displayed.
In some cases, the additive structure can improve the coding performance by reducing the bit rate and increasing the objective or visual qualities. However, when the in-loop and out-loop filters have some conflicting characteristics, such a filtering structure can worsen the coding performance.
Moreover, there are a significant amount of advanced filters proposed which have potential to further improve coding performance. For example, in the Video Coding Experts Group (VCEG) key technical area (KTA) software, a block-based adaptive loop filter (BALF) and a post filter are adopted.
A de-artifacting filter (DAF), as described with respect to a first prior art approach, demonstrates other types of filters which can remove more general compression artifacts other than blockiness. With so many filters, it is critical to design a reasonable structure to make the filters functionally work together. As shown in FIG. 1 , an in-loop filter is used to improve the quality of a current picture for displaying or to provide a better reference picture. The MPEG-4 AVC Standard uses a deblocking filter scheme to remove the blocky artifacts of the reconstructed picture.
In the first prior art approach, a de-artifacting filter replaces the deblocking filter, which can remove more general artifacts and provide higher compression quality. In the VCEG KTA, another in-loop filter was adopted, which is a block- based adaptive loop filter (BALF). BALF is a Wiener filter based approach. BALF estimates a set of filters at the encoder by minimizing the mean square error (MSE) between the original picture and reconstructed picture. The coefficients of the filters will be sent to the decoder as side information. The filter function can be switched on and off on a block-wise basis. The block size ranges from 16x16 to 128x128. The BALF is designed to work with the MPEG-4 AVC Standard deblocking filter as shown in FIG. 2. Turning to FIG. 2, an exemplary block-based adaptive loop filter (BALF) filtering structure is indicated generally by the reference numeral 200. The filtering structure 200 includes an in-loop filter 210 and a BALF 220. An output of the in-loop filter 210 is connected in signal communication with an input of the BALF 220. An input of the in-loop filter 210 is available as an input of the filtering structure 200, for receiving a decoded picture. An output of the BALF 220 is available as an output of the filtering structure 200, for outputting a display picture. The output of the BALF 220 is also available as an output of the filtering structure 200, for outputting a reference picture.
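To make the Wiener-filter idea behind BALF concrete, the toy sketch below estimates a small FIR kernel by least squares so that the filtered reconstruction approaches the original in the mean-square-error sense; the estimated coefficients would then be sent as side information. This is only an illustration of the principle, not the BALF algorithm, its block-wise on/off switching, or its actual filter shape.

```python
import numpy as np

def estimate_wiener_kernel(rec, orig, radius=1):
    """Least-squares estimate of a (2*radius+1)^2 kernel mapping the
    reconstructed picture towards the original (MSE-minimising sketch)."""
    k = 2 * radius + 1
    h, w = rec.shape
    rows, targets = [], []
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = rec[y - radius:y + radius + 1, x - radius:x + radius + 1]
            rows.append(patch.ravel())
            targets.append(orig[y, x])
    coeffs, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return coeffs.reshape(k, k)   # would be signalled to the decoder as side information
```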
As can be seen, both the displaying picture and the reference picture will be impacted by the in-loop filter. Since the reference picture is prepared for use in the coding of subsequent pictures, increasing the quality of the reference picture may reduce the number of bits needed to code the subsequent pictures. However, higher quality for reference picture purposes does not necessarily mean higher quality for displaying purposes. The other way around is also true. This means that we need to decouple the impacts of the in-loop filter to the reference picture and the display picture. A deblocking filter display preference supplemental enhancement information (SEI) message has been adopted into Annex D of the MPEG-4 AVC Standard. The deblocking filter display preference supplemental enhancement information message provides flexibility by allowing pictures to be filtered for motion compensation while using the unfiltered copy for output. By using the deblocking filter display preference supplemental enhancement information message, the preservation of picture details in output pictures is maintained, while allowing motion compensation to use less noisy reference pictures (the latter improving coding efficiency when pictures include a significant amount of details such as, for example, film grains). However, the deblocking filter display preference supplemental enhancement information message is defined just for the deblocking filter, so such message does not incorporate other filter deployments and is not suitable for multiple filters in the coding process.
SUMMARY
These and other drawbacks and disadvantages of the prior art are addressed by the present principles, which are directed to methods and apparatus for a generalized filtering structure for video coding and decoding.
According to an aspect of the present principles, there is provided an apparatus. The apparatus includes a video decoder for decoding a picture. The video decoder includes an in-loop filtering structure for performing in-loop filtering of a reconstructed version of the picture. An impact of the in-loop filtering is decoupled from a display picture formed from the reconstructed version of the picture data. According to another aspect of the present principles, there is provided a method in a video decoder. The method includes decoding a picture using an in- loop filtering structure included in the video decoder. The in-loop filtering structure is for performing in-loop filtering of a reconstructed version of the picture. An impact of the in-loop filtering is decoupled from a display picture formed from the reconstructed version of the picture data.
According to yet another aspect of the present principles, there is provided an apparatus. The apparatus includes a video decoder having an in-loop filtering structure and an out-loop filtering structure. The in-loop filtering structure includes one or more in-loop filters and the out-loop filtering structure includes one or more out-loop filters. Multiple filters from among at least one of the one or more in-loop filters and the one or more out-loop filters are selected and combined to generate at least one of a reference picture and a display picture from an original version of a picture. According to still another aspect of the present principles, there is provided a method in a video decoder. The method includes decoding a picture using at least one filter from among an in-loop filtering structure and an out-loop filtering structure included in the video decoder. The in-loop filtering structure has one or more in- loop filters and the out-loop filtering structure has one or more out-loop filters. Multiple filters from among at least one of the one or more in-loop filters and the one or more out-loop filters are selected and combined to generate at least one of a reference picture and a display picture from an original version of a picture.
These and other aspects, features and advantages of the present principles will become apparent from the following detailed description of exemplary embodiments, which is to be read in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
The present principles may be better understood in accordance with the following exemplary figures, in which: FIG. 1 is a block diagram showing an exemplary filtering structure, wherein both in-loop and out-loop filters exist, in accordance with an embodiment of the present principles; FIG. 2 is block diagram showing an exemplary block-based adaptive loop filter (BALF) filtering structure, in accordance with an embodiment of the present principles;
FIG. 3 is a block diagram showing an exemplary MPEG-4 AVC Standard based video encoder to which the present principles may be applied, in accordance with an embodiment of the present principles;
FIG. 4 is a block diagram showing an exemplary MPEG-4 AVC Standard based video decoder to which the present principles may be applied, in accordance with an embodiment of the present principles; FIG. 5 is a block diagram showing an exemplary filtering structure that decouples the impacts of the in-loop filter on a reference picture and a display picture, in accordance with an embodiment of the present principles;
FIG. 6 is a block diagram showing an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 500 of FIG. 5, in accordance with an embodiment of the present principles;
FIG. 7 is a block diagram showing an exemplary filtering structure for decoupling the impacts of an in-loop filter on reference pictures and display pictures, in accordance with an embodiment of the present principles;
FIG. 8 is a block diagram showing an exemplary MPEG-4 AVC Standard based video encoder corresponding to the filtering structure 700 of FIG. 7, in accordance with an embodiment of the present principles;
FIG. 9 is a block diagram showing an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 700 of FIG. 7, in accordance with an embodiment of the present principles; FIG. 10 is a block diagram showing an exemplary filtering structure involving multiple out-loop filters, in accordance with an embodiment of the present principles;
FIG. 11 is a block diagram showing an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 1000 of FIG. 10, in accordance with an embodiment of the present principles; FIG. 12 is a block diagram showing an exemplary filtering structure involving multiple in-loop filters and multiple out-loop filters, in accordance with an embodiment of the present principles; FIG. 13 is a block diagram showing an exemplary MPEG-4 AVC Standard based video encoder corresponding to the filtering structure 1200 of FIG. 12, in accordance with an embodiment of the present principles;
FIG. 14 is a block diagram showing an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 1200 of FIG. 12, in accordance with an embodiment of the present principles;
FIG. 15 is a block diagram showing another exemplary filtering structure involving multiple in-loop filters and multiple out-loop filters, in accordance with an embodiment of the present principles; FIG. 16 is a block diagram showing an exemplary MPEG-4 AVC Standard based video encoder corresponding to the filtering structure 1500 of FIG. 15, in accordance with an embodiment of the present principles;
FIG. 17 is a block diagram showing an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 1500 of FIG. 15, in accordance with an embodiment of the present principles;
FIG. 18 is a flow diagram showing an exemplary method for encoding picture data by selecting from a plurality of filtering structures, in accordance with an embodiment of the present principles; and
FIG. 19 is a flow diagram showing an exemplary method for decoding picture data by determining a particular filtering structure from a plurality of filtering structures, in accordance with an embodiment of the present principles.
DETAILED DESCRIPTION
The present principles are directed to methods and apparatus for a generalized filtering structure for video coding and decoding.
The present description illustrates the present principles. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the present principles and are included within its spirit and scope. All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the present principles and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the present principles, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the present principles. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor ("DSP") hardware, read-only memory ("ROM") for storing software, random access memory ("RAM"), and non-volatile storage.
Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The present principles as defined by such claims reside in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
Reference in the specification to "one embodiment" or "an embodiment" of the present principles, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, the appearances of the phrase "in one embodiment" or "in an embodiment", as well as any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment. It is to be appreciated that the use of any of the following "/", "and/or", and "at least one of", for example, in the cases of "A/B", "A and/or B" and "at least one of A and B", is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of "A, B, and/or C" and "at least one of A, B, and C", such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as readily apparent by one of ordinary skill in this and related arts, for as many items listed.
Also, as used herein, the words "picture" and "image" are used interchangeably and refer to a still image or a picture from a video sequence. As is known, a picture may be a frame or a field. Additionally, as used herein, the word "signal" refers to indicating something to a corresponding decoder. For example, the encoder may signal a particular filter deployment from among a plurality of available filter deployments in order to make the decoder aware of which particular filter deployment was used on the encoder side. In this way, the same filter deployment may be used at both the encoder side and the decoder side. Thus, for example, if the decoder already has the particular filter deployment as well as others, then signaling may be used (without transmitting) to simply allow the decoder to know and select the particular filter deployment. By avoiding transmission of any actual filter deployments, a bit savings may be realized. It is to be appreciated that signaling may be accomplished in a variety of ways. For example, one or more syntax elements, flags, and so forth may be used to signal information to a corresponding decoder.
Moreover, it is to be appreciated that while one or more embodiments of the present principles are described herein with respect to the MPEG-4 AVC standard, the present principles are not limited to solely this standard and, thus, may be utilized with respect to other video coding standards, recommendations, and extensions thereof, including extensions of the MPEG-4 AVC standard, while maintaining the spirit of the present principles. As noted above, in many video coding standards, filtering processes are involved in compressing a video source. As shown in FIG. 1 , the filters can be in- loop or out-loop (post filter). If both in-loop and out-loop filters exist during the coding process, their positions in the coding pipeline are generally shown in FIG. 1. Such a configuration means that both filters are directly additively applied to the pictures that will be displayed. In some case, neither the in-loop nor the out-loop filters can improve the coding performance of the current displaying pictures, or can have a good impact on subsequently coded pictures.
An out-loop filter is simpler than an in-loop filter, since the out-loop filter simply functions on the displaying picture. That means out-loop filters will not impact subsequently coded pictures. To improve the coding performance, it is possible that multiple post filters are applied to the pictures that will be displayed. However, how to deploy these filters is also a problem that needs to be solved. In accordance with the present principles, we propose new strategies to design the position of the filters to improve the coding performance. Turning to FIG. 3, an exemplary MPEG-4 AVC Standard based video encoder to which the present principles may be applied is indicated generally by the reference numeral 300. The video encoder 300 includes a frame ordering buffer 310 having an output in signal communication with a non-inverting input of a combiner 385. An output of the combiner 385 is connected in signal communication with a first input of a transformer and quantizer 325. An output of the transformer and quantizer 325 is connected in signal communication with a first input of an entropy coder 345 and a first input of an inverse transformer and inverse quantizer 350. An output of the entropy coder 345 is connected in signal communication with a first non-inverting input of a combiner 390. An output of the combiner 390 is connected in signal communication with a first input of an output buffer 335.
A first output of an encoder controller 305 is connected in signal communication with a second input of the frame ordering buffer 310, a second input of the inverse transformer and inverse quantizer 350, an input of a picture-type decision module 315, a first input of a macroblock-type (MB-type) decision module 320, a second input of an intra prediction module 360, a second input of a deblocking filter 365, a first input of a motion compensator 370, a first input of a motion estimator 375, and a second input of a reference picture buffer 380. A second output of the encoder controller 305 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 330, a second input of the transformer and quantizer 325, a second input of the entropy coder 345, a second input of the output buffer 335, and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 340. An output of the SEI inserter 330 is connected in signal communication with a second non-inverting input of the combiner 390.
A first output of the picture-type decision module 315 is connected in signal communication with a third input of the frame ordering buffer 310. A second output of the picture-type decision module 315 is connected in signal communication with a second input of a macroblock-type decision module 320.
An output of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 340 is connected in signal communication with a third non-inverting input of the combiner 390.
An output of the inverse quantizer and inverse transformer 350 is connected in signal communication with a first non-inverting input of a combiner 319. An output of the combiner 319 is connected in signal communication with a first input of the intra prediction module 360 and a first input of the deblocking filter 365. An output of the deblocking filter 365 is connected in signal communication with a first input of a reference picture buffer 380. An output of the reference picture buffer 380 is connected in signal communication with a second input of the motion estimator 375 and a third input of the motion compensator 370. A first output of the motion estimator 375 is connected in signal communication with a second input of the motion compensator 370. A second output of the motion estimator 375 is connected in signal communication with a third input of the entropy coder 345.
An output of the motion compensator 370 is connected in signal communication with a first input of a switch 397. An output of the intra prediction module 360 is connected in signal communication with a second input of the switch 397. An output of the macroblock-type decision module 320 is connected in signal communication with a third input of the switch 397. The third input of the switch 397 determines whether or not the "data" input of the switch (as compared to the control input, i.e., the third input) is to be provided by the motion compensator 370 or the intra prediction module 360. The output of the switch 397 is connected in signal communication with a second non-inverting input of the combiner 319 and an inverting input of the combiner 385.
A first input of the frame ordering buffer 310 and an input of the encoder controller 305 are available as inputs of the encoder 300, for receiving an input picture 303. Moreover, a second input of the Supplemental Enhancement Information (SEI) inserter 330 is available as an input of the encoder 300, for receiving metadata. An output of the output buffer 335 is available as an output of the encoder 300, for outputting a bitstream.
Turning to FIG. 4, an exemplary MPEG-4 AVC Standard based video decoder to which the present principles may be applied is indicated generally by the reference numeral 400. The video decoder 400 includes an input buffer 410 having an output connected in signal communication with a first input of an entropy decoder 445. A first output of the entropy decoder 445 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 450. An output of the inverse transformer and inverse quantizer 450 is connected in signal communication with a second non-inverting input of a combiner 425. An output of the combiner 425 is connected in signal communication with a second input of a deblocking filter 465 and a first input of an intra prediction module 460. A second output of the deblocking filter 465 is connected in signal communication with a first input of a reference picture buffer 480. An output of the reference picture buffer 480 is connected in signal communication with a second input of a motion compensator 470.
A second output of the entropy decoder 445 is connected in signal communication with a third input of the motion compensator 470, a first input of the deblocking filter 465, and a third input of the intra predictor 460. A third output of the entropy decoder 445 is connected in signal communication with an input of a decoder controller 405. A first output of the decoder controller 405 is connected in signal communication with a second input of the entropy decoder 445. A second output of the decoder controller 405 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 450. A third output of the decoder controller 405 is connected in signal communication with a third input of the deblocking filter 465. A fourth output of the decoder controller 405 is connected in signal communication with a second input of the intra prediction module 460, a first input of the motion compensator 470, and a second input of the reference picture buffer 480.
An output of the motion compensator 470 is connected in signal communication with a first input of a switch 497. An output of the intra prediction module 460 is connected in signal communication with a second input of the switch 497. An output of the switch 497 is connected in signal communication with a first non-inverting input of the combiner 425.
An input of the input buffer 410 is available as an input of the decoder 400, for receiving an input bitstream. A first output of the deblocking filter 465 is available as an output of the decoder 400, for outputting an output picture. As noted above, some filters are good for reference picture filtering only, some filters are good for displaying pictures only, and some filters work relatively well for both purposes. When multiple filters exist in the coding process, either in- loop or out-loop filters, a precise deployment of these filters is desired in order to improve the coding performance. Thus, in accordance with the present principles, we describe several schemes and embodiments for filter deployment structures that are based on the characteristics of filters and input signals. Thus, in accordance with the present principles, we describe new methods and apparatus to deploy filter(s) for improving compression performance. When there is filtering processing involved in compression, the position of one or more filters in the whole coding pipeline can be designed based on the filter characteristics and the statistics of the signals that need to be coded. The purpose of such strategies is to improve the compression performance, i.e., reducing the bit rate while improving objective and/or subjective gains.
Filter deployment structures:
In accordance with the present principles, new filter deployments are proposed to optimize the filtering performance for compression. We propose that the impact of in-loop filtering can be decoupled from picture display. One or more in-loop filters can be applied only to the reference picture and may not be applied to the display picture. In an embodiment, we can decouple the impact of in-loop filter from the displaying picture by setting the reconstructed picture to be the direct input of the out-loop filter. New filter deployment can be applied to the single in-loop and/or out-loop filter case and the multiple in-loop and/or out-loop filters case.
Embodiment 1:
Turning to FIG. 5, an exemplary filtering structure that decouples the impacts of the in-loop filter on a reference picture and a display picture is indicated generally by the reference numeral 500. The filtering structure 500 includes an in-loop filter 510 and an out-loop filter 520. An input of the in-loop filter 510 and an input of the out-loop filter are available as inputs of the filtering structure 500, for receiving a decoded picture. An output of the in-loop filter 510 is available as an output of the filtering structure 500, for outputting a reference picture. An output of the out-loop filter 520 is available as an output of the filtering structure 500, for outputting a display picture. To decouple the impact of the in-loop filter 510 on the reference pictures and the display pictures, the reconstructed (or decoded) picture is directly fed to the out-loop filter 520. There is no impact of the in-loop filter 510 to the display picture. However, the in-loop filter 510 is working on the reference pictures. It is to be appreciated that encoder 100 of FIG. 1 may be used to encode a bitstream that can be decoded using the filtering structure 500 of FIG. 5.
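Functionally, the structure 500 therefore splits one input into two independent filtering paths. The short sketch below states that split explicitly; the filter functions are placeholders standing in for whatever in-loop and out-loop filters are actually used.

```python
def filtering_structure_500(rec, in_loop_filter, out_loop_filter):
    """Illustration of the FIG. 5 decoupling."""
    reference = in_loop_filter(rec)   # shapes only the reference picture
    display = out_loop_filter(rec)    # reconstructed picture fed directly, so the
                                      # in-loop filter has no impact on the display
    return reference, display
```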
Turning to FIG. 6, an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 500 of FIG. 5 is indicated generally by the reference numeral 600. The video decoder 600 includes an input buffer 610 having an output connected in signal communication with a first input of an entropy decoder 645. A first output of the entropy decoder 645 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 650. An output of the inverse transformer and inverse quantizer 650 is connected in signal communication with a second non-inverting input of a combiner 625. An output of the combiner 625 is connected in signal communication with a second input of a deblocking filter 665, a first input of an intra prediction module 660, and an input of an out-loop filter 688. An output of the deblocking filter 665 is connected in signal communication with a first input of a reference picture buffer 680. An output of the reference picture buffer 680 is connected in signal communication with a second input of a motion compensator 670.
A second output of the entropy decoder 645 is connected in signal communication with a third input of the motion compensator 670, a first input of the deblocking filter 665, and a third input of the intra predictor 660. A third output of the entropy decoder 645 is connected in signal communication with an input of a decoder controller 605. A first output of the decoder controller 605 is connected in signal communication with a second input of the entropy decoder 645. A second output of the decoder controller 605 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 650. A third output of the decoder controller 605 is connected in signal communication with a third input of the deblocking filter 665. A fourth output of the decoder controller 605 is connected in signal communication with a second input of the intra prediction module 660, a first input of the motion compensator 670, and a second input of the reference picture buffer 680.
An output of the motion compensator 670 is connected in signal communication with a first input of a switch 697. An output of the intra prediction module 660 is connected in signal communication with a second input of the switch 697. An output of the switch 697 is connected in signal communication with a first non-inverting input of the combiner 625.
An input of the input buffer 610 is available as an input of the decoder 600, for receiving an input bitstream. An output of the out-loop filter 688 is available as an output of the decoder 600, for outputting an output picture. Thus, the video decoder 600 includes an in-loop filtering structure 601 that, in turn, includes deblocking filter 665. Moreover, the video decoder 600 includes an out-loop filtering structure 602 that, in turn, includes out-loop filter 688.
Embodiment 2:
FIG. 7 shows a more general embodiment, in which multiple in-loop filters are sequentially connected. With the structure shown in FIG. 7, the impact of some of the in-loop filters can be retained in the display picture while the impact of the others is removed. Turning to FIG. 7, an exemplary filtering structure for decoupling the impacts of an in-loop filter on reference pictures and display pictures is indicated generally by the reference numeral 700. The filtering structure 700 includes an in-loop filter-1 710, an in-loop filter-2 720, an in-loop filter-N 730, and an out-loop filter 740. An output of the in-loop filter-1 710 is connected in signal communication with an input of the in-loop filter-2 720 and an input of the out-loop filter 740. An output of the in-loop filter-2 720 is connected in signal communication with an input of the in-loop filter-N 730. An input of the in-loop filter-1 710 is available as an input of the filtering structure 700, for receiving a decoded picture. An output of the in-loop filter-N 730 is available as an output of the filtering structure 700, for outputting a reference picture. An output of the out-loop filter 740 is available as an output of the filtering structure 700, for outputting a display picture.
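A hedged sketch of this topology is given below: the in-loop filters run in series toward the reference picture, while the out-loop filter taps the output of the first in-loop filter so that only in-loop filter-1 affects the display picture. The filter callables, and the choice of the first filter as the tap point, are illustrative assumptions consistent with FIG. 7 as described above.

    def filtering_structure_700(decoded_picture, in_loop_filters, out_loop_filter):
        # in_loop_filters: sequence of callables, in-loop filter-1 .. in-loop filter-N.
        x = decoded_picture
        tap = decoded_picture
        for i, f in enumerate(in_loop_filters):
            x = f(x)                     # cascade toward the reference picture
            if i == 0:
                tap = x                  # branch point for the display path
        reference_picture = x
        display_picture = out_loop_filter(tap)
        return reference_picture, display_picture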
Turning to FIG. 8, an exemplary MPEG-4 AVC Standard based video encoder corresponding to the filtering structure 700 of FIG. 7 is indicated generally by the reference numeral 800. The video encoder 800 includes a frame ordering buffer 810 having an output in signal communication with a non-inverting input of a combiner 885. An output of the combiner 885 is connected in signal communication with a first input of a transformer and quantizer 825. An output of the transformer and quantizer 825 is connected in signal communication with a first input of an entropy coder 845 and a first input of an inverse transformer and inverse quantizer 850. An output of the entropy coder 845 is connected in signal communication with a first non-inverting input of a combiner 890. An output of the combiner 890 is connected in signal communication with a first input of an output buffer 835.
A first output of an encoder controller 805 is connected in signal communication with a second input of the frame ordering buffer 810, a second input of the inverse transformer and inverse quantizer 850, an input of a picture-type decision module 815, a first input of a macroblock-type (MB-type) decision module 820, a second input of an intra prediction module 860, a second input of an in-loop filter-1 866, a second input of an in-loop filter-2 867, a second input of an in-loop filter-N 868, a first input of a motion compensator 870, a first input of a motion estimator 875, and a second input of a reference picture buffer 880.
A second output of the encoder controller 805 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 830, a second input of the transformer and quantizer 825, a second input of the entropy coder 845, a second input of the output buffer 835, and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 840.
An output of the SEI inserter 830 is connected in signal communication with a second non-inverting input of the combiner 890.
A first output of the picture-type decision module 815 is connected in signal communication with a third input of the frame ordering buffer 810. A second output of the picture-type decision module 815 is connected in signal communication with a second input of a macroblock-type decision module 820.
An output of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 840 is connected in signal communication with a third non-inverting input of the combiner 890.
An output of the inverse quantizer and inverse transformer 850 is connected in signal communication with a first non-inverting input of a combiner 819. An output of the combiner 819 is connected in signal communication with a first input of the intra prediction module 860 and a first input of the in-loop filter-1 866. An output of the in-loop filter-1 866 is connected in signal communication with a first input of the in-loop filter-2 867. An output of the in-loop filter-2 867 is connected in signal communication with a first input of the in-loop filter-N 868. An output of the in-loop filter-N 868 is connected in signal communication with a first input of a reference picture buffer 880. An output of the reference picture buffer 880 is connected in signal communication with a second input of the motion estimator 875 and a third input of the motion compensator 870. A first output of the motion estimator 875 is connected in signal communication with a second input of the motion compensator 870. A second output of the motion estimator 875 is connected in signal communication with a third input of the entropy coder 845.
An output of the motion compensator 870 is connected in signal communication with a first input of a switch 897. An output of the intra prediction module 860 is connected in signal communication with a second input of the switch 897. An output of the macroblock-type decision module 820 is connected in signal communication with a third input of the switch 897. The third input of the switch 897 determines whether or not the "data" input of the switch (as compared to the control input, i.e., the third input) is to be provided by the motion compensator 870 or the intra prediction module 860. The output of the switch 897 is connected in signal communication with a second non-inverting input of the combiner 819 and an inverting input of the combiner 885.
A first input of the frame ordering buffer 810 and an input of the encoder controller 805 are available as inputs of the encoder 800, for receiving an input picture 803. Moreover, a second input of the Supplemental Enhancement Information (SEI) inserter 830 is available as an input of the encoder 800, for receiving metadata. An output of the output buffer 835 is available as an output of the encoder 800, for outputting a bitstream.
Thus, the video encoder 800 includes an in-loop filtering structure 801 that, in turn, includes in-loop filter-1 866, in-loop filter-2 867, and in-loop filter-N 868.
Turning to FIG. 9, an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 700 of FIG. 7 is indicated generally by the reference numeral 900. The video decoder 900 includes an input buffer 910 having an output connected in signal communication with a first input of an entropy decoder 945. A first output of the entropy decoder 945 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 950. An output of the inverse transformer and inverse quantizer 950 is connected in signal communication with a second non-inverting input of a combiner 925. An output of the combiner 925 is connected in signal communication with a second input of an in-loop filter-1 966 and a first input of an intra prediction module 960. An output of the in-loop filter-1 966 is connected in signal communication with a second input of an out-loop filter 969 and a second input of an in-loop filter-2 967. An output of the in-loop filter-2 967 is connected in signal communication with a second input of an in-loop filter-N 968. An output of the in-loop filter-N 968 is connected in signal communication with a first input of a reference picture buffer 980. An output of the reference picture buffer 980 is connected in signal communication with a second input of a motion compensator 970. A second output of the entropy decoder 945 is connected in signal communication with a third input of the motion compensator 970, a first input of the in-loop filter-1 966, and a third input of the intra predictor 960. A third output of the entropy decoder 945 is connected in signal communication with an input of a decoder controller 905. A first output of the decoder controller 905 is connected in signal communication with a second input of the entropy decoder 945. A second output of the decoder controller 905 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 950. A third output of the decoder controller 905 is connected in signal communication with a third input of the in-loop filter-1 966, a first input of the in-loop filter-2 967, a first input of the in-loop filter-N 968, and a first input of the out-loop filter 969. A fourth output of the decoder controller 905 is connected in signal communication with a second input of the intra prediction module 960, a first input of the motion compensator 970, and a second input of the reference picture buffer 980.
An output of the motion compensator 970 is connected in signal communication with a first input of a switch 997. An output of the intra prediction module 960 is connected in signal communication with a second input of the switch 997. An output of the switch 997 is connected in signal communication with a first non-inverting input of the combiner 925.
An input of the input buffer 910 is available as an input of the decoder 900, for receiving an input bitstream. An output of the out-loop filter 969 is available as an output of the decoder 900, for outputting an output picture.
Thus, the video decoder 900 includes an in-loop filtering structure 901 that, in turn, includes in-loop filter-1 966, in-loop filter-2 967, and in-loop filter-N 968. Moreover, the video decoder 900 includes an out-loop filtering structure 902 that, in turn, includes out-loop filter 969.
It is to be appreciated that while out-loop filter 969 is shown as a single filter, in other embodiments, the out-loop filter can be a set of multiple filters sequentially connected.

Embodiment 3:
In one or more embodiments of the present principles, we also propose to optimize the performance of multiple in-loop/out-loop filters. The final output of the in-loop/out-loop filters can be optimized by on/off switching or combination schemes, for example, involving linear or nonlinear combinations.
Turning to FIG. 10, an exemplary filtering structure involving multiple out-loop filters is indicated generally by the reference numeral 1000. In the filtering structure 1000, the display picture can be the output of the combination function F(.), which is fed with the output of multiple out-loop filters. The combination function F(.) can be an on/off switch, a linear combination, or a nonlinear combination. The filtering structure 1000 includes an in-loop filter 1005, an out-loop filter-1 1010, an out-loop filter-2 1020, an out-loop filter-N 1030, and a combiner 1040. The combiner 1040 implements the combination function F(.). An output of the in-loop filter 1005 is connected in signal communication with an input of the out-loop filter-1 1010, an input of the out-loop filter-2 1020, and an input of the out-loop filter-N 1030. An output of the out-loop filter-1 1010, an output of the out-loop filter-2 1020, and an output of the out-loop filter-N 1030 are connected in signal communication with a first input, a second input, and a third input, respectively, of the combiner 1040. An input of the in-loop filter 1005 is available as an input of the filtering structure 1000, for receiving a decoded picture. The output of the in-loop filter 1005 is also available as an output of the filtering structure 1000, for outputting a reference picture. An output of the combiner 1040 is available as an output of the filtering structure 1000, for outputting a display picture. It is to be appreciated that encoder 100 of FIG. 1 may be used to encode a bitstream that can be decoded using the filtering structure 1000 of FIG. 10.
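As a non-limiting sketch, F(.) can be read as a weighted merge of the out-loop filter outputs, with on/off switching as the special case of one-hot weights. The equal default weights and the array-based picture representation below are assumptions made for illustration, not values taken from the present principles.

    import numpy as np

    def combine_F(filter_outputs, weights=None):
        # Linear combination example of F(.); a one-hot weight vector reduces this
        # to on/off switching between the out-loop filters.
        outputs = [np.asarray(o, dtype=np.float64) for o in filter_outputs]
        if weights is None:
            weights = [1.0 / len(outputs)] * len(outputs)
        return sum(w * o for w, o in zip(weights, outputs))

    def filtering_structure_1000(decoded_picture, in_loop_filter, out_loop_filters, weights=None):
        reference_picture = in_loop_filter(decoded_picture)
        display_picture = combine_F([f(reference_picture) for f in out_loop_filters], weights)
        return reference_picture, display_picture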
Turning to FIG. 11, an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 1000 of FIG. 10 is indicated generally by the reference numeral 1100. The video decoder 1100 includes an input buffer 1110 having an output connected in signal communication with a first input of an entropy decoder 1145. A first output of the entropy decoder 1145 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 1150. An output of the inverse transformer and inverse quantizer 1150 is connected in signal communication with a second non-inverting input of a combiner 1125. An output of the combiner 1125 is connected in signal communication with a second input of an in-loop filter 1165 and a first input of an intra prediction module 1160. An output of the in-loop filter 1165 is connected in signal communication with a first input of a reference picture buffer 1180, a first input of an out-loop filter-1 1166, a first input of an out-loop filter-2 1167, and a first input of an out-loop filter-N 1168. An output of the reference picture buffer 1180 is connected in signal communication with a second input of a motion compensator 1170. A second output of the entropy decoder 1145 is connected in signal communication with a third input of the motion compensator 1170, a first input of the in-loop filter 1165, and a third input of the intra predictor 1160. A third output of the entropy decoder 1145 is connected in signal communication with an input of a decoder controller 1105. A first output of the decoder controller 1105 is connected in signal communication with a second input of the entropy decoder 1145. A second output of the decoder controller 1105 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 1150. A third output of the decoder controller 1105 is connected in signal communication with a third input of the in-loop filter 1165, a second input of the out-loop filter-1 1166, a second input of the out-loop filter-2 1167, and a second input of the out-loop filter-N 1168. A fourth output of the decoder controller 1105 is connected in signal communication with a second input of the intra prediction module 1160, a first input of the motion compensator 1170, and a second input of the reference picture buffer 1180. An output of the out-loop filter-1 1166, an output of the out-loop filter-2 1167, and an output of the out-loop filter-N 1168 are connected in signal communication with a first input, a second input, and a third input, respectively, of a combiner 1169.
An output of the motion compensator 1170 is connected in signal communication with a first input of a switch 1197. An output of the intra prediction module 1160 is connected in signal communication with a second input of the switch 1197. An output of the switch 1197 is connected in signal communication with a first non-inverting input of the combiner 1125. An input of the input buffer 1110 is available as an input of the decoder 1100, for receiving an input bitstream. An output of the combiner 1169 is available as an output of the decoder 1100, for outputting an output picture.
Thus, the video decoder 1100 includes an in-loop filtering structure 1101 that, in turn, includes in-loop filter-1 1165. Moreover, the video decoder 1100 includes an out-loop filtering structure 1102 that, in turn, includes out-loop filter-1 1166, out-loop filter-2 1167, out-loop filter-N 1168, and combiner 1169.
Embodiment 4:

Combining the above ideas for in-loop and out-loop filters, we have more general filter deployment strategies for in-loop and out-loop filters.
Turning to FIG. 12, an exemplary filtering structure involving multiple in-loop filters and multiple out-loop filters is indicated generally by the reference numeral 1200. In the filtering structure 1200, the in-loop filters are connected sequentially. The filtering structure 1200 includes an in-loop filter-1 1210, an in-loop filter-2 1220, an in-loop filter-N 1230, an out-loop filter-1 1240, an out-loop filter-2 1250, an out-loop filter-N 1260, and a combiner 1270. An output of the in-loop filter-1 1210 is connected in signal communication with an input of the in-loop filter-2 1220 and an input of the out-loop filter-2 1250. An output of the in-loop filter-2 1220 is connected in signal communication with an input of the in-loop filter-N 1230 and an input of the out-loop filter-N 1260. An output of the out-loop filter-1 1240, an output of the out-loop filter-2 1250, and an output of the out-loop filter-N 1260 are connected in signal communication with a first input, a second input, and a third input, respectively, of the combiner 1270. An input of the in-loop filter-1 1210 and an input of the out-loop filter-1 1240 are available as inputs of the filtering structure 1200, for receiving a decoded picture. An output of the in-loop filter-N 1230 is available as an output of the filtering structure 1200, for outputting a reference picture. An output of the combiner 1270 is available as an output of the filtering structure 1200, for outputting a display picture.

Turning to FIG. 13, an exemplary MPEG-4 AVC Standard based video encoder corresponding to the filtering structure 1200 of FIG. 12 is indicated generally by the reference numeral 1300. The video encoder 1300 includes a frame ordering buffer 1310 having an output in signal communication with a non-inverting input of a combiner 1385. An output of the combiner 1385 is connected in signal communication with a first input of a transformer and quantizer 1325. An output of the transformer and quantizer 1325 is connected in signal communication with a first input of an entropy coder 1345 and a first input of an inverse transformer and inverse quantizer 1350. An output of the entropy coder 1345 is connected in signal communication with a first non-inverting input of a combiner 1390. An output of the combiner 1390 is connected in signal communication with a first input of an output buffer 1335.
A first output of an encoder controller 1305 is connected in signal communication with a second input of the frame ordering buffer 1310, a second input of the inverse transformer and inverse quantizer 1350, an input of a picture-type decision module 1315, a first input of a macroblock-type (MB-type) decision module 1320, a second input of an intra prediction module 1360, a second input of an in-loop filter-1 1366, a first input of a motion compensator 1370, a first input of a motion estimator 1375, and a second input of a reference picture buffer 1380.
A second output of the encoder controller 1305 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 1330, a second input of the transformer and quantizer 1325, a second input of the entropy coder 1345, a second input of the output buffer 1335, and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 1340.
An output of the SEI inserter 1330 is connected in signal communication with a second non-inverting input of the combiner 1390.
A first output of the picture-type decision module 1315 is connected in signal communication with a third input of the frame ordering buffer 1310. A second output of the picture-type decision module 1315 is connected in signal communication with a second input of a macroblock-type decision module 1320.
An output of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 1340 is connected in signal communication with a third non-inverting input of the combiner 1390.
An output of the inverse quantizer and inverse transformer 1350 is connected in signal communication with a first non-inverting input of a combiner 1319. An output of the combiner 1319 is connected in signal communication with a first input of the intra prediction module 1360 and a first input of the in-loop filter-1 1366. An output of the in-loop filter-1 1366 is connected in signal communication with an input of an in-loop filter-2 1367. An output of the in-loop filter-2 1367 is connected in signal communication with an input of an in-loop filter-N 1368. An output of the in- loop filter-N 1368 is connected in signal communication with a first input of a reference picture buffer 1380. An output of the reference picture buffer 1380 is connected in signal communication with a second input of the motion estimator 1375 and a third input of the motion compensator 1370. A first output of the motion estimator 1375 is connected in signal communication with a second input of the motion compensator 1370. A second output of the motion estimator 1375 is connected in signal communication with a third input of the entropy coder 1345.
An output of the motion compensator 1370 is connected in signal communication with a first input of a switch 1397. An output of the intra prediction module 1360 is connected in signal communication with a second input of the switch 1397. An output of the macroblock-type decision module 1320 is connected in signal communication with a third input of the switch 1397. The third input of the switch 1397 determines whether or not the "data" input of the switch (as compared to the control input, i.e., the third input) is to be provided by the motion compensator 1370 or the intra prediction module 1360. The output of the switch 1397 is connected in signal communication with a second non-inverting input of the combiner 1319 and an inverting input of the combiner 1385.
A first input of the frame ordering buffer 1310 and an input of the encoder controller 1305 are available as inputs of the encoder 1300, for receiving an input picture 1303. Moreover, a second input of the Supplemental Enhancement Information (SEI) inserter 1330 is available as an input of the encoder 1300, for receiving metadata. An output of the output buffer 1335 is available as an output of the encoder 1300, for outputting a bitstream.
Thus, the video encoder 1300 includes an in-loop filtering structure 1301 that, in turn, includes in-loop filter-1 1366, in-loop filter-2 1367, and in-loop filter-N 1368.

Turning to FIG. 14, an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 1200 of FIG. 12 is indicated generally by the reference numeral 1400. The video decoder 1400 includes an input buffer 1410 having an output connected in signal communication with a first input of an entropy decoder 1445. A first output of the entropy decoder 1445 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 1450. An output of the inverse transformer and inverse quantizer 1450 is connected in signal communication with a second non-inverting input of a combiner 1425. An output of the combiner 1425 is connected in signal communication with a second input of an in-loop filter-1 1466, a first input of an out-loop filter-1 1476, and a first input of an intra prediction module 1460. An output of the in-loop filter-1 1466 is connected in signal communication with an input of an in-loop filter-2 1467 and a first input of an out-loop filter-2 1477. An output of the in-loop filter-2 1467 is connected in signal communication with an input of an in-loop filter-N 1468 and a first input of an out-loop filter-N 1478.
An output of the out-loop filter-1 1476, an output of the out-loop filter-2 1477, and an output of the out-loop filter-N 1478 are connected in signal communication with a first input, a second input, and a third input, respectively, of a combiner 1479. An output of the in-loop filter-N 1468 is connected in signal communication with a first input of a reference picture buffer 1480. An output of the reference picture buffer 1480 is connected in signal communication with a second input of a motion compensator 1470.
A second output of the entropy decoder 1445 is connected in signal communication with a third input of the motion compensator 1470, a first input of the in-loop filter-1 1466, and a third input of the intra predictor 1460. A third output of the entropy decoder 1445 is connected in signal communication with an input of a decoder controller 1405. A first output of the decoder controller 1405 is connected in signal communication with a second input of the entropy decoder 1445. A second output of the decoder controller 1405 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 1450. A third output of the decoder controller 1405 is connected in signal communication with a second input of the out-loop filter-1 1476, a second input of the out-loop filter-2
1477, and a second input of the out-loop filter-N 1478. A fourth output of the decoder controller 1405 is connected in signal communication with a second input of the intra prediction module 1460, a first input of the motion compensator 1470, and a second input of the reference picture buffer 1480.
An output of the motion compensator 1470 is connected in signal communication with a first input of a switch 1497. An output of the intra prediction module 1460 is connected in signal communication with a second input of the switch 1497. An output of the switch 1497 is connected in signal communication with a first non-inverting input of the combiner 1425.
An input of the input buffer 1410 is available as an input of the decoder 1400, for receiving an input bitstream. An output of the combiner 1479 is available as an output of the decoder 1400, for outputting an output picture.
Thus, the video decoder 1400 includes an in-loop filtering structure 1401 that, in turn, includes in-loop filter-1 1466, in-loop filter-2 1467, and in-loop filter-N 1468. Moreover, the video decoder 1400 includes an out-loop filtering structure 1402 that, in turn, includes out-loop filter-1 1476, out-loop filter-2 1477, out-loop filter-N 1478, and combiner 1479.
Embodiment 5:
Turning to FIG. 15, another exemplary filtering structure involving multiple in-loop filters and multiple out-loop filters is indicated generally by the reference numeral 1500. In the filtering structure 1500, the in-loop filters are connected in parallel. The filtering structure 1500 includes an in-loop filter-1 1510, an in-loop filter-2 1520, an in-loop filter-N 1530, a combiner 1535, an out-loop filter-1 1540, an out-loop filter-2 1550, an out-loop filter-N 1560, and a combiner 1565. An output of the in-loop filter-1 1510 is connected in signal communication with an input of the out-loop filter-1 1540 and a first input of the combiner 1535. An output of the in-loop filter-2 1520 is connected in signal communication with a second input of the combiner 1535. An output of the in-loop filter-N 1530 is connected in signal communication with a third input of the combiner 1535. An output of the out-loop filter-1 1540, an output of the out-loop filter-2 1550, and an output of the out-loop filter-N 1560 are connected in signal communication with a first input, a second input, and a third input, respectively, of the combiner 1565. An output of the combiner 1535 is available as an output of the filtering structure 1500, for outputting a reference picture. An output of the combiner 1565 is available as an output of the filtering structure 1500, for outputting a display picture.
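The parallel arrangement can be sketched as follows. Plain averaging stands in for the combiners 1535 and 1565, and the routing of the out-loop inputs (out-loop filter-1 fed by in-loop filter-1, the remaining out-loop filters fed by the decoded picture, consistent with decoder 1700 described below) is an illustrative assumption rather than a normative statement.

    import numpy as np

    def average(outputs):
        # Assumed combiner: plain averaging of the parallel filter outputs.
        return sum(np.asarray(o, dtype=np.float64) for o in outputs) / len(outputs)

    def filtering_structure_1500(decoded_picture, in_loop_filters, out_loop_filters):
        in_loop_outputs = [f(decoded_picture) for f in in_loop_filters]   # parallel in-loop filters
        reference_picture = average(in_loop_outputs)                      # stands in for combiner 1535

        # Out-loop filter-1 taps in-loop filter-1; the other out-loop filters are
        # assumed to take the decoded picture directly.
        out_loop_inputs = [in_loop_outputs[0]] + [decoded_picture] * (len(out_loop_filters) - 1)
        display_picture = average([f(x) for f, x in zip(out_loop_filters, out_loop_inputs)])  # combiner 1565
        return reference_picture, display_picture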
Turning to FIG. 16, an exemplary MPEG-4 AVC Standard based video encoder corresponding to the filtering structure 1500 of FIG. 15 is indicated generally by the reference numeral 1600. The video encoder 1600 includes a frame ordering buffer 1610 having an output in signal communication with a non-inverting input of a combiner 1685. An output of the combiner 1685 is connected in signal communication with a first input of a transformer and quantizer 1625. An output of the transformer and quantizer 1625 is connected in signal communication with a first input of an entropy coder 1645 and a first input of an inverse transformer and inverse quantizer 1650. An output of the entropy coder 1645 is connected in signal communication with a first non-inverting input of a combiner 1690. An output of the combiner 1690 is connected in signal communication with a first input of an output buffer 1635. A first output of an encoder controller 1605 is connected in signal communication with a second input of the frame ordering buffer 1610, a second input of the inverse transformer and inverse quantizer 1650, an input of a picture-type decision module 1615, a first input of a macroblock-type (MB-type) decision module 1620, a second input of an intra prediction module 1660, a second input of an in-loop filter-1 1666, a second input of an in-loop filter-2 1667, a second input of an in-loop filter-N 1668, a first input of a motion compensator 1670, a first input of a motion estimator 1675, and a second input of a reference picture buffer 1680.
A second output of the encoder controller 1605 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 1630, a second input of the transformer and quantizer 1625, a second input of the entropy coder 1645, a second input of the output buffer 1635, and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 1640. An output of the SEI inserter 1630 is connected in signal communication with a second non-inverting input of the combiner 1690. A first output of the picture-type decision module 1615 is connected in signal communication with a third input of the frame ordering buffer 1610. A second output of the picture-type decision module 1615 is connected in signal communication with a second input of a macroblock-type decision module 1620.
An output of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 1640 is connected in signal communication with a third non-inverting input of the combiner 1690.
An output of the inverse quantizer and inverse transformer 1650 is connected in signal communication with a first non-inverting input of a combiner 1619. An output of the combiner 1619 is connected in signal communication with a first input of the intra prediction module 1660, a first input of the in-loop filter-1 1666, a first input of the in-loop filter-2 1667, and a first input of the in-loop filter-N 1668. An output of the in-loop filter-1 1666, an output of the in-loop filter-2 1667, and an output of the in-loop filter-N 1668 are connected to a first input, a second input, and a third input, respectively, of a combiner 1669. An output of the combiner 1669 is connected in signal communication with a first input of a reference picture buffer 1680. An output of the reference picture buffer 1680 is connected in signal communication with a second input of the motion estimator 1675 and a third input of the motion compensator 1670. A first output of the motion estimator 1675 is connected in signal communication with a second input of the motion compensator 1670. A second output of the motion estimator 1675 is connected in signal communication with a third input of the entropy coder 1645.
An output of the motion compensator 1670 is connected in signal communication with a first input of a switch 1697. An output of the intra prediction module 1660 is connected in signal communication with a second input of the switch 1697. An output of the macroblock-type decision module 1620 is connected in signal communication with a third input of the switch 1697. The third input of the switch 1697 determines whether or not the "data" input of the switch (as compared to the control input, i.e., the third input) is to be provided by the motion compensator 1670 or the intra prediction module 1660. The output of the switch 1697 is connected in signal communication with a second non-inverting input of the combiner 1619 and an inverting input of the combiner 1685.
A first input of the frame ordering buffer 1610 and an input of the encoder controller 1605 are available as inputs of the encoder 1600, for receiving an input picture 1603. Moreover, a second input of the Supplemental Enhancement Information (SEI) inserter 1630 is available as an input of the encoder 1600, for receiving metadata. An output of the output buffer 1635 is available as an output of the encoder 1600, for outputting a bitstream.

Thus, the video encoder 1600 includes an in-loop filtering structure 1601 that, in turn, includes in-loop filter-1 1666, in-loop filter-2 1667, in-loop filter-N 1668, and a combiner 1669.

Turning to FIG. 17, an exemplary MPEG-4 AVC Standard based video decoder corresponding to the filtering structure 1500 of FIG. 15 is indicated generally by the reference numeral 1700. The video decoder 1700 includes an input buffer 1710 having an output connected in signal communication with a first input of an entropy decoder 1745. A first output of the entropy decoder 1745 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 1750. An output of the inverse transformer and inverse quantizer 1750 is connected in signal communication with a second non-inverting input of a combiner 1725. An output of the combiner 1725 is connected in signal communication with an input of an in-loop filter-1 1766, an input of an in-loop filter-2 1767, a first input of an in-loop filter-N 1768, an input of an out-loop filter-N 1778, an input of an out-loop filter-2 1777, and a first input of an intra prediction module 1760. An output of the in-loop filter-N 1768 is connected in signal communication with a third input of a combiner 1769. An output of the in-loop filter-2 1767 is connected in signal communication with a second input of the combiner 1769. An output of the in-loop filter-1 1766 is connected in signal communication with a first input of the combiner 1769 and an input of an out-loop filter-1 1776. An output of the combiner 1769 is connected in signal communication with a first input of a reference picture buffer 1780. An output of the reference picture buffer 1780 is connected in signal communication with a second input of a motion compensator 1770. An output of the out-loop filter-1 1776, an output of the out-loop filter-2 1777, and an output of the out-loop filter-N 1778 are connected in signal communication with a first input, a second input, and a third input, respectively, of a combiner 1779.
A second output of the entropy decoder 1745 is connected in signal communication with a third input of the motion compensator 1770, a first input of the in-loop filter-1 1766, and a third input of the intra predictor 1760. A third output of the entropy decoder 1745 is connected in signal communication with an input of a decoder controller 1705. A first output of the decoder controller 1705 is connected in signal communication with a second input of the entropy decoder 1745. A second output of the decoder controller 1705 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 1750. A third output of the decoder controller 1705 is connected in signal communication with a third input of the in-loop filter-1 1766. A fourth output of the decoder controller 1705 is connected in signal communication with a second input of the intra prediction module 1760, a first input of the motion compensator 1770, and a second input of the reference picture buffer 1780.
An output of the motion compensator 1770 is connected in signal communication with a first input of a switch 1797. An output of the intra prediction module 1760 is connected in signal communication with a second input of the switch 1797. An output of the switch 1797 is connected in signal communication with a first non-inverting input of the combiner 1725.
An input of the input buffer 1710 is available as an input of the decoder 1700, for receiving an input bitstream. An output of the combiner 1779 is available as an output of the decoder 1700, for outputting an output picture.
Thus, the video decoder 1700 includes an in-loop filtering structure 1701 that, in turn, includes in-loop filter-1 1766, in-loop filter-2 1767, in-loop filter-N 1768, and a combiner 1769. Moreover, the video decoder 1700 includes an out-loop filtering structure 1702 that, in turn, includes out-loop filter-1 1776, out-loop filter-2 1777, out-loop filter-N 1778, and combiner 1779.
Given the teachings of the present principles provided herein, it is to be appreciated that with respect to encoder 1600 and decoder 1700 (as well as the other encoders and decoders described herein), the in-loop filters and out-loop filters can be implemented as respective single filters or respective groups of filters, and so forth, arranged in series, in parallel, and so forth, while maintaining the scope and spirit of the present principles.
Syntax:

The selection of the filter deployment structure can be specified, for example, using one or more high level syntax elements. In one embodiment, we show through an example how to define the syntax used to apply the proposed filter deployments. We presume that there are three new filter deployments in the example, which are described by FIG. 1 (filter_deploy1), FIG. 5 (filter_deploy2), and FIG. 10 (filter_deploy3). In FIG. 10, we presume that there are three out-loop filters, i.e., out-loop filter-1, out-loop filter-2, and out-loop filter-N. TABLE 1 shows exemplary slice header syntax, in accordance with an embodiment of the present principles.

TABLE 1
[TABLE 1 is provided as an image in the original publication and is not reproduced here.]
The semantics of some of the syntax elements shown in TABLE 1 are as follows:
new_filter_deployment_available equal to 1 specifies that there are new filter deployments available for encoding the slice. new_filter_deployment_available equal to 0 specifies that there is no new filter deployment available for encoding the slice. The default filter deployment is used.
filter_deployment_idc indicates the filter deployment that will be used for generating the current slice. In the example, the value is selected from 0, 1, and 2.
filter_deployment_ext_idc indicates the filter deployment extension information. If filter_deploy3 is selected, additional extension information is sent. In this example, filter_deployment_ext_idc indicates the out-loop filter strategy selected from among the three out-loop filters.
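A minimal parsing sketch for these elements follows. The bitstream-reading callables and the assumption that the value 2 of filter_deployment_idc selects filter_deploy3 are illustrative; only the element names and the gating of the extension element come from the semantics above.

    def parse_filter_deployment_syntax(read_flag, read_ue):
        # read_flag / read_ue are assumed bitstream readers (1-bit flag, unsigned Exp-Golomb).
        syntax = {"new_filter_deployment_available": read_flag()}
        if syntax["new_filter_deployment_available"]:
            syntax["filter_deployment_idc"] = read_ue()        # 0, 1, or 2 in this example
            if syntax["filter_deployment_idc"] == 2:           # assumed to mean filter_deploy3
                syntax["filter_deployment_ext_idc"] = read_ue()
        return syntax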
Turning to FIG. 18, an exemplary method for encoding picture data by selecting from a plurality of filtering structures is indicated generally by the reference numeral 1800. The method 1800 includes a start block 1805 that passes control to a function block 1810. The function block 1810 encodes a slice, and passes control to a decision block 1815. The decision block 1815 determines whether or not new_filter_deployment_available is equal to one. If so, then control is passed to a function block 1820, a function block 1825, a function block 1830, and a function block 1835. Otherwise, control is passed to a function block 1875. The function block 1820 performs filtering using a filter_deployment1, and passes control to a function block 1840. The function block 1825 performs filtering using a filter_deployment2, and passes control to the function block 1840. The function block 1830 performs filtering using a filter_deployment3, and passes control to the function block 1840. The function block 1835 performs filtering using a default filtering process, and passes control to the function block 1840. The function block 1840 selects the best filter deployment based on rate-distortion (RD) cost, sets filter_deployment_idc to the selected filter deployment, and passes control to a decision block 1845. The decision block 1845 determines whether or not filter_deploy3 is selected. If so, then control is passed to a function block 1850, a function block 1855, and a function block 1860. Otherwise, control is passed to a function block 1870.
The function block 1850 performs filtering using an out-loop filter-1 deployment, and passes control to a function block 1865. The function block 1855 performs filtering using an out-loop filter-2 deployment, and passes control to the function block 1865. The function block 1860 performs filtering using an out-loop filter-3 deployment, and passes control to the function block 1865. The function block 1865 selects the best filter deployment based on the RD cost, sets filter_deployment_ext_idc to the selected filter deployment, and passes control to the function block 1870.
The function block 1870 encodes the corresponding syntax for the selected filter deployments, and passes control to an end block 1899.
The function block 1875 performs filtering using a default filtering process, and passes control to the function block 1870.

Turning to FIG. 19, an exemplary method for decoding picture data by determining a particular filtering structure from a plurality of filtering structures is indicated generally by the reference numeral 1900. The method 1900 includes a start block 1905 that passes control to a function block 1910. The function block 1910 parses syntax, and passes control to a function block 1915. The function block 1915 decodes a slice, and passes control to a decision block 1920. The decision block 1920 determines whether or not new_filter_deployment_available is equal to one. If so, then control is passed to a function block 1925. Otherwise, control is passed to a function block 1930. The function block 1925 filters the picture with the filter deployment indicated by filter_deployment_idc and filter_deployment_ext_idc, and passes control to an end block 1999.
The function block 1930 filters the picture with a default filter deployment, and passes control to the end block 1999.
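Taken together, the methods of FIG. 18 and FIG. 19 amount to an encoder-side rate-distortion search over the candidate deployments and a decoder-side lookup of the signalled choice. The sketch below assumes the deployments are supplied as callables and that the rate-distortion cost is evaluated externally; these interfaces are illustrative and are not defined by the present principles.

    def encoder_select_deployment(reconstruction, deployments, out_loop_options, rd_cost):
        # deployments / out_loop_options: dicts mapping an idc value to a filtering
        # callable of the form deployment(picture, ext_idc=None); the default process
        # is assumed to be included among the candidates (function block 1835).
        best_idc = min(deployments, key=lambda idc: rd_cost(deployments[idc](reconstruction)))
        ext_idc = None
        if best_idc == 2:  # assumed idc value for filter_deploy3, which carries extension info
            ext_idc = min(out_loop_options,
                          key=lambda e: rd_cost(out_loop_options[e](reconstruction)))
        return best_idc, ext_idc

    def decoder_apply_deployment(reconstruction, syntax, deployments, default_deployment):
        # syntax: the dict produced by a parser such as the sketch after TABLE 1 above.
        if syntax.get("new_filter_deployment_available"):
            idc = syntax["filter_deployment_idc"]
            ext = syntax.get("filter_deployment_ext_idc")
            return deployments[idc](reconstruction, ext)
        return default_deployment(reconstruction)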
A description will now be given of some of the many attendant advantages/features of the present invention, some of which have been mentioned above. For example, one advantage/feature is an apparatus having a video decoder for decoding a picture, wherein the video decoder includes an in-loop filtering structure for performing in-loop filtering of a reconstructed version of the picture, wherein an impact of the in-loop filtering is decoupled from a display picture formed from the reconstructed version of the picture data.
Another advantage/feature is the apparatus having the video decoder as described above, wherein the display picture is not filtered by the in-loop filtering structure.
Still another advantage/feature is the apparatus having the video decoder as described above, wherein the video decoder further includes an out-loop filtering structure, and the display picture is obtained by processing the reconstructed version of the picture using only the out-loop filtering structure from among the out- loop filtering structure and the in-loop filtering structure.
Yet another advantage/feature is the apparatus having the video decoder wherein the video decoder further includes an out-loop filtering structure, and the display picture is obtained by processing the reconstructed version of the picture using only the out-loop filtering structure from among the out-loop filtering structure and the in-loop filtering structure as described above, wherein the out-loop filtering structure is used to generate the display picture from the reconstructed version of the picture.
Still yet another advantage/feature is the apparatus having the video decoder as described above, wherein the in-loop filtering structure comprises a plurality of in- loop filters.
Moreover, another advantage/feature is apparatus having the video decoder wherein the in-loop filtering structure comprises a plurality of in-loop filters as described above, wherein the plurality of in-loop filters included in the in-loop filtering structure are arranged such that an impact of only some of the plurality of in-loop filters are decoupled from the display picture.
Further, another advantage/feature is the apparatus having a video decoder that, in turn, has an in-loop filtering structure and an out-loop filtering structure, the in-loop filtering structure including one or more in-loop filters and the out-loop filtering structure including one or more out-loop filters, and wherein multiple filters from among at least one of the one or more in-loop filters and the one or more out- loop filters are selected and combined to generate at least one of a reference picture and a display picture from an original version of a picture. Also, another advantage/feature is the apparatus having the video decoder as described above, wherein at least one of a number of the multiple filters and one or more types of the multiple filters are determined from one or more high level syntax elements.
Additionally, another advantage/feature is the apparatus having the video decoder as described above, wherein the multiple filters include at least one in-loop filter and at least one out-loop filter, and wherein the at least one in-loop filter is arranged such that an impact of the at least one in-loop filter is decoupled from the display picture.
Moreover, another advantage/feature is the apparatus having the video decoder as described above, wherein the multiple filters include a plurality of in-loop filters, and wherein the plurality of in-loop filters are arranged such that an impact of only some of the plurality of in-loop filters are decoupled from the display picture.

These and other features and advantages of the present principles may be readily ascertained by one of ordinary skill in the pertinent art based on the teachings herein. It is to be understood that the teachings of the present principles may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof.
Most preferably, the teachings of the present principles are implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage unit. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units ("CPU"), a random access memory ("RAM"), and input/output ("I/O") interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.
It is to be further understood that, because some of the constituent system components and methods depicted in the accompanying drawings are preferably implemented in software, the actual connections between the system components or the process function blocks may differ depending upon the manner in which the present principles are programmed. Given the teachings herein, one of ordinary skill in the pertinent art will be able to contemplate these and similar implementations or configurations of the present principles. Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the present principles is not limited to those precise embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present principles. All such changes and modifications are intended to be included within the scope of the present principles as set forth in the appended claims.

Claims

What is claimed is:
1. An apparatus, comprising: a video decoder (600, 900, 1400, 1700) for decoding a picture, wherein said video decoder includes an in-loop filtering structure (601 , 901 , 1401 , 1701) for performing in-loop filtering of a reconstructed version of the picture, and wherein an impact of the in-loop filtering is decoupled from a display picture formed from the reconstructed version of the picture data.
2. The apparatus of claim 1 , wherein the display picture is not filtered by the in-loop filtering structure (600).
3. The apparatus of claim 1 , wherein said video decoder (600, 900,
1400, 1700) further comprises an out-loop filtering structure (602, 902, 1402, 1702), and the display picture is obtained by processing the reconstructed version of the picture using only the out-loop filtering structure from among the out-loop filtering structure and the in-loop filtering structure.
4. The apparatus of claim 3, wherein the out-loop filtering structure is used to generate the display picture from the reconstructed version of the picture.
5. The apparatus of claim 1 , wherein the in-loop filtering structure comprises a plurality of in-loop filters (966, 967, 968, 1466, 1467, 1468, 1766, 1767, 1768).
6. The apparatus of claim 5, wherein the plurality of in-loop filters comprised in the in-loop filtering structure are arranged such that an impact of only some (967, 968, 1767, 1768) of the plurality of in-loop filters are decoupled from the display picture.
7. In a video decoder, a method, comprising: decoding a picture (1915) using an in-loop filtering structure (1920) included in the video decoder, the in-loop filtering structure for performing in-loop filtering of a reconstructed version of the picture, wherein an impact of the in-loop filtering is decoupled from a display picture formed from the reconstructed version of the picture data (1925).
8. The method of claim 7, wherein the display picture is not filtered by the in-loop filtering structure (1925).
9. The method of claim 7, wherein the video decoder further comprises an out-loop filtering structure, and the display picture is obtained by processing the reconstructed version of the picture using only the out-loop filtering structure from among the out-loop filtering structure and the in-loop filtering structure (1925).
10. The method of claim 9, wherein the out-loop filtering structure is used to generate the display picture from the reconstructed version of the picture (1925).
11. The method of claim 7, wherein the in-loop filtering structure comprises a plurality of in-loop filters (1925).
12. The method of claim 11 , wherein the plurality of in-loop filters comprised in the in-loop filtering structure are arranged such that an impact of only some of the plurality of in-loop filters are decoupled from the display picture (1925).
13. An apparatus, comprising: a video decoder (600, 900, 1100, 1400, 1700) having an in-loop filtering structure (601 , 901 , 1101 , 1401 , 1701) and an out-loop filtering structure (602, 902, 1102, 1402, 1702), the in-loop filtering structure comprising one or more in-loop filters (665, 966, 967, 968, 1165, 1466, 1467, 1468, 1766, 1767, 1768) and the out- loop filtering structure comprising one or more out-loop filters(688, 969, 1166, 1167, 1168, 1476, 1477, 1478, 1776, 1777, 1778), and wherein multiple filters from among at least one of the one or more in-loop filters and the one or more out-loop filters are selected and combined to generate at least one of a reference picture and a display picture from an original version of a picture.
14. The apparatus of claim 13, wherein at least one of a number of the multiple filters and one or more types of the multiple filters are determined from one or more high level syntax elements.
15. The apparatus of claim 13, wherein the multiple filters comprise at least one in-loop filter and at least one out-loop filter, and wherein the at least one in-loop filter is arranged such that an impact of the at least one in-loop filter (967, 968, 1767, 1768) is decoupled from the display picture.
16. The apparatus of claim 13, wherein the multiple filters comprise a plurality of in-loop filters, and wherein the plurality of in-loop filters are arranged such that an impact of only some (967, 968, 1767, 1768) of the plurality of in-loop filters are decoupled from the display picture.
17. In a video decoder, a method, comprising: decoding (1915) a picture using at least one filter from among an in-loop filtering structure and an out-loop filtering structure comprised in the video decoder, the in-loop filtering structure having one or more in-loop filters and the out-loop filtering structure having one or more out-loop filters, wherein multiple filters from among at least one of the one or more in-loop filters and the one or more out-loop filters are selected and combined to generate at least one of a reference picture and a display picture from an original version of a picture (1920, 1925).
18. The method of claim 17, wherein at least one of a number of the multiple filters and one or more types of the multiple filters are determined from one or more high level syntax elements (1925).
19. The method of claim 17, wherein the multiple filters comprise at least one in-loop filter and at least one out-loop filter, and wherein the at least one in-loop filter is arranged such that an impact of the at least one in-loop filter is decoupled from the display picture (1925).
20. The method of claim 17, wherein the multiple filters comprise a plurality of in-loop filters, and wherein the plurality of in-loop filters are arranged such that an impact of only some of the plurality of in-loop filters are decoupled from the display picture (1925).
PCT/US2010/001458 2009-05-18 2010-05-18 Methods and apparatus for a generalized filtering structure for video coding and decoding WO2010134973A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17926909P 2009-05-18 2009-05-18
US61/179,269 2009-05-18

Publications (1)

Publication Number Publication Date
WO2010134973A1 true WO2010134973A1 (en) 2010-11-25

Family

ID=42470835

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2010/001458 WO2010134973A1 (en) 2009-05-18 2010-05-18 Methods and apparatus for a generalized filtering structure for video coding and decoding

Country Status (1)

Country Link
WO (1) WO2010134973A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9237351B2 (en) 2012-02-21 2016-01-12 Samsung Electronics Co., Ltd. Encoding/decoding apparatus and method for parallel correction of in-loop pixels based on measured complexity, using video parameter
WO2017201011A1 (en) * 2016-05-16 2017-11-23 Qualcomm Incorporated Confusion of multiple filters in adaptive loop filtering in video coding

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0772365A2 (en) * 1995-11-02 1997-05-07 Matsushita Electric Industrial Co., Ltd. Method and device for filtering a picture signal, and encoding/decoding using the same
US20040012582A1 (en) * 2003-07-10 2004-01-22 Samsung Electronics Co., Ltd. Methods of suppressing ringing artifact of decompressed images
US20080267297A1 (en) * 2007-04-26 2008-10-30 Polycom, Inc. De-blocking filter arrangements
EP2192786A1 (en) * 2008-11-27 2010-06-02 Panasonic Corporation Frequency domain filters for video coding

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0772365A2 (en) * 1995-11-02 1997-05-07 Matsushita Electric Industrial Co., Ltd. Method and device for filtering a picture signal, and encoding/decoding using the same
US20040012582A1 (en) * 2003-07-10 2004-01-22 Samsung Electronics Co., Ltd. Methods of suppressing ringing artifact of decompressed images
US20080267297A1 (en) * 2007-04-26 2008-10-30 Polycom, Inc. De-blocking filter arrangements
EP2192786A1 (en) * 2008-11-27 2010-06-02 Panasonic Corporation Frequency domain filters for video coding

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
LEE Y L ET AL: "Loop-filtering and post-filtering for low bit-rates moving picture coding", IMAGE PROCESSING, 1999. ICIP 99. PROCEEDINGS. 1999 INTERNATIONAL CONFERENCE ON - KOBE, JAPAN 24-28 OCT. 1999, IEEE, PISCATAWAY, NJ, USA LNKD- DOI:10.1109/ICIP.1999.821573, vol. 1, 24 October 1999 (1999-10-24), pages 94 - 98, XP010369215, ISBN: 978-0-7803-5467-8 *
LEE Y-L, PARK H-W, PARK D-S, KIM Y-S: "Loop Filtering for Reducing Blocking Effects and Ringing Effects", ITU-TELECOMMUNICATIONS STANDARDIZATION SECTOR STRUDY GROUP 16 VIDEO CODING EXPERTS GROUP, 21 July 1998 (1998-07-21) - 24 July 1998 (1998-07-24), Whistler, BC, Canada, pages 1 - 8, XP040417462 *
NAKAJIMA Y: "POSTPROCESSING ALGORITHMS FOR NOISE REDUCTION OF MPEG CODED VIDEO", DENSHI JOUHOU TSUUSHIN GAKKAI GIJUTSU KENKYUU HOUKOKU // INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS. TECHNICAL REPORT, DENSHI JOUHOU TSUUSHIN GAKKAI, JP, 1 January 1994 (1994-01-01), pages 45 - 51, XP000613116, ISSN: 0913-5685 *
RICHARDSON IAIN E. G.: "H.264/MPEG-4 PART 10: Inter Prediction", 30 April 2003 (2003-04-30), pages 1 - 3, XP002596717, Retrieved from the Internet <URL:http://www.vcodex.com/files/h264_interpred.pdf> [retrieved on 20100817] *
YU HU ET AL: "Decoder-Friendly Adaptive Deblocking Filter (DF-ADF) Mode Decision in H.264/AVC", CIRCUITS AND SYSTEMS, 2007. ISCAS 2007. IEEE INTERNATIONAL SYMPOSIUM O N, IEEE, PI, 1 May 2007 (2007-05-01), pages 3976 - 3979, XP031182179, ISBN: 978-1-4244-0920-4 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9237351B2 (en) 2012-02-21 2016-01-12 Samsung Electronics Co., Ltd. Encoding/decoding apparatus and method for parallel correction of in-loop pixels based on measured complexity, using video parameter
WO2017201011A1 (en) * 2016-05-16 2017-11-23 Qualcomm Incorporated Confusion of multiple filters in adaptive loop filtering in video coding
CN109076218A (en) * 2016-05-16 2018-12-21 高通股份有限公司 Multiple filters in video coding in adaptive loop filter are obscured
KR20190008230A (en) * 2016-05-16 2019-01-23 퀄컴 인코포레이티드 Confusion of Multiple Filters in Adaptive Loop Filtering in Video Coding
US10419755B2 (en) 2016-05-16 2019-09-17 Qualcomm Incorporated Confusion of multiple filters in adaptive loop filtering in video coding
KR102519780B1 (en) * 2016-05-16 2023-04-07 퀄컴 인코포레이티드 Confusion of multiple filters in adaptive loop filtering in video coding

Similar Documents

Publication Publication Date Title
US11483556B2 (en) Methods and apparatus for collaborative partition coding for region based filters
US9918064B2 (en) Method and apparatus for providing reduced resolution update mode for multi-view video coding
EP2486731B1 (en) Methods and apparatus for adaptive filtering of prediction pixels for chroma components in video encoding and decoding
EP4224853A1 (en) Methods and apparatus for in-loop de-artifact filtering
US9736500B2 (en) Methods and apparatus for spatially varying residue coding
EP2420063B1 (en) Methods and apparatus for filter parameter determination and selection responsive to variable transforms in sparsity-based de-artifact filtering
US20130044814A1 (en) Methods and apparatus for adaptive interpolative intra block encoding and decoding
EP2294827A1 (en) Methods and apparatus for video coding and decoding with reduced bit-depth update mode and reduced chroma sampling update mode
EP2545711A1 (en) Methods and apparatus for a classification-based loop filter
US9294784B2 (en) Method and apparatus for region-based filter parameter selection for de-artifact filtering
US9167270B2 (en) Methods and apparatus for efficient adaptive filtering for video encoders and decoders
WO2010134973A1 (en) Methods and apparatus for a generalized filtering structure for video coding and decoding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10726665

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10726665

Country of ref document: EP

Kind code of ref document: A1