WO2018149995A1 - Filter apparatus and methods - Google Patents

Filter apparatus and methods

Info

Publication number
WO2018149995A1
WO2018149995A1 PCT/EP2018/053939 EP2018053939W
Authority
WO
WIPO (PCT)
Prior art keywords
value
pixel
surrounding
filtered
weights
Prior art date
Application number
PCT/EP2018/053939
Other languages
English (en)
Inventor
Jacob STRÖM
Kenneth Andersson
Per Wennersten
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ)
Publication of WO2018149995A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness

Definitions

  • the present embodiments generally relate to filter apparatus and methods, for example to filter apparatus and methods for video decoding and/or encoding, and in particular to deringing filtering in video decoding and/or encoding.
  • HEVC High Efficiency Video Coding
  • JCT-VC Joint Collaborative Team on Video Coding
  • Spatial prediction is achieved using intra (I) prediction from within the current picture.
  • a picture consisting of only intra coded blocks is referred to as an I-picture.
  • Temporal prediction is achieved using inter (P) or bi-directional inter (B) prediction on block level.
  • HEVC was finalized in 2013.
  • bilateral filtering is widely used in image processing because of its edge-preserving and noise-reducing features.
  • a bilateral filter decides its coefficients based on the contrast of the pixels in addition to the geometric distance.
  • a Gaussian function has usually been used to relate coefficients to the geometric distance and contrast of the pixel values.
  • the weight ω(i, j, k, l) assigned for pixel (k, l) to filter the pixel (i, j) is defined as ω(i, j, k, l) = e^(−((i−k)² + (j−l)²)/(2σ_d²) − (I(i,j) − I(k,l))²/(2σ_r²)), where σ_d is the spatial parameter and σ_r is the range parameter.
  • the bilateral filter is controlled by these two parameters.
  • I(i, j) and I(k, l) are the original pixel values of pixels (i, j) and (k, l) respectively.
  • These pixel values or sample values can be intensity levels, also referred to as luminance values or luma values. However, the pixel values can also be chroma or chrominance values, or any other associated pixel value or sample value.
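The weight definition above is the textbook bilateral filter weight, which can be sketched directly. The function and variable names below are illustrative; the patent's own implementation is fixed-point and LUT-based:

```python
import math

# Classic bilateral weight: the weight assigned to pixel (k, l) when
# filtering pixel (i, j). sigma_d is the spatial parameter, sigma_r the
# range parameter; I is a 2-D array of pixel (luma) values.
def bilateral_weight(i, j, k, l, I, sigma_d, sigma_r):
    spatial = ((i - k) ** 2 + (j - l) ** 2) / (2 * sigma_d ** 2)
    rng = (I[i][j] - I[k][l]) ** 2 / (2 * sigma_r ** 2)
    return math.exp(-spatial - rng)

# In a flat region the range term vanishes, so only distance matters.
I = [[100, 100], [100, 100]]
w_center = bilateral_weight(0, 0, 0, 0, I, sigma_d=1.0, sigma_r=10.0)  # 1.0
w_right = bilateral_weight(0, 0, 0, 1, I, sigma_d=1.0, sigma_r=10.0)   # e**-0.5
```

Pixels that are both spatially close and similar in intensity receive weights near 1.0, while distant or dissimilar pixels are suppressed, which is what gives the filter its edge-preserving behavior.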
  • Rate-Distortion Optimization is part of the video encoding process. It improves coding efficiency by finding the "best" coding parameters. It measures both the number of bits used for each possible decision outcome of the block and the resulting distortion of the block.
  • a deblocking filter (DBF) and a Sample Adaptive Offset (SAO) filter are included in the HEVC standard.
  • DBF deblocking filter
  • SAO Sample Adaptive Offset
  • ALF Adaptive Loop Filter
  • a problem with deploying bilateral filtering in video coding is that it is too complex and lacks sufficient parameter settings and adaptivity.
  • the embodiments disclosed herein relate to further improvements to a filter.
  • a method performed by a decoder and/or encoder, for filtering at least two pixels in a block of pixels, each pixel being associated with a pixel value, wherein a filtered pixel value is calculated from the pixel value and the pixel values of surrounding pixels using a set of weights consisting of a center weight and surrounding weights.
  • the center weight is associated with the pixel to be filtered and each surrounding weight is associated with a surrounding pixel, and where each surrounding weight depends on a pixel value difference between the pixel value of the surrounding pixel associated with the weight and the pixel value of the pixel to be filtered.
  • the method comprises obtaining a center weight value based on a parameter that is constant in said block of pixels.
  • the method comprises calculating a nominator value using at least one of said surrounding weights.
  • the method comprises calculating a denominator value using the sum of all said weights.
  • the method comprises calculating the filtered pixel value using said nominator value and said denominator value.
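The four method steps above can be sketched as follows for a single pixel, assuming a plus-shaped aperture. Floating point is used for clarity where the patent works with integer weights, and all names are illustrative:

```python
# Sketch of the four method steps for one pixel with a plus-shaped aperture.
def filter_pixel(center_value, center_weight, surrounding):
    """surrounding: (weight, pixel_value) pairs for the pixels above,
    below, left and right of the pixel to be filtered."""
    # Step 1: the center weight is obtained from a block-constant
    # parameter (passed in here).
    # Step 2: nominator from the surrounding weights, using the
    # equivalent delta formulation sum(w * (v - center)).
    nominator = sum(w * (v - center_value) for w, v in surrounding)
    # Step 3: denominator is the sum of all weights, center included.
    denominator = center_weight + sum(w for w, _ in surrounding)
    # Step 4: filtered value from nominator and denominator.
    return center_value + nominator / denominator

filtered = filter_pixel(100, 2.0, [(1.0, 104), (1.0, 104), (1.0, 100), (1.0, 100)])
```

The delta form used here is algebraically identical to the plain weighted average (ω_C·I_C + Σ ω_i·I_i) / Σ ω, since the I_C terms cancel out of the sum.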
  • a filter for filtering at least two pixels in a block of pixels, each pixel being associated with a pixel value, wherein a filtered pixel value is calculated from the pixel value and the pixel values of surrounding pixels using a set of weights consisting of a center weight and surrounding weights.
  • the center weight is associated with the pixel to be filtered and each surrounding weight is associated with a surrounding pixel, and where each surrounding weight depends on a pixel value difference between the pixel value of the surrounding pixel associated with the weight and the pixel value of the pixel to be filtered.
  • the filter is operative to obtain a center weight value based on a parameter that is constant in said block of pixels.
  • the filter is operative to calculate a nominator value using at least one of said surrounding weights and calculate a denominator value using the sum of all said weights.
  • the filter is operative to calculate the filtered pixel value using said nominator value and said denominator value.
  • a decoder comprising a modifying means and at least one look-up table, LUT.
  • the modifying means is configured to modify at least two pixels in a block of pixels, each pixel being associated with a pixel value, wherein a filtered pixel value is calculated from the pixel value and the pixel values of surrounding pixels using a set of weights consisting of a center weight and surrounding weights, where the center weight is associated with the pixel to be filtered and each surrounding weight is associated with a surrounding pixel, and where each surrounding weight depends on a pixel value difference between the pixel value of the surrounding pixel associated with the weight and the pixel value of the pixel to be filtered.
  • the at least one LUT stores the set of weights.
  • the decoder is operative to obtain a center weight value based on a parameter that is constant in said block of pixels; calculate a nominator value using at least one of said surrounding weights; calculate a denominator value using the sum of all said weights; and calculate the filtered pixel value using said nominator value and said denominator value.
  • a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method as described herein and defined in the appended claims.
  • a computer program product comprising a computer-readable medium with the computer program as described above.
  • Figures 1 (A) and (B) illustrate the ringing effect on a zoomed original video frame and a zoomed compressed video frame respectively;
  • Figure 2 illustrates an 8x8 transform unit block and the filter aperture for the pixel located at (1, 1);
  • Figure 3 illustrates a plus sign shaped deringing filter aperture;
  • Figure 5 illustrates the steps performed in a filtering method according to an example;
  • Figure 6 illustrates a filter according to an example;
  • Figure 7 illustrates a data processing system in accordance with an example;
  • Figure 8 shows an example of a method according to an embodiment;
  • Figure 9 shows an example of a filter according to an embodiment;
  • Figure 10 shows an example of a decoder according to an embodiment;
  • Figure 11 illustrates schematically a video encoder according to an embodiment;
  • Figure 12 illustrates schematically a video decoder according to an embodiment.
  • the technology can additionally be considered to be embodied entirely within any form of computer- readable memory, such as solid-state memory, magnetic disk, or optical disk containing an appropriate set of computer instructions that would cause a processor to carry out the techniques described herein.
  • Hardware implementation may include or encompass, without limitation, digital signal processor (DSP) hardware, a reduced instruction set processor, hardware (e.g., digital or analog) circuitry including but not limited to application specific integrated circuit(s) (ASIC) and/or field programmable gate array(s) (FPGA(s)), and (where appropriate) state machines capable of performing such functions.
  • DSP digital signal processor
  • ASIC application specific integrated circuit
  • FPGA field programmable gate array
  • a computer is generally understood to comprise one or more processors, one or more processing units, one or more processing modules or one or more controllers, and the terms computer, processor, processing unit, processing module and controller may be employed interchangeably.
  • the functions may be provided by a single dedicated computer, processor, processing unit, processing module or controller, by a single shared computer, processor, processing unit, processing module or controller, or by a plurality of individual computers, processors, processing units, processing modules or controllers, some of which may be shared or distributed.
  • these terms also refer to other hardware capable of performing such functions and/or executing software, such as the example hardware recited above.
  • the filters described herein may be used in any form of user equipment, such as a mobile telephone, tablet, desktop, netbook, multimedia player, video streaming server, set-top box or computer.
  • user equipment UE
  • a UE herein may comprise a UE (in its general sense) capable of operating or at least performing measurements in one or more frequencies, carrier frequencies, component carriers or frequency bands.
  • as a terminal device, it may be a "UE" operating in single- or multi- radio access technology (RAT) or multi-standard mode.
  • RAT radio access technology
  • the general terms "terminal device", "communication device" and "wireless communication device" are used in the following description, and it will be appreciated that such a device may or may not be 'mobile' in the sense that it is carried by a user.
  • the term “terminal device” encompasses any device that is capable of communicating with communication networks that operate according to one or more mobile communication standards, such as the Global System for Mobile communications, GSM, UMTS, Long-Term Evolution, LTE, etc.
  • a UE may comprise a Universal Subscription Identity Module (USIM) on a smart-card or implemented directly in the UE, e.g., as software or as an integrated circuit.
  • USIM Universal Subscription Identity Module
  • the operations described herein may be partly or fully implemented in the USIM or outside of the USIM.
  • an earlier application describes a dedicated deringing filter which introduces deringing filtering into the Future Video Codec (the successor to HEVC).
  • the deringing filter proposed in the earlier application is evolved from a bilateral filter, and proposes some simplifications, and how to adapt the filtering to local parameters in order to improve the filtering performance.
  • the embodiments described herein are concerned with reducing the amount of look-up- tables (LUTs) needed in filters, including for example a deringing filter as described in the earlier application.
  • the embodiments are configured to alter the center weight, thus making it possible to reuse or share the tables without a costly multiplication of the weight for each surrounding pixel.
  • An advantage of the embodiments described herein is that it is possible to keep the complexity of an algorithm the same while reducing the LUT storage by, for example, 5/6 or 83%. This has benefits in hardware implementations, where LUT size and complexity both come at a premium.
  • each pixel uses four or five weights. In some implementations it may be of interest to read these four or five weights at the same time. This would mean that the LUT would have to be implemented not once, but four or five times.
  • the filter according to the embodiments described herein may be implemented in a video decoder and/or a video encoder. It may be implemented in hardware, in software or a combination of hardware and software.
  • the filter may be implemented in, e.g. comprised in, user equipment, such as a mobile telephone, tablet, desktop, netbook, multimedia player, video streaming server, set-top box or computer.
  • this disclosure uses "pixel" and "sample" interchangeably, since they mean the same thing.
  • it likewise uses "pixel value" and "sample value" interchangeably, since they mean the same thing.
  • a video sequence can be divided into pictures, also known as images or frames. Each such frame can be further divided into pixels.
  • each such pixel When displayed on a screen, each such pixel typically has three values associated with it, such as red, green and blue.
  • when coding and decoding video, however, typically another color space is used, where instead of red, green and blue there is one luma or luminance component, such as Y, and two chroma or chrominance components, such as Cb and Cr.
  • Other examples include YUV, ICtCp and Lab.
  • each pixel typically only has one associated pixel value, often a luma value Y. This value is often called pixel value or sample value.
  • each pixel can sometimes have three associated values, Y, Cb and Cr.
  • "pixel value" can mean either Y, Cb or Cr. More often, however, the Cb and Cr values are stored at lower resolutions. In these cases several pixels will share a Cb pixel value. Even in this case, "pixel value" or "sample value" can mean either Y, Cb or Cr.
  • the examples described in the earlier application define a method, performed by a filter, for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value.
  • the method comprises: modifying a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value, wherein the filtering is controlled by two parameters, σ_d and σ_r, and wherein at least one of the parameters σ_d and σ_r depends on at least one of: quantization parameter, quantization scaling matrix, transform width, transform height, picture width, picture height, a magnitude of a negative filter coefficient used as part of inter/intra prediction.
  • the examples described in the earlier application also define a filter, for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value.
  • the filter is configured to: modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value, wherein the filtering is controlled by two parameters, σ_d and σ_r, and wherein at least one of the parameters σ_d and σ_r depends on at least one of: quantization parameter, quantization scaling matrix, transform width, transform height, picture width, picture height, a magnitude of a negative filter coefficient used as part of inter/intra prediction.
  • the examples described in the earlier application also define a filter, for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value.
  • the filter comprises a modifying module for modifying a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value, wherein the filtering is controlled by two parameters, σ_d and σ_r, and wherein at least one of the parameters σ_d and σ_r depends on at least one of: quantization parameter, quantization scaling matrix, transform width, transform height, picture width, picture height, a magnitude of a negative filter coefficient used as part of inter/intra prediction.
  • a decoder that comprises a modifying means, configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value, wherein the filtering is controlled by two parameters, σ_d and σ_r, and wherein at least one of the parameters σ_d and σ_r depends on at least one of: quantization parameter, quantization scaling matrix, transform width, transform height, picture width, picture height, a magnitude of a negative filter coefficient used as part of inter/intra prediction.
  • the examples described in the earlier application also define a computer program for a filter comprising a computer program code which, when executed, causes the filter to: modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value, wherein the filtering is controlled by two parameters, σ_d and σ_r, and wherein at least one of the parameters σ_d and σ_r depends on at least one of: quantization parameter, quantization scaling matrix, transform width, transform height, picture width, picture height, a magnitude of a negative filter coefficient used as part of inter/intra prediction.
  • a further aspect of the examples defined in the earlier application comprise a computer program product comprising a computer program for a filter and a computer readable means on which the computer program for a filter is stored.
  • a bilateral deringing filter with a plus sign-shaped filter aperture is used directly after the inverse transform.
  • An identical filter and identical filtering process is used in the corresponding video encoder and decoder to ensure that there is no drift between the encoder and the decoder.
  • the first example describes a way to remove ringing artifacts by using a deringing filter designed in the earlier application.
  • the deringing filter is evolved from a bilateral filter.
  • each pixel in the reconstructed picture is replaced by a weighted average of itself and its neighbors. For instance, a pixel located at (i, j) will be filtered using its neighboring pixel (k, l).
  • the weight ω(i, j, k, l) is the weight assigned for pixel (k, l) to filter the pixel (i, j), and it is defined, as mentioned earlier, as ω(i, j, k, l) = e^(−((i−k)² + (j−l)²)/(2σ_d²) − (I(i,j) − I(k,l))²/(2σ_r²)).
  • I(i, j) and I(k, l) are the decoded intensity values (pixel values) of pixels (i, j) and (k, l) respectively.
  • σ_d is the spatial parameter
  • σ_r is the range parameter.
  • the bilateral filter is controlled by these two parameters. In this way, the weight of a reference pixel (k, l) to the pixel (i, j) is dependent both on the distance between the pixels and the intensity difference between the pixels. Pixels located closer to the pixel to be filtered, and with a smaller intensity difference to it, will thus have larger weight than other more distant (spatially or in intensity) pixels.
  • σ_d and σ_r are constant values in the block to be filtered.
  • the deringing filter, in this example, is applied to each TU block after the inverse transform in an encoder or a decoder, as shown in Figure 2, which shows an example of an 8x8 block.
  • the filter may also be used during R-D optimization in the encoder.
  • the identical deringing filter is also applied to each TU block after the inverse transform in the corresponding video decoder.
  • each pixel in the transform unit is filtered using its direct neighboring pixels only, as shown in Figure 3.
  • the filter has a plus sign shaped filter aperture centered at the pixel to be filtered.
  • a filter as described in the embodiments below, or the examples above, may be implemented in a video encoder and a video decoder. It may be implemented in hardware, in software or a combination of hardware and software.
  • the filter may be implemented in, e.g. comprised in, user equipment, such as a mobile telephone, tablet, desktop, netbook, multimedia player, video streaming server, set-top box or a computer.
  • a filter, e.g. a deringing filter apparatus and method, reuses or shares at least one look-up table, LUT, for both inter and intra coding, and for the different Transform Unit, TU, sizes.
  • the embodiments are configured to alter the center weight, thus making it possible to reuse or share the tables without a costly multiplication of the weight for each surrounding pixel.
  • Figure 8 shows an example of a method according to an embodiment, performed by a decoder and/or encoder, for filtering at least two pixels in a block of pixels, each pixel being associated with a pixel value.
  • a filtered pixel value is calculated from the pixel value and the pixel values of surrounding pixels using a set of weights consisting of a center weight and surrounding weights (for example surrounding weights for the surrounding pixels directly above, below, left and right of the pixel to be filtered, respectively, for example in embodiments in which a plus shaped filter is adopted). It is noted that other examples of surrounding weights may also be used.
  • the center weight is associated with the pixel to be filtered and each surrounding weight is associated with a surrounding pixel, and where each surrounding weight depends on a pixel value difference between the pixel value of the surrounding pixel associated with the weight and the pixel value of the pixel to be filtered.
  • the method comprises obtaining the center weight value based on a parameter that is constant in said block of pixels, step 801, further details of which will be described in examples below.
  • the method comprises calculating a nominator value using at least one of said surrounding weights, step 803, further details of which will be described in examples below.
  • the method comprises calculating a denominator value using the sum of all said weights, step 805, further details of which will also be described in examples below.
  • the method comprises calculating the filtered pixel value using said nominator value and said denominator value, step 807.
  • the method has the advantage of enabling a set of weights to be shared between intra coding and inter coding, for example by adjusting the center weight value of the center pixel.
  • the center weight value is obtained using the transform unit size (TU size) of the filtered block.
  • the center weight value may be obtained using the minimum of the transform unit width (TUwidth) and transform unit height (TUheight).
  • the center weight value is obtained as the first entry of a centerweight look-up table if min(TUwidth, TUheight) equals a first value, and as the second entry of said centerweight look-up table if min(TUwidth, TUheight) equals a second value. Further entries may also be provided in the centerweight look-up table.
  • a centerweight table may therefore comprise part of a centerweight table as described in further detail later in the application, repeated below as Table 1:
  • the look-up table value 100000 is used in this example.
  • the first centerweight look-up table may comprise just the values 100000, 124720 and 302075 of Table 1 above stored in memory.
  • a first centerweight look-up table is used if the block is an intra block and a second centerweight look-up table is used if the block is an inter block.
  • the first centerweight look-up table may comprise the values 100000, 124720 and 302075 in the example of Table 1 above, which is used for intra blocks, with the second centerweight look-up table comprising the values 174564, 302075 and 6275307 of Table 1 above, which are used for inter blocks.
  • the center weight value may therefore be obtained from a look-up table (LUT), and where for example different look-up tables are used for different transform unit sizes.
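A minimal sketch of this table selection, using the centerweight values quoted from Table 1. The mapping from min(TUwidth, TUheight) to a table index (TU sizes 4, 8 and 16 here) is an assumption for illustration:

```python
# Center weight selection from small per-mode look-up tables, indexed by
# min(TU width, TU height). The table values are the ones quoted from
# Table 1; the mapping of TU sizes 4/8/16 to indices is an assumption.
CENTERWEIGHT_INTRA = [100000, 124720, 302075]
CENTERWEIGHT_INTER = [174564, 302075, 6275307]

def center_weight(tu_width, tu_height, is_intra):
    size_index = {4: 0, 8: 1, 16: 2}[min(tu_width, tu_height)]
    table = CENTERWEIGHT_INTRA if is_intra else CENTERWEIGHT_INTER
    return table[size_index]

w_intra = center_weight(8, 16, is_intra=True)    # second intra entry
w_inter = center_weight(4, 4, is_intra=False)    # first inter entry
```

Because only the six center weights differ between modes and sizes, the (much larger) table of surrounding weights can be shared across intra and inter blocks and across TU sizes.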
  • the center weight value is obtained by multiplying a first weight value ω_C by a multiplication factor that is based on a transform unit, TU, size.
  • the denominator value in the method of Figure 8 is calculated according to ω_C + ω_A + ω_B + ω_L + ω_R, wherein ω_C is the obtained center weight and ω_A, ω_B, ω_L and ω_R are the surrounding weights for the surrounding pixels directly above, below, left and right of the pixel to be filtered, respectively.
  • the nominator value in the method of Figure 8 is calculated according to ω_C·I_C + ω_L·I_L + ω_R·I_R + ω_A·I_A + ω_B·I_B, wherein I_C is the pixel value of the pixel to be filtered, and wherein I_L, I_R, I_A and I_B are the pixel values associated with the surrounding pixels directly to the left, right, above and below of the pixel to be filtered respectively.
  • the nominator value in the method of Figure 8 is calculated according to ω_L·ΔI_L + ω_R·ΔI_R + ω_A·ΔI_A + ω_B·ΔI_B,
  • ΔI_L is the difference I_L − I_C between the pixel value I_L of the surrounding pixel immediately to the left and the pixel value I_C of the pixel to be filtered,
  • ΔI_R is the difference I_R − I_C between the pixel value I_R of the surrounding pixel immediately to the right and the pixel value I_C of the pixel to be filtered,
  • ΔI_A is the difference I_A − I_C between the pixel value I_A of the surrounding pixel immediately above and the pixel value I_C of the pixel to be filtered,
  • ΔI_B is the difference I_B − I_C between the pixel value I_B of the surrounding pixel immediately below and the pixel value I_C of the pixel to be filtered.
  • calculating the filtered pixel value using said nominator value and said denominator value comprises calculating nominator_value / denominator_value.
  • the quotient nominator_value / denominator_value is calculated as t div b, where t is based on the nominator value, b equals the denominator value, and div performs integer division.
  • the integer division t div b is calculated as a multiplication (t * LUT1) >> k, where LUT1 is a look-up value based on the denominator value and >> k performs a right bit shift by k positions.
  • t equals nominator_value + (f >> 1), wherein f is based on the denominator value.
  • f equals the denominator value.
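The division-by-multiplication scheme described above can be sketched as follows. The constants K and MAX_DENOM and the table construction (LUT1[b] ≈ 2^K / b) are illustrative assumptions, not the patent's actual table:

```python
# Integer division t div b replaced by a multiply and right shift:
# (t * LUT1[b]) >> K, with LUT1[b] ~= 2**K / b (rounded).
K = 16
MAX_DENOM = 64
LUT1 = [0] + [((1 << K) + b // 2) // b for b in range(1, MAX_DENOM + 1)]

def div_approx(t, b):
    return (t * LUT1[b]) >> K

# Adding half the denominator to the nominator first turns the truncating
# division into a rounding one, as in t = nominator_value + (f >> 1).
nominator_value, denominator_value = 1000, 7
t = nominator_value + (denominator_value >> 1)
q = div_approx(t, denominator_value)
```

Replacing a true divider with one multiply and one shift is a common hardware trick; the table only needs one entry per possible denominator value.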
  • weights relating to intra coding operations are stored in a shared LUT, i.e. weights ω_intra,
  • with approximated weights relating to inter coding operations being derived therefrom, i.e. weights ω_inter.
  • weights relating to inter coding operations are stored in a shared LUT, and approximated weights relating to intra coding operations being derived therefrom, with the approximation functions being adapted accordingly.
  • weights, i.e. coefficients, that are zero.
  • the value of the weight in equation (1) only depends on the absolute value of the difference between two luma values (or intensity values). If I(i, j) varies between 0 and 1023 and I(k, l) also varies between 0 and 1023, there are only 1024 possible values of the absolute value of the difference.
  • the intensity dimension of the LUT does not need to be larger than 1024. However, it can be made shorter than this.
  • the resulting weight ω is so small that it will be quantized to zero.
  • the first LUT (LUTFIRST) stores the value e^(−((i−k)² + (j−l)²)/(2σ_d²)), which depends on (i−k)² + (j−l)² and σ_d.
  • (i−k)² + (j−l)² is always equal to 1 for every pixel except the middle pixel.
  • the expression therefore only depends on σ_d for these pixels, and σ_d depends in turn only on the TU size, of which there are a predetermined number of sizes, for example three TU sizes.
  • ΔI_L is used to denote the difference in intensity between the center pixel and the pixel to the left
  • ω_L is used to denote the weight for the left pixel
  • ω_R is used to denote the weight for the right pixel
  • and ω_A and ω_B the weights for the pixels above and below.
  • I_C has been used to denote the intensity in the center pixel, and so forth.
  • using Equation 14, this can be rewritten.
  • Equation 18 is the same as Equation 15, but with e^g replaced by e^f.
  • when the center weight is multiplied by a value, it is the same as exchanging the g value, which is the same as exchanging the σ_d value.
  • the center weight is always 1.0, so by selecting a different value it is possible to have the same effect as exchanging the σ_d.
  • Equation 16 gives the same result as
  • a scaled table value c · LUT(x), where LUT(x) can be for instance e^(−x²/(2σ_r²)).
  • σ_d is the value used to create the LUT and σ_d' is the desired value of σ_d.
  • the largest possible σ_d is used when creating the table.
  • the smallest TU size is used, and intra filtering which gives the strongest filtering (largest coefficients).
  • the embodiment may use: 4
  • embodiments can change the center weight ω_C per pixel instead of per block. This can be done, for example, in order to vary filtering strength between different pixels in the same block.
  • the center weight can then be multiplied by a pixel-dependent factor in order to make the filtering stronger or weaker. This is much less complex than having to multiply every non-center weight (of which there are four) by a pixel-dependent factor.
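A small numeric sketch of this effect, with purely illustrative weights: scaling only the center weight moves the result toward the unfiltered value, i.e. weakens the filtering, without touching the four surrounding weights:

```python
# Filtered value in the delta formulation; only w_c (the center weight)
# is varied between the two calls below.
def filtered_value(I_c, w_c, neighbors):
    nom = sum(w * (v - I_c) for w, v in neighbors)
    den = w_c + sum(w for w, _ in neighbors)
    return I_c + nom / den

# Four surrounding pixels of value 110, weight 1.0; center pixel 100.
neighbors = [(1.0, 110)] * 4
strong = filtered_value(100, 1.0, neighbors)    # small center weight
weak = filtered_value(100, 16.0, neighbors)     # larger center weight
```

With the small center weight the result is pulled well toward the neighbors (108.0), while the larger center weight keeps it near the original value (102.0), so one multiplication on the center weight adjusts the overall strength.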
  • the center weight equals 6275307, which is almost 63 times larger than the smallest value of 100000. This can prove to be an issue in terms of complexity.
  • the accuracy of the filtered value will go down, but only to the level used for intra.
  • Equation 16 can be written
  • LUT(|ΔI_L|) is the LUT entry giving the weight for the left pixel
  • LUT(|ΔI_R|) is ditto for the right pixel etc. Shifting all coefficients two steps (which is equivalent to dividing by four) would give the following (Equation 20):
  • I_L·LUT(|ΔI_L|) + I_R·LUT(|ΔI_R|) + I_A·LUT(|ΔI_A|) + I_B·LUT(|ΔI_B|) can never overflow, since the maximum value it can get is 47539 · 1023 + 47539 · 1023 + 47539 · 1023 + 47539 · 1023.
  • the number of bits per entry in the LUT is selected to be sufficiently large to keep most of the BD-rate gain of doing the bilateral filtering, but small enough in order not to waste computational resources.
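A rough numeric check of the shifting argument, using the weight maxima quoted in the text (47539 for a surrounding weight and 6275307 for the largest inter center weight) with 10-bit pixel values; treating these as the operating maxima of the accumulator is an assumption here:

```python
# Worst-case accumulator values for the weighted sum, before and after
# shifting every coefficient right by two (i.e. dividing by four).
INT32_MAX = 2**31 - 1
MAX_PIXEL = 1023                      # 10-bit video
w_center, w_side = 6275307, 47539     # maxima quoted in the text

full_sum = w_center * MAX_PIXEL + 4 * w_side * MAX_PIXEL
shifted_sum = (w_center >> 2) * MAX_PIXEL + 4 * (w_side >> 2) * MAX_PIXEL

overflows = full_sum > INT32_MAX      # unshifted sum exceeds 32 bits
fits = shifted_sum <= INT32_MAX       # shifted sum fits in a signed int32
```

This is why shifting the coefficients matters in practice: the large inter center weight alone pushes the unshifted accumulator past a signed 32-bit range, while the shifted version stays within it.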
  • to control the filter strength, the center pixel value can be increased or decreased according to an offset, where the offset can be coded and provided in a video/image coded bit stream.
  • the offset can be provided in SPS (sequence parameter set) or PPS (picture parameter set) or slice header or for a block.
  • the offset may be derived as the difference between the used center pixel value and a fixed center pixel value (for example as given above for intra or inter blocks) and may then be encoded by some entropy coding method (e.g. VLC or CABAC).
  • the offset may be decoded by some entropy decoding method (e.g. VLC or CABAC) and the center pixel value may then be derived by adding the fixed center pixel value to the decoded offset.
  • the offset can be specific for intra and inter blocks. The offset can also be specific for different transform block sizes.
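The offset signaling above reduces, on the decoder side, to adding the decoded offset to a fixed default. A minimal sketch (the fixed values are placeholders, and real entropy coding such as VLC/CABAC is omitted):

```python
# Hypothetical fixed center pixel values per block type (placeholders,
# not the values used in the actual codec).
FIXED_CENTER = {"intra": 1.0, "inter": 2.0}

def encode_offset(used_center, block_type):
    """Encoder: offset = value actually used minus the fixed default."""
    return used_center - FIXED_CENTER[block_type]

def decode_center(offset, block_type):
    """Decoder: fixed default plus the decoded offset."""
    return FIXED_CENTER[block_type] + offset
```

The round trip encode/decode recovers the center value the encoder actually used.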
  • FIG. 9 shows an example of a filter 900 according to an embodiment, whereby the filter is implemented as a data processing system.
  • the data processing system includes at least one processor 901 that is coupled to a network interface 905 via an interconnect.
  • the at least one processor 901 is also coupled to a memory 903 via the interconnect.
  • the memory 903 can be implemented by a hard disk drive, flash memory, or read-only memory and stores computer-readable instructions.
  • the at least one processor 901 executes the computer-readable instructions and implements the functionality described in the embodiments above.
  • the network interface 905 enables the data processing system 900 to communicate with other nodes in a network.
  • Alternative examples may include additional components responsible for providing additional functionality, including any functionality described above and/or any functionality necessary to support the solution described herein.
  • the filter 900 may be operative to filter at least two pixels in a block of pixels, each pixel being associated with a pixel value, wherein a filtered pixel value is calculated from the pixel value and the pixel values of surrounding pixels using a set of weights consisting of a center weight and surrounding weights, and wherein the center weight is associated with the pixel to be filtered and each surrounding weight is associated with a surrounding pixel, and where each surrounding weight depends on a pixel value difference between the pixel value of the surrounding pixel associated with the weight and the pixel value of the pixel to be filtered.
  • the filter 900 is operative to obtain the center weight value based on a parameter that is constant in said block of pixels.
  • the filter 900 is operative to calculate a numerator value using at least one of said surrounding weights.
  • the filter 900 is operative to calculate a denominator value using the sum of all said weights.
  • the filter 900 is operative to calculate the filtered pixel value using said numerator value and said denominator value.
  • the filter 900 may be further operative to perform filtering operations as described herein, and defined in the appended claims.
  • FIG 10 shows an example of a decoder 1000 according to another embodiment.
  • the decoder 1000 comprises a modifying means 1001 , for example a filter as described herein, configured to modify at least two pixels in a block of pixels, each pixel being associated with a pixel value, wherein a filtered pixel value is calculated from the pixel value and the pixel values of surrounding pixels using a set of weights consisting of a center weight and surrounding weights, where the center weight is associated with the pixel to be filtered and each surrounding weight is associated with a surrounding pixel, and where each surrounding weight depends on a pixel value difference between the pixel value of the surrounding pixel associated with the weight and the pixel value of the pixel to be filtered.
  • the decoder 1000 comprises at least one LUT 1003 for storing the set of weights.
  • the decoder 1000 is operative to: obtain a center weight value based on a parameter that is constant in said block of pixels; calculate a numerator value using at least one of said surrounding weights; calculate a denominator value using the sum of all said weights; and calculate the filtered pixel value using said numerator value and said denominator value.
  • the decoder 1000 may be further operative to perform a filtering method as described herein, and as defined in the appended claims.
  • the parameters σ_d and σ_r described herein may also depend on at least one of: quantization parameter, quantization scaling matrix, transform width, transform height, picture width, picture height, a magnitude of a negative filter coefficient used as part of inter/intra prediction.
  • the embodiments described herein provide an improved filter for video coding and/or decoding.
  • Figure 11 is a schematic block diagram of a video encoder 40 according to an embodiment.
  • a current sample block also referred to as pixel block or block of pixels, is predicted by performing a motion estimation by a motion estimator 50 from already encoded and reconstructed sample block(s) in the same picture and/or in reference picture(s).
  • the result of the motion estimation is a motion vector in the case of inter prediction.
  • the motion vector is utilized by a motion compensator 50 for outputting an inter prediction of the sample block.
  • An intra predictor 49 computes an intra prediction of the current sample block.
  • the outputs from the motion estimator/compensator 50 and the intra predictor 49 are input in a selector 51 that either selects intra prediction or inter prediction for the current sample block.
  • the output from the selector 51 is input to an error calculator in the form of an adder 41 that also receives the sample values of the current sample block.
  • the adder 41 calculates and outputs a residual error as the difference in sample values between the sample block and its prediction, i.e., prediction block.
  • the error is transformed in a transformer 42, such as by a discrete cosine transform (DCT), and quantized by a quantizer 43, followed by coding in an encoder 44, such as by an entropy encoder.
  • the estimated motion vector is brought to the encoder 44 for generating the coded representation of the current sample block.
  • the transformed and quantized residual error for the current sample block is also provided to an inverse quantizer 45 and inverse transformer 46 to reconstruct the residual error.
  • This residual error is added by an adder 47 to the prediction output from the motion compensator 50 or the intra predictor 49 to create a reconstructed sample block that can be used as prediction block in the prediction and coding of other sample blocks.
  • This reconstructed sample block is first processed by a device 100 for filtering of a picture according to the embodiments in order to suppress deringing artifacts.
  • the modified, i.e., filtered, reconstructed sample block is then temporarily stored in a Decoded Picture Buffer (DPB) 48, where it is available to the intra predictor 49 and the motion estimator/compensator 50.
  • the device 100 is preferably instead arranged between the inverse transformer 46 and the adder 47.
  • An embodiment relates to a video decoder comprising a device for filtering of a picture according to the embodiments.
  • Figure 12 is a schematic block diagram of a video decoder 60 comprising a device 100 for filtering of a picture according to the embodiments.
  • the video decoder 60 comprises a decoder 61, such as an entropy decoder, for decoding a bit stream comprising an encoded representation of a sample block to get a quantized and transformed residual error.
  • the residual error is dequantized in an inverse quantizer 62 and inverse transformed by an inverse transformer 63 to get a decoded residual error.
  • the decoded residual error is added in an adder 64 to the sample prediction values of a prediction block.
  • the prediction block is determined by a motion estimator/compensator 67 or an intra predictor 66, depending on whether inter or intra prediction is performed.
  • a selector 68 is thereby interconnected to the adder 64 and the motion estimator/compensator 67 and the intra predictor 66.
  • the resulting decoded sample block output from the adder 64 is input to a device 100 for filtering of a picture in order to suppress and combat any ringing artifacts.
  • the filtered sample block enters a DPB 65 and can be used as prediction block for subsequently decoded sample blocks.
  • the DPB 65 is thereby connected to the motion estimator/compensator 67 to make the stored sample blocks available to the motion estimator/compensator 67.
  • the output from the adder 64 is preferably also input to the intra predictor 66 to be used as an unfiltered prediction block.
  • the filtered sample block is furthermore output from the video decoder 60, such as output for display on a screen.
  • the device 100 is preferably instead arranged between the inverse transformer 63 and the adder 64.
  • One idea of embodiments of the present invention is to introduce a deringing filter into the Future Video Codec, i.e., the successor to HEVC.
  • the following subject matter is also described in relation to examples of the earlier application.
  • the output filtered pixel intensity I_D(i,j), as mentioned earlier, is defined using I_D(i,j) = Σ_{k,l} I(k,l)·ω(i,j,k,l) / Σ_{k,l} ω(i,j,k,l).
  • all possible weights (coefficients) of the proposed deringing filter are calculated and stored in a two-dimensional look-up table (LUT).
  • the LUT can, for instance, use the spatial distance and the intensity difference between the pixel to be filtered and reference pixels as indices of the LUT.
  • the filter aperture is a plus shape ("+"), i.e., the pixel to be filtered and its four immediate neighbors (left, right, above, below).
  • LUT: one-dimensional lookup table
  • w_d: the weight dependent on the spatial distance to the current pixel
  • w_r: the weight dependent on closeness in pixel value
  • the LUT could be optimized based on some error metric (SSD, SSIM) or according to human vision.
  • one could also have one LUT for weights vertically above or below the current pixel and another LUT for weights horizontally left or right of the current pixel.
  • a deringing filter with a rectangular shaped filter aperture is used in the video encoder's R-D optimization process.
  • the same filter is also used in the corresponding video decoder.
  • each pixel is filtered using its neighboring pixels within an M by N rectangular-shaped filter aperture centered at the pixel to be filtered, as shown in Figure 4.
  • the same deringing filter as in the first example is used.
  • the deringing filter according to the third example of the earlier application is used after prediction and transform have been performed for an entire frame or part of a frame.
  • the same filter is also used in the corresponding video decoder.
  • the third example is the same as the first or second example, except that the filtering is not done right after the inverse transform. Instead the proposed filter is applied to the reconstructed picture in both encoder and decoder. On the one hand this could lead to worse performance since filtered pixels will not be used for intra prediction, but on the other hand the difference is likely very small, and existing filters such as SAO and deblocking are currently placed at this stage of the encoder/decoder.
  • σ_d and/or σ_r are related to TU size.
  • σ_d and σ_r can be calculated using functions of the form (e.g. polynomial functions): σ_d = f_1(TU size) and σ_r = f_2(TU size)
  • a preferred example is to have different functions f_1 ≠ f_2.
  • σ_d can be separate for filter coefficients vertically and horizontally, so that σ_d,ver, σ_d,hor and σ_r,ver, σ_r,hor are each a function of the same form (e.g. a polynomial function):
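A minimal sketch of such TU-size-dependent parameters (the 0.92/0.025 coefficients are taken from an example given later in this document; the directional variant is an assumed convention, not from the text):

```python
def sigma_d(tu_width, tu_height):
    """First-order polynomial of the TU size, as in the later example:
    sigma_d = 0.92 - min(width, height) * 0.025."""
    return 0.92 - min(tu_width, tu_height) * 0.025

def sigma_d_directional(tu_width, tu_height):
    """Separate vertical/horizontal spatial parameters; here each
    direction uses the transform dimension it filters across
    (an assumed convention)."""
    return (0.92 - tu_height * 0.025,   # sigma_d,ver
            0.92 - tu_width * 0.025)    # sigma_d,hor
```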
  • σ_d and σ_r are related to the QP value.
  • the QP mentioned here relates to the coarseness of the quantization of transform coefficients.
  • the QP can correspond to a picture or slice QP or even a locally used QP, i.e. QP for TU block.
  • the QP can be defined differently in different standards, so that the QP in one standard does not correspond to the QP in another standard.
  • in JEM, six steps of QP change double the quantization step. This could be different in a final version of H.266, where steps could be finer or coarser and the range could be extended beyond 51.
  • the range parameter σ_r is a polynomial model, for example a first-order model, of the QP.
  • another approach is to define a table with an entry for each QP, where each entry relates to the reconstruction level of at least one transform coefficient quantized with that QP to 1.
  • a table of σ_d and/or a table of σ_r is created where each entry, i.e., QP value, relates to the reconstruction level, i.e., the pixel value after inverse transform and inverse quantization, for one transform coefficient quantized with that QP to 1, e.g., the smallest possible value a quantized transform coefficient can have.
  • This reconstruction level indicates the smallest pixel value change that can originate from a true signal. Changes smaller than half of this value can be regarded as coding noise that the deringing filter should remove.
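A sketch of such a per-QP table, using the common approximation that the quantization step size doubles every six QP steps (Qstep ≈ 2^((QP−4)/6)); the 0.5 factor reflects the "half of this value" rule above, and treating the step size itself as the reconstruction level is a simplification:

```python
def quant_step(qp):
    """Approximate quantization step size: doubles every 6 QP steps."""
    return 2 ** ((qp - 4) / 6)

def sigma_r_table(max_qp=51, scale=0.5):
    """Per-QP range parameter: pixel-value changes smaller than half
    the smallest reconstructable level are treated as coding noise."""
    return [scale * quant_step(qp) for qp in range(max_qp + 1)]
```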
  • HEVC uses by default a uniform reconstruction quantization (URQ) scheme that quantizes frequencies equally.
  • HEVC has the option of using quantization scaling matrices, also referred to as scaling lists, either default ones, or quantization scaling matrices that are signaled as scaling list data in the sequence parameter set (SPS) or picture parameter set (PPS).
  • scaling matrices are typically only specified for 4x4 and 8x8 matrices.
  • for larger transforms, the signaled 8x8 matrix is applied by having 2x2 and 4x4 blocks of coefficients share the same scaling value, except at the DC positions.
  • a scaling matrix with individual scaling factors for the respective transform coefficients can be used to achieve a different quantization effect per transform coefficient, by scaling the transform coefficients individually with their respective scaling factors as part of the quantization. This enables, for example, the quantization effect to be stronger for higher-frequency transform coefficients than for lower-frequency transform coefficients.
  • default scaling matrices are defined for each transform size and can be invoked by flags in the SPS and/or the PPS. Scaling matrices also exist in H.264. In HEVC it is also possible to define one's own scaling matrices in the SPS or PPS, specifically for each combination of color component, transform size and prediction type (intra or inter mode).
  • deringing filtering is performed for at least reconstruction sample values from one transform coefficient, using the corresponding scaling factor, as the QP, to determine σ_d and/or σ_r.
  • This could be performed before adding the intra/inter prediction or after adding the intra/inter prediction.
  • another, less complex, approach would be to use the maximum or minimum scaling factor, as the QP, to determine σ_d and/or σ_r.
  • the size of the filter can also be dependent on the QP, so that the filter is larger for large QPs than for small QPs.
  • the width and/or the height of the filter kernel of the deringing filter is defined for each QP.
  • Another example is to use a first width and/or a first height of the filter kernel for QP values equal or smaller than a threshold and a second, different width and/or a second, different height for QP values larger than a threshold.
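The QP-threshold variant above can be sketched as follows (the threshold and the kernel sizes are illustrative assumptions):

```python
def kernel_size(qp, threshold=30, small=(3, 3), large=(5, 5)):
    """Pick the filter kernel (width, height) from the QP: a larger
    kernel, i.e. stronger filtering, for QPs above the threshold."""
    return small if qp <= threshold else large
```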
  • σ_d and σ_r are related to video resolution.
  • σ_d and σ_r can be functions of the form: σ_d = f_5(frame diagonal) and σ_r = f_6(frame diagonal)
  • the size of the filter can also be dependent on the size of the frame. If both σ_d and σ_r are derived based on the frame diagonal, a preferred example is to have different functions f_5 ≠ f_6. Small resolutions can contain sharper texture than large resolutions, which can cause more ringing when coding small resolutions. Accordingly, at least one of the spatial parameter and the range parameter can be set such that stronger deringing filtering is applied for small resolutions as compared to large resolutions.
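A sketch of resolution-dependent parameters with two different functions f_5 ≠ f_6 of the frame diagonal; the constants are illustrative assumptions, chosen only so that smaller resolutions get stronger filtering:

```python
import math

def frame_diagonal(width, height):
    return math.hypot(width, height)

def sigma_d_from_resolution(width, height):
    """f_5: a smaller diagonal gives a larger sigma_d (stronger filtering)."""
    return max(0.3, 1.2 - frame_diagonal(width, height) / 4000.0)

def sigma_r_from_resolution(width, height):
    """f_6: a different function of the same diagonal (f_5 != f_6)."""
    return max(1.0, 6.0 - frame_diagonal(width, height) / 1000.0)
```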
  • σ_d and σ_r are related to QP, TU block size, video resolution and other video properties.
  • an example may comprise example 1 combined with the function σ_d = 0.92 − (TU block width) · 0.025
  • the de-ringing filter is applied if an inter prediction is interpolated (e.g. not integer-pixel motion), or the intra prediction is predicted from reference samples in a specific direction (e.g. non-DC), or the transform block has non-zero transform coefficients.
  • De-ringing can be applied directly after intra/inter prediction to improve the accuracy of the prediction signal or directly after the transform on residual samples to remove transform effects or on reconstructed samples (after addition of intra/inter prediction and residual) to remove both ringing effects from prediction and transform or both on intra/inter prediction and residual or reconstruction.
  • the filter weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size can be set individually for intra prediction mode and/or inter prediction mode.
  • the filter weights and/or filter size can be different in vertical and horizontal direction depending on intra prediction mode or interpolation filter used for inter prediction. For example, if close to horizontal intra prediction is performed the weights could be smaller for the horizontal direction than the vertical direction and for close to vertical intra prediction weights could be smaller for the vertical direction than the horizontal direction. If sub-pel interpolation with an interpolation filter with negative filter coefficients only is applied in the vertical direction the filter weights could be smaller in the horizontal direction than in the vertical direction and if sub-pel interpolation filter with negative filter coefficients only is applied in the horizontal direction the filter weights could be smaller in the vertical direction than in the horizontal direction.
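The direction-dependent weighting above can be sketched by mapping an intra prediction angle to per-direction spatial parameters; the angle convention, the tolerance and the numeric values are all assumptions for illustration:

```python
def directional_sigmas(intra_angle_deg, base=0.8, reduced=0.4, tol=22.5):
    """Return (sigma_d_hor, sigma_d_ver). Angle 0 = horizontal
    prediction, 90 = vertical (hypothetical convention); the weights
    across the prediction direction are reduced."""
    a = intra_angle_deg % 180
    if a <= tol or a >= 180 - tol:        # near-horizontal prediction
        return reduced, base
    if abs(a - 90) <= tol:                # near-vertical prediction
        return base, reduced
    return base, base
```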
  • the filter weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size can depend on the position of non-zero transform coefficients.
  • the filter weights and/or filter size can be different in the vertical and horizontal directions depending on non-zero transform coefficient positions. For example, if non-zero transform coefficients only exist in the vertical direction at the lowest frequency in the horizontal direction, the filter weights can be smaller in the horizontal direction than in the vertical direction. Alternatively, the filter is only applied in the vertical direction. Similarly, if non-zero transform coefficients only exist in the horizontal direction at the lowest frequency in the vertical direction, the filter weights can be smaller in the vertical direction than in the horizontal direction. Alternatively, the filter is only applied in the horizontal direction.
  • the filter weights and/or filter size can also be dependent on existence of non-zero transform coefficients above a certain frequency.
  • the filter weights can be smaller if only low frequency non-zero transform coefficients exist than when high frequency non-zero transform coefficients exist.
  • the filter weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size can be different depending on the transform type.
  • the type of transform can refer to transform skip, KLT-like transforms, DCT-like transforms, DST transforms, non-separable 2D transforms, rotational transforms and combinations of those.
  • the bilateral filter could be applied only to fast transforms, with the weight equal to 0 for all other transform types. Some types of transforms can require smaller weights than others, since they cause less ringing than other transforms.
  • the filtering may be implemented as a differential filter whose output is clipped (Clip) to be larger than or equal to a MIN value and less than or equal to a MAX value, and added to the pixel value, instead of using a smoothing filter kernel like the Gaussian.
  • I_D(i,j) = I(i,j) + s · Clip(MIN, MAX, Σ_{k,l} I(k,l) · ω(i,j,k,l))   (3)
  • the differential filter can for example be designed as the difference between a dirac function and a Gaussian filter kernel.
  • a sign (s) can optionally also be used to make the filtering enhance edges rather than smooth edges, if that is desired for some cases.
  • the MAX and MIN value can be a function of other parameters as discussed in other examples.
  • the usage of a clipping function can be omitted, but it allows extra freedom to limit the amount of filtering, enabling the use of a stronger bilateral filter while limiting how much it is allowed to change the pixel value.
  • I_D(i,j) = I(i,j) + s · (Clip(MIN_ver, MAX_ver, Σ_{k,l} I(k,l) · ω_ver(i,j,k,l)) + Clip(MIN_hor, MAX_hor, Σ_{k,l} I(k,l) · ω_hor(i,j,k,l)))
  • MAX_hor, MAX_ver, MIN_hor and MIN_ver can be a function of other parameters as discussed in other examples.
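A sketch of the clipped differential filtering of Equation 3 in its non-separable form; the differential kernel here (dirac minus the neighbor average, scaled by k) and the clip bounds are illustrative assumptions:

```python
def clip(lo, hi, x):
    return max(lo, min(hi, x))

def differential_filter(center, neighbors, s=1, lo=-2.0, hi=2.0, k=0.25):
    """The differential response is clipped to [lo, hi], scaled by the
    sign s and added back to the pixel. s = +1 smooths; s = -1 would
    enhance edges instead."""
    response = k * (sum(neighbors) / len(neighbors) - center)
    return center + s * clip(lo, hi, response)
```

With the bounds above, a pixel can never move by more than 2 regardless of how strong the underlying kernel response is.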
  • one aspect is to keep the size of a LUT small.
  • using the σ_d and σ_r parameters, Equation 1 can be rewritten as
  • Equation 5. The first factor of the expression in Equation 5 depends on σ_d. Since there are four TU sizes, there are four different possible values of σ_d.
  • Equation (2) thus becomes:
  • I_D = [Σ_{k=0..4} I_k · e^(−d_k²/(2σ_d²)) · e^(−ΔI_k²/(2σ_r²))] / [Σ_{k=0..4} e^(−d_k²/(2σ_d²)) · e^(−ΔI_k²/(2σ_r²))], where d_k is the spatial distance and ΔI_k the intensity difference between pixel k and the center pixel (d_0 = ΔI_0 = 0)
  • the approach as described above can be implemented with filtering in float or in integers (8, 16 or 32 bit).
  • a table lookup is used to determine respective weight.
  • filtering in integers can avoid division by doing a table lookup of a multiplication factor and a shift factor.
  • lookup_M determines a multiplication factor to bring the gain of the filtering close to unity (the weights sum up to 1 « lookup_Sh), given that the "division" using right shift (») has its shift value (lookup_Sh) limited so that the divisor is a power of two.
  • lookup_Sh(A) gives a shift factor that, together with the multiplication factor lookup_M(A), gives a sufficient approximation of 1/A.
  • roundF is a rounding factor, equal to 1 « (lookup_Sh − 1). If this approximation is done so that the gain is less than or equal to unity, the filtering will not push the value of the filtered pixel outside the range of the pixel values in the neighborhood before the filtering.
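The multiply-and-shift division can be sketched as follows; building the table with floor division keeps the gain at or below unity, and roundF = 1 « (lookup_Sh − 1) is one common rounding convention (an assumption here, as is the fixed 16-bit shift):

```python
def build_reciprocal_table(max_den, sh=16):
    """For every denominator A, store M = floor(2^sh / A), so that
    M / 2^sh <= 1/A (the gain never exceeds unity)."""
    return [0] + [(1 << sh) // a for a in range(1, max_den + 1)], sh

def divide_by_lookup(num, den, mul, sh):
    """Approximate num / den as (num * M + roundF) >> sh."""
    round_f = 1 << (sh - 1)
    return (num * mul[den] + round_f) >> sh
```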
  • Example 15: one approach to reduce the amount of filtering is to omit filtering if the sum of the weights is equal to the weight for the center pixel.
  • the filtering as described in other examples can alternatively be performed by separable filtering in the horizontal and vertical directions, instead of the 2D filtering mostly described in other examples.
  • one set of weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size is used for blocks that have been intra predicted and another set of weights and/or filter size is used for blocks that have been inter predicted.
  • the weights are set to reduce the amount of filtering for blocks which have been predicted with higher quality compared to blocks that have been predicted with lower quality. Since blocks that have been inter predicted typically have higher quality than blocks that have been intra predicted, they are filtered less to preserve the prediction quality.
  • one set of weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size depends on picture type/slice type.
  • one example is to use one set of weights for intra pictures/slices and another set of weights for inter pictures/slices.
  • one example is to have one w_d (or similarly σ_d) for pictures/slices that have only been intra predicted and a smaller w_d (or similarly σ_d) for other pictures/slices.
  • B slices, which typically have better prediction quality than P slices (only single prediction), can in another variant of this example have a smaller weight than P slices.
  • generalized B-slices that are used instead of P-slices for uni-directional prediction can have same weight as P-slices.
  • "normal" B-slices that can predict from both future and past can have a larger weight than generalized B-slices.
  • one set of weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size is used for intra pictures/slices, another set of weights is used for inter pictures/slices that are used as reference for prediction of other pictures, and a third set of weights is used for inter pictures/slices that are not used as reference for prediction of other pictures.
  • one example is to have one w_d (or similarly σ_d) for pictures/slices that have only been intra predicted, a somewhat smaller w_d (or similarly σ_d) for pictures/slices that have been inter predicted and are used for predicting other pictures, and the smallest w_d (or similarly σ_d) for pictures/slices that have been inter predicted but are not used for prediction of other pictures (non-reference pictures).
  • example weights for intra pictures/slices (e.g. I_SLICE) are:
  • σ_d = 0.92 − min(TU block width, TU block height) · 0.025
  • example weights for inter pictures/slices (e.g. P_SLICE, B_SLICE) that are used for reference are:
  • an encoder can select which values of the weights to use and encode them in the SPS (sequence parameter set), PPS (picture parameter set) or slice header.
  • a decoder can then decode the values of the weights to be used for filtering respective picture/slice.
  • which weights are given for blocks that are intra predicted compared to blocks that are inter predicted can be encoded in the SPS/PPS or slice header.
  • a decoder can then decode the values of the weights to be used for blocks that are intra predicted and the values of the weights to be used for blocks that are inter predicted.
  • Figure 6 illustrates a filter according to an embodiment of the present invention.
  • a data processing system as illustrated in Figure 7, can be used to implement the filter of the examples described above.
  • the data processing system includes at least one processor that is further coupled to a network interface via an interconnect.
  • the at least one processor is also coupled to a memory via the interconnect.
  • the memory can be implemented by a hard disk drive, flash memory, or read-only memory and stores computer-readable instructions.
  • the at least one processor executes the computer-readable instructions and implements the functionality described above.
  • the network interface enables the data processing system to communicate with other nodes in a network.
  • Alternative examples may include additional components responsible for providing additional functionality, including any functionality described above and/or any functionality necessary to support the solution described herein.
  • a bilateral filter works by basing the filter weights not only on the spatial distance to the sample to be filtered, but also on the intensity difference.
  • σ_d is the spatial parameter
  • σ_r is the range parameter.
  • the properties (or strength) of the bilateral filter are controlled by these two parameters. Samples located closer to the sample to be filtered, and samples having a smaller intensity difference to the sample to be filtered, will have a larger weight than samples further away and with a larger intensity difference.
  • the application of the bilateral filter after inverse transform can improve the objective coding efficiency for all intra and random access configuration.
  • Inter predicted blocks typically have less residual than intra predicted blocks and therefore it makes sense to filter the reconstruction of inter predicted blocks less.
  • Ericsson may have current or pending patent rights relating to the technology described in this contribution and, conditioned on reciprocity, is prepared to grant licenses under reasonable and non-discriminatory terms as necessary for implementation of the resulting ITU-T Recommendation | ISO/IEC International Standard (per box 2 of the ITU-T/ITU-R/ISO/IEC patent statement and licensing declaration form).
  • Spatial prediction is achieved using intra (I) prediction from within the current picture.
  • a picture consisting of only intra coded blocks is referred to as an I-picture.
  • Temporal prediction is achieved using inter (P) or bi- directional inter (B) prediction on block level.
  • HEVC was finalized in 2013.
  • ITU-T VCEG and ISO/IEC MPEG are studying the potential need for standardization of future video coding technology with a compression capability that significantly exceeds that of the current HEVC standard. Such future standardization action could either take the form of additional extension(s) of HEVC or an entirely new standard.
  • the groups are working together on this exploration activity in a joint collaboration effort known as the Joint Video Exploration Team (JVET) to evaluate compression technology designs proposed by their experts in this area.
  • the ringing effect (Gibbs phenomenon) appears in video frames as oscillations near sharp edges. It is a result of a cut-off of high-frequency information in the block DCT transformation and lossy quantization process. Ringing also comes from inter prediction, where sub-pixel interpolation using a filter with negative weights can cause ringing near sharp edges. Artificial patterns that resemble ringing can also appear from intra prediction, as shown in Fig. 1. The ringing effect degrades the objective and subjective quality of video frames.
  • bilateral filtering is widely used in image processing because of its edge-preserving and noise-reducing features.
  • a bilateral filter decides its coefficients based on the contrast of the pixels in addition to the geometric distance.
  • a Gaussian function has usually been used to relate coefficients to the geometric distance.
  • σ_d is the spatial parameter
  • σ_r is the range parameter.
  • the bilateral filter is controlled by these two parameters.
  • I(i,j) and I(k,l) are the original intensity levels of pixels (i,j) and (k,l), respectively.
  • I D is the filtered intensity of pixel (i, j).
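The weight ω(i,j,k,l) and the resulting filtered intensity I_D(i,j) can be sketched directly from the formula above (floating-point Python; the 3×3 window is used just for demonstration):

```python
import math

def bilateral_weight(i, j, k, l, img, sigma_d, sigma_r):
    """omega(i,j,k,l): Gaussian in the geometric distance (sigma_d)
    times Gaussian in the intensity difference (sigma_r)."""
    dist2 = (i - k) ** 2 + (j - l) ** 2
    diff2 = (img[i][j] - img[k][l]) ** 2
    return math.exp(-dist2 / (2 * sigma_d ** 2) - diff2 / (2 * sigma_r ** 2))

def bilateral_filter_pixel(i, j, img, sigma_d, sigma_r, radius=1):
    """I_D(i,j): normalized weighted sum over the window."""
    num = den = 0.0
    for k in range(i - radius, i + radius + 1):
        for l in range(j - radius, j + radius + 1):
            w = bilateral_weight(i, j, k, l, img, sigma_d, sigma_r)
            num += img[k][l] * w
            den += w
    return num / den
```

On a uniform image the weights cancel in the normalization, so the filter leaves the pixel unchanged, which is a quick sanity check of the formula.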
  • in Annex A we have described a method where σ_d is calculated using the TU size and σ_r is calculated using the QP. Efficient ways to calculate these are described in
  • Rate-Distortion Optimization is part of the video encoding process. It improves coding efficiency by finding the "best" coding parameters. It measures both the number of bits used for each possible decision outcome of the block and the resulting distortion of the block.
  • one idea of embodiments of the present invention is to reuse the same look-up table (LUT) for both inter and intra, and to use a scale and offset to obtain a better approximation.
  • An aspect of the embodiments defines a method, performed by a filter, for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, the method comprising:
  • the filtering is controlled by two parameters, σ_d and σ_r, wherein σ_d depends on a pixel distance between the pixel value and the neighboring pixel value, wherein σ_r depends on a pixel value difference between the pixel value and the neighboring pixel value, and wherein at least one of the parameters σ_d and σ_r also depends on at least one of: quantization parameter, quantization scaling matrix, transform width, transform height, picture width, picture height, a magnitude of a negative filter coefficient used as part of inter/intra prediction.
  • Another aspect of the embodiments defines a filter, for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, the filter being configured to:
  • the filtering is controlled by two parameters, σ_d and σ_r, wherein σ_d depends on a pixel distance between the pixel value and the neighboring pixel value, wherein σ_r depends on a pixel value difference between the pixel value and the neighboring pixel value, and wherein at least one of the parameters σ_d and σ_r also depends on at least one of: quantization parameter, quantization scaling matrix, transform width, transform height, picture width, picture height, a magnitude of a negative filter coefficient used as part of inter/intra prediction.
  • Another aspect of the embodiments defines a filter, for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, the filter comprising a modifying module for modifying a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value, wherein the filtering is controlled by two parameters, σ_d and σ_r, wherein σ_d depends on a pixel distance between the pixel value and the neighboring pixel value, wherein σ_r depends on a pixel value difference between the pixel value and the neighboring pixel value, and wherein at least one of the parameters σ_d and σ_r also depends on at least one of: quantization parameter, quantization scaling matrix, transform width, transform height, picture width, picture height, a magnitude of a negative filter coefficient used as part of inter/intra prediction.
  • the decoder could also comprise a modifying means, configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value, wherein the filtering is controlled by two parameters, σ_d and σ_r, wherein σ_d depends on a pixel distance between the pixel value and the neighboring pixel value, wherein σ_r depends on a pixel value difference between the pixel value and the neighboring pixel value, and wherein at least one of the parameters σ_d and σ_r also depends on at least one of: quantization parameter, quantization scaling matrix, transform width, transform height, picture width, picture height, a magnitude of a negative filter coefficient used as part of inter/intra prediction.
  • a modifying means configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value, wherein the filtering is controlled by two parameters, σ_d and σ_r, wherein σ_d depends on
  • the filter may be implemented in a video encoder and a video decoder. It may be implemented in hardware, in software or a combination of hardware and software.
  • the filter may be implemented in, e.g. comprised in, user equipment, such as a mobile telephone, tablet, desktop, netbook, multimedia player, video streaming server, set-top box or computer.
  • a data processing system (as illustrated in Figure 7), can be used to implement the filter.
  • the data processing system includes at least one processor that is further coupled to a network interface via an interconnect.
  • the at least one processor is also coupled to a memory via the interconnect.
  • the memory can be implemented by a hard disk drive, flash memory, or read-only memory and stores computer-readable instructions.
  • the at least one processor executes the computer-readable instructions and implements the functionality described above.
  • the network interface enables the data processing system to communicate with other nodes in a network.
  • Alternative embodiments of the present invention may include additional components responsible for providing additional functionality, including any
  • a further aspect of the embodiments defines a computer program for a filter comprising a computer program code which, when executed, causes the filter to:
  • the filtering is controlled by two parameters, σ_d and σ_r, wherein σ_d depends on a pixel distance between the pixel value and the neighboring pixel value, wherein σ_r depends on a pixel value difference between the pixel value and the neighboring pixel value, and wherein at least one of the parameters σ_d and σ_r also depends on at least one of: quantization parameter, quantization scaling matrix, transform width, transform height, picture width, picture height, a magnitude of a negative filter coefficient used as part of inter/intra prediction.
  • a further aspect of the embodiments defines a computer program product comprising a computer program for a filter and a computer readable means on which the computer program for a filter is stored.
  • the main advantage of the current invention is that the size of the LUT goes down by 50%. This is important in hardware implementations, where LUT size comes at a premium.
  • each pixel uses four or five weights. In some implementations it may be of interest to read these four or five weights at the same time. This would mean that the LUT would have to be
  • Figures 1 (A) and (B) illustrate the ringing effect on a zoomed original video frame and a zoomed compressed video frame respectively.
  • Figure 2 illustrates an 8x8 transform unit block and the filter aperture for the pixel located at (1, 1).
  • Figure 3 illustrates a plus sign shaped deringing filter aperture.
  • Figure 5 illustrates the steps performed in a filtering method according to the embodiments of the present invention.
  • Figure 6 illustrates a filter according to the embodiments of the present invention.
  • Figure 7 illustrates a data processing system in accordance with the embodiments of the present invention.
  • the value of the weight in equation (1) only depends on the absolute value of the difference between two luma values (or intensity values). If I(i, j) varies between 0 and 1023 and I(k, l) also varies between 0 and 1023, there are only 1023 possible values of the absolute difference |I(i, j) − I(k, l)|. Hence the intensity dimension of the LUT never needs to be larger than 1023. However, it can be made shorter than this. For values of |I(i, j) − I(k, l)| over a certain value maxabs, the resulting weight ω is so small that we can set the weight to zero without much of an error. Hence we only need to tabulate the intensity dimension of the LUT from zero to maxabs.
  • the other LUT depends both on the absolute difference in intensity |I(i, j) − I(k, l)| and on σ_r, which in turn depends on QP.
  • a typical value of maxabs can be 244, which makes the maximum filter error smaller than 0.5. Smaller values of maxabs are also possible.
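The tabulation described above can be sketched as a small helper: a LUT indexed by the absolute intensity difference, truncated at maxabs. This is an illustrative sketch under the assumption of a Gaussian range kernel; the names are not from the patent:

```python
import math

def build_range_lut(sigma_r, maxabs):
    """Range-weight LUT indexed by the absolute intensity difference
    |I(i, j) - I(k, l)|. Differences above maxabs map to a weight of
    zero, so only maxabs + 1 entries need to be tabulated."""
    return [math.exp(-(d * d) / (2.0 * sigma_r ** 2))
            for d in range(maxabs + 1)]

def range_weight(lut, diff):
    """Look up the range weight for a (signed) intensity difference;
    anything beyond the table is treated as zero."""
    diff = abs(diff)
    return lut[diff] if diff < len(lut) else 0.0
```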
  • p is an offset that depends on the TU size. In another embodiment neither p nor s depends on the TU size.
  • |I(i, j) − I(k, l)| is multiplied with a scaling factor q, according to q · |I(i, j) − I(k, l)|.
  • |I(i, j) − I(k, l)| is first multiplied with a scaling factor q and then an offset r is added, according to q · |I(i, j) − I(k, l)| + r.
  • different values of maxabs can be used for different QPs.
  • the maxabs value may instead be 122 (half of the previous maxabs of 244).
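The scale-and-offset indexing described in the bullets above, which lets a single LUT serve several filter configurations (e.g. inter and intra) while shrinking the table, could look like this. The helper names and the example scaling factor q = 2 (which exhausts the table at half the original maxabs, e.g. 122 instead of 244) are illustrative assumptions:

```python
import math

def build_range_lut(sigma_r, maxabs):
    """Range-weight LUT with maxabs + 1 entries (Gaussian range kernel)."""
    return [math.exp(-(d * d) / (2.0 * sigma_r ** 2))
            for d in range(maxabs + 1)]

def shared_lut_weight(lut, diff, q, r=0):
    """Index a shared LUT at q * |diff| + r instead of |diff|, so one
    table can be reused with different effective filter strengths.
    Indices that fall outside the table yield a weight of zero."""
    idx = q * abs(diff) + r
    return lut[idx] if 0 <= idx < len(lut) else 0.0
```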


Abstract

The disclosure relates to a method, performed by a decoder and/or an encoder, for filtering at least two pixels in a block of pixels, each pixel being associated with a pixel value, a filtered pixel value being calculated from the pixel value and the pixel values of surrounding pixels using a set of weights consisting of a center weight and surrounding weights, the center weight being associated with the pixel to be filtered and each surrounding weight being associated with a surrounding pixel, and where each surrounding weight depends on a pixel value difference between the pixel value of the surrounding pixel associated with the weight and the pixel value of the pixel to be filtered. The method comprises obtaining a center weight value as a function of a parameter that is constant within said block of pixels, calculating a numerator value using at least one of said surrounding weights, calculating a denominator value using the sum of all said weights, and calculating the filtered pixel value using said numerator value and said denominator value.
PCT/EP2018/053939 2017-02-16 2018-02-16 Appareil et procédés de filtre WO2018149995A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762459709P 2017-02-16 2017-02-16
US62/459709 2017-02-16

Publications (1)

Publication Number Publication Date
WO2018149995A1 true WO2018149995A1 (fr) 2018-08-23

Family

ID=61283209

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/053939 WO2018149995A1 (fr) 2017-02-16 2018-02-16 Appareil et procédés de filtre

Country Status (1)

Country Link
WO (1) WO2018149995A1 (fr)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003014914A1 (fr) * 2001-08-07 2003-02-20 Nokia Corporation Procede et appareil d'execution d'une division


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
J. STROM; P. WENNERSTEN; K. ANDERSSON; J. ENHORN: "EE2-JVET-D0069 Bilateral Filter Test1, Test2 and Test3", JVET-E0031, January 2017 (2017-01-01)
J. STROM; P. WENNERSTEN; Y. WANG; K. ANDERSSON; J. SAMUELSSON: "Bilateral Filter After Inverse Transform", JVET-D0069, 15 October 2016 (2016-10-15)
JACOB STRÖM ET AL: "Bilateral filter after inverse transform", 4. JVET MEETING; 15-10-2016 - 21-10-2016; CHENGDU; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://PHENIX.INT-EVRY.FR/JVET/,, no. JVET-D0069, 6 October 2016 (2016-10-06), XP030150302 *
STRÖM J ET AL: "Bilateral filter strength based on prediction mode", 5. JVET MEETING; 12-1-2017 - 20-1-2017; GENEVA; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://PHENIX.INT-EVRY.FR/JVET/,, no. JVET-E0032, 3 January 2017 (2017-01-03), XP030150498 *
STRÖM J ET AL: "EE2-JVET related: Division-free bilateral filter", 6. JVET MEETING; 31-3-2017 - 7-4-2017; HOBART; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), no. JVET-F0096, 2 April 2017 (2017-04-02), XP030150774 *
STRÖM J ET AL: "EE2-JVET-E0032 Bilateral filter Test 1, Test2", 6. JVET MEETING; 31-3-2017 - 7-4-2017; HOBART; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), no. JVET-F0034, 23 March 2017 (2017-03-23), XP030150687 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019010315A1 (fr) * 2017-07-05 2019-01-10 Qualcomm Incorporated Filtre bilatéral sans division
US10887622B2 (en) 2017-07-05 2021-01-05 Qualcomm Incorporated Division-free bilateral filter
CN110891177A (zh) * 2018-09-07 2020-03-17 腾讯科技(深圳)有限公司 视频降噪、视频转码中的降噪处理方法、装置和机器设备
CN110891177B (zh) * 2018-09-07 2023-03-21 腾讯科技(深圳)有限公司 视频降噪、视频转码中的降噪处理方法、装置和机器设备
US11074678B2 (en) 2019-04-24 2021-07-27 Apple Inc. Biasing a noise filter to preserve image texture
WO2020253816A1 (fr) * 2019-06-21 2020-12-24 Huawei Technologies Co., Ltd. Codeur, décodeur et procédés correspondants pour un mode de partitionnement de sous-bloc
US11962773B2 (en) 2019-06-21 2024-04-16 Huawei Technologies Co., Ltd. Encoder, decoder and corresponding methods for sub-block partitioning mode
RU2811983C2 (ru) * 2019-06-21 2024-01-22 Хуавэй Текнолоджиз Ко., Лтд. Кодер, декодер и соответствующие способы для режима субблочного разделения
US11539948B2 (en) 2019-06-21 2022-12-27 Huawei Technologies Co., Ltd. Encoder, a decoder and corresponding methods for sub-block partitioning mode
US11206395B2 (en) * 2019-09-24 2021-12-21 Mediatek Inc. Signaling quantization matrix
WO2021158049A1 (fr) * 2020-02-05 2021-08-12 엘지전자 주식회사 Procédé de décodage d'image pour le codage d'informations d'image et dispositif associé
WO2021158048A1 (fr) * 2020-02-05 2021-08-12 엘지전자 주식회사 Procédé de décodage d'image associé à la signalisation d'un drapeau indiquant si tsrc est disponible, et dispositif associé
WO2021158047A1 (fr) * 2020-02-05 2021-08-12 엘지전자 주식회사 Procédé de décodage d'image utilisant des informations d'image comprenant un drapeau disponible tsrc et appareil associé
WO2021159081A1 (fr) * 2020-02-07 2021-08-12 Beijing Dajia Internet Informationtechnology Co., Ltd. Modes de codage sans perte pour codage vidéo
WO2023237808A1 (fr) * 2022-06-07 2023-12-14 Nokia Technologies Oy Procédé, appareil et produit-programme d'ordinateur pour le codage et le décodage de contenu média numérique


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18707314

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18707314

Country of ref document: EP

Kind code of ref document: A1