WO2018134362A1 - Filtering apparatus and methods - Google Patents

Filtering apparatus and methods

Info

Publication number
WO2018134362A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
filtering
filter
pixels
current block
Prior art date
Application number
PCT/EP2018/051328
Other languages
English (en)
Inventor
Kenneth Andersson
Per Wennersten
Jacob STRÖM
Jack ENHORN
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ)
Publication of WO2018134362A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation

Definitions

  • the present embodiments generally relate to filter apparatus and methods, for example to filter apparatus and methods for video coding and decoding, and in particular to deringing filtering in video coding and decoding.
  • HEVC High Efficiency Video Coding
  • JCT-VC Joint Collaborative Team on Video Coding
  • Spatial prediction is achieved using intra (I) prediction from within the current picture.
  • a picture consisting of only intra coded blocks is referred to as an I-picture.
  • Temporal prediction is achieved using inter (P) or bi-directional inter (B) prediction on block level.
  • HEVC was finalized in 2013.
  • JVET Joint Video Exploration Team
  • Ringing, also referred to as the Gibbs phenomenon, appears in video frames as oscillations near sharp edges. It is a result of the cut-off of high-frequency information in the block Discrete Cosine Transform (DCT) transformation and the lossy quantization process. Ringing also comes from inter prediction, where sub-pixel interpolation using a filter with negative weights can cause ringing near sharp edges. Artificial patterns that resemble ringing can also appear from intra prediction, as shown in the right part of Figure 1 (whereby Figures 1 (A) and (B) illustrate the ringing effect on a zoomed original video frame and a zoomed compressed video frame respectively). The ringing effect degrades the objective and subjective quality of video frames.
  • DCT Discrete Cosine Transform
  • bilateral filtering is widely used in image processing because of its edge-preserving and noise-reducing features.
  • a bilateral filter decides its coefficients based on the contrast of the pixels in addition to the geometric distance.
  • a Gaussian function has usually been used to relate coefficients to the geometric distance and contrast of the pixel values.
  • the weight ω(i, j, k, l) assigned to pixel (k, l) to filter the pixel (i, j) is defined as:

    ω(i, j, k, l) = e^( −((i − k)² + (j − l)²) / (2σ_d²) − (I(i, j) − I(k, l))² / (2σ_r²) )

    where σ_d is the spatial parameter and σ_r is the range parameter.
  • the bilateral filter is controlled by these two parameters. I(i, j) and I(k, l) are the original intensity levels of pixels (i, j) and (k, l) respectively.
  • I_D(i, j) is the filtered intensity of pixel (i, j).
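  • As a minimal illustration of the bilateral weight above (the picture data and parameter values here are arbitrary examples, not values from the application), the weight can be sketched in Python:

```python
import math

def bilateral_weight(i, j, k, l, I, sigma_d, sigma_r):
    """Weight assigned to pixel (k, l) when filtering pixel (i, j).

    Combines a spatial term (geometric distance) and a range term
    (intensity difference), as in the standard bilateral filter.
    """
    spatial = ((i - k) ** 2 + (j - l) ** 2) / (2.0 * sigma_d ** 2)
    rng = (I[i][j] - I[k][l]) ** 2 / (2.0 * sigma_r ** 2)
    return math.exp(-(spatial + rng))
```

Note that the center pixel always receives weight 1 (zero distance and zero intensity difference), and weights fall off with both distance and intensity difference.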
  • Rate-Distortion Optimization is part of the video encoding process. It improves coding efficiency by finding the "best" coding parameters. It measures both the number of bits used for each possible decision outcome of the block and the resulting distortion of the block.
  • a deblocking filter (DBF) and a Sample Adaptive Offset (SAO) filter are included in the HEVC standard.
  • DBF deblocking filter
  • SAO Sample Adaptive Offset
  • ALF Adaptive Loop Filter
  • SAO will remove some of the ringing artifacts but there is still room for improvements.
  • Another problem with deploying bilateral filters in video coding is that they are too complex and lack sufficient parameter settings and adaptivity.
  • the embodiments disclosed herein relate to further improvements to a filter.
  • a method for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, wherein a pixel value is modified by a weighted combination of the pixel value and at least one spatially neighboring pixel value.
  • the method comprises performing a filtering operation on a block by block basis, each block comprising rows and columns of pixels.
  • the method comprises selectively filtering pixels in the current block by omitting the filtering of pixels in a column and/or row of the current block, where such column and/or row interfaces with a next prediction operation of an immediately subsequent block.
  • a filter for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, the filter being configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value.
  • the filter is configured to filter on a block by block basis, each block comprising rows and columns of pixels, and selectively filter pixels in the block by omitting the filtering of pixels in a column and/or row of the current block, where such column and/or row interfaces with a next prediction operation of an immediately subsequent block.
  • a decoder comprising a modifying means configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value.
  • the modifying means is operative to filter on a block by block basis, each block comprising rows and columns of pixels, and selectively filter pixels in the block by omitting the filtering of pixels in a column and/or row of the current block, where such column and/or row interfaces with a next prediction operation of an immediately subsequent block.
  • a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method as described in the embodiments herein, and defined in the appended claims.
  • a computer program product comprising a computer-readable medium with the computer program as above.
  • Figures 1 (A) and (B) illustrate the ringing effect on a zoomed original video frame and a zoomed compressed video frame respectively;
  • Figure 2 illustrates an 8x8 transform unit block and the filter aperture for the pixel located at (1,1);
  • Figure 3 illustrates a plus sign shaped deringing filter aperture
  • Figure 5 illustrates the steps performed in a filtering method according to an example
  • Figure 6 illustrates a decoder according to an example
  • Figure 7 illustrates a data processing system in accordance with an example
  • Figure 8 shows an example of a method according to an embodiment
  • Figure 9 shows an example of block filtering according to an embodiment
  • Figure 10 shows an example of block filtering according to an embodiment
  • Figure 11 shows an example of block filtering according to an embodiment
  • Figure 12 shows an example of block filtering according to an embodiment
  • Figure 13 shows an example of block filtering according to an embodiment
  • Figure 14 shows an example of a filter according to an embodiment
  • Figure 15 shows an example of a video coding system having a filter according to an embodiment
  • Figure 16 shows an example of a decoder according to an embodiment
  • Figure 17 illustrates schematically a video encoder according to an embodiment
  • Figure 18 illustrates schematically a video decoder according to an embodiment.
  • the technology can additionally be considered to be embodied entirely within any form of computer-readable memory, such as solid-state memory, magnetic disk, or optical disk containing an appropriate set of computer instructions that would cause a processor to carry out the techniques described herein.
  • Hardware implementation may include or encompass, without limitation, digital signal processor (DSP) hardware, a reduced instruction set processor, hardware (e.g., digital or analog) circuitry including but not limited to application specific integrated circuit(s) (ASIC) and/or field programmable gate array(s) (FPGA(s)), and (where appropriate) state machines capable of performing such functions.
  • DSP digital signal processor
  • ASIC application specific integrated circuit
  • FPGA field programmable gate array
  • a computer is generally understood to comprise one or more processors, one or more processing units, one or more processing modules or one or more controllers, and the terms computer, processor, processing unit, processing module and controller may be employed interchangeably.
  • the functions may be provided by a single dedicated computer, processor, processing unit, processing module or controller, by a single shared computer, processor, processing unit, processing module or controller, or by a plurality of individual computers, processors, processing units, processing modules or controllers, some of which may be shared or distributed.
  • these terms also refer to other hardware capable of performing such functions and/or executing software, such as the example hardware recited above.
  • the filters described herein may be used in any form of user equipment, such as a mobile telephone, tablet, desktop, netbook, multimedia player, video streaming server, set-top box or computer.
  • user equipment UE
  • UE user equipment
  • a UE herein may comprise a UE (in its general sense) capable of operating or at least performing measurements in one or more frequencies, carrier frequencies, component carriers or frequency bands.
  • As a terminal device, it may be a "UE" operating in single- or multi-radio access technology (RAT) or multi-standard mode.
  • RAT radio access technology
  • the general terms "terminal device", "communication device" and "wireless communication device" are used in the following description, and it will be appreciated that such a device may or may not be 'mobile' in the sense that it is carried by a user.
  • the term “terminal device” encompasses any device that is capable of communicating with communication networks that operate according to one or more mobile communication standards, such as the Global System for Mobile communications, GSM, UMTS, Long-Term Evolution, LTE, etc.
  • a UE may comprise a Universal Subscription Identity Module (USIM) on a smart-card or implemented directly in the UE, e.g., as software or as an integrated circuit.
  • USIM Universal Subscription Identity Module
  • the operations described herein may be partly or fully implemented in the USIM or outside of the USIM.
  • Embodiments described here are related to providing filtering blocks that can be used in filters, including for example a deringing filter as described in the earlier application, for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
  • Applying the bilateral filter, for example directly after the transform for a current block, adds one additional filtering step before one can perform intra prediction of a block to the right of and/or below the current block.
  • Embodiments herein disclose how the pixels within a block can be selectively filtered, such that in certain circumstances a next block can be predicted from unfiltered pixels. This means, for example, that a next block can be predicted before the filtering of reconstructed samples in the block has finished, thereby reducing latency.
  • a bilateral deringing filter with a plus sign shaped filter aperture is used directly after inverse transform.
  • An identical filter and identical filtering process is used in the corresponding video encoder and decoder to ensure that there is no drift between the encoder and the decoder.
  • the first example describes a way to remove ringing artifacts by using a deringing filter designed in the earlier application.
  • the deringing filter is evolved from a bilateral filter.
  • each pixel in the reconstructed picture is replaced by a weighted average of itself and its neighbors. For instance, a pixel located at (i, j), will be filtered using its neighboring pixel (k, I).
  • the weight ω(i, j, k, l) assigned to pixel (k, l) to filter the pixel (i, j) is defined as (Equation 1):

    ω(i, j, k, l) = e^( −((i − k)² + (j − l)²) / (2σ_d²) − (I(i, j) − I(k, l))² / (2σ_r²) )

    where I(i, j) and I(k, l) are the original reconstructed intensity values of pixels (i, j) and (k, l) respectively.
  • σ_d is the spatial parameter
  • σ_r is the range parameter.
  • the bilateral filter is controlled by these two parameters.
  • the weight of a reference pixel (k, l) for the pixel (i, j) thus depends both on the distance between the pixels and on the intensity difference between them.
  • pixels located closer to the pixel to be filtered, and with a smaller intensity difference to it, will have larger weight than other, more distant (spatially or in intensity) pixels.
  • in this example, σ_d and σ_r are constant values.
  • the deringing filter in this example is applied to each TU block after the inverse transform in an encoder, as shown in Figure 2, which shows an example of an 8x8 block. This means, for example, that subsequent intra-coded blocks will predict from the filtered pixel values.
  • the filter may also be used during R-D optimization in the encoder.
  • the identical deringing filter is also applied to each TU block after the inverse transform in the corresponding video decoder.
  • each pixel in the transform unit is filtered using its direct neighboring pixels only, as shown in Figure 3.
  • the filter has a plus sign shaped filter aperture centered at the pixel to be filtered.
  • the output filtered pixel intensity I_D(i, j) is (Equation 2):

    I_D(i, j) = Σ_(k,l) I(k, l) · ω(i, j, k, l) / Σ_(k,l) ω(i, j, k, l)
  • all possible weights (coefficients) of the proposed deringing filter are calculated and stored in a two-dimensional look-up table (LUT).
  • the LUT can, for instance, use the spatial distance and the intensity difference between the pixel to be filtered and the reference pixels as indices into the LUT.
  • since the filter aperture is a plus sign shape, all neighbors lie at the same spatial distance, so it is sufficient to use a one-dimensional lookup table (LUT) indexed on the difference in intensity, or indexed on the absolute value of the difference in intensity.
  • Alternatively, one LUT could be dedicated to a weight dependent on distance from the current pixel (w_d) and another LUT dedicated to a weight dependent on closeness in pixel value (w_r). It should be noted that the exponential function used to determine the weights could be some other function as well.
  • the LUT could be optimized based on some error metric (SSD, SSIM) or according to human vision.
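  • The plus-shaped aperture combined with a one-dimensional LUT indexed on the absolute intensity difference can be sketched as follows (a minimal Python illustration; the σ values, the 8-bit difference range and the border handling are illustrative assumptions, not details from the application):

```python
import math

def build_range_lut(sigma_r, max_diff=255):
    """1-D LUT of range weights indexed on |intensity difference|."""
    return [math.exp(-(d * d) / (2.0 * sigma_r ** 2)) for d in range(max_diff + 1)]

def filter_block_plus(block, sigma_d, sigma_r):
    """Bilateral filtering of a block with a plus-sign-shaped aperture.

    Each pixel is filtered using its direct neighbors only (up, down,
    left, right); border pixels simply use the neighbors that exist.
    Since every neighbor is at distance 1, the spatial weight is one
    constant and only the range weight needs a table lookup.
    """
    lut = build_range_lut(sigma_r)
    w_spatial = math.exp(-1.0 / (2.0 * sigma_d ** 2))
    h, w = len(block), len(block[0])
    out = [row[:] for row in block]
    for i in range(h):
        for j in range(w):
            num = float(block[i][j])   # center pixel has weight 1
            den = 1.0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                k, l = i + di, j + dj
                if 0 <= k < h and 0 <= l < w:
                    wgt = w_spatial * lut[abs(block[k][l] - block[i][j])]
                    num += wgt * block[k][l]
                    den += wgt
            out[i][j] = num / den
    return out
```

A uniform block passes through unchanged, while an isolated spike is pulled toward its neighbors, which is the desired smoothing behavior.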
  • a deringing filter with a rectangular shaped filter aperture is used in the video encoder's R-D optimization process.
  • the same filter is also used in the corresponding video decoder.
  • each pixel is filtered using its neighboring pixels within a M by N size rectangular shaped filter aperture centered at the pixel to be filtered, as shown in Figure 4.
  • the same deringing filter as in the first example is used.
  • Example 3
  • the deringing filter according to the third example of the earlier application is used after prediction and transform have been performed for an entire frame or part of a frame.
  • the same filter is also used in the corresponding video decoder.
  • the third example is the same as the first or second example, except that the filtering is not done right after the inverse transform. Instead the proposed filter is applied to the reconstructed picture in both encoder and decoder. On the one hand this could lead to worse performance, since filtered pixels will not be used for intra prediction, but on the other hand the difference is likely very small and the existing filters are currently placed at this stage of the encoder/decoder.
  • σ_d and/or σ_r are related to the Transform Unit, TU, size.
  • the σ_d and σ_r can be functions of the TU size (e.g. polynomial functions): σ_d = f1(TU size) and σ_r = f2(TU size).
  • when both σ_d and σ_r are derived based on TU size, a preferred example is to have different functions f1 ≠ f2.
  • σ_d = 0.92 − max{TU block width, TU block height} × 0.025
  • the σ_d can be separate for the vertical and horizontal filter coefficients, so that different filter strengths can be used in the two directions.
  • a further generalization is to have a weight and/or size dependent on distance based on a function of TU size, TU width or TU height, and a weight and/or size dependent on pixel closeness based on a function of TU size, TU width or TU height.
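  • The TU-size dependence quoted above, σ_d = 0.92 − max{TU block width, TU block height} × 0.025, can be written directly (the formula is from the text; its use is illustrative):

```python
def sigma_d_from_tu(tu_width, tu_height):
    """Spatial parameter derived from TU size, per the formula above:
    larger blocks get a smaller sigma_d, i.e. a narrower spatial weight."""
    return 0.92 - max(tu_width, tu_height) * 0.025
```

For the four TU sizes 4, 8, 16 and 32 this yields four distinct σ_d values, which matches the observation later in the text that only four σ_d values need to be supported in a LUT.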
  • bit_depth, i.e. the number of bits used to represent pixels in the video.
  • when both σ_d and σ_r are derived based on QP, a preferred example is to have different functions f3 ≠ f4.
  • the QP mentioned here relates to the coarseness of the quantization of transform coefficients.
  • the QP can correspond to a picture or slice QP or even a locally used QP, i.e. QP for TU block.
  • QP can be defined differently in different standards, so that the QP in one standard does not correspond to the QP in another standard.
  • HEVC High Efficiency Video Coding
  • In JEM, six steps of QP change double the quantization step size. This could be different in a final version of H.266, where steps could be finer or coarser and the range could be extended beyond 51.
  • the range parameter can be a polynomial model, for example a first-order model, of the QP.
  • Another approach is to define a table with an entry for each QP, where each entry relates to the reconstruction level of at least one transform coefficient quantized with QP to 1.
  • a table of σ_d and/or a table of σ_r is created where each entry, i.e. QP value, relates to the reconstruction level, i.e. the pixel value after inverse transform and inverse quantization, for one transform coefficient quantized with QP to 1, e.g. the smallest possible value a quantized transform coefficient can have.
  • This reconstruction level indicates the smallest pixel value change that can originate from a true signal. Changes smaller than half of this value can be regarded as coding noise that the deringing filter should remove.
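  • As a sketch of tying the range parameter to the quantization coarseness (the scale constant and the exact mapping are illustrative assumptions; only the "step doubles every six QP steps" behavior and the "half the smallest true change is noise" reasoning come from the text):

```python
def sigma_r_from_qp(qp, qstep_at_qp4=1.0):
    """Illustrative mapping from QP to the range parameter sigma_r.

    In HEVC-style quantization the step size roughly doubles every
    6 QP steps. The reconstruction level of the smallest non-zero
    coefficient scales with the step size, and changes below half of
    it can be treated as coding noise, so sigma_r is tied to half the
    quantization step. qstep_at_qp4 is an assumed normalization.
    """
    qstep = qstep_at_qp4 * 2.0 ** ((qp - 4) / 6.0)
    return 0.5 * qstep
```

With this mapping, a 6-step increase in QP doubles σ_r, i.e. the deringing filter smooths more aggressively when quantization is coarser.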
  • HEVC uses by default a uniform reconstruction quantization (URQ) scheme that quantizes frequencies equally.
  • HEVC has the option of using quantization scaling matrices, also referred to as scaling lists, either default ones, or quantization scaling matrices that are signaled as scaling list data in the sequence parameter set (SPS) or picture parameter set (PPS).
  • SPS sequence parameter set
  • PPS picture parameter set
  • scaling matrices are typically only specified for 4x4 and 8x8 matrices.
  • the signaled 8x8 matrix is applied by having 2x2 and 4x4 blocks share the same scaling value, except at the DC positions.
  • a scaling matrix with individual scaling factors for each transform coefficient can be used to create a different quantization effect per transform coefficient, by scaling the transform coefficients individually with their respective scaling factors as part of the quantization. This enables, for example, the quantization effect to be stronger for higher-frequency transform coefficients than for lower-frequency transform coefficients.
  • default scaling matrices are defined for each transform size and can be invoked by flags in the SPS and/or the PPS. Scaling matrices also exist in H.264. In HEVC it is also possible to define own scaling matrices in SPS or PPS specifically for each combination of color component, transform size and prediction type (intra or inter mode).
  • deringing filtering is performed for at least the reconstructed sample values from one transform coefficient, using the corresponding scaling factor, as the QP, to determine σ_d and/or σ_r.
  • This could be performed before adding the intra/inter prediction or after adding the intra/inter prediction.
  • Another, less complex approach would be to use the maximum or minimum scaling factor, as the QP, to determine σ_d and/or σ_r.
  • the size of the filter can also be dependent on the QP, so that the filter is larger for large QPs than for small QPs.
  • the width and/or the height of the filter kernel of the deringing filter is defined for each QP.
  • Another example is to use a first width and/or a first height of the filter kernel for QP values equal or smaller than a threshold and a second, different width and/or a second, different height for QP values larger than a threshold.
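  • The threshold-based choice of kernel size can be sketched as follows (the threshold of 30 and the concrete 3x3/5x5 sizes are illustrative assumptions; the text only specifies that one size is used at or below a threshold and a different size above it):

```python
def kernel_size_for_qp(qp, threshold=30, small=(3, 3), large=(5, 5)):
    """Pick a (width, height) filter kernel: a first size for QP values
    equal to or smaller than a threshold, and a second, larger size for
    QP values above the threshold, so coarser quantization gets a wider
    deringing filter."""
    return small if qp <= threshold else large
```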
  • σ_d and σ_r are related to the video resolution.
  • the σ_d and σ_r can be functions of the frame size, e.g. of the frame diagonal: σ_d = f5(frame diagonal) and σ_r = f6(frame diagonal).
  • the size of the filter can also be dependent on the size of the frame. When both σ_d and σ_r are derived based on the frame diagonal, a preferred example is to have different functions f5 ≠ f6.
  • At least one of the spatial parameter and the range parameter can be set such that stronger deringing filtering is applied for small resolutions as compared to large resolutions.
  • the σ_d and σ_r can be related to QP, TU block size, video resolution and other video properties.
  • the σ_d and σ_r can then be functions of the form σ_d = f(QP, TU size, video resolution, ...) and σ_r = g(QP, TU size, video resolution, ...), with different functions f ≠ g.
  • the de-ringing filter is applied if an inter prediction is interpolated, e.g. non-integer pixel motion, or the intra prediction is predicted from reference samples in a specific direction (e.g. non-DC), or the transform block has non-zero transform coefficients.
  • De-ringing can be applied directly after intra/inter prediction, to improve the accuracy of the prediction signal; directly after the transform, on residual samples, to remove transform effects; on reconstructed samples (after addition of intra/inter prediction and residual), to remove ringing effects from both prediction and transform; or on both the intra/inter prediction and the residual or reconstruction.
  • the filter weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size can be set individually for intra prediction mode and/or inter prediction mode.
  • the filter weights and/or filter size can be different in vertical and horizontal direction depending on intra prediction mode or interpolation filter used for inter prediction. For example, if close to horizontal intra prediction is performed the weights could be smaller for the horizontal direction than the vertical direction and for close to vertical intra prediction weights could be smaller for the vertical direction than the horizontal direction.
  • the filter weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size can depend on the position of non-zero transform coefficients.
  • the filter weights and/or filter size can be different in the vertical and horizontal directions depending on the non-zero transform coefficient positions. For example, if non-zero transform coefficients only exist in the vertical direction at the lowest frequency in the horizontal direction, the filter weights can be smaller in the horizontal direction than in the vertical direction. Alternatively, the filter is only applied in the vertical direction. Similarly, if non-zero transform coefficients only exist in the horizontal direction at the lowest frequency in the vertical direction, the filter weights can be smaller in the vertical direction than in the horizontal direction. Alternatively, the filter is only applied in the horizontal direction.
  • the filter weights and/or filter size can also be dependent on existence of non-zero transform coefficients above a certain frequency.
  • the filter weights can be smaller if only low frequency non-zero transform coefficients exist than when high frequency non-zero transform coefficients exist.
  • the filter weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size can differ depending on the transform type.
  • Type of transform can refer to transform skip, KLT like transforms, DCT like transforms, DST transforms, non-separable 2D transforms, rotational transforms and combination of those.
  • the bilateral filter could be applied only to fast transforms, with weights equal to 0 for all other transform types. Some types of transforms can require smaller weights than others, since they cause less ringing than other transforms.
  • the filtering may be implemented as a differential filter whose output is clipped (Clip) to be larger than or equal to a MIN value and less than or equal to a MAX value, and then added to the pixel value, instead of using a smoothing filter kernel like the Gaussian.
  • the differential filter can for example be designed as the difference between a dirac function and a Gaussian filter kernel.
  • a sign (s) can optionally also be used to make the filtering enhance edges rather than smooth edges, if that is desired in some cases.
  • the MAX and MIN value can be a function of other parameters as discussed in other examples.
  • the usage of a clipping function can be omitted, but it allows extra freedom to limit the amount of filtering, enabling the use of a stronger bilateral filter while limiting how much it is allowed to change the pixel value.
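  • The differential-plus-clip structure can be sketched as follows (a minimal illustration; the single weight w standing in for a full difference kernel, and the concrete bounds, are assumptions, not values from the application):

```python
def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def filtered_delta(center, neighbors, w, min_val, max_val, sign=1):
    """Differential-filter formulation of the deringing filter: compute a
    correction from the neighbor differences, clip it to [min_val, max_val],
    and add it to the pixel. With sign = -1 the correction is negated,
    enhancing rather than smoothing edges."""
    delta = w * sum(n - center for n in neighbors)
    delta = clamp(sign * delta, min_val, max_val)
    return center + delta
```

The clip bounds directly cap how far the filter may move any pixel, which is what permits a stronger underlying filter without large pixel changes.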
  • the filtering can be described as a vertical filtering part and a horizontal filtering part as shown below:
  • the MAX_hor, MAX_ver, MIN_hor and MIN_ver values can be functions of other parameters, as discussed in other examples.
  • one aspect is to keep the size of a LUT small.
  • when computing the weights from the σ_d and σ_r parameters using Equation 1 directly, the size of the LUT can become quite big.
  • for 10-bit video, for example, the absolute difference between two luma values can be between 0 and 1023.
  • Equation 1 can be rewritten as a product of a spatial factor and a range factor. If we keep σ_r fixed, we can now create one LUT for the expression (Equation 5).
  • The first factor of the expression in Equation 5 depends on σ_d. Since there are four TU sizes, there are four different possible values of σ_d.
  • Equation (2) thus becomes a sum of such terms, and we can see that we can divide both the numerator and the denominator by a common factor, which yields a simplified expression. If we let I_0 be the intensity of the middle pixel and denote the intensities of the neighboring upper and right pixels correspondingly, the filtered value can be computed from the LUT entries for the respective intensity differences.
  • the approach as described above can be implemented with filtering in float or in integers (8, 16 or 32 bit).
  • a table lookup is used to determine respective weight.
  • filtering in integers can avoid division by doing a table lookup of a multiplication factor and a shift factor.
  • lookup_M determines a multiplication factor to bring the gain of the filtering close to unity (the weights sum up to 1 << lookup_Sh), given that the "division" using right shift (>>) has the shift value (lookup_Sh) limited to be a multiple of 2.
  • lookup_Sh(A) gives a shift factor that together with the multiplication factor lookup_M gives a sufficient approximation of 1/A.
  • roundF is a rounding factor which is equal to lookup_Sh >> 1. If this approximation is done so that the gain is less than or equal to unity, the filtering will not push the value of the filtered pixel outside the range of the pixel values in the neighborhood before the filtering.
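  • The multiply-and-shift replacement for division can be sketched as follows (a simplified variant with a single fixed shift for all denominators, rather than a per-denominator lookup_Sh; the table size and shift width are illustrative assumptions):

```python
def make_division_tables(max_den, shift=16):
    """Per-denominator multiplication factors so that (x * M) >> shift
    approximates x / A without a division, in the spirit of the
    lookup_M / lookup_Sh scheme described above."""
    lookup_m = [0] * (max_den + 1)
    for a in range(1, max_den + 1):
        # floor(2^shift / A) keeps the effective gain M / 2^shift <= 1/A,
        # so filtering cannot push a pixel outside its neighborhood range.
        lookup_m[a] = (1 << shift) // a
    return lookup_m

def weighted_average(values, weights, lookup_m, shift=16):
    """Integer 'division' of sum(w*v) by sum(w) via table lookup,
    with a rounding offset of half the shift denominator."""
    num = sum(w * v for w, v in zip(weights, values))
    den = sum(weights)
    round_f = 1 << (shift - 1)
    return (num * lookup_m[den] + round_f) >> shift
```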
  • one approach to reduce the amount of filtering is to omit filtering if the sum of the weights is equal to the weight for the center pixel.
  • the filtering as described in other examples can alternatively be performed as separable filtering in the horizontal and vertical directions, instead of the 2D filtering mostly described in other examples.
  • one set of weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size is used for blocks that have been intra predicted, and another set of weights and/or filter size is used for blocks that have been inter predicted.
  • the weights are set to reduce the amount of filtering for blocks which have been predicted with higher quality, compared to blocks that have been predicted with lower quality. Since blocks that have been inter predicted typically have higher quality than blocks that have been intra predicted, they are filtered less to preserve the prediction quality.
  • Example weights for intra predicted blocks are:
  • Example weights for inter predicted blocks are:
  • one set of weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size depends on the picture type/slice type.
  • One example is to use one set of weights for intra pictures/slices and another set of weights for inter pictures/slices.
  • One example is to have one w_d (or similarly σ_d) for pictures/slices that have only been intra predicted and a smaller w_d (or similarly σ_d) for other pictures/slices.
  • Example weights for intra pictures/slices are:
  • Example weights for inter pictures/slices are:
  • B slices, which typically have better prediction quality than P slices (only single prediction), can in another variant of this example have a smaller weight than P slices.
  • generalized B-slices that are used instead of P-slices for uni-directional prediction can have the same weight as P-slices.
  • "normal" B-slices that can predict from both future and past can have a larger weight than generalized B-slices.
  • Example weights for "normal" B-slices are:
  • Example 19
  • one set of weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size is used for intra pictures/slices, another set of weights is used for inter pictures/slices that are used as reference for prediction of other pictures, and a third set of weights is used for inter pictures/slices that are not used as reference for prediction of other pictures.
  • One example is to have one w_d (or similarly σ_d) for pictures/slices that have only been intra predicted, a somewhat smaller w_d (or similarly σ_d) for pictures/slices that have been inter predicted and are used for predicting other pictures, and the smallest w_d (or similarly σ_d) for pictures/slices that have been inter predicted but are not used for prediction of other pictures (non-reference pictures).
  • Example weights for intra pictures/slices are:
  • Example weights for inter pictures/slices (e.g. P_SLICE, B_SLICE) that are not used for reference (non-reference pictures) are:
  • Example weights for inter pictures/slices (e.g. P_SLICE, B_SLICE) that are used for reference are:
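  • The three-way selection can be sketched as follows (the ordering — intra largest, reference inter smaller, non-reference inter smallest — follows the text, but the concrete σ_d values are illustrative assumptions, not values from the application):

```python
# Illustrative sigma_d per picture category; only the relative ordering
# is taken from the text, the numbers themselves are assumed.
SIGMA_D = {
    "intra": 0.82,
    "inter_reference": 0.72,
    "inter_non_reference": 0.62,
}

def sigma_d_for_picture(is_intra, used_for_reference):
    """Pick the sigma_d weight set for a picture/slice: strongest filtering
    for intra pictures, weaker for reference inter pictures, weakest for
    non-reference inter pictures."""
    if is_intra:
        return SIGMA_D["intra"]
    key = "inter_reference" if used_for_reference else "inter_non_reference"
    return SIGMA_D[key]
```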
  • Example 20
  • an encoder can select which values of the weights to use and encode them in the SPS (sequence parameter set), PPS (picture parameter set) or slice header.
  • a decoder can then decode the values of the weights to be used for filtering respective picture/slice.
  • specific values of the weights for blocks that are intra predicted, as compared to blocks that are inter predicted, are encoded in the SPS/PPS or slice header.
  • a decoder can then decode the values of the weights to be used for blocks that are intra predicted and the values of the weights to be used for blocks that are inter predicted.
  • a data processing system can be used to implement the filter of the examples described above.
  • the data processing system includes at least one processor that is further coupled to a network interface via an interconnect.
  • the at least one processor is also coupled to a memory via the interconnect.
  • the memory can be implemented by a hard disk drive, flash memory, or read-only memory and stores computer-readable instructions.
  • the at least one processor executes the computer-readable instructions and implements the functionality described above.
  • the network interface enables the data processing system to communicate with other nodes in a network.
  • Alternative examples may include additional components responsible for providing additional functionality, including any functionality described above and/or any functionality necessary to support the solution described herein.
  • a filter as described in the embodiments below, or the examples above, may be implemented in a video encoder and a video decoder. It may be implemented in hardware, in software or a combination of hardware and software.
  • the filter may be implemented in, e.g. comprised in, user equipment, such as a mobile telephone, tablet, desktop, netbook, multimedia player, video streaming server, set-top box or computer.
  • Figure 8 shows a method according to a first embodiment, performed by a filter, for filtering of a picture of a video signal.
  • the picture comprises pixels, each pixel being associated with a pixel value, wherein a pixel value is modified by a weighted combination of the pixel value and at least one spatially neighboring pixel value.
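As a concrete (hedged) reading of this weighted combination, a bilateral-style filter weights each neighboring value by spatial closeness and value similarity. The Gaussian weight form below is an assumption for illustration; this passage does not fix the weight function:

```python
import math

def bilateral_weight(delta_pos: float, delta_val: float,
                     sigma_d: float, sigma_r: float) -> float:
    # Classic bilateral weight: spatial closeness times value similarity.
    return math.exp(-(delta_pos ** 2) / (2 * sigma_d ** 2)
                    - (delta_val ** 2) / (2 * sigma_r ** 2))

def filter_pixel(center: float, neighbors: list, sigma_d: float,
                 sigma_r: float) -> float:
    """Modify a pixel value by a weighted combination of itself and its
    spatially neighboring values (all neighbors assumed at distance 1)."""
    weights = [1.0] + [bilateral_weight(1.0, n - center, sigma_d, sigma_r)
                       for n in neighbors]
    values = [center] + list(neighbors)
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

In a flat region all values are equal, so the weighted combination leaves the pixel unchanged; near an edge, dissimilar neighbors receive small weights, which is what suppresses ringing without blurring the edge.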
  • the method comprises performing a filtering operation on a block by block basis, each block comprising rows and columns of pixels, for example M rows and N columns, step 801.
  • the filtering operation comprises selectively filtering pixels in the current block by omitting the filtering of pixels in a column and/or row of the current block, where such column and/or row interfaces with a next prediction operation of an immediately subsequent block, step 803.
  • the filtering operation is performed between a transform operation for a current block and a prediction operation for an immediately subsequent block to the current block. It is noted that the filtering may be performed, in some examples, in parallel with an intra prediction operation.
  • the filtering operation may be performed directly after a transform operation for a current block, as an additional filtering step before performing prediction of an immediately subsequent or adjacent block, for example a block to the right or below a current block.
  • the immediately subsequent block performs intra prediction.
  • the method of Figure 8 comprises the steps of: determining if a block to the right and/or below of a current block can use intra prediction, and, if so, selectively filtering all pixels in the block, except the rightmost column of the current block and/or the bottom row of the current block.
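A minimal sketch of this selective filtering, where `filter_fn` stands in for a hypothetical per-pixel filter such as the bilateral filter:

```python
def selectively_filter_block(block, filter_fn, skip_right_col, skip_bottom_row):
    """Filter an MxN block, omitting the rightmost column and/or bottom
    row when a subsequent intra-predicted block may read those pixels.
    `filter_fn(block, r, c)` returns the filtered value of one pixel."""
    rows, cols = len(block), len(block[0])
    out = [row[:] for row in block]  # skipped pixels keep reconstructed values
    for r in range(rows - 1 if skip_bottom_row else rows):
        for c in range(cols - 1 if skip_right_col else cols):
            out[r][c] = filter_fn(block, r, c)
    return out
```

Because the skipped boundary pixels keep their reconstructed values, the next block's intra prediction can read them before (or while) the interior is filtered.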
  • some intra prediction modes use the nearest samples just outside the block to be predicted, e.g. from the left or above or both. Some intra prediction modes also use samples from the blocks above-right, below-left and above-left.
  • blocks are coded in a certain order, starting with, for example, a large block typically referred to as a CTU (128x128 in JEM, but it could also be 256x256) that is then split into a smaller number of blocks, e.g. four blocks, or into half blocks.
  • the order may be from left to right for a vertical split, and from top to bottom for a horizontal split. It can thus be noted that blocks in the top-left quadrant (64x64) of a 128x128 CTU could be split down to, for example, 4x4, while the block to the right could be 64x64.
  • Each of these blocks can be intra predicted or inter predicted. As such, besides the possibility of a block to the right of or below the current block using samples from the current block for prediction by extrapolation, for example intra prediction, it can also happen that the block to the bottom-left of the current block uses samples from the current block; for example, that block can be the next block in processing order.
  • since a block to the right can always be a prediction block, e.g. an intra block, to minimize the impact on latency the rightmost column and the bottommost row of the block are never filtered.
  • if the current block has a next block directly after it in coding order to the right, then filtering of the rightmost column is omitted. Similarly, according to some embodiments, if the next block directly after in coding order is below (including below-left), then filtering of the bottom row is omitted.
  • if the partition is a half block with a horizontal split and the first block is the top block, then the bottom row of that block is not filtered; all following blocks do the same. If the partition is a half block with a vertical split and the first block is the left block, then its rightmost column is not filtered; similarly for all blocks to the right.
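Assuming the convention stated earlier (a vertical split orders blocks left to right, a horizontal split top to bottom), the choice of which edge to leave unfiltered can be sketched as:

```python
def edges_to_skip(split: str) -> set:
    """Which block edge to leave unfiltered for a half-block partition,
    so the next block in coding order can be intra predicted immediately.
    Assumes vertical split = left/right halves, coded left to right, and
    horizontal split = top/bottom halves, coded top to bottom."""
    if split == "vertical":      # next block is to the right
        return {"rightmost_column"}
    if split == "horizontal":    # next block is below
        return {"bottom_row"}
    raise ValueError("unknown split type: " + split)
```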
  • the method comprises reconstructing the block by adding the dequantized and inverse transformed residual coefficients to the prediction samples.
  • the current block can be predicted, e.g. by intra prediction using reconstructed samples from the current picture or by inter prediction using samples from another picture that already have been reconstructed.
  • the error from that prediction compared to the source is typically compressed by a transform.
  • the transform coefficients are quantized to reduce overhead. All coding parameters (prediction parameters, quantized transform coefficients) may be entropy coded to further reduce overhead.
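The predict/transform/quantize round trip described above can be illustrated with a toy example. The transform is elided here for brevity (real codecs use a DCT-like transform), so only the quantization loss is modeled:

```python
def encode_decode_residual(residual, q_step):
    """Toy round trip: quantize the (notionally transformed) residual,
    then dequantize it, as a decoder would. The reconstruction error is
    bounded by half the quantization step."""
    quantized = [round(x / q_step) for x in residual]   # coarse integers
    dequantized = [q * q_step for q in quantized]       # decoder side
    return dequantized

def reconstruct(prediction, decoded_residual):
    # reconstructed sample = prediction sample + decoded residual
    return [p + r for p, r in zip(prediction, decoded_residual)]
```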
  • Embodiment 1
  • this embodiment relates to filtering blocks that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
  • Having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block to the right of the current block.
  • all pixels are filtered except the rightmost column of the current block if the block to the right can use intra prediction.
  • the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples.
  • the method of Figure 8 further comprises the steps of: determining if a block to the right of a current block can use intra prediction, and, if so, selectively filtering all pixels in the block, except the rightmost column of the current block.
  • this embodiment relates to filtering blocks that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
  • having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block below the current block.
  • all pixels are filtered except the bottom row of the current block if the block below uses intra prediction.
  • the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples.
  • the bilateral filter is applied to all samples except the bottom row; in a case where there is a block below the current block that uses intra prediction by extrapolation, that block can be predicted directly after the reconstruction of the current block, if desired, and does not have to wait for the filtering to be performed.
  • the method of Figure 8 further comprises the steps of: determining if a block below a current block can use intra prediction, and, if so, selectively filtering all pixels in the block, except the bottom row of the current block. In one example, the bottommost row is never filtered if the next block in the coding order is below the current block (including below to the left), as will be described further in embodiments 7 and 8 below.
  • the bottom row is never filtered, as will also be described further in embodiments 7 and 8 below.
  • this embodiment relates to filtering blocks of samples that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
  • having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block to the right of the current block.
  • pixels in the rightmost column of the current block are filtered if the block to the right can use intra prediction.
  • the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples. Then the right most column is filtered and can then be used for intra prediction of a block to the right of current block.
  • the method of Figure 8 further comprises the steps of: determining if a block to the right of a current block can use intra prediction, and, if so, selectively filtering only pixels in the rightmost column of the current block.
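A sketch of this variant, filtering only the boundary pixels the next block needs so its intra prediction can start immediately (`filter_fn` is a hypothetical per-pixel filter):

```python
def filter_rightmost_column_first(block, filter_fn):
    """Filter only the rightmost column of a reconstructed block, so a
    block to the right can intra predict from filtered boundary samples;
    the remaining pixels can be filtered later or in parallel."""
    rows, cols = len(block), len(block[0])
    out = [row[:] for row in block]
    for r in range(rows):
        out[r][cols - 1] = filter_fn(block, r, cols - 1)
    return out
```

The same pattern applies to the bottom row for a block below the current block.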
  • this embodiment relates to filtering of samples that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
  • having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block below the current block.
  • pixels in the bottom row of the current block are filtered, if the block below can use intra prediction.
  • the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples. Then the row in the bottom of current block is filtered and can then be used for intra prediction of a block below the current block.
  • the method of Figure 8 further comprises the steps of: determining if a block below a current block can use intra prediction, and, if so, selectively filtering only pixels in the bottom row of the current block.
  • this embodiment relates to filtering blocks that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
  • Having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block to the right of the current block.
  • all pixels are filtered except the rightmost column of the current block and the bottom row of the current block, if the blocks to the right and below use intra prediction.
  • the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples.
  • the bilateral filter is applied to all samples except the rightmost column and the bottom row; in a case where there is a block to the right or below that uses intra prediction by extrapolation, that block can be predicted directly after reconstruction of the current block, if desired, and does not have to wait for the filtering to be performed.
  • the method of Figure 8 further comprises the steps of: determining if a block to the right and below of a current block can use intra prediction, and, if so, selectively filtering all pixels in the block, except the rightmost column and the bottom row of the current block.
  • the rightmost column and the bottom row are never filtered if the block is the top-left block of a quadrant, since the next block in coding order is to the right and, directly after that, the block below, as will be described further in embodiments 7 and 8 below.
  • the rightmost column and the bottom row are never filtered, as will be described further in embodiments 7 and 8 below.
  • This embodiment relates to filtering blocks that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
  • having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block to the right of the current block.
  • the next block is predicted from unfiltered pixels. This means that the next block can be predicted before the filtering of the reconstructed samples in the current block has finished, breaking the latency problem.
  • This embodiment relates to filtering a current block where the pixels of the current block can be used later for prediction of a subsequent block.
  • the pixels of the current block can be used for prediction of an immediately subsequent block, i.e., the next block in the decoding (or coding) order. If all pixels in the current block are filtered, this filtering would need to finish before the pixels can be used for prediction in the immediately subsequent block.
  • a decoder would therefore need to wait to decode the immediately subsequent block until the filtering of the current block is finished. This wait may mean that the decoder can run out of cycles, i.e., it will not have time to decode the entire frame before it must be displayed. This problem is most acute for small blocks, such as 4x4 blocks, since these take more cycles to decode per pixel.
  • the last column and the last row of the current block are never filtered, i.e. not filtered, as shown in Figure 13. Since an immediately subsequent block can only use the last row or the last column of pixels from the current block for prediction, and since these pixels remain unfiltered, it is possible for the decoding of the subsequent block to commence before filtering of the current block has ended. This means that decoding of the immediately subsequent block can happen in parallel with the filtering of the current block, or at least in parallel with some of the filtering of the current block, whereby such decoding in parallel saves cycles and reduces latency.
  • the step of selectively filtering in Figure 8 comprises never filtering the last column and the last row of the current block.
  • the decoding of an immediately subsequent block can commence before filtering of a current block has ended.
  • decoding of an immediately subsequent block can occur in parallel with at least some of the filtering of the current block.
  • filtering is avoided only for 4x4 blocks.
  • the last column and the last row of the current block are not filtered, as shown in Figure 13, if the block is the smallest possible block, such as a 4x4 block. Since an immediately subsequent block can only use the last row or the last column of pixels from the current block for prediction, and since these pixels remain unfiltered, it is possible for the decoding of the subsequent block to commence before filtering of the current block has ended, if the current block is a 4x4 block. This means that decoding of the immediately subsequent block can happen in parallel with the filtering of the current 4x4 block, saving cycles and reducing latency. Since the clock cycle budget is especially tight for 4x4 blocks, saving cycles for these blocks is sufficient. Also, it will allow for more pixels being filtered compared to embodiment 7, since all pixels of larger blocks, such as 4x8 blocks, may be filtered.
  • a group of block sizes are excluded, such as 4x4, 4x8 and 8x4 from having their last row and last column filtered.
  • the step of not filtering the last column and the last row of the current block is applied to a group of block sizes, for example the block sizes 4x4, 4x8 and 8x4.
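This size-based rule reduces to a simple predicate; the set of sizes below is the group named in the text:

```python
# Latency-critical block sizes whose last row/column stay unfiltered.
SMALL_BLOCK_SIZES = {(4, 4), (4, 8), (8, 4)}

def skip_last_row_and_column(width: int, height: int) -> bool:
    """True if this block's last row and column should be left
    unfiltered so the next block can be decoded in parallel."""
    return (width, height) in SMALL_BLOCK_SIZES
```

Larger blocks, such as 8x8 and above, are filtered in full, since their clock cycle budget per pixel is less tight.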
  • Figure 14 shows an example of a filter 1400 according to an embodiment, whereby the filter is implemented as a data processing system.
  • the data processing system includes at least one processor 1401 that is coupled to a network interface 1405 via an interconnect.
  • the at least one processor 1401 is also coupled to a memory 1403 via the interconnect.
  • the memory 1403 can be implemented by a hard disk drive, flash memory, or read-only memory and stores computer-readable instructions.
  • the at least one processor 1401 executes the computer-readable instructions and implements the functionality described in the embodiments above.
  • the network interface 1405 enables the data processing system 1400 to communicate with other nodes in a network.
  • Alternative examples may include additional components responsible for providing additional functionality, including any functionality described above and/or any functionality necessary to support the solution described herein.
  • the filter 1400 may be operative to filter a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, the filter being configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value.
  • the filter 1400 may be operative to filter on a block by block basis, each block comprising rows and columns of pixels, and selectively filter pixels in the block by omitting the filtering of pixels in a column and/or row of the current block, where such column and/or row interfaces with a next prediction operation of an immediately subsequent block.
  • filter 1400 may be positioned between a transform operation for a current block and a prediction operation for an immediately subsequent block to the current block. It is noted that the filtering may be performed, in some examples, in parallel with an intra prediction operation.
  • the filter 1400 may be further operative to perform filtering operations as described here, and defined in the appended claims.
  • Figure 15 shows an example of part of a video coding system 1500 having a filter 1400 according to an embodiment.
  • the filter 1400 may comprise a filter as described in any of the embodiments herein.
  • the filter 1400 is shown as being positioned between a transform module 1501 for a current block and a prediction module 1503, the prediction module 1503 configured to provide an intra prediction operation for a block to the right or below a current block. It is noted that the filtering may be performed, in some examples, in parallel with an intra prediction operation.
  • Figure 16 shows an example of a decoder 1600 that comprises a modifying means, for example a filter as described herein, configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value.
  • the modifying means may be operative to filter on a block by block basis, each block comprising rows and columns of pixels, e.g. M rows and N columns, and selectively filter pixels in the block by omitting the filtering of pixels in a column or row of the current block, where such column or row interfaces with a next prediction operation of an adjacent block.
  • the modifying means may be further operative to perform a filtering method as described herein, and as defined in the appended claims.
  • at least one of the parameters σd and σr may also depend on at least one of: quantization parameter, quantization scaling matrix, transform width, transform height, picture width, picture height, or a magnitude of a negative filter coefficient used as part of inter/intra prediction.
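No formula for this dependence is given here, so the sketch below is purely illustrative: it only encodes the plausible tendencies that filtering should be stronger at higher quantization parameters (more ringing) and weaker for larger transforms. The constants are assumptions, not values from the text:

```python
def sigma_d(qp: int, transform_width: int, transform_height: int) -> float:
    """Illustrative-only spatial parameter: grows with QP, shrinks with
    transform size. The functional form and constants are assumptions."""
    base = 0.92 + 0.025 * qp                 # stronger at coarser quantization
    size = min(transform_width, transform_height)
    return base * (4.0 / size) ** 0.5        # weaker for larger transforms
```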
  • the embodiments described herein provide an improved filter for video coding.
  • Figure 17 is a schematic block diagram of a video encoder 40 according to an embodiment.
  • a current sample block also referred to as pixel block or block of pixels, is predicted by performing a motion estimation by a motion estimator 50 from already encoded and reconstructed sample block(s) in the same picture and/or in reference picture(s).
  • the result of the motion estimation is a motion vector in the case of inter prediction.
  • the motion vector is utilized by a motion compensator 50 for outputting an inter prediction of the sample block.
  • An intra predictor 49 computes an intra prediction of the current sample block.
  • the outputs from the motion estimator/compensator 50 and the intra predictor 49 are input in a selector 51 that either selects intra prediction or inter prediction for the current sample block.
  • the output from the selector 51 is input to an error calculator in the form of an adder 41 that also receives the sample values of the current sample block.
  • the adder 41 calculates and outputs a residual error as the difference in sample values between the sample block and its prediction, i.e., prediction block.
  • the error is transformed in a transformer 42, such as by a discrete cosine transform (DCT), and the resulting coefficients are quantized by a quantizer 43, followed by coding in an encoder 44, such as an entropy encoder.
  • the estimated motion vector is brought to the encoder 44 for generating the coded representation of the current sample block.
  • the transformed and quantized residual error for the current sample block is also provided to an inverse quantizer 45 and inverse transformer 46 to reconstruct the residual error.
  • This residual error is added by an adder 47 to the prediction output from the motion compensator 50 or the intra predictor 49 to create a reconstructed sample block that can be used as prediction block in the prediction and coding of other sample blocks.
  • This reconstructed sample block is first processed by a device 100 for filtering of a picture according to the embodiments in order to suppress deringing artifacts.
  • the modified, i.e., filtered, reconstructed sample block is then temporarily stored in a Decoded Picture Buffer (DPB) 48, where it is available to the intra predictor 49 and the motion estimator/compensator 50.
  • the modified, i.e. filtered, reconstructed sample block from device 100 is also coupled directly to the intra predictor 49.
  • if the deringing filtering is instead applied following the inverse transform, the device 100 is preferably arranged between the inverse transformer 46 and the adder 47.
  • FIG. 18 is a schematic block diagram of a video decoder 60 comprising a device 100 for filtering of a picture according to the embodiments.
  • the video decoder 60 comprises a decoder 61, such as an entropy decoder, for decoding a bitstream comprising an encoded representation of a sample block to get a set of quantized and transformed coefficients. These coefficients are dequantized in an inverse quantizer 62 and inverse transformed by an inverse transformer 63 to get a decoded residual error.
  • the decoded residual error is added in an adder 64 to the sample prediction values of a prediction block.
  • the prediction block is determined by a motion estimator/compensator 67 or an intra predictor 66, depending on whether inter or intra prediction is performed.
  • a selector 68 is thereby interconnected to the adder 64 and the motion estimator/compensator 67 and the intra predictor 66.
  • the resulting decoded sample block output from the adder 64 is input to a device 100 for filtering of a picture or part of a picture in order to suppress and combat any ringing artifacts.
  • the filtered sample block enters a DPB 65 and can be used as prediction block for subsequently decoded sample blocks.
  • the DPB 65 is thereby connected to the motion estimator/compensator 67 to make the stored sample blocks available to the motion estimator/compensator 67.
  • the output from the adder 64 is preferably also input to the intra predictor 66 to be used as an unfiltered prediction block.
  • the filtered sample block is furthermore output from the video decoder 60, such as output for display on a screen. If the deringing filtering instead is applied following inverse transform, the device 100 is preferably instead arranged between the inverse transformer 63 and the adder 64.
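The decoder data flow described above can be sketched as follows. All callables are hypothetical stand-ins for the numbered modules (inverse quantizer 62, inverse transformer 63, intra/inter predictor, device 100), and entropy decoding is assumed already done:

```python
def decode_block(coeffs, predict, inverse_quantize, inverse_transform,
                 deringing_filter, dpb):
    """Sketch of the decoder loop: decode the residual, add the
    prediction, store the filtered block in the DPB for inter
    prediction, and keep the unfiltered block for intra prediction."""
    residual = inverse_transform(inverse_quantize(coeffs))
    reconstructed = [p + r for p, r in zip(predict(), residual)]
    filtered = deringing_filter(reconstructed)
    dpb.append(filtered)              # available to motion estimation/compensation
    return reconstructed, filtered    # unfiltered copy feeds the intra predictor
```

The key point of the embodiments above is that `reconstructed` (or all of it except its last row/column) is available to the intra predictor before `deringing_filter` has finished.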
  • One idea of embodiments of the present invention is to introduce a deringing filter into the Future Video Codec, i.e., the successor to HEVC.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention concerns a method, performed by a filter, for filtering a picture of a video signal, the picture comprising pixels, each pixel being associated with a pixel value, a pixel value being modified by a weighted combination of the pixel value and at least one spatially neighboring pixel value. The method comprises performing the filtering on a block by block basis, each block comprising rows and columns of pixels. The filtering operation comprises selectively filtering pixels in the block by omitting the filtering of pixels in a column and/or row of the current block, where such a column and/or row interfaces with a next prediction operation of an immediately subsequent block.
PCT/EP2018/051328 2017-01-19 2018-01-19 Appareil de filtrage et procédés WO2018134362A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762448058P 2017-01-19 2017-01-19
US62/448,058 2017-01-19

Publications (1)

Publication Number Publication Date
WO2018134362A1 true WO2018134362A1 (fr) 2018-07-26

Family

ID=61187271

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/EP2018/051329 WO2018134363A1 (fr) 2017-01-19 2018-01-19 Appareil de filtrage et procédés
PCT/EP2018/051328 WO2018134362A1 (fr) 2017-01-19 2018-01-19 Appareil de filtrage et procédés

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/051329 WO2018134363A1 (fr) 2017-01-19 2018-01-19 Appareil de filtrage et procédés

Country Status (1)

Country Link
WO (2) WO2018134363A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110855987A (zh) * 2018-08-21 2020-02-28 北京字节跳动网络技术有限公司 用于在双边滤波器中的加权参数推导的量化差
CN113905236A (zh) * 2019-09-24 2022-01-07 Oppo广东移动通信有限公司 图像编解码方法、编码器、解码器以及存储介质

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140146875A1 (en) * 2012-11-26 2014-05-29 Qualcomm Incorporated Loop filtering across constrained intra block boundaries in video coding
WO2015191834A1 (fr) * 2014-06-11 2015-12-17 Qualcomm Incorporated Détermination de l'application d'un filtre de dégroupage sur des blocs codés par palette, dans un codage vidéo

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9635360B2 (en) * 2012-08-01 2017-04-25 Mediatek Inc. Method and apparatus for video processing incorporating deblocking and sample adaptive offset
BR112016007151A2 (pt) * 2013-10-14 2017-09-12 Microsoft Tech Licensing recursos de modo de predição de cópia intrabloco para codificação e decodificação de vídeo e de imagem

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140146875A1 (en) * 2012-11-26 2014-05-29 Qualcomm Incorporated Loop filtering across constrained intra block boundaries in video coding
WO2015191834A1 (fr) * 2014-06-11 2015-12-17 Qualcomm Incorporated Détermination de l'application d'un filtre de dégroupage sur des blocs codés par palette, dans un codage vidéo

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110855987A (zh) * 2018-08-21 2020-02-28 北京字节跳动网络技术有限公司 用于在双边滤波器中的加权参数推导的量化差
CN110855986A (zh) * 2018-08-21 2020-02-28 北京字节跳动网络技术有限公司 双边滤波器的减小的窗口尺寸
US11490081B2 (en) 2018-08-21 2022-11-01 Beijing Bytedance Network Technology Co., Ltd. Unequal weighted sample averages for bilateral filter
US11558610B2 (en) 2018-08-21 2023-01-17 Beijing Bytedance Network Technology Co., Ltd. Quantized difference used for weighting parameters derivation in bilateral filters
CN110855986B (zh) * 2018-08-21 2023-03-10 北京字节跳动网络技术有限公司 双边滤波器的减小的窗口尺寸
CN110855987B (zh) * 2018-08-21 2023-03-10 北京字节跳动网络技术有限公司 用于在双边滤波器中的加权参数推导的量化差
CN113905236A (zh) * 2019-09-24 2022-01-07 Oppo广东移动通信有限公司 图像编解码方法、编码器、解码器以及存储介质
CN113905236B (zh) * 2019-09-24 2023-03-28 Oppo广东移动通信有限公司 图像编解码方法、编码器、解码器以及存储介质
US11882304B2 (en) 2019-09-24 2024-01-23 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image coding/decoding method, coder, decoder, and storage medium

Also Published As

Publication number Publication date
WO2018134363A1 (fr) 2018-07-26

Similar Documents

Publication Publication Date Title
US11272175B2 (en) Deringing filter for video coding
US11902515B2 (en) Method and apparatus for video coding
US11122263B2 (en) Deringing filter for video coding
KR101752612B1 (ko) 비디오 코딩을 위한 샘플 적응적 오프셋 프로세싱의 방법
CN107347157B (zh) 视频解码装置
KR101530832B1 (ko) 영상의 재구성된 샘플 세트에 대한 보상 오프셋들의 인코딩/디코딩을 최적화하는 방법 및 장치
US20170272758A1 (en) Video encoding method and apparatus using independent partition coding and associated video decoding method and apparatus
CN110024405B (zh) 图像处理方法及其装置
US10999603B2 (en) Method and apparatus for video coding with adaptive clipping
WO2018149995A1 (fr) Appareil et procédés de filtre
KR102393178B1 (ko) 복원 블록을 생성하는 방법 및 장치
JP7295330B2 (ja) パレットモードのための量子化処理
WO2018134128A1 (fr) Filtrage de données vidéo à l'aide d'une table de consultation partagée
WO2018134362A1 (fr) Appareil de filtrage et procédés
US20240414379A1 (en) Combining deblock filtering and another filtering for video encoding and/or decoding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18703695

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18703695

Country of ref document: EP

Kind code of ref document: A1