WO2013162441A1 - Deblocking filtering control - Google Patents

Deblocking filtering control

Info

Publication number
WO2013162441A1
Authority
WO
WIPO (PCT)
Prior art keywords
parameter value
pixels
deblocking filtering
block boundary
beta
Application number
PCT/SE2013/050237
Other languages
English (en)
Inventor
Andrey Norkin
Rickard Sjöberg
Original Assignee
Telefonaktiebolaget L M Ericsson (Publ)
Application filed by Telefonaktiebolaget L M Ericsson (Publ) filed Critical Telefonaktiebolaget L M Ericsson (Publ)
Publication of WO2013162441A1


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117 - Filters, e.g. for pre-processing or post-processing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 - Incoming video signal characteristics or properties
    • H04N19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness

Definitions

  • the present embodiments generally relate to deblocking filtering and in particular to controlling deblocking filtering over a boundary between neighboring blocks of pixels in a picture.
  • Deblocking filters are used in video coding standards in order to combat blocking artifacts.
  • the blocking artifacts arise because the original video is split into blocks which are processed relatively independently.
  • the blocking artifacts can arise due to different intra prediction of blocks, quantization effects and motion compensation. Two particular variants of deblocking are described below.
  • In state-of-the-art video coding, such as H.264, there is an adaptive deblocking filter/loop filter after prediction and residual reconstruction, but before storage of the reconstruction for later reference when encoding or decoding subsequent frames.
  • The deblocking filtering consists of several steps such as filter decisions, filtering operations, a clipping function and changes of pixel values. The decision to filter the border or not is made by evaluating several conditions. Filter decisions depend on macroblock (MB) type, motion vector (MV) difference between neighboring blocks, whether neighboring blocks have coded residuals and on the local structure of the current and/or neighboring blocks.
  • the amount of filtering for a pixel depends on the position of that pixel relative to the block boundary and on the quantization parameter (QP) value used for residual coding.
  • the filter decision is based on comparing three pixel differences with three thresholds.
  • The thresholds are adapted to the QP. If the following conditions are fulfilled the filtering is done: abs(d-e) < thr1,
  • The filtering can be described by a delta value, i.e. the amount by which the filtering changes the current pixel value.
  • d' is here the pixel value at position d after filtering and e' is the pixel value after filtering at position e. More filtering is allowed for high QP than for low QP.
  • delta_clipped = max( -thr3, min( thr3, delta ) ), where thr3 controls the filter strength.
  • A larger value of thr3 means that the filtering is stronger, which in turn means that a stronger low-pass filtering effect will happen.
  • The filter strength can be increased if any of the following two conditions also holds: abs(b-d) < thr2 and abs(e-g) < thr2
  • the filter strength is adapted by clipping the delta less, e.g. to allow for more variation.
  • The second filtering mode (strong filtering) is applied for intra macroblock boundaries only, when the following condition is fulfilled: abs(d-e) < thr1/4.
  • The thresholds thr1, thr2 and thr3 are derived by table lookup using QP as index. Each slice can contain modifications of thr2 and thr3 using slice_beta_offset_div2 and of thr1 using slice_alpha_c0_offset_div2.
  • The slice parameters 2×slice_beta_offset_div2 and 2×slice_alpha_c0_offset_div2 are added to the current QP index before table lookup of thr2/thr3 and thr1 respectively.
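  • As a rough illustration of the table lookup mechanism described above, the following C sketch shows how the slice offsets shift the QP index before the threshold lookup. It follows the mapping described in the preceding paragraphs; the table names, the index range and the clip3() helper are placeholders for illustration, not the actual H.264 reference code.

        /* Hypothetical QP-indexed threshold tables (placeholders). */
        extern const int thr1_table[52];   /* boundary-difference threshold     */
        extern const int thr2_table[52];   /* side-difference threshold         */
        extern const int thr3_table[52];   /* clipping threshold for the delta  */

        static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }

        /* Derive thr1, thr2 and thr3 for a slice: 2 x slice_alpha_c0_offset_div2
         * shifts the index used for thr1, and 2 x slice_beta_offset_div2 shifts
         * the index used for thr2 and thr3, before the table lookup.            */
        void derive_thresholds(int qp, int slice_alpha_c0_offset_div2,
                               int slice_beta_offset_div2,
                               int *thr1, int *thr2, int *thr3)
        {
            int index_a = clip3(0, 51, qp + 2 * slice_alpha_c0_offset_div2);
            int index_b = clip3(0, 51, qp + 2 * slice_beta_offset_div2);

            *thr1 = thr1_table[index_a];
            *thr2 = thr2_table[index_b];
            *thr3 = thr3_table[index_b];
        }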
  • p0 to p3 and q0 to q3 represent pixel values across a vertical block boundary.
  • In the HEVC draft, the deblocking filter works differently than in H.264.
  • The filtering is performed if at least one of the blocks on either side of the border is intra, or has non-zero coefficients, or the difference between the motion vector components of the blocks is greater than or equal to one integer pixel. For example, when filtering the border between two neighboring blocks A and B, this condition must be satisfied for the block boundary to be filtered.
  • The two filtering modes (weak and strong filtering) in the HEVC draft are as follows:
  • Weak filtering is performed based on the above conditions.
  • The actual filtering works by computing an offset (Δ) for each of the lines i for which the weak filter has been selected.
  • The following weak filtering procedure is applied for every line where it has been chosen.
  • Δq = Clip3( -( tc >> 1 ), tc >> 1, ( ( ( q2 + q0 + 1 ) >> 1 ) - q1 - Δ ) >> 1 )
  • p1' = Clip3( p1 - 2×tc, p1 + 2×tc, ( p2 + p1 + p0 + q0 + 2 ) >> 2 )
  • p2' = Clip3( p2 - 2×tc, p2 + 2×tc, ( 2×p3 + 3×p2 + p1 + p0 + q0 + 4 ) >> 3 )
  • q0' = Clip3( q0 - 2×tc, q0 + 2×tc, ( p1 + 2×p0 + 2×q0 + 2×q1 + q2 + 4 ) >> 3 )
  • q1' = Clip3( q1 - 2×tc, q1 + 2×tc, ( p0 + q0 + q1 + q2 + 2 ) >> 2 )
  • q2' = Clip3( q2 - 2×tc, q2 + 2×tc, ( p0 + q0 + q1 + 3×q2 + 2×q3 + 4 ) >> 3 )
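  • Written out in C, the weak-filter offset and the strong-filtering updates listed above take the following form; Clip3(lo, hi, v) clamps v to the range [lo, hi]. Only the pixel updates reproduced above are shown; the remaining updates of the HEVC draft (e.g. for p0) follow the same pattern and are omitted here.

        static int Clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }

        /* Weak-filter offset for the q-side, as in the first equation above. */
        int weak_filter_dq(int q0, int q1, int q2, int tc, int delta)
        {
            return Clip3(-(tc >> 1), tc >> 1, ((((q2 + q0 + 1) >> 1) - q1 - delta) >> 1));
        }

        /* Strong filtering of one pixel line.  p[0..3] = p0..p3 on one side of
         * the boundary, q[0..3] = q0..q3 on the other side.                    */
        void strong_filter_line(int p[4], int q[4], int tc)
        {
            const int p0 = p[0], p1 = p[1], p2 = p[2], p3 = p[3];
            const int q0 = q[0], q1 = q[1], q2 = q[2], q3 = q[3];

            p[1] = Clip3(p1 - 2 * tc, p1 + 2 * tc, (p2 + p1 + p0 + q0 + 2) >> 2);
            p[2] = Clip3(p2 - 2 * tc, p2 + 2 * tc, (2 * p3 + 3 * p2 + p1 + p0 + q0 + 4) >> 3);
            q[0] = Clip3(q0 - 2 * tc, q0 + 2 * tc, (p1 + 2 * p0 + 2 * q0 + 2 * q1 + q2 + 4) >> 3);
            q[1] = Clip3(q1 - 2 * tc, q1 + 2 * tc, (p0 + q0 + q1 + q2 + 2) >> 2);
            q[2] = Clip3(q2 - 2 * tc, q2 + 2 * tc, (p0 + q0 + q1 + 3 * q2 + 2 * q3 + 4) >> 3);
        }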
  • The parameters beta_offset_div2 and pps_beta_offset_div2 are used in order to adjust the amount of deblocking filtering as in the following.
  • the number of samples from the block boundary modified by deblocking filtering depends on equations (4), (5), (7), (8).
  • Equations (4), (5), (7) and (8) use a comparison with the parameter β divided by some factor, such as ( β >> 2 ) in (4), ( β >> 3 ) in (5) and ( ( β + ( β >> 1 ) ) >> 3 ) in (7) and (8), where the parameter β depends on the QP and is normally derived from a look-up table, such as Table 1.
  • An offset to the parameter β can be signaled, for instance, in the slice header or in a parameter set, such as an Adaptation Parameter Set (APS) or Picture Parameter Set (PPS), as beta_offset_div2.
  • Significant changes of the parameter β are therefore required in order to change the threshold values ( β >> 2 ), ( β >> 3 ) and ( ( β + ( β >> 1 ) ) >> 3 ).
  • a general objective is to provide an efficient deblocking filtering control.
  • a particular objective is to enable modifying threshold values used for determining deblocking filtering mode and/or length separately from modifications of a parameter that determines which parts of block boundaries are modified by deblocking filtering.
  • An aspect of the embodiments relates to a deblocking filtering control method performed in connection with video decoding.
  • the method comprises retrieving, based on encoded video data, a first syntax element defining a beta parameter value and a second syntax element defining a length offset parameter value.
  • the method also comprises determining, based at least partly on the beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data, and determining, based at least partly on the length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • a related aspect of the embodiments defines a filtering control device comprising a determining unit configured to determine a beta parameter value from a first syntax element retrieved based on encoded video data and a length offset parameter value from a second syntax element retrieved based on the encoded video data.
  • the filtering control device also comprises a processing unit configured to i) determine, based at least partly on the beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data, and ii) determine, based at least partly on the length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • Another related aspect of the embodiments defines a decoder configured to decode encoded video data of a video sequence.
  • the decoder comprises a filtering control device according to above.
  • a further related aspect of the embodiments defines a user equipment comprising a decoder according to above.
  • the computer program comprises code means which when run on a computer causes the computer to retrieve, based on encoded video data, a first syntax element defining a beta parameter value and a second syntax element defining a length offset parameter value.
  • the code means also causes the computer to determine, based at least partly on the beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data.
  • the code means further causes the computer to determine, based at least partly on the length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • a further related aspect of the embodiments defines a computer program product comprising computer readable code means and a computer program according to above stored on the computer readable code means.
  • the method comprises determining a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence.
  • the method also comprises determining a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • a first syntax element representing the beta parameter value and a second syntax element representing the length offset parameter value are associated to an encoded representation of the picture.
  • a related aspect of the embodiments defines a filtering control device comprising a beta parameter determining unit configured to determine a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence.
  • a length offset determining unit is configured to determine a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • the filtering control device also comprises an associating unit configured to associate a first syntax element representing the beta parameter value and a second syntax element representing the length offset parameter value to an encoded representation of the picture.
  • Another related aspect of the embodiments defines an encoder configured to encode video data of a video sequence and comprising a filtering control device according to above.
  • a further aspect of the embodiments defines a user equipment comprising an encoder according to above.
  • Still another aspect of the embodiments defines a network device being or belonging to a network node in a communication network.
  • the network device comprises an encoder and/or a decoder according to above.
  • The present embodiments enable the flexibility of applying stronger/weaker filtering, and thereby modifying more/fewer pixels from the block boundaries on those block boundaries chosen for deblocking filtering, without applying deblocking to most of the boundaries in the picture. Hence, the embodiments enable this adaptation of deblocking filtering mode/length while avoiding filtering on most of the block boundaries.
  • Fig. 1A schematically illustrates a method performed in a filtering control device according to an embodiment
  • Fig. 1B schematically illustrates a method performed in a transmitter according to an embodiment
  • Fig. 1C schematically illustrates a method performed in a receiver according to an embodiment
  • Fig. 2 is a schematic block diagram of an encoder according to an embodiment
  • Fig. 3 is a schematic block diagram of a decoder according to an embodiment
  • Fig. 4 is a schematic block diagram of a user equipment according to an embodiment
  • Fig. 5 is a schematic block diagram of a user equipment according to another embodiment
  • Fig. 6 is a schematic block diagram of a network device according to an embodiment
  • Fig. 7 is a schematic block diagram of a filtering control device according to an embodiment
  • Fig. 8 is a schematic block diagram of a computer according to an embodiment
  • Fig. 11 is a flow diagram illustrating a deblocking filtering control method according to an embodiment
  • Fig. 12 is a flow diagram illustrating an embodiment of the step of determining whether to apply deblocking filtering in Fig. 11 ;
  • Fig. 13 is a flow diagram illustrating an embodiment of the step of determining whether to apply weak or strong deblocking filtering and/or how many pixels to filter in Fig. 11 ;
  • Fig. 14 is a flow diagram illustrating an embodiment of the step of determining whether to apply weak or strong deblocking filtering in Fig. 13;
  • Fig. 15 is a flow diagram illustrating an embodiment of the step of determining how many pixels to filter in Fig. 13;
  • Fig. 16 is a schematic block diagram of a processing unit in Fig. 7 according to an embodiment
  • Fig. 17 is a flow diagram illustrating a deblocking filtering control method according to another embodiment
  • Fig. 18 is a flow diagram illustrating the step of associating the first and second syntax element in Fig. 17 according to an embodiment
  • Fig. 19 is a schematic block diagram of a filtering control device according to another embodiment
  • Fig. 20 schematically illustrates a video sequence of pictures
  • Fig. 21 schematically illustrates a data packet carrying encoded video data
  • Fig. 22 schematically illustrates an encoded representation of a picture.
  • the present embodiments generally relate to deblocking filtering and in particular to controlling deblocking filtering over a boundary of neighboring blocks of pixels in a picture.
  • The embodiments are based on the insight that prior art techniques use a single parameter beta (β) to determine whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture, see equation (3), to determine whether to apply weak or strong filtering, see equations (4) and (5), and to determine how many pixels to filter on each side of the block boundary, see equations (7) and (8).
  • Equations (4), (5), (7) and (8) use thresholds where the parameter β is divided by some number, e.g. four in equation (4), eight in equation (5) and 16/3 in equations (7) and (8).
  • Increasing the value of β will affect the filter decision in equation (3) determining whether to filter a block boundary or not.
  • In equation (3), the relevant threshold is simply the β value, i.e. not divided by any number. This means that an increase in the β value, as required in order to affect the decision according to any of equations (4), (5), (7) and (8), will have a significant impact on the threshold used in equation (3).
  • As a consequence, the decision in equation (3) will be true for most block boundaries in a picture and filtering will therefore be applied on most block boundaries. This may lead to excessive blurriness.
  • If deblocking filtering is unintentionally applied to some block boundaries, because the decision in equation (3) is true for most block boundaries, then structures present in the blocks of pixels could be removed or at least significantly suppressed. For instance, a clear edge between two pixel areas could be present close to the block boundary. It is then generally not preferred to apply deblocking filtering, since such deblocking filtering could remove or blur the clear edge, leading to visual artifacts. It is of course also computationally wasteful to apply deblocking filtering on block boundaries if the deblocking filtering does not lead to any significant quality improvement or might even cause a deterioration in visual quality.
  • Fig. 11 is a flow diagram illustrating a deblocking filtering control method performed in connection with, such as during, video decoding. The method comprises retrieving, in step S30 and based on encoded video data, a first syntax element defining a beta (β) parameter value and a second syntax element defining a length offset parameter value.
  • Step S31 comprises determining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data.
  • the determination or decision taken in step S31 is performed based at least partly on the beta parameter value.
  • Step S32 comprises determining at least one of i) whether to apply weak deblocking filtering, sometimes also denoted normal deblocking filtering, or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • the determination or decision taken in step S32 is performed based at least partly on the length offset parameter value.
  • the decision of whether to apply weak or strong deblocking filtering can be regarded as a decision of which out of two deblocking filtering modes to use.
  • Weak deblocking filtering is a deblocking filtering mode that applies a weaker filtering and modification of pixel values as compared to a strong deblocking filtering mode.
  • Strong deblocking filtering is generally preferred if there are no structures in the pixel values when traveling along a pixel line across the block boundary, i.e. the pixel values are substantially the same on both sides of the block boundary or differ little. If there is any structure in the pixel values, such as an edge between elements, then weak deblocking filtering is generally preferred since a strong deblocking filtering could remove or at least suppress such a structure.
  • method step S32 could be regarded as comprising determining how many pixels to filter on each side of the block boundary, or expressed alternatively, determining a length of a deblocking filtering. The reason being that in HEVC the strong deblocking filtering mode involves filtering three pixels in a pixel line on each side of the block boundary, whereas the weak deblocking filtering mode involves filtering either one or two pixels in the pixel line on each side of the block boundary.
  • Pixel value as used herein generally relates to a value that a pixel or sample in a block of a slice in a picture has.
  • a pixel value typically represents a color value according to some defined color format, such as RGB or, typically, luminance (luma) and chrominance (chroma).
  • the deblocking filtering as disclosed herein is in particular suitable in connection with filtering luma values.
  • the block boundary is a block boundary or block border between two neighboring or adjacent blocks of pixels in a slice 3 of a picture 2 in a video sequence 1 , see Fig. 20.
  • the picture 2 could comprise a single slice 3 or multiple, i.e. at least two, slices 3.
  • the block boundary could be a vertical block boundary as shown below for two neighboring blocks A and B positioned side by side in the picture 2.
  • the block boundary is a horizontal block boundary as shown below for two neighboring blocks A and B, where block A is positioned on top of block B in the picture 2.
  • Deblocking filtering is applied to a line of pixels, denoted pixel line herein, i.e. to a row of pixels for a vertical block boundary or to a column of pixels for a horizontal block boundary.
  • The deblocking filtering control method as disclosed in Fig. 11 hence adds a new parameter, i.e. the length offset value, which is used in order to determine whether to apply weak or strong deblocking filtering and/or how many pixels to filter.
  • the decision of which deblocking filtering mode to use and/or the length of the deblocking filtering can be made independent of changing the value of the beta parameter used in step S31 to determine whether or not to apply deblocking filtering on the block boundary.
  • the embodiments thereby enable, for instance, going from applying weak deblocking filtering to strong deblocking filtering for a pixel line on a block boundary and/or going from filtering and modifying a single pixel on each side of the block boundary in a pixel line to filtering and modifying two pixels on each side of the block boundary in the pixel line without affecting the beta parameter value and thereby without affecting the number of block boundaries on which deblocking filtering is applied in a picture.
  • the first syntax element retrieved in step S30 and defining the beta parameter value could be any syntax element in the encoded video data, i.e. bitstream, or associated with encoded video data that enables determination of the beta parameter value.
  • the first syntax element comprises the quantization parameter (QP) value used for residual coding.
  • a base QP parameter could be set at the picture level with a first delta QP parameter that can further change the base QP parameter value on the slice level.
  • two neighboring blocks of pixels in a slice in a picture can have different QP parameter values.
  • an average QP value of the two neighboring blocks is typically used to derive the beta parameter value from a look-up table, such as Table 1.
  • The first syntax element could then comprise the syntax elements defining the base QP parameter, the first delta QP parameter and the second delta QP parameter.
  • the syntax element defining the first delta QP parameter is typically signaled in a slice header in an encoded representation of a slice and with the optional second delta QP parameter signaled on coding unit (block of pixels) basis.
  • the syntax element defining the base QP parameter may also be retrieved from the slice header but is typically included in another header or data element or structure in or associated with an encoded representation of a picture. Examples of the latter include various parameter sets, such as Picture Parameter Set (PPS), Sequence Parameter Set (SPS), Video Parameter Set (VPS) and Adaptation Parameter Set (APS), and preferably PPS.
  • the second syntax element defining the length offset parameter value could be retrieved from a slice header in an encoded representation of a slice.
  • the syntax element is retrieved from a data element or structure associated with the encoded representation, such a PPS, SPS, VPS or APS.
  • the slice header preferably comprises a parameter set identifier directly or indirectly identifying the relevant parameter set.
  • a PPS can be identified by a PPS identifier in the slice header
  • an SPS can be identified by an SPS identifier in a PPS, which is identified by a PPS identifier in the slice header
  • a VPS can be identified by a VPS identifier in an SPS identified by an SPS identifier in a PPS, which is identified by a PPS identifier in the slice header.
  • the retrieval of syntax elements in step S30 could, for instance, be performed once per block boundary or once per slice in the picture.
  • The beta parameter value and/or the length offset parameter value could be reused for multiple block boundaries in the slice.
  • Retrieving the syntax elements in step S30 preferably comprises reading and decoding the syntax elements from the relevant data structure, such as slice header or parameter set.
  • the decoded values could be used directly as the beta parameter value and the length offset parameter value.
  • Alternatively, the beta parameter value and/or the length offset parameter value is calculated or otherwise determined based on decoded values. For instance, decoded values of the base QP parameter and the delta QP parameters are used to calculate a QP parameter value, which is used as a table input in a look-up table to get the relevant beta parameter value.
  • the value of the length offset parameter is defined based on the size of the block of pixels. This means that the length offset parameter value is then linked to a particular type of block boundary.
  • the second syntax element retrieved in step S30 is a syntax element defining the size of the current block of pixels. The size is then used, for instance, in a look-up table to get the value of the length offset parameter to be used for the current block of pixels.
  • the following steps S31 and S32 of Fig. 11 are preferably performed for each vertical and horizontal block boundary between two neighboring blocks present in the same slice in the picture.
  • step S31 is preferably performed for the first and fourth pixel line for a block boundary.
  • the determination of how many pixels to filter is preferably performed for each pixel line relative to a block boundary for which weak deblocking filtering has been applied.
  • step S32 involves determining how many pixels to filter based on the length offset value and this determination is made for each pixel line relative to the block boundary.
  • the decision could first be to determine, based at least partly on the length offset parameter value, whether to filter three or filter one/two pixels on each side of a block boundary in a given pixel line. This corresponds to selecting between strong and weak deblocking filtering. If weak deblocking filtering is selected for the first and/or fourth pixel line a further decision is made, based at least partly on the length offset parameter value, whether to filter one or two pixels on each side of the block boundary on the given pixel line. Strong deblocking filtering is generally not applicable to the second and third pixel lines.
  • step S32 could thereby involve determining, at least partly based on the length offset parameter value, whether to filter one or two pixels on each side of the block boundary.
  • Step S31 comprises determining whether or not to apply deblocking filtering on the block boundary based at least partly on the beta parameter value but preferably not based on the length offset parameter value.
  • the length offset parameter value is used in the decision or determination performed in step S32 but not in the decision or determination in step S31.
  • A particular embodiment of step S31 is illustrated in the flow diagram of Fig. 12. The method continues from step S30 in Fig. 11.
  • a next step S40 comprises calculating a variable d based on pixel values of pixels in a first block of pixels (block A above) and in a second, neighboring block of pixels (block B above).
  • In equation (9) and further herein, pAi represents a pixel value of a pixel in pixel line number i in a first or current block of pixels (block A) at pixel position number A relative to the block boundary, and qAi represents a pixel value of a pixel in pixel line number i in a second block of pixels (block B) at pixel position number A relative to the block boundary.
  • Equation (9) above basically corresponds to a combination of equation (2) and some of the equations referred to as (1).
  • A next step S41 comprises comparing the variable d with a threshold value, which in an embodiment corresponds to the beta parameter value, represented by β in Fig. 12. If the variable d is smaller than the beta parameter value, the method continues to step S42, which comprises determining to apply deblocking filtering on the block boundary. The method then continues to step S32 of Fig. 11. If the variable d instead is not smaller than the beta parameter value, the method continues from step S41 to step S43. This step S43 comprises determining not to apply deblocking filtering on the block boundary. In such a case, the method ends.
  • the method as shown in Fig. 12 involving steps S40 to S43 is preferably performed for each (vertical and horizontal) block boundary between neighboring blocks of pixels present in a same slice in a picture of a video sequence.
  • the decision of whether to apply deblocking filtering or not is preferably performed once as shown in Fig. 12 for each such block boundary.
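  • A minimal C sketch of this on/off decision follows. The activity measure d is computed from the first and fourth pixel lines (line indices 0 and 3) following the HEVC-draft form of this measure; the exact form of equation (9) and the array indexing convention p[k][i]/q[k][i] (pixel k positions from the boundary on line i) are assumptions made for illustration.

        #include <stdlib.h>   /* abs() */

        /* Second-derivative activity on one side of the boundary for line i. */
        static int dp(int p[4][4], int i) { return abs(p[2][i] - 2 * p[1][i] + p[0][i]); }
        static int dq(int q[4][4], int i) { return abs(q[2][i] - 2 * q[1][i] + q[0][i]); }

        /* Steps S40-S43: deblocking filtering is applied only if d < beta. */
        int apply_deblocking(int p[4][4], int q[4][4], int beta)
        {
            int d = dp(p, 0) + dq(q, 0) + dp(p, 3) + dq(q, 3);
            return d < beta;   /* 1 = filter this block boundary, 0 = skip it */
        }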
  • Step S32 of Fig. 11 comprises determining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value and the beta parameter value.
  • both the length offset parameter value and the beta parameter value are used in the decision on the deblocking filtering mode and/or the decision on the deblocking filtering length. This should be compared to step S31 , in which, as discussed in the foregoing, the decision is made based on the beta parameter value but preferably not based on the length offset parameter value.
  • Fig. 13 is a flow diagram illustrating an embodiment of step S32 using both the beta parameter value and the length offset value.
  • the method continues from step S31 in Fig. 11 and continues to step S50.
  • Step S50 comprises calculating a beta length parameter value based on the beta parameter value and the length offset parameter value.
  • BetaLength represents the beta length parameter value
  • Beta represents the beta parameter value
  • LengthOffset represents the length offset parameter value
  • Base represents a base parameter value.
  • a next step S51 comprises determining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the beta length parameter value calculated in step S50.
  • The base parameter value used in equation (10) is optional. Hence, in an embodiment the base parameter value is zero. In such a case, the beta length parameter is defined as Beta × LengthOffset.
  • the base parameter has a fixed value. The fixed value could advantageously be represented as a power of two, for example 2, 4, 8, etc. In such a case, the fixed value could be known to both the encoder and the decoder. No signaling of the base parameter value is thereby required.
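  • The calculation in step S50 can be sketched in C as follows. The clipping of ( Base + LengthOffset ) to a valid range, mentioned further below, is included as an optional step; the MIN_SCALE/MAX_SCALE bounds are illustrative values, not taken from the document.

        #define MIN_SCALE 0     /* lower bound suggested in the text */
        #define MAX_SCALE 16    /* illustrative upper bound          */

        static int Clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }

        /* Equation (10): BetaLength = Beta * ( Base + LengthOffset ), where Base
         * is a fixed value, preferably a power of two such as 2, 4 or 8.        */
        int compute_beta_length(int beta, int base, int length_offset)
        {
            int scale = Clip3(MIN_SCALE, MAX_SCALE, base + length_offset);
            return beta * scale;
        }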
  • Step S30 of Fig. 11 preferably also comprises retrieving, based on the encoded video data, a third syntax element defining the base parameter value in addition to retrieving the previously mentioned first and second syntax elements.
  • step S32 comprises determining whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary based at least partly on the length offset parameter value, preferably at least partly based on the length offset parameter value and the beta parameter value, such as based on the beta length parameter value.
  • Fig. 14 is a flow diagram illustrating such a decision on the deblocking filtering mode performed based on the beta length parameter value.
  • the method continues, in an embodiment from step S50 in Fig. 13.
  • Step S61 comprises calculating a second threshold, T2, based on the beta length parameter value, preferably defined as BetaLength >> 5.
  • The first threshold T1 could be regarded as corresponding to the threshold used in equation (4), i.e. β >> 2.
  • The embodiment uses the beta length parameter value (BetaLength).
  • In a particular embodiment, the base parameter value is defined as 2^x.
  • The second threshold T2 could be regarded as corresponding to the threshold used in equation (5), i.e. β >> 3.
  • The embodiment uses the beta length parameter value (BetaLength).
  • Step S62 comprises calculating a first variable, V1, defined as 2×( dpi + dqi ), where dpi and dqi are calculated as defined in equation (1) for the pixel line number i.
  • the first and second variables correspond to the values used in equations (4) and (5).
  • Steps S60 to S63 can be performed serially in any order or at least partly in parallel.
  • The method then continues to step S64, which compares the first variable to the first threshold value, compares the second variable to the second threshold value and preferably also compares the value | p0i - q0i | with a threshold defined based on the parameter tc.
  • step S66 comprises determining to apply weak deblocking filtering for the pixel line number i.
  • the decision whether to apply strong or weak deblocking filtering as shown in Fig. 14 is preferably a pixel line specific decision.
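  • Assuming Base = 4, which makes the shift amounts below consistent with the thresholds β >> 2 and β >> 3 of equations (4) and (5) (and with the BetaLength >> 5 threshold described above), the per-line mode decision of Fig. 14 could be sketched in C as follows. The shift used for the first threshold, the third tc-based condition and the dpi/dqi naming are assumptions for illustration.

        #include <stdlib.h>   /* abs() */

        /* Decide strong (1) or weak (0) deblocking filtering for pixel line i.
         * p[k][i] / q[k][i]: pixel k positions from the boundary on line i.    */
        int use_strong_filtering(int p[4][4], int q[4][4], int i,
                                 int beta_length, int tc)
        {
            int dpi = abs(p[2][i] - 2 * p[1][i] + p[0][i]);
            int dqi = abs(q[2][i] - 2 * q[1][i] + q[0][i]);

            int t1 = beta_length >> 4;        /* ~ beta >> 2 when Base = 4 */
            int t2 = beta_length >> 5;        /* ~ beta >> 3 when Base = 4 */

            int v1 = 2 * (dpi + dqi);
            int v2 = abs(p[3][i] - p[0][i]) + abs(q[0][i] - q[3][i]);
            int v3 = abs(p[0][i] - q[0][i]); /* tc-based condition (assumed form) */

            return (v1 < t1) && (v2 < t2) && (v3 < ((5 * tc + 1) >> 1));
        }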
  • p1i' = Clip3( p1i - 2×tc, p1i + 2×tc, ( p2i + p1i + p0i + q0i + 2 ) >> 2 )
  • p2i' = Clip3( p2i - 2×tc, p2i + 2×tc, ( 2×p3i + 3×p2i + p1i + p0i + q0i + 4 ) >> 3 )
  • q0i' = Clip3( q0i - 2×tc, q0i + 2×tc, ( p1i + 2×p0i + 2×q0i + 2×q1i + q2i + 4 ) >> 3 )
  • q1i' = Clip3( q1i - 2×tc, q1i + 2×tc, ( p0i + q0i + q1i + q2i + 2 ) >> 2 )
  • q2i' = Clip3( q2i - 2×tc, q2i + 2×tc, ( p0i + q0i + q1i + 3×q2i + 2×q3i + 4 ) >> 3 )
  • step S32 comprises determining how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value, preferably at least partly based on the length offset parameter value and the beta parameter value, such as based on the beta length parameter value.
  • Fig. 15 is a flow diagram illustrating such a decision on the deblocking filtering length performed based on the beta length parameter value.
  • the method continues, in an embodiment from step S50 in Fig. 13.
  • the decision on the deblocking filtering length is performed only if weak deblocking filtering has been selected or is pre-selected for a current pixel line.
  • the method could continue from step S50 for pixel lines for which weak deblocking filtering should be applied or, if the method as shown in Fig. 14 is used to select between weak and strong deblocking filtering, from step S66 in Fig. 14.
  • Step S70 comprises calculating a side threshold, Ts, based on the beta length parameter value, preferably defined as ( BetaLength + ( BetaLength >> 1 ) ) >> 5.
  • The side threshold Ts could be regarded as corresponding to the thresholds used in equations (7) and (8), i.e. ( ( β + ( β >> 1 ) ) >> 3 ).
  • The embodiment uses the beta length parameter value (BetaLength).
  • The variable dp corresponds to the value used in equation (7) and is calculated as defined in equation (1).
  • Steps S70 and S71 can be performed serially in any order or at least partly in parallel.
  • step S72 compares the variable dp to the side threshold. If the variable dp is smaller than the side threshold the method continues to step S73, which determines to filter and modify two pixels in the pixel line number i. If the variable dp, however, is not smaller than the side threshold the method instead continues from step S72 to step S74, which determines to filter and modify one pixel in the pixel line.
  • step S72 preferably also comprises comparing this variable dq to the side threshold. If the variable dq is smaller than the side threshold the method continues to step S73, where it is determined to filter and modify two pixels in the neighboring block of pixels on the pixel line number i. If the variable dq is not smaller than the side threshold the method instead continues to step S74 where one pixel is filtered and modified in the pixel line i in the neighboring block of pixels.
  • Step S73 then filters and modifies the values of the two pixels that are closest to the block boundary in the pixel line number i in the neighboring block of pixels.
  • the method as shown in Fig. 15 may comprise one additional, optional step that is preferably performed prior to step S70.
  • This optional step involves determining whether to apply any weak deblocking filtering at all to the pixel line number i. In a particular embodiment, this decision is based on a comparison of a delta value (Δ) and a threshold defined based on the parameter tc. Thus, weak deblocking filtering is then in this optional embodiment only applied to a pixel line number i if the absolute value of Δ is smaller than the tc-based threshold.
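  • A sketch of the per-line length decision of Fig. 15 follows, again assuming Base = 4. The dp measure is the one defined in equation (1); the q side is handled analogously with dq. The factor 10 in the optional |Δ| gate is an assumption taken from the HEVC draft, not from the text above.

        #include <stdlib.h>   /* abs() */

        /* How many pixels (1 or 2) to modify on the p side of the boundary for a
         * pixel line where weak deblocking filtering has been selected.          */
        int weak_filter_length_p_side(int dp, int beta_length)
        {
            int ts = (beta_length + (beta_length >> 1)) >> 5;   /* side threshold Ts */
            return (dp < ts) ? 2 : 1;
        }

        /* Optional gate: apply weak filtering on the line only if |delta| is
         * below a tc-derived threshold.                                          */
        int weak_filtering_enabled(int delta, int tc)
        {
            return abs(delta) < 10 * tc;
        }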
  • step S32 of Fig. 13 comprises determining whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and determining how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value, preferably based at least partly on the length offset parameter value and the beta parameter value, such as based on the beta length parameter value.
  • An implementation example of such an embodiment is basically a combination of Figs. 14 and 15.
  • An idea of the embodiments is to use a single parameter to adjust the "length" of the deblocking filtering in both weak (normal) and strong filtering modes, while at the same time being able to adjust the length of the deblocking filtering independently of changing the parameter β that determines which block boundaries are processed by deblocking filtering.
  • Embodiments relate to introduction of a new parameter indicative of a length offset, which is used to determine a beta length.
  • the beta will then be replaced by the beta length in equations (4), (5), (7) and (8).
  • Fig. 1A schematically illustrates such an embodiment performed in a filtering control device.
  • The method generally starts in step S1, where the beta length parameter value is determined as disclosed herein.
  • the beta parameter in equations (4), (5), (7) and (8) is then replaced by this determined beta length parameter in step S2.
  • Fig. 1B illustrates a method performed in a transmitter, which involves, in step S10, sending signaling according to the embodiments, i.e. transmitting the syntax elements defining the beta parameter and the length offset parameter.
  • Fig. 1C illustrates a method performed in a receiver, which involves, in step S20, receiving signaling according to the embodiments, i.e. receiving the syntax elements defining the beta parameter and the length offset parameter.
  • the "length of deblocking filtering" can be adjusted.
  • the length of the deblocking filtering is here referred to as the number of pixels from the block boundary that can be modified by deblocking filtering.
  • the length of deblocking filtering can alternatively be adjusted by the same parameter for both the strong and the weak (normal) deblocking filter.
  • A parameter to control the thresholds is signaled in, e.g., the slice header or, alternatively, in the APS or in other parts of the bitstream, such as in another parameter set, e.g. the PPS or SPS.
  • The thresholds that are used in the decisions on how many pixels are modified relative to the block boundary depend on the beta parameter. In order to increase these thresholds, i.e. to modify more pixels from the block boundary, significant changes of the beta parameter would otherwise be required.
  • the two parts i.e. the number of pixels to be modified and the signaling, are independent of each other and can be used independently or in combination.
  • the details on the particular implementation are in the following detailed description and the embodiments.
  • The proposed embodiments allow adjusting the subjective quality of deblocking filtering, for instance, on a sequence and/or slice basis. This gives the content provider the possibility to adjust the deblocking filtering strength or the length of deblocking filtering to a particular sequence.
  • the deblocking filtering strength can also vary on the frame or picture basis.
  • BetaLength = Beta × ( Base + LengthOffset ) (10), wherein Beta is the same as β and Base is preferably a fixed value that can be represented as a power of two, for example 2, 4, 8 etc. Base could be signaled in the bitstream or be fixed for the encoder and decoder. Then the values of the respective thresholds controlling the length of deblocking filtering are obtained by using the value of BetaLength instead of Beta, see Fig. 1A, and additionally dividing it by a value equal to the value of Base. In this way, the value of BetaLength is approximately equal to the value of Beta when LengthOffset is equal to 0. Then the operation required to obtain the values of BetaLength is basically multiplying the beta parameter value with the base parameter value and then dividing it by the same parameter when calculating the thresholds in equations (4), (5), (7) and (8).
  • a Base value which is greater than 1 may be used in order to have finer granularity of threshold values when sending integer values of LengthOffset.
  • the value of Base equal to 2 enables the granularity of the thresholds to be half of the Beta value
  • using Base value of 4 enables a quarter value granularity, etc.
  • The value ( Base + LengthOffset ) can also be clipped in the decoder in order to ensure that the value is in the range (MinValue, MaxValue), where MinValue can be, for example, 0 and MaxValue is a defined maximum value.
  • The value of BetaLength can be used for calculation of all the mentioned thresholds in equations (4), (5), (7) and (8), or for a subset of these thresholds. An example of the latter case could be to use BetaLength, for instance, for the thresholds related to application of the strong filter, i.e. equations (4) and (5). Alternatively, it can be used for the thresholds used in equations (7) and (8), related to choosing between filtering one or two pixels from the block boundary in weak (normal) filtering.
  • the presented threshold can also be used together with some other conditions, for example, for intra blocks only or with boundary strength equal to some particular value.
  • In such embodiments, the filtering control method is limited to be used for block boundaries of intra blocks only or for block boundaries with a boundary strength equal to or larger than the particular value.
  • BetaLength with some particular value can also be linked to conditions like the size of the block of pixels.
  • the particular value of BetaLength is preferably increasing for increasing block sizes. This can be achieved by defining the value of the length offset parameter to be dependent on the size of the block of pixels.
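  • As an illustration of linking the length offset to the block size, a hypothetical mapping could look like the following; the specific offsets are invented for the example and are not taken from the document.

        /* Hypothetical mapping from block size to LengthOffset: larger blocks get
         * a larger offset, so BetaLength and the deblocking length increase.     */
        int length_offset_for_block_size(int block_size)
        {
            switch (block_size) {
            case 8:  return 0;
            case 16: return 1;
            case 32: return 2;
            default: return 3;   /* 64x64 and larger */
            }
        }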
  • the value of LengthOffset can be either signaled in the bitstream or be hardcoded. In the former case, different values of LengthOffset are used for different boundary strength or when certain conditions are met. In the latter case, the second syntax element does not need to be signaled in the bitstream.
  • A deblocking length offset (LengthOffset) is preferably sent in the bitstream, see Figs. 1B and 1C.
  • LengthOffset can be sent in the slice header, the APS or in the other parts of the bitstream.
  • LengthOffset can alternatively be signaled in the SPS or PPS. If LengthOffset is signaled in PPS, the same PPS can be used for a particular sequence picture type. Sending LengthOffset in the SPS provides modifications of the offsets for the video sequence.
  • An alternative embodiment is to signal a value of LengthOffset using unsigned variable length code (VLC) values.
  • A value A representing ( Base + LengthOffset ) can be transmitted in the bitstream, e.g. in the slice header or APS.
  • A positive integer value for ( Base + LengthOffset ) can always be signaled.
  • The BetaLength is then equal to Beta × A instead of Beta × ( Base + LengthOffset ).
  • a value of the length offset parameter with some multiplier or divisor can be used, e.g. length_offset_div2.
  • the resulting threshold values should be multiplied by the respective bit depth scaling factor.
  • The beta value should be multiplied by the respective bit depth scaling factor.
  • An additional parameter is sent in the bitstream to control whether strong or weak filtering is done, as well as controlling how many pixels on each side should be filtered. This parameter is a complement to the existing parameter that, in the embodiments, is used to decide whether to filter a block boundary or not.
  • The decision whether to filter a block or not is based on the old parameter, and the decision whether strong or weak filtering should be done is based on a new parameter.
  • the decision how many pixels to filter on each side of a block boundary in the weak (normal) filter operation is also based on this new parameter.
  • a decoder is, according to this embodiment, configured to perform the following steps.
  • the decoder receives video data and parses syntax elements that control the deblocking filtering process.
  • the syntax elements include one parameter A and one parameter B.
  • the decoder decodes the picture.
  • the decoder performs deblocking filter operations on the decoded picture.
  • The decision whether to use weak or strong filtering is based partly on A.
  • The decisions on how many pixels to filter on each side of a block boundary are based partly on A.
  • The decision whether to filter a block or not is based partly on B.
  • In an alternative, the decision whether to use weak or strong filtering is based partly on A and B, and the decisions on how many pixels to filter on each side of a block boundary are based partly on A and B.
  • The new parameter from the first embodiment is here called A, and a parameter C is calculated from it and the parameter B as:
  • C = B × ( X + A )
  • X is a predetermined, fixed value.
  • The decision whether to use weak or strong filtering is based on C. How many pixels to filter on each side of a block boundary is also based on C.
  • the decoder receives video data and parses syntax elements that control the deblocking filtering process.
  • the syntax elements include one parameter A and one parameter B.
  • The decoder decodes the picture.
  • the decoder performs deblocking filter operations on the decoded picture.
  • The decision whether to use weak or strong filtering is based partly on C.
  • The decisions on how many pixels to filter on each side of a block boundary are based partly on C.
  • The decision whether to filter a block or not is based partly on B.
  • The parameter C from the second embodiment, or the parameter C/X, or C >> log2(X), is used instead of β in equations (4), (5), (7) and (8).
  • the decoder receives video data and parses syntax elements that control the deblocking filtering process.
  • the syntax elements include one parameter A and one parameter B.
  • the decoder decodes the picture.
  • the decoder performs deblocking filter operations on the decoded picture.
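  • Pulling the decoder embodiments together, the following C sketch shows how the parsed parameters A and B can be combined into C = B × ( X + A ) and then used in the different decisions. The value X = 4, the shift amounts and the variable names are illustrative assumptions; d, v1, v2, dp and dq are the measures described earlier.

        #define X 4   /* predetermined fixed value, assumed to be a power of two */

        /* A: parsed length offset parameter, B: parsed beta-like parameter. */
        void deblocking_decisions(int A, int B, int d, int v1, int v2,
                                  int dp, int dq,
                                  int *filter_boundary, int *strong,
                                  int *two_pixels_p, int *two_pixels_q)
        {
            int C = B * (X + A);
            int ts = (C + (C >> 1)) >> 5;

            *filter_boundary = (d < B);             /* on/off: based on B only     */
            *strong          = (v1 < (C >> 4)) &&   /* mode: based on C, i.e. C/X  */
                               (v2 < (C >> 5));     /*       used in place of beta */
            *two_pixels_p    = (dp < ts);           /* length: based on C          */
            *two_pixels_q    = (dq < ts);
        }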
  • The syntax code below provides an example of deblocking parameter signaling in the slice header. if( deblocking_filter_control_present_flag ) {
  • the syntax code below provides an example of deblocking parameter signaling in a parameter set, here represented by an APS.
  • m_dfLenghtOffset represents the length offset parameter value and is preferably obtained from the syntax element df_length_offset in the syntax code above.
  • iIndexB is an index used in a look-up table (betatable_8x8) and BitdepthScale is derived from a bit depth as signaled in the PPS or SPS.
  • the previously mentioned side threshold is then preferably calculated as:
  • Equations (4) and (5) become, in this embodiment, equal to (12) and (13) respectively.
  • Equations (4) and (5) in this embodiment are equal to (15) and (16) respectively.
  • the parameters in the form described in one of the previous embodiments can be put into the adaptation parameter set.
  • the parameters are applied to the whole frame or picture rather than to one slice.
  • Fig. 17 is a flow diagram illustrating a deblocking filtering control method performed during video encoding.
  • the method starts in step S80 where a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence is determined.
  • a next step S81 comprises determining a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • a next step S82 associates a first syntax element representing or defining the beta parameter value determined in step S80 and a second syntax element representing or defining the length offset parameter value determined in step S81 to an encoded representation of a picture.
  • In an embodiment, the length offset parameter value is determined in step S81 based on pixel values in the two neighboring blocks of pixels. In another embodiment, the length offset parameter is determined in step S81 based on pixel values in a current slice of the picture. Thus, in this embodiment a same length offset parameter value is used for all block boundaries between neighboring blocks of pixels in the slice. Therefore, the length offset parameter value is, in this embodiment, estimated based on, typically, all pixel values adjacent to block boundaries in the slice. For instance, it can be an average of these pixel values that is used to determine the length offset parameter value. For instance, if the pixels in the two blocks represent a rather smooth background or area, i.e. without any significant structures close to the block boundary, stronger or longer deblocking filtering is generally appropriate.
  • The length offset parameter value is set so that V1 < T1 and V2 < T2 for the first and fourth pixel line in Fig. 14 and dp < Ts (and typically dq < Ts) for the second and third pixel line in Fig. 15.
  • The variables V1, V2, dp and dq are dictated by the pixel values in the blocks of pixels, whereas the value of the length offset parameter affects the values of the thresholds T1, T2 and Ts.
  • Conversely, if the pixel values comprise a structure that should be preserved, the length offset parameter is preferably set to a value so that at least one of the variables V1 and V2 is not smaller than its associated threshold T1 or T2, which are defined at least partly based on the length offset parameter value.
  • Correspondingly, the length offset parameter is preferably set to a value so that dp (and dq) is not smaller than Ts for the pixel line(s) which comprise the structure.
  • the length offset parameter value is determined based on at least one encoding or slice parameter used for encoding the picture. Examples of such parameters based on which the length offset parameter value can be determined include the quantization parameter (QP) or the lambda parameter used in the rate-distortion optimization of the encoded picture.
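  • On the encoder side, a very simple and purely illustrative policy could derive the length offset from the slice QP, increasing the offset, and hence the deblocking length, at higher QPs where blocking artifacts tend to be stronger. The breakpoints below are invented for the example and are not taken from the document.

        /* Illustrative encoder-side choice of LengthOffset from the slice QP. */
        int choose_length_offset(int slice_qp)
        {
            if (slice_qp < 27) return 0;   /* low QP: default filtering length     */
            if (slice_qp < 37) return 1;   /* medium QP: slightly longer filtering */
            return 2;                      /* high QP: favour stronger deblocking  */
        }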
  • The encoding of a slice in a picture of a video sequence generates an encoded representation 20 of the slice comprising a slice header 21 and slice data 22, as shown in Fig. 22.
  • The encoded representation 20 is output from the encoding process as a so-called Network Abstraction Layer (NAL) unit 11, as shown in Fig. 21.
  • The first part of the NAL unit 11 is a header that contains an indication of the type of data in the NAL unit 11.
  • The remaining part of the NAL unit 11 contains payload data in the form of the slice header 21 and slice data 22.
  • The NAL unit 11 may then be added with headers 12 to form a data packet 10 that can be transmitted as a part of a bitstream from the encoder to the decoder.
  • For instance, Real-time Transport Protocol (RTP), User Datagram Protocol (UDP) and Internet Protocol (IP) headers 12 could be added to the NAL unit 11.
  • This form of packetization of NAL units 11 merely constitutes an example in connection with video transport. Other approaches of handling NAL units 11 , such as file format, MPEG-2 transport streams, MPEG-2 program streams, etc. are possible.
  • In an embodiment of step S82 in Fig. 17, applicable to the generation of an encoded representation 20 of a picture comprising at least one slice header 21 and encoded video data represented by the slice data 22 in Fig. 22, the first syntax element and the second syntax element are inserted into a slice header 21 of the at least one slice header 21.
  • the encoded representation of the picture itself carries the syntax elements defining and enabling determination of the beta parameter value and the length offset parameter value.
  • Fig. 18 is a flow diagram illustrating another embodiment of step S82 in Fig. 17. The method continues from step S81 in Fig. 17 and continues to step S90.
  • Step S90 comprises inserting the first syntax element and the second syntax element into a parameter set associated with the video sequence.
  • the syntax elements could be inserted into an APS, PPS, SPS or VPS. It is generally preferred to include the syntax elements in the same parameter set but this is not necessary.
  • For instance, the first syntax element could be included in one of an APS, PPS, SPS or VPS with the second syntax element in another of the APS, PPS, SPS or VPS. It is in fact possible to distribute, for instance, the first syntax element, which could comprise multiple syntax element parameters, such as the previously mentioned base QP parameter and delta QP parameters, among multiple parameter sets.
  • a next step S91 comprises inserting a parameter set identifier into a slice header 21 of the at least one slice header 21 in the encoded representation 20 of the picture.
  • This parameter set identifier enables identification of the parameter set into which the first and second syntax elements were inserted in step S90.
  • the parameter set identifier could directly identify the relevant parameter set, such as an APS identifier or PPS identifier.
  • the parameter set identifier identifies a first parameter set, such as PPS, which in turn comprises a second parameter set identifier identifying a second parameter set, such as SPS, which comprises the first or second syntax elements or comprises a third parameter set identifier identifying a third parameter set, such as VPS, which comprises the first or second syntax elements.
  • step S91 optionally comprises inserting multiple parameter set identifiers into the slice header.
  • The embodiments of step S82 may be combined. Hence, it is possible to distribute the first and second syntax elements among a slice header and at least one parameter set.
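  • The sketch below illustrates how a decoder could follow such a parameter set identifier chain when retrieving the two syntax elements; the structures and field names are hypothetical, and placing the first syntax element in the SPS and the second in the VPS is merely one possible distribution.

        /* Hypothetical parameter set structures (not actual HEVC syntax). */
        typedef struct { int vps_id; int length_offset_syntax; } Vps;
        typedef struct { int sps_id; int vps_id; int beta_syntax; } Sps;
        typedef struct { int pps_id; int sps_id; } Pps;
        typedef struct { int pps_id; } SliceHeader;
        typedef struct { const Vps *vps; const Sps *sps; const Pps *pps; } ParamSets;

        /* Follow the identifier chain slice header -> PPS -> SPS -> VPS. */
        static void resolve_syntax_elements(const SliceHeader *sh, const ParamSets *ps,
                                            int *beta_syntax, int *length_offset_syntax)
        {
            const Pps *pps = &ps->pps[sh->pps_id];
            const Sps *sps = &ps->sps[pps->sps_id];
            const Vps *vps = &ps->vps[sps->vps_id];
            *beta_syntax          = sps->beta_syntax;          /* first syntax element  */
            *length_offset_syntax = vps->length_offset_syntax; /* second syntax element */
        }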
  • Fig. 7 is a schematic block diagram of a filtering control device 100 according to an embodiment.
  • the filtering control device 100 comprises a determining unit 110, also referred to as determiner, determining means or module.
  • the determining unit 110 is configured to determine a beta parameter value from a first syntax element retrieved based on encoded video data and a length offset parameter value from a second syntax element retrieved based on the encoded video data.
  • the determined beta parameter value and the length offset parameter value are used by a connected processing unit 120, also referred to as a processor or processing means or module.
  • the processing unit 120 is configured to determine whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data.
  • the processing unit 120 performs this determination at least partly based on the beta parameter value from the determining unit 110.
  • the processing unit 120 is also configured to determine at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary, and to perform this determination at least partly based on the length offset parameter value from the determining unit 110.
  • the processing unit 120 is configured to determine whether or not to apply deblocking filtering on the block boundary based at least partly on the beta parameter value but not based on the length offset parameter value.
  • the processing unit 120 is configured to determine at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value and the beta parameter value.
  • Fig. 16 is a schematic block diagram illustrating optional units of the processing unit 120 of the filtering control device 100 in Fig. 7.
  • the processing unit 120 comprises a beta length calculator 121, also referred to as beta length calculating unit, means or module.
  • the beta length calculator 121 is configured to calculate a beta length parameter value based on the beta parameter value and the length offset parameter value.
  • the beta length calculator 121 is configured to calculate the beta length parameter value based on, preferably equal to, Beta x ( Base + LengthOffset ).
  • the processing unit 120 is preferably configured to determine at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the beta length parameter value.
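  • A minimal sketch of this calculation is shown below, assuming Base = 2^X (for example X = 2, giving Base = 4); the function and variable names are illustrative only.

        /* BetaLength = Beta x ( Base + LengthOffset ), with Base = 2^X. */
        static int compute_beta_length(int beta, int length_offset, int x)
        {
            int base = 1 << x;
            return beta * (base + length_offset);
        }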
  • the processing unit 120 optionally comprises a d variable calculator 128, also referred to as a d variable calculating unit, means or module.
  • the d variable calculator 128 is configured to calculate a variable d for the block boundary between the two neighboring blocks of pixels based on pixel values in the first and fourth pixel lines in the two neighboring blocks of pixels.
  • the processing unit 120 is, in this embodiment, configured to determine to apply deblocking filtering on the block boundary if the variable d is smaller than the beta parameter value and otherwise determine not to apply deblocking filtering on the block boundary. In an embodiment, the processing unit 120 is configured to determine whether to apply strong or weak deblocking filtering for a pixel line number i crossing the block boundary between the two neighboring blocks of pixels. The processing unit 120 is configured to perform this determination based at least partly on the length offset parameter value, preferably based at least partly on the length offset parameter value and the beta parameter value and more preferably at least partly based on the beta length parameter value.
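  • The sketch below illustrates this on/off decision for one block boundary; the text above only states that d is computed from the first and fourth pixel lines, so the particular combination of second-derivative terms (dp0 + dq0 + dp3 + dq3, as in HEVC) and the pixel indexing convention are assumptions of this sketch.

        #include <stdlib.h>

        /* |x2 - 2*x1 + x0| for one pixel line on one side of the boundary,
         * where line[k] is the k-th pixel counted from the boundary. */
        static int second_derivative(const int *line)
        {
            return abs(line[2] - 2 * line[1] + line[0]);
        }

        /* p[i][k] / q[i][k]: k-th pixel of line i left/right of the boundary. */
        static int apply_deblocking(int p[4][4], int q[4][4], int beta)
        {
            int d = second_derivative(p[0]) + second_derivative(q[0])
                  + second_derivative(p[3]) + second_derivative(q[3]);
            return d < beta;   /* 1 = apply deblocking filtering on this boundary */
        }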
  • the processing unit 120 preferably comprises a first threshold calculator 124, a second threshold calculator 125, a first variable calculator 126 and a second variable calculator 127, which are also referred to as first threshold calculating unit, means or module, second threshold calculating unit, means or module, first variable calculating unit, means or module and second variable calculating unit, means or module.
  • the first threshold calculator 124 is configured to calculate a first threshold based on the beta length parameter value, preferably defined as BetaLength » (2+X), wherein the base parameter, Base, has a value of 2^X.
  • the first threshold is calculated as BetaLength » 4 by the first threshold calculator 124.
  • the second threshold calculator 125 is correspondingly configured to calculate a second threshold based on the beta length parameter value, preferably defined as BetaLength » (3+X), such as BetaLength » 5.
  • the first variable calculator 126 preferably calculates the first variable for the current pixel line number i defined as 2x( | p2,i - 2xp1,i + p0,i | + | q2,i - 2xq1,i + q0,i | ).
  • the second variable calculator 127 is correspondingly configured to calculate a second variable for the current pixel line number i based on pixel values of pixels present in the first block and pixels present in the second block divided by the block boundary.
  • the second variable calculator 127 preferably calculates the second variable defined as | p3,i - p0,i | + | q0,i - q3,i |.
  • the processing unit 120 is, in this embodiment, preferably configured to determine to apply strong deblocking filtering for the pixel line number i if the first variable is smaller than the first threshold, the second variable is smaller than the second threshold and, optionally but preferably, | p0,i - q0,i | is smaller than a threshold derived from the tC parameter, such as ( 5xtC + 1 ) » 1.
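  • A sketch of the per-line strong/weak decision using the BetaLength-based thresholds described above is given below; the third condition, comparing | p0,i - q0,i | against a tC-based threshold, mirrors the HEVC-style strong filter check and is an assumption here since its exact form is not spelled out above.

        #include <stdlib.h>

        /* p[k] / q[k]: k-th pixel of line i on each side of the boundary. */
        static int use_strong_filter(const int p[4], const int q[4],
                                     int beta_length, int x, int tc)
        {
            int thr1 = beta_length >> (2 + x);              /* first threshold  */
            int thr2 = beta_length >> (3 + x);              /* second threshold */
            int dp_i = abs(p[2] - 2 * p[1] + p[0]);
            int dq_i = abs(q[2] - 2 * q[1] + q[0]);
            int first_var  = 2 * (dp_i + dq_i);
            int second_var = abs(p[3] - p[0]) + abs(q[0] - q[3]);
            return first_var  < thr1 &&
                   second_var < thr2 &&
                   abs(p[0] - q[0]) < ((5 * tc + 1) >> 1);  /* assumed tC check */
        }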
  • the processing unit 120 optionally comprises a side threshold calculator 122 and a dp variable calculator 123, also referred to as side threshold calculating unit, means or module and dp variable calculating unit, means or module.
  • the side threshold calculator 122 is configured to calculate a side threshold based on the beta length parameter value. In a particular embodiment the side threshold calculator 122 is configured to calculate the side threshold as ( BetaLength + ( BetaLength » 1 ) ) » (3+X), preferably as ( BetaLength + ( BetaLength » 1 ) ) » 5.
  • the pixel line is a line for which the processing unit 120 has determined to apply weak deblocking filtering.
  • the dp variable calculator 123 is configured to calculate the variable dp based on pixel values of pixels present in the first (current) block of the two blocks of pixels divided by the block boundary.
  • the processing unit 120 is, in this embodiment, preferably configured to determine to filter and modify two pixels in the pixel line number i if the variable dp is smaller than the side threshold and otherwise determine to filter and modify one pixel in the pixel line.
  • In the former case, the two pixels are preferably the two pixels in pixel line number i that are closest to the block boundary, whereas in the latter case the one pixel is the pixel closest to the block boundary in the pixel line number i.
  • the dp variable calculator 123 is also configured to calculate a variable dq based on pixel values of pixels present in the pixel line number i in the second block of the two blocks of pixels divided by the block boundary.
  • the processing unit 120 is then preferably configured to determine to filter and modify two pixels in the pixel line number i in the second block if the variable dq is smaller than the side threshold and otherwise determine to filter and modify one pixel in the pixel line number i in the second block.
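  • A sketch of this per-side decision for a weakly filtered pixel line is shown below; the dp/dq activity measures are assumed to be the per-line second-derivative terms used in the other decisions.

        /* Side threshold derived from BetaLength, with Base = 2^X. */
        static int side_threshold(int beta_length, int x)
        {
            return (beta_length + (beta_length >> 1)) >> (3 + x);
        }

        /* Returns how many pixels to filter and modify on one side of the
         * boundary (activity = dp for the first block, dq for the second). */
        static int pixels_to_filter_on_side(int activity, int beta_length, int x)
        {
            return activity < side_threshold(beta_length, x) ? 2 : 1;
        }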
  • the processing unit 120 is configured to determine both i) whether to apply weak or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value, preferably based at least partly on the length offset parameter value and the beta parameter value, more preferably based at least partly on the beta length parameter value.
  • the processing unit 120 may contain all the units 121-128 as shown in Fig. 16.
  • the filtering control device 100 implements the functions of the previously disclosed embodiments, such as the first to seventh embodiments, or a combination thereof, by the determining unit 110, which is configured to determine, in a particular embodiment, BetaLength.
  • This BetaLength is processed by the processing unit 120.
  • Fig. 19 is a schematic block diagram of a filtering control device 200 according to another embodiment.
  • This filtering control device 200 is in particular configured to be implemented within or connected to an encoder.
  • the filtering control device 200 comprises a beta parameter determining unit 210, also referred to as a beta parameter determiner or beta parameter determining means or module.
  • the beta parameter determining unit 210 is configured to determine a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence.
  • a length offset determining unit 220 also referred to as a length offset determiner or length offset determining means or module, is configured to determine a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • the filtering control device 200 also comprises an associating unit 230, also referred to as associator or associating means or module, configured to associate a first syntax element representing or defining the beta parameter value and a second syntax element representing or defining the length offset parameter value to an encoded representation of the picture.
  • the length offset determining unit 220 is configured to determine the length offset parameter value based on pixel values in the two blocks of pixels or based on pixel values close to block boundaries in the slice as previously disclosed herein. In another alternative or additional embodiment the length offset determining unit 220 is configured to determine the length offset parameter value based on at least one encoding or slice parameter used for encoding the picture as previously disclosed herein.
  • the associating unit 230 is, in an embodiment, configured to insert the first and second syntax elements into a slice header of the encoded representation of the picture. In another embodiment the associating unit 230 is configured to insert first and second syntax elements into a parameter set associated with the video sequence and insert a parameter set identifier enabling identification of the parameter set into a slice header of the encoded representation of the picture. The associating unit 230 may alternatively be configured to distribute the first and second syntax elements between different parameter sets or between a slice header and a parameter set.
  • the filtering control device 200 of Fig. 19 can, in an embodiment, be viewed as an implementation example of the filtering control device 100 of Fig. 7. In such a case, the determining unit 110 is configured to perform the operations of the beta parameter determining unit 210 and the length offset determining unit 220, whereas the processing unit 120 is configured to perform the operations of the associating unit 230.
  • the filtering control device 100, 200 of Figs. 7, 19 with their included units 110-120 (and optional units 121-128), 210-230 could be implemented in hardware.
  • There are various circuitry elements that can be used and combined to achieve the functions of the units 110-120, 210-230 of the filtering control device 100, 200. Such variants are encompassed by the embodiments.
  • Particular examples of hardware implementation of the filtering control device 100, 200 are implementation in digital signal processor (DSP) hardware and integrated circuit technology, including both general-purpose electronic circuitry and application-specific circuitry.
  • the filtering control device 100, 200 described herein could alternatively be implemented e.g. by one or more of a processing unit 72 in a computer 70 and adequate software with suitable storage or memory therefor, a programmable logic device (PLD) or other electronic component(s) as shown in Fig. 8.
  • Fig. 8 schematically illustrates an embodiment of a computer 70 having a processing unit 72, such as a DSP (Digital Signal Processor) or CPU (Central Processing Unit).
  • the processing unit 72 can be a single unit or a plurality of units for performing different steps of the method described herein.
  • the computer 70 also comprises an input/output (I/O) unit 71 for receiving recorded or generated video frames or encoded video frames and outputting encoded video frames or decoded video data.
  • the I/O unit 71 has been illustrated as a single unit in Fig. 8 but can likewise be in the form of a separate input unit and a separate output unit.
  • the computer 70 comprises at least one computer program product 73 in the form of a nonvolatile memory, for instance an EEPROM (Electrically Erasable Programmable Read-Only Memory), a flash memory or a disk drive.
  • the computer program product 73 comprises a computer program 74, which comprises code means which when run on or executed by the computer 70, such as by the processing unit 72, causes the computer 70 to perform the steps of the method described in the foregoing in connection with Figs. 1A-1C, 11-15, 17-18.
  • the code means in the computer program 74 comprises a module 310 configured to implement embodiments as disclosed herein or combinations thereof. This module 310 essentially performs the steps of the flow diagrams in Figs. 1A-1C, 11-15, 17-18 when run on the processing unit 72.
  • When the module 310 is run on the processing unit 72, it corresponds to the units 110-120, 210-230 of Figs. 7 and 19.
  • the computer program 74 is a computer program 74 for deblocking filtering control and comprises code means which when run on the computer 70 causes the computer 70 to retrieve, based on encoded video data, a first syntax element defining a beta parameter value and a second syntax element defining a length offset parameter value.
  • the code means also causes the computer 70 to determine, based at least partly on the beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data and determine, based at least partly on the length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • the computer program 74 is a computer program 74 for deblocking filtering control and comprises code means which when run on the computer 70 causes the computer 70 to determine a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence.
  • the code means also causes the computer 70 to determine a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary, and associate a first syntax element representing the beta parameter value and a second syntax element representing the length offset parameter value to an encoded representation of the picture.
  • An embodiment also relates to a computer program product 73 comprising computer readable code means and a computer program 74 as defined according to any of the embodiments above stored on the computer readable code means.
  • the filtering control device 200 of Fig. 19 is preferably implemented or arranged in an encoder configured to encode video data of a video sequence.
  • Fig. 2 is a schematic block diagram of an encoder 40 for encoding a block of pixels in a video frame or picture of a video sequence according to an embodiment.
  • a current block of pixels is predicted by performing a motion estimation by a motion estimator 50 from an already provided block of pixels in the same frame or in a previous frame.
  • the result of the motion estimation is a motion or displacement vector associated with the reference block, in the case of inter prediction.
  • the motion vector is utilized by a motion compensator 50 for outputting an inter prediction of the block of pixels.
  • An intra predictor 49 computes an intra prediction of the current block of pixels.
  • the outputs from the motion estimator/compensator 50 and the intra predictor 49 are input in a selector 51 that either selects intra prediction or inter prediction for the current block of pixels.
  • the output from the selector 51 is input to an error calculator in the form of an adder 41 that also receives the pixel values of the current block of pixels.
  • the adder 41 calculates and outputs a residual error as the difference in pixel values between the block of pixels and its prediction.
  • the error is transformed in a transformer 42, such as by a discrete cosine transform, and quantized by a quantizer 43 followed by coding in an encoder 44, such as an entropy encoder.
  • the estimated motion vector is brought to the encoder 44 for generating the coded representation of the current block of pixels.
  • the transformed and quantized residual error for the current block of pixels is also provided to an inverse quantizer 45 and inverse transformer 46 to retrieve the original residual error.
  • This error is added by an adder 47 to the block prediction output from the motion compensator 50 or the intra predictor 49 to create a reference block of pixels that can be used in the prediction and coding of a next block of pixels.
  • This new reference block is first processed by a filtering control device 200 in order to control any filtering that is applied to the reference block to combat any artifact.
  • the processed new reference block is then temporarily stored in a frame buffer 48, where it is available to the intra predictor 49 and the motion estimator/compensator 50.
  • the filtering control device 100 of Fig. 7 is preferably implemented or arranged in a decoder configured to decode encoded video data of a video sequence.
  • Fig. 3 is a corresponding schematic block diagram of a decoder 60 comprising a filtering control device 100 according to any of the embodiments or in combinations thereof.
  • the decoder 60 comprises a decoder 61, such as an entropy decoder, for decoding an encoded representation of a block of pixels to get a set of quantized and transformed residual errors. These residual errors are dequantized in an inverse quantizer 62 and inverse transformed by an inverse transformer 63 to get a set of residual errors.
  • the resulting decoded block of pixels output from the adder 64 is input to a filtering control device 100 in order to control any filter that is applied to combat any artifacts.
  • the filtered block of pixels is output from the decoder 60 and is furthermore preferably temporarily provided to a frame buffer 65 and can be used as a reference block of pixels for a subsequent block of pixels to be decoded.
  • the frame buffer 65 is thereby connected to the motion estimator/compensator 67.
  • the output from the adder 64 is preferably also input to the intra predictor 66 to be used as an unfiltered reference block of pixels.
  • the filtering control device 100, 200 controls filtering in the form of so called in-loop filtering.
  • the filtering control device 100 is arranged to perform so called post-processing filtering.
  • the filtering control device 100 operates on the output frames outside of the loop formed by the adder 64, the frame buffer 65, the intra predictor 66, the motion estimator/compensator 67 and the selector 68.
  • No filtering and filter control is then typically done at the encoder although, in principle, the encoder can still estimate the length offset parameter and signal it with some non-normative means, e.g. in a non-normative SEI message.
  • the embodiments can be used in an encoder 40 and/or a decoder 60 or completely outside the coding loop as a post filter.
  • the methods of the embodiments are performed in a filtering control device 100, 200 which can be located in an encoder 40 or a decoder 60 as schematically illustrated in Figs. 2 and 3.
  • Figs. 2 and 3 illustrate the example when the method is performed inside the coding loop.
  • the decoder 60 with a filtering control device 100 may be implemented in a user equipment or media terminal 80 as shown in Fig. 5.
  • Fig. 5 is a schematic block diagram of a user equipment or media terminal 80 housing a decoder 60 with a filtering control device.
  • the user equipment 80 can be any device having media decoding functions that operates on an encoded video stream of encoded video frames to thereby decode the video frames and make the video data available. Non-limiting examples of such devices include mobile telephones and other portable media players, tablets, desktops, notebooks, personal video recorders, multimedia players, video streaming servers, set-top boxes, TVs, computers, decoders, game consoles, etc.
  • the user equipment 80 comprises a memory 84 configured to store encoded video frames or pictures. These encoded video frames or pictures can have been generated by the user equipment 80 itself. Alternatively, the encoded video frames or pictures are generated by some other device and wirelessly transmitted or transmitted by wire to the user equipment 80.
  • the user equipment 80 then comprises a transceiver (transmitter and receiver) or input and output port 82 to achieve the data transfer.
  • the encoded video frames or pictures are brought from the memory 84 to a decoder 60, such as the decoder illustrated in Fig. 3.
  • the decoder 60 comprises a filtering control device 100 according to embodiments.
  • the decoder 60 then decodes the encoded video frames or pictures into decoded video frames or pictures.
  • the decoded video frames or pictures are provided to a media player 86 that is configured to render the decoded video frames into video data that is displayable on a display or screen 88 of or connected to the user equipment 80.
  • the user equipment 80 has been illustrated as comprising both the decoder 60 and the media player 86, with the decoder 60 implemented as a part of the media player 86.
  • Distributed implementations, where the decoder 60 and the media player 86 are provided in two physically separated devices, are also possible and within the scope of user equipment 80 as used herein.
  • the display 88 could also be provided as a separate device connected to the user equipment 80, where the actual data processing is taking place.
  • the encoder 40 with a filtering control device 200 may be implemented in a user equipment or media terminal 80 as shown in Fig. 4.
  • Fig. 4 illustrates another embodiment of a user equipment 80 that comprises an encoder 40, such as the encoder of Fig. 2, comprising a filtering control device according to the embodiments.
  • the encoder 40 is then configured to encode video frames or pictures received by the I/O unit 82 and/or generated by the user equipment 80 itself.
  • the user equipment 80 preferably comprises a media engine or recorder, such as in the form of or connected to a (video) camera.
  • the user equipment 80 may optionally also comprise a media player 86, such as a media player 86 with a decoder and filtering control device according to the embodiments, and a display 88.
  • the encoder 40 and/or decoder 60 may be implemented in a network device 30 being or belonging to a network node in a communication network 32 between a sending unit 34 and a receiving user equipment 36.
  • a network device 30 may be a device for converting video according to one video coding standard to another video coding standard, for example, if it has been established that the receiving user equipment 36 is only capable of or prefers another video coding standard than the one sent from the sending unit 34.
  • the network device 30 can be in the form of or comprised in a radio base station, a Node-B or any other network node in a communication network 32, such as a radio-based network.
  • a transmitter associated with the encoder is provided for signaling the parameters according to embodiments above. Accordingly, a receiver is provided for receiving the signaled parameters. The received parameters are used by the decoder when decoding the bit stream. Thus, the receiver and the transmitter respectively implement the methods shown in Figs. 1B and 1C.
  • the embodiments above apply to a decoder, an encoder and any element that operates on a bitstream, such as a network-node or a Media Aware Network Element.
  • the encoder may for example be located in a transmitter in a video camera in e.g. a mobile device.
  • the decoder may for example be located in a receiver in a video camera or any other device for displaying, decoding or transcoding a video stream.
  • the embodiments are not limited to HEVC but may be applied to any extension of HEVC such as a scalable extension or multiview extension or to a different video codec.
  • Slice header syntax excerpt (syntax element followed by its descriptor):

        slice_sao_interleaving_flag                                  u(1)
        slice_sample_adaptive_offset_flag                            u(1)
        if( slice_sao_interleaving_flag && ... )
            ...
        slice_adaptive_loop_filter_flag                              u(1)
        if( slice_adaptive_loop_filter_flag && alf_coef_in_slice_flag )
            alf_param( )
        num_entry_point_offsets                                      ue(v)
        if( num_entry_point_offsets > 0 ) {

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention concerns deblocking filtering control which comprises determining, based at least partly on a beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture. A length offset parameter value is used to determine whether to apply weak or strong deblocking filtering on the block boundary and/or how many pixels to filter on each side of the block boundary. The decision whether to apply deblocking filtering and the decision on the deblocking filtering mode and/or length can thereby be made independent by using different threshold-defining parameter values.
PCT/SE2013/050237 2012-04-25 2013-03-14 Commande de filtrage de dégroupage WO2013162441A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261638157P 2012-04-25 2012-04-25
US61/638,157 2012-04-25

Publications (1)

Publication Number Publication Date
WO2013162441A1 true WO2013162441A1 (fr) 2013-10-31

Family

ID=48045655

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2013/050237 WO2013162441A1 (fr) 2012-04-25 2013-03-14 Commande de filtrage de dégroupage

Country Status (1)

Country Link
WO (1) WO2013162441A1 (fr)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110194614A1 (en) * 2010-02-05 2011-08-11 Andrey Norkin De-Blocking Filtering Control

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
AN J ET AL: "CE12 Subtest 1: Improved Deblocking Filter", 20110309, no. JCTVC-E079, 9 March 2011 (2011-03-09), XP030008585, ISSN: 0000-0007 *
ANDREY NORKIN ET AL: "CE12.1: Ericsson deblocking filter", 96. MPEG MEETING; 21-3-2011 - 25-3-2011; GENEVA; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11),, no. m19803, 19 March 2011 (2011-03-19), XP030048370 *
CHUN-LUNG HSU ET AL: "A Fast-Deblocking Boundary-strength Based Architecture Design of Deblocking Filter in H.264/AVC Applications", JOURNAL OF SIGNAL PROCESSING SYSTEMS ; FOR SIGNAL, IMAGE, AND VIDEO TECHNOLOGY (FORMERLY THE JOURNAL OF VLSI SIGNAL PROCESSING SYSTEMS FOR SIGNAL, IMAGE, AND VIDEO TECHNOLOGY), SPRINGER US, BOSTON, vol. 52, no. 3, 20 November 2007 (2007-11-20), pages 211 - 229, XP019616669, ISSN: 1939-8115 *
KOTRA A ET AL: "Deblocking simplification and rounding optimization", 7. JCT-VC MEETING; 98. MPEG MEETING; 21-11-2011 - 30-11-2011; GENEVA; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-G639, 8 November 2011 (2011-11-08), XP030110623 *
NORKIN A ET AL: "AHG6: On deblocking filter and parameters signaling", 103. MPEG MEETING; 21-1-2013 - 25-1-2013; GENEVA; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11),, no. m27569, 18 January 2013 (2013-01-18), XP030056136 *
NORKIN A ET AL: "CE10.4: deblocking parameters signaling", 8. JCT-VC MEETING; 99. MPEG MEETING; 1-2-2012 - 10-2-2012; SAN JOSE; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-H0574, 22 January 2012 (2012-01-22), XP030111601 *
NORKIN A ET AL: "CE12: Ericsson's and MediaTek's deblocking filter", 6. JCT-VC MEETING; 97. MPEG MEETING; 14-7-2011 - 22-7-2011; TORINO; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-F118, 9 August 2011 (2011-08-09), XP030009141 *
NORKIN A ET AL: "Deblocking filter length adjustment", 9. JCT-VC MEETING; 100. MPEG MEETING; 27-4-2012 - 7-5-2012; GENEVA; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-I0542, 27 April 2012 (2012-04-27), XP030112305 *
WIEGAND T ET AL: "WD3: Working Draft 3 of High-Efficiency Video Coding", 20110329, no. JCTVC-E603, 29 March 2011 (2011-03-29), XP030009014, ISSN: 0000-0003 *
YANG J ET AL: "CE12: SK Telecom/SKKU Deblocking Filter", 6. JCT-VC MEETING; 97. MPEG MEETING; 14-7-2011 - 22-7-2011; TORINO; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-F258, 18 July 2011 (2011-07-18), XP030009281 *

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10869063B2 (en) 2017-01-10 2020-12-15 Telefonaktiebolaget Lm Ericsson (Publ) Deblocking filtering control
WO2018132043A1 (fr) * 2017-01-10 2018-07-19 Telefonaktiebolaget Lm Ericsson (Publ) Commande de filtrage de déblocage
CN114449264A (zh) * 2017-04-06 2022-05-06 松下电器(美国)知识产权公司 编码方法、解码方法及发送方法
JP7364768B2 (ja) 2017-04-06 2023-10-18 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 符号化装置及び復号装置
JPWO2018186433A1 (ja) * 2017-04-06 2020-02-06 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America 符号化装置、復号装置、符号化方法及び復号方法
JPWO2018186430A1 (ja) * 2017-04-06 2020-02-06 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America 符号化装置、復号装置、符号化方法及び復号方法
JPWO2018186429A1 (ja) * 2017-04-06 2020-02-06 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America 符号化装置、復号装置、符号化方法及び復号方法
EP3609185A4 (fr) * 2017-04-06 2020-03-18 Panasonic Intellectual Property Corporation of America Dispositif de codage, dispositif de décodage, procédé de codage et procédé de décodage
JP7408862B2 (ja) 2017-04-06 2024-01-05 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 符号化装置及び復号装置
KR20190121377A (ko) * 2017-04-06 2019-10-25 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 부호화 장치, 복호 장치, 부호화 방법 및 복호 방법
JP2021073802A (ja) * 2017-04-06 2021-05-13 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America 復号装置及び復号方法
KR102296015B1 (ko) 2017-04-06 2021-09-01 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 부호화 장치, 복호 장치, 부호화 방법 및 복호 방법
KR20210110735A (ko) * 2017-04-06 2021-09-08 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 부호화 장치, 복호 장치, 부호화 방법 및 복호 방법
JP2021153336A (ja) * 2017-04-06 2021-09-30 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America 符号化装置及び復号装置
JP2021158680A (ja) * 2017-04-06 2021-10-07 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America 符号化装置及び復号装置
US11172198B2 (en) 2017-04-06 2021-11-09 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
CN114025159B (zh) * 2017-04-06 2023-12-12 松下电器(美国)知识产权公司 编码装置和解码装置
CN114025159A (zh) * 2017-04-06 2022-02-08 松下电器(美国)知识产权公司 编码装置和解码装置
CN114040202A (zh) * 2017-04-06 2022-02-11 松下电器(美国)知识产权公司 编码方法和解码方法
JP7044913B2 (ja) 2017-04-06 2022-03-30 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 復号装置及び復号方法
KR102382809B1 (ko) 2017-04-06 2022-04-08 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 부호화 장치, 복호 장치, 부호화 방법 및 복호 방법
KR20220045076A (ko) * 2017-04-06 2022-04-12 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 부호화 장치, 복호 장치, 부호화 방법 및 복호 방법
CN110495179B (zh) * 2017-04-06 2022-04-15 松下电器(美国)知识产权公司 编码装置、解码装置、编码方法及解码方法
CN114449263A (zh) * 2017-04-06 2022-05-06 松下电器(美国)知识产权公司 编码装置、解码装置及存储介质
JP7442607B2 (ja) 2017-04-06 2024-03-04 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 符号化方法及び復号方法
CN110495179A (zh) * 2017-04-06 2019-11-22 松下电器(美国)知识产权公司 编码装置、解码装置、编码方法及解码方法
KR102469589B1 (ko) 2017-04-06 2022-11-22 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 부호화 장치, 복호 장치, 부호화 방법 및 복호 방법
JP2022082602A (ja) * 2017-04-06 2022-06-02 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 符号化装置及び符号化方法
CN114040202B (zh) * 2017-04-06 2023-12-12 松下电器(美国)知识产权公司 编码方法和解码方法
CN114500999A (zh) * 2017-04-06 2022-05-13 松下电器(美国)知识产权公司 编码装置、解码装置及存储介质
CN114466184A (zh) * 2017-04-06 2022-05-10 松下电器(美国)知识产权公司 编码方法、解码方法及发送方法
KR20220158104A (ko) * 2017-04-06 2022-11-29 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 부호화 장치, 복호 장치, 부호화 방법 및 복호 방법
JP7192041B2 (ja) 2017-04-06 2022-12-19 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 符号化装置及び復号装置
JP7192040B2 (ja) 2017-04-06 2022-12-19 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 符号化装置及び復号装置
US11563940B2 (en) 2017-04-06 2023-01-24 Panasonic Intellectual Property Corporation Of America Encoder, decoder, and related non-transitory computer readable medium
JP7237215B2 (ja) 2017-04-06 2023-03-10 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 符号化装置及び符号化方法
US11778180B2 (en) 2017-04-06 2023-10-03 Panasonic Intellectual Property Corporation Of America Encoder, decoder, and related non-transitory computer readable medium
CN114500999B (zh) * 2017-04-06 2023-05-16 松下电器(美国)知识产权公司 编码装置、解码装置及存储介质
CN114449264B (zh) * 2017-04-06 2023-05-16 松下电器(美国)知识产权公司 编码方法、解码方法及发送方法
CN114466184B (zh) * 2017-04-06 2023-05-16 松下电器(美国)知识产权公司 编码方法、解码方法及发送方法
CN114449263B (zh) * 2017-04-06 2023-05-16 松下电器(美国)知识产权公司 编码装置、解码装置及存储介质
KR102555783B1 (ko) 2017-04-06 2023-07-14 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 부호화 장치, 복호 장치, 부호화 방법 및 복호 방법
EP3738307A4 (fr) * 2018-01-29 2021-11-10 MediaTek Inc Filtrage de déblocage à longueur adaptative dans un codage vidéo
WO2019144732A1 (fr) 2018-01-29 2019-08-01 Mediatek Inc. Filtrage de déblocage à longueur adaptative dans un codage vidéo
CN111131821B (zh) * 2018-10-31 2023-05-09 北京字节跳动网络技术有限公司 依赖性量化下的去方块滤波
CN111131821A (zh) * 2018-10-31 2020-05-08 北京字节跳动网络技术有限公司 依赖性量化下的去方块滤波
JP2022548914A (ja) * 2020-03-27 2022-11-22 テンセント・アメリカ・エルエルシー デブロッキング操作の高度制御
EP3991308A4 (fr) * 2020-03-27 2022-08-31 Tencent America LLC Commande de haut niveau pour opérations de déblocage
US11973990B2 (en) 2020-03-27 2024-04-30 Tencent America LLC Signaling for modified deblocking filter operations

Similar Documents

Publication Publication Date Title
US10951917B2 (en) Method and apparatus for performing intra-prediction using adaptive filter
WO2013162441A1 (fr) Commande de filtrage de dégroupage
KR102130480B1 (ko) 영상의 재구성된 샘플 세트에 대한 보상 오프셋들의 인코딩/디코딩을 최적화하는 방법 및 장치
EP2938075A1 (fr) Filtrage de déblocage
KR20130139341A (ko) 디블록킹 필터링 제어
WO2014055020A1 (fr) Adaptation de paramètre de déblocage hiérarchique
EP2870752A1 (fr) Filtrage à intra-dégroupage restreint pour codage vidéo
EP2870758B1 (fr) Contrôle du filtre de déblocage
EP2870759B1 (fr) Décisions pour un filtrage de déblocage fort

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13714031

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13714031

Country of ref document: EP

Kind code of ref document: A1