WO2020089825A1 - Quantization parameters under coding tool of dependent quantization - Google Patents


Info

Publication number
WO2020089825A1
Authority
WO
WIPO (PCT)
Prior art keywords
quantization
parameter
dependent scalar
slice
processing method
Prior art date
Application number
PCT/IB2019/059343
Other languages
French (fr)
Inventor
Hongbin Liu
Li Zhang
Kai Zhang
Yue Wang
Original Assignee
Beijing Bytedance Network Technology Co., Ltd.
Bytedance Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Bytedance Network Technology Co., Ltd. and Bytedance Inc.
Publication of WO2020089825A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/86 Using pre-processing or post-processing specially adapted for video compression, involving reduction of coding artifacts, e.g. of blockiness
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/122 Selection of transform size, e.g. 8x8 or 2x4x8 DCT; selection of sub-band transforms of varying structure or type
    • H04N19/124 Quantisation
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/82 Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop
    • H04N19/96 Tree coding, e.g. quad-tree coding

Definitions

  • This document is related to video and image coding technologies.
  • Digital video accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
  • the disclosed techniques may be used by video or image decoder or encoder embodiments in which quantization parameters are handled under the coding tool of dependent quantization.
  • a method of processing video includes performing a determination, by a processor, that dependent scalar quantization is used to process a first video block; determining, by the processor, a first quantization parameter (QP) to be used for a deblocking filter for the first video block based on the determination that dependent scalar quantization is used to process the first video block; and performing further processing of the first video block using the deblocking filter in accordance with the first QP.
  • a video processing method comprises determining one or multiple deblocking filter parameters to be used in a deblocking filter process of a current video block based on whether dependent scalar quantization is used to process the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing the deblocking filter process on the current video block in accordance with the one or multiple deblocking filter parameters.
  • a video processing method comprises determining whether to apply a deblocking filter process based on whether dependent scalar quantization is used to process a current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing the deblocking filter process on the current video block based on the determination of applying the deblocking filter process.
  • a video processing method comprises determining a quantization parameter to be used in a dependent scalar quantization of a current video block in case that the dependent scalar quantization is enabled for the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing the dependent scalar quantization on the current video block in accordance with the determined quantization parameter, wherein the determined quantization parameter is also applied to a video process different from the dependent scalar quantization using a quantization parameter as an input parameter of the current video block.
  • a video processing method comprises determining a quantization parameter to be used in a dependent scalar dequantization of a current video block in case that the dependent scalar dequantization is enabled for the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar dequantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing the dependent scalar dequantization on the current video block in accordance with the determined quantization parameter, wherein the determined quantization parameter is also applied to a video process different from the dependent scalar dequantization using a quantization parameter as an input parameter of the current video block.
  • the above-described method may be implemented by a video decoder apparatus that comprises a processor.
  • the above-described method may be implemented by a video encoder apparatus that comprises a processor.
  • the above-described method may be implemented by an apparatus in a video system that comprises a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor cause the processor to implement the above-described method.
  • these methods may be embodied in the form of processor-executable instructions and stored on a computer-readable program medium.
  • FIG. 1 shows an example of two scalar quantizers used in dependent quantization.
  • FIG. 2 shows an example of state transition and quantizer selection for dependent quantization.
  • FIG. 3 shows an example of an overall processing flow of a deblocking filter process.
  • FIG. 4 shows an example of a flow diagram for Bs calculation.
  • FIG. 5 shows examples of referred information for Bs calculation at a coding tree unit (CTU) boundary.
  • FIG. 6 shows an example of pixels involved in a filter on/off decision and a strong/weak filter selection.
  • FIG. 7 shows an example of a deblocking behavior in a 4:2:2 chroma format.
  • FIG. 8 is a block diagram of an example of a video processing apparatus.
  • FIG. 9 shows a block diagram of an example implementation of a video encoder.
  • FIG. 10 is a flowchart for an example of a video processing method.
  • FIG. 11 is a flowchart for an example of a video processing method.
  • FIG. 12 is a flowchart for an example of a video processing method.
  • FIG. 13 is a flowchart for an example of a video processing method.
  • FIG. 14 is a flowchart for an example of a video processing method.
  • the present document provides various techniques that can be used by a decoder of image or video bitstreams to improve the quality of decompressed or decoded digital video or images.
  • video is used herein to include both a sequence of pictures (traditionally called video) and individual images.
  • a video encoder may also implement these techniques during the process of encoding in order to reconstruct decoded frames used for further encoding.
  • Section headings are used in the present document for ease of understanding and do not limit the embodiments and techniques to the corresponding sections. As such, embodiments from one section can be combined with embodiments from other sections.
  • This patent document is related to video coding technologies. Specifically, it is related to the usage of quantization parameters when dependent quantization is utilized. It may be applied to an existing video coding standard like HEVC, or to the standard to be finalized (Versatile Video Coding). It may also be applicable to future video coding standards or video codecs.
  • Video coding standards have evolved primarily through the development of the well- known ITU-T and ISO/IEC standards.
  • the ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video and H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/HEVC standards.
  • H.265/HEVC High Efficiency Video Coding
  • the video coding standards are based on the hybrid video coding structure wherein temporal prediction plus transform coding are utilized.
  • the Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015.
  • FIG. 9 is a block diagram of an example implementation of a video encoder.
  • FIG. 9 shows that the encoder implementation has a feedback path built in in which the video encoder also performs video decoding functionality (reconstructing compressed representation of video data for use in encoding of next video data).
  • Dependent scalar quantization refers to an approach in which the set of admissible reconstruction values for a transform coefficient depends on the values of the transform coefficient levels that precede the current transform coefficient level in reconstruction order.
  • the main effect of this approach is that, in comparison to conventional independent scalar quantization (as used in HEVC and VTM-1), the admissible reconstruction vectors (given by all reconstructed transform coefficients of a transform block) are packed more densely in the N-dimensional vector space (N represents the number of transform coefficients in a transform block). That means that, for a given average number of admissible reconstruction vectors per N-dimensional unit volume, the average distance (or MSE distortion) between an input vector and the closest reconstruction vector is reduced (for typical distributions of input vectors).
  • the approach of dependent scalar quantization is realized by: (a) defining two scalar quantizers with different reconstruction levels and (b) defining a process for switching between the two scalar quantizers.
  • Figure 1 is an illustration of the two scalar quantizers used in the proposed approach of dependent quantization.
  • the two scalar quantizers used, denoted by Q0 and Q1, are illustrated in Figure 1.
  • the location of the available reconstruction levels is uniquely specified by a quantization step size Δ. If we neglect the fact that the actual reconstruction of transform coefficients uses integer arithmetic, the two scalar quantizers Q0 and Q1 are characterized as follows:
  • The reconstruction levels of the first quantizer Q0 are given by the even integer multiples of the quantization step size Δ. When this quantizer is used, a reconstructed transform coefficient t′ is calculated according to t′ = 2 · k · Δ, where k denotes the associated transform coefficient level.
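As a sketch, the reconstruction rules of the two quantizers can be written as follows (integer-arithmetic details neglected, as above). Note that Q1's odd-multiple rule is not reproduced in this excerpt; the formula used here is the commonly described counterpart and should be treated as an assumption.

```python
# Reconstruction rules for the two scalar quantizers of dependent quantization.
# Q0 uses the even integer multiples of the step size delta; Q1 (an assumption
# here) uses zero and the odd integer multiples.

def sgn(k: int) -> int:
    """Sign of k: -1, 0, or +1."""
    return (k > 0) - (k < 0)

def reconstruct_q0(k: int, delta: float) -> float:
    """Q0: t' = 2 * k * delta (even multiples of delta)."""
    return 2 * k * delta

def reconstruct_q1(k: int, delta: float) -> float:
    """Q1 (assumed): t' = (2 * k - sgn(k)) * delta (zero and odd multiples)."""
    return (2 * k - sgn(k)) * delta
```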
  • the scalar quantizer used (Q0 or Q1) is not explicitly signalled in the bitstream. Instead, the quantizer used for a current transform coefficient is determined by the parities of the transform coefficient levels that precede the current transform coefficient in coding/reconstruction order.
  • Figure 2 is an example of a state transition and quantizer selection for the proposed dependent quantization.
  • the switching between the two scalar quantizers is realized via a state machine with four states.
  • the state can take four different values: 0, 1, 2, 3. It is uniquely determined by the parities of the transform coefficient levels preceding the current transform coefficient in coding/reconstruction order.
  • the state is set equal to 0.
  • the transform coefficients are reconstructed in scanning order (i.e., in the same order they are entropy decoded).
  • the state is updated as shown in Figure 2, where k denotes the value of the transform coefficient level. Note that the next state only depends on the current state and the parity (k & 1) of the current transform coefficient level k. With k representing the value of the current transform coefficient level, the state update can be written as
  • state = stateTransTable[ state ][ k & 1 ],
  • where stateTransTable represents the table shown in Figure 2 and the operator & specifies the bit-wise "and" operator in two's-complement arithmetic.
  • state transition can also be specified without a table look-up:
  • state = ( 32040 >> ( ( state << 2 ) + ( ( k & 1 ) << 1 ) ) ) & 3
  • the 16-bit value 32040 specifies the state transition table.
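The table-free state update can be checked with a short sketch: the constant 32040 packs the four-state transition table at two bits per entry, and states {0, 1} select Q0 while states {2, 3} select Q1.

```python
# Dependent-quantization state machine: the 16-bit constant 32040 encodes the
# state transition table, two bits per (state, parity) entry.

def next_state(state: int, k: int) -> int:
    """Next state from the current state and the parity of coefficient level k."""
    return (32040 >> ((state << 2) + ((k & 1) << 1))) & 3

def quantizer_for_state(state: int) -> str:
    """States 0 and 1 use Q0; states 2 and 3 use Q1."""
    return "Q0" if state in (0, 1) else "Q1"

def quantizer_sequence(levels):
    """Quantizer chosen for each level when walking a block in reconstruction order."""
    state, out = 0, []          # the state is set equal to 0 at the start
    for k in levels:
        out.append(quantizer_for_state(state))
        state = next_state(state, k)
    return out
```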
  • the state uniquely specifies the scalar quantizer used. If the state for a current transform coefficient is equal to 0 or 1, the scalar quantizer Q0 is used. Otherwise (the state is equal to 2 or 3), the scalar quantizer Q1 is used. The detailed scaling process is described as follows.
  • Inputs to this process are: a luma location ( xTbY, yTbY ) specifying the top-left sample of the current luma transform block relative to the top-left luma sample of the current picture,
  • bitDepth specifying the bit depth of the current colour component.
  • Output of this process is the (nTbW)x(nTbH) array d of scaled transform coefficients with elements d[ x ][ y ].
  • the quantization parameter qP is derived as follows:
  • bitDepth ( ( ( Log2( nTbW ) + Log2( nTbH ) ) & 1 ) * 8 +
  • the intermediate scaling factor m[ x ] [ y ] is set equal to 16.
  • If dep_quant_enabled_flag is equal to 1, the following applies:
  • a deblocking filter process is performed for each CU in the same order as the decoding process. First vertical edges are filtered (horizontal filtering) then horizontal edges are filtered (vertical filtering). Filtering is applied to 8x8 block boundaries which are determined to be filtered, both for luma and chroma components. 4x4 block boundaries are not processed in order to reduce the complexity.
  • FIG. 3 illustrates the overall flow of deblocking filter processes.
  • a boundary can have three filtering status values: no filtering, weak filtering and strong filtering.
  • Each filtering decision is based on the boundary strength, Bs, and threshold values β and tC.
  • Figure 3 is an example of an overall processing flow of a deblocking filter process.
  • Two kinds of boundaries are involved in the deblocking filter process: TU boundaries and PU boundaries. CU boundaries are also considered, since CU boundaries are necessarily also TU and PU boundaries.
  • PU shape is 2NxN (N > 4) and RQT depth is equal to 1
  • TU boundaries at 8x8 block grid and PU boundaries between each PU inside the CU are also involved in the filtering.
  • the boundary strength (Bs) reflects how strong a filtering process may be needed for the boundary.
  • a value of 2 for Bs indicates strong filtering, 1 indicates weak filtering and 0 indicates no deblocking filtering.
  • let P and Q be defined as the blocks involved in the filtering, where P represents the block located to the left (vertical edge case) or above (horizontal edge case) the boundary and Q represents the block located to the right (vertical edge case) or below (horizontal edge case) the boundary.
  • Figure 4 illustrates how the Bs value is calculated based on the intra coding mode, the existence of non-zero transform coefficients, reference picture, number of motion vectors and motion vector difference.
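The decision in Figure 4 can be sketched as below. The block descriptors (dicts with hypothetical field names) are illustrative, not from the specification; motion vectors are assumed to be in quarter-pel units, so a component difference of 4 corresponds to one integer sample.

```python
# Illustrative sketch of the Bs decision (HEVC-style rules):
# Bs = 2 for intra, Bs = 1 for coefficient/reference/MV differences, else 0.

def boundary_strength(p: dict, q: dict) -> int:
    if p["intra"] or q["intra"]:
        return 2                      # at least one side intra coded
    if p["nonzero_coeffs"] or q["nonzero_coeffs"]:
        return 1                      # non-zero coefficients at a TU boundary
    if p["refs"] != q["refs"] or len(p["mvs"]) != len(q["mvs"]):
        return 1                      # different reference pictures or MV counts
    for mv_p, mv_q in zip(p["mvs"], q["mvs"]):
        if abs(mv_p[0] - mv_q[0]) >= 4 or abs(mv_p[1] - mv_q[1]) >= 4:
            return 1                  # MV difference of at least one integer sample
    return 0                          # no deblocking filtering
```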
  • Threshold values β′ and tC′ are involved in the filter on/off decision, strong and weak filter selection and the weak filtering process. These are derived from the value of the luma quantization parameter Q as shown in Table 2-1. The derivation process of Q is described in 2.2.3.1.
  • the variable β is derived from β′ as follows: β = β′ * ( 1 << ( BitDepthY - 8 ) )
  • the variable tC is derived from tC′ as follows: tC = tC′ * ( 1 << ( BitDepthY - 8 ) )
  • variable edgeType specifying whether a vertical (EDGE_VER) or a horizontal (EDGE_HOR) edge is filtered
  • edgeType is equal to EDGE_VER
  • pi,k = recPictureL[ xCb + xBl - i - 1 ][ yCb + yBl + k ] (8-285)
  • pi,k = recPictureL[ xCb + xBl + k ][ yCb + yBl - i - 1 ] (8-287)
  • variables QpQ and QpP are set equal to the QpY values of the coding units which include the coding blocks containing the sample q0,0 and p0,0, respectively.
  • a variable qPL is derived as follows:
  • the variable β′ is determined as specified in Table 8-11 based on the luma quantization parameter Q derived as follows:
  • slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 for the slice that contains sample q0,0.
  • the variable β is derived as follows:
  • slice_tc_offset_div2 is the value of the syntax element slice_tc_offset_div2 for the slice that contains sample q0,0.
  • the variable tC is derived as follows:
  • Depending on the value of edgeType, the following applies:
  • edgeType is equal to EDGE_VER, the following ordered steps apply:
  • dp3 = Abs( p2,3 - 2 * p1,3 + p0,3 ) (8-294)
  • variable dpq is set equal to 2 * dpqO.
  • variable dpq is set equal to 2 * dpq3.
  • variable dE is set equal to 1.
  • dSamO is equal to 1 and dSam3 is equal to 1
  • the variable dE is set equal to 2.
  • edgeType is equal to EDGE_HOR
  • dp3 = Abs( p2,3 - 2 * p1,3 + p0,3 ) (8-303)
  • variable dpq is set equal to 2 * dpqO.
  • variable dpq is set equal to 2 * dpq3.
  • variable dE is set equal to 1.
  • Table 8-11 shows the derivation of threshold variables β′ and tC′ from input Q.
  • the filter on/off decision is made using 4 lines grouped as a unit, to reduce computational complexity.
  • Figure 6 illustrates the pixels involved in the decision. The 6 pixels in the two red boxes in the first 4 lines are used to determine whether the filter is on or off for those 4 lines. The 6 pixels in the two red boxes in the second group of 4 lines are used to determine whether the filter is on or off for the second group of 4 lines.
  • Figure 6 shows an example of pixels involved in an on/off decision and a strong/weak filter selection.
  • variables dE, dEpl and dEp2 are set as follows:
  • a filter on/off decision is made in a similar manner as described above for the second group of 4 lines.
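The on/off decision for one group of four lines can be sketched as follows; the second-difference activity measure matches the dp3/dq3 equations quoted above, while the sample indexing convention here (p[i][y]: distance i from the boundary, line y) is an illustrative assumption.

```python
# Filter on/off decision for a 4-line group: local activity on lines 0 and 3,
# measured as second differences, is summed and compared against beta.

def second_diff(a: int, b: int, c: int) -> int:
    """|a - 2b + c|: the activity measure used by the deblocking decisions."""
    return abs(a - 2 * b + c)

def filter_on(p, q, beta: int) -> bool:
    """p[i][y] / q[i][y]: sample at distance i from the boundary on line y."""
    dp0 = second_diff(p[2][0], p[1][0], p[0][0])
    dp3 = second_diff(p[2][3], p[1][3], p[0][3])
    dq0 = second_diff(q[2][0], q[1][0], q[0][0])
    dq3 = second_diff(q[2][3], q[1][3], q[0][3])
    return dp0 + dq0 + dp3 + dq3 < beta
```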
  • the filtered pixel values are obtained by the following equations. Note that three pixels are modified using four pixels as an input for each P and Q block, respectively.
  • p1′ = ( p2 + p1 + p0 + q0 + 2 ) >> 2
  • Δp = Clip3( -( tC >> 1 ), tC >> 1, ( ( ( p2 + p0 + 1 ) >> 1 ) - p1 + Δ ) >> 1 )
  • p1′ = Clip1Y( p1 + Δp )
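The weak-filter p1 update above can be sketched as follows, with Δ taken as the already-derived boundary offset and Clip1Y clipping to the luma sample range (the default 8-bit depth here is illustrative):

```python
# Weak-filter update for p1: the offset dp is clipped to half of tC, then the
# filtered sample is clipped back into the valid luma range.

def clip3(lo: int, hi: int, x: int) -> int:
    return max(lo, min(hi, x))

def clip1(x: int, bit_depth: int = 8) -> int:
    """Clip1Y: clip a sample to [0, 2^bitDepth - 1]."""
    return clip3(0, (1 << bit_depth) - 1, x)

def weak_filter_p1(p2: int, p1: int, p0: int, tc: int, delta: int) -> int:
    dp = clip3(-(tc >> 1), tc >> 1, (((p2 + p0 + 1) >> 1) - p1 + delta) >> 1)
    return clip1(p1 + dp)
```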
  • the boundary strength Bs for chroma filtering is inherited from luma. If Bs > 1, chroma filtering is performed. No filter selection process is performed for chroma, since only one filter can be applied.
  • the filtered sample values p0′ and q0′ are derived as follows.
  • Δ = Clip3( -tC, tC, ( ( ( ( q0 - p0 ) << 2 ) + p1 - q1 + 4 ) >> 3 ) )
  • p0′ = Clip1C( p0 + Δ )
  • q0′ = Clip1C( q0 - Δ )
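The chroma filter can be sketched as below; a single clipped offset modifies p0 and q0 symmetrically (the default 8-bit depth is illustrative, and the q0 update mirrors the p0 one).

```python
# Chroma deblocking: one clipped offset delta modifies p0 and q0.
# Python's >> on negative values is an arithmetic shift, matching the spec's use.

def clip3(lo: int, hi: int, x: int) -> int:
    return max(lo, min(hi, x))

def chroma_filter(p1: int, p0: int, q0: int, q1: int, tc: int,
                  bit_depth: int = 8):
    delta = clip3(-tc, tc, ((((q0 - p0) << 2) + p1 - q1 + 4) >> 3))
    def clip1c(x):  # Clip1C: clip to the chroma sample range
        return clip3(0, (1 << bit_depth) - 1, x)
    return clip1c(p0 + delta), clip1c(q0 - delta)
```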
  • each chroma block has a rectangular shape and is coded using up to two square transforms. This process introduces additional boundaries between the transform blocks in chroma. These boundaries are not deblocked (thick dashed lines running horizontally through the center in Figure 7).
  • Figure 7 is an example of a deblocking behavior in a 4:2:2 chroma format.
  • the QP range is extended from [0, 51] to [0, 63], and the derivations of tC′ and β′ are as follows.
  • the table sizes of β and tC are increased from 52 and 54 to 64 and 66, respectively.
  • variable qPL is derived as follows:
  • the variable β′ is determined as specified in Table 2-3 based on the luma quantization parameter Q derived as follows:
  • slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 for the slice that contains sample q0,0.
  • the variable β is derived as follows:
  • the variable tC′ is determined as specified in Table 2-3 based on the luma quantization parameter Q derived as follows:
  • Q = Clip3( 0, 65, qPL + 2 * ( bS - 1 ) + ( slice_tc_offset_div2 << 1 ) ) (2-41)
  • where slice_tc_offset_div2 is the value of the syntax element slice_tc_offset_div2 for the slice that contains sample q0,0.
  • variable tc is derived as follows:
  • tC = tC′ * ( 1 << ( BitDepthY - 8 ) )
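The extended-range derivation can be sketched as follows. TC_PRIME below is a placeholder for the (not reproduced) Table 2-3 with 66 entries, not the normative values.

```python
# Extended-range tC derivation: Q is clipped to [0, 65] and indexes the tC'
# table, then tC is scaled according to the luma bit depth.

def clip3(lo: int, hi: int, x: int) -> int:
    return max(lo, min(hi, x))

TC_PRIME = list(range(66))  # placeholder table, 66 entries (indices 0..65)

def derive_tc(qPL: int, bS: int, slice_tc_offset_div2: int,
              bit_depth_y: int) -> int:
    q = clip3(0, 65, qPL + 2 * (bS - 1) + (slice_tc_offset_div2 << 1))
    return TC_PRIME[q] * (1 << (bit_depth_y - 8))
```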
  • Table 2-3 below is for the derivation of threshold variables β′ and tC′ from input Q.
  • the initial state of the context variables depends on the QP of the slice.
  • the initialization process is described as follows.
  • Outputs of this process are the initialized CABAC context variables indexed by ctxTable and ctxIdx.
  • Table 9-5 to Table 9-31 contain the values of the 8 bit variable initValue used in the initialization of context variables that are assigned to all syntax elements in subclauses 7.3.8.1 through 7.3.8.11, except end_of_slice_segment_flag, end_of_sub_stream_one_bit, and pcm_flag.
  • n = ( OffsetIdx << 3 ) - 16 (9-5)
  • preCtxState = Clip3( 1, 126, ( ( m * Clip3( 0, 51, SliceQpY ) ) >> 4 ) + n )
  • initType = cabac_init_flag ? 2 : 1 (9-7)
  • initType = cabac_init_flag ? 1 : 2
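The initialization above can be sketched as follows. The split of the 8-bit initValue into a 4-bit slope index and a 4-bit offset index, the slope formula m = SlopeIdx * 5 - 45, and the final pStateIdx/valMps mapping follow the usual HEVC formulation and are assumptions here, since they are not reproduced in this excerpt.

```python
# CABAC context initialization from initValue and the slice QP.

def clip3(lo: int, hi: int, x: int) -> int:
    return max(lo, min(hi, x))

def init_context(init_value: int, slice_qp_y: int):
    slope_idx = init_value >> 4          # upper 4 bits of initValue (assumed split)
    offset_idx = init_value & 15         # lower 4 bits of initValue
    m = slope_idx * 5 - 45               # assumed HEVC slope formula
    n = (offset_idx << 3) - 16
    pre_ctx_state = clip3(1, 126, ((m * clip3(0, 51, slice_qp_y)) >> 4) + n)
    val_mps = 1 if pre_ctx_state > 63 else 0
    p_state_idx = (pre_ctx_state - 64) if val_mps else (63 - pre_ctx_state)
    return p_state_idx, val_mps
```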
  • deblocking filter may depend on whether dependent scalar quantization is used or not.
  • QP used in deblocking filter depends on whether dep_quant_enabled_flag is equal to 0 or 1.
  • N is an integer such as 1, 3, 6, 7 or -1, -3, -6, -7.
  • QPc + N is used in deblocking filter or/and any other process using QP as an input parameter.
  • N is an integer such as 1, 3, 6, 7 or -1, -3, -6, -7.
  • QPc + N is clipped to a valid range before being used.
  • the allowed QP range is set to [QPmin - N, QPmax - N] instead of [QPmin, QPmax] for dependent quantization when QPc + N is used in it.
  • the allowed QP range is set to [Max(QPmin - N, QPmin), Min(QP max - N, QPmax)].
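The clipping and range narrowing above can be sketched as follows; the default [0, 63] range used here is illustrative.

```python
# Clip QPc + N before use, and narrow the range allowed for dependent
# quantization so that QPc + N never leaves [qp_min, qp_max].

def clip3(lo: int, hi: int, x: int) -> int:
    return max(lo, min(hi, x))

def qp_for_deblocking(qpc: int, n: int, qp_min: int = 0, qp_max: int = 63) -> int:
    """QPc + N, clipped to the valid QP range before being used."""
    return clip3(qp_min, qp_max, qpc + n)

def allowed_dq_range(n: int, qp_min: int = 0, qp_max: int = 63):
    """[Max(QPmin - N, QPmin), Min(QPmax - N, QPmax)]."""
    return max(qp_min - n, qp_min), min(qp_max - n, qp_max)
```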
  • the encoder selects a weaker/stronger deblocking filter when dependent quantization is enabled and signals it to the decoder.
  • tC and β are used implicitly at both encoder and decoder when dependent quantization is enabled.
  • when QPc + N is used in the deblocking filter process, more entries may be required in the tC′ and β′ tables (e.g., Table 2-3) for QPmax + N.
  • CABAC context depends on the QPc + N instead of QPc when dependent quantization is enabled.
  • the high-level signaled quantization parameter may be assigned different semantics based on whether dependent quantization is used or not.
  • the QP indicated in the picture parameter set/picture header (i.e., init_qp_minus26 in HEVC)
  • it may have different semantics.
  • init_qp_minus26 plus 26 specifies the initial value of SliceQpY for each slice referring to the PPS, or the initial value of all tiles' quantization parameters referring to the PPS/picture header.
  • init_qp_minus26 plus 27 specifies the initial value of SliceQpY for each slice referring to the PPS, or the initial value of all tiles' quantization parameters referring to the PPS/picture header.
  • slice_qp_delta specifies the initial value of QpY to be used for the coding blocks in the slice/tile/tile groups until modified by the value of CuQpDeltaVal in the coding unit layer.
  • the initial value of the QpY quantization parameter for the slice/tile/tile groups, SliceQpY, is derived as follows:
  • slice_qp_delta specifies the initial value of QpY to be used for the coding blocks in the slice/tile/tile groups until modified by the value of CuQpDeltaVal in the coding unit layer.
  • the initial value of the QpY quantization parameter for the slice/tile/tile groups, SliceQpY, is derived as follows:
  • An example embodiment in which QPc + 1 is used in the deblocking filter. The newly added parts are highlighted.
  • edgeType is equal to EDGE_VER
  • pi,k = recPictureL[ xCb + xBl - i - 1 ][ yCb + yBl + k ] (8-285)
  • pi,k = recPictureL[ xCb + xBl + k ][ yCb + yBl - i - 1 ] (8-287)
  • the variables QpQ and QpP are set equal to the QpY values of the coding units which include the coding blocks containing the samples q0,0 and p0,0, respectively.
  • if dep_quant_enabled_flag of the coding unit which includes the coding block containing the sample q0,0 is equal to 1,
  • QpQ is set equal to QpQ + 1.
  • QpP is set equal to QpP + 1.
  • a variable qPL is derived as follows:
  • the variable β′ is determined as specified in Table 8-11 based on the luma quantization parameter Q derived as follows:
  • slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 for the slice that contains sample q0,0.
  • the variable β is derived as follows:
  • the variable tC′ is determined as specified in Table 8-11 based on the luma quantization parameter Q derived as follows:
  • slice_tc_offset_div2 is the value of the syntax element slice_tc_offset_div2 for the slice that contains sample q0,0.
  • the variable tC is derived as follows:
  • edgeType is equal to EDGE_VER, the following ordered steps apply:
  • variable dpq is set equal to 2 * dpqO.
  • variable dpq is set equal to 2 * dpq3.
  • variable dE is set equal to 1.
  • edgeType is equal to EDGE_HOR
  • dq3 = Abs( q2,3 - 2 * q1,3 + q0,3 ) (8-305)
  • dpq0 = dp0 + dq0 (8-306)
  • variable dpq is set equal to 2 * dpqO.
  • variable dpq is set equal to 2 * dpq3.
  • variable dE is set equal to 1.
  • Table 8-11 below is for the derivation of threshold variables β′ and tC′ from input Q.
  • FIG. 8 is a block diagram of a video processing apparatus 800.
  • the apparatus 800 may be used to implement one or more of the methods described herein.
  • the apparatus 800 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on.
  • the apparatus 800 may include one or more processors 802, one or more memories 804 and video processing hardware 806.
  • the processor(s) 802 may be configured to implement one or more methods described in the present document.
  • the memory (memories) 804 may be used for storing data and code used for implementing the methods and techniques described herein.
  • the video processing hardware 806 may be used to implement, in hardware circuitry, some techniques described in the present document.
  • FIG. 10 is a flowchart for a method 1000 of processing a video.
  • the method 1000 includes performing (1005) a determination that dependent scalar quantization is used to process a first video block; determining (1010) a first quantization parameter (QP) to be used for a deblocking filter for the first video block based on the determination that dependent scalar quantization is used to process the first video block; and performing (1015) further processing of the first video block using the deblocking filter in accordance with the first QP.
  • a quantization parameter for deblocking filter can be determined depending on the usage of dependent scalar quantization.
  • a video block may be encoded in the video bitstream in which bit efficiency may be achieved by using a bitstream generation rule related to motion information prediction.
  • the method can include wherein the determination that dependent scalar quantization is used is based on a value of a flag signal.
  • the method can include wherein the first QP used for the deblocking filter is used for dependent scalar quantization and other processing techniques of the first video block.
  • the method can include wherein the first QP is QPc.
  • the method can include wherein the first QP is QPc + N, wherein N is an integer.
  • the method can include wherein QPc + N is modified from a prior value to fit within a threshold value range.
  • the method can include wherein the threshold value range is [Max(QPmin - N, QPmin), Min(QPmax - N, QPmax)].
  • the method can include determining, by the processor, that dependent scalar quantization is not used to process a second video block; and performing further processing of the second video block using another deblocking filter, wherein the deblocking filter used for the first video block is stronger or weaker than the another deblocking filter used to process the second video block based on dependent scalar quantization being used for the first video block.
  • the method can include wherein the deblocking filter is selected by an encoder, the method further comprises: signaling to a decoder that dependent scalar quantization is enabled for the first video block.
  • the method can include wherein smaller or larger threshold values Tc and b are used by the encoder and the decoder based on the use of dependent scalar quantization.
  • the method can include wherein the first QP is QPc + N, and additional entries are used for Tc’ and b’ tables for QPmax + N.
  • the method can include wherein the first QP is QPc + N and the Tc’ and b’ table entries are the same as for QPmax + N based on QPc + N being clipped to be within a threshold value range.
  • the method can include wherein initialization of context-based adaptive binary arithmetic coding (CABAC) is based on the first QP being QPc + N and dependent scalar quantization being used to process the first video block.
  • CABAC context-based adaptive binary arithmetic coding
  • the method can include wherein the determination that dependent scalar quantization is used to process the first video block is signaled with semantics based on the use of the dependent scalar quantization.
  • the method can include wherein the first QP is indicated in a picture parameter set or a picture header.
  • the method can include wherein the picture parameter set or the picture header indicate init_qp_minus26 plus 26 that specifies an initial value of a SliceQPy for a slice referring to a PPS or an initial value of quantization parameters of tiles referred to in the PPS or the picture header based on dependent scalar quantization being off.
  • the method can include wherein the picture parameter set or the picture header indicate init_qp_minus26 plus 27 that specifies an initial value of a SliceQPy for a slice referring to a PPS or an initial value of quantization parameters of tiles referred to in the PPS or the picture header based on dependent scalar quantization being used.
  • the method can include wherein the first QP is indicated in a slice header, a tile header, or a tile groups header.
  • the method can include wherein the method is applied based on being signaled in a SPS, a PPS, a VPS, a sequence header, a picture header, a slice header, a tile group header, or a group of coding tree units (CTUs).
  • FIG. 11 is a flowchart for a video processing method 1100 of processing a video.
  • the method 1100 includes determining (1105) one or multiple deblocking filter parameters to be used in a deblocking filter process of a current video block based on whether dependent scalar quantization is used to process the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing (1110) the deblocking filter process on the current video block in accordance with the one or multiple deblocking filter parameters.
  • the method can include that wherein the determining one or multiple deblocking filter parameters to be used in a deblocking filter process of a current video block further comprises: determining the one or multiple deblocking filter parameters corresponding to a weaker deblocking filter in case that the dependent scalar quantization is used for the current video block; or determining the one or multiple deblocking filter parameters corresponding to a stronger deblocking filter in case that the dependent scalar quantization is used for the current video block.
  • the method can include that wherein the stronger deblocking filter modifies more pixels, and the weaker deblocking filter modifies fewer pixels.
  • the method can include that wherein the determining one or multiple deblocking filter parameters to be used in a deblocking filter process of a current video block further comprises: selecting smaller threshold values Tc and b in case that the dependent scalar quantization is used for the current video block; or selecting larger threshold values Tc and b in case that the dependent scalar quantization is used for the current video block.
  • the method can include that wherein the determining one or multiple deblocking filter parameters to be used in a deblocking filter process of a current video block further comprises: determining a quantization parameter included in the one or multiple deblocking filter parameters based on whether dependent scalar quantization is used to process the current video block.
  • the method can include wherein at least one additional entry is set in a mapping table in the case that the dependent scalar quantization is used for the current video block, wherein the mapping table indicates mapping relationships between quantization parameters and threshold values b’, or indicates mapping relationships between quantization parameters and threshold values Tc’.
  • the method can include wherein the QPc + N is clipped to be within a range [QPmin, QPmax] in case that it is greater than QPmax or less than QPmin, wherein QPmin and QPmax are respectively the minimum and the maximum allowable quantization parameters.
  • FIG. 13 is a flowchart for a video processing method 1300.
  • the method 1300 includes determining (1305) whether to apply a deblocking filter process based on whether dependent scalar quantization is used to process a current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing (1310) the deblocking filter process on the current video block based on the determination of applying the deblocking filter process.
  • FIG. 12 is a flowchart for a video processing method 1200 of processing a video.
  • the method 1200 includes determining (1205) a quantization parameter to be used in a dependent scalar quantization of a current video block in case that the dependent scalar quantization is enabled for the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing (1210) the dependent scalar quantization on the current video block in accordance with the determined quantization parameter, wherein the determined quantization parameter is also applied to a video process different from the dependent scalar quantization using a quantization parameter as an input parameter of the current video block.
  • the method can include wherein the video processing different from the dependent scalar quantization includes a deblocking filtering process.
  • the method can include wherein the determined quantization parameter is QPc in case that the dependent scalar quantization is enabled for the current video block, wherein QPc is a signaled quantization parameter of the current video block.
  • the method can include wherein the determined quantization parameter is QPc+N in case that the dependent scalar quantization is enabled for the current video block, wherein QPc is a signaled quantization parameter of the current video block, and N is an integer.
  • the method can further include clipping QPc + N to a threshold value range before using it.
  • the method can include wherein the threshold value range is [QPmin, QPmax], wherein QPmin and QPmax are respectively the allowed minimum and maximum quantization parameters.
  • the method can include wherein an allowed QPc range for the dependent scalar quantization is [QPmin - N, QPmax - N] in case that the dependent scalar quantization is enabled for the current video block, wherein QPmin and QPmax are the allowed minimum value and maximum value of QPc in case that the dependent scalar quantization is not enabled for the current video block, respectively.
  • the method can include wherein an allowed QPc range for the dependent scalar quantization is [Max(QPmin - N, QPmin), Min(QPmax - N, QPmax)] in case that the dependent scalar quantization is enabled for the current video block, wherein QPmin and QPmax are the allowed minimum value and maximum value of QPc in case that the dependent scalar quantization is not enabled for the current video block, respectively.
  • the method can include wherein initialization of context-based adaptive binary arithmetic coding (CABAC) is based on the QPc + N in case that the dependent scalar quantization is enabled.
  • CABAC context-based adaptive binary arithmetic coding
  • the method can include wherein high-level quantization parameters are assigned with different semantics based on whether the dependent scalar quantization is enabled.
  • the method can include wherein the quantization parameter is signaled in a picture parameter set or a picture header by means of a first parameter.
  • the method can include wherein the first parameter is init_qp_minus26, and init_qp_minus26 plus 26 specifies an initial quantization parameter value SliceQPy for a slice referring to the picture parameter set or an initial value of quantization parameters of tiles referring to the picture parameter set or the picture header, in case that the dependent scalar quantization is disabled.
  • the method can include wherein the first parameter is init_qp_minus26, and init_qp_minus26 plus 27 specifies an initial quantization parameter value SliceQPy for a slice referring to the picture parameter set or an initial value of quantization parameters of tiles referring to the picture parameter set or the picture header, in case that the dependent scalar quantization is enabled.
  • the method can include wherein the quantization parameter is signaled in a slice header, a tile header, or a tile groups header by means of a second parameter.
  • the method can be applied in case of being signaled in a SPS, a PPS, a VPS, a sequence header, a picture header, a slice header, a tile group header, or a group of coding tree units (CTUs).
  • FIG. 14 is a flowchart for a video processing method 1400 of processing a video.
  • the method 1400 includes determining (1405) a quantization parameter to be used in a dependent scalar dequantization of a current video block in case that the dependent scalar dequantization is enabled for the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar dequantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing (1410) the dependent scalar dequantization on the current video block in accordance with the determined quantization parameter, wherein the determined quantization parameter is also applied to a video process different from the dependent scalar dequantization using a quantization parameter as an input parameter of the current video block.
  • the disclosed techniques may be embodied in video encoders or decoders to improve compression efficiency when the coding units being compressed have shapes that are significantly different than the traditional square shaped blocks or rectangular blocks that are half-square shaped.
  • new coding tools that use long or tall coding units such as 4x32 or 32x4 sized units may benefit from the disclosed techniques.
  • the disclosed techniques may be embodied in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to implement the above disclosed method.
  • the disclosed and other solutions, examples, embodiments, modules and the functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them.
  • the disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus.
  • the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
  • data processing apparatus encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random-access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
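Several of the examples above adjust the quantization parameter when dependent scalar quantization is enabled: using QPc + N, constraining or clipping QPc to an allowed range, and re-interpreting init_qp_minus26. A minimal sketch of those rules follows; the offset N, the QP bounds, and all function names are assumptions for illustration, not normative definitions.

```python
# Illustrative sketch only: the offset N, the QP bounds, and the function
# names below are assumptions for illustration, not normative definitions.

QP_MIN, QP_MAX = 0, 51  # assumed allowable quantization parameter range

def effective_qp(qp_c, dep_quant_enabled, n=1):
    # QP applied to dependent scalar quantization and to other processes
    # (e.g., the deblocking filter) that take a QP as input: QPc + N when
    # dependent scalar quantization is enabled, clipped into [QP_MIN, QP_MAX].
    if not dep_quant_enabled:
        return qp_c
    return min(max(qp_c + n, QP_MIN), QP_MAX)

def allowed_qpc_range(dep_quant_enabled, n=1):
    # Allowed range of the signaled QPc so that QPc + N stays legal:
    # [Max(QPmin - N, QPmin), Min(QPmax - N, QPmax)].
    if not dep_quant_enabled:
        return (QP_MIN, QP_MAX)
    return (max(QP_MIN - n, QP_MIN), min(QP_MAX - n, QP_MAX))

def initial_slice_qp(init_qp_minus26, dep_quant_enabled):
    # Re-interpreted semantics: init_qp_minus26 plus 26 gives the initial
    # SliceQPy normally, plus 27 when dependent scalar quantization is used.
    return init_qp_minus26 + (27 if dep_quant_enabled else 26)
```

With N = 1 this keeps the signaled QPc and the derived QP within the same legal range whether or not dependent scalar quantization is enabled.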

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video processing method, apparatus and computer program product are disclosed. The video processing method comprises: determining a quantization parameter to be used in a dependent scalar quantization of a current video block in case that the dependent scalar quantization is enabled for the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing the dependent scalar quantization on the current video block in accordance with the determined quantization parameter, wherein the determined quantization parameter is also applied to a video process different from the dependent scalar quantization using a quantization parameter as an input parameter of the current video block.

Description

QUANTIZATION PARAMETERS UNDER CODING TOOU OF
DEPENDENT QUANTIZATION
CROSS-REFERENCE TO RELATED APPLICATION
[001] Under the applicable patent law and/or rules pursuant to the Paris Convention, this application is made to timely claim the priority to and benefits of International Patent Application No. PCT/CN2018/112945, filed on October 31, 2018. The entire disclosure of International Patent Application No. PCT/CN2018/112945 is incorporated by reference as part of the disclosure of this application.
TECHNICAL FIELD
[002] This document is related to video and image coding technologies.
BACKGROUND
[003] Digital video accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
SUMMARY
[004] The disclosed techniques may be used by video or image decoder or encoder embodiments in which quantization parameters are used under the coding tool of dependent quantization.
[005] In one example aspect, a method of processing video is disclosed. The method includes performing a determination, by a processor, that dependent scalar quantization is used to process a first video block; determining, by the processor, a first quantization parameter (QP) to be used for a deblocking filter for the first video block based on the determination that dependent scalar quantization is used to process the first video block; and performing further processing of the first video block using the deblocking filter in accordance with the first QP.
[006] In another example aspect, a video processing method is disclosed. The method comprises determining one or multiple deblocking filter parameters to be used in a deblocking filter process of a current video block based on whether dependent scalar quantization is used to process the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing the deblocking filter process on the current video block in accordance with the one or multiple deblocking filter parameters.
[007] In another example aspect, a video processing method is disclosed. The method comprises determining whether to apply a deblocking filter process based on whether dependent scalar quantization is used to process a current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing the deblocking filter process on the current video block based on the determination of applying the deblocking filter process.
[008] In another example aspect, a video processing method is disclosed. The method comprises determining a quantization parameter to be used in a dependent scalar quantization of a current video block in case that the dependent scalar quantization is enabled for the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing the dependent scalar quantization on the current video block in accordance with the determined quantization parameter, wherein the determined quantization parameter is also applied to a video process different from the dependent scalar quantization using a quantization parameter as an input parameter of the current video block.
[009] In another example aspect, a video processing method is disclosed. The method comprises determining a quantization parameter to be used in a dependent scalar dequantization of a current video block in case that the dependent scalar dequantization is enabled for the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar dequantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing the dependent scalar dequantization on the current video block in accordance with the determined quantization parameter, wherein the determined quantization parameter is also applied to a video process different from the dependent scalar dequantization using a quantization parameter as an input parameter of the current video block.
[0010] In another example aspect, the above-described method may be implemented by a video decoder apparatus that comprises a processor.
[0011] In another example aspect, the above-described method may be implemented by a video encoder apparatus that comprises a processor.
[0012] In another example aspect, the above-described method may be implemented by an apparatus in a video system that comprises a processor and a non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to implement the above-described method.
[0013] In yet another example aspect, these methods may be embodied in the form of processor-executable instructions and stored on a computer-readable program medium.
[0014] These, and other, aspects are further described in the present document.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] FIG. 1 shows an example of two scalar quantizers used in dependent quantization.
[0016] FIG. 2 shows an example of state transition and quantizer selection for dependent quantization.
[0017] FIG. 3 shows an example of an overall processing flow of a deblocking filter process.
[0018] FIG. 4 shows an example of a flow diagram for Bs calculation.
[0019] FIG. 5 shows examples of referred information for Bs calculation at a coding tree unit (CTU) boundary.
[0020] FIG. 6 shows an example of pixels involved in a filter on/off decision and a strong/weak filter selection.
[0021] FIG. 7 shows an example of a deblocking behavior in a 4:2:2 chroma format.
[0022] FIG. 8 is a block diagram of an example of a video processing apparatus.
[0023] FIG. 9 shows a block diagram of an example implementation of a video encoder.
[0024] FIG. 10 is a flowchart for an example of a video processing method.
[0025] FIG. 11 is a flowchart for an example of a video processing method.
[0026] FIG. 12 is a flowchart for an example of a video processing method.
[0027] FIG. 13 is a flowchart for an example of a video processing method.
[0028] FIG. 14 is a flowchart for an example of a video processing method.
DETAILED DESCRIPTION
[0029] The present document provides various techniques that can be used by a decoder of image or video bitstreams to improve the quality of decompressed or decoded digital video or images. For brevity, the term "video" is used herein to include both a sequence of pictures (traditionally called video) and individual images. Furthermore, a video encoder may also implement these techniques during the process of encoding in order to reconstruct decoded frames used for further encoding.
[0030] Section headings are used in the present document for ease of understanding and do not limit the embodiments and techniques to the corresponding sections. As such, embodiments from one section can be combined with embodiments from other sections.
[0031] 1. Summary
[0032] This patent document is related to video coding technologies. Specifically, it is related to the usage of quantization parameters when dependent quantization is utilized. It may be applied to existing video coding standards like HEVC, or to the standard to be finalized (Versatile Video Coding). It may also be applicable to future video coding standards or video codecs.
[0033] 2. Background
[0034] Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video and H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/HEVC standards. Since H.262, the video coding standards are based on the hybrid video coding structure wherein temporal prediction plus transform coding are utilized. To explore the future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM). In April 2018, the Joint Video Expert Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC standard targeting a 50% bitrate reduction compared to HEVC.
[0035] FIG. 9 is a block diagram of an example implementation of a video encoder. FIG. 9 shows that the encoder implementation has a feedback path built in, in which the video encoder also performs video decoding functionality (reconstructing a compressed representation of video data for use in encoding of next video data).
[0036] 2.1 Dependent Scalar Quantization
[0037] Dependent scalar quantization is proposed, and it refers to an approach in which the set of admissible reconstruction values for a transform coefficient depends on the values of the transform coefficient levels that precede the current transform coefficient level in reconstruction order. The main effect of this approach is that, in comparison to conventional independent scalar quantization (as used in HEVC and VTM-1), the admissible reconstruction vectors (given by all reconstructed transform coefficients of a transform block) are packed more densely in the N-dimensional vector space (N represents the number of transform coefficients in a transform block). That means, for a given average number of admissible reconstruction vectors per N-dimensional unit volume, the average distance (or MSE distortion) between an input vector and the closest reconstruction vector is reduced (for typical distributions of input vectors).
Eventually, this effect can result in an improved rate-distortion efficiency.
[0038] The approach of dependent scalar quantization is realized by: (a) defining two scalar quantizers with different reconstruction levels and (b) defining a process for switching between the two scalar quantizers.
[0039] Figure 1 is an illustration of the two scalar quantizers used in the proposed approach of dependent quantization.
[0040] The two scalar quantizers used, denoted by Q0 and Q1, are illustrated in Figure 1. The location of the available reconstruction levels is uniquely specified by a quantization step size Δ. If we neglect the fact that the actual reconstruction of transform coefficients uses integer arithmetic, the two scalar quantizers Q0 and Q1 are characterized as follows:
[0041] Q0: The reconstruction levels of the first quantizer Q0 are given by the even integer multiples of the quantization step size Δ. When this quantizer is used, a reconstructed transform coefficient t′ is calculated according to
[0042] t′ = 2·k·Δ,
[0043] where k denotes the associated transform coefficient level (transmitted quantization index).
[0044] Q1: The reconstruction levels of the second quantizer Q1 are given by the odd integer multiples of the quantization step size Δ and, in addition, the reconstruction level equal to zero. The mapping of transform coefficient levels k to reconstructed transform coefficients t′ is specified by
[0045] t′ = ( 2·k − sgn( k ) )·Δ,
[0046] where sgn(·) denotes the signum function
[0047] sgn( x ) = ( x = = 0 ? 0 : ( x < 0 ? −1 : 1 ) ).
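The two reconstruction mappings above can be sketched as follows; delta stands for the quantization step size Δ, the function names are illustrative, and the integer-arithmetic details of the actual scaling process are ignored.

```python
# Sketch of the two reconstruction mappings of dependent quantization.
# delta is the quantization step size; k is the transmitted level.
# Function names are illustrative, not from any specification.

def sgn(x):
    # Signum function as defined above.
    return 0 if x == 0 else (-1 if x < 0 else 1)

def reconstruct_q0(k, delta):
    # Q0: reconstruction levels are the even integer multiples of delta.
    return 2 * k * delta

def reconstruct_q1(k, delta):
    # Q1: odd integer multiples of delta, plus the level equal to zero.
    return (2 * k - sgn(k)) * delta
```

For example, level k = 3 reconstructs to 6Δ under Q0 but to 5Δ under Q1, while k = 0 reconstructs to zero under both quantizers.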
[0048] The scalar quantizer used (Q0 or Q1) is not explicitly signalled in the bitstream. Instead, the quantizer used for a current transform coefficient is determined by the parities of the transform coefficient levels that precede the current transform coefficient in coding/reconstruction order.
[0049] Figure 2 is an example of a state transition and quantizer selection for the proposed dependent quantization.
[0050] As illustrated in Figure 2, the switching between the two scalar quantizers (Q0 and Q1) is realized via a state machine with four states. The state can take four different values: 0, 1, 2, 3. It is uniquely determined by the parities of the transform coefficient levels preceding the current transform coefficient in coding/reconstruction order. At the start of the inverse quantization for a transform block, the state is set equal to 0. The transform coefficients are reconstructed in scanning order (i.e., in the same order they are entropy decoded). After a current transform coefficient is reconstructed, the state is updated as shown in Figure 2, where k denotes the value of the transform coefficient level. Note that the next state only depends on the current state and the parity (k & 1) of the current transform coefficient level k. With k representing the value of the current transform coefficient level, the state update can be written as
[0051] state = stateTransTable[ state ][ k & 1 ],
[0052] where stateTransTable represents the table shown in Figure 2 and the operator & specifies the bit-wise "and" operator in two's-complement arithmetic. Alternatively, the state transition can also be specified without a table look-up:
[0053] state = ( 32040 >> ( ( state << 2 ) + ( ( k & 1 ) << 1 ) ) ) & 3
[0054] At this, the 16-bit value 32040 specifies the state transition table.
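The table form and the 16-bit-constant form of the state update can be checked against each other, and the resulting state selects the quantizer for each coefficient. In the sketch below the explicit table values and the function names are illustrative (the table is derived from the constant 32040); the reconstruction formulas are those of Q0 and Q1 given earlier.

```python
# Sketch of the four-state machine for dependent quantization.
# STATE_TRANS[state][k & 1] is the explicit table derived from the
# 16-bit constant 32040; next_state() is the table-free form.

STATE_TRANS = [[0, 2], [2, 0], [1, 3], [3, 1]]  # [state][parity]

def next_state(state, k):
    # Table-free state update using the 16-bit constant.
    return (32040 >> ((state << 2) + ((k & 1) << 1))) & 3

def dequantize(levels, delta):
    # Reconstruct a list of transform coefficient levels in scan order,
    # using Q0 for states 0 and 1 and Q1 for states 2 and 3, starting
    # from state 0 as described above.
    state, out = 0, []
    for k in levels:
        if state < 2:  # Q0: even multiples of delta
            out.append(2 * k * delta)
        else:          # Q1: odd multiples of delta, plus zero
            s = 0 if k == 0 else (-1 if k < 0 else 1)
            out.append((2 * k - s) * delta)
        state = STATE_TRANS[state][k & 1]
    return out
```

For instance, the level sequence [1, 1, 0] with Δ = 1 starts in state 0 (Q0, giving 2Δ), moves to state 2 (Q1, giving Δ), then to state 3 (Q1, giving 0).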
[0055] The state uniquely specifies the scalar quantizer used. If the state for a current transform coefficient is equal to 0 or 1, the scalar quantizer Q0 is used. Otherwise (the state is equal to 2 or 3), the scalar quantizer Q1 is used.
[0056] The detailed scaling process is described as follows.
[0057] 7.3.4.9 Residual coding syntax
(The residual coding syntax tables are reproduced as images in the original publication and are omitted here.)
[0058] 8.4.3 Scaling process for transform coefficients
[0059] Inputs to this process are: [0060] a luma location ( xTbY, yTbY ) specifying the top-left sample of the current luma transform block relative to the top-left luma sample of the current picture,
[0061] a variable nTbW specifying the transform block width,
[0062] a variable nTbH specifying the transform block height,
[0063] a variable cIdx specifying the colour component of the current block,
[0064] a variable bitDepth specifying the bit depth of the current colour component.
[0065] Output of this process is the (nTbW)x(nTbH) array d of scaled transform coefficients with elements d[ x ][ y ].
[0066] The quantization parameter qP is derived as follows:
[0067] If cIdx is equal to 0, the following applies:
[0068] qP = Qp'Y (8-383)
[0069] Otherwise, if cIdx is equal to 1, the following applies:
[0070] qP = Qp'Cb (8-384)
[0071] Otherwise (cIdx is equal to 2), the following applies:
[0072] qP = Qp'Cr (8-385)
[0073] The variables bdShift, rectNorm and bdOffset are derived as follows:
[0074] bdShift = bitDepth + ( ( ( Log2( nTbW ) + Log2( nTbH ) ) & 1 ) * 8 +
( Log2( nTbW ) + Log2( nTbH ) ) / 2 ) - 5 + dep_quant_enabled_flag (8-386)
[0075] rectNorm = ( ( Log2( nTbW ) + Log2( nTbH ) ) & 1 ) = = 1 ? 181 : 1 (8-387)
[0076] bdOffset = ( 1 << bdShift ) >> 1 (8-388)
[0077] The list levelScale[ ] is specified as levelScale[ k ] = { 40, 45, 51, 57, 64, 72 } with k = 0..5.
[0078] For the derivation of the scaled transform coefficients d[ x ][ y ] with x = 0..nTbW - 1, y = 0..nTbH - 1, the following applies:
[0079] The intermediate scaling factor m[ x ] [ y ] is set equal to 16.
[0080] The scaling factor ls[ x ] [ y ] is derived as follows:
[0081] - If dep_quant_enabled_flag is equal to 1, the following applies:
[0082] ls[ x ][ y ] = ( m[ x ][ y ] * levelScale[ (qP + 1) % 6 ] ) << ( (qP + 1) / 6 ) (8-389)
[0083] - Otherwise (dep_quant_enabled_flag is equal to 0), the following applies:
[0084] ls[ x ][ y ] = ( m[ x ][ y ] * levelScale[ qP % 6 ] ) << ( qP / 6 ) (8-390)
[0085] The value dnc[ x ][ y ] is derived as follows: [0086] dnc[ x ][ y ] = (8-391)
[0087] ( TransCoeffLevel[ xTbY ][ yTbY ][ cIdx ][ x ][ y ] * ls[ x ][ y ] * rectNorm + bdOffset ) >> bdShift
[0088] The scaled transform coefficient d[ x ][ y ] is derived as follows:
[0089] d[ x ][ y ] = Clip3( CoeffMin, CoeffMax, dnc[ x ][ y ] ) (8-392)
[0090] Suppose the quantization parameter (QP) of the current CU is QPc. With dependent quantization, QPc + 1 is actually used for quantization according to equation 8-389. However, if dependent quantization is not used, QPc is used for quantization.
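The effect of dep_quant_enabled_flag in equations 8-386 to 8-392 can be sketched as follows. This is an illustrative Python sketch assuming the default flat scaling list m[ x ][ y ] = 16 and a single coefficient; the function and parameter names are illustrative, not from the specification:

```python
# Dequantization of one coefficient per 8-386..8-392. With dependent
# quantization, qP + 1 indexes levelScale (8-389) and bdShift grows by 1.
LEVEL_SCALE = [40, 45, 51, 57, 64, 72]

def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def scale_coeff(level, qP, bit_depth, log2W, log2H, dep_quant_enabled_flag,
                coeff_min=-32768, coeff_max=32767):
    m = 16  # default (flat) intermediate scaling factor
    sum_log2 = log2W + log2H
    bdShift = bit_depth + ((sum_log2 & 1) * 8 + sum_log2 // 2) - 5 + dep_quant_enabled_flag
    rectNorm = 181 if (sum_log2 & 1) == 1 else 1
    bdOffset = (1 << bdShift) >> 1
    q = qP + 1 if dep_quant_enabled_flag else qP  # 8-389 vs. 8-390
    ls = (m * LEVEL_SCALE[q % 6]) << (q // 6)
    dnc = (level * ls * rectNorm + bdOffset) >> bdShift
    return clip3(coeff_min, coeff_max, dnc)
```

For a 4x4 luma block at 8-bit depth, level 1 and qP = 4 dequantize to 32 without dependent quantization and to 18 with it, reflecting the effectively larger quantization step of qP + 1.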
[0091] 2.2 Deblocking filter
[0092] A deblocking filter process is performed for each CU in the same order as the decoding process. First vertical edges are filtered (horizontal filtering) then horizontal edges are filtered (vertical filtering). Filtering is applied to 8x8 block boundaries which are determined to be filtered, both for luma and chroma components. 4x4 block boundaries are not processed in order to reduce the complexity.
[0093] Figure 3 illustrates the overall flow of deblocking filter processes. A boundary can have three filtering status values: no filtering, weak filtering and strong filtering. Each filtering decision is based on boundary strength, Bs, and threshold values, b and tc.
[0094] Figure 3 is an example of an overall processing flow of a deblocking filter process.
[0095] 2.2.1 Boundary decision
[0096] Two kinds of boundaries are involved in the deblocking filter process: TU boundaries and PU boundaries. CU boundaries are also considered, since CU boundaries are necessarily also TU and PU boundaries. When PU shape is 2NxN (N > 4) and RQT depth is equal to 1 , TU boundaries at 8x8 block grid and PU boundaries between each PU inside the CU are also involved in the filtering.
[0097] 2.2.2. Boundary strength calculation
[0098] The boundary strength (Bs) reflects how strong a filtering process may be needed for the boundary. A value of 2 for Bs indicates strong filtering, 1 indicates weak filtering and 0 indicates no deblocking filtering.
[0099] Let P and Q be defined as blocks which are involved in the filtering, where P represents the block located to the left (vertical edge case) or above (horizontal edge case) the boundary and Q represents the block located to the right (vertical edge case) or below (horizontal edge case) the boundary. Figure 4 illustrates how the Bs value is calculated based on the intra coding mode, the existence of non-zero transform coefficients, reference picture, number of motion vectors and motion vector difference.
[00100] At the CTU boundary, information on every second block (on a 4x4 grid) to the left or above is re-used as depicted in Figure 5 in order to reduce line buffer memory requirement. Figure 5 is an example of referred information for Bs calculation at a CTU boundary.
[00101] Threshold variables
[00102] Threshold values b' and tc' are involved in the filter on/off decision, strong and weak filter selection and weak filtering process. These are derived from the value of the luma quantization parameter Q as shown in Table 2-1. The derivation process of Q is described in 2.2.3.1.
[00103] The variable b is derived from b' as follows:
[00104] b = b' * ( 1 << ( BitDepthY - 8 ) )
[00105] The variable tc is derived from tc' as follows:
[00106] tc = tc' * ( 1 << ( BitDepthY - 8 ) )
[00107] The derivation of tc' and b' is described as follows.
[00108] 2.2.3.1 tc' and b'
[00109] The decoding process of HEVC design for tc' and b' is described in sub-clause 8.7.2.5.3.
[00110] 8.7.2.5.3 Decision process for luma block edges
[00111] Inputs to this process are:
[00112] - a luma picture sample array recPictureL, [00113] - a luma location ( xCb, yCb ) specifying the top-left sample of the current luma coding block relative to the top-left luma sample of the current picture,
[00114] - a luma location ( xBl, yBl ) specifying the top-left sample of the current luma block relative to the top-left sample of the current luma coding block,
[00115] - a variable edgeType specifying whether a vertical (EDGE_VER) or a horizontal (EDGE_HOR) edge is filtered,
[00116] - a variable bS specifying the boundary filtering strength.
[00117] Outputs of this process are:
[00118] - the variables dE, dEp, and dEq containing decisions,
[00119] - the variables b and tc.
[00120] If edgeType is equal to EDGE_VER, the sample values pi,k and qi,k with i = 0..3 and k = 0 and 3 are derived as follows:
[00121] qi,k = recPictureL[ xCb + xBl + i ][ yCb + yBl + k ] (8-284)
[00122] pi,k = recPictureL[ xCb + xBl - i - 1 ][ yCb + yBl + k ] (8-285)
[00123] Otherwise (edgeType is equal to EDGE_HOR), the sample values pi,k and qi,k with i = 0..3 and k = 0 and 3 are derived as follows:
[00124] qi,k = recPictureL[ xCb + xBl + k ][ yCb + yBl + i ] (8-286)
[00125] pi,k = recPictureL[ xCb + xBl + k ][ yCb + yBl - i - 1 ] (8-287)
[00126] The variables QpQ and QpP are set equal to the QpY values of the coding units which include the coding blocks containing the sample q0,0 and p0,0, respectively.
[00127] A variable qPL is derived as follows:
[00128] qPL = ( ( QpQ + QpP + 1 ) >> 1 ) (8-288)
[00129] The value of the variable b' is determined as specified in Table 8-11 based on the luma quantization parameter Q derived as follows:
[00130] Q = Clip3( 0, 51, qPL + ( slice_beta_offset_div2 << 1 ) ) (8-289)
[00131] where slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 for the slice that contains sample q0,0.
[00132] The variable b is derived as follows:
[00133] b = b' * ( 1 << ( BitDepthY - 8 ) ) (8-290) [00134] The value of the variable tc' is determined as specified in Table 8-11 based on the luma quantization parameter Q derived as follows:
[00135] Q = Clip3( 0, 53, qPL + 2 * ( bS - 1 ) + ( slice_tc_offset_div2 << 1 ) ) (8-291)
[00136] where slice_tc_offset_div2 is the value of the syntax element slice_tc_offset_div2 for the slice that contains sample q0,0.
[00137] The variable tc is derived as follows:
[00138] tc = tc' * ( 1 << ( BitDepthY - 8 ) ) (8-292)
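Equations 8-288 to 8-292 can be condensed into a short sketch. This is illustrative Python with names of my own; the actual b' and tc' values come from the Table 8-11 look-up, which is omitted here:

```python
# Derivation of the Table 8-11 indices and the bit-depth scaling factor
# for the deblocking thresholds b and tc (equations 8-288..8-292).
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def threshold_indices(qp_q, qp_p, bS, slice_beta_offset_div2,
                      slice_tc_offset_div2, bit_depth_y):
    qPL = (qp_q + qp_p + 1) >> 1                                          # 8-288
    q_beta = clip3(0, 51, qPL + (slice_beta_offset_div2 << 1))            # index for b'
    q_tc = clip3(0, 53, qPL + 2 * (bS - 1) + (slice_tc_offset_div2 << 1)) # index for tc'
    scale = 1 << (bit_depth_y - 8)  # then b = b' * scale, tc = tc' * scale
    return q_beta, q_tc, scale
```

Note that the tc index additionally rises with the boundary strength bS, so stronger boundaries tolerate larger sample corrections.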
[00139] Depending on the value of edgeType, the following applies:
[00140]— If edgeType is equal to EDGE_VER, the following ordered steps apply:
[00141] The variables dpq0, dpq3, dp, dq, and d are derived as follows:
[00142] dp0 = Abs( p2,0 - 2 * p1,0 + p0,0 ) (8-293)
[00143] dp3 = Abs( p2,3 - 2 * p1,3 + p0,3 ) (8-294)
[00144] dq0 = Abs( q2,0 - 2 * q1,0 + q0,0 ) (8-295)
[00145] dq3 = Abs( q2,3 - 2 * q1,3 + q0,3 ) (8-296)
[00146] dpq0 = dp0 + dq0 (8-297)
[00147] dpq3 = dp3 + dq3 (8-298)
[00148] dp = dp0 + dp3 (8-299)
[00149] dq = dq0 + dq3 (8-300)
[00150] d = dpq0 + dpq3 (8-301)
[00151] The variables dE, dEp, and dEq are set equal to 0.
[00152] When d is less than b, the following ordered steps apply:
[00153] The variable dpq is set equal to 2 * dpq0.
[00154] For the sample location ( xCb + xBl, yCb + yBl ), the decision process for a luma sample as specified in subclause 8.7.2.5.6 is invoked with sample values pi,0, qi,0 with i = 0..3, the variables dpq, b, and tc as inputs, and the output is assigned to the decision dSam0.
[00155] The variable dpq is set equal to 2 * dpq3.
[00156] For the sample location ( xCb + xBl, yCb + yBl + 3 ), the decision process for a luma sample as specified in subclause 8.7.2.5.6 is invoked with sample values pi,3, qi,3 with i = 0..3, the variables dpq, b, and tc as inputs, and the output is assigned to the decision dSam3.
[00157] The variable dE is set equal to 1. [00158] When dSam0 is equal to 1 and dSam3 is equal to 1, the variable dE is set equal to 2.
[00159] When dp is less than ( b + ( b >> 1 ) ) >> 3, the variable dEp is set equal to 1.
[00160] When dq is less than ( b + ( b >> 1 ) ) >> 3, the variable dEq is set equal to 1.
[00161] - Otherwise (edgeType is equal to EDGE_HOR), the following ordered steps apply:
[00162] The variables dpq0, dpq3, dp, dq, and d are derived as follows:
[00163] dp0 = Abs( p2,0 - 2 * p1,0 + p0,0 ) (8-302)
[00164] dp3 = Abs( p2,3 - 2 * p1,3 + p0,3 ) (8-303)
[00165] dq0 = Abs( q2,0 - 2 * q1,0 + q0,0 ) (8-304)
[00166] dq3 = Abs( q2,3 - 2 * q1,3 + q0,3 ) (8-305)
[00167] dpq0 = dp0 + dq0 (8-306)
[00168] dpq3 = dp3 + dq3 (8-307)
[00169] dp = dp0 + dp3 (8-308)
[00170] dq = dq0 + dq3 (8-309)
[00171] d = dpq0 + dpq3 (8-310)
[00172] The variables dE, dEp, and dEq are set equal to 0.
[00173] When d is less than b, the following ordered steps apply:
[00174] The variable dpq is set equal to 2 * dpq0.
[00175] For the sample location ( xCb + xBl, yCb + yBl ), the decision process for a luma sample as specified in subclause 8.7.2.5.6 is invoked with sample values p0,0, p3,0, q0,0, and q3,0, the variables dpq, b, and tc as inputs, and the output is assigned to the decision dSam0.
[00176] The variable dpq is set equal to 2 * dpq3.
[00177] For the sample location ( xCb + xBl + 3, yCb + yBl ), the decision process for a luma sample as specified in subclause 8.7.2.5.6 is invoked with sample values p0,3, p3,3, q0,3, and q3,3, the variables dpq, b, and tc as inputs, and the output is assigned to the decision dSam3.
[00178] The variable dE is set equal to 1.
[00179] When dSam0 is equal to 1 and dSam3 is equal to 1, the variable dE is set equal to 2.
[00180] When dp is less than ( b + ( b >> 1 ) ) >> 3, the variable dEp is set equal to 1.
[00181] When dq is less than ( b + ( b >> 1 ) ) >> 3, the variable dEq is set equal to 1.
[00182] Table 8-11 below shows derivation of threshold variables b' and tc' from input Q.
[00183] 2.2.4 Filter on/off decision for 4 lines
[00184] The filter on/off decision is made using 4 lines grouped as a unit, to reduce computational complexity. Figure 6 illustrates the pixels involved in the decision. The 6 pixels in the two red boxes in the first 4 lines are used to determine whether the filter is on or off for those 4 lines. The 6 pixels in the two red boxes in the second group of 4 lines are used to determine whether the filter is on or off for the second group of 4 lines.
[00185] Figure 6 shows an example of pixels involved in an on/off decision and a strong/weak filter selection.
[00186] The following variables are defined:
[00187] dp0 = | p2,0 - 2*p1,0 + p0,0 |
[00188] dp3 = | p2,3 - 2*p1,3 + p0,3 |
[00189] dq0 = | q2,0 - 2*q1,0 + q0,0 |
[00190] dq3 = | q2,3 - 2*q1,3 + q0,3 |
[00191] If dp0+dq0+dp3+dq3 < b, filtering for the first four lines is turned on and the strong/weak filter selection process is applied. If this condition is not met, no filtering is done for the first 4 lines.
[00192] Additionally, if the condition is met, the variables dE, dEp1 and dEq1 are set as follows:
[00193] dE is set equal to 1
[00194] If dp0 + dp3 < (b + ( b >> 1 )) >> 3, the variable dEp1 is set equal to 1
[00195] If dq0 + dq3 < (b + ( b >> 1 )) >> 3, the variable dEq1 is set equal to 1
[00196] A filter on/off decision is made in a similar manner as described above for the second group of 4 lines.
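The on/off decision for one group of 4 lines can be sketched as follows. This is an illustrative Python sketch; the indexing convention p[i][k] (sample at distance i from the edge on line k) and the function name are assumptions of the sketch, not spec notation:

```python
# Filter on/off decision of section 2.2.4 for one group of 4 lines:
# second differences are evaluated on lines 0 and 3 only.
def filter_on_off(p, q, beta):
    """Returns (on, dE, dEp1, dEq1) for the group of 4 lines."""
    dp0 = abs(p[2][0] - 2 * p[1][0] + p[0][0])
    dp3 = abs(p[2][3] - 2 * p[1][3] + p[0][3])
    dq0 = abs(q[2][0] - 2 * q[1][0] + q[0][0])
    dq3 = abs(q[2][3] - 2 * q[1][3] + q[0][3])
    if dp0 + dq0 + dp3 + dq3 >= beta:
        return False, 0, 0, 0  # filtering is off for these 4 lines
    side_thresh = (beta + (beta >> 1)) >> 3
    dEp1 = 1 if dp0 + dp3 < side_thresh else 0
    dEq1 = 1 if dq0 + dq3 < side_thresh else 0
    return True, 1, dEp1, dEq1
```

A perfectly smooth (flat) neighbourhood has zero second differences, so filtering is on for any positive b, and both side flags are set.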
[00197] 2.2.5. Strong/weak filter selection for 4 lines [00198] If filtering is turned on, a decision is made between strong and weak filtering. The pixels involved are the same as those used for the filter on/off decision. If the following two sets of conditions are met, a strong filter is used for filtering of the first 4 lines. Otherwise, a weak filter is used.
[00199] 1) 2*(dp0+dq0) < ( b >> 2 ), | p3,0 - p0,0 | + | q0,0 - q3,0 | < ( b >> 3 ) and | p0,0 - q0,0 | < ( 5*tc + 1 ) >> 1
[00200] 2) 2*(dp3+dq3) < ( b >> 2 ), | p3,3 - p0,3 | + | q0,3 - q3,3 | < ( b >> 3 ) and | p0,3 - q0,3 | < ( 5*tc + 1 ) >> 1
[00201] The decision on whether to select strong or weak filtering for the second group of 4 lines is made in a similar manner.
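The per-line condition set can be sketched as follows (illustrative Python; condition set 1 above corresponds to k = 0 and set 2 to k = 3, and the function name is my own):

```python
# Strong/weak selection of section 2.2.5 for one line k of the group.
# dp and dq are the second-difference values of that line (dpk and dqk);
# p[i][k] is the sample at distance i from the edge on line k.
def line_uses_strong(p, q, dp, dq, beta, tc, k):
    return (2 * (dp + dq) < (beta >> 2)
            and abs(p[3][k] - p[0][k]) + abs(q[0][k] - q[3][k]) < (beta >> 3)
            and abs(p[0][k] - q[0][k]) < ((5 * tc + 1) >> 1))
```

A strong filter is chosen for the group only when the condition holds for both k = 0 and k = 3; otherwise the weak filter is used.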
[00202] 2.2.6 Strong filtering
[00203] For strong filtering, the filtered pixel values are obtained by the following equations. Note that three pixels are modified using four pixels as an input for each P and Q block, respectively.
[00204] p0' = ( p2 + 2*p1 + 2*p0 + 2*q0 + q1 + 4 ) >> 3
[00205] q0' = ( p1 + 2*p0 + 2*q0 + 2*q1 + q2 + 4 ) >> 3
[00206] p1' = ( p2 + p1 + p0 + q0 + 2 ) >> 2
[00207] q1' = ( p0 + q0 + q1 + q2 + 2 ) >> 2
[00208] p2' = ( 2*p3 + 3*p2 + p1 + p0 + q0 + 4 ) >> 3
[00209] q2' = ( p0 + q0 + q1 + 3*q2 + 2*q3 + 4 ) >> 3
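These equations translate directly into code. The following is an illustrative Python sketch for one line across the edge, with p = [p0, p1, p2, p3] and q = [q0, q1, q2, q3]:

```python
# Strong luma filter of section 2.2.6: three samples per side are modified
# from four input samples per side.
def strong_filter(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    p0f = (p2 + 2 * p1 + 2 * p0 + 2 * q0 + q1 + 4) >> 3
    q0f = (p1 + 2 * p0 + 2 * q0 + 2 * q1 + q2 + 4) >> 3
    p1f = (p2 + p1 + p0 + q0 + 2) >> 2
    q1f = (p0 + q0 + q1 + q2 + 2) >> 2
    p2f = (2 * p3 + 3 * p2 + p1 + p0 + q0 + 4) >> 3
    q2f = (p0 + q0 + q1 + 3 * q2 + 2 * q3 + 4) >> 3
    return [p0f, p1f, p2f], [q0f, q1f, q2f]
```

For a step edge of 0 on the P side and 8 on the Q side, the filter turns the step into a ramp: strong_filter([0, 0, 0, 0], [8, 8, 8, 8]) returns ([3, 2, 1], [5, 6, 7]).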
[00210] 2.2.7 Weak filtering
[00211] Δ is defined as follows.
[00212] Δ = ( 9 * ( q0 - p0 ) - 3 * ( q1 - p1 ) + 8 ) >> 4
[00213] When abs(Δ) is less than tc * 10,
[00214] Δ = Clip3( -tc, tc, Δ )
[00215] p0' = Clip1Y( p0 + Δ )
[00216] q0' = Clip1Y( q0 - Δ )
[00217] If dEp1 is equal to 1,
[00218] Δp = Clip3( -( tc >> 1 ), tc >> 1, ( ( ( p2 + p0 + 1 ) >> 1 ) - p1 + Δ ) >> 1 )
[00219] p1' = Clip1Y( p1 + Δp )
[00220] If dEq1 is equal to 1,
[00221] Δq = Clip3( -( tc >> 1 ), tc >> 1, ( ( ( q2 + q0 + 1 ) >> 1 ) - q1 - Δ ) >> 1 ) [00222] q1' = Clip1Y( q1 + Δq )
[00223] Note that a maximum of two pixels are modified using three pixels as an input for each P and Q block, respectively.
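The weak filter can be sketched as follows (illustrative Python; clip1 stands in for the Clip1Y bit-depth clip and the Δ symbols are written as delta, dp, dq):

```python
# Weak luma filter of section 2.2.7: at most two samples per side change.
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def weak_filter(p, q, tc, dEp1, dEq1, bit_depth=8):
    p0, p1, p2 = p[:3]
    q0, q1, q2 = q[:3]
    def clip1(v):  # Clip1Y: clip to the valid sample range
        return clip3(0, (1 << bit_depth) - 1, v)
    delta = (9 * (q0 - p0) - 3 * (q1 - p1) + 8) >> 4
    if abs(delta) >= tc * 10:
        return [p0, p1], [q0, q1]  # filtering is skipped for this line
    delta = clip3(-tc, tc, delta)
    p0f, q0f = clip1(p0 + delta), clip1(q0 - delta)
    p1f, q1f = p1, q1
    if dEp1:
        dp = clip3(-(tc >> 1), tc >> 1, (((p2 + p0 + 1) >> 1) - p1 + delta) >> 1)
        p1f = clip1(p1 + dp)
    if dEq1:
        dq = clip3(-(tc >> 1), tc >> 1, (((q2 + q0 + 1) >> 1) - q1 - delta) >> 1)
        q1f = clip1(q1 + dq)
    return [p0f, p1f], [q0f, q1f]
```

For a small step (10 on the P side, 14 on the Q side) with tc = 2 and both side flags off, only p0 and q0 move, each toward the edge by the clipped delta of 2.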
[00224] 2.2.8 Chroma Filtering
[00225] The boundary strength Bs for chroma filtering is inherited from luma. If Bs > 1 , chroma filtering is performed. No filter selection process is performed for chroma, since only one filter can be applied. The filtered sample values p0’ and q0’ are derived as follows.
[00226] Δ = Clip3( -tc, tc, ( ( ( ( q0 - p0 ) << 2 ) + p1 - q1 + 4 ) >> 3 ) )
[00227] p0' = Clip1C( p0 + Δ )
[00228] q0' = Clip1C( q0 - Δ )
[00229] When the 4:2:2 chroma format is in use, each chroma block has a rectangular shape and is coded using up to two square transforms. This process introduces additional boundaries between the transform blocks in chroma. These boundaries are not deblocked (thick dashed lines running horizontally through the center in Figure 7).
[00230] Figure 7 is an example of a deblocking behavior in a 4:2:2 chroma format.
[00231] 2.3 Extension of quantization parameter value range
[00232] The QP range is extended from [0, 51] to [0, 63], and tc' and b' are derived as follows. The table sizes of b and tc are increased from 52 and 54 to 64 and 66, respectively.
[00233] 8.7.2.5.3 Decision process for luma block edges
[00234] The variable qPL is derived as follows:
[00235] qPL = ( ( QpQ + QpP + 1 ) >> 1 ) (2-38)
[00236] The value of the variable b' is determined as specified in Table 2-3 based on the luma quantization parameter Q derived as follows:
[00237] Q = Clip3( 0, 63, qPL + ( slice_beta_offset_div2 << 1 ) ) (2-39)
[00238] where slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 for the slice that contains sample q0,0.
[00239] The variable b is derived as follows:
[00240] b = b' * ( 1 << ( BitDepthY - 8 ) ) (2-40)
[00241] The value of the variable tc' is determined as specified in Table 2-3 based on the luma quantization parameter Q derived as follows:
[00242] Q = Clip3( 0, 65, qPL + 2 * ( bS - 1 ) + ( slice_tc_offset_div2 << 1 ) ) (2-41) [00243] where slice_tc_offset_div2 is the value of the syntax element slice_tc_offset_div2 for the slice that contains sample q0,0.
[00244] The variable tc is derived as follows:
[00245] tc = tc' * ( 1 << ( BitDepthY - 8 ) )
[00246] Table 2-3 below is for a derivation of threshold variables b' and tc' from input Q.
[00247] 2.4 Initialization of context variables
[00248] In context-based adaptive binary arithmetic coding (CABAC), initial state of context variables depends on QP of the slice. The initialization process is described as follows.
[00249] 9.3.2.2 Initialization process for context variables
[00250] Outputs of this process are the initialized CABAC context variables indexed by ctxTable and ctxIdx.
[00251] Table 9-5 to Table 9-31 contain the values of the 8-bit variable initValue used in the initialization of context variables that are assigned to all syntax elements in subclauses 7.3.8.1 through 7.3.8.11, except end_of_slice_segment_flag, end_of_sub_stream_one_bit, and pcm_flag.
[00252] For each context variable, the two variables pStateIdx and valMps are initialized.
[00253] NOTE 1 - The variable pStateIdx corresponds to a probability state index and the variable valMps corresponds to the value of the most probable symbol as further described in subclause 9.3.4.3.
[00254] From the 8-bit table entry initValue, the two 4-bit variables slopeIdx and offsetIdx are derived as follows: [00255] slopeIdx = initValue >> 4
offsetIdx = initValue & 15 (9-4)
[00256] The variables m and n, used in the initialization of context variables, are derived from slopeIdx and offsetIdx as follows:
[00257] m = slopeIdx * 5 - 45
n = ( offsetIdx << 3 ) - 16 (9-5)
[00258] The two values assigned to pStateIdx and valMps for the initialization are derived from SliceQpY, which is derived in Equation 7-40. Given the variables m and n, the initialization is specified as follows:
[00259] preCtxState = Clip3( 1, 126, ( ( m * Clip3( 0, 51, SliceQpY ) ) >> 4 ) + n )
valMps = ( preCtxState <= 63 ) ? 0 : 1
pStateIdx = valMps ? ( preCtxState - 64 ) : ( 63 - preCtxState ) (9-6)
[00260] In Table 9-4, the ctxIdx for which initialization is needed for each of the three initialization types, specified by the variable initType, are listed. Also listed is the table number that includes the values of initValue needed for the initialization. For P and B slice types, the derivation of initType depends on the value of the cabac_init_flag syntax element. The variable initType is derived as follows:
[00261] if( slice_type = = I )
initType = 0
else if( slice_type = = P )
initType = cabac_init_flag ? 2 : 1 (9-7)
else
initType = cabac_init_flag ? 1 : 2
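The initialization of equations 9-4 to 9-7 can be sketched as follows (illustrative Python; function and variable names are my own):

```python
# CABAC context-variable initialization (equations 9-4..9-7): the initial
# probability state depends on the slice QP through a linear model (m, n).
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def init_context(init_value, slice_qp_y):
    slope_idx = init_value >> 4                 # 9-4
    offset_idx = init_value & 15
    m = slope_idx * 5 - 45                      # 9-5
    n = (offset_idx << 3) - 16
    pre = clip3(1, 126, ((m * clip3(0, 51, slice_qp_y)) >> 4) + n)  # 9-6
    val_mps = 0 if pre <= 63 else 1
    p_state_idx = (pre - 64) if val_mps else (63 - pre)
    return p_state_idx, val_mps

def init_type(slice_type, cabac_init_flag):
    """Selection of the initialization type (9-7)."""
    if slice_type == "I":
        return 0
    if slice_type == "P":
        return 2 if cabac_init_flag else 1
    return 1 if cabac_init_flag else 2          # B slice
```

For example, initValue 154 decodes to slopeIdx 9 and offsetIdx 10, hence m = 0 and n = 64: the context starts at the equiprobable state (pStateIdx 0, valMps 1) regardless of the slice QP.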
[00262] 3. Examples of Problems Solved by Embodiments
[00263] In dependent quantization, QPc + 1 is used for quantization. However, in the deblocking filter process, QPc is used, which is inconsistent.
[00264] In addition, if QPc + 1 is used in the deblocking filter process, it is unclear how to handle the mapping table between Q and tc/b, since QPc may be set to the maximum value, i.e., 63.
[00265] 4. Examples of embodiments
[00266] To address the problem, several methods can be applied to the deblocking filter process which relies on the quantization parameters of blocks to be filtered. It may also be applicable to other kinds of procedures, like bilateral filter, which depend on the quantization parameter associated with one block.
[00267] The detailed listing of techniques below should be considered as examples to explain general concepts. These inventions should not be interpreted in a narrow way. Furthermore, the techniques can be combined in any manner. Denote the allowed minimum and maximum QP as QPmin and QPmax, respectively. Denote the signaled quantization parameter of the current CU as QPc; the quantization/dequantization process relies on QPc + N to derive the quantization step (e.g., N = 1 in the current dependent quantization design). Denote Tc'[n] and b'[n] as the n-th entries of the Tc' and b' tables.
1. It is proposed that whether to and how to apply deblocking filter may depend on whether dependent scalar quantization is used or not.
a. For example, QP used in deblocking filter depends on whether dep_quant_enabled_flag is equal to 0 or 1.
2. It is proposed that one same QP is used in dependent quantization, deblocking filter or/and any other process using QP as an input parameter.
a. In one example, QPc instead of QPc + N is used in dependent quantization. N is an integer such as 1 , 3, 6, 7 or -1, -3, -6, -7.
b. In one example, QPc + N is used in deblocking filter or/and any other process using QP as an input parameter. N is an integer such as 1, 3, 6, 7 or -1, -3, -6, -7.
c. QPc + N is clipped to a valid range before being used.
3. It is proposed that the allowed QP range is set to [QPmin - N, QPmax - N] instead of [QPmin, QPmax] for dependent quantization when QPc + N is used in it.
a. Alternatively, the allowed QP range is set to [Max(QPmin - N, QPmin), Min(QPmax - N, QPmax)].
4. It is proposed that, compared with the case where dependent quantization is not used, a weaker/stronger deblocking filter is used when dependent quantization is used.
a. In one example, the encoder selects a weaker/stronger deblocking filter when dependent quantization is enabled and signals it to the decoder.
b. In one example, smaller/larger threshold values Tc and b are used implicitly at both encoder and decoder when dependent quantization is enabled. 5. When QPc + N is used in the deblocking filter process, more entries may be required in the Tc' and b' tables (e.g., Table 2-3) for QPmax + N.
a. Alternatively, the same table may be utilized; however, QPc + N is first clipped to be within the range [QPmin, QPmax].
b. In one example, Tc’ table is extended as: tc’[66] = 50 and tc’[67] = 52.
c. In one example, Tc’ table is extended as: tc’[66] = 49 and tc’[67] = 50.
d. In one example, Tc’ table is extended as: tc’[66] = 49 and tc’[67] = 51.
e. In one example, Tc’ table is extended as: tc’[66] = 48 and tc’[67] = 50.
f. In one example, Tc’ table is extended as: tc’[66] = 50 and tc’[67] = 51.
g. In one example, b' table is extended as: b'[64] = 90 and b'[65] = 92.
h. In one example, b' table is extended as: b'[64] = 89 and b'[65] = 90.
i. In one example, b' table is extended as: b'[64] = 89 and b'[65] = 91.
j. In one example, b' table is extended as: b'[64] = 88 and b'[65] = 90.
k. In one example, b' table is extended as: b'[64] = 90 and b'[65] = 91.
6. The initialization of CABAC context depends on the QPc + N instead of QPc when dependent quantization is enabled.
7. The high-level signaled quantization parameter may be assigned different semantics based on whether dependent quantization is used or not.
a. In one example, for the qp indicated in the picture parameter set/picture header (i.e., init_qp_minus26 in HEVC), it may have different semantics.
i. When dependent quantization is OFF, init_qp_minus26 plus 26 specifies the initial value of SliceQpY for each slice referring to the PPS, or the initial value of all tiles' quantization parameters referring to the PPS/picture header.
ii. When dependent quantization is ON, init_qp_minus26 plus 27 specifies the initial value of SliceQpY for each slice referring to the PPS, or the initial value of all tiles' quantization parameters referring to the PPS/picture header.
b. In one example, for the delta qp indicated in the slice header/tile header/tile group header (i.e., slice_qp_delta in HEVC), it may have different semantics.
i. When dependent quantization is OFF, slice_qp_delta specifies the initial value of QpY to be used for the coding blocks in the slice/tile/tile groups until modified by the value of CuQpDeltaVal in the coding unit layer. The initial value of the QpY quantization parameter for the slice/tile/tile groups, SliceQpY, is derived as follows:
SliceQpY = 26 + init_qp_minus26 + slice_qp_delta
ii. When dependent quantization is ON, slice_qp_delta specifies the initial value of QpY to be used for the coding blocks in the slice/tile/tile groups until modified by the value of CuQpDeltaVal in the coding unit layer. The initial value of the QpY quantization parameter for the slice/tile/tile groups, SliceQpY, is derived as follows:
SliceQpY = 26 + init_qp_minus26 + slice_qp_delta + 1
8. Whether to enable or disable the proposed method may be signaled in SPS/PPS/VPS/sequence header/picture header/slice header/tile group header/group of CTUs, etc.
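Bullets 2 and 5a above can be sketched as follows (illustrative Python; the function name, parameter names, and the default range are mine, with the range [0, 63] taken from the extended QP range of section 2.3):

```python
# Bullets 2b/2c and 5a: feed QPc + N to the deblocking filter when dependent
# scalar quantization is enabled, clipping back into the legal QP range
# before any Tc'/b' table look-up.
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def deblocking_qp(qpc, n, dep_quant_enabled, qp_min=0, qp_max=63):
    qp = qpc + n if dep_quant_enabled else qpc
    return clip3(qp_min, qp_max, qp)
```

With this clipping, QPc = QPmax stays usable under dependent quantization, which addresses the Table 2-3 out-of-range concern raised in section 3; the alternative in bullet 5b-k is to extend the tables instead.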
[00268] 5. Example of another Embodiment
[00269] In one embodiment, QPc + 1 is used in the deblocking filter. The newly added parts are highlighted.
[00270] 8.7.2.5.3 Decision process for luma block edges
[00271] Inputs to this process are:
[00272] - a luma picture sample array recPictureL,
[00273] - a luma location ( xCb, yCb ) specifying the top-left sample of the current luma coding block relative to the top-left luma sample of the current picture,
[00274] - a luma location ( xBl, yBl ) specifying the top-left sample of the current luma block relative to the top-left sample of the current luma coding block,
[00275] - a variable edgeType specifying whether a vertical (EDGE_VER) or a horizontal (EDGE_HOR) edge is filtered,
[00276] - a variable bS specifying the boundary filtering strength.
[00277] Outputs of this process are:
[00278] - the variables dE, dEp, and dEq containing decisions,
[00279] - the variables b and tc. [00280] If edgeType is equal to EDGE_VER, the sample values pi,k and qi,k with i = 0..3 and k = 0 and 3 are derived as follows :
[00281] qi,k = recPictureL[ xCb + xBl + i ][ yCb + yBl + k ] (8-284)
[00282] pi,k = recPictureL[ xCb + xBl - i - 1 ][ yCb + yBl + k ] (8-285)
[00283] Otherwise (edgeType is equal to EDGE_HOR), the sample values pi,k and qi,k with i = 0..3 and k = 0 and 3 are derived as follows:
[00284] qi,k = recPictureL[ xCb + xBl + k ][ yCb + yBl + i ] (8-286)
[00285] pi,k = recPictureL[ xCb + xBl + k ][ yCb + yBl - i - 1 ] (8-287)
[00286] The variables QpQ and QpP are set equal to the QpY values of the coding units which include the coding blocks containing the sample q0,0 and p0,0, respectively.
[00287] If dep_quant_enabled_flag of the coding unit which includes the coding block containing the sample q0,0 is equal to 1, QpQ is set equal to QpQ + 1. If dep_quant_enabled_flag of the coding unit which includes the coding block containing the sample p0,0 is equal to 1, QpP is set equal to QpP + 1.
[00288] A variable qPL is derived as follows:
[00289] qPL = ( ( QpQ + QpP + 1 ) >> 1 ) (8-288)
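The modified derivation of paragraphs [00286] to [00289] can be sketched as follows (illustrative Python; variable names are mine):

```python
# Embodiment of section 5: each side's QP is incremented by 1 when its CU
# uses dependent quantization, before the two are averaged into qPL (8-288).
def qpl(qp_q, qp_p, dep_quant_q, dep_quant_p):
    if dep_quant_q:
        qp_q += 1
    if dep_quant_p:
        qp_p += 1
    return (qp_q + qp_p + 1) >> 1
```

This makes the QP seen by the deblocking filter consistent with the QPc + 1 effectively used for dependent quantization, and the adjustment is applied per side, so the two CUs across the boundary may differ in dep_quant_enabled_flag.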
[00290] The value of the variable b' is determined as specified in Table 8-11 based on the luma quantization parameter Q derived as follows:
[00291] Q = Clip3( 0, 51 , qPL + ( slice_beta_offset_div2 << 1 ) ) (8-289)
[00292] where slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 for the slice that contains sample q0,0.
[00293] The variable b is derived as follows:
[00294] b = b' * ( 1 << ( BitDepthY - 8 ) ) (8-290)
[00295] The value of the variable tc' is determined as specified in Table 8-11 based on the luma quantization parameter Q derived as follows:
[00296] Q = Clip3( 0, 53, qPL + 2 * ( bS - 1 ) + ( slice_tc_offset_div2 << 1 ) ) (8-291)
[00297] where slice_tc_offset_div2 is the value of the syntax element slice_tc_offset_div2 for the slice that contains sample q0,0.
[00298] The variable tc is derived as follows:
[00299] tc = tc' * ( 1 << ( BitDepthY - 8 ) ) (8-292)
[00300] Depending on the value of edgeType, the following applies: [00301] - If edgeType is equal to EDGE_VER, the following ordered steps apply:
[00302] The variables dpq0, dpq3, dp, dq, and d are derived as follows:
[00303] dp0 = Abs( p2,0 - 2 * p1,0 + p0,0 ) (8-293)
[00304] dp3 = Abs( p2,3 - 2 * p1,3 + p0,3 ) (8-294)
[00305] dq0 = Abs( q2,0 - 2 * q1,0 + q0,0 ) (8-295)
[00306] dq3 = Abs( q2,3 - 2 * q1,3 + q0,3 ) (8-296)
[00307] dpq0 = dp0 + dq0 (8-297)
[00308] dpq3 = dp3 + dq3 (8-298)
[00309] dp = dp0 + dp3 (8-299)
[00310] dq = dq0 + dq3 (8-300)
[00311] d = dpq0 + dpq3 (8-301)
[00312] The variables dE, dEp, and dEq are set equal to 0.
[00313] When d is less than b, the following ordered steps apply:
[00314] The variable dpq is set equal to 2 * dpq0.
[00315] For the sample location ( xCb + xBl, yCb + yBl ), the decision process for a luma sample as specified in subclause 8.7.2.5.6 is invoked with sample values pi,0, qi,0 with i = 0..3, the variables dpq, b, and tc as inputs, and the output is assigned to the decision dSam0.
[00316] The variable dpq is set equal to 2 * dpq3.
[00317] For the sample location ( xCb + xBl, yCb + yBl + 3 ), the decision process for a luma sample as specified in subclause 8.7.2.5.6 is invoked with sample values pi,3, qi,3 with i = 0..3, the variables dpq, b, and tc as inputs, and the output is assigned to the decision dSam3.
[00318] The variable dE is set equal to 1.
[00319] When dSam0 is equal to 1 and dSam3 is equal to 1, the variable dE is set equal to 2.
[00320] When dp is less than ( b + ( b >> 1 ) ) >> 3, the variable dEp is set equal to 1.
[00321] When dq is less than ( b + ( b >> 1 ) ) >> 3, the variable dEq is set equal to 1.
[00322] - Otherwise (edgeType is equal to EDGE_HOR), the following ordered steps apply:
[00323] The variables dpq0, dpq3, dp, dq, and d are derived as follows:
[00324] dp0 = Abs( p2,0 - 2 * p1,0 + p0,0 ) (8-302)
[00325] dp3 = Abs( p2,3 - 2 * p1,3 + p0,3 ) (8-303)
[00326] dq0 = Abs( q2,0 - 2 * q1,0 + q0,0 ) (8-304)
[00327] dq3 = Abs( q2,3 - 2 * q1,3 + q0,3 ) (8-305) [00328] dpq0 = dp0 + dq0 (8-306)
[00329] dpq3 = dp3 + dq3 (8-307)
[00330] dp = dp0 + dp3 (8-308)
[00331] dq = dq0 + dq3 (8-309)
[00332] d = dpq0 + dpq3 (8-310)
[00333] The variables dE, dEp, and dEq are set equal to 0.
[00334] When d is less than b, the following ordered steps apply:
[00335] The variable dpq is set equal to 2 * dpq0.
[00336] For the sample location ( xCb + xBl, yCb + yBl ), the decision process for a luma sample as specified in subclause 8.7.2.5.6 is invoked with sample values p0,0, p3,0, q0,0, and q3,0, the variables dpq, b, and tc as inputs, and the output is assigned to the decision dSam0.
[00337] The variable dpq is set equal to 2 * dpq3.
[00338] For the sample location ( xCb + xBl + 3, yCb + yBl ), the decision process for a luma sample as specified in subclause 8.7.2.5.6 is invoked with sample values p0,3, p3,3, q0,3, and q3,3, the variables dpq, b, and tc as inputs, and the output is assigned to the decision dSam3.
[00339] The variable dE is set equal to 1.
[00340] When dSam0 is equal to 1 and dSam3 is equal to 1, the variable dE is set equal to 2.
[00341] When dp is less than ( b + ( b >> 1 ) ) >> 3, the variable dEp is set equal to 1.
[00342] When dq is less than ( b + ( b >> 1 ) ) >> 3, the variable dEq is set equal to 1.
[00343] Table 8-11 below is for the derivation of threshold variables b' and tc' from input Q.
[00344] FIG. 8 is a block diagram of a video processing apparatus 800. The apparatus 800 may be used to implement one or more of the methods described herein. The apparatus 800 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on. The apparatus 800 may include one or more processors 802, one or more memories 804 and video processing hardware 806. The processor(s) 802 may be configured to implement one or more methods described in the present document. The memory (memories) 804 may be used for storing data and code used for implementing the methods and techniques described herein. The video processing hardware 806 may be used to implement, in hardware circuitry, some techniques described in the present document.
[00345] FIG. 10 is a flowchart for a method 1000 of processing a video. The method 1000 includes performing (1005) a determination that dependent scalar quantization is used to process a first video block; determining (1010) a first quantization parameter (QP) to be used for a deblocking filter for the first video block based on the determination that dependent scalar quantization is used to process the first video block; and performing (1015) further processing of the first video block using the deblocking filter in accordance with the first QP.
[00346] With reference to method 1000, some examples of determining a candidate for encoding and their use are described in Section 4 of the present document. For example, as described in Section 4, a quantization parameter for deblocking filter can be determined depending on the usage of dependent scalar quantization.
[00347] With reference to method 1000, a video block may be encoded in the video bitstream in which bit efficiency may be achieved by using a bitstream generation rule related to motion information prediction.
[00348] The method can include wherein the determination that dependent scalar quantization is used is based on a value of a flag signal.
[00349] The method can include wherein the first QP used for the deblocking filter is used for dependent scalar quantization and other processing techniques of the first video block.
[00350] The method can include wherein the first QP is QPc.
[00351] The method can include wherein the first QP is QPc + N, wherein N is an integer.
[00352] The method can include wherein QPc + N is modified from a prior value to fit within a threshold value range.

[00353] The method can include wherein the threshold value range is [Max(QPmin - N, QPmin), Min(QPmax - N, QPmax)].
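The clipping described in paragraph [00353] can be sketched as follows. The function and bound values are hypothetical; note that for N >= 1 the stated range reduces to [QPmin, QPmax - N], so that QPc + N stays within the allowed QP range.

```python
def clip_qpc_for_dependent_quant(qp_c, n, qp_min=0, qp_max=63):
    """Clip QPc to the range [Max(QPmin - N, QPmin), Min(QPmax - N, QPmax)]
    so that the dependent-quantization QP, QPc + N, stays in bounds.

    qp_min/qp_max are illustrative placeholders for the codec's
    allowed minimum and maximum quantization parameters.
    """
    lo = max(qp_min - n, qp_min)
    hi = min(qp_max - n, qp_max)
    return min(max(qp_c, lo), hi)
```

For example, with QPmin = 0, QPmax = 63, and N = 1, a signaled QPc of 63 is clipped to 62, so that QPc + N = 63 does not exceed QPmax.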
[00354] The method can include determining, by the processor, that dependent scalar quantization is not used to process a second video block; and performing further processing of the second video block using another deblocking filter, wherein the deblocking filter used for the first video block is stronger or weaker than the another deblocking filter used to process the second video block based on dependent scalar quantization being used for the first video block.
[00355] The method can include wherein the deblocking filter is selected by an encoder, the method further comprises: signaling to a decoder that dependent scalar quantization is enabled for the first video block.
[00356] The method can include wherein smaller or larger threshold values tC and β are used by the encoder and the decoder based on the use of dependent scalar quantization.
[00357] The method can include wherein the first QP is QPc + N, and additional entries are used for the tC′ and β′ tables for QPmax + N.
[00358] The method can include wherein the first QP is QPc + N and the tC′ and β′ table entries are the same as for QPmax + N based on QPc + N being clipped to be within a threshold value range.
[00359] The method can include wherein the tC′ table is extended as: tC′[66] = 50 and tC′[67] = 52.

[00360] The method can include wherein the tC′ table is extended as: tC′[66] = 49 and tC′[67] = 50.

[00361] The method can include wherein the tC′ table is extended as: tC′[66] = 49 and tC′[67] = 51.

[00362] The method can include wherein the tC′ table is extended as: tC′[66] = 48 and tC′[67] = 50.

[00363] The method can include wherein the tC′ table is extended as: tC′[66] = 50 and tC′[67] = 51.
[00364] The method can include wherein the β′ table is extended as: β′[64] = 90 and β′[65] = 92.

[00365] The method can include wherein the β′ table is extended as: β′[64] = 89 and β′[65] = 90.

[00366] The method can include wherein the β′ table is extended as: β′[64] = 89 and β′[65] = 91.

[00367] The method can include wherein the β′ table is extended as: β′[64] = 88 and β′[65] = 90.

[00368] The method can include wherein the β′ table is extended as: β′[64] = 90 and β′[65] = 91.
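One of the table-extension options listed above can be sketched as follows. The base table contents are placeholders, not the actual Table 8-11 values; only the appended entries reflect the text.

```python
# Hypothetical base lookup tables indexed by Q; the real entries
# come from Table 8-11 (image above) and are not reproduced here.
tc_table = [0] * 66      # tC'[0..65] (placeholder values)
beta_table = [0] * 64    # beta'[0..63] (placeholder values)

# Extend the tables for QPmax + N, picking one listed option each:
tc_table += [50, 52]     # tC'[66] = 50, tC'[67] = 52
beta_table += [90, 92]   # beta'[64] = 90, beta'[65] = 92
```

With this extension, a QP of up to QPmax + N can index the tables directly without clipping the lookup index.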
[00369] The method can include wherein initialization of context-based adaptive binary arithmetic coding (CABAC) is based on the first QP being QPc + N and dependent scalar quantization being used to process the first video block.
[00370] The method can include wherein the determination that dependent scalar quantization is used to process the first video block is signaled with semantics based on the use of the dependent scalar quantization.
[00371] The method can include wherein the first QP is indicated in a picture parameter set or a picture header.
[00372] The method can include wherein the picture parameter set or the picture header indicates init_qp_minus26, where init_qp_minus26 plus 26 specifies an initial value of SliceQpY for a slice referring to the PPS or an initial value of the quantization parameters of tiles referring to the PPS or the picture header, based on dependent scalar quantization being off.
[00373] The method can include wherein the picture parameter set or the picture header indicates init_qp_minus26, where init_qp_minus26 plus 27 specifies an initial value of SliceQpY for a slice referring to the PPS or an initial value of the quantization parameters of tiles referring to the PPS or the picture header, based on dependent scalar quantization being used.
[00374] The method can include wherein the first QP is indicated in a slice header, a tile header, or a tile groups header.
[00375] The method can include wherein the slice header, the tile header, or the tile groups header indicates slice_qp_delta, which specifies an initial value of QpY for the coding blocks in the slice, tile, or tile groups until modified by a value of CuQpDeltaVal in the coding unit layer, wherein the initial value of QpY is SliceQpY, and SliceQpY = 26 + init_qp_minus26 + slice_qp_delta, based on dependent scalar quantization being off.
[00376] The method can include wherein the slice header, the tile header, or the tile groups header indicates slice_qp_delta, which specifies an initial value of QpY for the coding blocks in the slice, tile, or tile groups until modified by a value of CuQpDeltaVal in the coding unit layer, wherein the initial value of QpY is SliceQpY, and SliceQpY = 26 + init_qp_minus26 + slice_qp_delta + 1, based on dependent scalar quantization being used.
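The slice-level QP derivation can be sketched as follows. This is a non-normative helper with hypothetical names; the +1 offset for the enabled case follows the SliceQpY semantics given in paragraph [00404].

```python
def slice_qp_y(init_qp_minus26, slice_qp_delta, dependent_quant_enabled):
    """Initial QpY for a slice.

    Disabled: SliceQpY = 26 + init_qp_minus26 + slice_qp_delta.
    Enabled (per paragraph [00404]): the same expression plus 1.
    """
    qp = 26 + init_qp_minus26 + slice_qp_delta
    if dependent_quant_enabled:
        qp += 1  # dependent scalar quantization offsets the initial QP
    return qp
```

For example, with init_qp_minus26 = 0 and slice_qp_delta = 3, the initial QpY is 29 when dependent scalar quantization is off and 30 when it is on.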
[00377] The method can include wherein the method is applied based on being signaled in a SPS, a PPS, a VPS, a sequence header, a picture header, a slice header, a tile group header, or a group of coding tree units (CTUs).
[00378] FIG. 11 is a flowchart for a video processing method 1100 of processing a video. The method 1100 includes determining (1105) one or multiple deblocking filter parameters to be used in a deblocking filter process of a current video block based on whether dependent scalar quantization is used to process the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing (1110) the deblocking filter process on the current video block in accordance with the one or multiple deblocking filter parameters.
[00379] The method can include wherein the determining of one or multiple deblocking filter parameters to be used in a deblocking filter process of a current video block further comprises: determining the one or multiple deblocking filter parameters corresponding to a weaker deblocking filter in case that the dependent scalar quantization is used for the current video block; or determining the one or multiple deblocking filter parameters corresponding to a stronger deblocking filter in case that the dependent scalar quantization is used for the current video block.
[00380] The method can include wherein the stronger deblocking filter modifies more pixels, and the weaker deblocking filter modifies fewer pixels.
[00381] The method can include wherein the determining of one or multiple deblocking filter parameters to be used in a deblocking filter process of a current video block further comprises: selecting smaller threshold values tC and β in case that the dependent scalar quantization is used for the current video block; or selecting larger threshold values tC and β in case that the dependent scalar quantization is used for the current video block.
[00382] The method can include wherein the determining of one or multiple deblocking filter parameters to be used in a deblocking filter process of a current video block further comprises: determining a quantization parameter included in the one or multiple deblocking filter parameters based on whether dependent scalar quantization is used to process the current video block.
[00383] The method can include wherein the quantization parameter used for the deblocking filter process is set equal to QPc + N in case that the dependent scalar quantization is used for the current video block, wherein QPc is a signaled quantization parameter of the current video block, QPc + N is a quantization parameter used for the dependent scalar quantization, N is an integer, and N >= 1.
[00384] The method can include wherein at least one additional entry is set in a mapping table in the case that the dependent scalar quantization is used for the current video block, wherein the mapping table indicates mapping relationships between quantization parameters and threshold values β′, or indicates mapping relationships between quantization parameters and threshold values tC′.
[00385] The method can include wherein the mapping table is extended according to any of the following options: tC′[66] = 50 and tC′[67] = 52; tC′[66] = 49 and tC′[67] = 50; tC′[66] = 49 and tC′[67] = 51; tC′[66] = 48 and tC′[67] = 50; tC′[66] = 50 and tC′[67] = 51.
[00386] The method can include wherein the mapping table is extended according to any of the following options: β′[64] = 90 and β′[65] = 92; β′[64] = 89 and β′[65] = 90; β′[64] = 89 and β′[65] = 91; β′[64] = 88 and β′[65] = 90; β′[64] = 90 and β′[65] = 91.
[00387] The method can include wherein QPc + N is clipped to be within a range [QPmin, QPmax] in case that it is greater than QPmax or less than QPmin, wherein QPmin and QPmax are respectively the minimum and the maximum allowable quantization parameters.
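The deblocking QP selection of paragraphs [00383] and [00387] can be sketched together as follows; the function and bound values are hypothetical placeholders.

```python
def deblocking_qp(qp_c, n, qp_min=0, qp_max=63):
    """QP used in the deblocking filter process when dependent
    scalar quantization is on: QPc + N, clipped to [QPmin, QPmax]
    whenever it falls outside that range.
    """
    qp = qp_c + n
    return min(max(qp, qp_min), qp_max)
```

For example, with QPmax = 63 and N = 1, a signaled QPc of 63 yields a deblocking QP of 63 (clipped), while QPc = 30 yields 31.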
[00388] FIG. 13 is a flowchart for a video processing method 1300. The method 1300 includes determining (1305) whether to apply a deblocking filter process based on whether dependent scalar quantization is used to process a current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing (1310) the deblocking filter process on the current video block based on the determination of applying the deblocking filter process.
[00389] FIG. 12 is a flowchart for a video processing method 1200 of processing a video.
The method 1200 includes determining (1205) a quantization parameter to be used in a dependent scalar quantization of a current video block in case that the dependent scalar quantization is enabled for the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing (1210) the dependent scalar quantization on the current video block in accordance with the determined quantization parameter, wherein the determined quantization parameter is also applied to a video process different from the dependent scalar quantization using a quantization parameter as an input parameter of the current video block.
[00390] The method can include wherein the video processing different from the dependent scalar quantization includes a deblocking filtering process.
[00391] The method can include wherein the determined quantization parameter is QPc in case that the dependent scalar quantization is enabled for the current video block, wherein QPc is a signaled quantization parameter of the current video block.
[00392] The method can include wherein the determined quantization parameter is QPc+N in case that the dependent scalar quantization is enabled for the current video block, wherein QPc is a signaled quantization parameter of the current video block, and N is an integer.
[00393] The method can further include clipping QPc + N to a threshold value range before using it.
[00394] The method can include wherein the threshold value range is [QPmin, QPmax], wherein QPmin and QPmax are respectively the allowed minimum and maximum quantization parameters.
[00395] The method can include wherein an allowed QPc range for the dependent scalar quantization is [QPmin - N, QPmax - N] in case that the dependent scalar quantization is enabled for the current video block, wherein QPmin and QPmax are the allowed minimum value and maximum value of QPc in case that the dependent scalar quantization is not enabled for the current video block, respectively.
[00396] The method can include wherein an allowed QPc range for the dependent scalar quantization is [Max(QPmin - N, QPmin), Min(QPmax - N, QPmax)] in case that the dependent scalar quantization is enabled for the current video block, wherein QPmin and QPmax are the allowed minimum value and maximum value of QPc in case that the dependent scalar quantization is not enabled for the current video block, respectively.

[00397] The method can include wherein initialization of context-based adaptive binary arithmetic coding (CABAC) is based on the QPc + N in case that the dependent scalar quantization is enabled.
[00398] The method can include wherein high-level quantization parameters are assigned with different semantics based on whether the dependent scalar quantization is enabled.
[00399] The method can include wherein the quantization parameter is signaled in a picture parameter set or a picture header by means of a first parameter.
[00400] The method can include wherein the first parameter is init_qp_minus26, and init_qp_minus26 plus 26 specifies an initial quantization parameter value SliceQPy for a slice referring to the picture parameter set or an initial value of quantization parameters of tiles referring to the picture parameter set or the picture header, in case that the dependent scalar quantization is disabled.
[00401] The method can include wherein the first parameter is init_qp_minus26, and init_qp_minus26 plus 27 specifies an initial quantization parameter value SliceQPy for a slice referring to the picture parameter set or an initial value of quantization parameters of tiles referring to the picture parameter set or the picture header, in case that the dependent scalar quantization is enabled.
[00402] The method can include wherein the quantization parameter is signaled in a slice header, a tile header, or a tile groups header by means of a second parameter.
[00403] The method can include wherein the second parameter is slice_qp_delta, and slice_qp_delta is used to derive an initial quantization parameter value QpY for coding blocks in a slice, tile or tile groups until modified by a value of CuQpDeltaVal in a coding unit layer, wherein an initial value of QpY is set equal to SliceQpY, and SliceQpY = 26 + init_qp_minus26 + slice_qp_delta, in case that the dependent scalar quantization is disabled.
[00404] The method can include wherein the second parameter is slice_qp_delta, and slice_qp_delta is used to derive an initial quantization parameter value QpY for coding blocks in a slice, tile or tile groups until modified by a value of CuQpDeltaVal in a coding unit layer, wherein an initial value of QpY is set equal to SliceQpY, and SliceQpY = 26 + init_qp_minus26 + slice_qp_delta + 1, in case that the dependent scalar quantization is enabled.

[00405] The method can be applied in case of being signaled in a SPS, a PPS, a VPS, a sequence header, a picture header, a slice header, a tile group header, or a group of coding tree units (CTUs).
[00406] FIG. 14 is a flowchart for a video processing method 1400 of processing a video. The method 1400 includes determining (1405) a quantization parameter to be used in a dependent scalar dequantization of a current video block in case that the dependent scalar dequantization is enabled for the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar dequantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing (1410) the dependent scalar dequantization on the current video block in accordance with the determined quantization parameter, wherein the determined quantization parameter is also applied to a video process different from the dependent scalar dequantization using a quantization parameter as an input parameter of the current video block.
[00407] It will be appreciated that the disclosed techniques may be embodied in video encoders or decoders to improve compression efficiency when the coding units being compressed have shapes that are significantly different from the traditional square-shaped blocks or rectangular blocks that are half-square shaped. For example, new coding tools that use long or tall coding units such as 4x32 or 32x4 sized units may benefit from the disclosed techniques.
[00408] It will be appreciated that the disclosed techniques may be embodied in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the above disclosed method.
[00409] The disclosed and other solutions, examples, embodiments, modules and the functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
[00410] A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
[00411] The processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
[00412] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random-access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
[00413] While this patent document contains many specifics, these should not be construed as limitations on the scope of any subject matter or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular techniques. Certain features that are described in this patent document in the context of separate
embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be
implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
[00414] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all
embodiments.
[00415] Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims

1. A video processing method, comprising: determining a quantization parameter to be used in a dependent scalar quantization of a current video block in case that the dependent scalar quantization is enabled for the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing the dependent scalar quantization on the current video block in accordance with the determined quantization parameter; wherein the determined quantization parameter is also applied to a video process different from the dependent scalar quantization using a quantization parameter as an input parameter of the current video block.
2. The video processing method of claim 1, wherein the video process different from the dependent scalar quantization includes a deblocking filtering process.
3. The video processing method of claim 1, wherein the determined quantization parameter is QPc in case that the dependent scalar quantization is enabled for the current video block, wherein QPc is a signaled quantization parameter of the current video block.
4. The video processing method of claim 1, wherein the determined quantization parameter is QPc+N in case that the dependent scalar quantization is enabled for the current video block, wherein QPc is a signaled quantization parameter of the current video block, and N is an integer.
5. The video processing method of claim 4, further comprises clipping QPc + N to a threshold value range before using it.
6. The video processing method of claim 5, wherein the threshold value range is [QPmin, QPmax], wherein QPmin and QPmax are respectively allowed minimum and maximum quantization parameters.
7. The video processing method of claim 4, wherein an allowed QPc range for the dependent scalar quantization is [QPmin - N, QPmax - N] in case that the dependent scalar quantization is enabled for the current video block, wherein QPmin and QPmax are the allowed minimum value and maximum value of QPc in case that the dependent scalar quantization is not enabled for the current video block, respectively.
8. The video processing method of claim 4, wherein an allowed QPc range for the dependent scalar quantization is [Max(QPmin - N, QPmin), Min(QPmax - N, QPmax)] in case that the dependent scalar quantization is enabled for the current video block, wherein QPmin and QPmax are the allowed minimum value and maximum value of QPc in case that the dependent scalar quantization is not enabled for the current video block, respectively.
9. The video processing method of any one of claims 4-8, wherein initialization of context-based adaptive binary arithmetic coding (CABAC) is based on the QPc + N in case that the dependent scalar quantization is enabled.
10. The video processing method of any one of claims 1-8, wherein high-level quantization parameters are assigned with different semantics based on whether the dependent scalar quantization is enabled.
11. The video processing method of any one of claims 1-10, wherein the quantization parameter is signaled in a picture parameter set or a picture header by means of a first parameter.
12. The video processing method of claim 11, wherein the first parameter is init_qp_minus26, and init_qp_minus26 plus 26 specifies an initial quantization parameter value SliceQPy for a slice referring to the picture parameter set or an initial value of quantization parameters of tiles referring to the picture parameter set or the picture header, in case that the dependent scalar quantization is disabled.
13. The video processing method of claim 11, wherein the first parameter is init_qp_minus26, and init_qp_minus26 plus 27 specifies an initial quantization parameter value SliceQPy for a slice referring to the picture parameter set or an initial value of quantization parameters of tiles referring to the picture parameter set or the picture header, in case that the dependent scalar quantization is enabled.
14. The video processing method of any one of claims 1-10, wherein the quantization parameter is signaled in a slice header, a tile header, or a tile groups header by means of a second parameter.
15. The video processing method of claim 14, wherein the second parameter is slice_qp_delta, and slice_qp_delta is used to derive an initial quantization parameter value QpY for coding blocks in a slice, tile or tile groups until modified by a value of CuQpDeltaVal in a coding unit layer, wherein an initial value of QpY is set equal to SliceQpY, and SliceQpY = 26 + init_qp_minus26 + slice_qp_delta, in case that the dependent scalar quantization is disabled.
16. The video processing method of claim 14, wherein the second parameter is slice_qp_delta, and slice_qp_delta is used to derive an initial quantization parameter value QpY for coding blocks in a slice, tile or tile groups until modified by a value of CuQpDeltaVal in a coding unit layer, wherein an initial value of QpY is set equal to SliceQpY, and SliceQpY = 26 + init_qp_minus26 + slice_qp_delta + 1, in case that the dependent scalar quantization is enabled.
17. The video processing method of any one of claims 1-16, wherein the method is applied in case of being signaled in a SPS, a PPS, a VPS, a sequence header, a picture header, a slice header, a tile group header, or a group of coding tree units (CTUs).
18. A video processing method, comprising: determining a quantization parameter to be used in a dependent scalar dequantization of a current video block in case that the dependent scalar dequantization is enabled for the current video block, wherein a set of admissible reconstruction values for a transform coefficient corresponding to the dependent scalar dequantization depends on at least one transform coefficient level that precedes a current transform coefficient level; and performing the dependent scalar dequantization on the current video block in accordance with the determined quantization parameter, wherein the determined quantization parameter is also applied to a video process different from the dependent scalar dequantization using a quantization parameter as an input parameter of the current video block.
19. An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of claims 1 to 18.
20. A computer program product stored on a non-transitory computer readable media, the computer program product including program code for carrying out the method in any one of claims 1 to 18.
PCT/IB2019/059343 2018-10-31 2019-10-31 Quantization parameters under coding tool of dependent quantization WO2020089825A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2018/112945 2018-10-31
CN2018112945 2018-10-31

Publications (1)

Publication Number Publication Date
WO2020089825A1 true WO2020089825A1 (en) 2020-05-07

Family

ID=68470587

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/IB2019/059342 WO2020089824A1 (en) 2018-10-31 2019-10-31 Deblocking filtering under dependent quantization
PCT/IB2019/059343 WO2020089825A1 (en) 2018-10-31 2019-10-31 Quantization parameters under coding tool of dependent quantization

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/IB2019/059342 WO2020089824A1 (en) 2018-10-31 2019-10-31 Deblocking filtering under dependent quantization

Country Status (2)

Country Link
CN (2) CN111131821B (en)
WO (2) WO2020089824A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114727109B (en) * 2021-01-05 2023-03-24 腾讯科技(深圳)有限公司 Multimedia quantization processing method and device and coding and decoding equipment
CN116636206A (en) * 2021-02-05 2023-08-22 Oppo广东移动通信有限公司 Encoding method, decoding method, encoder, decoder, and electronic device
CN116918326A (en) * 2021-02-22 2023-10-20 浙江大学 Video encoding and decoding method and system, video encoder and video decoder
CN117616755A (en) * 2021-04-02 2024-02-27 抖音视界有限公司 Adaptive dependent quantization
EP4354861A1 (en) * 2021-06-11 2024-04-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Video decoding and coding method, device and storage medium
CN117426089A (en) * 2021-07-27 2024-01-19 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Video decoding and encoding method and device and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010077325A2 (en) * 2008-12-29 2010-07-08 Thomson Licensing Method and apparatus for adaptive quantization of subband/wavelet coefficients
US9185404B2 (en) * 2011-10-07 2015-11-10 Qualcomm Incorporated Performing transform dependent de-blocking filtering
US9161046B2 (en) * 2011-10-25 2015-10-13 Qualcomm Incorporated Determining quantization parameters for deblocking filtering for video coding
PH12018500138A1 (en) * 2012-01-20 2018-07-09 Ge Video Compression Llc Transform coefficient coding
US9344723B2 (en) * 2012-04-13 2016-05-17 Qualcomm Incorporated Beta offset control for deblocking filters in video coding
WO2013162441A1 (en) * 2012-04-25 2013-10-31 Telefonaktiebolaget L M Ericsson (Publ) Deblocking filtering control
US20140079135A1 (en) * 2012-09-14 2014-03-20 Qualcomm Incorporated Performing quantization to facilitate deblocking filtering
CN103491373B (en) * 2013-09-06 2018-04-27 Fudan University A four-stage pipelined filtering method for a deblocking filter suitable for the HEVC standard
US10091504B2 (en) * 2015-01-08 2018-10-02 Microsoft Technology Licensing, Llc Variations of rho-domain rate control

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANDREY NORKIN ET AL: "HEVC Deblocking Filter", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, US, vol. 22, no. 12, 1 December 2012 (2012-12-01), pages 1746 - 1754, XP011487156, ISSN: 1051-8215, DOI: 10.1109/TCSVT.2012.2223053 *
LI (TENCENT) X ET AL: "Fix of Initial QP Signaling", no. JVET-L0553, 1 October 2018 (2018-10-01), XP030194238, Retrieved from the Internet <URL:http://phenix.int-evry.fr/jvet/doc_end_user/documents/12_Macao/wg11/JVET-L0553-v1.zip JVET-L0553.docx> [retrieved on 20181001] *
SCHWARZ (FRAUNHOFER) H ET AL: "CE7: Transform coefficient coding and dependent quantization (Tests 7.1.2, 7.2.1)", no. JVET-K0071, 11 July 2018 (2018-07-11), XP030199394, Retrieved from the Internet <URL:http://phenix.int-evry.fr/jvet/doc_end_user/documents/11_Ljubljana/wg11/JVET-K0071-v2.zip JVET-K0071.doc> [retrieved on 20180711] *

Also Published As

Publication number Publication date
CN111131821B (en) 2023-05-09
CN111131819A (en) 2020-05-08
CN111131821A (en) 2020-05-08
WO2020089824A1 (en) 2020-05-07
CN111131819B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
WO2020089825A1 (en) Quantization parameters under coding tool of dependent quantization
CN114586370A (en) Use of chrominance quantization parameters in video coding and decoding
TWI770681B (en) Video processing methods and apparatuses in video encoding or decoding system
CN113826383B (en) Block dimension setting for transform skip mode
WO2020039364A1 (en) Reduced window size for bilateral filter
JP7322285B2 (en) Quantization Parameter Offset for Chroma de Block Filtering
CN113261291A (en) Two-step cross-component prediction mode based on multiple parameters
CN114902657A (en) Adaptive color transform in video coding and decoding
WO2020125804A1 (en) Inter prediction using polynomial model
CN113826398B (en) Interaction between transform skip mode and other codec tools
CN114930818A (en) Bitstream syntax for chroma coding and decoding
CN113853787A (en) Transform skip mode based on sub-block usage
CN113966611A (en) Significant coefficient signaling in video coding and decoding
JP2023504574A (en) Using Quantization Groups in Video Coding
JP7372483B2 (en) Filtering parameter signaling in video picture header
WO2023213298A1 (en) Filter shape switch for adaptive loop filter in video coding
WO2023059235A1 (en) Combining deblock filtering and another filtering for video encoding and/or decoding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19801979

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.08.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19801979

Country of ref document: EP

Kind code of ref document: A1