CN111131819B - Quantization parameters under a coding tool for dependent quantization - Google Patents


Publication number
CN111131819B
CN111131819B (application CN201911055394.XA)
Authority
CN
China
Prior art keywords
quantization
parameter
slice
video block
video processing
Prior art date
Legal status: Active
Application number
CN201911055394.XA
Other languages
Chinese (zh)
Other versions
CN111131819A (en
Inventor
刘鸿彬
张莉
张凯
王悦
Current Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Original Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd and ByteDance Inc
Publication of CN111131819A
Application granted
Publication of CN111131819B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117Filters, e.g. for pre-processing or post-processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96Tree coding, e.g. quad-tree coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

This application relates to quantization parameters under the dependent quantization coding tool, and discloses a video processing method, a video processing apparatus, and a computer program product. The video processing method comprises the following steps: in the case where dependent scalar quantization is enabled for a current video block, determining a quantization parameter to be used in the dependent scalar quantization of the current video block, wherein the set of allowable reconstruction values for a transform coefficient in dependent scalar quantization depends on at least one transform coefficient level preceding the current transform coefficient level; and performing dependent scalar quantization on the current video block according to the determined quantization parameter, wherein the determined quantization parameter is also applied to a video process of the current video block, different from the dependent scalar quantization, that uses the quantization parameter as an input.

Description

Quantization parameters under a coding tool for dependent quantization
The present application timely claims the priority of and benefit of International Patent Application No. PCT/CN2018/112945, filed on October 31, 2018, in accordance with applicable patent law and/or the rules of the Paris Convention. The entire disclosure of International Patent Application No. PCT/CN2018/112945 is incorporated by reference as part of the present disclosure.
Technical Field
This document relates to video and image coding techniques.
Background
Digital video accounts for the largest share of bandwidth usage on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video grows, the bandwidth demand for digital video usage is expected to continue to increase.
Disclosure of Invention
The disclosed techniques may be used by video or image decoder or encoder embodiments in which quantization parameters are used with the dependent quantization coding tool.
In one exemplary aspect, a method of processing video is disclosed. The method includes performing, by a processor, a determination to process a first video block using a dependent scalar quantization; determining, by the processor, a first Quantization Parameter (QP) to be used for deblocking filtering of the first video block based on a determination that the first video block is processed using dependent scalar quantization; according to the first QP, further processing is performed on the first video block using deblocking filtering.
In another exemplary aspect, a video processing method is disclosed. The method comprises the following steps: determining one or more deblocking filter parameters to be used in a deblocking filtering process of the current video block based on whether a dependent scalar quantization is used to process the current video block, wherein a set of allowable reconstruction values of transform coefficients corresponding to the dependent scalar quantization is dependent on at least one transform coefficient stage preceding the current transform coefficient stage; and performing a deblocking filtering process on the current video block according to the one or more deblocking filtering parameters.
In another exemplary aspect, a video processing method is disclosed. The method comprises the following steps: determining whether to apply a deblocking filtering process based on whether a current video block is processed using a dependent scalar quantization, wherein a set of allowable reconstruction values for transform coefficients corresponding to the dependent scalar quantization depends on at least one transform coefficient stage preceding the current transform coefficient stage; and performing a deblocking filtering process on the current video block based on the determination that the deblocking filtering process is used.
In another exemplary aspect, a video processing method is disclosed. The method comprises the following steps: in the case where the dependent scalar quantization is enabled for a current video block, determining a quantization parameter to be used in the dependent scalar quantization for the current video block, wherein a set of allowable reconstruction values for transform coefficients corresponding to the dependent scalar quantization depends on at least one transform coefficient level preceding the current transform coefficient level; and performing a dependent scalar quantization on the current video block according to the determined quantization parameter, wherein the determined quantization parameter is also applied to a video process of the current video block that uses the quantization parameter as an input parameter, different from the dependent scalar quantization.
In another exemplary aspect, a video processing method is disclosed. The method comprises the following steps: in the case where the dependent scalar dequantization is enabled for the current video block, determining a quantization parameter to be used in the dependent scalar dequantization of the current video block, wherein a set of allowable reconstruction values for the transform coefficients corresponding to the dependent scalar dequantization depends on at least one transform coefficient stage preceding the current transform coefficient stage; and performing a dependent scalar dequantization of the current video block according to the determined quantization parameter, wherein the determined quantization parameter is also applied to a video process of the current video block that uses the quantization parameter as an input parameter, different from the dependent scalar dequantization.
In another example aspect, the above-described method may be implemented by a video decoder apparatus including a processor.
In another example aspect, the above-described method may be implemented by a video encoder apparatus including a processor.
In another exemplary aspect, the above-described method may be implemented by an apparatus in a video system comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the above-described method.
In yet another example aspect, the methods may be embodied in the form of processor-executable instructions and stored on a computer-readable program medium.
These and other aspects are further described in this document.
Drawings
Fig. 1 shows an example of two scalar quantizers used in dependency quantization.
Fig. 2 shows an example of state transitions and quantizer selection for dependency quantization.
Fig. 3 shows an example of the overall processing flow of the deblocking filtering process.
Fig. 4 shows an example of a flowchart of Bs calculation.
Fig. 5 shows an example of reference information for Bs calculation at a Coding Tree Unit (CTU) boundary.
Fig. 6 shows an example of pixels involved in the filtering on/off decision and the strong/weak filtering selection.
Fig. 7 shows an example of deblocking behavior in the 4:2:2 chroma format.
Fig. 8 is a block diagram of an example of a video processing apparatus.
Fig. 9 shows a block diagram of an example implementation of a video encoder.
Fig. 10 is a flowchart of an example of a video processing method.
Fig. 11 is a flowchart of an example of a video processing method.
Fig. 12 is a flowchart of an example of a video processing method.
Fig. 13 is a flowchart of an example of a video processing method.
Fig. 14 is a flowchart of an example of a video processing method.
Detailed Description
This document provides various techniques that may be used by decoders of image or video bitstreams to improve the quality of decompressed or decoded digital video or pictures. For brevity, the term "video" is used herein to include sequences of pictures (conventionally referred to as video) and individual images. Furthermore, the video encoder may also implement these techniques during the encoding process in order to reconstruct the decoded frames for further encoding.
Chapter headings are used in this document for ease of understanding and do not limit the embodiments and techniques to the corresponding chapters. Thus, embodiments of one section may be combined with embodiments of other sections.
1. Summary
This patent document relates to video coding techniques, and more particularly to the use of quantization parameters when dependent quantization is employed. It can be applied to existing video coding standards such as HEVC, or to the standard being finalized (Versatile Video Coding, VVC). It may also be applicable to future video coding standards or video codecs.
2. Background art
Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. ITU-T developed H.261 and H.263, ISO/IEC developed MPEG-1 and MPEG-4 Visual, and the two organizations jointly developed the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/HEVC standards. Since H.262, video coding standards have been based on a hybrid video coding structure in which temporal prediction plus transform coding is employed. To explore future video coding techniques beyond HEVC, VCEG and MPEG jointly founded the Joint Video Exploration Team (JVET) in 2015. Since then, JVET has adopted many new methods and put them into reference software named the Joint Exploration Model (JEM). In April 2018, the Joint Video Experts Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC standard, targeting a 50% bit-rate reduction compared to HEVC.
Fig. 9 is a block diagram of an example implementation of a video encoder. Fig. 9 shows an encoder implementation with a built-in feedback path, where the video encoder also performs a video decoding function (reconstructing a compressed representation of video data for encoding of the next video data).
2.1 dependent scalar quantization
Dependent scalar quantization has been proposed; it refers to an approach in which the set of allowable reconstruction values for a transform coefficient depends on the values of the transform coefficient levels that precede the current transform coefficient level in reconstruction order. The main effect of this approach is that the allowable reconstruction vectors (given by all reconstructed transform coefficients of a transform block) are packed more densely in the N-dimensional vector space (N denotes the number of transform coefficients in the transform block) than with conventional independent scalar quantization (as used in HEVC and VTM-1). This means that, for a given average number of allowable reconstruction vectors per N-dimensional unit volume, the average distance (or MSE distortion) between an input vector and its nearest reconstruction vector is reduced (for typical distributions of input vectors). Ultimately, this effect can lead to improved rate-distortion efficiency.
Dependent scalar quantization is realized by: (a) defining two scalar quantizers with different reconstruction levels, and (b) defining a process for switching between the two scalar quantizers.
Fig. 1 is a diagram of two scalar quantizers used in the proposed dependency quantization method.
The two scalar quantizers used are shown in Fig. 1, denoted Q0 and Q1 respectively. The positions of the available reconstruction levels are uniquely specified by the quantization step size Δ. If we ignore the fact that the actual reconstruction of transform coefficients uses integer arithmetic, the two scalar quantizers Q0 and Q1 are characterized as follows:
q0: the reconstruction level of the first quantizer Q0 is given by an even integer multiple of the quantization step size delta. When the quantizer is used, a reconstructed transform coefficient t 'is calculated according to the following formula'
t'=2·k·Δ,
Where k represents the associated transform coefficient level (transmitted quantization index).
Q1: the reconstruction stage of the second quantizer Q1 is given by an odd integer multiple of the quantization step size delta and a reconstruction stage equal to zero. The mapping of the transform coefficient level k to the reconstructed transform coefficient t' is specified by the following equation
t'=(2·k–sgn(k))·Δ,
Wherein sgn (·) represents a sign function
sgn(x)=(k==00:(k<0?–1:1))。
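A minimal sketch of the two reconstruction rules above (the helper names are illustrative, and Δ is passed as a plain number, ignoring the integer arithmetic of the actual design):

```python
def sgn(x: int) -> int:
    """Sign function as defined above: 0 for 0, -1 for negative, +1 for positive."""
    return 0 if x == 0 else (-1 if x < 0 else 1)

def reconstruct(k: int, quantizer: int, delta: float) -> float:
    """Map a transform coefficient level k to a reconstructed value t'.

    quantizer: 0 selects Q0 (even integer multiples of delta),
               1 selects Q1 (odd integer multiples of delta, plus zero).
    """
    if quantizer == 0:
        return 2 * k * delta          # t' = 2*k*Delta
    return (2 * k - sgn(k)) * delta   # t' = (2*k - sgn(k))*Delta
```

For example, with Δ = 1, level k = 3 reconstructs to 6 under Q0 but to 5 under Q1, which is how the two quantizers interleave their reconstruction levels.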
The scalar quantizer used (Q0 or Q1) is not explicitly signaled in the bitstream. Instead, the quantizer for the current transform coefficient is determined by the parity of the transform coefficient levels that precede the current transform coefficient in coding/reconstruction order.
Fig. 2 is an example of state transitions and quantizer selection for the proposed dependency quantization.
As shown in fig. 2, switching between the two scalar quantizers (Q0 and Q1) is realized via a state machine with four states. The state can take four different values: 0, 1, 2 and 3, and is determined by the parity of the transform coefficient levels preceding the current transform coefficient in coding/reconstruction order. At the start of the inverse quantization of a transform block, the state is set equal to 0. The transform coefficients are reconstructed in scan order (i.e., in the same order in which they are entropy decoded). After the current transform coefficient is reconstructed, the state is updated as shown in fig. 2, where k denotes the value of the transform coefficient level. Note that the next state depends only on the current state and the parity (k & 1) of the current transform coefficient level k. Denoting the value of the current transform coefficient level by k, the state update can be written as
state=stateTransTable[state][k&1],
Wherein stateTransTable represents the table shown in FIG. 2, and operator & specifies a bitwise AND operator in two's complement arithmetic. Alternatively, the state transitions may be specified as follows without performing a table lookup:
state=(32040>>((state<<2)+((k&1)<<1)))&3
where the 16-bit value 32040 encodes the state transition table.
The state uniquely specifies the scalar quantizer used. If the state of the current transform coefficient is equal to 0 or 1, a scalar quantizer Q0 is used. Otherwise (state equal to 2 or 3), scalar quantizer Q1 is used.
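The table-based and packed forms of the state update above, together with the state-to-quantizer mapping, can be sketched as follows (helper names are ours, not from the spec text):

```python
# stateTransTable[state][k & 1], as depicted in Fig. 2.
STATE_TRANS_TABLE = [[0, 2], [2, 0], [1, 3], [3, 1]]

def next_state_table(state: int, k: int) -> int:
    """Table-lookup form of the state update."""
    return STATE_TRANS_TABLE[state][k & 1]

def next_state_packed(state: int, k: int) -> int:
    """Equivalent form with the table packed into the 16-bit constant 32040."""
    return (32040 >> ((state << 2) + ((k & 1) << 1))) & 3

def quantizer_for_state(state: int) -> int:
    """States 0 and 1 select Q0; states 2 and 3 select Q1."""
    return 0 if state < 2 else 1
```

Both forms produce identical transitions; the packed form simply stores each 2-bit next-state value at bit position 4*state + 2*parity of the constant 32040.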
The detailed scaling (scaling) procedure is described below.
7.3.4.9 residual coding syntax
[The residual coding syntax tables were rendered as images in the source and are not reproduced here.]
8.4.3 Scaling of transform coefficients
The inputs to this process are:
a luminance location (xTbY, yTbY) specifying the top-left sample of the current luminance transform block relative to the top-left luminance sample of the current picture,
a variable nTbW, which specifies the transform block width,
a variable nTbH, which specifies the transform block height,
the variable cIdx, which specifies the color component of the current block,
a variable bitDepth, which specifies the bit depth of the current color component.
The output of this process is an (nTbW)x(nTbH) array d of scaled transform coefficients with elements d[x][y].
The derivation of the quantization parameter qP is as follows:
if cIdx is equal to 0, the following formula applies:
qP=Qp′Y (8-383)
otherwise, if cIdx is equal to 1, the following formula applies:
qP=Qp′Cb (8-384)
otherwise (cIdx equals 2), the following formula applies:
qP=Qp′Cr (8-385)
the variables bdShift, rectNorm and bdOffset are derived as follows:
[Equation (8-386), defining bdShift, was rendered as an image in the source and is not reproduced here.]
rectNorm = ( ( ( Log2( nTbW ) + Log2( nTbH ) ) & 1 ) == 1 ) ? 181 : 1 (8-387)
bdOffset=(1<<bdShift)>>1 (8-388)
the list levescale [ ] is designated as levescale [ k ] = {40,45,51,57,64,72}, where k=0..5.
For the derivation of the scaled transform coefficients d[x][y] with x = 0..nTbW−1, y = 0..nTbH−1, the following applies:
the intermediate scaling factor m [ x ] [ y ] is set equal to 16.
The scaling factor ls [ x ] [ y ] is derived as follows:
-if dep_quant_enabled_flag is equal to 1, the following formula applies:
ls[x][y]=(m[x][y]*levelScale[(qP+1)%6])<<((qP+1)/6) (8-389)
otherwise (dep_quant_enabled_flag is equal to 0), the following formula applies:
ls[x][y]=(m[x][y]*levelScale[qP%6])<<(qP/6) (8-390)
the derivation of the value dnc [ x ] [ y ] is as follows:
dnc[x][y]=(TransCoeffLevel[xTbY][yTbY][cIdx][x][y]*ls[x][y]*rectNorm+bdOffset)>>bdShift (8-391)
the derivation of the scaled transform coefficients d [ x ] [ y ] is as follows:
d[x][y]=Clip3(CoeffMin,CoeffMax,dnc[x][y]) (8-392)
Assuming the quantization parameter (QP) qP is the QP used by the current CU, dependent quantization actually quantizes using qP+1 according to equation (8-389). If dependent quantization is not used, qP itself is used for quantization.
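The effect of equations (8-389) and (8-390) can be illustrated with a small sketch (assuming the intermediate scaling factor m[x][y] = 16, as in the text above; the function name is ours):

```python
# levelScale[k], k = 0..5, as specified above.
LEVEL_SCALE = [40, 45, 51, 57, 64, 72]

def scaling_factor(qP: int, dep_quant_enabled: bool, m: int = 16) -> int:
    """Per-coefficient scaling factor ls[x][y].

    With dependent quantization enabled, qP+1 is used (eq. 8-389);
    otherwise qP is used directly (eq. 8-390).
    """
    q = qP + 1 if dep_quant_enabled else qP
    return (m * LEVEL_SCALE[q % 6]) << (q // 6)
```

This makes the text's observation concrete: enabling dependent quantization shifts the effective QP used in scaling up by one.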
2.2 deblocking Filtering
The deblocking filter process is performed on each CU in the same order as the decoding process: vertical edges are filtered first (horizontal filtering), then horizontal edges are filtered (vertical filtering). For both the luminance and chrominance components, filtering is applied only to the 8x8 block boundaries determined to be filtered. To reduce complexity, 4x4 block boundaries are not processed.
Fig. 3 shows the overall flow of the deblocking filter process. A boundary can have three filtering states: no filtering, weak filtering, and strong filtering. Each filtering decision is based on the boundary strength Bs and the thresholds β and t_C.
Fig. 3 is an example of the overall processing flow of the deblocking filtering process.
2.2.1 boundary decision
The deblocking filter process involves two kinds of boundaries: TU boundaries and PU boundaries. CU boundaries are also considered, since CU boundaries are necessarily also TU and PU boundaries. When the PU shape is 2NxN (N > 4) and the RQT depth is equal to 1, the PU boundaries between each PU inside the CU are also filtered, in addition to the TU boundaries on the 8x8 block grid.
2.2.2 boundary Strength calculation
The boundary strength (Bs) reflects how strongly the boundary may require a filtering process. A value of Bs of 2 indicates strong filtering, 1 indicates weak filtering, and 0 indicates no deblocking filtering.
Let P and Q be the blocks involved in the filtering, where P denotes the block to the left (vertical edge case) or above (horizontal edge case) the boundary, and Q denotes the block to the right (vertical edge case) or below (horizontal edge case) the boundary. Fig. 4 shows how the Bs value is calculated based on the intra coding mode, the presence of non-zero transform coefficients, the reference pictures, the number of motion vectors, and the motion vector differences.
At the CTU boundary, as shown in fig. 5, information about every other block on the left or above (on a 4 x 4 grid) is reused in order to reduce line buffer memory requirements. Fig. 5 is an example of reference information for Bs calculation at CTU boundaries.
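Fig. 4 itself is not reproduced in this excerpt. As a hedged illustration, the well-known HEVC-style Bs rules it depicts can be sketched as follows (the dict-based block representation and its field names are our assumptions, not an API from the text):

```python
def boundary_strength(P: dict, Q: dict) -> int:
    """Illustrative sketch of the Bs decision for two adjacent blocks P and Q.

    Expected illustrative fields: "intra" (bool), "nonzero_coeffs" (bool),
    "ref_pics" (tuple), "mvs" (list of (mvx, mvy) in quarter-sample units).
    """
    # Bs = 2: at least one of the adjacent blocks is intra coded.
    if P["intra"] or Q["intra"]:
        return 2
    # Bs = 1: non-zero transform coefficients on either side, different
    # reference pictures, a different number of motion vectors, or a motion
    # vector difference of at least one integer sample (4 quarter-samples).
    if P["nonzero_coeffs"] or Q["nonzero_coeffs"]:
        return 1
    if P["ref_pics"] != Q["ref_pics"] or len(P["mvs"]) != len(Q["mvs"]):
        return 1
    for (mvxp, mvyp), (mvxq, mvyq) in zip(P["mvs"], Q["mvs"]):
        if abs(mvxp - mvxq) >= 4 or abs(mvyp - mvyq) >= 4:
            return 1
    # Bs = 0: no deblocking filtering is applied.
    return 0
```

The returned values line up with section 2.2.2: 2 selects strong filtering, 1 weak filtering, and 0 no filtering.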
2.2.3 Threshold variables
The filtering on/off decision, the strong/weak filter selection, and the weak filtering process involve the thresholds β′ and t_C′. These are derived from the value of the luminance quantization parameter Q, as shown in Table 2-1. The derivation of Q is described in section 2.2.3.1.
TABLE 2-1
Q    | 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
β′   | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6 7 8
t_C′ | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1

Q    | 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37
β′   | 9 10 11 12 13 14 15 16 17 18 20 22 24 26 28 30 32 34 36
t_C′ | 1 1 1 1 1 1 1 1 2 2 2 2 3 3 3 3 4 4 4

Q    | 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
β′   | 38 40 42 44 46 48 50 52 54 56 58 60 62 64 - -
t_C′ | 5 5 6 6 7 8 9 10 11 13 14 16 18 20 22 24
The variable β is derived from β′ as follows:
β = β′ * ( 1 << ( BitDepth_Y − 8 ) )
The variable t_C is derived from t_C′ as follows:
t_C = t_C′ * ( 1 << ( BitDepth_Y − 8 ) )
how t is derived is described as follows C 'and beta'.
2.2.3.1t C 'and beta'
In section 8.7.2.5.3, for t C Decoding process of HEVC design of 'and β'.
Decision processing of 8.7.2.5.3 luma block edges
The inputs to this process are:
the luminance picture sample array recPicture_L,
A luminance location (xCb, yCb) specifying an upper left corner sample of the current luma coded block relative to an upper left corner luma sample of the current picture,
a luminance location (xBl, yBl) specifying an upper left corner sample of the current luminance block relative to an upper left corner sample of the current luminance coding block,
a variable edgeType specifying whether a vertical (EDGE_VER) or a horizontal (EDGE_HOR) edge is filtered,
-a variable bS specifying a boundary filtering strength.
The output of this process is:
the variables dE, dEp and dEq containing the decisions,
the variables β and t_C.
If edgeType is equal to EDGE_VER, the sample values p_i,k and q_i,k (with i = 0..3 and k = 0 and 3) are derived as follows:
q_i,k = recPicture_L[ xCb + xBl + i ][ yCb + yBl + k ] (8-284)
p_i,k = recPicture_L[ xCb + xBl − i − 1 ][ yCb + yBl + k ] (8-285)
Otherwise (edgeType is equal to EDGE_HOR), the sample values p_i,k and q_i,k (with i = 0..3 and k = 0 and 3) are derived as follows:
q_i,k = recPicture_L[ xCb + xBl + k ][ yCb + yBl + i ] (8-286)
p_i,k = recPicture_L[ xCb + xBl + k ][ yCb + yBl − i − 1 ] (8-287)
The variables QpQ and QpP are set equal to the QpY values of the coding units that include the coding blocks containing the samples q_0,0 and p_0,0, respectively.
The derivation of the variable qPL is as follows:
qPL=((QpQ+QpP+1)>>1) (8-288)
The value of the variable β′ is determined based on the luminance quantization parameter Q as specified in Table 8-11, where Q is derived as follows:
Q=Clip3(0,51,qPL+(slice_beta_offset_div2<<1)) (8-289)
where slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 of the slice containing samples q0, 0.
The derivation of the variable β is as follows:
β=β′*(1<<(BitDepthY-8)) (8-290)
The value of the variable t_C′ is determined based on the luminance quantization parameter Q as specified in Table 8-11, where Q is derived as follows:
Q=Clip3(0,53,qPL+2*(bS-1)+(slice_tc_offset_div2<<1))(8-291)
where slice_tc_offset_div2 is the value of the syntax element slice_tc_offset_div2 of the slice containing samples q0, 0.
The variable t_C is derived as follows:
t_C = t_C′ * ( 1 << ( BitDepthY − 8 ) ) (8-292)
Depending on the value of edgeType, the following applies:
If edgeType is equal to EDGE_VER, the following ordered steps apply:
the variables dpq0, dpq3, dp, dq and d are derived as follows:
dp0=Abs(p2,0-2*p1,0+p0,0) (8-293)
dp3=Abs(p2,3-2*p1,3+p0,3) (8-294)
dq0=Abs(q2,0-2*q1,0+q0,0) (8-295)
dq3=Abs(q2,3-2*q1,3+q0,3) (8-296)
dpq0=dp0+dq0 (8-297)
dpq3=dp3+dq3 (8-298)
dp=dp0+dp3 (8-299)
dq=dq0+dq3 (8-300)
d=dpq0+dpq3 (8-301)
the variables dE, dEp and dEq are set equal to 0.
When d is less than β, the following sequential steps apply:
the variable dpq is set equal to 2×dpq0.
For the sample location ( xCb + xBl, yCb + yBl ), the sample values p_i,0 and q_i,0 (with i = 0..3), and the variables dpq, β and t_C, the decision process for luminance samples specified in section 8.7.2.5.6 is invoked as input, and the output is assigned to the decision dSam0.
The variable dpq is set equal to 2×dpq3.
For the sample location ( xCb + xBl, yCb + yBl + 3 ), the sample values p_i,3 and q_i,3 (with i = 0..3), and the variables dpq, β and t_C, the decision process for luminance samples specified in section 8.7.2.5.6 is invoked as input, and the output is assigned to the decision dSam3.
The variable dE is set equal to 1.
When dSam0 is equal to 1 and dSam3 is equal to 1, the variable dE is set equal to 2.
When dp is less than ( β + ( β >> 1 ) ) >> 3, the variable dEp is set equal to 1.
When dq is less than ( β + ( β >> 1 ) ) >> 3, the variable dEq is set equal to 1.
Otherwise (edgeType is equal to EDGE_HOR), the following ordered steps apply:
the variables dpq0, dpq3, dp, dq and d are derived as follows:
dp0=Abs(p2,0-2*p1,0+p0,0) (8-302)
dp3=Abs(p2,3-2*p1,3+p0,3) (8-303)
dq0=Abs(q2,0-2*q1,0+q0,0) (8-304)
dq3=Abs(q2,3-2*q1,3+q0,3) (8-305)
dpq0=dp0+dq0 (8-306)
dpq3=dp3+dq3 (8-307)
dp=dp0+dp3 (8-308)
dq=dq0+dq3 (8-309)
d=dpq0+dpq3 (8-310)
the variables dE, dEp and dEq are set equal to 0.
When d is less than β, the following sequential steps apply:
the variable dpq is set equal to 2×dpq0.
For the sample location ( xCb + xBl, yCb + yBl ), the sample values p_0,0, p_3,0, q_0,0 and q_3,0, and the variables dpq, β and t_C, the decision process for luminance samples specified in section 8.7.2.5.6 is invoked as input, and the output is assigned to the decision dSam0.
The variable dpq is set equal to 2×dpq3.
For the sample position (xCb + xBl + 3, yCb + yBl), the sample values p 0,3 , p 3,3 , q 0,3 and q 3,3 and the variables dpq, β and t C are used as inputs to invoke the decision process for luminance samples specified in section 8.7.2.5.6, and the output is assigned to decision dSam3.
The variable dE is set equal to 1.
When dSam0 is equal to 1 and dSam3 is equal to 1, the variable dE is set equal to 2.
When dp is less than (β + (β >> 1)) >> 3, the variable dEp is set equal to 1.
When dq is less than (β + (β >> 1)) >> 3, the variable dEq is set equal to 1.
Tables 8-11 below show the derivation of the threshold variables β′ and tC′ from the input Q.
Tables 8 to 11

Q    : 0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18
β′   : 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  6  7  8
tC′  : 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1

Q    : 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37
β′   : 9  10 11 12 13 14 15 16 17 18 20 22 24 26 28 30 32 34 36
tC′  : 1  1  1  1  1  1  1  1  2  2  2  2  3  3  3  3  4  4  4

Q    : 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
β′   : 38 40 42 44 46 48 50 52 54 56 58 60 62 64 -  -
tC′  : 5  5  6  6  7  8  9  10 11 13 14 16 18 20 22 24
2.2.4 Filter on/off decision for 4 rows
The filtering on/off decision is made using a group of 4 lines as a unit to reduce computational complexity. Fig. 6 illustrates the pixels involved in this decision. The 6 pixels in the two red boxes of the first group of 4 lines are used to determine whether filtering is on or off for those 4 lines. The 6 pixels in the two red boxes of the second group of 4 lines are used to determine whether filtering is on or off for the second group.
Fig. 6 shows an example of pixels involved in the on/off decision and the strong/weak filter selection.
The following variables are defined:
dp0=|p 2,0 -2*p 1,0 +p 0,0 |
dp3=|p 2,3 -2*p 1,3 +p 0,3 |
dq0=|q 2,0 -2*q 1,0 +q 0,0 |
dq3=|q 2,3 -2*q 1,3 +q 0,3 |
If dp0 + dq0 + dp3 + dq3 < β, filtering of the first 4 lines is turned on and the strong/weak filter selection process is applied. If this condition is not met, the first 4 lines are not filtered.
Further, if the condition is satisfied, the variables dE, dEp1 and dEq1 are set as follows:
dE is set equal to 1
If dp0 + dp3 < (β + (β >> 1)) >> 3, the variable dEp1 is set equal to 1
If dq0 + dq3 < (β + (β >> 1)) >> 3, the variable dEq1 is set equal to 1
In a similar manner as described above, the filter on/off decision is made for the second group of 4 lines.
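As an illustration only (the function name, the p/q array layout, and the return convention are our own, not taken from the specification), the 4-line on/off decision above can be sketched in Python:

```python
def deblock_on_off(p, q, beta):
    """Sketch of the 4-line filter on/off decision of section 2.2.4.

    p[i][k] and q[i][k] are reconstructed luma samples at distance i
    from the edge on line k; only lines k = 0 and k = 3 of the 4-line
    group are inspected. Returns (on, dEp1, dEq1).
    """
    dp0 = abs(p[2][0] - 2 * p[1][0] + p[0][0])
    dp3 = abs(p[2][3] - 2 * p[1][3] + p[0][3])
    dq0 = abs(q[2][0] - 2 * q[1][0] + q[0][0])
    dq3 = abs(q[2][3] - 2 * q[1][3] + q[0][3])
    if dp0 + dq0 + dp3 + dq3 >= beta:
        return False, 0, 0  # filtering off for these 4 lines
    dEp1 = 1 if dp0 + dp3 < (beta + (beta >> 1)) >> 3 else 0
    dEq1 = 1 if dq0 + dq3 < (beta + (beta >> 1)) >> 3 else 0
    return True, dEp1, dEq1
```

The same helper is invoked once per 4-line group on each side of the edge.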
2.2.5 strong/weak Filter selection for 4 rows
If the filtering is on, a decision is made between strong filtering and weak filtering. The pixels involved are the same as those used for the filtering on/off decision. The first 4 rows are filtered using strong filtering if the following two sets of conditions are met. Otherwise, weak filtering is used.
1) 2*(dp0+dq0) < (β>>2), |p3 0 - p0 0 | + |q0 0 - q3 0 | < (β>>3) and |p0 0 - q0 0 | < (5*t C +1)>>1
2) 2*(dp3+dq3) < (β>>2), |p3 3 - p0 3 | + |q0 3 - q3 3 | < (β>>3) and |p0 3 - q0 3 | < (5*t C +1)>>1
In a similar manner, the decision whether to select strong or weak filtering is made for the second group of 4 lines.
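Under the same illustrative conventions as before (hypothetical helper name, p/q indexed as p[i][k]), the per-line condition sets 1) and 2) can be sketched as:

```python
def use_strong_filter(p, q, beta, tc, k):
    """Sketch of the strong/weak filter choice for one line k (k = 0 or 3).

    Returns True when the three conditions of section 2.2.5 hold for
    line k; strong filtering is chosen only if this holds for both
    k = 0 and k = 3 of the 4-line group.
    """
    dpk = abs(p[2][k] - 2 * p[1][k] + p[0][k])
    dqk = abs(q[2][k] - 2 * q[1][k] + q[0][k])
    return (2 * (dpk + dqk) < (beta >> 2)
            and abs(p[3][k] - p[0][k]) + abs(q[0][k] - q[3][k]) < (beta >> 3)
            and abs(p[0][k] - q[0][k]) < (5 * tc + 1) >> 1)
```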
2.2.6 Strong Filtering
For strong filtering, the filtered pixel value is obtained by the following formula. Note that for each P and Q block, three pixels are modified using four pixels as inputs, respectively.
p0′ = (p2 + 2*p1 + 2*p0 + 2*q0 + q1 + 4) >> 3
q0′ = (p1 + 2*p0 + 2*q0 + 2*q1 + q2 + 4) >> 3
p1′ = (p2 + p1 + p0 + q0 + 2) >> 2
q1′ = (p0 + q0 + q1 + q2 + 2) >> 2
p2′ = (2*p3 + 3*p2 + p1 + p0 + q0 + 4) >> 3
q2′ = (p0 + q0 + q1 + 3*q2 + 2*q3 + 4) >> 3
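The six equations above can be collected into a small sketch (illustrative names; integer sample values assumed):

```python
def strong_filter_line(p, q):
    """Sketch of the strong luma filter for one line of the edge.

    p = [p0, p1, p2, p3] and q = [q0, q1, q2, q3] are the four input
    pixels on each side; three pixels per side are modified.
    Returns ([p0', p1', p2'], [q0', q1', q2']).
    """
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    p_out = [
        (p2 + 2 * p1 + 2 * p0 + 2 * q0 + q1 + 4) >> 3,  # p0'
        (p2 + p1 + p0 + q0 + 2) >> 2,                   # p1'
        (2 * p3 + 3 * p2 + p1 + p0 + q0 + 4) >> 3,      # p2'
    ]
    q_out = [
        (p1 + 2 * p0 + 2 * q0 + 2 * q1 + q2 + 4) >> 3,  # q0'
        (p0 + q0 + q1 + q2 + 2) >> 2,                   # q1'
        (p0 + q0 + q1 + 3 * q2 + 2 * q3 + 4) >> 3,      # q2'
    ]
    return p_out, q_out
```

On a flat signal the filter leaves the samples unchanged; across a small step it smooths both sides toward the edge.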
2.2.7 Weak Filtering
Delta is defined as follows.
Δ = (9*(q0 - p0) - 3*(q1 - p1) + 8) >> 4
When abs(Δ) is less than tC*10, the following applies:
Δ = Clip3(-tC, tC, Δ)
p0′ = Clip1Y(p0 + Δ)
q0′ = Clip1Y(q0 - Δ)
If dEp1 is equal to 1:
Δp = Clip3(-(tC >> 1), tC >> 1, (((p2 + p0 + 1) >> 1) - p1 + Δ) >> 1)
p1′ = Clip1Y(p1 + Δp)
If dEq1 is equal to 1:
Δq = Clip3(-(tC >> 1), tC >> 1, (((q2 + q0 + 1) >> 1) - q1 - Δ) >> 1)
q1′ = Clip1Y(q1 + Δq)
note that for each P and Q block, up to two pixels are modified using three pixels as inputs, respectively.
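A minimal sketch of the weak filter, assuming 8-bit samples (so Clip1Y is approximated by a 0..255 clip) and taking the dEp1/dEq1 side flags of section 2.2.4 as inputs; all names are illustrative:

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def weak_filter_line(p, q, tc, dEp1, dEq1, max_val=255):
    """Sketch of the weak luma filter for one line.

    p = [p0, p1, p2], q = [q0, q1, q2]; at most two pixels per side
    are modified. max_val stands in for the Clip1Y upper bound.
    """
    p0, p1, p2 = p
    q0, q1, q2 = q
    delta = (9 * (q0 - p0) - 3 * (q1 - p1) + 8) >> 4
    if abs(delta) >= tc * 10:
        return p, q  # filtering skipped for this line
    delta = clip3(-tc, tc, delta)
    p = p[:]
    q = q[:]
    p[0] = clip3(0, max_val, p0 + delta)  # Clip1Y
    q[0] = clip3(0, max_val, q0 - delta)
    if dEp1 == 1:
        dp = clip3(-(tc >> 1), tc >> 1, (((p2 + p0 + 1) >> 1) - p1 + delta) >> 1)
        p[1] = clip3(0, max_val, p1 + dp)
    if dEq1 == 1:
        dq = clip3(-(tc >> 1), tc >> 1, (((q2 + q0 + 1) >> 1) - q1 - delta) >> 1)
        q[1] = clip3(0, max_val, q1 + dq)
    return p, q
```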
2.2.8 chroma filtering
The boundary strength Bs for chroma filtering is inherited from luma. Chroma filtering is performed if Bs > 1. No filter selection process is performed for chroma, since only one filter can be applied. The filtered sample values p0′ and q0′ are derived as follows.
Δ = Clip3(-tC, tC, ((((q0 - p0) << 2) + p1 - q1 + 4) >> 3))
p0′ = Clip1C(p0 + Δ)
q0′ = Clip1C(q0 - Δ)
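The chroma filter above can be sketched as follows (illustrative names; Clip1C approximated by an assumed 8-bit clip):

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def chroma_filter_line(p0, p1, q0, q1, tc, max_val=255):
    """Sketch of the chroma deblocking filter for one line
    (applied when Bs > 1); max_val stands in for Clip1C."""
    delta = clip3(-tc, tc, ((((q0 - p0) << 2) + p1 - q1 + 4) >> 3))
    return clip3(0, max_val, p0 + delta), clip3(0, max_val, q0 - delta)
```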
In the 4:2:2 chroma format, each chroma block has a rectangular shape and is coded using at most two square transforms. This introduces additional boundaries between chroma transform blocks. These boundaries are not deblocked (the thick dashed line horizontally through the center in Fig. 7).
Fig. 7 shows an example of deblocking behavior in the 4:2:2 chroma format.
2.3 extension of quantization parameter value range
The QP range is extended from [0, 51] to [0, 63], and tC′ and β′ are derived as follows. The table sizes for β and tC increase from 52 and 54 entries to 64 and 66 entries, respectively.
8.7.2.5.3 decision process for luminance block edges
The derivation of the variable qPL is as follows:
qP L =((Qp Q +Qp P +1)>>1) (2-38)
the value of the variable β' is determined according to the specifications of tables 2-3 based on the luminance quantization parameter Q derived as follows:
Q = Clip3(0, 63, qPL + (slice_beta_offset_div2 << 1)) (2-39)
where slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 of the slice containing sample q0,0.
The derivation of the variable β is as follows:
β=β′*(1<<(BitDepthY-8)) (2-40)
The value of the variable tC′ is determined according to the specifications of Tables 2-3 based on the luminance quantization parameter Q derived as follows:
Q = Clip3(0, 65, qPL + 2*(bS - 1) + (slice_tc_offset_div2 << 1)) (2-41)
where slice_tc_offset_div2 is the value of the syntax element slice_tc_offset_div2 of the slice containing sample q0,0.
The variable tC is derived as follows:
tC = tC′ * (1 << (BitDepthY - 8)) (2-42)
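The derivation in this section can be sketched as follows. The β′/tC′ lookup tables are passed in as lists, since the full extended tables appear only as an image in the source; all names are illustrative:

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def derive_beta_tc(qp_l, bs, slice_beta_offset_div2, slice_tc_offset_div2,
                   beta_table, tc_table, bit_depth=8):
    """Sketch of the section 2.3 derivation with the QP range
    extended to [0, 63]; bit_depth stands in for BitDepthY."""
    q_beta = clip3(0, 63, qp_l + (slice_beta_offset_div2 << 1))
    beta = beta_table[q_beta] * (1 << (bit_depth - 8))
    q_tc = clip3(0, 65, qp_l + 2 * (bs - 1) + (slice_tc_offset_div2 << 1))
    tc = tc_table[q_tc] * (1 << (bit_depth - 8))
    return beta, tc
```

For bit depths above 8 both thresholds scale by a power of two, exactly as in equations (2-40) and (2-42).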
tables 2-3 below are the derivation of threshold variables β 'and tC' from input Q.
Tables 2 to 3
[The extended β′ and tC′ table appears as an image in the original document.]
2.4 initialization of context variables
In context-based adaptive binary arithmetic coding (CABAC), the initial state of a context variable depends on the QP of the slice. The initialization process is described below.
Initialization process for 9.3.2.2 context variables
The output of this process is the initialized CABAC context variable indexed by ctxTable and ctxIdx.
For each context variable, two variables pStateIdx and valMps are initialized.
Note 1: the variable pStateIdx corresponds to the probability state index and the variable valMps corresponds to the value of the most probable symbol.
From the 8-bit table entry initValue, the two 4-bit variables slopeIdx and offsetIdx are derived as follows:
slopeIdx=initValue>>4
offsetIdx=initValue&15 (9-4)
The variables m and n used in the initialization of the context variables are derived from slopeIdx and offsetIdx as follows:
m=slopeIdx*5-45
n=(offsetIdx<<3)–16 (9-5)
The two values assigned to pStateIdx and valMps for initialization are derived from SliceQpY. Given the variables m and n, the initialization is specified as follows:
preCtxState=Clip3(1,126,((m*Clip3(0,51,SliceQp Y ))>>4)+n)
valMps=(preCtxState<=63)?0:1
pStateIdx=valMps?(preCtxState-64):(63-preCtxState) (9-6)
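Equations (9-4) to (9-6) can be sketched as follows (illustrative names; Clip3 written inline with min/max):

```python
def init_context(init_value, slice_qp_y):
    """Sketch of CABAC context-variable initialization (section 9.3.2.2).

    Returns (pStateIdx, valMps) for one context variable, given its
    8-bit table entry initValue and the slice QP.
    """
    slope_idx = init_value >> 4       # upper 4 bits (9-4)
    offset_idx = init_value & 15      # lower 4 bits
    m = slope_idx * 5 - 45            # (9-5)
    n = (offset_idx << 3) - 16
    pre_ctx_state = min(126, max(1, ((m * min(51, max(0, slice_qp_y))) >> 4) + n))
    val_mps = 0 if pre_ctx_state <= 63 else 1
    p_state_idx = (pre_ctx_state - 64) if val_mps else (63 - pre_ctx_state)
    return p_state_idx, val_mps
```

With initValue 154 the slope m is 0 and the offset n is 64, so preCtxState lands exactly on the MPS boundary regardless of the slice QP, giving a near-equiprobable starting state.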
The derivation of initType depends on the value of the cabac_init_flag syntax element for P and B slice types. The variable initType is derived as follows:
[The derivation of initType appears as an image in the original document.]
3. Example of problems solved by the embodiments
In dependent quantization, QPc+1 is used for quantization. However, the deblocking filtering process uses QPc, which is inconsistent.
In addition, if QPc+1 were used in the deblocking filtering process, then because QPc can be set to the maximum value, i.e., 63, it is unclear how to handle the mapping table between Q and tC/β.
4. Examples of the embodiments
To solve this problem, various methods may be applied to the deblocking filtering process, which depends on quantization parameters of a block to be filtered. Other types of processes, such as bilateral filtering, may also be applicable, depending on the quantization parameter associated with a block.
The techniques listed in detail below should be considered as examples explaining the general concepts, and should not be interpreted narrowly. Furthermore, the techniques may be combined in any manner. The minimum and maximum allowed QP are denoted QPmin and QPmax, respectively. The signaled quantization parameter of the current CU is denoted QPc, and the quantization/inverse quantization process depends on QPc+N to derive the quantization step size (e.g., N=1 in the current dependent quantization design). The n-th entries of the Tc′ and β′ tables are denoted Tc′[n] and β′[n].
1. It is proposed that whether and how deblocking filtering is applied may depend on whether dependent scalar quantization is used.
a. For example, the QP used in deblocking filtering depends on whether the dep_quant_enabled_flag is equal to 0 or 1.
2. It is proposed to use one and the same QP in dependency quantization, deblocking filtering or/and any other process using QP as input parameter.
a. In one example, QPc is used in dependent quantization rather than QPc+N. N is an integer such as 1, 3, 6, 7 or -1, -3, -6, -7.
b. In one example, QPc+N is used in deblocking filtering or/and any other process that uses QP as an input parameter.
c. Before QPc+N is used, it is clipped to the valid range.
3. It is proposed that when QPc+N is used in dependent quantization, the allowed QP range for dependent quantization is set to [QPmin-N, QPmax-N] rather than [QPmin, QPmax].
a. Alternatively, the allowed QP range is set to [Max(QPmin-N, QPmin), Min(QPmax-N, QPmax)].
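Bullet 3 and its alternative 3.a can be sketched as follows (hypothetical helper; not from the specification):

```python
def allowed_qp_range(qp_min, qp_max, n, clip_to_valid=False):
    """Sketch of bullet 3: when QPc + N is used inside dependent
    quantization, the signaled-QP range shrinks so that QPc + N
    never exceeds QPmax. clip_to_valid=True gives alternative 3.a,
    which also keeps the range inside [QPmin, QPmax].
    """
    lo, hi = qp_min - n, qp_max - n
    if clip_to_valid:
        lo, hi = max(lo, qp_min), min(hi, qp_max)
    return lo, hi
```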
4. It is proposed to use weaker/stronger deblocking filtering when dependency quantization is used than when dependency quantization is not used.
a. In one example, when dependency quantization is enabled, the encoder selects weaker/stronger deblocking filtering and signals it to the decoder.
b. In one example, when dependency quantization is enabled, smaller/larger thresholds Tc and β are implicitly used at both the encoder and decoder.
5. When QPc+N is used in the deblocking filtering process, more entries in the Tc′ and β′ tables (e.g., Tables 2-3) may be needed for QPmax+N.
a. Alternatively, the same tables may be used; however, QPc+N is first clipped to the range [QPmin, QPmax].
b. In one example, the Tc' table is extended to: tc '[66] =50 and tc' [67] =52.
c. In one example, the Tc' table is extended as: tc '[66] =49 and tc' [67] =50.
d. In one example, the Tc' table is extended as: tc '[66] =49 and tc' [67] =51.
e. In one example, the Tc' table is extended as: tc '[66] =48 and tc' [67] =50.
f. In one example, the Tc' table is extended as: tc '[66] =50 and tc' [67] =51.
g. In one example, the β' table is extended to: beta '[64] =90 and beta' [65] =92.
h. In one example, the β' table is extended to: beta '[64] =89 and beta' [65] =90.
i. In one example, the β' table is extended to: beta '[64] =89 and beta' [65] =91.
j. In one example, the β' table is extended to: beta '[64] =88 and beta' [65] =90.
k. In one example, the β' table is extended to: beta '[64] =90 and beta' [65] =91.
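Alternative 5.a, which reuses the existing tables by clipping the lookup index, can be sketched as follows (illustrative name; the default bounds assume the extended range [0, 63]):

```python
def qp_for_deblocking(qp_c, n, qp_min=0, qp_max=63):
    """Sketch of alternative 5.a: reuse the existing Tc'/beta' tables
    by first clipping QPc + N back into [QPmin, QPmax], instead of
    appending the extra table entries of examples 5.b-5.k."""
    return min(qp_max, max(qp_min, qp_c + n))
```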
6. When dependent quantization is enabled, initialization of the CABAC context depends on QPc+N instead of QPc.
7. Quantization parameters signaled by higher layers may be assigned different semantics based on whether dependency quantization is used or not.
a. In one example, it may have different semantics for qp indicated in the picture parameter set/picture header (i.e., init_qp_minus26 in HEVC).
i. When dependent quantization is OFF, init_qp_minus26 plus 26 specifies the initial value of SliceQpY for each slice referring to the PPS, or the initial value of the quantization parameters of all slices (tiles) referring to the PPS/picture header.
ii. When dependent quantization is ON, init_qp_minus26 plus 27 specifies the initial value of SliceQpY for each slice referring to the PPS, or the initial value of the quantization parameters of all slices referring to the PPS/picture header.
b. In one example, for the delta qp indicated in the stripe header/slice group header (i.e., slice_qp_delta in HEVC), it may have different semantics.
i. When dependent quantization is OFF, slice_qp_delta specifies the initial value of QpY to be used for the coded blocks in the slice/slice group until modified by the value of CuQpDeltaVal in the coding unit layer. The initial value of the QpY quantization parameter for the slice/slice group, SliceQpY, is derived as follows:
SliceQpY = 26 + init_qp_minus26 + slice_qp_delta
ii. When dependent quantization is ON, slice_qp_delta specifies the initial value of QpY to be used for the coded blocks in the slice/slice group until modified by the value of CuQpDeltaVal in the coding unit layer. The initial value of the QpY quantization parameter for the slice/slice group, SliceQpY, is derived as follows:
SliceQpY = 26 + init_qp_minus26 + slice_qp_delta + 1
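The two SliceQpY derivations of bullet 7.b can be sketched together as (illustrative name):

```python
def slice_qp_y(init_qp_minus26, slice_qp_delta, dep_quant_on):
    """Sketch of bullet 7.b: the initial slice QP is biased by 1
    when dependent quantization is ON."""
    qp = 26 + init_qp_minus26 + slice_qp_delta
    return qp + 1 if dep_quant_on else qp
```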
8. whether the proposed method is enabled or disabled may be signaled in SPS/PPS/VPS/sequence header/picture header/slice group header/CTU group, etc.
5. Example of another embodiment
In one embodiment, QP is used in deblocking filtering C +1. The newly added portion is highlighted.
8.7.2.5.3 Decision process for luma block edges
The inputs to this process are:
a luma picture sample array recPicture L ,
A luminance location (xCb, yCb) specifying an upper left corner sample of the current luma coded block relative to an upper left corner luma sample of the current picture,
a luminance location (xBl, yBl) specifying an upper left corner sample of the current luminance block relative to an upper left corner sample of the current luminance coding block,
a variable edgeType specifying whether to filter vertical (EDGE _ VER) or horizontal (EDGE _ HOR) EDGEs,
-a variable bS specifying a boundary filtering strength.
The output of this process is:
the variables dE, diep and dEq containing the decisions,
-variables β and t C
If edgeType is equal to EDGE_VER, the sample values p i,k and q i,k (i=0..3 and k=0 and 3) are derived as follows:
q i,k =recPicture L [xCb+xBl+i][yCb+yBl+k] (8-284)
p i,k =recPicture L [xCb+xBl-i-1][yCb+yBl+k] (8-285)
otherwise (edgeType equals EDGE_HOR), the sample value p i,k And q i,k The derivation of (i=0..3 and k=0 and 3) is as follows:
q i,k =recPicture L [xCb+xBl+k][yCb+yBl+i] (8-286)
p i,k =recPicture L [xCb+xBl+k][yCb+yBl-i-1] (8-287)
The variables QpQ and QpP are set equal to the QpY values of the coding units that include the coding blocks containing the samples q 0,0 and p 0,0 , respectively.
If dep_quant_enabled_flag of the coding unit including the coding block containing sample q 0,0 is equal to 1, QpQ is set equal to QpQ+1. If dep_quant_enabled_flag of the coding unit including the coding block containing sample p 0,0 is equal to 1, QpP is set equal to QpP+1.
Variable qP L The derivation of (2) is as follows:
qP L =((Qp Q +Qp P +1)>>1) (8-288)
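The QpQ/QpP adjustment and the rounded averaging in equation (8-288) can be sketched as (illustrative names):

```python
def qp_l_for_edge(qp_q, qp_p, dep_quant_q, dep_quant_p):
    """Sketch of the modified 8.7.2.5.3 derivation: each side's QP is
    incremented when its coding unit has dep_quant_enabled_flag == 1,
    then the two sides are averaged with rounding (equation 8-288)."""
    if dep_quant_q:
        qp_q += 1
    if dep_quant_p:
        qp_p += 1
    return (qp_q + qp_p + 1) >> 1
```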
as specified in tables 8-11, the value of the variable β' is determined based on the luminance quantization parameter Q, and is derived as follows:
Q=Clip3(0,51,qP L +(slice_beta_offset_div2<<1)) (8-289)
where slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 of the slice containing sample q 0,0 .
The derivation of the variable β is as follows:
β=β′*(1<<(BitDepth Y -8)) (8-290)
as specified in tables 8-11, the variable t is determined based on the luminance quantization parameter Q C The value of' is derived as follows:
Q=Clip3(0,53,qPL+2*(bS-1)+(slice_tc_offset_div2<<1))(8-291)
where slice_tc_offset_div2 is the value of the syntax element slice_tc_offset_div2 of the slice containing sample q 0,0 .
Variable t C The derivation of (2) is as follows:
tC = tC′ * (1 << (BitDepthY - 8)) (8-292)
depending on the value of the edgeType, the following formula applies:
-if the edgeType is equal to edge_ver, the following ordered steps apply:
the variables dpq0, dpq3, dp, dq and d are derived as follows:
dp0=Abs(p 2,0 -2*p 1,0 +p 0,0 ) (8-293)
dp3=Abs(p 2,3 -2*p 1,3 +p 0,3 ) (8-294)
dq0=Abs(q 2,0 -2*q 1,0 +q 0,0 ) (8-295)
dq3=Abs(q 2,3 -2*q 1,3 +q 0,3 ) (8-296)
dpq0=dp0+dq0 (8-297)
dpq3=dp3+dq3 (8-298)
dp=dp0+dp3 (8-299)
dq=dq0+dq3 (8-300)
d=dpq0+dpq3 (8-301)
the variables dE, dEp and dEq are set equal to 0.
When d is less than β, the following sequential steps apply:
the variable dpq is set equal to 2×dpq0.
For the sample position (xCb + xBl, yCb + yBl), the sample values p i,0 and q i,0 (where i = 0..3) and the variables dpq, β and t C are used as inputs to invoke the decision process for luminance samples specified in section 8.7.2.5.6, and the output is assigned to decision dSam0.
The variable dpq is set equal to 2×dpq3.
For the sample position (xCb + xBl, yCb + yBl + 3), the sample values p i,3 and q i,3 (where i = 0..3) and the variables dpq, β and t C are used as inputs to invoke the decision process for luminance samples specified in section 8.7.2.5.6, and the output is assigned to decision dSam3.
The variable dE is set equal to 1.
When dSam0 is equal to 1 and dSam3 is equal to 1, the variable dE is set equal to 2.
When dp is less than (β + (β >> 1)) >> 3, the variable dEp is set equal to 1.
When dq is less than (β + (β >> 1)) >> 3, the variable dEq is set equal to 1.
Otherwise (edgeType equals edge_hor), the following ordered steps apply:
The variables dpq0, dpq3, dp, dq and d are derived as follows:
dp0=Abs(p 2,0 -2*p 1,0 +p 0,0 ) (8-302)
dp3=Abs(p 2,3 -2*p 1,3 +p 0,3 ) (8-303)
dq0=Abs(q 2,0 -2*q 1,0 +q 0,0 ) (8-304)
dq3=Abs(q 2,3 -2*q 1,3 +q 0,3 ) (8-305)
dpq0=dp0+dq0 (8-306)
dpq3=dp3+dq3 (8-307)
dp=dp0+dp3 (8-308)
dq=dq0+dq3 (8-309)
d=dpq0+dpq3 (8-310)
the variables dE, dEp and dEq are set equal to 0.
When d is less than β, the following sequential steps apply:
the variable dpq is set equal to 2×dpq0.
For the sample position (xCb + xBl, yCb + yBl), the sample values p 0,0 , p 3,0 , q 0,0 and q 3,0 and the variables dpq, β and t C are used as inputs to invoke the decision process for luminance samples specified in section 8.7.2.5.6, and the output is assigned to decision dSam0.
The variable dpq is set equal to 2×dpq3.
For the sample position (xCb + xBl + 3, yCb + yBl), the sample values p 0,3 , p 3,3 , q 0,3 and q 3,3 and the variables dpq, β and t C are used as inputs to invoke the decision process for luminance samples specified in section 8.7.2.5.6, and the output is assigned to decision dSam3.
The variable dE is set equal to 1.
When dSam0 is equal to 1 and dSam3 is equal to 1, the variable dE is set equal to 2.
When dp is less than (β + (β >> 1)) >> 3, the variable dEp is set equal to 1.
When dq is less than (β + (β >> 1)) >> 3, the variable dEq is set equal to 1.
Tables 8-11 below show the derivation of the threshold variables β′ and tC′ from the input Q.
Tables 8 to 11

Q    : 0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18
β′   : 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  6  7  8
tC′  : 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1

Q    : 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37
β′   : 9  10 11 12 13 14 15 16 17 18 20 22 24 26 28 30 32 34 36
tC′  : 1  1  1  1  1  1  1  1  2  2  2  2  3  3  3  3  4  4  4

Q    : 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
β′   : 38 40 42 44 46 48 50 52 54 56 58 60 62 64 -  -
tC′  : 5  5  6  6  7  8  9  10 11 13 14 16 18 20 22 24
Fig. 8 is a block diagram of a video processing apparatus 800. The apparatus 800 may be used to implement one or more of the methods described herein. The apparatus 800 may be implemented in a smart phone, tablet, computer, internet of things (IoT) receiver, or the like. The apparatus 800 may include one or more processors 802, one or more memories 804, and video processing hardware 806. The one or more processors 802 may be configured to implement one or more of the methods described in this document. One or more memories 804 may be used to store data and code for implementing the methods and techniques described herein. Video processing hardware 806 may be used to implement some of the techniques described in this document in hardware circuitry.
Fig. 10 is a flow chart of a method 1000 of processing video. The method 1000 includes: performing (1005) a determination to process a first video block using a dependency scalar quantization; determining (1010) a first Quantization Parameter (QP) to be used for deblocking filtering of the first video block based on a determination that the first video block is processed using dependent scalar quantization; and performing (1015) further processing on the first video block using deblocking filtering according to the first QP.
Some examples of determining candidates for encoding and their use are described in section 4 of this document with reference to method 1000. For example, quantization parameters for deblocking filtering may be determined depending on the use of dependent scalar quantization, as described in section 4.
Referring to method 1000, video blocks may be encoded in a video bitstream, wherein bit efficiency may be achieved by using bitstream generation rules related to motion information prediction.
The method may include: wherein the determination using the dependency scalar quantization is based on the value of the flag signal.
The method may include: wherein the first QP used for deblocking filtering is used for dependency scalar quantization and other processing techniques for the first video block.
The method may include: wherein the first QP is QPc.
The method may include: wherein the first QP is QPc+N, where N is an integer.
The method may include: wherein QPc+N is modified from a previous value to fit within the threshold range.
The method may include: wherein the threshold range is [ Max (QPmin-N, QPmin), min (QPmax-N, QPmax) ].
The method may include: determining, by the processor, that the second video block is not processed using the dependency scalar quantization; and performing further processing on the second video block using another deblocking filter, wherein scalar quantization based on dependencies is used for the first video block, the deblocking filter for the first video block being stronger or weaker than the another deblocking filter for processing the second video block.
The method may include: wherein the deblocking filtering is selected by the encoder, the method further comprising: signaling to the decoder that dependency scalar quantization is enabled for the first video block.
The method may include: wherein the encoder and decoder use smaller or larger thresholds Tc and β based on the use of dependency scalar quantization.
The method may include: wherein the first QP is QPc+N and for QPmax+N, additional entries are used for Tc 'and β' tables.
The method may include: wherein the first QP is QPc+N and is clipped to be within the threshold based on QPc+N, the tables are the same for QPmax+N, tc 'and β'.
The method may include: wherein the Tc' table is extended as: tc '[66] =50 and tc' [67] =52.
The method may include: wherein the Tc' table is extended as: tc '[66] =49 and tc' [67] =50.
The method may include: wherein the Tc' table is extended as: tc '[66] =49 and tc' [67] =51.
The method may include: wherein the Tc' table is extended as: tc '[66] =48 and tc' [67] =50.
The method may include: wherein the Tc' table is extended as: tc '[66] =50 and tc' [67] =51.
The method may include: wherein, extend the β' table to: beta '[64] =90 and beta' [65] =92.
The method may include: wherein, extend the β' table to: beta '[64] =89 and beta' [65] =90.
The method may include: wherein, extend the β' table to: beta '[64] =89 and beta' [65] =91.
The method may include: wherein, extend the β' table to: beta '[64] =88 and beta' [65] =90.
The method may include: wherein, extend the β' table to: beta '[64] =90 and beta' [65] =91.
The method may include: wherein initialization of context-based adaptive binary arithmetic coding (CABAC) is based on the first QP being QPc+N when the first video block is processed using dependent scalar quantization.
The method may include: wherein, based on the use of the dependent scalar quantization, a determination to process the first video block using the dependent scalar quantization is signaled with semantics.
The method may include: wherein the first QP is indicated in a picture parameter set or picture header.
The method may include: wherein the picture parameter set or picture header indicates init_qp_minus26 plus 26, and init_qp_minus26 plus 26 specifies an initial value of SliceQpy of a slice referencing PPS, or an initial value of a quantization parameter of a slice (tile) referenced in the PPS or picture header, based on the dependency quantization being off.
The method may include: wherein a picture parameter set or picture header indicates init_qp_minus26 plus 27, init_qp_minus26 plus 27 specifies an initial value of SliceQpy of a slice of the reference PPS or an initial value of a quantization parameter of a slice referenced in the PPS or picture header, based on dependency quantization being used.
The method may include: wherein the first QP is indicated in a slice header, or a slice group header.
The method may include: wherein the slice header or slice group header indicates slice_qp_delta, slice_qp_delta specifies an initial value of QpY, and the initial value of QpY is used for the coded blocks in the slice or slice group until modified by the value of CuQpDeltaVal in the coding unit layer, wherein the initial value of QpY is SliceQpY, wherein SliceQpY = 26 + init_qp_minus26 + slice_qp_delta.
The method may include: wherein a slice header or slice group header indicates slice_qp_delta, slice_qp_delta specifies an initial value of QpY, and the initial value of QpY is used for the coded blocks in the slice or slice group until modified by the value of CuQpDeltaVal in the coding unit layer, wherein the initial value of QpY is SliceQpY, wherein SliceQpY = 26 + init_qp_minus26 + slice_qp_delta + 1.
The method may include: wherein the method is applied based on being signaled in SPS, PPS, VPS, a sequence header, a picture header, a slice group header, or a group of Coding Tree Units (CTUs).
Fig. 11 is a flow chart of a video processing method 1100 for processing video. The method 1100 includes: determining (1105) one or more deblocking filter parameters to be used in a deblocking filtering process of the current video block based on whether a current video block is processed using a dependent scalar quantization, wherein a set of allowable reconstruction values of transform coefficients corresponding to the dependent scalar quantization is dependent on at least one transform coefficient stage preceding the current transform coefficient stage; and performing (1110) a deblocking filtering process on the current video block based on the one or more deblocking filtering parameters.
The method may include: wherein determining one or more deblocking filter parameters to be used in the deblocking filtering process of the current video block specifically includes: in the case of using a dependent scalar quantization on the current video block, determining one or more deblocking filter parameters corresponding to weaker deblocking filtering; or in the case of using a dependent scalar quantization on the current video block, one or more deblocking filter parameters corresponding to stronger deblocking filtering.
The method may include: wherein the stronger deblocking filter modifies more pixels and the weaker deblocking filter modifies less pixels.
The method may include: wherein determining one or more deblocking filter parameters to be used in the deblocking filtering process of the current video block specifically includes: in the case of using a dependent scalar quantization for the current video block, smaller thresholds Tc and β are selected; or in the case of using a dependent scalar quantization on the current video block, larger thresholds Tc and β are selected.
The method may include: wherein determining one or more deblocking filter parameters to be used in the deblocking filtering process of the current video block specifically includes: the quantization parameters included in the one or more deblocking filter parameters are determined based on whether a current video block is processed using dependent scalar quantization.
The method may include: wherein, in the case of using dependent scalar quantization for the current video block, the quantization parameter used for the deblocking filtering process is set equal to QPc+N, where QPc is the signaled quantization parameter of the current video block, QPc+N is the quantization parameter used for dependent scalar quantization, N is an integer, and N >= 1.
The method may include: wherein in case a dependent scalar quantization is used for the current video block, at least one additional entry is provided in the mapping table, wherein the mapping table indicates a mapping relation between the quantization parameter and the threshold value β 'or a mapping relation between the quantization parameter and the threshold value Tc'.
The method may include: wherein the mapping table is extended according to any one of the following options: tc '[66] =50 and tc' [67] =52, tc '[66] =49 and tc' [67] =50, tc '[66] =49 and tc' [67] =51, tc '[66] =48 and tc' [67] =50, tc '[66] =50 and tc' [67] =51.
The method may include: wherein the mapping table is extended according to any one of the following options: beta '[64] =90 and beta' [65] =92, beta '[64] =89 and beta' [65] =90, beta '[64] =89 and beta' [65] =91, beta '[64] =88 and beta' [65] =90, beta '[64] =90 and beta' [65] =91.
The method may include: wherein, in the case that QPc+N is greater than QPmax or less than QPmin, QPc+N is clipped into the range [QPmin, QPmax], where QPmin and QPmax are the minimum and maximum allowed quantization parameters, respectively.
Fig. 13 is a flow chart of a video processing method 1300 for processing video. The method 1300 includes: determining (1305) whether to apply a deblocking filtering process based on whether a current video block is processed using a dependent scalar quantization, wherein a set of allowable reconstruction values of transform coefficients corresponding to the dependent scalar quantization is dependent on at least one transform coefficient stage preceding the current transform coefficient stage; and performing (1310) a deblocking filtering process on the current video block based on the determination that the deblocking filtering process is applied.
Fig. 12 is a flow chart of a video processing method 1200 for processing video. The method 1200 includes: in the case that the dependent scalar quantization is enabled for the current video block, determining (1205) a quantization parameter to be used in the dependent scalar quantization for the current video block, wherein a set of allowable reconstruction values for the transform coefficients corresponding to the dependent scalar quantization depends on at least one transform coefficient level preceding the current transform coefficient level; and performing (1210) a dependent scalar quantization on the current video block based on the determined quantization parameter, wherein the determined quantization parameter is also applied to video processing that differs from the dependent scalar quantization using the quantization parameter as an input parameter for the current video block.
The method may include: wherein the video processing other than the dependent scalar quantization includes a deblocking filtering process.
The method may include: wherein the determined quantization parameter is QPc in case dependent scalar quantization is enabled for the current video block, wherein QPc is a signaled quantization parameter for the current video block.
The method may include: wherein, in case dependent scalar quantization is enabled for the current video block, the determined quantization parameter is QPc+N, wherein QPc is a signaled quantization parameter for the current video block and N is an integer.
The method may include: wherein QPc+N is clipped to a threshold range before it is used.
The method may include: wherein the threshold range is [QPmin, QPmax], where QPmin and QPmax are the minimum and maximum allowed quantization parameters, respectively.
The method may include: wherein, in case dependent scalar quantization is enabled for the current video block, the allowed QPc range for the dependent scalar quantization is [QPmin-N, QPmax-N], wherein QPmin and QPmax are the minimum and maximum values of QPc, respectively, allowed in case dependent scalar quantization is not enabled for the current video block.
The method may include: wherein, in case dependent scalar quantization is enabled for the current video block, the allowed QPc range for the dependent scalar quantization is [Max(QPmin-N, QPmin), Min(QPmax-N, QPmax)], wherein QPmin and QPmax are the minimum and maximum values of QPc, respectively, allowed in case dependent scalar quantization is not enabled for the current video block.
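As a rough illustration of the clipping and range variants described above, the following Python sketch computes the effective quantization parameter QPc+N clipped into [QPmin, QPmax], and the tightened allowed range of the signaled QPc, [Max(QPmin-N, QPmin), Min(QPmax-N, QPmax)]. The function and parameter names, and the default range [0, 63], are illustrative assumptions rather than values taken from any specification:

```python
def effective_qp(qpc, n, qp_min=0, qp_max=63):
    """Clip QPc + N into the allowed quantization parameter range [QPmin, QPmax]."""
    return max(qp_min, min(qp_max, qpc + n))

def allowed_qpc_range(n, qp_min=0, qp_max=63):
    """Allowed range of the signaled QPc when dependent scalar quantization
    is enabled, per the second variant above:
    [Max(QPmin - N, QPmin), Min(QPmax - N, QPmax)].
    Choosing QPc inside this range guarantees QPc + N stays legal without clipping."""
    return (max(qp_min - n, qp_min), min(qp_max - n, qp_max))
```

For example, with N = 1 and a [0, 63] range, the tightened range is (0, 62), so that QPc+N never exceeds 63.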
The method may include: wherein, in case dependent scalar quantization is enabled, the initialization of context-based adaptive binary arithmetic coding (CABAC) is based on QPc+N.
The method may include: wherein a higher-level quantization parameter is assigned different semantics based on whether dependent scalar quantization is enabled.
The method may include: wherein the quantization parameter is signaled by a first parameter in a picture parameter set or picture header.
The method may include: wherein the first parameter is init_qp_minus26 and, in case dependent scalar quantization is disabled, init_qp_minus26 plus 26 specifies the initial quantization parameter value SliceQpY for each slice referring to the picture parameter set or picture header.
The method may include: wherein the first parameter is init_qp_minus26 and, in case dependent scalar quantization is enabled, init_qp_minus26 plus 27 specifies the initial quantization parameter value SliceQpY for each slice referring to the picture parameter set or picture header.
The method may include: wherein the quantization parameter is signaled by a second parameter in a picture parameter set or picture header.
The method may include: wherein the second parameter is slice_qp_delta and, in case dependent scalar quantization is disabled, slice_qp_delta is used to derive the initial quantization parameter value QpY for the coded blocks in a slice or slice group, until modified by the value of CuQpDeltaVal in the coding unit layer, wherein the initial value of QpY is set equal to SliceQpY and SliceQpY = 26 + init_qp_minus26 + slice_qp_delta.
The method may include: wherein the second parameter is slice_qp_delta and, in case dependent scalar quantization is enabled, slice_qp_delta is used to derive the initial quantization parameter value QpY for the coded blocks in a slice or slice group, until modified by the value of CuQpDeltaVal in the coding unit layer, wherein the initial value of QpY is set equal to SliceQpY and SliceQpY = 26 + init_qp_minus26 + slice_qp_delta + 1.
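The SliceQpY derivations above can be summarized in a small sketch. The function name and the boolean flag are illustrative assumptions; only the arithmetic (26 + init_qp_minus26 + slice_qp_delta, with an extra +1 when dependent scalar quantization is enabled) follows the text:

```python
def slice_qp_y(init_qp_minus26, slice_qp_delta, dep_quant_enabled):
    """Initial luma quantization parameter SliceQpY for a slice or slice group,
    before any modification by CuQpDeltaVal in the coding unit layer."""
    qp = 26 + init_qp_minus26 + slice_qp_delta
    if dep_quant_enabled:
        # Equivalently: init_qp_minus26 plus 27 specifies SliceQpY.
        qp += 1
    return qp
```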
The method may include: wherein the method is applied when it is signaled in an SPS, PPS, VPS, sequence header, picture header, slice group header, or Coding Tree Unit (CTU) group.
Fig. 14 is a flow chart of a video processing method 1400 for processing video. The method 1400 includes: in the case that dependent scalar dequantization is enabled for a current video block, determining (1405) a quantization parameter to be used in the dependent scalar dequantization of the current video block, wherein a set of allowable reconstruction values for the transform coefficients corresponding to the dependent scalar dequantization depends on at least one transform coefficient level preceding the current transform coefficient level; and performing (1410) the dependent scalar dequantization on the current video block based on the determined quantization parameter, wherein the determined quantization parameter is also used as an input parameter for video processing of the current video block other than the dependent scalar dequantization.
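For illustration only, the following sketch models dependent scalar dequantization in the style of the JVET CE7 proposal cited later in this document (Schwarz et al.): two quantizers are selected by a four-state machine driven by the parities of previously reconstructed coefficient levels, so the allowable reconstruction values for each coefficient depend on the preceding levels. The names, the state table, and the reconstruction rule are assumptions drawn from that public description, not from the methods claimed here:

```python
# Hypothetical state-transition table: from each state, the next state is
# chosen by the parity of the current absolute level (even, odd).
STATE_TABLE = [
    (0, 2),  # from state 0
    (2, 0),  # from state 1
    (1, 3),  # from state 2
    (3, 1),  # from state 3
]

def dequantize_dependent(levels, step):
    """Reconstruct signed transform coefficient levels with two quantizers.

    States 0/1 select Q0 (reconstruction magnitude 2*|level|*step); states
    2/3 select Q1 (magnitude (2*|level|-1)*step, with level 0 mapping to 0).
    Because the state advances with each level's parity, the set of
    allowable reconstruction values depends on preceding levels.
    """
    state = 0
    out = []
    for level in levels:
        offset = 1 if (state > 1 and level != 0) else 0
        mag = 2 * abs(level) - offset
        out.append(mag * step if level >= 0 else -mag * step)
        state = STATE_TABLE[state][abs(level) & 1]
    return out
```

Note how the same level value can reconstruct to different values depending on what came before: with step 1, the sequence [1, 1] reconstructs the second coefficient under Q1 rather than Q0.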
It should be appreciated that the disclosed techniques may be implemented in video encoders or decoders to improve compression efficiency when the shape of the compressed coding unit is significantly different from a conventional square block or half square rectangular block. For example, new coding tools that use long or tall coding units, such as units of size 4 x 32 or 32 x 4, may benefit from the disclosed techniques.
It should be appreciated that the disclosed techniques may be implemented in a video system comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the methods disclosed above.
The aspects, examples, embodiments, modules, and functional operations disclosed and described herein may be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed herein and structural equivalents thereof, or in combinations of one or more of them. The disclosed embodiments and other embodiments may be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer-readable medium, for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" includes all apparatuses, devices and machines for processing data, including for example a programmable processor, a computer or a multiprocessor or a group of computers. The apparatus may include, in addition to hardware, code that creates an execution environment for a computer program, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiving devices.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. The computer program does not necessarily correspond to a file in a file system. A program may be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described herein can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows may also be performed by, and apparatus may be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Typically, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks. However, a computer does not necessarily have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, such as internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM discs. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features of particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Also, although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Furthermore, the separation of various system components in the embodiments described herein should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described, and other implementations, enhancements, and variations may be made based on what is described and illustrated in this patent document.

Claims (17)

1. A video processing method, comprising:
in the case where dependent scalar quantization is enabled for a current video block, determining a quantization parameter to be used in the dependent scalar quantization of the current video block, wherein a set of allowable reconstruction values for transform coefficients corresponding to the dependent scalar quantization depends on at least one transform coefficient level preceding the current transform coefficient level; and
performing dependent scalar quantization on the current video block according to the determined quantization parameter,
wherein the determined quantization parameter is also used as an input parameter for video processing of the current video block other than the dependent scalar quantization,
wherein, in the case where dependent scalar quantization is enabled for the current video block, the determined quantization parameter is QPc+N, where QPc is the signaled quantization parameter for the current video block and N is an integer, and wherein the method further comprises:
clipping QPc+N to a threshold range before it is used.
2. The video processing method of claim 1, wherein the video processing other than the dependent scalar quantization comprises a deblocking filtering process.
3. The video processing method of claim 1, wherein the threshold range is [QPmin, QPmax], wherein QPmin and QPmax are the allowed minimum and maximum quantization parameters, respectively.
4. The video processing method of claim 1, wherein the allowed QPc range for the dependent scalar quantization is [QPmin-N, QPmax-N] in the case where the dependent scalar quantization is enabled for the current video block, wherein QPmin and QPmax are the minimum and maximum values of QPc, respectively, allowed in the case where the dependent scalar quantization is not enabled for the current video block.
5. The video processing method of claim 1, wherein, in the case where the dependent scalar quantization is enabled for the current video block, the allowed QPc range for the dependent scalar quantization is [Max(QPmin-N, QPmin), Min(QPmax-N, QPmax)], wherein QPmin and QPmax are the minimum and maximum values of QPc, respectively, allowed in the case where the dependent scalar quantization is not enabled for the current video block.
6. The video processing method of any of claims 1-5, wherein, with dependent scalar quantization enabled, initialization of context-based adaptive binary arithmetic coding (CABAC) is based on QPc+N.
7. The video processing method of any of claims 1-5, wherein a higher-level quantization parameter is assigned different semantics based on whether dependent scalar quantization is enabled.
8. The video processing method of any of claims 1-5, wherein the quantization parameter is signaled by a first parameter in a picture parameter set or picture header.
9. The video processing method of claim 8, wherein the first parameter is init_qp_minus26, and wherein, in case dependent scalar quantization is disabled, init_qp_minus26 plus 26 specifies the initial quantization parameter value SliceQpY for each slice referring to the picture parameter set or picture header.
10. The video processing method of claim 8, wherein the first parameter is init_qp_minus26, and wherein, in case dependent scalar quantization is enabled, init_qp_minus26 plus 27 specifies the initial quantization parameter value SliceQpY for each slice referring to the picture parameter set or picture header.
11. The video processing method of any of claims 1-5, wherein the quantization parameter is signaled by a second parameter in a picture parameter set or picture header.
12. The video processing method of claim 11, wherein the second parameter is slice_qp_delta and, in case dependent scalar quantization is disabled, slice_qp_delta is used to derive the initial quantization parameter value QpY for the coded blocks in a slice or slice group, until modified by the value of CuQpDeltaVal in the coding unit layer, wherein the initial value of QpY is set equal to SliceQpY, and SliceQpY = 26 + init_qp_minus26 + slice_qp_delta.
13. The video processing method of claim 11, wherein the second parameter is slice_qp_delta and, in case dependent scalar quantization is enabled, slice_qp_delta is used to derive the initial quantization parameter value QpY for the coded blocks in a slice or slice group, until modified by the value of CuQpDeltaVal in the coding unit layer, wherein the initial value of QpY is set equal to SliceQpY, and SliceQpY = 26 + init_qp_minus26 + slice_qp_delta + 1.
14. The video processing method of any of claims 1-5, 9-10, 12-13, wherein the method is applied when the method is signaled in an SPS, PPS, VPS, sequence header, picture header, slice group header, or Coding Tree Unit (CTU) group.
15. A video processing method, comprising:
in the case where dependent scalar dequantization is enabled for a current video block, determining a quantization parameter to be used in the dependent scalar dequantization of the current video block, wherein a set of allowable reconstruction values for the transform coefficients corresponding to the dependent scalar dequantization depends on at least one transform coefficient level preceding the current transform coefficient level; and
performing dependent scalar dequantization on the current video block based on the determined quantization parameter,
wherein the determined quantization parameter is also used as an input parameter for video processing of the current video block other than the dependent scalar dequantization,
wherein, in the case where dependent scalar dequantization is enabled for the current video block, the determined quantization parameter is QPc+N, where QPc is a signaled quantization parameter of the current video block and N is an integer, and wherein the method further comprises:
clipping QPc+N to a threshold range before it is used.
16. An apparatus in a video system, the apparatus comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the method of any of claims 1-15.
17. A non-transitory computer readable medium, wherein the non-transitory computer readable medium has stored therein program code, which when executed, is for implementing the method according to any of claims 1 to 15.
CN201911055394.XA 2018-10-31 2019-10-31 Quantization parameters under a coding tool for dependent quantization Active CN111131819B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2018112945 2018-10-31
CNPCT/CN2018/112945 2018-10-31

Publications (2)

Publication Number Publication Date
CN111131819A CN111131819A (en) 2020-05-08
CN111131819B true CN111131819B (en) 2023-05-09

Family

ID=68470587

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201911055394.XA Active CN111131819B (en) 2018-10-31 2019-10-31 Quantization parameters under a coding tool for dependent quantization
CN201911056351.3A Active CN111131821B (en) 2018-10-31 2019-10-31 Deblocking filtering under dependency quantization

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201911056351.3A Active CN111131821B (en) 2018-10-31 2019-10-31 Deblocking filtering under dependency quantization

Country Status (2)

Country Link
CN (2) CN111131819B (en)
WO (2) WO2020089825A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114727109B (en) * 2021-01-05 2023-03-24 腾讯科技(深圳)有限公司 Multimedia quantization processing method and device and coding and decoding equipment
WO2022165763A1 (en) * 2021-02-05 2022-08-11 Oppo广东移动通信有限公司 Encoding method, decoding method, encoder, decoder and electronic device
WO2022174475A1 (en) * 2021-02-22 2022-08-25 浙江大学 Video encoding method and system, video decoding method and system, video encoder, and video decoder
WO2022206987A1 (en) * 2021-04-02 2022-10-06 Beijing Bytedance Network Technology Co., Ltd. Adaptive dependent quantization
WO2022257142A1 (en) * 2021-06-11 2022-12-15 Oppo广东移动通信有限公司 Video decoding and coding method, device and storage medium
WO2023004590A1 (en) * 2021-07-27 2023-02-02 Oppo广东移动通信有限公司 Video decoding and encoding methods and devices, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010077325A2 (en) * 2008-12-29 2010-07-08 Thomson Licensing Method and apparatus for adaptive quantization of subband/wavelet coefficients
CN107431814A (en) * 2015-01-08 2017-12-01 微软技术许可有限责任公司 The change of ρ domains speed control
HK1246020A1 (en) * 2012-01-20 2018-08-31 Ge Video Compression Llc Apparatus for decoding a plurality of transform coefficients having transform coefficient levels from a data stream

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9185404B2 (en) * 2011-10-07 2015-11-10 Qualcomm Incorporated Performing transform dependent de-blocking filtering
US9161046B2 (en) * 2011-10-25 2015-10-13 Qualcomm Incorporated Determining quantization parameters for deblocking filtering for video coding
US9344723B2 (en) * 2012-04-13 2016-05-17 Qualcomm Incorporated Beta offset control for deblocking filters in video coding
WO2013162441A1 (en) * 2012-04-25 2013-10-31 Telefonaktiebolaget L M Ericsson (Publ) Deblocking filtering control
US20140079135A1 (en) * 2012-09-14 2014-03-20 Qualcomm Incoporated Performing quantization to facilitate deblocking filtering
CN103491373B (en) * 2013-09-06 2018-04-27 复旦大学 A kind of level Four flowing water filtering method of deblocking filter suitable for HEVC standard


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CE7: Transform coefficient coding and dependent quantization (Tests 7.1.2, 7.2.1); SCHWARZ (FRAUNHOFER) ET AL; 11th JVET Meeting (The Joint Video Exploration Team of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16); 2018-07-18; Abstract, Sections 2-3, Figures 2-3 *

Also Published As

Publication number Publication date
WO2020089825A1 (en) 2020-05-07
CN111131821B (en) 2023-05-09
CN111131819A (en) 2020-05-08
CN111131821A (en) 2020-05-08
WO2020089824A1 (en) 2020-05-07

Similar Documents

Publication Publication Date Title
CN111131819B (en) Quantization parameters under a coding tool for dependent quantization
EP4087247A1 (en) Luminance based coding tools for video compression
CN115244924A (en) Signaling across component adaptive loop filters
CN113826383B (en) Block dimension setting for transform skip mode
CN114586370A (en) Use of chrominance quantization parameters in video coding and decoding
WO2020039364A1 (en) Reduced window size for bilateral filter
WO2020151753A1 (en) Method and apparatus of transform coefficient coding with tb-level constraint
CN114145018A (en) Quantization of palette modes
KR102294016B1 (en) Video encoding and decoding method using deblocking fitering with transform skip and apparatus using the same
CN113728627A (en) Prediction of parameters for in-loop reconstruction
CN113826398B (en) Interaction between transform skip mode and other codec tools
CN114930818A (en) Bitstream syntax for chroma coding and decoding
JP2023153169A (en) High precision conversion and quantization for image and video coding
CN113853787A (en) Transform skip mode based on sub-block usage
JP7490803B2 (en) Video Processing Using Syntax Elements
JP2022545276A (en) Deblocking filtering at coding block or sub-block boundaries
WO2021136470A1 (en) Clustering based palette mode for video coding
WO2020221213A1 (en) Intra sub-block partitioning and multiple transform selection
US20240022721A1 (en) Constraints on partitioning of video blocks
CN117769833A (en) Adaptive bilateral filter in video encoding and decoding
CN117256140A (en) Use of steering filters
CN117716690A (en) Conditions of use of adaptive bilateral filters
CN118120232A (en) Bilateral filtering in video encoding and decoding
CN118077197A (en) Combining deblocking filtering and another filtering for video encoding and/or decoding
CN116965035A (en) Transformation on non-binary blocks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant