CN111131819A - Quantization parameter under coding tool of dependent quantization - Google Patents

Quantization parameter under coding tool of dependent quantization

Info

Publication number
CN111131819A
CN111131819A (application CN201911055394.XA)
Authority
CN
China
Prior art keywords
quantization
dependency
parameter
scalar
slice
Prior art date
Legal status
Granted
Application number
CN201911055394.XA
Other languages
Chinese (zh)
Other versions
CN111131819B (en)
Inventor
刘鸿彬
张莉
张凯
王悦
Current Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Original Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd, ByteDance Inc filed Critical Beijing ByteDance Network Technology Co Ltd
Publication of CN111131819A publication Critical patent/CN111131819A/en
Application granted granted Critical
Publication of CN111131819B publication Critical patent/CN111131819B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117Filters, e.g. for pre-processing or post-processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96Tree coding, e.g. quad-tree coding

Abstract

A video processing method, apparatus and computer program product are disclosed. The video processing method includes: in the case that dependent scalar quantization is enabled for a current video block, determining a quantization parameter to be used in the dependent scalar quantization of the current video block, where a set of allowable reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level preceding the current transform coefficient level; and performing the dependent scalar quantization on the current video block based on the determined quantization parameter, wherein the determined quantization parameter is further applied to a video process of the current video block, other than the dependent scalar quantization, that uses the quantization parameter as an input parameter.

Description

Quantization parameter under coding tool of dependent quantization
The present application timely claims the priority and benefit of international patent application No. PCT/CN2018/112945, filed on October 31, 2018, in accordance with applicable patent law and/or the provisions of the Paris Convention. The entire disclosure of international patent application No. PCT/CN2018/112945 is incorporated herein by reference as part of the present disclosure.
Technical Field
This document relates to video and image coding techniques.
Background
Digital video accounts for the largest bandwidth usage on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, the bandwidth demand for digital video usage is expected to continue to grow.
Disclosure of Invention
The disclosed techniques may be used by video or image decoder or encoder embodiments in which quantization parameters are handled under the dependent quantization coding tool.
In one exemplary aspect, a method of processing video is disclosed. The method includes: performing, by a processor, a determination to process a first video block using dependent scalar quantization; determining, by the processor, a first quantization parameter (QP) to be used for deblocking filtering of the first video block based on the determination to process the first video block using dependent scalar quantization; and performing further processing on the first video block using the deblocking filtering in accordance with the first QP.
In another exemplary aspect, a video processing method is disclosed. The method includes: determining one or more deblocking filter parameters to be used in a deblocking filter process of a current video block based on whether the current video block is processed using dependent scalar quantization, wherein a set of allowable reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level prior to the current transform coefficient level; and performing the deblocking filter process on the current video block according to the one or more deblocking filter parameters.
In another exemplary aspect, a video processing method is disclosed. The method includes: determining whether to apply a deblocking filter process based on whether a current video block is processed using dependent scalar quantization, wherein a set of allowable reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level prior to the current transform coefficient level; and performing the deblocking filter process on the current video block based on the determination to use the deblocking filter process.
In another exemplary aspect, a video processing method is disclosed. The method includes: in the case that dependent scalar quantization is enabled for a current video block, determining a quantization parameter to be used in the dependent scalar quantization of the current video block, where a set of allowable reconstruction values for a transform coefficient corresponding to the dependent scalar quantization depends on at least one transform coefficient level preceding the current transform coefficient level; and performing the dependent scalar quantization on the current video block based on the determined quantization parameter, wherein the determined quantization parameter is further applied to a video process of the current video block, other than the dependent scalar quantization, that uses the quantization parameter as an input parameter.
In another exemplary aspect, a video processing method is disclosed. The method includes: in the case that dependent scalar inverse quantization is enabled for a current video block, determining a quantization parameter to be used in the dependent scalar inverse quantization of the current video block, where a set of allowable reconstruction values for a transform coefficient corresponding to the dependent scalar inverse quantization depends on at least one transform coefficient level prior to the current transform coefficient level; and performing the dependent scalar inverse quantization on the current video block based on the determined quantization parameter, wherein the determined quantization parameter is further applied to a video process of the current video block, other than the dependent scalar inverse quantization, that uses the quantization parameter as an input parameter.
In another exemplary aspect, the above method may be implemented by a video decoder apparatus comprising a processor.
In another exemplary aspect, the above method may be implemented by a video encoder apparatus comprising a processor.
In another exemplary aspect, the above method may be implemented by an apparatus in a video system, the apparatus comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the above method.
In yet another example aspect, the methods may be embodied in the form of processor-executable instructions and stored on a computer-readable program medium.
These and other aspects are further described in this document.
Drawings
Fig. 1 shows an example of the two scalar quantizers used in dependent quantization.
Fig. 2 shows an example of state transitions and quantizer selection for dependent quantization.
Fig. 3 shows an example of the overall process flow of the deblocking filtering process.
Fig. 4 shows an example of a flow chart of Bs calculation.
Fig. 5 shows an example of reference information for Bs calculation at a Coding Tree Unit (CTU) boundary.
Fig. 6 shows an example of the pixels involved in the filter on/off decision and strong/weak filter selection.
Fig. 7 shows an example of deblocking behavior for the 4:2:2 chroma format.
Fig. 8 is a block diagram of an example of a video processing apparatus.
Fig. 9 shows a block diagram of an example implementation of a video encoder.
Fig. 10 is a flowchart of an example of a video processing method.
Fig. 11 is a flowchart of an example of a video processing method.
Fig. 12 is a flowchart of an example of a video processing method.
Fig. 13 is a flowchart of an example of a video processing method.
Fig. 14 is a flowchart of an example of a video processing method.
Detailed Description
This document provides various techniques that may be used by a decoder of an image or video bitstream to improve the quality of decompressed or decoded digital video or pictures. For simplicity, the term "video" is used herein to include both a sequence of pictures (conventionally referred to as video) and individual images. In addition, the video encoder may also implement these techniques during the encoding process in order to reconstruct the decoded frames for further encoding.
The section headings are used in this document for ease of understanding and do not limit the embodiments and techniques to the corresponding sections. Thus, embodiments of one section may be combined with embodiments of other sections.
1. Summary of the invention
This patent document relates to video coding techniques, and in particular to the use of quantization parameters when dependent quantization is used. The techniques may be applied to existing video coding standards, such as HEVC, or to the standard to be finalized (Versatile Video Coding, VVC). They may also be applicable to future video coding standards or video codecs.
2. Background of the invention
Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/HEVC standards. Since H.262, the video coding standards have been based on a hybrid video coding structure, in which temporal prediction plus transform coding is employed. To explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded jointly by VCEG and MPEG in 2015. Since then, JVET has adopted many new methods and incorporated them into reference software named the Joint Exploration Model (JEM). In April 2018, the Joint Video Experts Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC standard, targeting a 50% bit-rate reduction compared to HEVC.
Fig. 9 is a block diagram of an example implementation of a video encoder. Fig. 9 shows an encoder implementation with a built-in feedback path, where the video encoder also performs the video decoding function (reconstructing a compressed representation of the video data for encoding of the next video data).
2.1 Dependent scalar quantization
Dependent scalar quantization is proposed. It refers to an approach in which the set of allowable reconstruction values for a transform coefficient depends on the values of the transform coefficient levels that precede the current transform coefficient level in reconstruction order. The main effect of this approach is that, compared with conventional non-dependent scalar quantization (as used in HEVC and VTM-1), the allowable reconstruction vectors (given by all reconstructed transform coefficients of a transform block) are packed more densely in the N-dimensional vector space (N denotes the number of transform coefficients in the transform block). This means that, for a given average number of allowable reconstruction vectors per N-dimensional unit volume, the average distance (or MSE distortion) between an input vector and the closest reconstruction vector is reduced (for typical distributions of input vectors). Ultimately, this effect can lead to an improvement in rate-distortion efficiency.
Dependent scalar quantization is realized by: (a) defining two scalar quantizers with different reconstruction levels, and (b) defining a process for switching between the two scalar quantizers.
Fig. 1 is a diagram of two scalar quantizers used in the proposed dependency quantization method.
The two scalar quantizers used, denoted Q0 and Q1, are shown in Fig. 1. The position of the available reconstruction levels is uniquely specified by the quantization step size Δ. Ignoring the fact that the actual reconstruction of the transform coefficients uses integer arithmetic, the two scalar quantizers Q0 and Q1 are characterized as follows:
Q0: The reconstruction levels of the first quantizer Q0 are given by the even integer multiples of the quantization step size Δ. When this quantizer is used, a reconstructed transform coefficient t′ is calculated according to
t′ = 2·k·Δ,
where k denotes the associated transform coefficient level (transmitted quantization index).
Q1: The reconstruction levels of the second quantizer Q1 are given by the odd integer multiples of the quantization step size Δ, plus a reconstruction level equal to zero. The mapping of a transform coefficient level k to a reconstructed transform coefficient t′ is specified by
t′ = (2·k − sgn(k))·Δ,
where sgn(·) denotes the sign function
sgn(x) = (x == 0 ? 0 : (x < 0 ? −1 : 1)).
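For illustration only (this code is not part of the original specification text), a minimal C sketch of the two reconstruction rules above, ignoring the integer arithmetic of the actual design as noted; the function names are chosen for this example:

/* Q0: reconstruction levels are the even integer multiples of delta. */
static double reconstruct_q0(int k, double delta) {
    return 2.0 * k * delta;
}

/* sgn(x) as defined above. */
static int sgn(int x) {
    return (x == 0) ? 0 : ((x < 0) ? -1 : 1);
}

/* Q1: reconstruction levels are the odd integer multiples of delta,
 * plus the reconstruction level equal to zero. */
static double reconstruct_q1(int k, double delta) {
    return (2.0 * k - sgn(k)) * delta;
}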
The scalar quantizer used (Q0 or Q1) is not explicitly signaled in the bitstream. Instead, the quantizer used for a current transform coefficient is determined by the parities of the transform coefficient levels that precede the current transform coefficient in coding/reconstruction order.
Fig. 2 is an example of proposed state transitions and quantizer selection for dependent quantization.
As shown in Fig. 2, switching between the two scalar quantizers (Q0 and Q1) is realized via a state machine with four states. The state can take four different values: 0, 1, 2 and 3. It is determined by the parities of the transform coefficient levels preceding the current transform coefficient in coding/reconstruction order. At the start of the inverse quantization of a transform block, the state is set equal to 0. The transform coefficients are reconstructed in scanning order (i.e., in the same order in which they are entropy decoded). After a current transform coefficient is reconstructed, the state is updated as shown in Fig. 2, where k denotes the value of the transform coefficient level. Note that the next state depends only on the current state and the parity (k & 1) of the current transform coefficient level k. With k denoting the value of the current transform coefficient level, the state update can be written as
state = stateTransTable[ state ][ k & 1 ],
where stateTransTable represents the table shown in Fig. 2 and the operator & specifies the bitwise AND operation in two's-complement arithmetic. Alternatively, the state transition can also be specified without a table lookup as follows:
state = ( 32040 >> ( ( state << 2 ) + ( ( k & 1 ) << 1 ) ) ) & 3
Here, the 16-bit value 32040 specifies the state transition table.
The state uniquely specifies the scalar quantizer used. If the state for a current transform coefficient is equal to 0 or 1, the scalar quantizer Q0 is used. Otherwise (the state is equal to 2 or 3), the scalar quantizer Q1 is used.
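For illustration, a C sketch of the state update; the table values below are obtained by decoding the 16-bit constant 32040 given above, and both update variants are equivalent:

#include <assert.h>

/* Next-state table decoded from the constant 32040, indexed by
 * [current state][parity of the level k]. */
static const int stateTransTable[4][2] = { {0, 2}, {2, 0}, {1, 3}, {3, 1} };

static int next_state_table(int state, int k) {
    return stateTransTable[state][k & 1];
}

/* Table-free variant, exactly as in the text above. */
static int next_state_packed(int state, int k) {
    return (32040 >> ((state << 2) + ((k & 1) << 1))) & 3;
}

/* Q0 is used for states 0 and 1, Q1 for states 2 and 3. */
static int uses_q1(int state) {
    return state >= 2;
}

static void check_equivalence(void) {
    for (int s = 0; s < 4; s++)
        for (int parity = 0; parity < 2; parity++)
            assert(next_state_table(s, parity) == next_state_packed(s, parity));
}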
The detailed scaling process is described as follows.
7.3.4.9 residual coding syntax
(The residual coding syntax table is reproduced in the original publication as a series of images and is not transcribed here.)
8.4.3 scaling of transform coefficients
The inputs to this process are:
a luma location (xTbY, yTbY) specifying an upper left sample of the current luma transform block relative to an upper left luma sample of the current picture,
a variable nTbW, which specifies the transform block width,
a variable nTbH that specifies the transform block height,
a variable cIdx specifying the color component of the current block,
a variable bitDepth that specifies the bit depth of the current color component.
The output of this process is the (nTbW)x(nTbH) array d of scaled transform coefficients, with elements d[x][y].
The quantization parameter qP is derived as follows:
if cIdx is equal to 0, then the following equation applies:
qP=Qp′Y (8-383)
otherwise, if cIdx is equal to 1, the following equation applies:
qP=Qp′Cb (8-384)
otherwise (cIdx equals 2), the following applies:
qP=Qp′Cr (8-385)
the variables bdShift, rectNorm, and bdOffset are derived as follows:
bdShift = bitDepth + ( ( ( Log2( nTbW ) + Log2( nTbH ) ) & 1 ) * 8 + ( Log2( nTbW ) + Log2( nTbH ) ) / 2 ) − 5 + dep_quant_enabled_flag (8-386)
rectNorm=((Log2(nTbW)+Log2(nTbH))&1)==1?181:1 (8-387)
bdOffset=(1<<bdShift)>>1 (8-388)
the list levelScale [ ] is designated levelScale [ k ] = {40,45,51,57,64,72}, where k ═ 0.. 5.
For the derivation of the scaling transform coefficients d [ x ] [ y ] for x ═ 0.. nTbW-1, y ═ 0.. nTbH-1, the following formula applies:
the intermediate scaling factor m x y is set equal to 16.
The scaling factor ls [ x ] [ y ] is derived as follows:
if dep _ quant _ enabled _ flag is equal to 1, the following applies:
ls[x][y]=(m[x][y]*levelScale[(qP+1)%6])<<((qP+1)/6) (8-389)
else (dep _ quant _ enabled _ flag equal to 0), the following applies:
ls[x][y]=(m[x][y]*levelScale[qP%6])<<(qP/6) (8-390)
the value dnc [ x ] [ y ] is derived as follows:
dnc[x][y]=(TransCoeffLevel[xTbY][yTbY][cIdx][x][y]*ls[x][y]*rectNorm+bdOffset)>>bdShift (8-391)
the derivation of the scaled transform coefficients d [ x ] [ y ] is as follows:
d[x][y]=Clip3(CoeffMin,CoeffMax,dnc[x][y]) (8-392)
Assuming that the quantization parameter (QP) QPc is the QP used by the current CU, in dependent quantization the quantization is actually performed with QPc + 1, according to Equation 8-389. However, if dependent quantization is not used, QPc is used for the quantization.
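For illustration, a minimal C sketch of the scaling-factor derivation above, showing how dependent quantization effectively shifts the QP by one (Equations 8-389/8-390); this is a simplified excerpt, not the full scaling process:

static const int levelScale[6] = {40, 45, 51, 57, 64, 72};

/* ls = (m * levelScale[q % 6]) << (q / 6), with q = qP + 1 when
 * dep_quant_enabled_flag is 1 and q = qP otherwise; m is the
 * intermediate scaling factor (16 in this process). */
static int scale_factor(int qP, int dep_quant_enabled_flag, int m) {
    int q = qP + (dep_quant_enabled_flag ? 1 : 0);
    return (m * levelScale[q % 6]) << (q / 6);
}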
2.2 deblocking Filtering
The deblocking filter process is performed on each CU in the same order as the decoding process. Vertical edges are filtered first (horizontal filtering), then horizontal edges are filtered (vertical filtering). For both the luma and chroma components, filtering is applied to the 8x8 block boundaries that are determined to be filtered. 4x4 block boundaries are not processed, in order to reduce complexity.
Fig. 3 shows the overall flow of the deblocking filter process. A boundary can have three filtering states: no filtering, weak filtering, and strong filtering. Each filtering decision is based on the boundary strength Bs and the thresholds β and tC.
Fig. 3 is an example of the overall process flow of the deblocking filtering process.
2.2.1 boundary decision
The deblocking filtering process involves two kinds of boundaries: TU boundaries and PU boundaries. CU boundaries are also considered, since CU boundaries are also necessarily TU and PU boundaries. When the PU shape is 2NxN (N >4) and the RQT depth is equal to 1, the filtering also involves TU boundaries at the 8x8 block grid and PU boundaries between each PU inside the CU.
2.2.2 boundary Strength calculation
The boundary strength (Bs) reflects how strongly the boundary may need to be filtered. A Bs value of 2 indicates strong filtering, 1 indicates weak filtering, and 0 indicates no deblocking filtering.
Let P and Q be defined as the blocks involved in the filtering, where P represents the block to the left of (vertical edge case) or above (horizontal edge case) the boundary, and Q represents the block to the right of or below the boundary. Fig. 4 illustrates how the Bs value is calculated based on the intra coding mode, the presence of non-zero transform coefficients, the reference pictures, the number of motion vectors, and the motion vector difference.
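Fig. 4 itself is not reproduced here; the C sketch below paraphrases the HEVC-style criteria just listed (intra mode, non-zero coefficients, reference pictures, motion vector count and difference). It is a simplification assuming one motion vector per block, and the structure fields are hypothetical:

#include <stdlib.h>

typedef struct {
    int isIntra;         /* block is intra coded */
    int hasNonzeroCoeff; /* non-zero transform coefficients (TU boundary) */
    int refPic;          /* hypothetical reference picture id */
    int numMv;           /* number of motion vectors */
    int mvX, mvY;        /* motion vector in quarter-pel units */
} BlockInfo;

/* Returns Bs: 2 (strong), 1 (weak) or 0 (no deblocking). */
static int boundary_strength(const BlockInfo *p, const BlockInfo *q) {
    if (p->isIntra || q->isIntra)
        return 2;
    if (p->hasNonzeroCoeff || q->hasNonzeroCoeff)
        return 1;
    if (p->refPic != q->refPic || p->numMv != q->numMv)
        return 1;
    /* MV difference of one integer sample (4 quarter-pel units) or more. */
    if (abs(p->mvX - q->mvX) >= 4 || abs(p->mvY - q->mvY) >= 4)
        return 1;
    return 0;
}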
At the CTU boundary, as shown in fig. 5, information about every other block (on the 4 x 4 grid) on the left or top is reused in order to reduce line buffer memory requirements. Fig. 5 is an example of reference information for Bs calculation at a CTU boundary.
2.2.3 Threshold variables
The filter on/off decision, the strong/weak filter selection and the weak filtering process involve the thresholds β′ and tC′. These are derived from the value of the luma quantization parameter Q, as shown in Table 2-1. The derivation of Q is described in section 2.2.3.1.
Q     0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18
β′    0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  6  7  8
tC′   0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1
Q    19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37
β′    9 10 11 12 13 14 15 16 17 18 20 22 24 26 28 30 32 34 36
tC′   1  1  1  1  1  1  1  1  2  2  2  2  3  3  3  3  4  4  4
Q    38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
β′   38 40 42 44 46 48 50 52 54 56 58 60 62 64  -  -
tC′   5  5  6  6  7  8  9 10 11 13 14 16 18 20 22 24
The variable β is derived from β′ as follows:
β = β′ * ( 1 << ( BitDepthY − 8 ) )
The variable tC is derived from tC′ as follows:
tC = tC′ * ( 1 << ( BitDepthY − 8 ) )
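For illustration, Table 2-1 and the two scaling formulas can be transcribed directly into C (the '-' entries of the β′ row are omitted from the array):

/* Table 2-1 transcribed as lookup arrays: beta' indexed by Q = 0..51,
 * tC' indexed by Q = 0..53. */
static const int betaPrimeTab[52] = {
     0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
     6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 22, 24,
    26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56,
    58, 60, 62, 64
};
static const int tcPrimeTab[54] = {
     0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
     0,  0,  1,  1,  1,  1,  1,  1,  1,  1,  1,  2,  2,  2,  2,  3,
     3,  3,  3,  4,  4,  4,  5,  5,  6,  6,  7,  8,  9, 10, 11, 13,
    14, 16, 18, 20, 22, 24
};

/* beta and tC are the table values scaled to the luma bit depth. */
static int derive_beta(int Q, int bitDepthY) {
    return betaPrimeTab[Q] * (1 << (bitDepthY - 8));
}
static int derive_tc(int Q, int bitDepthY) {
    return tcPrimeTab[Q] * (1 << (bitDepthY - 8));
}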
how to derive t is described as followsC'and β'.
2.2.3.1tC'and β'
Described in subsection 8.7.2.5.3 for tCThe decoding process of the HEVC design of 'and β'.
8.7.2.5.3 decision processing of luminance block edges
The inputs to this process are:
-luminance image sample array recPictureL
-a luma location (xCb, yCb) specifying an upper left corner sample of the current luma coding block relative to an upper left corner luma sample of the current picture,
-a luma location (xBl, yBl) specifying an upper left corner sample of the current luma block relative to an upper left corner sample of the current luma coding block,
a variable edgeType specifying whether vertical (EDGE _ VER) or horizontal (EDGE _ HOR) EDGEs are to be filtered,
a variable bS, which specifies the boundary filtering strength.
The output of this process is:
variables dE, dEp and dEq containing the decision,
variables β and tC
If edgeType is equal to EDGE_VER, the sample values pi,k and qi,k (with i = 0..3 and k = 0 and 3) are derived as follows:
qi,k = recPictureL[ xCb + xBl + i ][ yCb + yBl + k ] (8-284)
pi,k = recPictureL[ xCb + xBl − i − 1 ][ yCb + yBl + k ] (8-285)
Otherwise (edgeType is equal to EDGE_HOR), the sample values pi,k and qi,k (with i = 0..3 and k = 0 and 3) are derived as follows:
qi,k = recPictureL[ xCb + xBl + k ][ yCb + yBl + i ] (8-286)
pi,k = recPictureL[ xCb + xBl + k ][ yCb + yBl − i − 1 ] (8-287)
The variables QpQ and QpP are set equal to the QpY values of the coding units that include the coding blocks containing the samples q0,0 and p0,0, respectively.
The variable qPL is derived as follows:
qPL=((QpQ+QpP+1)>>1) (8-288)
as specified in tables 8-11, the value of the variable β' is determined based on the luminance quantization parameter Q, derived as follows:
Q=Clip3(0,51,qPL+(slice_beta_offset_div2<<1)) (8-289)
where slice _ beta _ offset _ div2 is the value of the syntax element slice _ beta _ offset _ div2 of the slice containing sample q0, 0.
The variable β is derived as follows:
β=β′*(1<<(BitDepthY-8)) (8-290)
as specified in tables 8-11, the variable t is determined based on the luminance quantization parameter QCThe value of' is derived as follows:
Q=Clip3(0,53,qPL+2*(bS-1)+(slice_tc_offset_div2<<1)) (8-291)
where slice _ tc _ offset _ div2 is the value of the syntax element slice _ tc _ offset _ div2 of the slice containing sample q0, 0.
Variable tCThe derivation of (c) is as follows:
tC=tC′*(1<<(BitDepthY-8) (8-292)
depending on the value of edgeType, the following formula applies:
-if edgeType is equal to EDGE _ VER, applying the following ordered steps:
the variables dpq0, dpq3, dp, dq and d are derived as follows:
dp0=Abs(p2,0-2*p1,0+p0,0) (8-293)
dp3=Abs(p2,3-2*p1,3+p0,3) (8-294)
dq0=Abs(q2,0-2*q1,0+q0,0) (8-295)
dq3=Abs(q2,3-2*q1,3+q0,3) (8-296)
dpq0=dp0+dq0 (8-297)
dpq3=dp3+dq3 (8-298)
dp=dp0+dp3 (8-299)
dq=dq0+dq3 (8-300)
d=dpq0+dpq3 (8-301)
the variables dE, dEp and dEq are set equal to 0.
When d is less than β, the following ordered steps apply:
the variable dpq is set equal to 2 × dpq 0.
For the sample location ( xCb + xBl, yCb + yBl ), the decision process for luma samples specified in section 8.7.2.5.6 is invoked with the sample values pi,0, qi,0 (with i = 0..3) and the variables dpq, β and tC as inputs, and the output is assigned to the decision dSam0.
The variable dpq is set equal to 2 * dpq3.
For the sample location ( xCb + xBl, yCb + yBl + 3 ), the decision process for luma samples specified in section 8.7.2.5.6 is invoked with the sample values pi,3, qi,3 (with i = 0..3) and the variables dpq, β and tC as inputs, and the output is assigned to the decision dSam3.
The variable dE is set equal to 1.
When dSam0 equals 1 and dSam3 equals 1, the variable dE is set equal to 2.
When dp is less than (β + (β > >1)) > >3, variable dEp is set equal to 1.
When dq is less than (β + (β > >1)) > >3, the variable dEq is set equal to 1.
Else (edgeType equals EDGE _ HOR), the following ordered steps apply:
the variables dpq0, dpq3, dp, dq and d are derived as follows:
dp0=Abs(p2,0-2*p1,0+p0,0) (8-302)
dp3=Abs(p2,3-2*p1,3+p0,3) (8-303)
dq0=Abs(q2,0-2*q1,0+q0,0) (8-304)
dq3=Abs(q2,3-2*q1,3+q0,3) (8-305)
dpq0=dp0+dq0 (8-306)
dpq3=dp3+dq3 (8-307)
dp=dp0+dp3 (8-308)
dq=dq0+dq3 (8-309)
d=dpq0+dpq3 (8-310)
the variables dE, dEp and dEq are set equal to 0.
When d is less than β, the following ordered steps apply:
the variable dpq is set equal to 2 × dpq 0.
For the sample location ( xCb + xBl, yCb + yBl ), the decision process for luma samples specified in section 8.7.2.5.6 is invoked with the sample values p0,0, p3,0, q0,0 and q3,0 and the variables dpq, β and tC as inputs, and the output is assigned to the decision dSam0.
The variable dpq is set equal to 2 * dpq3.
For the sample location ( xCb + xBl + 3, yCb + yBl ), the decision process for luma samples specified in section 8.7.2.5.6 is invoked with the sample values p0,3, p3,3, q0,3 and q3,3 and the variables dpq, β and tC as inputs, and the output is assigned to the decision dSam3.
The variable dE is set equal to 1.
When dSam0 equals 1 and dSam3 equals 1, the variable dE is set equal to 2.
When dp is less than (β + (β > >1)) > >3, variable dEp is set equal to 1.
When dq is less than (β + (β > >1)) > >3, the variable dEq is set equal to 1.
Table 8-11 below shows the derivation of the threshold variables β′ and tC′ from the input Q.
Q     0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18
β′    0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  6  7  8
tC′   0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1
Q    19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37
β′    9 10 11 12 13 14 15 16 17 18 20 22 24 26 28 30 32 34 36
tC′   1  1  1  1  1  1  1  1  2  2  2  2  3  3  3  3  4  4  4
Q    38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
β′   38 40 42 44 46 48 50 52 54 56 58 60 62 64  -  -
tC′   5  5  6  6  7  8  9 10 11 13 14 16 18 20 22 24
2.2.4 Filter on/off decision for 4 rows
The filter on/off decision is made using 4 rows grouped into one unit to reduce computational complexity. Fig. 6 illustrates the pixels involved in this decision. The 6 pixels in the two red boxes of the first 4 lines are used to determine whether the filtering of these 4 lines is on or off. The 6 pixels in the two red boxes of the second set of 4 lines are used to determine whether the filtering of the second set of 4 lines is on or off.
Fig. 6 shows an example of the pixels involved in the on/off decision and strong/weak filtering selection.
The following variables are defined:
dp0=|p2,0-2*p1,0+p0,0|
dp3=|p2,3-2*p1,3+p0,3|
dq0=|q2,0-2*q1,0+q0,0|
dq3=|q2,3-2*q1,3+q0,3|
if dp0+ dq0+ dp3+ dq3< β, then the first four rows of filtering are turned on and the strong/weak filtering selection process is applied.
Further, if the condition is satisfied, the variables dE, dEp1, and dEp2 are set as follows:
dE is set equal to 1
If dp0+ dp3< (β + (β > >1)) > >3, then the variable dEp1 is set equal to 1
If dq0+ dq3< (β + (β > >1)) > >3, the variable dEq1 is set equal to 1
In a similar manner as described above, filtering on/off decisions are made for the second set of 4 rows.
2.2.5 Strong/Weak Filter selection for 4 rows
If filtering is turned on, a decision is made between strong filtering and weak filtering. The pixels involved are the same as those used for the filtering on/off decision. The first 4 rows are filtered using strong filtering if the following two sets of conditions are met. Otherwise, weak filtering is used.
1) 2 * ( dp0 + dq0 ) < ( β >> 2 ), | p3,0 − p0,0 | + | q0,0 − q3,0 | < ( β >> 3 ) and | p0,0 − q0,0 | < ( 5 * tC + 1 ) >> 1
2) 2 * ( dp3 + dq3 ) < ( β >> 2 ), | p3,3 − p0,3 | + | q0,3 − q3,3 | < ( β >> 3 ) and | p0,3 − q0,3 | < ( 5 * tC + 1 ) >> 1
In a similar manner, a decision is made to select whether strong filtering or weak filtering is performed for the second set of 4 rows.
2.2.6 Strong Filtering
For strong filtering, the filtered pixel value is obtained by the following formula. Note that for each P and Q block, three pixels are modified using four pixels as inputs, respectively.
p0’=(p2+2*p1+2*p0+2*q0+q1+4)>>3
q0’=(p1+2*p0+2*q0+2*q1+q2+4)>>3
p1’=(p2+p1+p0+q0+2)>>2
q1’=(p0+q0+q1+q2+2)>>2
p2’=(2*p3+3*p2+p1+p0+q0+4)>>3
q2’=(p0+q0+q1+3*q2+2*q3+4)>>3
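For illustration, a direct C transcription of the strong filtering formulas above; per side, three output pixels are computed from four input pixels (any clipping applied elsewhere in the actual design is omitted here):

/* p[0..3] are p0..p3, q[0..3] are q0..q3; pf/qf receive p0'..p2' and
 * q0'..q2' as given by the formulas above. */
static void strong_filter_luma(const int p[4], const int q[4],
                               int pf[3], int qf[3]) {
    pf[0] = (p[2] + 2 * p[1] + 2 * p[0] + 2 * q[0] + q[1] + 4) >> 3;
    qf[0] = (p[1] + 2 * p[0] + 2 * q[0] + 2 * q[1] + q[2] + 4) >> 3;
    pf[1] = (p[2] + p[1] + p[0] + q[0] + 2) >> 2;
    qf[1] = (p[0] + q[0] + q[1] + q[2] + 2) >> 2;
    pf[2] = (2 * p[3] + 3 * p[2] + p[1] + p[0] + q[0] + 4) >> 3;
    qf[2] = (p[0] + q[0] + q[1] + 3 * q[2] + 2 * q[3] + 4) >> 3;
}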
2.2.7 Weak Filtering
Δ is defined as follows:
Δ = ( 9 * ( q0 − p0 ) − 3 * ( q1 − p1 ) + 8 ) >> 4
When abs(Δ) is less than tC * 10:
Δ = Clip3( −tC, tC, Δ )
p0′ = Clip1Y( p0 + Δ )
q0′ = Clip1Y( q0 − Δ )
If dEp1 is equal to 1:
Δp = Clip3( −( tC >> 1 ), tC >> 1, ( ( ( p2 + p0 + 1 ) >> 1 ) − p1 + Δ ) >> 1 )
p1′ = Clip1Y( p1 + Δp )
If dEq1 is equal to 1:
Δq = Clip3( −( tC >> 1 ), tC >> 1, ( ( ( q2 + q0 + 1 ) >> 1 ) − q1 − Δ ) >> 1 )
q1′ = Clip1Y( q1 + Δq )
note that for each P and Q block, a maximum of two pixels are modified using three pixels as inputs, respectively.
2.2.8 chroma filtering
The boundary strength Bs for chroma filtering is inherited from luma. If Bs > 1, chroma filtering is performed. No filter selection process is performed for chroma, since only one filter can be applied. The filtered sample values p0′ and q0′ are derived as follows.
Δ=Clip3(-tC,tC,((((q0-p0)<<2)+p1-q1+4)>>3))
p0’=Clip1C(p0+Δ)
q0’=Clip1C(q0-Δ)
When the 4:2:2 chroma format is used, each chroma block has a rectangular shape and is coded using at most two square transforms. This process introduces additional boundaries between the chroma transform blocks. These boundaries are not deblocked (the thick dashed line horizontally across the center in Fig. 7).
Fig. 7 is an example of deblocking behavior for the 4:2:2 chroma format.
2.3 Extension of the quantization parameter value range
The QP range is extended from [0, 51] to [0, 63], and tC′ and β′ are derived as follows. The sizes of the β′ and tC′ tables increase from 52 and 54 entries to 64 and 66 entries, respectively.
8.7.2.5.3 decision process of brightness block edge
The variable qPL is derived as follows:
qPL=((QpQ+QpP+1)>>1) (2-38)
the value of the variable β' is determined based on the luminance quantization parameter Q, as derived from tables 2-3, as follows:
Q = Clip3( 0, 63, qPL + ( slice_beta_offset_div2 << 1 ) ) (2-39)
where slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 of the slice that contains sample q0,0.
The variable β is derived as follows:
β=β′*(1<<(BitDepthY-8)) (2-40)
variable tCThe value of' is determined based on the luminance quantization parameter Q, which is derived as follows, according to the provisions of tables 2-3:
Figure BDA0002256408870000162
where slice _ tc _ offset _ div2 is the value of the syntax element slice _ tc _ offset _ div2 of the slice containing sample q0, 0.
The variable tC is derived as follows:
tC = tC′ * ( 1 << ( BitDepthY − 8 ) ) (2-42)
Table 2-3 below shows the derivation of the threshold variables β′ and tC′ from the input Q.
Q     0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18
β′    0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  6  7  8
tC′   0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1
Q    19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37
β′    9 10 11 12 13 14 15 16 17 18 20 22 24 26 28 30 32 34 36
tC′   1  1  1  1  1  1  1  1  2  2  2  2  3  3  3  3  4  4  4
Q    38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56
β′   38 40 42 44 46 48 50 52 54 56 58 60 62 64 66 68 70 72 74
tC′   5  5  6  6  7  8  9 10 11 13 14 16 18 20 22 24 26 28 30
Q    57 58 59 60 61 62 63 64 65
β′   76 78 80 82 84 86 88  -  -
tC′  32 34 36 38 40 42 44 46 48
2.4 initialization of context variables
In context-based adaptive binary arithmetic coding (CABAC), the initial state of a context variable depends on the QP of the slice. The initialization process is described below.
9.3.2.2 initialization procedure of context variable
The output of this process is the initialized CABAC context variable indexed by ctxTable and ctxIdx.
Tables 9-5 to 9-31 contain the values of the 8-bit variable initValue used in the initialization of the context variables that are assigned to all syntax elements in sections 7.3.8.1 through 7.3.8.11, except end_of_slice_segment_flag, end_of_sub_stream_one_bit and pcm_flag.
For each context variable, two variables pStateIdx and valMps are initialized.
Note 1-As further described in section 9.3.4.3, the variable pStateIdx corresponds to the probability state index, and the variable valMps corresponds to the value of the most likely symbol.
The derivation of the two 4-bit variables slopeIdx and offsetIdx from the 8-bit table entry initValue is as follows:
slopeIdx=initValue>>4
offsetIdx=initValue&15 (9-4)
The variables m and n used in the initialization of the context variables are derived from slopeIdx and offsetIdx as follows:
m=slopeIdx*5-45
n=(offsetIdx<<3)–16 (9-5)
The two values assigned to pStateIdx and valMps for the initialization are derived from SliceQpY, which is derived in Equation 7-40. Given the variables m and n, the initialization is specified as follows:
preCtxState=Clip3(1,126,((m*Clip3(0,51,SliceQpY))>>4)+n)
valMps=(preCtxState<=63)?0:1
pStateIdx=valMps?(preCtxState-64):(63-preCtxState) (9-6)
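For illustration, a C sketch of the initialization formulas (9-4) to (9-6) above; the arithmetic right shift of the specification is assumed:

static int clip3i(int lo, int hi, int x) {
    return x < lo ? lo : (x > hi ? hi : x);
}

static void init_context(int initValue, int sliceQpY,
                         int *pStateIdx, int *valMps) {
    int slopeIdx  = initValue >> 4;
    int offsetIdx = initValue & 15;                            /* (9-4) */
    int m = slopeIdx * 5 - 45;
    int n = (offsetIdx << 3) - 16;                             /* (9-5) */
    int preCtxState =
        clip3i(1, 126, ((m * clip3i(0, 51, sliceQpY)) >> 4) + n);
    *valMps = (preCtxState <= 63) ? 0 : 1;                     /* (9-6) */
    *pStateIdx = *valMps ? (preCtxState - 64) : (63 - preCtxState);
}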
In Table 9-4, the ctxIdx that need to be initialized for each of the three initialization types, specified by the variable initType, are listed. The table number that includes the initValue values needed for the initialization is also listed. For P and B slice types, the derivation of initType depends on the value of the cabac_init_flag syntax element. The variable initType is derived as follows (per the HEVC design):
if( slice_type = = I )
initType = 0
else if( slice_type = = P )
initType = cabac_init_flag ? 2 : 1
else
initType = cabac_init_flag ? 1 : 2
3. Examples of problems addressed by embodiments
In dependent quantization, QPc + 1 is used for quantization. However, the deblocking filter process uses QPc, which is inconsistent.
In addition, if QPc + 1 is used in the deblocking filter process, then since QPc can be set to the maximum value 63, it is unclear how to handle the mapping tables between Q and tC/β.
4. Examples of the embodiments
To solve the problem, various methods may be applied to the deblocking filter process, depending on the quantization parameters of the blocks to be filtered. They may also be applicable to other kinds of processes that depend on the quantization parameter associated with a block, such as bilateral filtering.
The techniques detailed below should be considered as examples to explain the general concepts, and should not be construed narrowly. Furthermore, these techniques may be combined in any manner. Denote the allowed minimum and maximum QP as QPmin and QPmax, respectively. Denote the signaled quantization parameter of the current CU as QPc; the quantization/inverse quantization process depends on QPc + N to derive the quantization step (e.g., N = 1 in the current dependent quantization design). Tc′[n] and β′[n] denote the n-th entries of the Tc′ and β′ tables.
1. It is proposed that whether and how to apply the deblocking filtering may depend on whether dependent scalar quantization is used.
a. For example, the QP used in the deblocking filtering depends on whether dep_quant_enabled_flag is equal to 0 or 1.
2. It is proposed to use one and the same QP in dependent quantization, deblocking filtering and/or any other process that uses QP as an input parameter.
a. In one example, QPc is used in dependent quantization instead of QPc + N, where N is an integer such as 1, 3, 6, 7 or −1, −3, −6, −7.
b. In one example, QPc + N is used in deblocking filtering and/or any other process that uses QP as an input parameter.
c. Before QPc + N is used, it is clipped to the valid range.
3. When QPc + N is used in dependent quantization, it is proposed to set the allowable QP range for dependent quantization to [QPmin − N, QPmax − N] instead of [QPmin, QPmax].
a. Alternatively, the allowable QP range is set to [Max(QPmin − N, QPmin), Min(QPmax − N, QPmax)].
4. It is proposed to use weaker/stronger deblocking filtering when using dependent quantization than when not using dependent quantization.
a. In one example, when dependent quantization is enabled, the encoder selects the weaker/stronger deblocking filtering and signals it to the decoder.
b. In one example, when dependent quantization is enabled, the smaller/larger thresholds Tc and β are used implicitly at both the encoder and decoder.
5. When QPc + N is used in the deblocking filter process, more entries may be needed in the Tc′ and β′ tables (e.g., Table 2-3) to cover QPmax + N. (A sketch of alternatives a, b and g is given after this item.)
a. Alternatively, the same tables may be used, but QPc + N is first clipped to the range [QPmin, QPmax] whenever it occurs.
b. In one example, the Tc′ table is extended such that Tc′[66] = 50 and Tc′[67] = 52.
c. In one example, the Tc′ table is extended such that Tc′[66] = 49 and Tc′[67] = 50.
d. In one example, the Tc′ table is extended such that Tc′[66] = 49 and Tc′[67] = 51.
e. In one example, the Tc′ table is extended such that Tc′[66] = 48 and Tc′[67] = 50.
f. In one example, the Tc′ table is extended such that Tc′[66] = 50 and Tc′[67] = 51.
g. In one example, the β′ table is extended such that β′[64] = 90 and β′[65] = 92.
h. In one example, the β′ table is extended such that β′[64] = 89 and β′[65] = 90.
i. In one example, the β′ table is extended such that β′[64] = 89 and β′[65] = 91.
j. In one example, the β′ table is extended such that β′[64] = 88 and β′[65] = 90.
k. In one example, the β′ table is extended such that β′[64] = 90 and β′[65] = 91.
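For illustration, a minimal C sketch of two of the alternatives in item 5: clipping the filter QP back into the allowed range (alternative a), or appending entries to the tables (the values shown are those of variants b and g):

/* Alternative 5.a: reuse the existing tables by clipping first. */
static int filter_qp_clipped(int qpc, int n, int qpMin, int qpMax) {
    int qp = qpc + n;
    return qp < qpMin ? qpMin : (qp > qpMax ? qpMax : qp);
}

/* Alternatives 5.b and 5.g: appended table entries. */
enum {
    TC_PRIME_66 = 50, TC_PRIME_67 = 52,    /* variant 5.b */
    BETA_PRIME_64 = 90, BETA_PRIME_65 = 92 /* variant 5.g */
};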
6. When dependent quantization is enabled, the initialization of the CABAC contexts depends on QPc + N instead of QPc.
7. The quantization parameters signaled in higher-layer signaling may be assigned different semantics based on whether dependent quantization is used. (A sketch of the derivation in b is given after this item.)
a. In one example, the QP indicated in the picture parameter set/picture header (i.e., init_qp_minus26 in HEVC) may have different semantics.
i. When dependent quantization is OFF, init_qp_minus26 plus 26 specifies the initial value of SliceQpY for each slice referring to the PPS, or the initial value of the quantization parameter of all slices (tiles) referring to the PPS/picture header.
ii. When dependent quantization is ON, init_qp_minus26 plus 27 specifies the initial value of SliceQpY for each slice referring to the PPS, or the initial value of the quantization parameter of all slices referring to the PPS/picture header.
b. In one example, the delta QP indicated in the slice header/slice group header (i.e., slice_qp_delta in HEVC) may have different semantics.
i. When dependent quantization is OFF, slice_qp_delta specifies the initial value of QpY to be used for the coding blocks in the slice/slice group until modified by the value of CuQpDeltaVal in the coding unit layer. The initial value SliceQpY of the QpY quantization parameter for the slice/slice group is derived as follows:
SliceQpY = 26 + init_qp_minus26 + slice_qp_delta
ii. When dependent quantization is ON, slice_qp_delta specifies the initial value of QpY to be used for the coding blocks in the slice/slice group until modified by the value of CuQpDeltaVal in the coding unit layer. The initial value SliceQpY of the QpY quantization parameter for the slice/slice group is derived as follows:
SliceQpY = 26 + init_qp_minus26 + slice_qp_delta + 1
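For illustration, the two semantics of item 7.b collapse into a single derivation in which dependent quantization contributes an extra +1; a minimal C sketch:

/* SliceQpY under item 7.b: the ON case adds 1 to the OFF-case value. */
static int slice_qp_y(int init_qp_minus26, int slice_qp_delta,
                      int dep_quant_on) {
    return 26 + init_qp_minus26 + slice_qp_delta + (dep_quant_on ? 1 : 0);
}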
8. Whether to enable or disable the proposed methods may be signaled in the SPS/PPS/VPS/sequence header/picture header/slice header/slice group header/group of CTUs, etc.
5. Example of another embodiment
In one embodiment, QPc + 1 is used in deblocking filtering. The newly added part is the dep_quant_enabled_flag based adjustment of QpQ and QpP below.
8.7.2.5.3 decision processing of luminance block edges
The inputs to this process are:
- a luma picture sample array recPictureL,
-a luma location (xCb, yCb) specifying an upper left corner sample of the current luma coding block relative to an upper left corner luma sample of the current picture,
-a luma location (xBl, yBl) specifying an upper left corner sample of the current luma block relative to an upper left corner sample of the current luma coding block,
a variable edgeType specifying whether vertical (EDGE _ VER) or horizontal (EDGE _ HOR) EDGEs are to be filtered,
a variable bS, which specifies the boundary filtering strength.
The output of this process is:
variables dE, dEp and dEq containing the decision,
variables β and tC
If edgeType is equal to EDGE_VER, the sample values pi,k and qi,k (with i = 0..3 and k = 0 and 3) are derived as follows:
qi,k = recPictureL[ xCb + xBl + i ][ yCb + yBl + k ] (8-284)
pi,k = recPictureL[ xCb + xBl − i − 1 ][ yCb + yBl + k ] (8-285)
Otherwise (edgeType is equal to EDGE_HOR), the sample values pi,k and qi,k (with i = 0..3 and k = 0 and 3) are derived as follows:
qi,k = recPictureL[ xCb + xBl + k ][ yCb + yBl + i ] (8-286)
pi,k = recPictureL[ xCb + xBl + k ][ yCb + yBl − i − 1 ] (8-287)
The variables QpQ and QpP are set equal to the QpY values of the coding units that include the coding blocks containing the samples q0,0 and p0,0, respectively.
If dep_quant_enabled_flag of the coding unit that includes the coding block containing sample q0,0 is equal to 1, QpQ is set equal to QpQ + 1. If dep_quant_enabled_flag of the coding unit that includes the coding block containing sample p0,0 is equal to 1, QpP is set equal to QpP + 1.
The variable qPL is derived as follows:
qPL = ( ( QpQ + QpP + 1 ) >> 1 ) (8-288)
as specified in tables 8-11, the value of the variable β' is determined based on the luminance quantization parameter Q, derived as follows:
Q=Clip3(0,51,qPL+(slice_beta_offset_div2<<1)) (8-289)
wherein slice _ beta _ offset _ div2 comprisesSample point q0,0The value of slice _ beta _ offset _ div2 of the slice.
The variable β is derived as follows:
β=β′*(1<<(BitDepthY-8)) (8-290)
as specified in tables 8-11, the variable t is determined based on the luminance quantization parameter QCThe value of' is derived as follows:
Q=Clip3(0,53,qPL+2*(bS-1)+(slice_tc_offset_div2<<1))(8-291)
wherein slice _ tc _ offset _ div2 includes a sample point q0,0The value of the syntax element slice _ tc _ offset _ div2 of the strip of (1).
Variable tCThe derivation of (c) is as follows:
tC=tC′*(1<<(BitDepthY-8) (8-292)
depending on the value of edgeType, the following formula applies:
-if edgeType is equal to EDGE _ VER, applying the following ordered steps:
the variables dpq0, dpq3, dp, dq and d are derived as follows:
dp0=Abs(p2,0-2*p1,0+p0,0) (8-293)
dp3=Abs(p2,3-2*p1,3+p0,3) (8-294)
dq0=Abs(q2,0-2*q1,0+q0,0) (8-295)
dq3=Abs(q2,3-2*q1,3+q0,3) (8-296)
dpq0=dp0+dq0 (8-297)
dpq3=dp3+dq3 (8-298)
dp=dp0+dp3 (8-299)
dq=dq0+dq3 (8-300)
d=dpq0+dpq3 (8-301)
the variables dE, dEp and dEq are set equal to 0.
When d is less than β, the following ordered steps apply:
the variable dpq is set equal to 2 × dpq 0.
For the sample location ( xCb + xBl, yCb + yBl ), the decision process for luma samples specified in section 8.7.2.5.6 is invoked with the sample values pi,0, qi,0 (with i = 0..3) and the variables dpq, β and tC as inputs, and the output is assigned to the decision dSam0.
The variable dpq is set equal to 2 * dpq3.
For the sample location ( xCb + xBl, yCb + yBl + 3 ), the decision process for luma samples specified in section 8.7.2.5.6 is invoked with the sample values pi,3, qi,3 (with i = 0..3) and the variables dpq, β and tC as inputs, and the output is assigned to the decision dSam3.
The variable dE is set equal to 1.
When dSam0 equals 1 and dSam3 equals 1, the variable dE is set equal to 2.
When dp is less than (β + (β > >1)) > >3, variable dEp is set equal to 1.
When dq is less than (β + (β > >1)) > >3, the variable dEq is set equal to 1.
Else (edgeType equals EDGE _ HOR), the following ordered steps apply:
the variables dpq0, dpq3, dp, dq and d are derived as follows:
dp0=Abs(p2,0-2*p1,0+p0,0) (8-302)
dp3=Abs(p2,3-2*p1,3+p0,3) (8-303)
dq0=Abs(q2,0-2*q1,0+q0,0) (8-304)
dq3=Abs(q2,3-2*q1,3+q0,3) (8-305)
dpq0=dp0+dq0 (8-306)
dpq3=dp3+dq3 (8-307)
dp=dp0+dp3 (8-308)
dq=dq0+dq3 (8-309)
d=dpq0+dpq3 (8-310)
the variables dE, dEp and dEq are set equal to 0.
When d is less than β, the following ordered steps apply:
the variable dpq is set equal to 2 × dpq 0.
For the sample location ( xCb + xBl, yCb + yBl ), the decision process for luma samples specified in section 8.7.2.5.6 is invoked with the sample values p0,0, p3,0, q0,0 and q3,0 and the variables dpq, β and tC as inputs, and the output is assigned to the decision dSam0.
The variable dpq is set equal to 2 * dpq3.
For the sample location ( xCb + xBl + 3, yCb + yBl ), the decision process for luma samples specified in section 8.7.2.5.6 is invoked with the sample values p0,3, p3,3, q0,3 and q3,3 and the variables dpq, β and tC as inputs, and the output is assigned to the decision dSam3.
The variable dE is set equal to 1.
When dSam0 equals 1 and dSam3 equals 1, the variable dE is set equal to 2.
When dp is less than (β + (β > >1)) > >3, variable dEp is set equal to 1.
When dq is less than (β + (β > >1)) > >3, the variable dEq is set equal to 1.
Table 8-11 below shows the derivation of the threshold variables β′ and tC′ from the input Q.
Q     0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18
β′    0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  6  7  8
tC′   0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1
Q    19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37
β′    9 10 11 12 13 14 15 16 17 18 20 22 24 26 28 30 32 34 36
tC′   1  1  1  1  1  1  1  1  2  2  2  2  3  3  3  3  4  4  4
Q    38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
β′   38 40 42 44 46 48 50 52 54 56 58 60 62 64  -  -
tC′   5  5  6  6  7  8  9 10 11 13 14 16 18 20 22 24
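For illustration, a minimal C sketch of the modified luma-edge QP derivation of this embodiment: each side's QP is incremented when its coding unit uses dependent quantization before the averaged edge QP of Equation 8-288 is formed:

static int derive_qpl(int qpQ, int depQuantQ, int qpP, int depQuantP) {
    if (depQuantQ) qpQ += 1;     /* newly added adjustment for the Q side */
    if (depQuantP) qpP += 1;     /* newly added adjustment for the P side */
    return (qpQ + qpP + 1) >> 1; /* (8-288) */
}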
Fig. 8 is a block diagram of a video processing device 800. The apparatus 800 may be used to implement one or more of the methods described herein. The apparatus 800 may be implemented in a smartphone, tablet, computer, internet of things (IoT) receiver, and/or the like. The apparatus 800 may include one or more processors 802, one or more memories 804, and video processing hardware 806. The one or more processors 802 may be configured to implement one or more of the methods described in this document. The one or more memories 804 may be used to store data and code for implementing the methods and techniques described herein. Video processing hardware 806 may be used to implement some of the techniques described in this document in hardware circuits.
Fig. 10 is a flow diagram of a method 1000 of processing video. The method 1000 includes: performing (1005) a determination to process the first video block using dependency scalar quantization; determining (1010) a first Quantization Parameter (QP) to be used for deblocking filtering of the first video block based on the determination to process the first video block using dependent scalar quantization; and performing (1015) further processing on the first video block using deblocking filtering according to the first QP.
Some examples of determining candidates for encoding and their use are described in section 4 of this document with reference to method 1000. For example, as described in section 4, the quantization parameters for deblocking filtering may be determined depending on the use of dependent scalar quantization.
Referring to method 1000, video blocks may be encoded in a video bitstream, where bit efficiency may be achieved by using bitstream generation rules related to motion information prediction.
The method may include: wherein the determination to use dependent scalar quantization is based on the value of a signaled flag.
The method may include: wherein the first QP used for the deblocking filtering is also used for the dependent scalar quantization and for other processing techniques applied to the first video block.
The method may include: wherein the first QP is QPc.
The method may include: wherein the first QP is QPc + N, where N is an integer.
The method may include: wherein QPc + N is modified from its previous value to fit within a threshold range.
The method may include: wherein the threshold range is [Max(QPmin − N, QPmin), Min(QPmax − N, QPmax)].
The method may include: determining, by the processor, to process a second video block without using dependent scalar quantization; and performing further processing on the second video block using another deblocking filtering, wherein, based on dependent scalar quantization being used for the first video block, the deblocking filtering used for the first video block is stronger or weaker than the other deblocking filtering used to process the second video block.
The method may include: wherein the deblocking filtering is selected by an encoder, the method further comprising: signaling to a decoder that dependent scalar quantization is enabled for the first video block.
The method may include: wherein the encoder and decoder use smaller or larger thresholds Tc and β based on the use of dependent scalar quantization.
The method may include: wherein the first QP is QPc + N, and additional entries are used in the Tc′ and β′ tables for QPmax + N.
The method may include: wherein the first QP is QPc + N, and the Tc′ and β′ tables are unchanged for QPmax + N based on QPc + N being clipped to within a threshold range.
The method may include: wherein the Tc′ table is extended such that Tc′[66] = 50 and Tc′[67] = 52.
The method may include: wherein the Tc′ table is extended such that Tc′[66] = 49 and Tc′[67] = 50.
The method may include: wherein the Tc′ table is extended such that Tc′[66] = 49 and Tc′[67] = 51.
The method may include: wherein the Tc′ table is extended such that Tc′[66] = 48 and Tc′[67] = 50.
The method may include: wherein the Tc′ table is extended such that Tc′[66] = 50 and Tc′[67] = 51.
The method may include: wherein the β′ table is extended such that β′[64] = 90 and β′[65] = 92.
The method may include: wherein the β′ table is extended such that β′[64] = 89 and β′[65] = 90.
The method may include: wherein the β′ table is extended such that β′[64] = 89 and β′[65] = 91.
The method may include: wherein the β′ table is extended such that β′[64] = 88 and β′[65] = 90.
The method may include: wherein the β′ table is extended such that β′[64] = 90 and β′[65] = 91.
The method may include: wherein context-based adaptive binary arithmetic coding (CABAC) initialization is based on the first QP being QPc + N and on the first video block being processed using dependent scalar quantization.
The method may include: wherein the determination to process the first video block using dependent scalar quantization is signaled with semantics that depend on the use of dependent scalar quantization.
The method may include wherein the first QP is indicated in a picture parameter set or picture header.
The method may include wherein, based on dependency quantization being off, the picture parameter set or picture header indicates init_qp_minus26, where init_qp_minus26 plus 26 specifies the initial value of SliceQpY for a slice that refers to the PPS, or the initial value of the quantization parameter for a slice (tile) referenced in the PPS or picture header.
The method may include wherein, based on dependency quantization being used, the picture parameter set or picture header indicates init_qp_minus26, where init_qp_minus26 plus 27 specifies the initial value of SliceQpY for a slice that refers to the PPS, or the initial value of the quantization parameter for a slice (tile) referenced in the PPS or picture header.
The method may include wherein the first QP is indicated in a slice header or a slice group header.
The method may include wherein, based on dependency quantization being off, a slice header or a slice group header indicates slice_qp_delta, which specifies an initial value of QpY to be used for the coding blocks in the slice or slice group until modified by the value of CuQpDeltaVal in the coding unit layer, wherein the initial value of QpY is SliceQpY, where SliceQpY = 26 + init_qp_minus26 + slice_qp_delta.
The method may include wherein, based on dependency quantization being used, a slice header or a slice group header indicates slice_qp_delta, which specifies an initial value of QpY to be used for the coding blocks in the slice or slice group until modified by the value of CuQpDeltaVal in the coding unit layer, wherein the initial value of QpY is SliceQpY, where SliceQpY = 26 + init_qp_minus26 + slice_qp_delta + 1.
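By way of illustration, the slice-level derivation above can be sketched as follows (C++; the function name and flag are hypothetical, and the formula is taken directly from the two cases just described).

// Hypothetical sketch: initial luma QP for a slice or slice group.
// SliceQpY = 26 + init_qp_minus26 + slice_qp_delta       (dependency quantization off)
// SliceQpY = 26 + init_qp_minus26 + slice_qp_delta + 1   (dependency quantization used)
int deriveSliceQpY(int initQpMinus26, int sliceQpDelta, bool dependencyQuantUsed) {
    int sliceQpY = 26 + initQpMinus26 + sliceQpDelta;
    if (dependencyQuantUsed) {
        sliceQpY += 1;  // same signaled values yield a QP one step higher
    }
    return sliceQpY;  // later modified by CuQpDeltaVal at the coding unit layer
}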
The method may include wherein the method is applied when it is signaled in an SPS, PPS, VPS, sequence header, picture header, slice group header, or Coding Tree Unit (CTU) group.
Fig. 11 is a flow diagram of a video processing method 1100 for processing video. The method 1100 comprises: determining (1105) one or more deblocking filter parameters to be used in deblocking filter processing of the current video block based on whether the current video block is processed using dependency scalar quantization, wherein a set of allowable reconstruction values for transform coefficients corresponding to the dependency scalar quantization depends on at least one transform coefficient level prior to the current transform coefficient level; and performing (1110) a deblocking filter process on the current video block according to the one or more deblocking filter parameters.
The method may include wherein determining one or more deblocking filter parameters to be used in the deblocking filter processing of the current video block specifically comprises: determining one or more deblocking filter parameters corresponding to weaker deblocking filtering in the case where dependency scalar quantization is used for the current video block; or determining one or more deblocking filter parameters corresponding to stronger deblocking filtering in the case where dependency scalar quantization is used for the current video block.
The method may include wherein the stronger deblocking filtering modifies more pixels and the weaker deblocking filtering modifies fewer pixels.
The method may include wherein determining one or more deblocking filter parameters to be used in the deblocking filtering process of the current video block specifically comprises: selecting smaller thresholds Tc and β in the case where dependency scalar quantization is used for the current video block, or selecting larger thresholds Tc and β in the case where dependency scalar quantization is used for the current video block.
The method may include wherein determining one or more deblocking filter parameters to be used in the deblocking filter processing of the current video block specifically comprises: determining a quantization parameter included in the one or more deblocking filter parameters based on whether the current video block is processed using dependency scalar quantization.
The method may include wherein, in the case where dependency scalar quantization is used for the current video block, the quantization parameter for the deblocking filtering process is set equal to QPc + N, where QPc is the signaled quantization parameter for the current video block, QPc + N is the quantization parameter used for dependency scalar quantization, and N is an integer with N >= 1.
The method may include wherein, in the case where dependency scalar quantization is used for the current video block, at least one additional entry is provided in a mapping table, wherein the mapping table indicates a mapping relationship between the quantization parameter and the threshold β′, or between the quantization parameter and the threshold Tc′.
The method may include wherein the mapping table is extended according to any one of the following options: tc′[66] = 50 and tc′[67] = 52; tc′[66] = 49 and tc′[67] = 50; tc′[66] = 49 and tc′[67] = 51; tc′[66] = 48 and tc′[67] = 50; or tc′[66] = 50 and tc′[67] = 51.
The method may include wherein the mapping table is extended according to any one of the following options: β′[64] = 90 and β′[65] = 92; β′[64] = 89 and β′[65] = 90; β′[64] = 89 and β′[65] = 91; β′[64] = 88 and β′[65] = 90; or β′[64] = 90 and β′[65] = 91.
The method may include wherein, in the case where QPc + N is larger than QPmax or smaller than QPmin, QPc + N is clipped to within the range [QPmin, QPmax], where QPmin and QPmax are the minimum and maximum allowable quantization parameters, respectively.
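By way of illustration, the clipping option just described can be sketched together with the QPc + N derivation as follows (C++; all names are hypothetical, and the alternative to clipping is the extended mapping table shown earlier).

#include <algorithm>

// Hypothetical sketch: QP used by the deblocking filter for a block.
// When the block was coded with dependency scalar quantization, the
// filter uses QPc + N; the result is clipped into [QPmin, QPmax]
// before it indexes the tc'/beta' mapping tables.
int deblockingQp(int qpc, bool dependencyQuantUsed, int n, int qpMin, int qpMax) {
    const int qp = dependencyQuantUsed ? qpc + n : qpc;
    return std::min(std::max(qp, qpMin), qpMax);  // clip to [QPmin, QPmax]
}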
Fig. 13 is a flow diagram of a video processing method 1300 for processing video. The method 1300 includes: determining (1305) whether to apply a deblocking filtering process based on whether the current video block is processed using dependency scalar quantization, wherein a set of allowable reconstruction values for transform coefficients corresponding to the dependency scalar quantization depends on at least one transform coefficient level preceding the current transform coefficient level; and performing (1310) the deblocking filtering process on the current video block based on a determination to apply it.
Fig. 12 is a flow diagram of a video processing method 1200 for processing video. The method 1200 includes: determining (1205), in the case where dependency scalar quantization is enabled for the current video block, a quantization parameter to be used in dependency scalar quantization of the current video block, wherein a set of allowable reconstruction values of transform coefficients corresponding to the dependency scalar quantization depends on at least one transform coefficient level preceding the current transform coefficient level; and performing (1210) dependency scalar quantization on the current video block based on the determined quantization parameter, wherein the determined quantization parameter is further applied, as an input parameter, to video processing of the current video block other than the dependency scalar quantization.
The method may include wherein the video processing other than the dependency scalar quantization comprises a deblocking filtering process.
The method may include wherein, in the case where dependency scalar quantization is enabled for the current video block, the determined quantization parameter is QPc, where QPc is the signaled quantization parameter for the current video block.
The method may include wherein, in the case where dependency scalar quantization is enabled for the current video block, the determined quantization parameter is QPc + N, where QPc is the signaled quantization parameter for the current video block and N is an integer.
The method may include wherein QPc + N is clipped to a threshold range before it is used.
The method may include wherein the threshold range is [QPmin, QPmax], where QPmin and QPmax are the minimum and maximum allowed quantization parameters, respectively.
The method may include wherein, in the case where dependency scalar quantization is enabled for the current video block, the allowed QPc range for dependency scalar quantization is [QPmin - N, QPmax - N], where QPmin and QPmax are, respectively, the minimum and maximum values of the allowed QPc in the case where dependency scalar quantization is not enabled for the current video block.
The method may include wherein, in the case where dependency scalar quantization is enabled for the current video block, the allowed QPc range for dependency scalar quantization is [Max(QPmin - N, QPmin), Min(QPmax - N, QPmax)], where QPmin and QPmax are, respectively, the minimum and maximum values of the allowed QPc in the case where dependency scalar quantization is not enabled for the current video block.
The method may include wherein context-based adaptive binary arithmetic coding (CABAC) initialization is based on QPc + N when dependency scalar quantization is enabled.
The method may include wherein the higher-layer quantization parameter is assigned different semantics based on whether dependency scalar quantization is enabled.
The method may include wherein the quantization parameter is signaled by a first parameter in a picture parameter set or a picture header.
The method may include wherein the first parameter is init_qp_minus26 and, in the case where dependency scalar quantization is disabled, init_qp_minus26 plus 26 specifies the initial quantization parameter value SliceQpY for a slice that refers to the picture parameter set, or the initial value of the quantization parameter for a slice that refers to the picture parameter set or picture header.
The method may include wherein the first parameter is init_qp_minus26 and, in the case where dependency scalar quantization is enabled, init_qp_minus26 plus 27 specifies the initial quantization parameter value SliceQpY for a slice that refers to the picture parameter set, or the initial value of the quantization parameter for a slice that refers to the picture parameter set or picture header.
The method may include wherein the quantization parameter is signaled by a second parameter in a picture parameter set or a picture header.
The method may include wherein the second parameter is slice_qp_delta and, with dependency scalar quantization disabled, slice_qp_delta is used to derive the initial quantization parameter value QpY, which is used for the coding blocks in a slice or slice group until modified by the value of CuQpDeltaVal in the coding unit layer, where the initial value of QpY is set equal to SliceQpY and SliceQpY = 26 + init_qp_minus26 + slice_qp_delta.
The method may include wherein the second parameter is slice_qp_delta and, with dependency scalar quantization enabled, slice_qp_delta is used to derive the initial quantization parameter value QpY, which is used for the coding blocks in a slice or slice group until modified by the value of CuQpDeltaVal in the coding unit layer, where the initial value of QpY is set equal to SliceQpY and SliceQpY = 26 + init_qp_minus26 + slice_qp_delta + 1.
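As a worked illustration of these semantics (the signaled values here are hypothetical): if init_qp_minus26 = 4 and slice_qp_delta = 2 are signaled, the initial QpY is 26 + 4 + 2 = 32 with dependency scalar quantization disabled, and 26 + 4 + 2 + 1 = 33 with it enabled, so identical signaled values yield a quantization parameter one step higher under dependency quantization.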
The method may include wherein the method is applied when it is signaled in an SPS, PPS, VPS, sequence header, picture header, slice group header, or Coding Tree Unit (CTU) group.
Fig. 14 is a flow diagram of a video processing method 1400 for processing video. The method 1400 comprises: determining (1405), in the case where dependency scalar inverse quantization is enabled for the current video block, a quantization parameter to be used in dependency scalar inverse quantization of the current video block, wherein a set of allowable reconstruction values of transform coefficients corresponding to the dependency scalar inverse quantization depends on at least one transform coefficient level preceding the current transform coefficient level; and performing (1410) dependency scalar inverse quantization on the current video block based on the determined quantization parameter, wherein the determined quantization parameter is further applied, as an input parameter, to video processing of the current video block other than the dependency scalar inverse quantization.
It should be appreciated that the disclosed techniques may be implemented in a video encoder or decoder to improve compression efficiency when the shape of the coding unit being compressed differs significantly from a conventional square or near-square rectangular block. For example, new coding tools using long or tall coding units, such as units of size 4 × 32 or 32 × 4, may benefit from the disclosed techniques.
It should be understood that the disclosed techniques may be implemented in a video system including a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the above disclosed methods.
The techniques, examples, embodiments, modules, and functional operations disclosed herein and otherwise described may be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed herein and structural equivalents thereof, or in combinations of one or more of them. The disclosed embodiments and others can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer-readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" includes all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or groups of computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. Propagated signals are artificially generated signals, e.g., machine-generated electrical, optical, or electromagnetic signals, that are generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described herein can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer does not necessarily have such a device. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, such as internal hard disks or removable disks; a magneto-optical disk; and CDROM and DVD-ROM discs. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various functions described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claim combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Also, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described herein should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples have been described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims (20)

1. A video processing method, comprising:
in a case where dependency scalar quantization is enabled for a current video block, determining a quantization parameter to be used in the dependency scalar quantization of the current video block, wherein a set of allowable reconstruction values for transform coefficients corresponding to the dependency scalar quantization depends on at least one transform coefficient level preceding the current transform coefficient level; and
performing the dependency scalar quantization on the current video block based on the determined quantization parameter,
wherein the determined quantization parameter is further applied, as an input parameter, to video processing of the current video block other than the dependency scalar quantization.
2. The video processing method according to claim 1, wherein the video processing other than the dependency scalar quantization comprises a deblocking filtering process.
3. The video processing method of claim 1, wherein, where dependency scalar quantization is enabled for a current video block, the determined quantization parameter is QPc, wherein QPc is the signaled quantization parameter for the current video block.
4. The video processing method of claim 1, wherein, where dependency scalar quantization is enabled for a current video block, the determined quantization parameter is QPc + N, where QPc is the signaled quantization parameter for the current video block and N is an integer.
5. The video processing method of claim 4, further comprising: QPc + N is clipped to a threshold range before it is used.
6. The video processing method according to claim 5, wherein the threshold range is [ QPmin, QPmax ], wherein QPmin and QPmax are the minimum and maximum allowed quantization parameters, respectively.
7. The video processing method according to claim 4, wherein, in the case where dependency scalar quantization is enabled for the current video block, the allowed QPc range for dependency scalar quantization is [QPmin - N, QPmax - N], where QPmin and QPmax are, respectively, the minimum and maximum allowed QPc in the case where dependency scalar quantization is not enabled for the current video block.
8. The video processing method according to claim 4, wherein, in the case where dependency scalar quantization is enabled for the current video block, the allowed QPc range for dependency scalar quantization is [Max(QPmin - N, QPmin), Min(QPmax - N, QPmax)], where QPmin and QPmax are, respectively, the minimum and maximum allowed QPc in the case where dependency scalar quantization is not enabled for the current video block.
9. The video processing method according to any of claims 4 to 8, wherein the context-based adaptive binary arithmetic coding (CABAC) initialization is based on QPc + N with dependency scalar quantization enabled.
10. The video processing method according to any of claims 1 to 8, wherein the higher-layer quantization parameter is assigned different semantics based on whether dependency scalar quantization is enabled.
11. The video processing method according to any of claims 1 to 10, wherein the quantization parameter is signaled by a first parameter in a picture parameter set or a picture header.
12. The video processing method of claim 11, wherein the first parameter is init_qp_minus26, and in the event dependency scalar quantization is disabled, init_qp_minus26 plus 26 specifies the initial quantization parameter value SliceQpY for a slice that refers to the picture parameter set, or the initial value of the quantization parameter for a slice that refers to the picture parameter set or picture header.
13. The video processing method of claim 11, wherein the first parameter is init_qp_minus26, and in the event dependency scalar quantization is enabled, init_qp_minus26 plus 27 specifies the initial quantization parameter value SliceQpY for a slice that refers to the picture parameter set, or the initial value of the quantization parameter for a slice that refers to the picture parameter set or picture header.
14. The video processing method according to any of claims 1 to 10, wherein the quantization parameter is signaled by a second parameter in a picture parameter set or a picture header.
15. The video processing method of claim 14, wherein the second parameter is slice_qp_delta and, with dependency scalar quantization disabled, slice_qp_delta is used to derive the initial quantization parameter value QpY, which is used for the coding blocks in a slice or slice group until modified by the value of CuQpDeltaVal in the coding unit layer, wherein the initial value of QpY is set equal to SliceQpY and SliceQpY = 26 + init_qp_minus26 + slice_qp_delta.
16. The video processing method of claim 14, wherein the second parameter is slice_qp_delta and, with dependency scalar quantization enabled, slice_qp_delta is used to derive the initial quantization parameter value QpY, which is used for the coding blocks in a slice or slice group until modified by the value of CuQpDeltaVal in the coding unit layer, wherein the initial value of QpY is set equal to SliceQpY and SliceQpY = 26 + init_qp_minus26 + slice_qp_delta + 1.
17. The video processing method of any of claims 1-16, wherein the method is applied when it is signaled in an SPS, PPS, VPS, sequence header, picture header, slice group header, or Coding Tree Unit (CTU) group.
18. A video processing method, comprising:
in a case where dependency scalar inverse quantization is enabled for a current video block, determining a quantization parameter to be used in the dependency scalar inverse quantization of the current video block, wherein a set of allowable reconstruction values for transform coefficients corresponding to the dependency scalar inverse quantization depends on at least one transform coefficient level preceding the current transform coefficient level; and
performing the dependency scalar inverse quantization on the current video block according to the determined quantization parameter,
wherein the determined quantization parameter is further applied, as an input parameter, to video processing of the current video block other than the dependency scalar inverse quantization.
19. An apparatus in a video system, the apparatus comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the method of any of claims 1-18.
20. A computer program product stored on a non-transitory computer readable medium, the computer program product comprising program code for performing the method of any of claims 1-18.
CN201911055394.XA 2018-10-31 2019-10-31 Quantization parameters under a coding tool for dependent quantization Active CN111131819B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2018112945 2018-10-31
CNPCT/CN2018/112945 2018-10-31

Publications (2)

Publication Number Publication Date
CN111131819A true CN111131819A (en) 2020-05-08
CN111131819B CN111131819B (en) 2023-05-09

Family ID=68470587

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201911056351.3A Active CN111131821B (en) 2018-10-31 2019-10-31 Deblocking filtering under dependency quantization
CN201911055394.XA Active CN111131819B (en) 2018-10-31 2019-10-31 Quantization parameters under a coding tool for dependent quantization

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201911056351.3A Active CN111131821B (en) 2018-10-31 2019-10-31 Deblocking filtering under dependency quantization

Country Status (2)

Country Link
CN (2) CN111131821B (en)
WO (2) WO2020089825A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114727109A (en) * 2021-01-05 2022-07-08 腾讯科技(深圳)有限公司 Multimedia quantization processing method and device and coding and decoding equipment
WO2022165763A1 (en) * 2021-02-05 2022-08-11 Oppo广东移动通信有限公司 Encoding method, decoding method, encoder, decoder and electronic device
WO2022174475A1 (en) * 2021-02-22 2022-08-25 浙江大学 Video encoding method and system, video decoding method and system, video encoder, and video decoder
WO2022206987A1 (en) * 2021-04-02 2022-10-06 Beijing Bytedance Network Technology Co., Ltd. Adaptive dependent quantization
WO2022257142A1 (en) * 2021-06-11 2022-12-15 Oppo广东移动通信有限公司 Video decoding and coding method, device and storage medium
WO2023004590A1 (en) * 2021-07-27 2023-02-02 Oppo广东移动通信有限公司 Video decoding and encoding methods and devices, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010077325A2 (en) * 2008-12-29 2010-07-08 Thomson Licensing Method and apparatus for adaptive quantization of subband/wavelet coefficients
CN107431814A (en) * 2015-01-08 2017-12-01 微软技术许可有限责任公司 The change of ρ domains speed control
HK1246020A1 (en) * 2012-01-20 2018-08-31 Ge Video Compression Llc Apparatus for decoding a plurality of transform coefficients having transform coefficient levels from a data stream

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9185404B2 (en) * 2011-10-07 2015-11-10 Qualcomm Incorporated Performing transform dependent de-blocking filtering
US9161046B2 (en) * 2011-10-25 2015-10-13 Qualcomm Incorporated Determining quantization parameters for deblocking filtering for video coding
US9344723B2 (en) * 2012-04-13 2016-05-17 Qualcomm Incorporated Beta offset control for deblocking filters in video coding
WO2013162441A1 (en) * 2012-04-25 2013-10-31 Telefonaktiebolaget L M Ericsson (Publ) Deblocking filtering control
US20140079135A1 (en) * 2012-09-14 2014-03-20 Qualcomm Incoporated Performing quantization to facilitate deblocking filtering
CN103491373B (en) * 2013-09-06 2018-04-27 复旦大学 A kind of level Four flowing water filtering method of deblocking filter suitable for HEVC standard

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010077325A2 (en) * 2008-12-29 2010-07-08 Thomson Licensing Method and apparatus for adaptive quantization of subband/wavelet coefficients
HK1246020A1 (en) * 2012-01-20 2018-08-31 Ge Video Compression Llc Apparatus for decoding a plurality of transform coefficients having transform coefficient levels from a data stream
CN107431814A (en) * 2015-01-08 2017-12-01 微软技术许可有限责任公司 The change of ρ domains speed control

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SCHWARZ (FRAUNHOFER) ET AL: "CE7: Transform coefficient coding and dependent quantization (Test 7.1.2, 7.2.1)", 11th JVET Meeting (The Joint Video Exploration Team of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114727109A (en) * 2021-01-05 2022-07-08 腾讯科技(深圳)有限公司 Multimedia quantization processing method and device and coding and decoding equipment
CN114727109B (en) * 2021-01-05 2023-03-24 腾讯科技(深圳)有限公司 Multimedia quantization processing method and device and coding and decoding equipment
WO2022165763A1 (en) * 2021-02-05 2022-08-11 Oppo广东移动通信有限公司 Encoding method, decoding method, encoder, decoder and electronic device
WO2022174475A1 (en) * 2021-02-22 2022-08-25 浙江大学 Video encoding method and system, video decoding method and system, video encoder, and video decoder
WO2022206987A1 (en) * 2021-04-02 2022-10-06 Beijing Bytedance Network Technology Co., Ltd. Adaptive dependent quantization
WO2022257142A1 (en) * 2021-06-11 2022-12-15 Oppo广东移动通信有限公司 Video decoding and coding method, device and storage medium
WO2023004590A1 (en) * 2021-07-27 2023-02-02 Oppo广东移动通信有限公司 Video decoding and encoding methods and devices, and storage medium

Also Published As

Publication number Publication date
WO2020089825A1 (en) 2020-05-07
CN111131821A (en) 2020-05-08
CN111131819B (en) 2023-05-09
CN111131821B (en) 2023-05-09
WO2020089824A1 (en) 2020-05-07

Similar Documents

Publication Publication Date Title
US11363264B2 (en) Sample adaptive offset control
CN111131821B (en) Deblocking filtering under dependency quantization
JP7389251B2 (en) Cross-component adaptive loop filter using luminance differences
EP4087247A1 (en) Luminance based coding tools for video compression
CN112042203A (en) System and method for applying deblocking filter to reconstructed video data
CN113826383B (en) Block dimension setting for transform skip mode
JP2023143946A (en) Quantization parameter offset for chroma deblock filtering
CN114375582A (en) Method and system for processing luminance and chrominance signals
WO2020125804A1 (en) Inter prediction using polynomial model
CN113826398B (en) Interaction between transform skip mode and other codec tools
CN113853787A (en) Transform skip mode based on sub-block usage
WO2021164736A1 (en) Constraints for inter-layer referencing
CN114930818A (en) Bitstream syntax for chroma coding and decoding
CN113966611A (en) Significant coefficient signaling in video coding and decoding
WO2021136470A1 (en) Clustering based palette mode for video coding
JP7372483B2 (en) Filtering parameter signaling in video picture header
US20240022721A1 (en) Constraints on partitioning of video blocks
WO2024022377A1 (en) Using non-adjacent samples for adaptive loop filter in video coding
WO2020221213A1 (en) Intra sub-block partitioning and multiple transform selection
WO2023213298A1 (en) Filter shape switch for adaptive loop filter in video coding
CN117716690A (en) Conditions of use of adaptive bilateral filters
CN117256140A (en) Use of steering filters
CN117769833A (en) Adaptive bilateral filter in video encoding and decoding
CN116965035A (en) Transformation on non-binary blocks
WO2023059235A1 (en) Combining deblock filtering and another filtering for video encoding and/or decoding

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant