CN115699737A - Implicit determination of transform skip mode - Google Patents

Implicit determination of transform skip mode

Info

Publication number
CN115699737A
CN115699737A (application CN202180024661.6A)
Authority
CN
China
Prior art keywords
video
transform
block
mode
blocks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180024661.6A
Other languages
Chinese (zh)
Inventor
张莉
张凯
张玉槐
刘鸿彬
王悦
马思伟
Current Assignee
Douyin Vision Co Ltd
ByteDance Inc
Original Assignee
Douyin Vision Co Ltd
ByteDance Inc
Application filed by Douyin Vision Co Ltd and ByteDance Inc
Publication of CN115699737A
Legal status: Pending

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 — characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/12 — Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/157 — Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/176 — characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/61 — using transform coding in combination with predictive coding

Abstract

Methods, systems, and apparatus for video processing are described. An example video processing method includes performing a conversion between a video and a bitstream of the video according to a rule. The rule specifies that the conversion indicates the use of a particular transform mode at least at a first video unit level and at a second video unit level.

Description

Implicit determination of transform skip mode
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to and the benefit of International Patent Application No. PCT/CN2020/081198, filed on March 25, 2020, pursuant to applicable patent laws and/or rules under the Paris Convention. The entire disclosure of the above application is incorporated by reference as part of the present disclosure for all purposes under the law.
Technical Field
This patent document relates to image encoding and decoding and video encoding and decoding.
Background
Digital video accounts for the largest bandwidth usage on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, the bandwidth requirements for digital video usage are expected to continue to grow.
Disclosure of Invention
This document discloses techniques that may be used by video encoders and decoders that process a codec representation of video using control information useful for decoding of the codec representation.
In one example aspect, a video processing method is disclosed. The method includes performing a conversion between the video and a bitstream of the video according to a rule. The rule provides for indicating the use of a particular transform mode for the conversion at least in a first video unit level and a second video unit level.
In another example aspect, a video processing method is disclosed. The method includes performing a conversion between a video block of the video and a bitstream of the video according to a rule. The rule specifies that a syntax element at the video unit level is used to indicate the set of transforms allowed for the conversion.
In another example aspect, a video processing method is disclosed. The method includes performing a conversion of video blocks of the video to a bitstream of the video according to a rule. The rule specifies that use of a particular transform mode for conversion of a video block is determined based on a function associated with energy of representative coefficients of one or more representative blocks of video.
In another example aspect, a video processing method is disclosed. The method comprises the following steps: determining, based on a rule, whether a horizontally-specific transform or a vertically-specific transform is applied to a video block for a conversion between the video block of the video and a codec representation of the video; and performing a conversion based on the determination. The rule specifies a relationship between the determination and a representative coefficient from decoded coefficients of one or more representative blocks of video.
In another example aspect, another video processing method is disclosed. The method comprises the following steps: determining, based on a rule, whether a horizontally-specific transform or a vertically-specific transform is applied to a video block for a conversion between the video block of the video and a codec representation of the video; and performing a conversion based on the determination. The rule specifies a relationship between the determination and a decoded luminance coefficient for the video block.
In another example aspect, another video processing method is disclosed. The method comprises the following steps: for a conversion between a video block of video and a codec representation of the video, determining whether a horizontally-specific transform or a vertically-specific transform is applied to the video block based on a rule; and performing a conversion based on the determination. The rule specifies a relationship between the determination and a value V associated with a decoded coefficient or a representative coefficient of a representative block.
In another example aspect, another video processing method is disclosed. The method includes determining that one or more syntax fields are present in a codec representation of a video, wherein the video comprises one or more video blocks; based on the one or more syntax fields, it is determined whether a horizontally-specific transform or a vertically-specific transform is enabled for a video block in the video.
In another example aspect, another video processing method is disclosed. The method includes making a first determination as to whether use of a particular transform is enabled for a conversion between a video block of the video and a codec representation of the video; making a second determination as to whether a zeroing operation is enabled during the conversion; and performing the conversion based on the first determination and the second determination.
In another example aspect, another video processing method is disclosed. The method includes performing a conversion between a video block of the video and a codec representation of the video; wherein the video block is represented in a codec representation as a codec block, wherein non-zero coefficients of the codec block are limited within one or more sub-regions; and wherein a specific transformation is applied to generate the codec block.
In yet another example aspect, a video encoder apparatus is disclosed. The video encoder comprises a processor configured to implement the above-described method.
In yet another example aspect, a video decoder apparatus is disclosed. The video decoder comprises a processor configured to implement the above-described method.
In yet another example aspect, a computer-readable medium having code stored thereon is disclosed. The code embodies one of the methods described herein in the form of processor executable code.
These and other features are described in this document.
Drawings
Fig. 1 shows an example video encoder block diagram.
Fig. 2 shows an example of 67 intra prediction modes.
Fig. 3A illustrates an example of reference samples for wide-angle intra prediction.
Fig. 3B illustrates another example of reference samples for wide-angle intra prediction.
Fig. 4 illustrates the discontinuity problem when the direction exceeds 45 degrees.
Fig. 5A shows an example definition of samples used by PDPC applied to diagonal intra mode and adjacent angular intra mode.
Fig. 5B illustrates another example definition of samples used by a PDPC applied to diagonal intra mode and adjacent angular intra mode.
Fig. 5C illustrates another example definition of samples used by the PDPC applied to diagonal intra mode and adjacent angular intra mode.
Fig. 5D illustrates yet another example definition of samples used by PDPC applied to diagonal intra mode and adjacent angular intra mode.
Fig. 6 shows a division example of a 4 × 8 block and an 8 × 4 block.
Fig. 7 shows a division example of all blocks except for 4 × 8, 8 × 4, and 4 × 4.
Fig. 8 shows an example of the secondary transform in JEM.
Fig. 9 shows an example of the reduced secondary transform (LFNST).
Fig. 10A shows an example of a forward reduced transform.
Fig. 10B shows an example of an inverse reduced transform.
Fig. 11 shows an example of a forward LFNST 8×8 process with a 16×48 matrix.
Fig. 12 shows an example of the scanning positions 17 to 64 of the non-zero elements.
Fig. 13 shows an example of sub-block transform patterns SBT-V and SBT-H.
Fig. 14A shows an example of Scan region based Coefficient Coding (SRCC).
Fig. 14B shows another example of scanning area-based coefficient coding (SRCC).
FIG. 15A illustrates an example limitation of IST according to non-zero coefficient position.
FIG. 15B illustrates another example limitation of IST according to non-zero coefficient positions.
Fig. 16A shows an example zeroed type TS codec block.
Fig. 16B shows another example zeroed type TS codec block.
Fig. 16C shows another example zeroed type TS codec block.
Fig. 16D shows another zeroed type of TS codec block.
Fig. 17 is a block diagram of an example video processing system.
Fig. 18 is a block diagram illustrating a video codec system according to some embodiments of the present disclosure.
Fig. 19 is a block diagram illustrating an encoder in accordance with some embodiments of the present disclosure.
Fig. 20 is a block diagram illustrating a decoder according to some embodiments of the present disclosure.
Fig. 21 is a block diagram of a video processing apparatus.
FIG. 22 is a flow diagram of an example method of video processing.
FIG. 23 is a flowchart representation of a video processing method according to the present technology.
Fig. 24 is a flowchart representation of another video processing method in accordance with the present technology.
FIG. 25 is a flowchart representation of yet another video processing method according to the present technology.
Detailed Description
The section headings are used in this document to facilitate understanding and do not limit the application of the techniques and embodiments disclosed in each section to that section only. Furthermore, the use of the H.266 term in some of the descriptions is intended only to facilitate understanding and is not intended to limit the scope of the disclosed technology. As such, the techniques described herein also apply to other video codec protocols and designs.
1. Overview
This document relates to video coding and decoding techniques, and in particular to transform skip modes and transform types (e.g., including a particular transform that may be regarded as an identity transform) in video codecs. The techniques may be applied to existing video coding standards (e.g., HEVC) or to the upcoming standard (Versatile Video Coding, VVC). They may also be applicable to future video coding standards or video codecs.
2. Preliminary discussion
Video coding standards have evolved largely through the development of the well-known ITU-T and ISO/IEC standards. ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and the H.265/HEVC standards. Since H.262, video coding standards have been based on a hybrid video coding structure in which temporal prediction plus transform coding are utilized. To explore future video coding technologies beyond HEVC, VCEG and MPEG jointly founded the Joint Video Exploration Team (JVET) in 2015. Since then, many new methods have been adopted by JVET and put into a reference software named Joint Exploration Model (JEM). In April 2018, the Joint Video Experts Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC (Versatile Video Coding) standard, targeting a 50% bitrate reduction compared to HEVC.
2.1. Codec flow for a typical video codec
Fig. 1 shows an example encoder block diagram of VVC, which contains three in-loop filter blocks: Deblocking Filter (DF), Sample Adaptive Offset (SAO), and Adaptive Loop Filter (ALF). Unlike DF, which uses predefined filters, SAO and ALF utilize the original samples of the current picture, with coded side information signaling the offsets and filter coefficients; they reduce the mean square error between the original and reconstructed samples by adding an offset and by applying a Finite Impulse Response (FIR) filter, respectively. ALF is located at the last processing stage of each picture and can be regarded as a tool trying to catch and fix artifacts created by the previous stages.
2.2. Intra-mode coding and decoding with 67 intra-prediction modes
To capture arbitrary edge directions present in natural video, the number of directional intra modes is extended from the 33 used in HEVC to 65. The additional directional modes are depicted in Fig. 2, and the planar and DC modes remain the same. These denser directional intra prediction modes apply to all block sizes and to both luma and chroma intra predictions.
The conventional angular intra prediction direction is defined as from 45 degrees to-135 degrees in the clockwise direction as shown in fig. 2. In VTM2, several conventional angular intra prediction modes are adaptively replaced with wide angular intra prediction modes for non-square blocks. The alternative patterns are signaled using the original method and remapped to the indices of the wide-angle pattern after parsing. The total number of intra prediction modes is unchanged, e.g., 67, and the intra mode codec is unchanged.
In HEVC, every intra-coded block has a square shape and the length of each of its sides is a power of 2. Thus, no division operation is required to generate an intra predictor using the DC mode. In VTM2, blocks may have a rectangular shape, which in the general case would require a division operation per block. To avoid division operations for DC prediction, only the longer side is used to compute the average for non-square blocks.
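As a hedged illustration of the longer-edge rule above, the following Python sketch (the function name and argument layout are hypothetical, not taken from the standard text) computes a DC predictor for a possibly non-square block without a general division:

```python
def dc_predictor(top, left):
    """Sketch of a VVC-style DC predictor for (possibly non-square) blocks.

    `top` and `left` are lists of reconstructed reference samples along the
    top edge (length W) and left edge (length H). For non-square blocks only
    the longer edge is averaged, so the divisor stays a power of two and the
    division can be implemented as a shift.
    """
    w, h = len(top), len(left)
    if w == h:
        total, count = sum(top) + sum(left), w + h
    elif w > h:
        total, count = sum(top), w
    else:
        total, count = sum(left), h
    # count is a power of two, so this rounds-to-nearest with a shift in HW
    return (total + count // 2) // count
```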
2.3. Wide-angle intra prediction for non-square blocks
The conventional angular intra prediction direction is defined as a clockwise direction from 45 degrees to-135 degrees. In VTM2, several conventional angular intra prediction modes are adaptively replaced with wide-angle intra prediction modes for non-square blocks. The alternative mode is signaled using the original method and remapped to the index of the wide-angle mode after parsing. The total number of intra prediction modes for a particular block is unchanged, e.g., 67, and the intra mode codec is unchanged.
To support these prediction directions, a top reference of length 2W +1 and a left reference of length 2H +1 are defined as shown in FIGS. 3A-3B.
The number of modes replaced by wide-angle direction modes depends on the aspect ratio of the block. The replaced intra prediction modes are shown in Table 1.
Table 1: intra prediction mode replaced by wide-angle mode
(Table 1 is reproduced as an image in the original document.)
As shown in Fig. 4, two vertically adjacent predicted samples may use two non-adjacent reference samples in the case of wide-angle intra prediction. Hence, low-pass reference sample filtering and side smoothing are applied to wide-angle prediction to reduce the negative effects of the increased gap Δp_α.
2.4. Position dependent intra prediction combining
In VTM2, the result of intra prediction in planar mode is further modified by a position dependent intra prediction combination (PDPC) method. PDPC is an intra prediction method which invokes a combination of the unfiltered boundary reference samples and HEVC-style intra prediction with filtered boundary reference samples. PDPC is applied to the following intra modes without signaling: planar, DC, horizontal, vertical, the bottom-left angular mode and its eight adjacent angular modes, and the top-right angular mode and its eight adjacent angular modes.
Using a linear combination of the intra prediction mode (DC, planar, angular) and the reference samples, the prediction sample pred(x, y) is computed according to the following equation:

pred(x, y) = (wL × R(−1, y) + wT × R(x, −1) − wTL × R(−1, −1) + (64 − wL − wT + wTL) × pred(x, y) + 32) >> 6

where R(x, −1) and R(−1, y) denote the reference samples located at the top and the left of the current sample (x, y), respectively, and R(−1, −1) denotes the reference sample located at the top-left corner of the current block.
If PDPC is applied to DC intra mode, planar intra mode, horizontal intra mode, and vertical intra mode, no additional boundary filtering is needed, as is the case with HEVC DC mode boundary filtering or horizontal/vertical mode edge filtering.
Figs. 5A to 5D show the definition of the reference samples (R(x, −1), R(−1, y) and R(−1, −1)) for PDPC applied to various prediction modes. The prediction sample pred(x′, y′) is located at (x′, y′) within the prediction block. The coordinate x of the reference sample R(x, −1) is given by x = x′ + y′ + 1, and the coordinate y of the reference sample R(−1, y) is similarly given by y = x′ + y′ + 1. Fig. 5A shows the diagonal top-right mode. Fig. 5B shows the diagonal bottom-left mode. Fig. 5C shows the adjacent diagonal top-right mode. Fig. 5D shows the adjacent diagonal bottom-left mode.
The PDPC weights depend on the prediction mode, as shown in table 2.
Table 2: examples of PDPC weights according to prediction mode
Prediction mode                  wT                          wL                          wTL
Diagonal top-right               16 >> ((y' << 1) >> shift)  16 >> ((x' << 1) >> shift)  0
Diagonal bottom-left             16 >> ((y' << 1) >> shift)  16 >> ((x' << 1) >> shift)  0
Adjacent diagonal top-right      32 >> ((y' << 1) >> shift)  0                           0
Adjacent diagonal bottom-left    0                           32 >> ((x' << 1) >> shift)  0
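The PDPC equation and the Table 2 weights can be sketched in Python as follows; the function and its argument layout are illustrative assumptions, shown for the diagonal top-right mode only (wTL = 0 per Table 2):

```python
def pdpc_diag_top_right(pred, ref_top, ref_left, shift):
    """Hedged sketch of the PDPC combination for the diagonal top-right mode.

    pred     : 2-D list of intra-predicted samples pred(x', y')
    ref_top  : reference samples R(x, -1), indexed directly by x
    ref_left : reference samples R(-1, y), indexed directly by y
    Weights follow Table 2: wT = 16 >> ((y' << 1) >> shift),
    wL = 16 >> ((x' << 1) >> shift), wTL = 0.
    """
    h, w = len(pred), len(pred[0])
    out = [row[:] for row in pred]
    for y in range(h):
        for x in range(w):
            wT = 16 >> ((y << 1) >> shift)
            wL = 16 >> ((x << 1) >> shift)
            r_top = ref_top[x + y + 1]    # R(x,-1) with x = x' + y' + 1
            r_left = ref_left[x + y + 1]  # R(-1,y) with y = x' + y' + 1
            out[y][x] = (wL * r_left + wT * r_top
                         + (64 - wL - wT) * pred[y][x] + 32) >> 6
    return out
```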
2.5. Intra-frame sub-block partitioning (ISP)
In some embodiments, an ISP is proposed that divides a luma intra prediction block vertically or horizontally into 2 sub-partitions or 4 sub-partitions according to the block size dimension, as shown in table 3. Fig. 6 and 7 show examples of two possibilities. All sub-partitions fulfill the condition of having at least 16 samples.
Table 3: the number of sub-partitions depends on the block size.
Block size         Number of sub-partitions
4×4                Not divided
4×8 and 8×4        2
All other cases    4
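A minimal Python sketch of the Table 3 rule (the function name is hypothetical):

```python
def isp_num_subpartitions(width, height):
    """Sketch of Table 3: ISP sub-partition count by block size.

    4x4 blocks are not divided; 4x8 and 8x4 blocks use 2 sub-partitions;
    all other sizes use 4, so every sub-partition has at least 16 samples.
    """
    if (width, height) == (4, 4):
        return 1  # not divided
    if (width, height) in ((4, 8), (8, 4)):
        return 2
    return 4
```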
For each of these sub-partitions, a residual signal is generated by entropy decoding the coefficients sent by the encoder and then inverse quantizing and inverse transforming them. Then, the sub-partition is intra predicted, and finally the corresponding reconstructed samples are obtained by adding the residual signal to the prediction signal. Thus, the reconstructed values of each sub-partition are available to generate the prediction of the next one, repeating the process, and so on. All sub-partitions share the same intra mode.
Based on the intra mode and the split utilized, two different classes of processing orders are used, which are referred to as normal order and reverse order. In normal order, the first sub-partition to be processed is the one containing the top-left sample of the CU, then continuing downwards (horizontal split) or rightwards (vertical split). As a result, the reference samples used to generate the sub-partition prediction signals are located only at the left and above of those lines. On the other hand, the reverse processing order either starts with the sub-partition containing the bottom-left sample of the CU and continues upwards, or starts with the sub-partition containing the top-right sample of the CU and continues leftwards.
2.6. Multiple Transform Set (Multiple Transform Set, MTS)
In addition to DCT-II, which has been employed in HEVC, a Multiple Transform Selection (MTS) scheme is used for residual coding of both inter and intra coded blocks. It uses multiple transforms selected from DCT-8/DST-7. The newly introduced transform matrices are DST-VII and DCT-VIII. Table 4 shows the basis functions of the selected DST/DCT.
Table 4: transformation type and basis function
(Table 4 is reproduced as an image in the original document.)
There are two ways to enable MTS, one is explicit MTS; the other is implicit MTS.
2.6.1. Implicit MTS
Implicit MTS is a new tool in VVC. Whether implicit MTS is enabled depends on the value of the variable implicitMtsEnabled, which is derived as follows:
- If sps_mts_enabled_flag is equal to 1 and one or more of the following conditions is true, implicitMtsEnabled is set equal to 1:
    - IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT (i.e., ISP is enabled)
    - cu_sbt_flag is equal to 1 (i.e., SBT is enabled) and Max(nTbW, nTbH) is less than or equal to 32
    - sps_explicit_mts_intra_enabled_flag is equal to 0 (i.e., explicit MTS is disabled), CuPredMode[0][xTbY][yTbY] is equal to MODE_INTRA, lfnst_idx[x0][y0] is equal to 0, and intra_mip_flag[x0][y0] is equal to 0
- Otherwise, implicitMtsEnabled is set equal to 0.
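The derivation above can be sketched as a Python function (names and argument order are illustrative assumptions, not spec syntax):

```python
def implicit_mts_enabled(sps_mts_enabled, isp_split, cu_sbt_flag,
                         ntbw, ntbh, sps_explicit_mts_intra_enabled,
                         is_intra, lfnst_idx, intra_mip_flag):
    """Sketch of the implicitMtsEnabled derivation.

    Returns 1 when sps_mts_enabled_flag is 1 and any of:
      - ISP is used,
      - SBT is used with max(nTbW, nTbH) <= 32,
      - explicit intra MTS is off for an intra block with LFNST and MIP off.
    """
    if not sps_mts_enabled:
        return 0
    if isp_split:
        return 1
    if cu_sbt_flag and max(ntbw, ntbh) <= 32:
        return 1
    if (not sps_explicit_mts_intra_enabled and is_intra
            and lfnst_idx == 0 and not intra_mip_flag):
        return 1
    return 0
```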
The variable trTypeHor specifying the horizontal transform kernel and the variable trTypeVer specifying the vertical transform kernel are derived as follows:
- trTypeHor and trTypeVer are set equal to 0 (e.g., DCT2) if one or more of the following conditions is true:
    - cIdx is greater than 0 (i.e., for chroma components)
    - IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT and lfnst_idx is not equal to 0
- Otherwise, if implicitMtsEnabled is equal to 1, the following applies:
    - If cu_sbt_flag is equal to 1, trTypeHor and trTypeVer are specified in Table 40 according to cu_sbt_horizontal_flag and cu_sbt_pos_flag.
    - Otherwise (cu_sbt_flag is equal to 0), trTypeHor and trTypeVer are derived as follows:
        trTypeHor = (nTbW >= 4 && nTbW <= 16) ? 1 : 0    (1188)
        trTypeVer = (nTbH >= 4 && nTbH <= 16) ? 1 : 0    (1189)
- Otherwise, trTypeHor and trTypeVer are specified in Table 39 according to mts_idx.
The variables nonZeroW and nonZeroH are derived as follows:
- If ApplyLfnstFlag is equal to 1, nTbW is greater than or equal to 4, and nTbH is greater than or equal to 4, the following applies:
    nonZeroW = (nTbW == 4 || nTbH == 4) ? 4 : 8    (1190)
    nonZeroH = (nTbW == 4 || nTbH == 4) ? 4 : 8    (1191)
- Otherwise, the following applies:
    nonZeroW = Min(nTbW, (trTypeHor > 0) ? 16 : 32)    (1192)
    nonZeroH = Min(nTbH, (trTypeVer > 0) ? 16 : 32)    (1193)
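Assuming implicitMtsEnabled is 1, cu_sbt_flag is 0, and lfnst_idx is 0, the trTypeHor/trTypeVer and nonZeroW/nonZeroH derivations above can be sketched as a single hypothetical helper (simplified from the spec pseudocode):

```python
def implicit_trtype_and_nonzero(ntbw, ntbh, cidx=0, lfnst_applied=False):
    """Sketch of the implicit-MTS transform-type and zero-out derivation.

    Assumes implicitMtsEnabled == 1, cu_sbt_flag == 0 and lfnst_idx == 0.
    trType 0 = DCT-2, 1 = DST-7. Chroma (cIdx > 0) always uses DCT-2.
    Returns (trTypeHor, trTypeVer, nonZeroW, nonZeroH).
    """
    if cidx > 0:
        tr_hor = tr_ver = 0
    else:
        tr_hor = 1 if 4 <= ntbw <= 16 else 0   # eq. (1188)
        tr_ver = 1 if 4 <= ntbh <= 16 else 0   # eq. (1189)
    if lfnst_applied and ntbw >= 4 and ntbh >= 4:
        nonzero_w = 4 if (ntbw == 4 or ntbh == 4) else 8   # eq. (1190)
        nonzero_h = 4 if (ntbw == 4 or ntbh == 4) else 8   # eq. (1191)
    else:
        nonzero_w = min(ntbw, 16 if tr_hor > 0 else 32)    # eq. (1192)
        nonzero_h = min(ntbh, 16 if tr_ver > 0 else 32)    # eq. (1193)
    return tr_hor, tr_ver, nonzero_w, nonzero_h
```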
2.6.2. explicit MTS
To control the MTS scheme, a flag is used to specify whether explicit MTS for intra/inter is present in the bitstream. In addition, two separate enabling flags are specified at the SPS level for intra and inter, respectively, to indicate whether explicit MTS is enabled. When MTS is enabled at the SPS, a CU-level transform index may be signaled to indicate whether MTS is applied. Here, MTS is applied only to luma. The MTS CU-level index (denoted by mts_idx) is signaled when the following conditions are satisfied.
-width and height both less than or equal to 32
-CBF luminance flag equal to one
-non-TS
-non-ISP
-non-SBT
-LFNST is disabled
Presence of non-zero coefficients not at the DC position (upper left position of the block)
No non-zero coefficients outside the upper left 16x16 region
If the first bin of mts_idx is equal to zero, DCT2 is applied in both directions. However, if the first bin of mts_idx is equal to one, two additional bins are signaled to indicate the transform type in the horizontal and vertical directions, respectively. The transform and signaling mapping table is shown in Table 5. For transform matrix precision, 8-bit primary transform cores are used. Therefore, all transform cores used in HEVC are kept the same, including 4-point DCT-2 and DST-7, 8-point DCT-2, 16-point DCT-2 and 32-point DCT-2. Also, the other transform cores, including 64-point DCT-2, 4-point DCT-8, 8-point, 16-point, 32-point DST-7 and DCT-8, use 8-bit primary transform cores.
Table 5: signaling of MTS
(Table 5 is reproduced as an image in the original document.)
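Since Table 5 is only available as an image here, the following sketch records the mts_idx-to-transform mapping as commonly given in the VVC draft (trType 0 = DCT-2, 1 = DST-7, 2 = DCT-8); treat the exact table contents as an assumption rather than a reproduction of Table 5:

```python
# Hedged reconstruction of the mts_idx signaling: mts_idx selects the
# (horizontal, vertical) transform pair. mts_idx == 0 corresponds to the
# first bin being zero, i.e. DCT-2 in both directions.
MTS_IDX_TO_TRTYPE = {
    0: (0, 0),  # DCT-2 / DCT-2
    1: (1, 1),  # DST-7 / DST-7
    2: (2, 1),  # DCT-8 / DST-7
    3: (1, 2),  # DST-7 / DCT-8
    4: (2, 2),  # DCT-8 / DCT-8
}
```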
To reduce the complexity of large sizes of DST-7 and DCT-8, the high frequency transform coefficients are zeroed out for DST-7 blocks and DCT-8 blocks having a size (width or height, or both) equal to 32. Only the coefficients in the 16x16 low frequency region are retained.
As in HEVC, the residual of a block can be coded with the transform skip mode. To avoid redundancy of syntax coding, the transform skip flag is not signaled when the CU-level MTS_CU_flag is not equal to zero. The block size limitation for transform skip is the same as that for MTS in JEM4, which indicates that transform skip is applicable to a CU when both block width and height are equal to or less than 32.
2.6.3. zeroing in MTS
In VTM8, large block size transforms of sizes up to 64 × 64 are enabled, which is mainly applicable to higher resolution video, e.g., 1080p and 4K sequences. For transform blocks of size (width or height, or both width and height) no less than 64, the high frequency transform coefficients of the block to which the DCT2 transform is applied are zeroed out, leaving only the low frequency coefficients, all other coefficients being forced to zero without being signaled. For example, for an mxn transform block, M is the block width and N is the block height, and when M is not less than 64, only the left 32 columns of transform coefficients are retained. Similarly, when N is not less than 64, only the first 32 rows of transform coefficients are retained.
For transform blocks of size (width or height, or both width and height) no less than 32, the high frequency transform coefficients of the block to which the DCT8 or DST7 transform is applied are zeroed out, leaving only the low frequency coefficients, all other coefficients being forced to zero without being signaled. For example, for an M × N transform block, M is the block width and N is the block height, and when M is not less than 32, only the left 16 columns of transform coefficients are retained. Similarly, when N is not less than 32, only the first 16 rows of transform coefficients are retained.
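The two zero-out rules above can be sketched as one Python helper (the name is hypothetical; it returns the retained coefficient region for an M×N transform block, with everything outside forced to zero):

```python
def retained_region(m, n, is_dct2):
    """Sketch of the MTS zero-out rules: returns (cols, rows) of transform
    coefficients kept for an MxN block; all other coefficients are zeroed.

    DCT-2 keeps the left 32 columns / top 32 rows when a dimension reaches
    64; DST-7/DCT-8 keep 16 when a dimension reaches 32.
    """
    limit = 32 if is_dct2 else 16
    threshold = 64 if is_dct2 else 32
    cols = limit if m >= threshold else m
    rows = limit if n >= threshold else n
    return cols, rows
```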
2.7. Low frequency non-separable secondary transform (LFNST)
2.7.1. Non-Separable Secondary Transform (NSST) in JEM
In JEM, a secondary transform is applied between the forward primary transform and quantization (at the encoder) and between inverse quantization and the inverse primary transform (at the decoder side). As shown in Fig. 8, a 4×4 or 8×8 secondary transform is performed depending on the block size. For example, for each 8×8 region, a 4×4 secondary transform is applied to small blocks (i.e., min(width, height) < 8), and an 8×8 secondary transform is applied to larger blocks (i.e., min(width, height) ≥ 8).
The application of the non-separable transform is described below using a 4×4 input block as an example. To apply the non-separable transform, the 4×4 input block X

    X = [ X00 X01 X02 X03 ]
        [ X10 X11 X12 X13 ]
        [ X20 X21 X22 X23 ]
        [ X30 X31 X32 X33 ]

is first represented as a 16×1 vector:

    X = [ X00 X01 X02 X03 X10 X11 X12 X13 X20 X21 X22 X23 X30 X31 X32 X33 ]^T

The non-separable transform is calculated as F = T · X, where F indicates the transform coefficient vector and T is a 16×16 transform matrix. The 16×1 coefficient vector F is subsequently reorganized into a 4×4 block using the scan order of that block (horizontal, vertical, or diagonal); the coefficients with smaller indices are placed with the smaller scan indices in the 4×4 coefficient block. There are a total of 35 transform sets, and each transform set uses 3 non-separable transform matrices (kernels). The mapping from intra prediction mode to transform set is predefined. For each transform set, the selected non-separable secondary transform candidate is further specified by an explicitly signaled secondary transform index. The index is signaled in the bitstream once per intra CU after the transform coefficients.
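A minimal NumPy sketch of the non-separable transform described above (row-major vectorization and row-major reorganization are assumed here for brevity; in the actual design the reorganization follows the block's coefficient scan order):

```python
import numpy as np

def nsst_4x4(block, t):
    """Sketch of the 4x4 non-separable secondary transform F = T * X.

    `block` is a 4x4 array of primary transform coefficients; `t` is a
    16x16 transform matrix T. The block is flattened row by row into a
    16x1 vector X, multiplied by T, and the 16x1 result F is reorganized
    back into a 4x4 block.
    """
    x = np.asarray(block, dtype=np.int64).reshape(16)  # vectorize X
    f = np.asarray(t, dtype=np.int64) @ x              # F = T * X
    return f.reshape(4, 4)
```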
2.7.2. Reduced secondary transform (LFNST)
In some embodiments, LFNST is introduced, and a mapping to 4 transform sets (instead of 35 transform sets) is used. In some implementations, a 16 × 64 matrix (which may be further reduced to 16 × 48) and a 16 × 16 matrix are used for 8 × 8 blocks and 4 × 4 blocks, respectively. For notational convenience, the 16 × 64 (possibly further reduced to 16 × 48) transform is denoted LFNST8 × 8, and the 16 × 16 transform is denoted LFNST4 × 4. Fig. 9 shows an example of LFNST.
LFNST calculation
The main idea of the Reduced Transform (RT) is to map an N-dimensional vector to an R-dimensional vector in a different space, where R/N (R < N) is the reduction factor.
The RT matrix is an R × N matrix as follows:

    T(R×N) = [ t11 t12 t13 ... t1N ]
             [ t21 t22 t23 ... t2N ]
             [ ...                 ]
             [ tR1 tR2 tR3 ... tRN ]

where the R rows of the transform are R bases of the N-dimensional space. The inverse transform matrix of RT is the transpose of its forward transform. The forward RT and the inverse RT are depicted in fig. 10A and 10B.
In this proposal, LFNST8 × 8 with a reduction factor of 4 (1/4 size) is applied. Thus, a 16 × 64 direct matrix is used instead of 64 × 64, which is the conventional 8 × 8 non-separable transform matrix size. In other words, a 64 × 16 inverse LFNST matrix is used at the decoder side to generate the core (primary) transform coefficients in the top-left 8 × 8 region. The forward LFNST8 × 8 uses 16 × 64 (or 8 × 64, for an 8 × 8 block) matrices so that it produces non-zero coefficients only in the top-left 4 × 4 region of the given 8 × 8 region. In other words, if LFNST is applied, the 8 × 8 region has only zero coefficients except for its top-left 4 × 4 region. For LFNST4 × 4, 16 × 16 (or 8 × 16, for a 4 × 4 block) direct matrix multiplication is applied.
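The dimensionality relations of the forward and inverse RT can be sketched as follows (the matrix entries are random placeholders; only the shapes mirror the description):

```python
import numpy as np

def reduced_transform(x, R=16):
    # x: N-dimensional input vector (N = 64 for the 8x8 LFNST input).
    # Forward RT: an R x N matrix maps N dimensions -> R dimensions.
    # Inverse RT: the transpose maps R dimensions back -> N dimensions.
    N = x.shape[0]
    T = np.random.default_rng(0).standard_normal((R, N))  # placeholder R x N matrix
    forward = T @ x
    inverse = T.T @ forward
    return forward, inverse
```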
The inverse LFNST is conditionally applied when the following two conditions are met:
a. the block size is greater than or equal to a given threshold (W >= 4 && H >= 4)
b. Transform skip mode flag equal to zero
If both the width (W) and height (H) of the transform coefficient block are greater than 4, LFNST8 × 8 is applied to the top-left 8 × 8 region of the transform coefficient block. Otherwise, LFNST4 × 4 is applied to the top-left min(8, W) × min(8, H) region of the transform coefficient block.
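The kernel and region selection rule above reduces to a short sketch (the function name is ours):

```python
def lfnst_region(W, H):
    # W, H: width and height of the transform coefficient block.
    if W > 4 and H > 4:
        return "LFNST8x8", (8, 8)              # applied to the top-left 8x8 region
    return "LFNST4x4", (min(8, W), min(8, H))  # top-left min(8,W) x min(8,H) region
```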
If the LFNST index is equal to 0, LFNST is not applied. Otherwise, LFNST is applied and its kernel is selected according to the LFNST index. The LFNST selection method and the coding of the LFNST index are explained later.
Furthermore, LFNST is applied to intra CUs in both intra and inter slices, and for both luma and chroma. If the dual tree is enabled, the LFNST indices for luma and chroma are signaled separately. For inter slices (dual tree disabled), a single LFNST index is signaled and used for both luma and chroma.
At the 13th JVET meeting, Intra Sub-Partitioning (ISP) was adopted as a new intra prediction mode. When the ISP mode is selected, LFNST is disabled and the LFNST index is not signaled, since the performance improvement is limited even if LFNST is applied to every feasible partition. Furthermore, disabling LFNST for the residual of ISP prediction may reduce codec complexity.
LFNST selection
The LFNST matrix is selected from four transform sets, each consisting of two transforms. Which transform set to apply is determined by the intra prediction mode, as follows:
1) If one of the three CCLM modes is indicated, transform set 0 is selected.
2) Otherwise, transform set selection is performed according to table 6.
Table 6: transformation set selection table
Figure BDA0003864537510000131
Figure BDA0003864537510000141
The index used to access the table, denoted IntraPredMode, ranges over [-14, 83], which is the transformed mode index used for wide-angle intra prediction.
LFNST matrix of reduced dimension
As a further simplification, 16 × 48 matrices are applied with the same transform set configuration instead of 16 × 64, each of which takes 48 input data from three 4 × 4 blocks of the top-left 8 × 8 block, excluding the bottom-right 4 × 4 block (fig. 11).
LFNST signaling
A forward LFNST8 × 8 with R = 16 uses a 16 × 64 matrix, so it produces non-zero coefficients only in the top-left 4 × 4 area within a given 8 × 8 area. In other words, if LFNST is applied, the 8 × 8 region produces only zero coefficients outside its top-left 4 × 4 region. Therefore, when any non-zero element is detected in the 8 × 8 block region other than the top-left 4 × 4 (as shown in fig. 12), the LFNST index is not coded, since this implies that LFNST was not applied. In this case, the LFNST index is inferred to be zero.
Zero setting range
In general, any coefficient in the 4x4 sub-block may be non-zero before the inverse LFNST is applied to the 4x4 sub-block. However, in some cases it is constrained that some coefficients in the 4x4 sub-block must be zero before the inverse LFNST is applied to the sub-block.
Let nonZeroSize be a variable. It is required that, when the coefficients are rearranged into a 1-D array before the inverse LFNST, any coefficient with an index not smaller than nonZeroSize must be zero.
When nonZeroSize equals 16, the coefficients in the top left 4 × 4 sub-block have no zeroing constraint.
In some examples, when the current block size is 4 × 4 or 8 × 8, nonZeroSize is set equal to 8. For other block sizes, nonZeroSize is set equal to 16.
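The nonZeroSize constraint can be summarized in a small sketch (the helper names are illustrative):

```python
def non_zero_size(width, height):
    # 8 for 4x4 and 8x8 blocks, 16 otherwise, per the examples above.
    return 8 if (width, height) in ((4, 4), (8, 8)) else 16

def lfnst_input_valid(coeff_1d, width, height):
    # Every coefficient at index >= nonZeroSize in the rearranged
    # 1-D array must be zero before the inverse LFNST is applied.
    n = non_zero_size(width, height)
    return all(c == 0 for c in coeff_1d[n:])
```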
2.8. Affine linear weighted intra prediction (ALWIP, also known as matrix-based intra prediction)
Affine linear weighted intra prediction (ALWIP, also known as Matrix based intra prediction (MIP)) is used in some embodiments.
In some embodiments, two tests are performed. In test 1, ALWIP is designed with a memory limit of 8 kilobytes and a maximum of 4 multiplications per sample. Test 2 is similar to test 1, but further simplifies the design in terms of memory requirements and model architecture:
A single set of matrices and offset vectors for all block shapes.
The number of patterns for all block shapes is reduced to 19.
Reduce memory requirements to 5760 10-bit values, i.e., 7.20 kilobytes.
Linear interpolation of the predicted samples is performed in a single step for each direction, instead of iterative interpolation in the first test.
2.9. Sub-block transformations
For inter-predicted CUs with cu_cbf equal to 1, cu_sbt_flag may be signaled to indicate whether the entire residual block or a sub-part of the residual block is decoded. In the former case, the inter MTS information is further parsed to determine the transform type of the CU. In the latter case, one part of the residual block is coded with an inferred adaptive transform and the other part of the residual block is zeroed out. SBT is not applied to the combined inter-intra mode.
In the sub-block transform, position-dependent transforms are applied to the luma transform blocks in SBT-V and SBT-H (the chroma TB always uses DCT-2). The two positions of SBT-H and SBT-V are associated with different core transforms. More specifically, the horizontal and vertical transforms for each SBT position are specified in fig. 13. For example, the horizontal and vertical transforms for SBT-V position 0 are DCT-8 and DST-7, respectively. When one side of the residual TU is greater than 32, the corresponding transform is set to DCT-2. Thus, the sub-block transform jointly specifies the TU tiling, the cbf, and the horizontal and vertical transforms of the residual block, which may be considered a syntax shortcut for the case where the major residual of the block is on one side of the block.
2.10. Coefficient coding and decoding (SRCC) based on scanning area
SRCC has been adopted by AVS-3. For SRCC, the lower right position (SRx, SRy) as shown in fig. 14A to 14B is signaled, and only the coefficients within the rectangle having the four corners (0,0), (SRx, 0), (0, SRy), (SRx, SRy) are scanned and signaled. All coefficients outside the rectangle are zero.
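The scan-region corner (SRx, SRy) is simply the smallest bottom-right bound over all significant coefficients, e.g. (illustrative NumPy sketch):

```python
import numpy as np

def srcc_scan_region(coeffs):
    # Returns (SRx, SRy): all non-zero coefficients lie inside the
    # rectangle with corners (0,0), (SRx,0), (0,SRy), (SRx,SRy).
    ys, xs = np.nonzero(coeffs)
    if xs.size == 0:
        return 0, 0
    return int(xs.max()), int(ys.max())
```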
2.11. Implicit Selection of transforms (Implicit Selection of Transform, IST)
As disclosed in PCT/CN2019/090261 (incorporated herein by reference), a solution for implicit selection of the transform is given, where the selection of the transform matrix (DCT2 for both horizontal and vertical transforms, or DST7 for both) is determined by the parity of the number of non-zero coefficients in the transform block.
The proposed method applies to the luminance component of intra-coded blocks, excluding those coded with DT, and allows block sizes from 4 × 4 to 32 × 32. The transform type is hidden in the transform coefficients. In particular, the parity of the number of significant coefficients (e.g., non-zero coefficients) in a block is used to represent the transform type. Odd numbers indicate the application of DST-VII and even numbers indicate the application of DCT-II.
In order to remove the 32-point DST-7 introduced by the IST, it is proposed to limit the use of the IST according to the range of the remaining scan area when using SRCC. As shown in fig. 15A to 15B, IST is not allowed when the x-coordinate or y-coordinate of the lower right position in the remaining scanning area is not less than 16. That is, for this case, DCT-II is directly applied.
For another case, when run-length coefficients are used for coding, each non-zero coefficient needs to be checked. When the x-coordinate or y-coordinate of a non-zero coefficient position is not less than 16, the IST is not allowed.
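Taken together, the IST rules of this section can be sketched as follows (an illustrative decision function, not normative):

```python
def ist_transform(nonzero_positions):
    # nonzero_positions: (x, y) coordinates of the significant coefficients.
    # IST is disallowed (DCT-II applied directly) when any non-zero
    # coefficient has an x- or y-coordinate of 16 or more; otherwise the
    # parity of the count of significant coefficients selects the
    # transform: odd -> DST-VII, even -> DCT-II.
    if any(x >= 16 or y >= 16 for x, y in nonzero_positions):
        return "DCT-II"
    return "DST-VII" if len(nonzero_positions) % 2 == 1 else "DCT-II"
```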
The corresponding syntax changes are indicated in bold italic and underlined text in the syntax tables (shown as figures).
3. examples of technical problems solved by the disclosed technical solution
The current design of IST and MTS has the following problems:
1. The TS mode in VVC is signaled at the block level. However, DCT2 and DST7 work well for residual blocks in camera-captured sequences, while for video with screen content, the Transform Skip (TS) mode is used more frequently than DST7. How to determine the use of the TS mode in a more efficient way needs to be investigated.
2. In VVC, the maximum allowable TS block size is set to 32 × 32. How to support large blocks of TS requires further investigation.
4. Example techniques and embodiments
The items listed below should be considered as examples to explain the general concept. These items should not be interpreted in a narrow manner. Further, these items may be combined in any manner.
min (x, y) gives the smaller of x and y.
Implicit determination of transform skip mode/specific transform
It is proposed to determine whether to apply a specific horizontal and/or vertical transform (IT), e.g., a transform skip mode, to the current first block based on the decoded coefficients of one or more representative blocks. This method is called "implicit determination of IT". When both the horizontal transform and the vertical transform are IT, the Transform Skip (TS) mode is used for the current first block.
The "block" may be a Transform Unit (TU)/Prediction Unit (PU)/Codec Unit (CU)/Transform Block (TB)/Prediction Block (PB)/Codec Block (CB). The TU/PU/CU may include one or more color components, such as only the luma component in the case of dual-tree partitioning when the currently coded color component is luma; the two chroma components in the case of dual-tree partitioning when the currently coded color component is chroma; or three color components in the single-tree case.
1. The decoding coefficients may be associated with one or more representative blocks in the same color component or different color components of the current first block.
a. In one example, the representative block is a first block, and the decoding coefficients associated with the first block are used to determine IT usage on the first block.
b. In one example, the determination to use IT for the first block may depend on decoding coefficients of a plurality of blocks, the plurality of blocks including at least one block different from the first block.
i. In one example, the plurality of blocks may include a first block.
in one example, the plurality of blocks may include one or more blocks adjacent to the first block.
in one example, the plurality of blocks may include one block or a plurality of blocks having the same block dimension as the first block.
in one example, the plurality of blocks may include the last N decoded blocks that precede the first block in decoding order that satisfy a particular condition (such as having the same prediction mode as the current block, e.g., all intra-coded or IBC-coded, or having the same dimension as the current block). N is an integer greater than 1.
v. in one example, the plurality of blocks may include one or more blocks different in color component from the first block.
1) In one example, the first block may be in the luma component. The plurality of blocks may include blocks in the chroma component (e.g., a second block in the Cb/B component, and a third block in the Cr/R component).
a) In one example, the three blocks are in the same codec unit.
b) Further alternatively, the implicit MTS is applied only to luma blocks and not to chroma blocks.
2) In one example, the first block in the first color component and the blocks of the plurality of blocks that are not in the first color component may be located at corresponding or collocated positions in the picture.
2. The decoded coefficients used to determine the use of IT are referred to as representative coefficients.
a. In one example, the representative coefficients include only coefficients not equal to zero (denoted as significant coefficients).
b. In one example, the representative coefficients may be modified prior to use to determine IT.
i. For example, the representative coefficients may be clipped prior to use in deriving the transform.
For example, the representative coefficients may be scaled before being used to derive the transform.
For example, the representative coefficients may be added with an offset before being used to derive the transform.
For example, the representative coefficients may be filtered before being used to derive the transform.
v. for example, the coefficients or representative coefficients may be mapped to other values (e.g., by a look-up table or dequantization) before being used to derive the transform.
c. In one example, the representative coefficients are all the significant coefficients in the representative block.
d. Alternatively, the representative coefficients are part of the significant coefficients in the representative block.
i. In one example, the representative coefficients are those of the odd decoded significant coefficients.
1) Alternatively, the representative coefficients are those of even decoded significant coefficients.
in one example, the representative coefficients are those decoded significant coefficients that are greater than or not less than a threshold.
1) Alternatively, the representative coefficients are those decoded significant coefficients whose magnitudes are greater than or not less than a threshold value.
in one example, the representative coefficients are those decoded significant coefficients that are less than or not greater than a threshold.
1) Alternatively, the representative coefficients are those decoded significant coefficients whose magnitudes are less than or not greater than a threshold.
in one example, the representative coefficient is the first K (K > = 1) decoded significant coefficients in decoding order.
v. in one example, the representative coefficient is the last K (K > = 1) decoded significant coefficients in decoding order.
In one example, the representative coefficient may be a coefficient at a predefined position in the block.
1) In one example, the representative coefficient may include only one coefficient located at (xPos, yPos) coordinates with respect to the representative block. For example xPos = yPos =0.
2) In one example, the representative coefficient may include only one coefficient located at (xPos, yPos) coordinates with respect to the representative block, where xPos and/or yPos satisfy the following conditions:
a) In one example, xPos is not greater than threshold Tx (e.g., 31) and/or yPos is not greater than threshold Ty (e.g., 31).
b) In one example, xPos is not less than the threshold Tx (e.g., 32) and/or yPos is not less than the threshold Ty (e.g., 32).
3) For example, the location may depend on the dimensions of the block.
In one example, the representative coefficients may be those at predefined locations in the coefficient scan order.
e. Alternatively, representative coefficients may also include those zero coefficients.
f. Alternatively, the representative coefficients may be coefficients derived from decoded coefficients, such as by clipping to a range, by quantization.
g. In one example, the representative coefficients may be the coefficients before the last significant coefficient (which may include the last significant coefficient).
3. The determination to use IT for the first block may depend on the decoded luminance coefficient of the first block.
a. Further alternatively, the determined IT usage is applied only to the luminance component of the first block, while DCT2 is always applied to the chrominance component of the first block.
b. Further alternatively, the determined IT usage is applied to all color components of the first block. That is, the same transformation matrix is applied to all color components of the first block.
4. The determination of the use of IT may depend on a function of the representative coefficients, such as a function that takes the representative coefficients as input and outputs a value V.
a. In one example, V is derived as the number of representative coefficients.
i. Alternatively, V is derived as the sum of the representative coefficients.
1) Alternatively, V is derived as the sum of the levels of the representative coefficients (or their absolute values).
2) Alternatively, V may be derived as the level (or absolute value thereof) of one representative coefficient (such as the last).
3) Alternatively, V may be derived as the number of representative coefficients of even order.
4) Alternatively, V may be derived as the number of representative coefficients of odd order.
5) Further alternatively, the sum may be clipped to derive V.
Optionally, V is derived as an output of a function, wherein the function defines a residual energy distribution.
1) In one example, the function returns the ratio of the sum of the absolute values of some of the representative coefficients to the absolute value of all of the representative coefficients.
2) In one example, the function returns the ratio of the sum of the squares of the absolute values of some of the representative coefficients to the sum of the squares of the absolute values of all of the representative coefficients.
3) In one example, the function returns whether the energy of the first K representative coefficients multiplied by the scaling factor is greater than the energy of the first M (M > K) representative coefficients or all representative coefficients.
4) In one example, the function returns whether the energy representing the coefficient in a first sub-region of the representation block multiplied by the scaling factor is greater than the energy representing the coefficient in a second sub-region that contains and is larger than the first sub-region.
a) Optionally, the function returns whether the energy representing the coefficient in a first sub-region of the representation block multiplied by the scaling factor is greater than the energy representing the coefficient in a second sub-region that does not overlap the first sub-region.
b) In one example, the first sub-region is the upper left mxn sub-region (i.e., M = N =1, DC only).
i. Optionally, the first sub-region is a second sub-region that does not include the top left mxk sub-region (i.e., does not include DC).
c) In one example, the first sub-region is the upper left 4x4 sub-region.
5) In the above example, the energy is defined as the sum of absolute values or the sum of squares of values.
Optionally, V is derived as to whether at least one representative coefficient is located outside a sub-region of the representative block.
1) In one example, a sub-region is defined as the upper left sub-region of the representative block, e.g., the upper left quarter of the representative block.
b. In one example, the determination of the use of IT may depend on the parity of V.
i. For example, if V is even, then IT is used; and if V is odd, IT is not used.
1) Alternatively, if V is odd, then IT is used; and if V is even, IT is not used.
in one example, if V is less than the threshold T1, IT is used; and if V is greater than the threshold T2, IT is not used.
1) Alternatively, if V is greater than threshold T1, then IT is used; if V is less than threshold T2, then IT is not used.
For example, the threshold may depend on codec information such as block dimensions, prediction mode.
For example, the threshold may depend on QP.
c. In one example, the determination of the use of IT may depend on a combination of V and other coding information (e.g., prediction mode, slice type/picture type, block dimensions).
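One concrete instance of bullet 4 (V derived as the number of significant representative coefficients, with the decision made by parity; the even-means-IT mapping is only one of the listed alternatives):

```python
def decide_it_by_parity(representative_coeffs, it_when_even=True):
    # V = number of non-zero representative coefficients.
    V = sum(1 for c in representative_coeffs if c != 0)
    is_even = (V % 2 == 0)
    # True -> apply IT (e.g., transform skip); False -> do not apply IT.
    return is_even if it_when_even else not is_even
```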
5. The determination of the use of IT may further depend on the coding information of the current block.
a. In one example, the determination may also depend on mode information (e.g., inter, intra, or IBC).
b. In one example, the transform determination may depend on the scan area being the smallest rectangle covering all significant coefficients (e.g., as depicted in fig. 14).
i. In one example, if the size (e.g., width multiplied by height) of the scan area associated with the current block is greater than a given threshold, a default transform (such as DCT-2) may be utilized, including a horizontal transform and a vertical transform. Otherwise, rules such as those defined in bullet 3 (e.g., IT when V is even and DCT-2 when V is odd) may be utilized.
in one example, if the width of the scan area associated with the current block is greater than (or less than) a given maximum width (e.g., 16), then a default horizontal transform (such as DCT-2) may be utilized. Otherwise, rules such as those defined in bullet 3 may be used.
in one example, if the height of the scan area associated with the current block is greater than (or less than) a given maximum height (e.g., 16), a default vertical transform (such as DCT-2) may be utilized. Otherwise, rules such as those defined in bullet 3 may be used.
in one example, the given size is L × K, where L and K are integers, such as 16.
v. in one example, the default transform matrix may be DCT-2 or DST-7.
6. One or more of the methods disclosed in bullets 1-5 can only be applied to a particular block.
a. For example, one or more of the methods disclosed in bullets 1-5 can only be applied to IBC-coded blocks and/or intra-coded blocks other than DT.
b. For example, one or more of the methods disclosed in bullet 1 through bullet 5 can only be applied to blocks with certain constraints on the coefficients.
i. A rectangle having four corners (0, 0), (CRx, 0), (0, CRy), (CRx, CRy) is defined as a constrained rectangle, for example in the SRCC method. In one example, one or more of the methods disclosed in bullet 1 through bullet 5 may be applied only if all coefficients outside the constrained rectangle are zero. For example, CRx = CRy = 16.
1) For example, CRx = SRx and CRy = SRy, where (SRx, SRy) is defined in SRCC as described in section 2.14.
2) Furthermore, the above method is optionally applied only when the block width or block height is greater than K.
a) In one example, K is equal to 16.
b) In one example, the above method is only applied when the block width is greater than K1 and K1 is equal to CRx; or when the block height is greater than K2 and K2 is equal to CRy.
ii. One or more of the methods may be applied only when the last non-zero coefficient (in forward scan order) satisfies certain conditions, e.g., when its horizontal/vertical coordinate is not greater than a threshold (e.g., 16/32).
7. When IT is determined not to be used, a default transform such as DCT-2 or DST-7 may be used instead.
a. Alternatively, when IT is determined not to be used, IT may be selected from a plurality of default transforms such as DCT-2 or DST-7.
8. To determine the transform to apply to a block coded with prediction mode a from a set of transforms, a representative coefficient (e.g., bullet 2) in one or more representative blocks (e.g., bullet 1) is used, and the set of transforms may depend on prediction mode a and/or one or more syntax elements and/or other coded information (e.g., use of DT, block dimensions).
a. In one example, the set of transforms is { DCT2}.
b. In one example, the set of transforms is { IT }.
c. In one example, the set of transforms includes { DCT2, IT }.
d. In one example, the set of transforms includes { DCT2, DST7}.
e. In one example, DCT2 is used when the number of representative coefficients is even. Otherwise (when the number of representative coefficients is odd), DST7 or IT is used, which DST7 or IT may be determined by the prediction mode a and/or one or more syntax elements and/or other codec information.
i. In one example, if prediction mode a is IBC, IT is always used when the number of representative coefficients is odd.
in one example, if prediction mode a is intra and DT is applied, DCT2 is always used when the number of representative coefficients is odd.
in one example, if prediction mode a is intra and DT is not applied, and one or more syntax elements indicate that use of implicit determination of IT or use of implicit determination of transform skip mode is enabled, IT is always used when the number of representative coefficients is odd.
9. Whether and/or how to apply the above disclosed methods may be signaled at the video region level (such as sequence level/picture level/slice level/sub-picture level).
a. In one example, the notification (e.g., flag) may be signaled in a sequence header/picture header/SPS/VPS/DCI/DPS/PPS/APS/slice header/slice group header.
i. Further, optionally, one or more syntax elements (e.g., one or more flags) may be signaled to specify the method of implicit determination of whether IT is enabled or the use of implicit determination of a transform skip mode.
1) In one example, a first flag may be signaled to control the use of a method of implicit determination of IT for IBC codec blocks in the video area level.
a) Further, alternatively, the notification flag may be signaled on condition that it is checked whether IBC is enabled.
b) Alternatively, whether the method of implicit determination of IT is enabled for IBC-coded blocks may be controlled by the same flag used to control the use of the Implicit Selection of Transform (IST) mode for intra-coded blocks, excluding those with the Derivation Tree (DT) mode.
2) In one example, the second flag may be signaled to control the use of a method of implicit determination of IT for intra codec blocks in the video area level (e.g. blocks with a Derivation Tree (DT) mode may be excluded).
3) In one example, a third flag may be signaled to control the use of a method of implicit determination of IT for inter-coded blocks in the video area level.
4) In one example, a flag may be signaled to control the use of a method of implicit determination of IT for intra coded blocks (e.g., blocks with DT mode may be excluded) and inter coded blocks in the video area level.
5) In one example, a flag may be signaled to control the use of a method of implicit determination of IT for IBC codec blocks and intra codec blocks in the video area level (e.g., blocks with DT mode may be excluded).
Further optionally, when a method of implicit determination of IT is enabled for the video area (e.g., flag true), the following may further apply:
1) In one example, for a block of IBC codec, if IT is used for the block, the TS mode is applied; otherwise, DCT2 is used.
2) In one example, for intra-coded blocks (e.g., blocks with DT mode may be excluded), if IT is used for blocks, TS mode is applied; otherwise, DCT2 is used.
Further alternatively, when the method of implicit determination of IT is disabled for the video area (e.g., flag false), the following may further apply:
1) In one example, for IBC-coded blocks, DCT-2 is used.
2) In one example, for intra-coded blocks (e.g., excluding blocks with DT mode), DCT-2 or DST-7 may be determined on-the-fly, such as by IST.
b. Multi-level control of enabling/applying IT methods may be signaled in multiple video unit levels (e.g., sequence level, picture level, slice level).
i. In one example, the first video unit level is defined as a sequence level.
1) Optionally, in addition, a first syntax element (e.g., a flag) is signaled in the sequence header/SPS to indicate the use of IT.
a) In one example, a first syntax element equal to 0 indicates that an Implicitly Selected Transform Skip (ISTS) method cannot be used in the sequence. Otherwise (first syntax element equal to 1) indicates that an Implicitly Selected Transform Skip (ISTS) method can be used in the sequence.
b) Alternatively, the first syntax element may be conditionally signaled, such as according to "Implicit Selection of Transform (IST) is enabled" or "ist_enable_flag is equal to 1".
c) Alternatively, and in addition, when the first syntax element is not present, a default value is inferred. For example, for a first video unit level, IT is inferred as disabled.
in one example, the second video unit level is defined as picture level/slice level.
1) Optionally, further, a second syntax element (e.g., a flag) is signaled in the picture header (e.g., intra picture header and/or inter picture header)/slice header to indicate the use of IT.
a) Alternatively, the second syntax element may be conditionally signaled, such as according to "Implicit Selection of Transform (IST) is enabled" or "ist_enable_flag is equal to 1" and/or "the IT method is enabled at the first video unit level (e.g., sequence)".
b) Alternatively, and in addition, when the second syntax element is not present, then a default value is inferred. For example, for the second video unit level, IT is inferred as disabled.
c. Syntax elements indicating the allowed set of transforms are signaled at the video unit level (e.g., picture) and may be conditionally signaled depending on whether Implicit Selection of Transforms (IST) is enabled.
i. In one example, syntax elements (e.g., flags or indices) are signaled in a picture header (e.g., intra picture header and/or inter picture header)/slice header.
1) Optionally, in addition, N different allowed transform sets are supported, the selection of which depends on the syntax element.
a) In one example, N is set to 2.
b) In one example, the selection of a transform set may depend on block information, such as the codec mode of the CU, or whether the CU is coded with the Derivation Tree (DT) mode.
i. For example, DCT2 is always used for CUs with DT mode.
DCT2, for example, is always used for chroma blocks.
c) In one example, the two sets are { DCT2, DST7} and { DCT2, IT }.
i. Alternatively, the two sets are for inter-coded blocks.
Alternatively, the two sets are for non-DT codec inter-coded blocks.
d) In one example, the two sets are { DCT2} and { DCT2, IT }.
i. Alternatively, the two sets are for Intra Block Copy (IBC) coded blocks.
Alternatively, the two sets are for non-DT codec Intra Block Copy (IBC) codec blocks.
2) Alternatively, if not present, a default value is inferred. For example, the allowed set of transforms is inferred as only allowing one transform type (e.g., DCT 2).
in one example, the syntax element (e.g., flag or index) is used to control the selection of transforms from the allowed transform set for use by blocks with a particular codec mode.
1) In one example, the syntax element controls the selection of transforms from the allowed transform set for use by intra-coded blocks, excluding those blocks to which DT is applied and those coded with the pulse-code modulation (PCM) mode.
2) Alternatively, the allowed transform set may be independent of the syntax element for blocks with other coding modes.
a) In one example, for an inter-coded block, the allowed set of transforms is { DCT2}.
b) In one example, for an IBC-coded block, the allowed set of transforms is {DCT2} or {DCT2, IT}, which may depend on, for example, whether IST is enabled for the current picture or whether IBC is enabled.
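The set-selection logic sketched in this bullet can be illustrated as follows. This is a hypothetical, non-normative sketch: the function name, the flag wiring, and the exact conditions are assumptions for illustration, not taken from any specification text.

```python
# Illustrative sketch of the allowed-transform-set selection (non-normative;
# names and exact condition wiring are assumptions).
def allowed_transform_set(ist_enabled, set_flag, mode, uses_dt, is_chroma):
    """Return the allowed transform set for a block.

    set_flag -- the picture/slice-level syntax element selecting one of
                N = 2 candidate sets when IST is enabled.
    mode     -- 'intra', 'inter', or 'ibc'.
    uses_dt  -- True if the CU is coded with the Derived Tree (DT) mode.
    """
    if not ist_enabled or uses_dt or is_chroma:
        return {'DCT2'}                       # DCT2 always used for DT/chroma
    if mode == 'ibc':
        # Two candidate sets for IBC-coded blocks: {DCT2} and {DCT2, IT}.
        return {'DCT2', 'IT'} if set_flag else {'DCT2'}
    if mode == 'intra':
        # Two candidate sets for (non-DT) intra-coded blocks.
        return {'DCT2', 'IT'} if set_flag else {'DCT2', 'DST7'}
    return {'DCT2'}                           # default for other modes
```

When the syntax element is absent, the caller would fall back to the inferred default set (only one transform type allowed, e.g., {DCT2}).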
10. At the video region level, such as the sequence level/picture level/slice group level/slice level/sub-picture level, an indication is signaled of whether to apply zeroing-out to transform blocks (including those with particular transforms).
a. In one example, the indication (e.g., flag) may be signaled in a sequence header/picture header/SPS/VPS/DCI/DPS/PPS/APS/slice header/slice group header.
b. In one example, when the indication specifies that zeroing-out is enabled, only IT transforms are allowed.
c. In one example, when the indication specifies that zeroing-out is disabled, only non-IT transforms are allowed.
d. Further, optionally, the binarization/context modeling/allowed range/bottom-right position of the last significant coefficient in SRCC (e.g., the maximum X/Y coordinate relative to the top-left position of the block) may depend on the indication.
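Sub-bullets b and c above can be summarized in one line. The non-IT set shown here is purely illustrative; the text does not enumerate which non-IT transforms are available.

```python
def transforms_for_zeroing_flag(zeroing_enabled):
    # Bullet 10b/10c sketch: when the signaled indication enables zeroing-out,
    # only the IT (identity) transform is allowed; when it disables zeroing-out,
    # only non-IT transforms are allowed (example set shown, not normative).
    return {'IT'} if zeroing_enabled else {'DCT2', 'DST7', 'DCT8'}
```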
11. A first rule (e.g., in bullets 1-7 above) may be used to determine the usage of IT for a first block, and a second rule may be used to determine a transform type excluding IT.
a. In one example, the first rule may be defined as a residual energy distribution.
b. In one example, the second rule may be defined as the parity of the representative coefficient.
Transform skipping
12. Zeroing-out is applied to IT (e.g., TS) coded blocks, where non-zero coefficients are confined to a particular sub-region of the block.
a. In one example, the zeroing-out range of an IT (e.g., TS) coded block is set to the top-left K × L sub-region of the block, where K is set to min(T1, W) and L is set to min(T2, H), W and H are the block width and height, respectively, and T1/T2 are two thresholds.
i. In one example, T1 and/or T2 may be set to 32 or 16.
ii. Further, optionally, the last non-zero coefficient should be located within the K × L sub-region.
iii. Further, optionally, the bottom-right position (SRx, SRy) in the SRCC process should be located within the K × L sub-region.
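Bullet 12a can be sketched as follows, assuming the sub-region is anchored at the top-left corner of the block (an assumption of this sketch; coordinates are (x, y) with the origin at the top-left sample).

```python
def zeroing_region(width, height, t1=32, t2=32):
    # Bullet 12a: K = min(T1, W), L = min(T2, H); T1/T2 may be 32 or 16.
    return min(t1, width), min(t2, height)

def coefficients_valid(coeffs, width, height, t1=32, t2=32):
    # Every non-zero coefficient (hence also the last one and the SRCC
    # bottom-right position (SRx, SRy)) must lie inside the K x L sub-region.
    # coeffs maps (x, y) positions to coefficient values.
    k, l = zeroing_region(width, height, t1, t2)
    return all(x < k and y < l for (x, y), c in coeffs.items() if c != 0)
```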
13. Multiple zeroing-out types for IT (e.g., TS) coded blocks are defined, where each type corresponds to a sub-region of the block in which non-zero coefficients may only be present.
a. In one example, non-zero coefficients are present only in the top-left K0 × L0 sub-region of the block.
b. In one example, non-zero coefficients are present only in the top-right K1 × L1 sub-region of the block.
i. Further, alternatively, an indication of the bottom-left position of the sub-region with non-zero coefficients may be signaled.
c. In one example, non-zero coefficients are present only in the bottom-left K2 × L2 sub-region of the block.
i. Further, alternatively, an indication of the top-right position of the sub-region with non-zero coefficients may be signaled.
d. In one example, non-zero coefficients are present only in the bottom-right K3 × L3 sub-region of the block.
i. Further, alternatively, an indication of the top-left position of the sub-region with non-zero coefficients may be signaled.
e. Further, optionally, an indication of the zeroing-out type of IT may be explicitly signaled or implicitly derived.
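The four sub-region types in bullet 13 (also depicted in Fig. 16A to 16D) can be sketched as coordinate ranges. This is an illustrative mapping only; the type names and the (x, y) convention (origin at the top-left sample) are assumptions of the sketch.

```python
def subregion_bounds(zero_type, w, h, k, l):
    """Return the (x-range, y-range) that may contain non-zero coefficients
    for a W x H block under one of the four zeroing-out types of bullet 13."""
    if zero_type == 'top-left':       # K0 x L0 sub-region
        return range(0, k), range(0, l)
    if zero_type == 'top-right':      # K1 x L1 sub-region
        return range(w - k, w), range(0, l)
    if zero_type == 'bottom-left':    # K2 x L2 sub-region
        return range(0, k), range(h - l, h)
    if zero_type == 'bottom-right':   # K3 x L3 sub-region
        return range(w - k, w), range(h - l, h)
    raise ValueError(zero_type)
```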
14. When at least one significant coefficient is outside the zeroed-out region defined by IT (e.g., TS), e.g., outside the top-left K0 × L0 sub-region of the block, IT (e.g., TS) is not used in the block.
a. Further, alternatively, a default transform is used for this case.
15. IT (e.g., TS) is used in a block when there is at least one significant coefficient outside the zeroed-out region defined by another transform matrix (e.g., DST7/DCT2/DCT8), e.g., outside the top-left K0 × L0 sub-region of the block.
a. Further, alternatively, the TS mode is inferred to be used for this case.
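The inference rules of bullets 14 and 15 can be sketched together. The region predicates and return labels are illustrative assumptions; the text leaves the exact fallback ("default transform") unspecified.

```python
def infer_transform(coeffs, it_region, other_region):
    """Sketch of bullets 14-15 (hypothetical, non-normative).

    it_region / other_region -- predicates returning True if a coefficient
    position lies inside the region allowed by IT (e.g., TS), respectively
    by another transform matrix (e.g., DST7/DCT2/DCT8).
    """
    nonzero = [p for p, c in coeffs.items() if c != 0]
    if any(not it_region(p) for p in nonzero):
        return 'default'       # bullet 14: IT not used; a default transform applies
    if any(not other_region(p) for p in nonzero):
        return 'TS'            # bullet 15: TS mode is inferred to be used
    return 'undetermined'      # both regions satisfied; other rules decide
```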
Fig. 16A to 16D show various types of zeroing-out of TS coded blocks. Fig. 16A shows the top-left K0 × L0 sub-region. Fig. 16B shows the top-right K1 × L1 sub-region. Fig. 16C shows the bottom-left K2 × L2 sub-region. Fig. 16D shows the bottom-right K3 × L3 sub-region.
General
16. The decision on the transform matrix may be made at the CU/CB level or the TU level.
a. In one example, the decision is made at the CU level, where all TUs share the same transform matrix.
i. Further, alternatively, when one CU is divided into multiple TUs, the coefficients in one TU (e.g., the first or last TU), in a subset of TUs, or in all TUs may be used to determine the transform matrix.
b. Whether the CU-level or the TU-level solution is used may depend on the block size of a block and/or the VPDU size and/or the maximum CTU size and/or the coding information.
i. In one example, the CU level determination method may be applied when the block size is larger than the VPDU size.
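The level-selection rule in bullet 16b-i can be sketched in one function; the VPDU size shown is an illustrative default, not taken from the text.

```python
def decision_level(block_w, block_h, vpdu_size=64):
    # Bullet 16b-i: apply the CU-level determination when the block size is
    # larger than the VPDU size; otherwise decide per TU (sketch only).
    if block_w > vpdu_size or block_h > vpdu_size:
        return 'CU'
    return 'TU'
```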
17. Whether and/or how to apply the methods disclosed above may depend on coding information, which may include:
a. the block dimension.
i. In one example, the above method may be applied for blocks whose width and/or height are not greater than a threshold (e.g., 32).
in one example, the above method may be applied for blocks whose width and/or height are not less than a threshold (e.g., 4).
in one example, the above method may be applied for blocks whose width and/or height are less than a threshold (e.g., 64).
b. QP
c. Picture or slice type (such as I-frame or P/B-frame, I-slice or P/B-slice)
i. In one example, the proposed method may be enabled for I frames, but disabled for P/B frames.
d. Partitioning structure (single tree or dual tree)
i. In one example, the above method may be applied to stripes/pictures/tiles/slices to which single-tree partitioning is applied.
e. Coding mode (such as inter mode/intra mode/IBC mode, etc.).
i. In one example, the above method may be applied to intra-coded blocks.
ii. In one example, the above method may be applied to intra-coded blocks, excluding those to which DT is applied and those coded in PCM mode.
iii. In one example, the above method may be applied to intra-coded blocks, excluding those to which DT is applied and those coded in PCM mode, and to IBC-coded blocks.
f. Coding methods (such as intra sub-partitioning (ISP), the Derived Tree (DT) method, etc.).
i. In one example, the above method may be disabled for intra-coded blocks to which DT is applied.
ii. In one example, the above method may be disabled for intra-coded blocks to which ISP is applied.
g. Color component
i. In one example, the above method may be applied to luma blocks but not to chroma blocks.
h. Intra prediction modes (such as DC, vertical, horizontal, etc.).
i. Motion information (such as MV and reference index).
j. Standard profile/level/tier
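The applicability conditions enumerated in bullet 17 can be combined into a single predicate. The sketch below picks one possible combination of the sub-bullets; the thresholds (32/4) follow the examples in 17a, while which conditions are active together is an assumption of this illustration.

```python
def method_applicable(w, h, slice_type, tree, mode, uses_dt, uses_pcm,
                      is_luma, max_wh=32, min_wh=4):
    """One possible combination of bullet 17's conditions (non-normative)."""
    if w > max_wh or h > max_wh or w < min_wh or h < min_wh:
        return False                 # 17a: block dimensions within thresholds
    if slice_type != 'I':
        return False                 # 17c: enabled for I frames/slices only
    if tree != 'single':
        return False                 # 17d: single-tree partitioning only
    if mode not in ('intra', 'ibc'):
        return False                 # 17e: coding mode restriction
    if uses_dt or uses_pcm:
        return False                 # 17e/17f: exclude DT and PCM blocks
    return is_luma                   # 17g: luma blocks only
```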
5. Example embodiments
The following are some example embodiments of some of the inventive aspects summarized above in section 4, which may be applied to the VVC specification.
5.1. Example #1
This section presents an example scheme for implicit selection of transform skip mode (ISTS). Basically, this scheme follows the design principles of Implicit Selection of Transform (IST), which has been adopted in AVS3. A high-level flag is signaled in the picture header to indicate whether ISTS is enabled. If ISTS is enabled, the allowed transform set is set to {DCT-II, TS}, and the determination of the TS mode is based on the parity of the number of non-zero coefficients in the block. Simulation results reportedly show that the proposed ISTS achieves 15.86% and 12.79% bit-rate reduction for screen content coding under the AI and RA configurations, respectively, compared to HPM 6.0. The complexity increase of the encoder and decoder is negligible.
5.1.1. Introduction
In the current AVS3 design, only DCT-II is allowed for coding the residual block in IBC mode. For intra-coded blocks not coded with DT, IST is applied, which allows a block to select DCT-II or DST-VII depending on the parity of the number of non-zero coefficients. However, DST-VII is much less efficient for screen content coding. Transform Skip (TS) mode is an efficient coding method for screen content. It is worth investigating how to support TS without explicit block-level signaling.
5.1.2. The proposed method
In some embodiments, implicit selection of transform skip mode (ISTS) may be used. A high level flag is signaled in the picture header to indicate whether ISTS is enabled.
When ISTS is enabled, the allowed transform set is set to {DCT-II, TS}, and the determination of the TS mode is based on the parity of the number of non-zero coefficients in the block, following the same design principle as IST: an odd count indicates that TS is applied, and an even count indicates that DCT-II is applied. For intra-coded or IBC-coded CUs, ISTS applies to CUs of sizes from 4 × 4 to 32 × 32, excluding those CUs to which DT is applied or PCM is used.
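The parity rule above can be sketched directly. The function name and argument list are illustrative; the size range (4 × 4 to 32 × 32) and the DT/PCM exclusions follow the text.

```python
def ists_transform(coeffs, w, h, ists_enabled, uses_dt, uses_pcm):
    """Pick the transform under ISTS (sketch of section 5.1.2).

    An odd number of non-zero coefficients selects TS; an even number
    selects DCT-II. ISTS applies only to 4x4..32x32 CUs, excluding CUs
    coded with DT or PCM.
    """
    in_range = 4 <= w <= 32 and 4 <= h <= 32
    if not (ists_enabled and in_range) or uses_dt or uses_pcm:
        return 'DCT-II'
    nnz = sum(1 for c in coeffs if c != 0)
    return 'TS' if nnz % 2 == 1 else 'DCT-II'
```

On the decoder side the same count is available after coefficient parsing, so no extra signaling per block is needed.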
When ISTS is disabled and IST is enabled, the allowed transform set is set to {DCT-II, DST-VII}, which is the same as the current AVS3 design.
5.1.3. Modifications to syntax tables, semantics and decoding process
Most of the added or modified parts are underlined in bold italics, and some deleted parts are marked with [[ ]].
7.1.2.2 sequence header
Table 14: sequence header definition
[syntax table figure not reproduced]
7.1.3.1 Picture header for I pictures
Table 27: intra prediction header definition
[syntax table figure not reproduced]
7.1.3.2 Picture header for PB pictures
Table 28: inter prediction header definition
[syntax table figure not reproduced]
7.1.7 Document modification
[syntax figure not reproduced]
7.2.2.2 sequence header
[syntax figure not reproduced]
7.2.3.1 Intra prediction header
[syntax figure not reproduced]
9.6.3 inverse transform
The process of converting an M1 × M2 transform coefficient matrix CoeffMatrix into a residual sample matrix ResidueMatrix is defined here.
If the intra prediction mode is neither 'Intra_Luma_PCM' nor 'Intra_Chroma_PCM':
[formula figure not reproduced]
If [condition figure not reproduced]:
when the current transform block is a luma intra prediction residual block, the values of M1 and M2 are both less than 64, and the value of IstTuFlag is equal to 1, the residual sample matrix ResidueMatrix is derived according to the method defined in 9.6.3.4;
[formula figure not reproduced]
otherwise, the residual sample matrix ResidueMatrix is derived according to the method defined in 9.6.3.1.
Otherwise (the intra prediction mode is 'Intra_Luma_PCM' or 'Intra_Chroma_PCM'), the residual sample matrix ResidueMatrix is derived according to the method defined in 9.6.3.3.
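The dispatch in clause 9.6.3 can be summarized as follows. This is a best-effort reading of the text: the subclause numbers follow the clause headings above, while the function name and condition wiring are illustrative assumptions.

```python
def inverse_transform_dispatch(intra_mode, is_luma_intra_residual,
                               m1, m2, ist_tu_flag):
    """Return the subclause governing residual derivation (sketch of 9.6.3)."""
    if intra_mode in ('Intra_Luma_PCM', 'Intra_Chroma_PCM'):
        return '9.6.3.3'             # PCM residual path
    if is_luma_intra_residual and m1 < 64 and m2 < 64 and ist_tu_flag == 1:
        return '9.6.3.4'             # implicit inverse transform skip
    return '9.6.3.1'                 # regular inverse transform
```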
9.6.3.4 implicit inverse transform skipping method
[formula figure not reproduced]
Fig. 17 is a block diagram illustrating an example video processing system 1700 in which various techniques disclosed herein may be implemented. Various implementations may include some or all of the components of system 1700. The system 1700 may include an input 1702 for receiving video content. The video content may be received in a raw or uncompressed format (e.g., 8-bit or 10-bit multi-component pixel values), or may be received in a compressed or encoded format. The input 1702 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interfaces include wired interfaces (e.g., Ethernet, Passive Optical Network (PON), etc.) and wireless interfaces (e.g., Wi-Fi or cellular interfaces).
The system 1700 can include a codec component 1704 that can implement various codecs or encoding methods described in this document. The codec component 1704 can reduce the average bit rate of the video from the input 1702 to the output of the codec component 1704 to produce a codec representation of the video. Thus, codec techniques are sometimes referred to as video compression or video transcoding techniques. The output of the codec component 1704 can be stored or transmitted via communication over the connection represented by the component 1706. A stored or transmitted bitstream (or codec) representation of video received at input 1702 can be used by component 1708 to generate pixel values or displayable video that is sent to display interface 1710. The process of generating a user viewable video from a bitstream is sometimes referred to as video decompression. Further, while certain video processing operations are referred to as "codec" operations or tools, it will be understood that codec tools or operations are used at the encoder and that corresponding decoding tools or operations that reverse the codec results will be performed by the decoder.
Examples of a peripheral bus interface or a display interface may include Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), DisplayPort, and so on. Examples of storage interfaces include SATA (Serial Advanced Technology Attachment), PCI, IDE interfaces, and the like. The techniques described in this document may be embodied in various electronic devices such as mobile phones, laptop computers, smartphones, or other devices capable of performing digital data processing and/or video display.
Fig. 21 is a block diagram of the video processing apparatus 2100. The apparatus 2100 may be used to implement one or more of the methods described herein. The apparatus 2100 may be embodied in a smartphone, tablet, computer, internet of things (IoT) receiver, and/or the like. The apparatus 2100 may include one or more processors 2102, one or more memories 2104, and video processing hardware 2106. The processor(s) 2102 may be configured to implement one or more methods described in this document. The one or more memories 2104 may be used to store data and code for implementing the methods and techniques described herein. The video processing hardware 2106 may be used to implement some of the techniques described in this document in hardware circuits.
Fig. 18 is a block diagram illustrating an example video codec system 100 that may utilize the techniques of this disclosure.
As shown in fig. 18, the video codec system 100 may include a source device 110 and a destination device 120. Source device 110 generates encoded video data and may be referred to as a video encoding device. Destination device 120 may decode the encoded video data generated by source device 110 and may be referred to as a video decoding device.
The source device 110 may include a video source 112, a video encoder 114, and an input/output (I/O) interface 116.
The video source 112 may include a source such as a video capture device, an interface that receives video data from a video content provider and/or a computer graphics system for generating video data, or a combination of these sources. The video data may include one or more pictures. The video encoder 114 encodes video data from the video source 112 to generate a bitstream. The bitstream may comprise a sequence of bits forming a codec representation of the video data. The bitstream may include coded pictures and associated data. A coded picture is a coded representation of a picture. The associated data may include sequence parameter sets, picture parameter sets, and other syntax structures. The I/O interface 116 may include a modulator/demodulator (modem) and/or a transmitter. The encoded video data may be sent directly to the destination device 120 over the network 130a via the I/O interface 116. The encoded video data may also be stored on a storage medium/server 130b for access by the destination device 120.
Destination device 120 may include I/O interface 126, video decoder 124, and display device 122.
I/O interface 126 may include a receiver and/or a modem. I/O interface 126 may retrieve encoded video data from source device 110 or storage medium/server 130 b. The video decoder 124 may decode the encoded video data. The display device 122 may display the decoded video data to a user. Display device 122 may be integrated with destination device 120 or may be located external to destination device 120, with destination device 120 configured to interface with an external display device.
The video encoder 114 and the video decoder 124 may operate in accordance with video compression standards such as the High Efficiency Video Coding (HEVC) standard, the Versatile Video Coding (VVC) standard, and other current and/or future standards.
Fig. 19 is a block diagram illustrating an example of a video encoder 200, which video encoder 200 may be the video encoder 114 in the system 100 shown in fig. 18.
Video encoder 200 may be configured to perform any or all of the techniques of this disclosure. In the example of fig. 19, the video encoder 200 includes a number of functional components. The techniques described in this disclosure may be shared among various components of video encoder 200. In some examples, the processor may be configured to perform any or all of the techniques described in this disclosure.
The functional components of the video encoder 200 may include a partitioning unit 201, a prediction unit 202, which may include a mode selection unit 203, a motion estimation unit 204, a motion compensation unit 205, and an intra prediction unit 206, a residual generation unit 207, a transform unit 208, a quantization unit 209, an inverse quantization unit 210, an inverse transform unit 211, a reconstruction unit 212, a buffer 213, and an entropy coding unit 214.
In other examples, video encoder 200 may include more, fewer, or different functional components. In one example, the prediction unit 202 may include an Intra Block Copy (IBC) unit. The IBC unit may perform prediction in IBC mode, where the at least one reference picture is a picture in which the current video block is located.
Furthermore, some components (e.g., the motion estimation unit 204 and the motion compensation unit 205) may be highly integrated, but are represented separately in the example of fig. 19 for purposes of explanation.
The partition unit 201 may partition a picture into one or more video blocks. The video encoder 200 and the video decoder 300 may support various video block sizes.
The mode selection unit 203 may, for example, select one of the coding modes (intra or inter) based on the error result, and supply the resulting intra or inter coded block to the residual generation unit 207 to generate residual block data, and to the reconstruction unit 212 to reconstruct the coded block to be used as a reference picture. In some examples, mode selection unit 203 may select a combination of intra-prediction and inter-prediction (CIIP) modes, where the prediction is based on the inter-prediction signal and the intra-prediction signal. Mode selection unit 203 may also select the resolution (e.g., sub-pixel precision or integer pixel precision) of the motion vector for the block in the case of inter prediction.
To perform inter prediction on the current video block, motion estimation unit 204 may generate motion information for the current video block by comparing one or more reference frames from buffer 213 to the current video block. Motion compensation unit 205 may determine a prediction video block for the current video block based on motion information and decoded samples for pictures other than the picture associated with the current video block from buffer 213.
For example, motion estimation unit 204 and motion compensation unit 205 may perform different operations on the current video block depending on whether the current video block is in an I-slice, a P-slice, or a B-slice.
In some examples, motion estimation unit 204 may perform uni-directional prediction on the current video block, and motion estimation unit 204 may search for a reference video block of the current video block in a list 0 or list 1 reference picture. Motion estimation unit 204 may then generate a reference index that indicates a reference picture in list 0 or list 1 that includes the reference video block, and a motion vector that indicates spatial displacement between the current video block and the reference video block. Motion estimation unit 204 may output the reference index, the prediction direction indicator, and the motion vector as motion information of the current video block. The motion compensation unit 205 may generate a prediction video block of the current block based on a reference video block indicated by motion information of the current video block.
In other examples, motion estimation unit 204 may perform bi-prediction on the current video block, and motion estimation unit 204 may search for a reference video block of the current video block in a reference picture in list 0 and may also search for another reference video block of the current video block in a reference picture in list 1. Then, motion estimation unit 204 may generate reference indices indicating reference pictures in list 0 and list 1 that include reference video blocks, and motion vectors indicating spatial displacements between the reference video blocks and the current video block. Motion estimation unit 204 may output the reference index and the motion vector of the current video block as motion information for the current video block. Motion compensation unit 205 may generate a prediction video block for the current video block based on the reference video block indicated by the motion information for the current video block.
In some examples, motion estimation unit 204 may output the full set of motion information for the decoding process of the decoder.
In some examples, motion estimation unit 204 may not output the full set of motion information for the current video. Instead, motion estimation unit 204 may signal motion information for the current video block with reference to motion information of another video block. For example, motion estimation unit 204 may determine that the motion information of the current video block is sufficiently similar to the motion information of the adjacent video block.
In one example, motion estimation unit 204 may indicate a value in a syntax structure associated with the current video block that indicates to video decoder 300 that the current video block has the same motion information as another video block.
In another example, motion estimation unit 204 may identify another video block and a Motion Vector Difference (MVD) in a syntax structure associated with the current video block. The motion vector difference indicates the difference between the motion vector of the current video block and the motion vector of the indicated video block. Video decoder 300 may use the motion vector and motion vector difference of the indicated video block to determine the motion vector for the current video block.
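The MVD reconstruction described above amounts to a simple vector addition on the decoder side, sketched here (names are illustrative):

```python
def reconstruct_mv(indicated_mv, mvd):
    # The decoder adds the signaled Motion Vector Difference (MVD) to the
    # motion vector of the indicated video block to obtain the motion
    # vector of the current video block.
    return (indicated_mv[0] + mvd[0], indicated_mv[1] + mvd[1])
```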
As discussed above, the video encoder 200 may predictively signal the motion vectors. Two examples of predictive signaling techniques that may be implemented by video encoder 200 include Advanced Motion Vector Prediction (AMVP) and merge mode signaling.
The intra prediction unit 206 may perform intra prediction on the current video block. When intra prediction unit 206 performs intra prediction on the current video block, intra prediction unit 206 may generate prediction data for the current video block based on decoded samples of other video blocks in the same picture. The prediction data for the current video block may include a prediction video block and various syntax elements.
Residual generation unit 207 may generate residual data for the current video block by subtracting (e.g., by a minus sign) the prediction video block of the current video block from the current video block. The residual data for the current video block may include residual video blocks corresponding to different sample components of samples in the current video block.
In other examples, there may be no residual data for the current video block, e.g., in skip mode, residual generation unit 207 may not perform a subtraction operation.
Transform processing unit 208 may generate one or more transform coefficient video blocks for the current video block by applying one or more transforms to a residual video block associated with the current video block.
After transform processing unit 208 generates a transform coefficient video block associated with the current video block, quantization unit 209 may quantize the transform coefficient video block associated with the current video block based on one or more Quantization Parameter (QP) values associated with the current video block.
Inverse quantization unit 210 and inverse transform unit 211 may apply inverse quantization and inverse transform, respectively, to the transform coefficient video blocks to reconstruct residual video blocks from the transform coefficient video blocks. Reconstruction unit 212 may add the reconstructed residual video block to corresponding sample points from one or more prediction video blocks generated by prediction unit 202 to produce a reconstructed video block associated with the current block for storage in buffer 213.
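The data flow through units 207-212 can be illustrated with a toy scalar example. Real codecs operate on 2-D blocks with integer transforms and rate-distortion-tuned quantizers, so this sketch only shows the shape of the pipeline; the identity stand-in for the transform and the uniform quantizer are assumptions.

```python
def encode_block(current, prediction, qstep):
    """Toy sketch of the residual/transform/quantization/reconstruction loop."""
    residual = [c - p for c, p in zip(current, prediction)]          # unit 207
    coeffs = residual                                                # unit 208 (identity stand-in)
    quantized = [round(c / qstep) for c in coeffs]                   # unit 209
    recon_res = [q * qstep for q in quantized]                       # units 210-211
    reconstructed = [p + r for p, r in zip(prediction, recon_res)]   # unit 212
    return quantized, reconstructed
```

Note how the encoder reconstructs exactly what the decoder will see (quantization included), so that subsequent prediction references match on both sides.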
After reconstruction unit 212 reconstructs the video block, a loop filtering operation may be performed to reduce video block artifacts in the video block.
Entropy encoding unit 214 may receive data from other functional components of video encoder 200. When entropy encoding unit 214 receives the data, entropy encoding unit 214 may perform one or more entropy encoding operations to generate entropy encoded data and output a bitstream that includes the entropy encoded data.
Fig. 20 is a block diagram illustrating an example of a video decoder 300, which video decoder 300 may be the video decoder 124 in the system 100 shown in fig. 18.
Video decoder 300 may be configured to perform any or all of the techniques of this disclosure. In the example of fig. 20, the video decoder 300 includes a number of functional components. The techniques described in this disclosure may be shared among various components of the video decoder 300. In some examples, the processor may be configured to perform any or all of the techniques described in this disclosure.
In the example of fig. 20, the video decoder 300 includes an entropy decoding unit 301, a motion compensation unit 302, an intra prediction unit 303, an inverse quantization unit 304, an inverse transformation unit 305, a reconstruction unit 306, and a buffer 307. In some examples, video decoder 300 may perform a decoding pass generally opposite to the encoding pass described with respect to video encoder 200 (fig. 19).
The entropy decoding unit 301 may retrieve the encoded bitstream. The encoded bitstream may include entropy encoded video data (e.g., encoded blocks of video data). Entropy decoding unit 301 may decode entropy encoded video data, and motion compensation unit 302 may determine motion information from the entropy decoded video data, including motion vectors, motion vector precision, reference picture list indices, and other motion information. For example, the motion compensation unit 302 may determine such information by performing AMVP and merge modes.
The motion compensation unit 302 may generate a motion compensation block, possibly based on an interpolation filter, to perform interpolation. An identifier for the interpolation filter used with sub-pixel precision may be included in the syntax element.
The motion compensation unit 302 may use interpolation filters used by the video encoder 200 during video block encoding to calculate the interpolation of sub-integer pixels of the reference block. The motion compensation unit 302 may determine an interpolation filter used by the video encoder 200 according to the received syntax information and generate a prediction block using the interpolation filter.
The motion compensation unit 302 may use some syntax information to determine the size of blocks used to encode the frame(s) and/or slice(s) of the encoded video sequence, partition information describing how to partition each macroblock of a picture of the encoded video sequence, a mode indicating how to encode each partition, one or more reference frames (and reference frame lists) used for each inter-coded block, and other information used to decode the encoded video sequence.
The intra prediction unit 303 may form a prediction block from spatially neighboring blocks using, for example, an intra prediction mode received in the bitstream. The inverse quantization unit 304 inversely quantizes (i.e., dequantizes) the quantized video block coefficients provided in the bitstream and decoded by the entropy decoding unit 301. The inverse transform unit 305 applies the inverse transform.
The reconstruction unit 306 may add the residual block to the corresponding prediction block generated by the motion compensation unit 302 or the intra prediction unit 303 to form a decoded block. If desired, a deblocking filter may also be applied to the decoded blocks to remove blocking artifacts. The decoded video blocks are then stored in the buffer 307, which provides reference blocks for subsequent motion compensation/intra prediction and also produces decoded video for presentation on a display device.
The following provides a list of solutions preferred by some embodiments.
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., item 1).
1. A video processing method (e.g., method 2200 described in fig. 22) includes: determining, for a conversion between a video block of a video and a codec representation of the video, whether to apply a horizontally-specific transform or a vertically-specific transform to the video block based on a rule (2202); and performing the conversion based on the determination (2204), wherein the rule specifies a relationship between the determination and representative coefficients from decoded coefficients of one or more representative blocks of the video.
2. The method of solution 1, wherein the one or more representative blocks belong to a color component to which the video block belongs.
3. The method of solution 1, wherein the one or more representative blocks belong to a color component that is different from a color component of the video block.
4. The method of any of solutions 1-3, wherein the one or more representative blocks correspond to video blocks.
5. The method of any of solutions 1-3, wherein the one or more representative blocks do not include video blocks.
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., bullets 1 and 2).
6. The method of any of solutions 1-5, wherein the representative coefficients comprise decoded coefficients having non-zero values.
7. The method of any of solutions 1-6, wherein the relationship specifies using the representative coefficient based on a modification coefficient determined by modifying the representative coefficient.
8. The method of any of solutions 1-7, wherein the representative coefficients correspond to significant coefficients of the decoded coefficients.
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., item 3).
9. A video processing method, comprising: determining, based on a rule, whether to apply a horizontally-specific transform or a vertically-specific transform to a video block for a conversion between the video block of the video and a codec representation of the video; and performing a conversion based on the determination, wherein the rule specifies a relationship between the determination and a decoded luminance coefficient for the video block.
10. The method of solution 9, wherein performing the conversion comprises applying the horizontally-specific transform or the vertically-specific transform to a luminance component of the video block and applying DCT2 to a chrominance component of the video block.
The following solutions illustrate example embodiments of the techniques discussed in the previous section (e.g., items 1 and 4).
11. A video processing method, comprising: determining, based on a rule, whether to apply a horizontally-specific transform or a vertically-specific transform to a video block for a conversion between the video block of a video and a codec representation of the video; and performing the conversion based on the determination, wherein the rule specifies a relationship between the determination and a value V associated with decoded coefficients or representative coefficients of one or more representative blocks.
12. The method of solution 11, wherein V equals the number of representative coefficients.
13. The method of solution 11, wherein V is equal to the sum of the values of the representative coefficients.
14. The method of solution 11, wherein V is a function of a residual energy distribution of the representative coefficients.
15. The method of any of solutions 11-14, wherein the relationship is defined with respect to the parity of the value V.
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., item 5).
16. The method of any of the above solutions, wherein the rule specifies that the relationship further depends on codec information of the video block.
17. The method of solution 16, wherein the codec information is a codec mode of the video block.
18. The method of solution 16, wherein the codec information comprises a smallest rectangular region covering all significant coefficients of the video block.
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., item 6).
19. The method of any of the above solutions, wherein the determining is performed due to the video block being coded with a certain mode or having a constraint on its coefficients.
20. The method of solution 19, wherein the mode corresponds to an Intra Block Copy (IBC) mode.
21. The method of solution 19, wherein the constraint on the coefficients is that coefficients outside a rectangular region within the current block are zero.
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., item 7).
22. The method of any of solutions 1-21, wherein, in case the determining determines not to use the horizontally-specific transform and the vertically-specific transform, the conversion is performed using a DCT-2 transform or a DST-7 transform.
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., item 9).
23. The method of any of solutions 1-22, wherein one or more syntax fields in the codec representation indicate whether the method is enabled for video blocks.
24. The method of solution 23, wherein the one or more syntax fields are included at a sequence level or picture level or slice level or sub-picture level.
25. The method of any of solutions 23-24, wherein the one or more syntax fields are included in a slice header or a picture header.
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., items 1 and 8).
26. A video processing method, comprising: determining that one or more syntax fields are present in a codec representation of a video, wherein the video comprises one or more video blocks; and determining, based on the one or more syntax fields, whether a horizontally-specific transform or a vertically-specific transform is enabled for a video block of the video.
27. The method of solution 26, wherein, in response to the one or more syntax fields indicating that implicit determination of the transform skip mode is enabled, whether to apply a horizontally-specific transform or a vertically-specific transform to a first video block is determined based on a rule for a conversion between the first video block of the video and the codec representation of the video; and the conversion is performed based on the determination, wherein the rule specifies a relationship between the determination and representative coefficients from decoded coefficients of one or more representative blocks of the video.
28. The method of solution 27, wherein the first video block is coded in intra block copy mode.
29. The method of solution 27, wherein the first video block is coded in intra mode.
30. The method of solution 27, wherein the first video block is coded using an intra mode other than a Derived Tree (DT) mode.
31. The method of solution 27, wherein the determining is based on parity of a number of non-zero coefficients in the first video block.
32. The method of solution 27, wherein the horizontally-specific transform and the vertically-specific transform are applied to the first video block when the parity of the number of non-zero coefficients in the first video block is even.
33. The method of solution 27, wherein the horizontally-specific transform and the vertically-specific transform are not applied to the first video block when the parity of the number of non-zero coefficients in the first video block is even.
34. The method of solution 33, wherein DCT-2 is applied to the first video block.
35. The method of solution 32, further comprising: in response to the one or more syntax fields indicating that implicit determination of the transform skip mode is disabled, not applying the horizontally-specific transform and the vertically-specific transform to the first video block.
36. The method of solution 32, wherein DCT-2 is applied to the first video block.
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., items 9, 10).
37. A video processing method, comprising: making a first determination as to whether use of a particular transform is enabled for conversion between a video block of video and a codec representation of the video; making a second determination as to whether a zeroing operation is enabled during the transition; and performing a conversion based on the first determination and the second determination.
38. The method of solution 37, wherein the one or more syntax fields of the first level in the codec representation indicate the first determination.
39. The method of any of solutions 37-38, wherein the one or more syntax fields of the second level in the codec representation indicate the second determination.
40. The method of any of solutions 38-39, wherein the first level and the second level correspond to a sequence or picture level header field or a sequence or picture level parameter set or an adaptive parameter set.
41. The method of any of solutions 37-40, wherein the conversion uses a specific transform or zeroing operation, but not both.
The following solution illustrates an example embodiment of the techniques discussed in the previous section (e.g., items 12 and 13).
42. A video processing method, comprising: performing a conversion between a video block of video and a codec representation of the video; wherein the video block is represented in a codec representation as a codec block, wherein non-zero coefficients of the codec block are limited within one or more sub-regions; and wherein a specific transformation is applied to generate the codec block.
43. The method of solution 42, wherein the one or more sub-regions comprise a top-left sub-region of the video block having a dimension of K × L, where K and L are integers, K = min(T1, W), L = min(T2, H), W and H are the width and the height of the video block, respectively, and T1 and T2 are thresholds.
44. The method of any of solutions 42-43, wherein the codec representation indicates the one or more sub-regions.
The following solution shows an example embodiment of the techniques discussed in the previous section (items 16 and 17).
45. The method of any of solutions 1-44, wherein the video region comprises a video codec unit.
46. The method of any of solutions 1-45, wherein the video region is a prediction unit or a transform unit.
47. The method of any of solutions 1-46, wherein the video blocks satisfy a particular dimensional condition.
48. The method of any of solutions 1-47, wherein the video block is coded using a predefined quantization parameter range.
49. The method of any of solutions 1-48, wherein the video region comprises a video picture.
50. The method of any of solutions 1 to 49, wherein said converting comprises encoding the video into a codec representation.
51. The method of any of solutions 1 to 49, wherein the converting comprises decoding the codec representation to generate pixel values of the video.
52. A video decoding apparatus comprising a processor configured to implement the method described in one or more of solutions 1 to 51.
53. A video codec device comprising a processor configured to implement the method described in one or more of solutions 1 to 51.
54. A computer program product having stored thereon computer code which, when executed by a processor, causes the processor to implement the method of any of solutions 1 to 51.
55. A method, apparatus or system as described in this document.
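As a non-normative illustration of the parity-based determination in solutions 26-36, a sketch follows. The function name, the string labels for the transforms, and the choice of the solution-32 behavior (even parity enables the specific transforms) are assumptions made only for illustration; solution 33 describes the opposite mapping as an alternative embodiment.

```python
def select_transform(coefficients, implicit_ts_enabled):
    """Sketch of parity-based implicit transform selection (solutions 26-36).

    coefficients: decoded coefficient values of the first video block.
    implicit_ts_enabled: whether the syntax fields enable implicit
    determination of the transform skip mode.
    Returns the (horizontal, vertical) transform pair to apply.
    """
    if not implicit_ts_enabled:
        # Implicit determination disabled: the specific transforms are not
        # applied; DCT-2 is used in both directions (solutions 35-36).
        return ("DCT-2", "DCT-2")
    num_nonzero = sum(1 for c in coefficients if c != 0)
    if num_nonzero % 2 == 0:
        # Even parity: apply the horizontally- and vertically-specific
        # transforms (solution-32 variant).
        return ("specific", "specific")
    # Otherwise fall back to DCT-2 (solution 34).
    return ("DCT-2", "DCT-2")
```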
FIG. 23 is a flowchart representation of a method for video processing in accordance with the present technology. The method 2300 includes, at operation 2310, performing a conversion between a video and a bitstream of the video according to a rule. The rule specifies that use of a particular transform mode for the conversion is indicated at least at a first video unit level and a second video unit level.
In some embodiments, the particular transform mode comprises a transform skip mode. In transform skip mode, the residual of the prediction error between the current video block and the reference video block is represented in the bitstream without applying a transform. In some embodiments, the first video unit level comprises a sequence level. In some embodiments, the first syntax element is included in a sequence header or a sequence parameter set to indicate use of an implicitly selected transform skip mode at a sequence level. In some embodiments, a first syntax element equal to 0 indicates that the implicitly selected transform skip mode is disabled at the sequence level. In some embodiments, a first syntax element equal to 1 indicates that an implicitly selected transform skip mode is enabled at the sequence level.
In some embodiments, the first syntax element is conditionally included in the sequence header or the sequence parameter set based on a first syntax flag indicating whether implicit selection of a transform is enabled at the sequence level. In some embodiments, the first syntax element is included in the sequence header or the sequence parameter set in response to the first syntax flag indicating that implicit selection of the transform is enabled at the sequence level. In some embodiments, a default value for the first syntax element is inferred in response to the first syntax element being omitted in the bitstream. In some embodiments, the default value indicates that the implicitly selected transform skip mode is disabled at the sequence level.
In some embodiments, the second video unit level comprises a picture level or a slice level. In some embodiments, the second syntax element is included in a picture header or a slice header to indicate use of the implicitly selected transform skip mode in a picture level or a slice level, the picture header including at least an intra picture header or an inter picture header. In some embodiments, the second syntax element is conditionally indicated based on the first syntax element or the syntax flag in the first video unit level. In some embodiments, the default value for the second syntax element is inferred in response to omitting the second syntax element in the bitstream. In some embodiments, the default value indicates that the use of a particular transform is disabled at the second video unit level.
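The two-level signaling with conditional presence and inferred defaults described above can be sketched as follows. All names (`implicit_transform_selection_enabled`, `sps_implicit_ts_flag`, `ph_implicit_ts_flag`) and the dictionary-based parser are hypothetical illustrations, not actual bitstream syntax.

```python
def parse_implicit_ts_flags(sps, picture_header):
    """Sketch of sequence-level and picture-level flag parsing for the
    implicitly selected transform skip mode, with inferred defaults."""
    # Sequence level: the first syntax element is only present when implicit
    # transform selection is enabled; otherwise 0 (disabled) is inferred.
    if sps.get("implicit_transform_selection_enabled", 0):
        seq_flag = sps.get("sps_implicit_ts_flag", 0)
    else:
        seq_flag = 0  # inferred default: disabled at the sequence level
    # Picture/slice level: conditionally signaled based on the sequence-level
    # flag; when omitted, the inferred default disables use of the mode.
    if seq_flag:
        pic_flag = picture_header.get("ph_implicit_ts_flag", 0)
    else:
        pic_flag = 0  # inferred default: disabled at the second level
    return seq_flag, pic_flag
```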
FIG. 24 is a flowchart representation of a method for video processing in accordance with the present technology. The method 2400 includes, at operation 2410, performing a conversion between a video block of a video and a bitstream of the video according to a rule. The rule specifies that a syntax element at the video unit level is used to indicate the allowed set of transforms for the transform.
In some embodiments, the video unit level comprises a picture level. In some embodiments, the syntax element is conditionally included based on whether implicit selection of a transform is enabled at the video unit level. In some embodiments, the syntax element comprises a flag or an index and is included in a picture header or a slice header, the picture header including at least an intra picture header or an inter picture header.
In some embodiments, N transform sets are supported for the conversion, and the selection of the allowed transform set from the N transform sets is based on the syntax element. In some embodiments, N is equal to 2. In some embodiments, the selection of the allowed transform set is further based on coding information of the video block, the coding information including at least a coding mode or a partition mode of the video block. In some embodiments, discrete cosine transform type-II (DCT2) is always used in case the video block is coded using the derived tree mode. In some embodiments, DCT2 is always used in case the video block is a chroma block. In some embodiments, the N transform sets include {DCT2, DST7} and {DCT2, IT}, where DCT2 denotes discrete cosine transform type-II, DST7 denotes discrete sine transform type-VII, and IT denotes an implicit transform. In some embodiments, the N transform sets are applicable to intra-coded blocks. In some embodiments, the N transform sets are applicable to intra-coded blocks coded using non-derived tree coding modes. In some embodiments, the N transform sets include {DCT2} and {DCT2, IT}. In some embodiments, the N transform sets are applicable to Intra Block Copy (IBC) coded blocks. In some embodiments, the N transform sets are applicable to Intra Block Copy (IBC) coded blocks coded using non-derived tree coding modes.
In some embodiments, a default value indicating a default allowed transform set is inferred in response to the syntax element being omitted in the bitstream. In some embodiments, the syntax element comprises a flag or an index, and the syntax element is used to indicate the allowed transform set for the conversion in response to the video block being coded using a particular coding mode. In some embodiments, the syntax element is used for the conversion in case the video block is coded using an intra coding mode, or using an intra coding mode other than the derived tree coding mode or the pulse code modulation coding mode. In some embodiments, the allowed transform set for the conversion is independent of the syntax element in case the video block is not coded using an intra coding mode. In some embodiments, in case the video block is an inter-coded block, the allowed transform set includes {DCT2}. In some embodiments, in case the video block is an Intra Block Copy (IBC) coded block, the allowed transform set includes {DCT2, IT}.
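A minimal sketch of the transform-set selection logic above, under the assumption that N = 2 and that the set contents and coding-mode overrides follow the examples given in the text; the function and parameter names are illustrative only.

```python
# Candidate transform sets from the text; "IT" denotes the implicit transform.
TRANSFORM_SETS = [{"DCT2", "DST7"}, {"DCT2", "IT"}]

def allowed_transform_set(set_index, coding_mode, is_chroma, is_dt_mode):
    """Sketch: pick the allowed transform set from N = 2 candidates using a
    picture-level syntax element (set_index), with coding-mode overrides."""
    # DCT2 is always used for chroma blocks and derived-tree coded blocks.
    if is_chroma or is_dt_mode:
        return {"DCT2"}
    if coding_mode == "inter":
        return {"DCT2"}          # inter-coded blocks: only DCT2
    if coding_mode == "ibc":
        return {"DCT2", "IT"}    # Intra Block Copy coded blocks
    # Intra (non-DT) blocks: select among the N sets via the syntax element.
    return TRANSFORM_SETS[set_index]
```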
FIG. 25 is a flowchart representation of a method for video processing in accordance with the present technology. The method 2500 includes, at operation 2510, performing a conversion between a video block of the video and a bitstream of the video according to a rule. The rules specify that the use of a particular transform mode for a transform of a video block is determined based on a function associated with the energy of representative coefficients of one or more representative blocks of the video.
In some embodiments, the particular transform mode comprises a transform skip mode. In transform skip mode, the residuals of the prediction errors between the video block and the reference video block are represented in the bitstream without applying a transform. In some embodiments, the function returns whether the energy of the first K representative coefficients multiplied by the scaling factor is greater than the energy of the first M representative coefficients or all representative coefficients, where M is greater than K. In some embodiments, the function returns whether the energy of the representative coefficient in a first sub-region of the representative block multiplied by the scaling factor is greater than the energy of the representative coefficient in a second sub-region that includes the first sub-region and is larger than the first sub-region. In some embodiments, the function returns whether the energy of the representative coefficient in a first sub-region of the representative block multiplied by the scaling factor is greater than the energy of the representative coefficient in a second sub-region that does not overlap the first sub-region.
In some embodiments, the first sub-region comprises an upper-left M×N region of the video block. In some embodiments, M = N = 1. In some embodiments, the first sub-region is the second sub-region excluding the upper-left M×N region. In some embodiments, the first sub-region is the upper-left 4×4 region. In some embodiments, the energy of the representative coefficients is defined as the sum of absolute values of the representative coefficients or the sum of squares of the representative coefficients.
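The energy-based function of method 2500 can be sketched as follows, for the variant that compares the scaled energy of a first sub-region against a containing second region. The sum-of-squares energy definition and the function names are assumptions consistent with, but not mandated by, the text.

```python
def energy(coeffs):
    # Energy as the sum of squares; the sum of absolute values is the
    # alternative definition mentioned in the text.
    return sum(c * c for c in coeffs)

def first_region_dominates(block, m, n, scale):
    """Return whether the scaled energy of the upper-left m x n sub-region
    exceeds the energy of the whole block (the "first sub-region vs. a
    containing second sub-region" variant of the function)."""
    first = [block[i][j] for i in range(m) for j in range(n)]
    whole = [c for row in block for c in row]
    return scale * energy(first) > energy(whole)
```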
In some embodiments, the applicability of one or more of the above methods is based on coding information of the video block. In some embodiments, a method is applicable to the video block in case the width or height of the video block is less than a threshold. In some embodiments, the threshold is 64. In some embodiments, a method is applicable to the video block in case the video block is intra coded without using the derived tree mode or the pulse code modulation mode. In some embodiments, a method is applicable to the video block in case the video block is Intra Block Copy (IBC) coded.
In some embodiments, the converting comprises encoding the video into a bitstream. In some embodiments, the converting includes decoding the bitstream to generate the video.
In this document, the term "video processing" may refer to video encoding, video decoding, video compression, or video decompression. For example, a video compression algorithm may be applied during the conversion from a pixel representation of the video to a corresponding bitstream, and vice versa. As defined by the syntax, the bitstream of the current video block may, for example, correspond to bits collocated or spread at different locations within the bitstream. For example, a macroblock may be encoded according to transformed and coded error residual values, and also using bits in headers and other fields in the bitstream. Furthermore, during the transition, the decoder may parse the bitstream knowing that some fields may or may not be present, based on the determination as described in the above solution. Similarly, the encoder may determine that certain syntax fields are included or excluded and generate the codec representation accordingly by including or excluding syntax fields from the codec representation.
The disclosed and other solutions, examples, embodiments, modules, and functional operations described in this document may be implemented in: digital electronic circuitry, or computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or combinations of one or more of them. The disclosed embodiments and other embodiments may be implemented as one or more computer program products, e.g., one or more modules of computer program instructions encoded on a computer-readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can also be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the associated program, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer does not require such a device. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, such as internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
Although this patent document contains many specifics, these should not be construed as limitations on the scope of any subject matter or claimed content, but rather as descriptions of features specific to particular embodiments of particular technologies. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features may in some cases be excised from the claimed combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims (63)

1. A video processing method, comprising:
performing a conversion between a video and a bitstream of the video according to a rule, wherein the rule specifies that use of a particular transform mode of the conversion is indicated at least at a first video unit level and a second video unit level.
2. The method of claim 1, wherein the particular transform mode comprises a transform skip mode, wherein, in the transform skip mode, a residual of a prediction error between a current video block and a reference video block is represented in the bitstream without applying a transform.
3. The method of claim 1 or 2, wherein the first video unit level comprises a sequence level.
4. The method of claim 3, wherein a first syntax element is included in a sequence header or a sequence parameter set to indicate use of an implicitly selected transform skip mode in the sequence level.
5. The method of claim 3 or 4, wherein the first syntax element being equal to 0 indicates that the implicitly selected transform skip mode is disabled in the sequence level.
6. The method of claim 3 or 4, wherein the first syntax element being equal to 1 indicates that the implicitly selected transform skip mode is enabled in the sequence level.
7. The method of any of claims 4-6, wherein the first syntax element is conditionally included in the sequence header or the sequence parameter set based on a first syntax flag indicating whether implicit selection of a transform is enabled in the sequence level.
8. The method of claim 7, wherein, in response to the first syntax flag indicating that implicit selection of the transform is enabled in the sequence level, the first syntax element is included in the sequence header or the sequence parameter set.
9. The method of claim 7 or 8, wherein a default value for the first syntax element is inferred in response to omitting the first syntax element in the bitstream.
10. The method of claim 8, wherein the default value indicates that the implicitly selected transform skip mode is disabled in the sequence level.
11. The method of any of claims 1-10, wherein the second video unit level comprises a picture level or a slice level.
12. The method of claim 11, wherein a second syntax element is included in a picture header or a slice header to indicate use of an implicitly selected transform skip mode at the picture level or the slice level, the picture header comprising at least an intra picture header or an inter picture header.
13. The method of claim 12, wherein the second syntax element is conditionally indicated based on the first syntax element or syntax flag for the first video unit level.
14. The method of claim 12, wherein a default value for the second syntax element is inferred in response to omitting the second syntax element in the bitstream.
15. The method of claim 14, wherein the default value indicates that use of the particular transform is disabled at the second video unit level.
16. A video processing method, comprising:
performing a conversion between a video block of a video and a bitstream of the video according to a rule,
wherein the rule specifies a syntax element in a video unit level for indicating a set of allowed transforms for the conversion.
17. The method of claim 16, wherein the video unit level comprises a picture level.
18. The method of claim 16 or 17, wherein the syntax element is conditionally included based on whether implicit selection of a transform is enabled at the video unit level.
19. The method of any of claims 16-18, wherein the syntax element comprises a flag or an index, and wherein the syntax element is included in a picture header or a slice header, the picture header comprising at least an intra picture header or an inter picture header.
20. The method of any of claims 16 to 19, wherein N transform sets are supported for the converting, and the selection of the allowed transform set from the N transform sets is based on the syntax element.
21. The method of claim 20, wherein N is equal to 2.
22. The method of claim 20 or 21, wherein the selection of the allowed set of transforms is further based on coding information of the video block, the coding information comprising at least a coding mode or a partition mode of the video block.
23. The method of claim 22, wherein a discrete cosine transform type-II (DCT2) is always used in case the video block is coded using a derived tree mode.
24. The method of claim 22, wherein a discrete cosine transform type-II (DCT2) is always used in case the video block is a chrominance block.
25. The method of any of claims 20-24, wherein the N transform sets include { DCT2, DST7} and { DCT2, IT }, wherein DCT2 represents a discrete cosine transform type-II, DST7 represents a discrete sine transform type 7, and IT represents an implicit transform.
26. The method of claim 25, wherein the N transform sets are applicable to intra-coded blocks.
27. The method of claim 25, wherein the N transform sets are applicable to intra-coded blocks coded using non-derived tree coding modes.
28. The method of any of claims 20-24, wherein the N transform sets include { DCT2} and { DCT2, IT }, wherein DCT2 represents a discrete cosine transform type-II and IT represents an implicit transform.
29. The method of claim 28, wherein the N transform sets are applicable to intra block copy (IBC) coded blocks.
30. The method of claim 28, wherein the N transform sets are applicable to intra block copy (IBC) coded blocks coded using a non-derived tree coding mode.
31. The method of any of claims 16 to 30, wherein, in response to the syntax element being omitted in the bitstream, a default value indicating a default allowed transform set is inferred.
32. The method of any of claims 16-31, wherein the syntax element comprises a flag or an index, and wherein the syntax element is used to indicate the allowed transform set for the transform in response to the video block being coded using a particular coding mode.
33. The method of claim 32, wherein the syntax element is used for the converting in case the video block is coded using an intra coding mode, or using an intra coding mode that does not include a derived tree coding mode or a pulse code modulation coding mode.
34. The method of claim 32 or 33, wherein the allowed set of transforms for the converting is independent of the syntax element without using an intra-coding mode to code the video block.
35. The method of claim 34, wherein, in the case that the video block is an inter-coded block, the allowed set of transforms comprises { DCT2}.
36. The method of claim 34, wherein, in the case that the video block is an intra block copy (IBC) coded block, the allowed set of transforms comprises { DCT2, IT }.
37. A video processing method, comprising:
performing a conversion between a video block of a video and a bitstream of the video according to a rule,
wherein the rule specifies that use of a particular transform mode for the conversion of the video block is determined based on a function associated with energy of representative coefficients of one or more representative blocks of the video.
38. The method of claim 37, wherein the particular transform mode comprises a transform skip mode, wherein, in the transform skip mode, a transform is not applied to residuals representing a prediction error between the video block and a reference video block in the bitstream.
39. The method of claim 37 or 38, wherein the function returns whether the energy of the first K representative coefficients multiplied by a scaling factor is greater than the energy of the first M representative coefficients or of all representative coefficients, where M is greater than K.
40. The method of claim 37 or 38, wherein the function returns whether the energy of the representative coefficients in a first sub-region of the representative block multiplied by a scaling factor is greater than the energy of the representative coefficients in a second sub-region that includes and is larger than the first sub-region.
41. The method of claim 37 or 38, wherein the function returns whether the energy of the representative coefficients in a first sub-region of the representative block multiplied by a scaling factor is greater than the energy of the representative coefficients in a second sub-region that does not overlap the first sub-region.
42. The method of claim 40 or 41, wherein the first sub-region comprises an upper-left M x N region in the video block.
43. The method of claim 42, wherein M = N =1.
44. The method of claim 40 or 41, wherein the first sub-region is the second sub-region excluding an upper-left M x N region.
45. The method of claim 40 or 41, wherein the first sub-region is an upper left 4x4 region.
46. A method according to any one of claims 37 to 45, wherein the energy of the representative coefficient is defined as the sum of the absolute values of the representative coefficients or the sum of the squares of the representative coefficients.
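The energy test of claims 39 and 46 can be sketched as a simple comparison over a coefficient list. This is an illustrative sketch under assumed names (`coeff_energy`, `use_transform_skip`); the claims do not fix these names or a particular coefficient ordering.

```python
def coeff_energy(coeffs, use_squares=True):
    """Energy per claim 46: sum of squares or sum of absolute values
    of the representative coefficients."""
    if use_squares:
        return sum(c * c for c in coeffs)
    return sum(abs(c) for c in coeffs)

def use_transform_skip(coeffs, k, m, scale):
    """Claim 39: return whether the scaled energy of the first K
    representative coefficients exceeds the energy of the first M
    representative coefficients (M > K)."""
    assert m > k
    return scale * coeff_energy(coeffs[:k]) > coeff_energy(coeffs[:m])
```

With coefficients [10, 1, 1, 1], K=1, M=4 and a scaling factor of 2, the scaled leading energy (200) exceeds the total energy (103), so the test is satisfied; a flatter list such as [1, 5, 5, 5] fails it.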
47. The method of any one of claims 1-45, wherein the applicability of the method is based on coding information of the video block.
48. The method of claim 47, wherein the method applies to the video block if a width or height of the video block is less than a threshold.
49. The method of claim 48, wherein the threshold is 64.
50. The method of claim 47, wherein the method is applicable to the video block in a case where the video block is not intra coded using a derivation tree mode or a pulse coding modulation mode.
51. The method of claim 47, wherein the method is applicable to the video block if the video block is Intra Block Copy (IBC) coded.
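The dimension-based applicability condition of claims 48 and 49 amounts to a threshold check on the block size. A minimal sketch, assuming the name `method_applicable` and the example threshold of 64 from claim 49:

```python
def method_applicable(width, height, threshold=64):
    """Claim 48: the method applies to the video block if its width
    or height is less than a threshold (claim 49: e.g. 64)."""
    return width < threshold or height < threshold
```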
52. The method of any of claims 1-51, wherein the converting comprises encoding the video into the bitstream.
53. The method of any of claims 1-51, wherein the converting comprises decoding the video from the bitstream.
54. A method for storing a bitstream of video, comprising:
generating a bitstream of a video from video blocks of the video according to a rule,
wherein the rule specifies that use of a particular transform mode is indicated at least at a first video unit level and a second video unit level.
55. A method for storing a bitstream of video, comprising:
generating a bitstream of a video from video blocks of the video according to a rule,
wherein the rule specifies a syntax element at a video unit level for indicating a set of allowed transforms for the conversion.
56. A method for storing a bitstream of video, comprising:
generating a bitstream of a video from video blocks of the video according to a rule,
wherein the rule specifies that use of a particular transform mode for the video block is determined based on a function associated with energy of representative coefficients of one or more representative blocks of the video.
57. A video decoding apparatus comprising a processor configured to implement the method of any of claims 1 to 56.
58. A video encoding apparatus comprising a processor configured to implement the method of any of claims 1 to 56.
59. A computer program product having computer code stored thereon, which when executed by a processor causes the processor to implement the method of any of claims 1 to 56.
60. A non-transitory computer-readable recording medium storing a bitstream of a video generated by a method performed by a video processing apparatus, wherein the method comprises:
generating a bitstream of a video from video blocks of the video according to a rule,
wherein the rule specifies that use of a particular transform mode is indicated at least at a first video unit level and a second video unit level.
61. A non-transitory computer-readable recording medium storing a bitstream of a video generated by a method performed by a video processing apparatus, wherein the method comprises:
generating a bitstream of a video from video blocks of the video according to a rule,
wherein the rule specifies a syntax element in a video unit level for indicating a set of allowed transforms for the conversion.
62. A non-transitory computer-readable recording medium storing a bitstream of a video generated by a method performed by a video processing apparatus, wherein the method comprises:
generating a bitstream of a video from video blocks of the video according to a rule,
wherein the rule specifies that use of a particular transform mode for the video block is determined based on a function associated with energy of representative coefficients of one or more representative blocks of the video.
63. A method, apparatus or system as described in this document.
CN202180024661.6A 2020-03-25 2021-03-25 Implicit determination of transform skip mode Pending CN115699737A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN2020081198 2020-03-25
CNPCT/CN2020/081198 2020-03-25
PCT/CN2021/082963 WO2021190594A1 (en) 2020-03-25 2021-03-25 Implicit determination of transform skip mode

Publications (1)

Publication Number Publication Date
CN115699737A true CN115699737A (en) 2023-02-03

Family

ID=77890929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180024661.6A Pending CN115699737A (en) 2020-03-25 2021-03-25 Implicit determination of transform skip mode

Country Status (2)

Country Link
CN (1) CN115699737A (en)
WO (1) WO2021190594A1 (en)


Also Published As

Publication number Publication date
WO2021190594A1 (en) 2021-09-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination