CN113826398B - Interaction between transform skip mode and other codec tools - Google Patents


Info

Publication number
CN113826398B
Authority
CN
China
Prior art keywords: video block, mode, transform, current video block
Legal status: Active
Application number
CN202080036237.9A
Other languages
Chinese (zh)
Other versions
CN113826398A (en)
Inventor
张莉 (Li Zhang)
张凯 (Kai Zhang)
邓智玭 (Zhipin Deng)
刘鸿彬 (Hongbin Liu)
张娜 (Na Zhang)
王悦 (Yue Wang)
Current Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Original Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Application filed by Beijing ByteDance Network Technology Co Ltd and ByteDance Inc
Publication of CN113826398A
Application granted
Publication of CN113826398B

Classifications

    • H04N19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Abstract

Devices, systems, and methods for lossless coding in visual media coding are described. An exemplary method of video processing comprises: performing a conversion between a current video block of a video and a bitstream representation of the video, wherein the current video block is coded in the bitstream representation using a quantized residual block differential pulse code modulation (QR-BDPCM) mode, wherein in the QR-BDPCM mode a difference between a quantized residual of an intra prediction of the current video block and a prediction of the quantized residual is represented in the bitstream representation, wherein the bitstream representation conforms to a format rule that specifies whether side information of the QR-BDPCM mode and/or a syntax element indicating applicability of a transform skip (TS) mode to the current video block is included in the bitstream representation, and wherein the side information comprises at least one of a usage indication of the QR-BDPCM mode or a prediction direction of the QR-BDPCM mode.

Description

Interaction between transform skip mode and other codec tools
Cross Reference to Related Applications
Under the applicable patent law and/or rules pursuant to the Paris Convention, this application timely claims the priority of and benefit from International Patent Application No. PCT/CN2019/086656 filed on May 13, 2019, International Patent Application No. PCT/CN2019/093330 filed on June 27, 2019, and International Patent Application No. PCT/CN2019/107144 filed on September 21, 2019. The entire disclosures of the above applications are incorporated herein by reference as part of the disclosure of this application.
Technical Field
This patent document relates to video encoding and decoding techniques, devices and systems.
Background
Despite advances in video compression, digital video still accounts for the largest share of bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, the bandwidth demand for digital video usage is expected to continue to grow.
Disclosure of Invention
Devices, systems, and methods related to digital video coding, and in particular to lossless coding of visual media, are described. The methods may be applied to existing video codec standards, such as High Efficiency Video Coding (HEVC), and to future video codec standards, such as Versatile Video Coding (VVC), or to codecs.
In one representative aspect, the disclosed technology can be used to provide a video processing method. The method comprises the following steps: performing a conversion between a video comprising a plurality of color components and a bitstream representation of the video, wherein the bitstream representation of the video conforms to a rule specifying that one or more syntax elements are included in the bitstream representation for two color components to indicate whether a transform quantization bypass mode is applicable to video blocks representing the two color components in the bitstream representation, and wherein, when the transform quantization bypass mode is applicable to a video block, the video block is represented in the bitstream representation without using transform and quantization processing or is obtained from the bitstream representation without using inverse transform and inverse quantization processing.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: determining whether a transform quantization bypass mode is applicable to the current video block based on characteristics of a current video block of a video, wherein when the transform quantization bypass mode is applicable to the current video block, the current video block is represented in a bitstream representation without using transform and quantization processes or is obtained from the bitstream representation without using inverse transform and inverse quantization processes; and based on the determination, performing a conversion between the current video block and a bitstream representation of the video.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: determining, based on a current video block of a video satisfying a dimensional constraint, that two or more codec modes are enabled to represent the current video block in a bitstream representation, wherein the dimensional constraint specifies: for the two or more codec modes, disabling the same set of allowed dimensions for the current video block, and wherein for an encoding operation the two or more codec modes represent the current video block in the bitstream representation without using a transform operation on the current video block, or wherein for a decoding operation the current video block is obtained from the bitstream representation using the two or more codec modes without using an inverse transform operation; and performing a conversion between the current video block and the bitstream representation of the video based on one of the two or more codec modes.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: determining, based on a current video block of a video satisfying a dimensional constraint, that two codec modes are enabled to represent the current video block in a bitstream representation, wherein the dimensional constraint specifies: using the same set of allowed dimensions for enabling the two codec modes, and wherein for an encoding operation the two codec modes represent the current video block in the bitstream representation without using a transform operation on the current video block, or wherein for a decoding operation the current video block is obtained from the bitstream representation using the two codec modes without using an inverse transform operation; and performing a conversion between the current video block and the bitstream representation of the video based on one of the two codec modes.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: based on a current video block of a video satisfying a dimensional constraint, determining to enable a coding mode to represent the current video block in a bitstream representation, wherein the coding mode represents the current video block in the bitstream representation without using a transform operation on the current video block during a coding operation, or wherein the current video block is obtained from the bitstream representation without using an inverse transform operation during a decoding operation, and wherein the dimensional constraint specifies: a first maximum transform block size of the current video block using the codec mode to which the transform operation or the inverse transform operation is not applied is different from a second maximum transform block size of the current video block using another codec tool to which the transform operation or the inverse transform operation is applied; and performing a conversion between the current video block and the bitstream representation of the video based on the codec mode.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: performing a conversion between a current video block of a video and a bitstream representation of the video, wherein the current video block is coded in the bitstream representation using a quantized residual block differential pulse code modulation (QR-BDPCM) mode in which a difference between a quantized residual of an intra prediction of the current video block and a prediction of the quantized residual is represented in the bitstream representation, wherein the bitstream representation conforms to a format rule specifying whether side information of the QR-BDPCM mode and/or a syntax element indicating applicability of a Transform Skip (TS) mode to the current video block is included in the bitstream representation, and wherein the side information includes at least one of a usage indication of the QR-BDPCM mode or a prediction direction of the QR-BDPCM mode.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: determining to encode a current video block of a video using a transform quantization bypass mode that does not apply transform and quantization processing to the current video block; and based on the determination, performing a transition between the current video block and a bitstream representation of the video by disabling a Luma Mapping and Chroma Scaling (LMCS) process, wherein the disabling the LMCS process disables performance of a switch of sample points of the current video block between a reshape domain and an original domain if the current video block is from the luma component, or wherein the disabling the LMCS process disables scaling of a chroma residual of the current video block if the current video block is from a chroma component.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: performing a first determination of whether to codec the current video block of video using a first codec mode in which no transform operation is applied to the current video block; performing a second determination of whether to codec one or more video blocks of the video using a second codec mode, wherein the one or more video blocks include a reference sample point for the current video block; performing a third determination, based on the first determination and the second determination, whether a third coding mode related to intra prediction processing is applicable to the current video block; and based on the third determination, performing a conversion between the current video block and a bitstream representation of the video.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: performing a conversion between a current video block of a video and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that specifies whether syntax elements are included in the bitstream representation that indicate whether the current video block is coded using a transform quantization bypass mode, and wherein the current video block is represented in the bitstream representation without using transform and quantization processing when the transform quantization bypass mode is applicable to the current video block.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: determining that a transform quantization bypass mode is applicable to a current video block of a video, wherein in the transform quantization bypass mode, transform and quantization processing is not used on the current video block; disabling a filtering method for sample points of the current video block based on the determination; and performing a conversion between the current video block and a bitstream representation of the video based on the determination and the disabling.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: performing a first determination that a transform quantization bypass mode applies to a current video block of video, wherein in the transform quantization bypass mode, transform and quantization processing is not used on the current video block; performing a second determination in response to the first determination that a transform selection mode in an implicit Multiple Transform Set (MTS) process is not applicable to the current video block; and performing a conversion between the current video block and a bitstream representation of the video based on the first determination and the second determination.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: performing a first determination that a transform quantization bypass mode applies to a current video block of a video, wherein in the transform quantization bypass mode, transform and quantization processing are not used on the current video block, wherein the current video block is associated with a chroma component; performing a second determination in response to the first determination that the sample points of the chroma component are not scaled in a Luma Mapping and Chroma Scaling (LMCS) process; and performing a conversion between the current video block and a bitstream representation of the video based on the first determination and the second determination.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: determining that a transform quantization bypass mode is applicable to a current video block of a video, wherein in the transform quantization bypass mode, transform and quantization processing is not used on the current video block; and based on the determination, performing a conversion between the current video block and a bitstream representation of the video by disabling Luma Mapping and Chroma Scaling (LMCS) processing for a Coding Unit (CU), Coding Tree Unit (CTU), slice group, picture, or sequence containing the current video block, wherein the disabling of the LMCS processing disables switching of sample points of the current video block between a reshaped domain and an original domain if the current video block is from a luma component, or disables scaling of a chroma residual of the current video block if the current video block is from a chroma component.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: performing a first determination of whether a current video block of a video is coded using a mode that applies an identity transform or no transform to the current video block; performing a second determination based on the first determination of whether to apply a coding tool to the current video block; and performing a conversion between the current video block and a bitstream representation of the video based on the first determination and the second determination.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: performing a conversion between a current video block of a video and a bitstream representation of the video, wherein the bitstream representation includes a syntax element indicating whether a transform quantization bypass mode is applicable to represent the current video block, wherein the current video block is represented in the bitstream representation without using transform and quantization processing when the transform quantization bypass mode is applicable to the current video block, wherein the transform quantization bypass mode is applicable to a first video unit level of the current video block, and wherein the bitstream representation does not include signaling of the transform quantization bypass mode at a second video unit level, a video block of the current video block in the second video unit level being smaller than a video block at the first video unit level.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: performing a conversion between a current video block of a video and a bitstream representation of the video, wherein the bitstream representation comprises a syntax element indicating whether a transform quantization bypass mode is applicable to represent the current video block, wherein the current video block is represented in the bitstream representation without using transform and quantization processing when the transform quantization bypass mode is applicable to the current video block, wherein the transform quantization bypass mode is applicable to the first video unit level of the current video block, and wherein the bitstream representation comprises side information used by the transform quantization bypass mode at a second video unit level of the current video block.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: determining, for a current video block of a video, whether a mode in which a lossless codec technique is applied is applicable to the current video block; and based on the determination, performing a conversion between the current video block and a bitstream representation of the video, wherein the bitstream representation includes a syntax element indicating whether a coding tool is applicable at a video unit level for the current video block, wherein the video unit level is larger than a Coding Unit (CU), and wherein the coding tool is not applied to sample points within the video unit level.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: determining, for a current video block of a video, whether one or more syntax elements indicating whether an intra sub-block partitioning (ISP) mode or a sub-block transform (SBT) mode allows non-DCT2 (discrete cosine transform type II) transforms are included in a bitstream representation of the current video block; and based on the determination, performing a conversion between the current video block and the bitstream representation of the video.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: configuring a bitstream representation of the current video block for a current video block comprising a plurality of color components, wherein an indication to skip transform and quantization processing is separately signaled in the bitstream representation for at least two of the plurality of color components; and based on the configuration, performing a conversion between the current video block and the bitstream representation of the current video block.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: making a decision, based on characteristics of the current video block, regarding a mode that skips application of transform and quantization processing on the current video block; and performing a conversion between the current video block and a bitstream representation of the current video block based on the decision.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: making a decision, based on at least one dimension of the current video block, regarding a first mode that skips application of transform and quantization processing on the current video block and a second mode in which no transform is applied to the current block; and performing a conversion between the current video block and a bitstream representation of the current video block based on the decision.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: determining that a current video block is coded using a first mode and a second mode that skip application of transform and quantization processing on the current video block; and performing a conversion between the current video block and a bitstream representation of the current video block based on the determination.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: configuring a bitstream representation of the current video block for the current video block, wherein an indication to skip transform and quantization processing is signaled in the bitstream representation prior to signaling syntax elements related to one or more codec tools related to the plurality of transforms; and performing a conversion between the current video block and a bitstream representation of the current video block based on the configuration.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: determining that a current video block is coded using a mode that skips application of transform and quantization processing on the current video block; and disabling a filtering method based on the determination and as part of performing a transition between the current video block and a bitstream representation of the current video block.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: determining that a current video block is coded using a mode that skips application of transform and quantization processing on the current video block; and disabling in-loop shaping (ILR) processing for (i) a current picture including the current video block or (ii) a portion of the current picture based on the determining and as part of performing a transition between the current video block and a bitstream representation of the current video block.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: configuring a bitstream representation of the current video block for the current video block, wherein an indication to skip transform and quantization processing is selectively signaled in the bitstream representation after signaling one or more indications of quantization parameters; and performing a conversion between the current video block and a bitstream representation of the current video block based on the configuration.
In another representative aspect, the disclosed techniques may be used to provide a video processing method. The method comprises the following steps: configuring a bitstream representation of a current video block for the current video block, wherein an indication to apply a codec tool to the current video block is signaled in the bitstream representation at the level of a video unit larger than a Coding Unit (CU); and based on the configuration, performing a conversion between the current video block and the bitstream representation of the current video block, wherein performing the conversion comprises: although the application of the codec tool is indicated in the bitstream representation, the codec tool is restricted from being applied to at least some sample points of the current video block.
In yet another representative aspect, the above-described methods are embodied in the form of processor-executable code and stored in a computer-readable program medium.
In yet another representative aspect, an apparatus configured or operable to perform the above-described method is disclosed. The apparatus may include a processor programmed to implement the method.
In yet another representative aspect, a video decoder device may implement a method as described herein.
The above aspects, as well as other aspects and features of the disclosed technology, are described in more detail in the accompanying drawings, the description, and the claims.
Drawings
Fig. 1 shows a block diagram of an example encoder.
Fig. 2 shows an example of 67 intra prediction modes.
Fig. 3A-3D illustrate examples of sample points used by a position dependent intra prediction combining (PDPC) method applied to diagonal and neighboring angle intra modes.
Fig. 4 shows an example of four reference rows adjacent to a prediction block.
Fig. 5 shows examples of the division of 4 × 8 and 8 × 4 blocks.
Fig. 6 shows an example of division of all blocks except for 4 × 8, 8 × 4, and 4 × 4.
Fig. 7 shows an example of ALWIP for a 4 × 4 block.
Fig. 8 shows an example of ALWIP for an 8 × 8 block.
Fig. 9 shows an example of ALWIP for an 8 × 4 block.
Fig. 10 shows an example of ALWIP for a 16 × 16 block.
FIG. 11 shows an example of secondary transformations in JEM.
Fig. 12 shows an example of the proposed Reduced Secondary Transform (RST).
FIG. 13 shows examples of sub-block transform modes SBT-V and SBT-H.
Fig. 14 shows a flow chart of a decoding flow using shaping.
Figs. 15A-15G illustrate flowcharts of example methods of video processing.
Fig. 16 is a block diagram of an example of a hardware platform for implementing the visual media decoding or visual media codec techniques described herein.
Fig. 17 is a block diagram illustrating an exemplary video codec system that may utilize techniques of the present disclosure.
Fig. 18 is a block diagram illustrating an example of a video encoder.
Fig. 19 is a block diagram showing an example of a video decoder.
Fig. 20 is a block diagram illustrating an example video processing system in which various techniques disclosed herein may be implemented.
Figs. 21-38 show flowcharts of example methods of video processing.
Detailed Description
Embodiments of the disclosed techniques may be applied to existing video codec standards (e.g., HEVC, H.265) and future standards to improve compression performance. Section headings are used herein to improve readability of the description, and the discussion or embodiments (and/or implementations) are in no way limited to the respective sections.
2. Video codec introduction
As the demand for high-resolution video continues to grow, video codec methods and techniques are ubiquitous in modern technology. Video codecs typically include electronic circuits or software that compress or decompress digital video, and are continually being improved to provide higher coding efficiency. A video codec converts uncompressed video into a compressed format, and vice versa. There are complex relationships between video quality, the amount of data used to represent the video (determined by the bit rate), the complexity of the encoding and decoding algorithms, susceptibility to data loss and errors, ease of editing, random access, and end-to-end delay (latency). The compressed format usually conforms to a standard video compression specification, such as the High Efficiency Video Coding (HEVC) standard (also known as H.265 or MPEG-H Part 2), the Versatile Video Coding (VVC) standard to be finalized, or other current and/or future video codec standards.
Video codec standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video and H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/HEVC standards. Since H.262, video codec standards have been based on a hybrid video coding structure in which temporal prediction plus transform coding is utilized. To explore future video codec technologies beyond HEVC, VCEG and MPEG jointly founded the Joint Video Exploration Team (JVET) in 2015. Since then, JVET has adopted many new methods and put them into a reference software named the Joint Exploration Model (JEM). In April 2018, the Joint Video Experts Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the Versatile Video Coding (VVC) standard, targeting a 50% bitrate reduction compared to HEVC.
2.1 encoding flow for a typical video codec
Fig. 1 shows an example of an encoder block diagram for VVC, which contains three in-loop filter blocks: deblocking filter (DF), sample adaptive offset (SAO), and ALF. Unlike DF, which uses predefined filters, SAO and ALF utilize the original sample points of the current picture to reduce the mean square error between the original sample points and the reconstructed sample points, by adding offsets and by applying a finite impulse response (FIR) filter, respectively, with the offsets and filter coefficients signaled as coded side information. ALF is located at the last processing stage of each picture and can be regarded as a tool that tries to catch and fix artifacts created by the previous stages.
2.2 Intra mode coding and decoding Using 67 Intra prediction modes
To capture the arbitrary edge directions present in natural video, the number of directional intra modes is extended from the 33 used in HEVC to 65. The additional directional modes are depicted as red dashed arrows in Fig. 2, and the planar and DC modes remain the same. These denser directional intra prediction modes apply to all block sizes and to both luma and chroma intra predictions.
As shown in Fig. 2, the conventional angular intra prediction directions are defined from 45 degrees to -135 degrees in the clockwise direction. In VTM2, for non-square blocks, several conventional angular intra prediction modes are adaptively replaced with wide-angle intra prediction modes. The replaced modes are signaled using the original method and remapped to the indices of the wide-angle modes after parsing. The total number of intra prediction modes is unchanged, i.e., 67, and the intra mode coding is unchanged.
In HEVC, each intra-coded block is square, with the length of each side a power of 2. Therefore, no division operation is required to generate the intra predictor in DC mode. In VVC, blocks can be rectangular, which in the general case would require a division operation per block. To avoid division operations for DC prediction, only the longer side is used to compute the average for non-square blocks.
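As a small sketch of this division-free rule (the helper and its rounding convention are illustrative assumptions, not code from any draft), the average can be computed with a shift because the number of summed samples is always a power of two:

```python
# Hedged sketch: DC predictor that avoids division for non-square blocks
# by averaging only over the longer edge, as described above.
def dc_predictor(top, left):
    """top: W reconstructed samples above the block; left: H samples to its left."""
    w, h = len(top), len(left)
    if w == h:                       # square: average both edges
        total, n = sum(top) + sum(left), w + h
    elif w > h:                      # wide block: use the longer (top) edge only
        total, n = sum(top), w
    else:                            # tall block: use the longer (left) edge only
        total, n = sum(left), h
    # n is a power of two, so the division becomes a right shift
    return (total + (n >> 1)) >> (n.bit_length() - 1)
```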
2.2.1 location dependent Intra prediction combination (PDPC)
In VTM2, the intra prediction result in the Planar (Planar) mode is further corrected by a position-dependent intra prediction combining (PDPC) method. PDPC is an intra prediction method that invokes a combination of unfiltered boundary reference sample points and HEVC-style intra prediction using filtered boundary reference sample points. PDPC is applied without signaling to the following intra modes: plane, DC, horizontal, vertical, lower left corner pattern and its eight adjacent angle patterns, and upper right corner pattern and its eight adjacent angle patterns.
The prediction sample point pred(x, y) is predicted using a linear combination of the intra prediction mode (DC, planar, angular) result and reference sample points according to the following equation:

pred(x, y) = ( wL × R(-1,y) + wT × R(x,-1) - wTL × R(-1,-1) + (64 - wL - wT + wTL) × pred(x, y) + 32 ) >> 6

Here, R(x,-1) and R(-1,y) represent the reference sample points located at the top and to the left of the current sample point (x, y), respectively, and R(-1,-1) represents the reference sample point located at the top-left corner of the current block.

If PDPC is applied to the DC, planar, horizontal and vertical intra modes, no additional boundary filters are needed, such as the DC mode boundary filter or the horizontal/vertical mode edge filters needed in the case of HEVC.

Figs. 3A-3D illustrate the definition of the reference sample points (R(x,-1), R(-1,y) and R(-1,-1)) used by PDPC for the various prediction modes. The prediction sample point pred(x', y') is located at (x', y') within the prediction block. The coordinate x of the reference sample point R(x,-1) is given by x = x' + y' + 1, and the coordinate y of the reference sample point R(-1,y) is similarly given by y = x' + y' + 1. The PDPC weights depend on the prediction mode and are shown in Table 1.
Table 1: examples of PDPC weights according to prediction mode
Prediction mode wT wL wTL
Upper right diagonal 16>>((y’<<1)>>shift) 16>>((x’<<1)>>shift) 0
Diagonal from the lower left 16>>((y’<<1)>>shift) 16>>((x’<<1)>>shift) 0
Upper right adjacent diagonal 32>>((y’<<1)>>shift) 0 0
Adjacent angle of left and bottom 0 32>>((x’<<1)>>shift) 0
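To make the equation and the Table 1 weights concrete, here is a hedged Python sketch for the two diagonal modes (where wTL = 0). The derivation of `shift` is not given in this text; the VTM convention (log2(W) + log2(H) - 2) >> 2 is assumed here, and all names are illustrative:

```python
import math

def pdpc_diagonal(pred, ref_top, ref_left):
    """pred: H x W intra-predicted block; ref_top[x] = R(x,-1), ref_left[y] = R(-1,y).
    The reference arrays must hold at least W + H entries."""
    h, w = len(pred), len(pred[0])
    shift = (int(math.log2(w)) + int(math.log2(h)) - 2) >> 2  # assumed, not from this text
    out = [row[:] for row in pred]
    for y in range(h):
        for x in range(w):
            wT = 16 >> ((y << 1) >> shift)   # Table 1, diagonal modes (wTL = 0)
            wL = 16 >> ((x << 1) >> shift)
            idx = x + y + 1                  # x = x' + y' + 1 and y = x' + y' + 1
            out[y][x] = (wL * ref_left[idx] + wT * ref_top[idx]
                         + (64 - wL - wT) * pred[y][x] + 32) >> 6
    return out
```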
2.3 Multi-reference line (MRL)
Multiple Reference Line (MRL) intra prediction uses more reference lines for intra prediction. In fig. 4, an example of 4 reference rows is depicted, where the sample points of segments a and F are not taken from reconstructed neighboring sample points, but filled with the nearest sample points of segments B and E, respectively. HEVC intra prediction uses the nearest reference line (i.e., reference line 0). In MRL, two additional rows (reference row 1 and reference row 3) are used.
The index of the selected reference row (mrl _ idx) is signaled and used to generate the intra predictor. For reference row indices greater than 0, only additional reference row modes are included in the MPM list, and only MPM indices for no remaining modes are signaled. The reference row index is signaled before the intra prediction mode, and in case a non-zero reference row index is signaled, the plane and DC mode are excluded from the intra prediction mode.
MRL is disabled for the first line of blocks inside a CTU to prevent the use of extended reference sample points outside the current CTU line. Also, PDPC is disabled when an additional line is used.
2.4 Intra-frame sub-block partitioning (ISP)
In JVET-M0102, the ISP is proposed, which divides luma intra-predicted blocks vertically or horizontally into 2 or 4 sub-partitions depending on the block size dimensions, as shown in Table 2. Fig. 5 and Fig. 6 show examples of the two possibilities. All sub-partitions fulfill the condition of having at least 16 sample points. For block sizes 4×N or N×4 (with N > 8), if allowed, the 1×N or N×1 sub-partition may exist.
Table 2: number of sub-partitions depending on block size (max transform size is denoted by maxttsize)
Figure GDA0003801427680000121
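A minimal sketch of Table 2 as reconstructed above (the table is rendered as an image in the source, so this mapping is inferred from the surrounding text and figures):

```python
# Hedged sketch: number of ISP sub-partitions per Table 2 as reconstructed.
def isp_num_sub_partitions(w, h):
    if (w, h) == (4, 4):
        return 1                      # 4x4 blocks are not divided
    if (w, h) in ((4, 8), (8, 4)):
        return 2
    return 4                          # all other cases
```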
For each of these sub-partitions, a residual signal is generated by entropy decoding the coefficients sent by the encoder and then inverse quantizing and inverse transforming them. Then, the sub-partition is intra predicted, and finally the corresponding reconstructed sample points are obtained by adding the residual signal to the prediction signal. Therefore, the reconstructed values of each sub-partition will be available to generate the prediction of the next one, and the process is repeated. All sub-partitions share the same intra mode.
Table 3: Specification of trTypeHor and trTypeVer depending on predModeIntra

[table not reproduced in this text]
2.5 affine Linear weighted Intra prediction (ALWIP or matrix-based Intra prediction)
Affine linear weighted intra prediction (ALWIP, also known as matrix-based intra prediction (MIP)) is proposed in JVET-N0217.
2.5.1 generating simplified prediction signals using matrix vector multiplication
The neighboring reference sample points are first downsampled via averaging to generate the simplified reference signal bdry_red. Then, the simplified prediction signal pred_red is computed by calculating a matrix-vector product and adding an offset:

pred_red = A · bdry_red + b

Here, A is a matrix that has W_red · H_red rows and 4 columns if W = H = 4, and 8 columns in all other cases. b is a vector of dimension W_red · H_red.
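A hedged numpy sketch of this step follows. The trained matrices A and offsets b belong to the sets S_0 to S_2 and are not reproduced here, so random placeholders stand in; the function and argument names are illustrative assumptions:

```python
import numpy as np

def alwip_reduced_prediction(top, left, A, b, w_red, h_red):
    """top/left: boundary sample arrays; A: (w_red*h_red, k) matrix; b: offset vector."""
    def avg_downsample(boundary, out_len):
        boundary = np.asarray(boundary, dtype=float)
        step = len(boundary) // out_len          # assumes an exact multiple
        return boundary.reshape(out_len, step).mean(axis=1)
    k = A.shape[1]                               # 4 inputs for 4x4 blocks, 8 otherwise
    bdry_red = np.concatenate([avg_downsample(top, k // 2),
                               avg_downsample(left, k // 2)])
    pred_red = A @ bdry_red + b                  # pred_red = A * bdry_red + b
    return pred_red.reshape(h_red, w_red)

# illustrative call for a 4x4 block: two averages per boundary axis
rng = np.random.default_rng(0)
A, b = rng.standard_normal((16, 4)), rng.standard_normal(16)
print(alwip_reduced_prediction(rng.integers(0, 255, 4), rng.integers(0, 255, 4),
                               A, b, 4, 4).shape)   # (4, 4)
```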
2.5.2 Explanation of the Overall ALWIP Process
The overall process of averaging, matrix vector multiplication and linear interpolation of different shapes is shown in fig. 7 to 10. Note that the remaining shapes are considered to be one of the described cases.
1. Given a 4×4 block, ALWIP takes two averages along each axis of the boundary. The resulting four input sample points enter the matrix-vector multiplication. The matrices are taken from the set S_0. After adding the offset, this yields the 16 final prediction sample points. No linear interpolation is necessary for generating the prediction signal. Thus, a total of (4 · 16)/(4 · 4) = 4 multiplications per sample point are performed.
2. Given an 8×8 block, ALWIP takes four averages along each axis of the boundary. The resulting eight input sample points enter the matrix-vector multiplication. The matrices are taken from the set S_1. This yields 16 sample points at the odd positions of the prediction block. Thus, a total of (8 · 16)/(8 · 8) = 2 multiplications per sample point are performed. After adding the offset, these sample points are interpolated vertically by using the reduced top boundary. Horizontal interpolation follows by using the original left boundary.
3. Given an 8×4 block, ALWIP takes four averages along the horizontal axis of the boundary and the four original boundary values on the left boundary. The resulting eight input sample points enter the matrix-vector multiplication. The matrices are taken from the set S_1. This yields 16 sample points at the odd horizontal positions and at each vertical position of the prediction block. Thus, a total of (8 · 16)/(8 · 4) = 4 multiplications per sample point are performed. After adding the offset, these sample points are interpolated horizontally by using the original left boundary.
4. Given a 16×16 block, ALWIP takes four averages along each axis of the boundary. The resulting eight input sample points enter the matrix-vector multiplication. The matrices are taken from the set S_2. This yields 64 sample points at the odd positions of the prediction block. Thus, a total of (8 · 64)/(16 · 16) = 2 multiplications per sample point are performed. After adding the offset, these sample points are interpolated vertically by using the eight averages of the top boundary. Horizontal interpolation follows by using the original left boundary. The interpolation processes in this case do not add any multiplications, so in total two multiplications per sample point are required to calculate the ALWIP prediction.
For larger shapes the flow is substantially the same and it is easy to check that the number of multiplications per sample point is less than 4.
For a W × 8 block with W >8, only horizontal interpolation is needed since sample points are given at odd horizontal and every vertical position.
Finally, for W×4 blocks with W > 8, let A_k be the matrix that arises by leaving out every row that corresponds to an odd coordinate along the horizontal axis of the downsampled block. Thus, the output size is 32, and again only horizontal interpolation remains to be performed.
Corresponding processing is performed for the transposed case.
2.6 Multiple Transform Set (MTS) in VVC
2.6.1 explicit Multiple Transformation Sets (MTS)
In VTM4, large block-size transforms up to 64×64 are enabled, which is primarily useful for higher-resolution video, e.g., 1080p and 4K sequences. For transform blocks with size (width or height, or both width and height) equal to 64, the high-frequency transform coefficients are zeroed out so that only the lower-frequency coefficients are retained. For example, for an M×N transform block, with M as the block width and N as the block height, when M is equal to 64, only the left 32 columns of transform coefficients are kept. Similarly, when N is equal to 64, only the top 32 rows of transform coefficients are kept. When the transform skip mode is used for a large block, the entire block is used without zeroing out any values.
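The zeroing rule can be sketched as follows (a minimal illustration under the assumptions of this paragraph: only the 64-point case is handled, and names are illustrative):

```python
# Hedged sketch: high-frequency zero-out for 64-point transforms. When a
# dimension is 64, only the 32 low-frequency rows/columns are retained;
# transform-skip blocks are left untouched, as described above.
def zero_out_high_freq(coeffs, transform_skip=False):
    if transform_skip:
        return coeffs                      # entire block kept
    h, w = len(coeffs), len(coeffs[0])
    keep_w = 32 if w == 64 else w          # left 32 columns when width == 64
    keep_h = 32 if h == 64 else h          # top 32 rows when height == 64
    return [[coeffs[y][x] if (x < keep_w and y < keep_h) else 0
             for x in range(w)] for y in range(h)]
```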
In addition to DCT-II, which has been employed in HEVC, a Multiple Transform Selection (MTS) scheme is used for residual coding of both inter and intra coded blocks. It uses multiple selected transforms from DCT8/DST7. The newly introduced transform matrices are DST-VII and DCT-VIII. Table 4 below shows the basis functions of the selected DST/DCT.
Table 4: basis functions of transformation matrices used in VVC
Figure GDA0003801427680000151
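As a check on the table as reconstructed above (the source renders it as an image, so these closed forms are reconstructions rather than quotations), the bases can be generated numerically and verified for orthonormality:

```python
import math
import numpy as np

def dct2(N):
    T = np.array([[math.cos(math.pi * i * (2 * j + 1) / (2 * N))
                   for j in range(N)] for i in range(N)]) * math.sqrt(2.0 / N)
    T[0] *= math.sqrt(0.5)        # omega_0 applies to the i = 0 row only
    return T

def dst7(N):
    return np.array([[math.sqrt(4.0 / (2 * N + 1)) *
                      math.sin(math.pi * (2 * i + 1) * (j + 1) / (2 * N + 1))
                      for j in range(N)] for i in range(N)])

def dct8(N):
    return np.array([[math.sqrt(4.0 / (2 * N + 1)) *
                      math.cos(math.pi * (2 * i + 1) * (2 * j + 1) / (4 * N + 2))
                      for j in range(N)] for i in range(N)])

# each basis should be (numerically) orthonormal
for f in (dct2, dst7, dct8):
    assert np.allclose(f(8) @ f(8).T, np.eye(8), atol=1e-9)
```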
In order to keep the orthogonality of the transform matrices, the transform matrices are quantized more accurately than the transform matrices in HEVC. To keep the intermediate values of the transformed coefficients within the 16-bit range, after the horizontal and the vertical transforms, all the coefficients are kept within 10 bits.
In order to control the MTS scheme, separate enabling flags are specified at the SPS level for intra and inter, respectively. When MTS is enabled at the SPS, a CU-level flag is signaled to indicate whether MTS is applied or not. Here, MTS applies only to luma. The MTS CU-level flag is signaled when the following conditions are satisfied:
- both width and height are less than or equal to 32,
- the CBF flag is equal to 1.
If the MTS CU flag is equal to zero, then DCT2 is applied in both directions. However, if the MTS CU flag is equal to one, then two other flags are additionally signaled to indicate the transform type for the horizontal and vertical directions, respectively. The transform and signaling mapping table is shown in Table 5. As for transform matrix precision, 8-bit primary transform cores are used. Therefore, all the transform cores used in HEVC are kept the same, including 4-point DCT-2 and DST-7, and 8-point, 16-point and 32-point DCT-2. The other transform cores, including 64-point DCT-2, 4-point DCT-8, and 8-point, 16-point, 32-point DST-7 and DCT-8, also use 8-bit primary transform cores.
Table 5: mapping of decoded values of tu _ mts _ idx with corresponding transformation matrices in horizontal and vertical directions.
Figure GDA0003801427680000161
To reduce the complexity of large size DST-7 and DCT-8, the high frequency transform coefficients are zeroed out for DST-7 and DCT-8 blocks with size (width or height, or width and height) equal to 32. Only the coefficients in the 16x16 low frequency region are retained.
In addition to applying different transforms, VVC also supports a mode called Transform Skip (TS), which is similar to the concept of TS in HEVC. TS is considered a special case of MTS.
2.6.1.1 syntax and semantics
The MTS index may be signaled in the bitstream and this design is referred to as explicit MTS. In addition, another method of directly deriving the matrix from the transform block size is also supported, namely implicit MTS.
Explicit MTS supports all coding modes, whereas implicit MTS supports only intra mode.
7.3.2.4 Picture parameter set RBSP syntax
[syntax table not reproduced in this text]
7.3.7.10 transform unit syntax
[syntax table not reproduced in this text]
transform_skip_flag[x0][y0] specifies whether a transform is applied to the luma transform block or not. The array indices x0, y0 specify the location (x0, y0) of the top-left luma sample point of the considered transform block relative to the top-left luma sample point of the picture. transform_skip_flag[x0][y0] equal to 1 specifies that no transform is applied to the luma transform block. transform_skip_flag[x0][y0] equal to 0 specifies that the decision whether a transform is applied to the luma transform block depends on other syntax elements. When transform_skip_flag[x0][y0] is not present, it is inferred to be equal to 0.
tu_mts_idx[x0][y0] specifies which transform kernels are applied to the residual sample points along the horizontal and vertical directions of the associated luma transform block. The array indices x0, y0 specify the location (x0, y0) of the top-left luma sample point of the considered transform block relative to the top-left luma sample point of the picture.
When tu_mts_idx[x0][y0] is not present, it is inferred to be equal to 0.
In the CABAC decoding process, transform_skip_flag is decoded using one context, and tu_mts_idx is binarized using truncated unary. Each bin of tu_mts_idx is context coded; for the first bin, the quadtree depth (i.e., cqtDepth) is used to select one context, and one context is used for the remaining bins.
Table 6: Assignment of ctxInc to syntax elements with context-coded bins

[table not reproduced in this text]
2.6.2 implicit Multiple Transform Set (MTS)
It is noted that ISP, SBT, and MTS enabled but with implicit signaling are all treated as implicit MTS. In the specification, implicitMtsEnabled is used to define whether implicit MTS is enabled.
8.7.4 Transformation process for scaled transform coefficients
8.7.4.1 General
The variable implicitMtsEnabled is derived as follows:
- If sps_mts_enabled_flag is equal to 1 and one of the following conditions is true, implicitMtsEnabled is set equal to 1:
  - IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT;
  - cu_sbt_flag is equal to 1 and Max(nTbW, nTbH) is less than or equal to 32;
  - sps_explicit_mts_intra_enabled_flag and sps_explicit_mts_inter_enabled_flag are both equal to 0 and CuPredMode[xTbY][yTbY] is equal to MODE_INTRA.
- Otherwise, implicitMtsEnabled is set equal to 0.
The variable trTypeHor specifying the horizontal transform kernel and the variable trTypeVer specifying the vertical transform kernel are derived as follows:
- If cIdx is greater than 0, trTypeHor and trTypeVer are set equal to 0.
- Otherwise, if implicitMtsEnabled is equal to 1, the following applies:
  - If IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT, trTypeHor and trTypeVer are specified depending on intraPredMode.
  - Otherwise, if cu_sbt_flag is equal to 1, trTypeHor and trTypeVer are specified in Table 8 depending on cu_sbt_horizontal_flag and cu_sbt_pos_flag.
  - Otherwise (sps_explicit_mts_intra_enabled_flag and sps_explicit_mts_inter_enabled_flag equal to 0), trTypeHor and trTypeVer are derived as follows:
    trTypeHor = ( nTbW >= 4 && nTbW <= 16 && nTbW <= nTbH ) ? 1 : 0 (8-1030)
    trTypeVer = ( nTbH >= 4 && nTbH <= 16 && nTbH <= nTbW ) ? 1 : 0 (8-1031)
- Otherwise, trTypeHor and trTypeVer are specified depending on tu_mts_idx[xTbY][yTbY] in Table 7.
Table 7: Specification of trTypeHor and trTypeVer depending on tu_mts_idx[x][y]

| tu_mts_idx[x0][y0] | 0 | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- | --- |
| trTypeHor | 0 | 1 | 2 | 1 | 2 |
| trTypeVer | 0 | 1 | 1 | 2 | 2 |

Table 8: Specification of trTypeHor and trTypeVer depending on cu_sbt_horizontal_flag and cu_sbt_pos_flag

| cu_sbt_horizontal_flag | cu_sbt_pos_flag | trTypeHor | trTypeVer |
| --- | --- | --- | --- |
| 0 | 0 | 2 | 1 |
| 0 | 1 | 1 | 1 |
| 1 | 0 | 1 | 2 |
| 1 | 1 | 1 | 1 |
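The derivation quoted above, together with Tables 7 and 8, can be sketched as follows (0 = DCT-2, 1 = DST-7, 2 = DCT-8). This is an illustration of the quoted spec text, not decoder source code; the ISP branch depends on a mode-dependent table not reproduced in this text, so it is left as a stub:

```python
TAB7 = {0: (0, 0), 1: (1, 1), 2: (2, 1), 3: (1, 2), 4: (2, 2)}
TAB8 = {(0, 0): (2, 1), (0, 1): (1, 1), (1, 0): (1, 2), (1, 1): (1, 1)}

def derive_tr_types(cIdx, implicit_mts, isp_split, cu_sbt_flag,
                    sbt_horizontal, sbt_pos, nTbW, nTbH, tu_mts_idx):
    if cIdx > 0:
        return 0, 0                       # chroma always uses DCT-2
    if implicit_mts:
        if isp_split:
            raise NotImplementedError("depends on intraPredMode (Table 3)")
        if cu_sbt_flag:
            return TAB8[(sbt_horizontal, sbt_pos)]
        tr_hor = 1 if (4 <= nTbW <= 16 and nTbW <= nTbH) else 0   # (8-1030)
        tr_ver = 1 if (4 <= nTbH <= 16 and nTbH <= nTbW) else 0   # (8-1031)
        return tr_hor, tr_ver
    return TAB7[tu_mts_idx]

print(derive_tr_types(0, True, False, True, 0, 0, 16, 16, 0))  # -> (2, 1) via Table 8
```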
2.7 Reduced Secondary Transform (RST) proposed in JVET-N0193
2.7.1 Non-Separable Secondary Transform (NSST) in JEM
In JEM, a secondary transform is applied between the forward primary transform and quantization (at the encoder) and between de-quantization and the inverse primary transform (at the decoder side). As shown in Fig. 11, a 4×4 (or 8×8) secondary transform is performed depending on the block size. For example, a 4×4 secondary transform is applied for small blocks (i.e., min(width, height) < 8), and an 8×8 secondary transform is applied per 8×8 block for larger blocks (i.e., min(width, height) > 4).
The application of a non-separable transform is described as follows using a 4×4 input block as an example. To apply the non-separable transform, the 4×4 input block X

X = [ X00 X01 X02 X03 ; X10 X11 X12 X13 ; X20 X21 X22 X23 ; X30 X31 X32 X33 ]

is first represented as a vector:

vec(X) = [ X00 X01 X02 X03 X10 X11 X12 X13 X20 X21 X22 X23 X30 X31 X32 X33 ]^T

The non-separable transform is calculated as F = T · vec(X), where F indicates the transform coefficient vector and T is a 16×16 transform matrix. The 16×1 coefficient vector F is subsequently re-organized into a 4×4 block using the scanning order for that block (horizontal, vertical or diagonal); the coefficients with smaller indices are placed at the smaller scanning indices in the 4×4 coefficient block. There are a total of 35 transform sets, and 3 non-separable transform matrices (kernels) are used per transform set. The mapping from the intra prediction mode to the transform set is pre-defined. For each transform set, the selected non-separable secondary transform (NSST) candidate is further specified by an explicitly signaled secondary transform index. The index is signaled in the bitstream once per intra CU after the transform coefficients.
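A minimal numpy sketch of this vectorize-multiply-rescan flow follows. The 16×16 kernel here is a random orthonormal placeholder, not a trained NSST matrix, and the diagonal scan shown is one plausible ordering:

```python
import numpy as np

rng = np.random.default_rng(0)
T, _ = np.linalg.qr(rng.standard_normal((16, 16)))   # placeholder 16x16 kernel

def nsst_4x4(X, T, scan):
    F = T @ X.reshape(16)            # F = T * vec(X), X vectorized row by row
    out = np.zeros((4, 4))
    for idx, (y, x) in enumerate(scan):
        out[y, x] = F[idx]           # smaller coefficient index -> smaller scan position
    return out

# diagonal scan order for the 4x4 coefficient block
diag_scan = sorted(((y, x) for y in range(4) for x in range(4)),
                   key=lambda p: (p[0] + p[1], p[0]))
coeffs = nsst_4x4(rng.standard_normal((4, 4)), T, diag_scan)
```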
2.7.2 Reduced Secondary Transform (RST) in JVET-N0193
The RST (also known as the Low-Frequency Non-Separable Transform (LFNST)) was introduced in JVET-K0099, and a 4-transform-set (instead of 35-transform-set) mapping was introduced in JVET-L0133. In JVET-N0193, 16×64 (further reduced to 16×48) and 16×16 matrices are employed. For notational convenience, the 16×64 (reduced to 16×48) transform is denoted as RST8x8 and the 16×16 one as RST4x4. Fig. 12 shows an example of RST.
2.8 sub-block transforms
For an inter-predicted CU with cu_cbf equal to 1, cu_sbt_flag may be signaled to indicate whether the whole residual block or only a sub-part of the residual block is decoded. In the former case, inter MTS information is further parsed to determine the transform type of the CU. In the latter case, a part of the residual block is coded with an inferred adaptive transform and the other part of the residual block is zeroed out. SBT is not applied to the combined inter-intra mode.
In the sub-block transform, position-dependent transforms are applied on the luma transform blocks in SBT-V and SBT-H (the chroma TB always uses DCT-2). The two positions of SBT-H and SBT-V are associated with different core transforms. More specifically, the horizontal and vertical transforms for each SBT position are specified in Fig. 13. For example, the horizontal and vertical transforms for SBT-V position 0 are DCT-8 and DST-7, respectively. When one side of the residual TU is greater than 32, the corresponding transform is set to DCT-2. Therefore, the sub-block transform jointly specifies the TU tiling, the cbf, and the horizontal and vertical transforms of the residual block, which may be considered a syntax shortcut for the case where the major residual of the block is on one side of the block.
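A hedged sketch of this position-dependent selection (consistent with Fig. 13 as described and with Table 8 above, plus the DCT-2 fallback for sides larger than 32; names are illustrative):

```python
# Hedged sketch: SBT luma transform selection; chroma stays DCT-2.
def sbt_transforms(is_vertical, position, tu_w, tu_h):
    if is_vertical:   # SBT-V: position 0 -> (DCT-8, DST-7), position 1 -> (DST-7, DST-7)
        hor, ver = ("DCT-8", "DST-7") if position == 0 else ("DST-7", "DST-7")
    else:             # SBT-H: position 0 -> (DST-7, DCT-8), position 1 -> (DST-7, DST-7)
        hor, ver = ("DST-7", "DCT-8") if position == 0 else ("DST-7", "DST-7")
    if tu_w > 32:     # a side greater than 32 falls back to DCT-2
        hor = "DCT-2"
    if tu_h > 32:
        ver = "DCT-2"
    return hor, ver

print(sbt_transforms(True, 0, 16, 16))   # -> ('DCT-8', 'DST-7')
```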
2.8.1 syntax elements and semantics
7.3.7.5 Coding unit syntax

[syntax table not reproduced in this text]

7.3.7.11 Residual coding syntax

[syntax table not reproduced in this text]
sps_sbt_max_size_64_flag equal to 0 specifies that the maximum CU width and height for allowing sub-block transform is 32 luma sample points. sps_sbt_max_size_64_flag equal to 1 specifies that the maximum CU width and height for allowing sub-block transform is 64 luma sample points.

MaxSbtSize = sps_sbt_max_size_64_flag ? 64 : 32 (7-33)
2.9 Quantized residual domain block differential pulse codec modulation (QR-BDPCM)
The quantized residual domain BDPCM (hereinafter denoted RBDPCM) is proposed in JVET-N0413. Intra prediction is done on the entire block by sample point copying in a prediction direction (horizontal or vertical prediction) similar to intra prediction. The residual is quantized, and the delta between the quantized residual and its predictor (horizontal or vertical) quantized value is coded.
For a block of dimensions M (rows) × N (columns), let r_{i,j}, 0 ≤ i ≤ M−1, 0 ≤ j ≤ N−1, be the prediction residual after performing intra prediction horizontally (copying the left neighboring pixel value across the prediction block line by line) or vertically (copying the top neighboring line to each line in the prediction block) using unfiltered sample points from the top or left block boundary sample points. Let Q(r_{i,j}), 0 ≤ i ≤ M−1, 0 ≤ j ≤ N−1, denote the quantized version of the residual r_{i,j}, where the residual is the difference between the original block and the prediction block values. Block DPCM is then applied to the quantized residual sample points, resulting in a modified M×N array R~ with elements r~_{i,j}. When vertical BDPCM is signaled:

    r~_{i,j} = Q(r_{i,j}),                 i = 0,        0 ≤ j ≤ N−1
    r~_{i,j} = Q(r_{i,j}) − Q(r_{i−1,j}),  1 ≤ i ≤ M−1,  0 ≤ j ≤ N−1

For horizontal prediction, a similar rule applies, and the residual quantized sample points are obtained by:

    r~_{i,j} = Q(r_{i,j}),                 0 ≤ i ≤ M−1,  j = 0
    r~_{i,j} = Q(r_{i,j}) − Q(r_{i,j−1}),  0 ≤ i ≤ M−1,  1 ≤ j ≤ N−1

The residual quantized sample points r~_{i,j} are sent to the decoder.

On the decoder side, the above calculations are reversed to produce Q(r_{i,j}), 0 ≤ i ≤ M−1, 0 ≤ j ≤ N−1. For the vertical prediction case:

    Q(r_{i,j}) = Σ_{k=0..i} r~_{k,j}

For the horizontal case:

    Q(r_{i,j}) = Σ_{k=0..j} r~_{i,k}

The inverse quantized residual Q^{−1}(Q(r_{i,j})) is added to the intra block prediction value to produce the reconstructed sample point value.
Transform skipping is always used in QR-BDPCM.
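A minimal sketch of the residual DPCM and its inverse (quantization itself is omitted; the input array stands for the already quantized residuals Q(r)):

    import numpy as np

    def bdpcm_forward(q_res, vertical):
        """Difference each quantized residual against its vertical or
        horizontal predecessor; the first row/column is kept as-is."""
        out = q_res.astype(np.int64)
        if vertical:
            out[1:, :] -= q_res[:-1, :]
        else:
            out[:, 1:] -= q_res[:, :-1]
        return out

    def bdpcm_inverse(coded, vertical):
        """Reverse the DPCM with a cumulative sum along the prediction axis."""
        return np.cumsum(coded, axis=0 if vertical else 1)

    q = np.arange(12).reshape(3, 4)  # stand-in for quantized residuals Q(r)
    assert np.array_equal(bdpcm_inverse(bdpcm_forward(q, True), True), q)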
2.9.1 Coefficient codec of TS codec blocks and QR-BDPCM codec blocks
QR-BDPCM follows the context modeling method of TS codec blocks.
The transform coefficient level coding of the TS residual is modified. Relative to the conventional residual codec case, the residual codec of TS includes the following changes:
(1) The last x/y position is not signaled;
(2) coded_sub_block_flag is coded for every subblock except the last subblock when all previous flags are equal to 0;
(3) sig_coeff_flag context modeling uses a reduced template;
(4) A single context model is used for abs_level_gt1_flag and par_level_flag;
(5) Context modeling is used for the sign flag, and additional greater-than-5, greater-than-7, and greater-than-9 flags are used;
(6) Modified Rice parameter derivation is used for the binarization of the remainder;
(7) A limit on the number of context coded bins per sample point: 2 bins per sample point within one block.
2.9.2 syntax and semantics
7.3.6.5 codec Unit syntax
(syntax table not reproduced)
bdpcm_flag[x0][y0] equal to 1 specifies that bdpcm_dir_flag is present in the codec unit including the luma codec block at the location (x0, y0).
bdpcm_dir_flag[x0][y0] equal to 0 specifies that the prediction direction to be used in the BDPCM block is horizontal; otherwise the prediction direction is vertical.
7.3.6.10 transform unit syntax
(syntax table not reproduced)
The number of context coded bins is limited to no more than 2 bins per sample point per CG.
Table 9 - Assignment of ctxInc to syntax elements with context coded bins
(table not reproduced)
2.10 In-loop shaping (ILR) in JVET-M0427
The basic idea of the in-loop shaper (ILR) is to convert the original signal (the prediction/reconstruction signal, in the first domain) into a second domain (the shaping domain).
The in-loop luma shaper is implemented as a pair of look-up tables (LUTs), but only one of the two LUTs needs to be signaled, since the other can be computed from the signaled LUT. Each LUT is a one-dimensional, 10-bit, 1024-entry mapping table (1D-LUT). One LUT is the forward LUT, FwdLUT, which maps the input luma code value Y_i to a modified value Y_r: Y_r = FwdLUT[Y_i]. The other LUT is the inverse LUT, InvLUT, which maps the modified code value Y_r to Ŷ_i: Ŷ_i = InvLUT[Y_r] (Ŷ_i represents the reconstructed value of Y_i).
ILR is also known in VVC as Luma Mapping and Chroma Scaling (LMCS).
2.10.1 PWL model
Conceptually, piecewise linear (PWL) is implemented as follows:
let x1, x2 be two input pivot points, and y1, y2 be their corresponding output pivot points for one piece. The output value y for any input value x between x1 and x2 can be interpolated by the following equation:
y=((y2-y1)/(x2-x1))*(x-x1)+y1
in a fixed point implementation, the equation can be rewritten as:
y = ((m*x + 2^(FP_PREC-1)) >> FP_PREC) + c
where m is a scalar, c is an offset, and FP _ PREC is a constant value for specifying precision.
Note that in CE-12 software, the PWL model is used to pre-compute the FwdLUT and InvLUT mapping tables for 1024 entries; the PWL model also allows for dynamic computation of equivalent mapping values without pre-computing LUTs.
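A sketch of the fixed-point evaluation of one PWL piece under the definitions above (the FP_PREC value and the pivot values are illustrative, not normative):

    FP_PREC = 11  # illustrative precision constant

    def pwl_piece(x, x1, y1, x2, y2):
        """Fixed-point y = ((m*dx + 2^(FP_PREC-1)) >> FP_PREC) + c for one piece,
        where m is the slope scaled by 2^FP_PREC, dx = x - x1, and c = y1."""
        m = ((y2 - y1) << FP_PREC) // (x2 - x1)  # scaled slope
        return ((m * (x - x1) + (1 << (FP_PREC - 1))) >> FP_PREC) + y1

    # Midpoint of a piece from (0, 100) to (64, 228) maps to 164.
    print(pwl_piece(32, 0, 100, 64, 228))  # -> 164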
2.10.2 Brightness shaping
Test 2 of in-loop luma shaping (i.e., proposed CE 12-2) provides a lower complexity pipeline that also eliminates the decoding delay of intra prediction of blocks in inter-slice reconstruction. Intra prediction is performed in the shaping domain for both inter and intra slices.
Regardless of the slice type, intra prediction is always performed in the shaping domain. With such an arrangement, intra prediction can start immediately after the previous TU reconstruction is completed. This arrangement may also provide uniform rather than slice-dependent processing for intra modes. FIG. 14 shows a block diagram of a mode-based CE12-2 decoding process.
CE12-2 also tested a 16-segment piecewise linear (PWL) model of luma and chroma residual scaling, instead of the 32-segment PWL model of CE12-1.
Inter-slice reconstruction in CE12-2 is done with the in-loop luma shaper (the light shaded blocks indicate the signal in the reshaped domain: the luma residual, the intra luma prediction, and the intra luma reconstruction).
2.10.3 luma-related chroma residual scaling
Luma-related chroma residual scaling is a multiplication process implemented with fixed-point integer arithmetic. Chroma residual scaling compensates for the interaction of the luma signal with the chroma signals. Chroma residual scaling is applied at the TU level. More specifically, the average of the corresponding luminance prediction block is used.
The average is used to identify an index in the PWL model. The index identifies a scaling factor cScaleInv. The chroma residual is multiplied by this number.
It should be noted that the chroma scaling factor is calculated from the forward mapped predicted luma values instead of the reconstructed luma values.
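The TU-level scaling can be sketched as follows (a simplified illustration: the segment pivots and the cScaleInv table would be derived from the signaled PWL model, and CSCALE_FP_PREC is an assumed precision constant):

    CSCALE_FP_PREC = 11  # assumed fixed-point precision of cScaleInv

    def scale_chroma_residual(chroma_res, luma_pred_avg, pivots, c_scale_inv):
        """Scale chroma residual sample points using the average of the
        corresponding forward-mapped luma prediction block."""
        # Identify the PWL segment index from the average luma prediction value.
        idx = 0
        while idx + 1 < len(pivots) - 1 and luma_pred_avg >= pivots[idx + 1]:
            idx += 1
        s = c_scale_inv[idx]  # fixed-point scaling factor for this segment
        return [(r * s) >> CSCALE_FP_PREC for r in chroma_res]

    # 16 equal segments over a 10-bit range; unit scaling for illustration.
    pivots = [i * 64 for i in range(17)]
    scales = [1 << CSCALE_FP_PREC] * 16
    print(scale_chroma_residual([10, -6, 3], 200, pivots, scales))  # -> [10, -6, 3]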
2.10.4 Use of ILR
At the encoder side, each picture (or slice group) is first converted to the shaping domain, and all codec processing is performed in the shaping domain. For intra prediction, the neighboring blocks are in the shaping domain; for inter prediction, the reference block (generated from the original domain of the decoded picture buffer) is first converted to the shaping domain. A residual is then generated and coded into the bitstream.
After the entire picture (or slice group) has been encoded/decoded, the sample points in the reshaped domain are converted to the original domain, and then the deblocking filter and other filters are applied.
In the following cases, forward shaping of the prediction signal is disabled:
-the current block is intra coded;
the current block is coded as CPR (current picture reference, also called intra block copy, IBC);
-the current block is coded as a combined inter-frame intra mode (CIIP) and forward shaping is disabled for intra-predicted blocks.
3. Disadvantages of the existing implementations
The current design suffers from the following problems:
(1) LMCS is still applicable to blocks coded using the transform and quantization bypass mode (i.e., cu_transquant_bypass_flag equal to 1). However, the mapping from the original domain to the shaping domain, or from the shaping domain to the reconstructed domain, is lossy. Enabling both LMCS and cu_transquant_bypass_flag at the same time is undesirable.
(2) How to signal the new transform-related codec tools (such as the MTS index, the RST index, or SBT), the transform-free codec tools (such as QR-DPCM), and cu_transquant_bypass_flag has not been investigated.
(3) In HEVC, cu_transquant_bypass_flag is signaled once and applies to all three color components. How to handle dual trees needs to be studied.
(4) Certain codec tools, such as PDPC, incur codec losses when lossless codec is applied.
(5) Some codec tools should be disabled to ensure that a block is lossless codec. However, this has not been taken into account.
(6) In the latest VVC draft, SBT and ISP are treated as implicit MTS. That is, implicit transform selection is applied to both SBT and ISP codec blocks. Furthermore, if sps_mts_enabled_flag is false, SBT and ISP may still be enabled for a block, but only DCT2 is allowed instead. In this design, when sps_mts_enabled_flag is false, the SBT/ISP codec gain becomes smaller.
In VVC D6, whether to enable implicit MTS is shown as follows:
- If sps_mts_enabled_flag is equal to 1 and one of the following conditions is true, then implicitMtsEnabled is set equal to 1:
- IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT;
- cu_sbt_flag is equal to 1 and Max(nTbW, nTbH) ≤ 32;
- sps_explicit_mts_intra_enabled_flag is equal to 0 and CuPredMode[0][xTbY][yTbY] is equal to MODE_INTRA and lfnst_idx[x0][y0] is equal to 0 and intra_mip_flag[x0][y0] is equal to 0.
- Otherwise, implicitMtsEnabled is set equal to 0.
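This derivation translates directly into a small predicate (a sketch mirroring the VVC D6 text above; parameter names follow the draft's variables):

    def implicit_mts_enabled(sps_mts_enabled_flag,
                             intra_subpartitions_split_type,
                             cu_sbt_flag, ntbw, ntbh,
                             sps_explicit_mts_intra_enabled_flag,
                             cu_pred_mode_is_intra, lfnst_idx, intra_mip_flag):
        """Mirror of the VVC D6 derivation of implicitMtsEnabled."""
        if not sps_mts_enabled_flag:
            return False
        if intra_subpartitions_split_type != 'ISP_NO_SPLIT':
            return True
        if cu_sbt_flag and max(ntbw, ntbh) <= 32:
            return True
        return (not sps_explicit_mts_intra_enabled_flag
                and cu_pred_mode_is_intra
                and lfnst_idx == 0
                and not intra_mip_flag)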
4. Example method of lossless codec for visual media codec
Embodiments of the disclosed technology overcome the deficiencies of existing implementations, thereby providing video encoding and decoding with higher encoding and decoding efficiency. A method of lossless codec for visual media codec that can enhance existing and future video codec standards based on the disclosed techniques is illustrated in the examples described below for various implementations. The examples of the disclosed technology provided below illustrate general concepts and are not meant to be construed as limiting. In examples, various features described in these examples may be combined unless explicitly indicated to the contrary.
A block size is denoted by W×H, where W is the block width and H is the block height. The maximum transform block size is denoted by MaxTbW×MaxTbH, where MaxTbW and MaxTbH are the width and height, respectively, of the maximum transform block. The minimum transform block size is denoted by MinTbW×MinTbH, where MinTbW and MinTbH are the width and height, respectively, of the minimum transform block.
The TransQuantBypass mode is defined to skip the transform and quantization processes, e.g., by setting cu_transquant_bypass_flag to 1.
Use of the TransQuantBypass mode for multiple color components
1. The indication of the TransQuantBypass mode (e.g., cu_transquant_bypass_flag) may be signaled separately for different color components.
a. In one example, when dual trees are enabled, cu_transquant_bypass_flag may be coded separately for the luma and chroma components, or for each color component.
b. In one example, the use of this mode may be context codec.
i. In one example, the selection of the context may depend on the color component.
c. In one example, predictive coding of the flag may be applied.
d. In one example, whether multiple indications or only one indication is signaled for all color components may depend on the codec structure (e.g., single tree or dual tree).
e. In one example, whether all color components are signaled with multiple indications or only one indication may depend on the color format and/or color component codec method (e.g., whether separate plane codecs are enabled) and/or codec mode.
i. In one example, when joint chroma residual coding is enabled for chroma blocks, both chroma blocks may share the same enable flag for TransQuantBypass.
f. In one example, whether the TransQuantBypass mode can be applied in a block of a first color component may depend on whether the TransQuantBypass mode is applied to the sample points located in the corresponding region of a second color component.
i. The size of the corresponding region in the second color component, and the coordinates of its top-left sample point, corresponding to the block of the first color component, may depend on the color format. For example, for the 4:2:0 color format, if the top-left sample point coordinates of the first color component are (x0, y0) and the block size of the first color component is W×H, the size of the corresponding region may be 2W×2H and the top-left sample point coordinates in the second color component are (2*x0, 2*y0).
ii. In one example, the first color component is a chroma component (e.g., Cb or Cr).
iii. In one example, the second color component is the luma component.
iv. In one example, the TransQuantBypass mode may be used in the block of the first color component only if all sample points in the region of the second color component corresponding to the block of the first color component are TransQuantBypass coded.
v. In one example, the TransQuantBypass mode may be used in the block of the first color component only if at least one sample point in the region of the second color component corresponding to the block of the first color component is TransQuantBypass coded.
vi. In the examples above, if TransQuantBypass cannot be used, TransQuantBypass may not be signaled and is inferred to be 0.
In one example, whether to signal side information for TransQuantBypass of a first color component depends on the use of TransQuantBypass in one or more blocks in the corresponding region of a second color component.
1) If TransQuantBypass is applied to all blocks in the corresponding region, transQuantBypass may be enabled and side information of TransQuantBypass of the first color component may be signaled. Otherwise, the signaling is skipped and inferred as disabled.
2) If TransQuantBypass is applied to one or more blocks in the corresponding region (e.g., a block covering the center position of the corresponding region), transQuantBypass may be enabled and side information of TransQuantBypass for the first color component may be signaled. Otherwise, the signaling is skipped and inferred as disabled.
Optionally, furthermore, when the dual tree codec structure is enabled, the above method may be enabled.
2. An indication of the TransQuantBypass mode for a chroma block (e.g., cu_transquant_bypass_flag) may be derived from the corresponding luma region.
a. In one example, if a chroma block corresponds to a luma region covering one or more blocks, such as a Coding Unit (CU) or a Prediction Unit (PU) or a Transform Unit (TU), and at least one luma block is coded using TransQuantBypass mode, the chroma block should be coded using TransQuantBypass mode.
i. Alternatively, if a chroma block corresponds to a luma region covering one or more blocks and all of these luma blocks are coded using the TransQuantBypass mode, the chroma block should be coded using the TransQuantBypass mode.
ii. Alternatively, the chroma block may be divided into sub-blocks. If a sub-block corresponds to a luma region covering one or more blocks and all of these luma blocks are coded using the TransQuantBypass mode, the chroma sub-block should be coded using the TransQuantBypass mode.
3. For blocks larger than a VPDU, the TransQuantBypass mode may be enabled.
a. A block is defined as being larger than a VPDU if the width or height of the block is larger than the width or height of the VPDU.
i. Alternatively, a block is defined as being larger than a VPDU if both the width and height of the block are larger than the width and height of the VPDU, respectively.
b. In one example, for blocks larger than a VPDU, an indication of the TransQuantBypass mode (e.g., cu_transquant_bypass_flag) may be signaled.
c. In one example, a CTU larger than a VPDU may be partitioned through a quadtree until multiple VPDUs are reached, or not partitioned at all. When it is not partitioned, cu_transquant_bypass_flag may be inferred to be 1 without signaling.
i. Alternatively, intra prediction modes may be allowed for those large blocks.
d. Alternatively, the TransQuantBypass mode may be enabled for blocks larger than the maximum allowed transform block size (e.g., MaxTbSizeY), or for blocks whose width or height is larger than the maximum allowed transform block size (e.g., MaxTbSizeY).
i. Alternatively, the sub-items a-c may be applied by replacing the VPDU with MaxTbSizeY.
4. For blocks larger than a VPDU, transform skip mode and/or other codec methods may be enabled without applying a transform.
a. A block is defined as being larger than a VPDU if the width or height of the block is larger than the width or height of the VPDU.
i. Alternatively, a block is defined as being larger than a VPDU if the width and height of the block are larger than the width and height of the VPDU, respectively.
b. In one example, for blocks larger than a VPDU, an indication of a transform skip mode may be signaled.
c. In one example, for a CTU larger than a VPDU, it may be partitioned through a quadtree until multiple VPDUs are reached, or not partitioned. When not divided, the transform skip flag may be inferred to be 1 without signaling.
i. Alternatively, intra prediction modes may be allowed for those large blocks.
d. Alternatively, transform skip mode and/or other codec methods that do not apply a transform may be enabled for blocks larger than the maximum allowed transform block size (e.g., MaxTbSizeY), or for blocks whose width or height is larger than the maximum allowed transform block size (e.g., MaxTbSizeY).
i. Alternatively, the sub-items a-c may be applied by replacing the VPDU with MaxTbSizeY.
e. Other codec methods that do not apply the transform may include transform skip mode, DPCM, QR-DPCM, etc.
Block dimension setting for TransQuantBypass mode
5. The allowed block dimensions of the TransQuantBypass mode may be the same as the block dimensions for which TS is enabled.
a. The TransQuantBypass mode may be applicable to the same block dimensions for which QR-BDPCM is enabled.
b. The TransQuantBypass mode may be applicable to the same block dimensions for which TS is enabled.
c. The TS mode may be applicable to block dimensions different from those for which QR-BDPCM is enabled.
i. Optionally, QR-BDPCM may be enabled for a video unit (e.g., a sequence) even if the TS mode is disallowed/disabled.
d. Whether QR-BDPCM is enabled may depend on whether TS or TransQuantBypass mode is enabled.
i. In one example, the signaling of the on/off control flag of the QR-BDPCM in a video unit (e.g., sequence/TU/PU/CU/picture/slice) may be checked by the condition of whether TS or TransQuantBypass is allowed.
in one example, if TransQuantBypass is allowed, QR-BDPCM may still be enabled even if TS is not allowed.
e. The maximum and/or minimum block dimensions of a block using the TransQuantBypass mode can be signaled at the sequence/view/picture/slice/CTU/video unit level.
i. In one example, an indication of the maximum and/or minimum block dimensions of a block with cu_transquant_bypass_flag may be signaled in the SPS/VPS/PPS/slice header/slice group header/slice, etc.
6. It is proposed to align the allowed block dimensions for all kinds of codec modes (such as TS, transQuantBypass mode, QR-BDPCM, etc.) that disable the transform.
a. Optionally, a single indication of the maximum and/or minimum size allowed in these cases may be signalled to control the use of all these modes.
b. In one example, when one of the codec tools that does not rely on non-identity transformation is enabled, an indication of the maximum and/or minimum size allowed under these circumstances may be signaled.
i. In one example, log2_transform_skip_max_size_minus2 may be signaled when either of TS or QR-BDPCM is enabled.
ii. In one example, log2_transform_skip_max_size_minus2 may be signaled when either of TS or TransQuantBypass is enabled.
7. The setting of the maximum transform block size of a block skipping transform (such as TS, transQuantBypass mode, QR-BDPCM, etc.) may be different from that used in the non-TS case where the transform is applied.
Interaction between TransQuantBypass mode and other codec tools
8. For the TransQuantBypass codec block, luma shaping and/or chroma scaling may be disabled.
a. When TransQuantBypass is applied to a block, the residual is coded in the original domain rather than the shaping domain, regardless of the enable/disable flag of the LMCS. For example, an enable/disable flag of the LMCS may be signaled at the slice/sequence level.
b. In one example, for intra and TransQuantBypass codec blocks, the predictor/reference sample points for intra prediction may first be mapped from the reshaped domain to the original domain.
c. In one example, for IBC and TransQuantBypass codec blocks, the prediction signal/reference sample points used for IBC prediction may first be mapped from the reshaped domain to the original domain.
d. In one example, for CIIP and TransQuantBypass codec blocks, the following applies:
i. the intra-predicted signal/reference sample points used for intra-prediction may be first mapped from the reshaped domain to the original domain.
Skipping the mapping of the predicted signal for intra prediction from the original domain to the shaped domain.
e. In one example, for palette mode, the palette table may be generated in the original domain, rather than in the reshaped domain.
f. Alternatively, two buffers may be allocated, one of which is used to store the sum of the prediction signal and the residual signal (i.e., the reconstructed signal); and another buffer is used to store the shaped sum, i.e. the sum of the prediction signal and the residual signal needs to be first mapped from the original domain to the shaped domain and can be further used for coding the subsequent block.
i. Alternatively, only the reconstructed signal in the original domain is stored. The reconstructed signal in the reshaped domain can be converted from the reconstructed signal in the original domain when needed by the subsequent blocks.
g. The above method is applicable to other coding methods that rely on reference sample points in the current slice/slice group/picture.
i. In one example, an inverse shaping process (i.e., a conversion from a shaped domain to an original domain) may be applied to a prediction signal generated from a reference sample point in a current slice/slice group/picture.
Optionally, forward shaping processing (i.e. conversion from the original domain to the shaped domain) is not allowed to be applied to the prediction signal generated from reference sample points in a different picture (e.g. in a reference picture).
h. Whether the above method is enabled or not may depend on the enabled/disabled status of TransQuantBypass/TS/QR-BDPCM/PCM/other tools that do not apply a transform to those blocks that contain the required reference sample points to be used.
i. In one example, whether and/or how intra prediction (such as normal intra prediction, intra block copy, or inter intra prediction (such as CIIP in VVC)) is applied may depend on whether the current block is TransQuantBypass coded and/or whether one or more neighboring blocks that provide intra prediction (or reference sample points) are TransQuantBypass coded.
in one example, if the current block is TransQuantBypass coded and for those reference sample points located in the TransQuantBypass coded block, the reference sample points are not transformed (i.e., no forward or reverse shaping process need be applied) and can be used directly to derive the prediction signal.
in one example, if the current block is TransQuantBypass coded, and for those reference sample points that are not located in the TransQuantBypass coded block, the reference sample points may first need to be converted to the original domain (e.g., by applying an inverse shaping process) and then used to derive the prediction signal.
in one example, if the current block is not TransQuantBypass coded and for those reference sample points not located in the TransQuantBypass coded block, the reference sample points are not transformed (i.e., no forward or reverse shaping process need be applied) and can be used directly to derive the prediction signal.
v. in one example, if the current block is not TransQuantBypass coded, and for those reference sample points located in the TransQuantBypass coded block, the reference sample points may first need to be converted to the shaping domain (e.g., by applying a reverse shaping process) and then used to derive the prediction signal.
vi. In one example, the above method may be applied when these reference sample points are from blocks in the same slice/brick/picture, such as when the current block is intra/IBC/CIIP coded.
vii. In one example, the above method may be applied to a particular color component, such as the Y component or the G component, but not to the other color components.
i. In one example, luma shaping and/or chroma scaling may be disabled for blocks that are coded using methods that do not apply transforms (e.g., TS mode).
i. Alternatively, in addition, the above items (e.g., items 7 a-h) can be applied by replacing the TransQuantBypass mode with a different codec mode (e.g., TS).
9. The indication of the transquantBypass mode may be signaled before signaling one or more codec tools associated with the transform matrix.
a. In one example, the codec tool associated with the transform matrix may be one or more of the following tools (the associated syntax elements are included in parentheses):
i. Transform skip mode (e.g., transform_skip_flag)
ii. Explicit MTS (e.g., tu_mts_idx)
iii. RST (e.g., st_idx)
iv. SBT (e.g., cu_sbt_flag, cu_sbt_quad_flag, cu_sbt_pos_flag)
v. QR-BDPCM (e.g., bdpcm_flag, bdpcm_dir_flag)
b. How the residual is coded may depend on the use of the TransQuantBypass mode.
i. In one example, whether residual_coding or residual_coding_ts is used may depend on the use of the TransQuantBypass mode (see the sketch after this item).
ii. In one example, when TransQuantBypass is disabled and transform_skip_flag is disabled, the residual codec method (e.g., residual_coding) designed for blocks to which a transform is applied may be used for residual codec.
iii. In one example, when TransQuantBypass is enabled or transform_skip_flag is enabled, the residual codec method (e.g., residual_coding_ts) designed for blocks to which no transform is applied may be used for residual codec.
iv. The TransQuantBypass mode can be considered as a special TS mode.
1) Alternatively, in addition, transform_skip_flag may not be signaled and/or may be inferred to be 1 when the TransQuantBypass mode is enabled for a block.
a. Alternatively, residual_coding_ts may also be used.
c. Alternatively, in case of using TransQuantBypass mode, side information of codec tools related to other types of transformation matrices may be signaled.
i. When the TransQuantBypass mode is applied, side information of the use of the transform skip mode, QR-BDPCM, BDPCM may be further signaled.
When the TransQuantBypass mode is applied, the side information of the use of SBT, RST, MTS may not be signaled.
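The selection rule in item 9.b can be sketched as follows (an illustrative helper, not normative syntax):

    def select_residual_coding(cu_transquant_bypass_flag, transform_skip_flag):
        """Blocks that bypass or skip the transform use the TS residual codec;
        all other blocks use the regular residual codec."""
        if cu_transquant_bypass_flag or transform_skip_flag:
            return 'residual_coding_ts'
        return 'residual_coding'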
10. The indication of the TransQuantBypass mode may be signaled after signaling one or more codec tools associated with the transform matrix.
a. Alternatively, when certain codec tools (such as transform skip mode, QR-BDPCM, BDPCM) are applied, an indication of TransQuantBypass mode may be codec.
b. Alternatively, when certain codec tools (such as SBT, RST, MTS) are applied, the indication of TransQuantBypass mode may not be codec.
11. After signaling the indication of the quantization parameter, an indication of the TransQuantBypass mode may be conditionally signaled.
a. Alternatively, the indication of the quantization parameter may be conditionally signaled after signaling the indication of the TransQuantBypass mode.
b. In one example, when TransQuantBypass is applied to a block, the signaling of the quantization parameter and/or of a change in the quantization parameter (e.g., cu_qp_delta_abs, cu_qp_delta_sign_flag) may be skipped.
i. For example, cu_qp_delta_abs may be inferred to be 0.
c. Alternatively, when the change in the quantization parameter (e.g., cu_qp_delta_abs, cu_qp_delta_sign_flag) is not equal to 0, the signaling of the TransQuantBypass mode may be skipped and TransQuantBypass is inferred to be disabled (a parsing sketch is given after this item).
i. In one example, the signaling of TransQuantBypass may depend only on the indication of the quantization parameter for a particular color component (such as a luma component).
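One possible realization of the parsing order in item 11 is sketched below (a hypothetical parse function with stub bit readers; not the normative syntax):

    def parse_cu_lossless_syntax(read_uvlc, read_flag):
        """Parse the QP delta first; TransQuantBypass is signaled only when
        the delta equals 0 (item 11.c), otherwise it is inferred disabled."""
        cu_qp_delta_abs = read_uvlc('cu_qp_delta_abs')
        cu_qp_delta_sign_flag = read_flag('cu_qp_delta_sign_flag') if cu_qp_delta_abs else 0
        if cu_qp_delta_abs == 0:
            cu_transquant_bypass_flag = read_flag('cu_transquant_bypass_flag')
        else:
            cu_transquant_bypass_flag = 0  # inferred to be disabled
        return cu_qp_delta_abs, cu_qp_delta_sign_flag, cu_transquant_bypass_flag

    # Stub bit readers for illustration.
    bits = {'cu_qp_delta_abs': 0, 'cu_transquant_bypass_flag': 1}
    print(parse_cu_lossless_syntax(bits.get, bits.get))  # -> (0, 0, 1)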
12. When TransQuantBypass mode is enabled for a block, ALF or non-linear ALF is disabled for sample points in the block.
13. When the TransQuantBypass mode is enabled for a block, the bilateral filter or/and the diffusion filter may be disabled for sample points in the block and/or other types of post-reconstruction filters that may modify the reconstructed block.
14. PDPC can be disabled when TransQuantBypass mode is enabled for a block.
15. If the block uses TransQuantBypass mode codec, the transform selection method in implicit MTS is not applicable.
16. If the block uses TransQuantBypass mode codec, the scaling method for chroma sample points in LMCS is not applicable.
17. For trans quantBypass enabled CU/CTU/slice group/picture/sequence, LMCS can be disabled.
a. In one example, LMCS may be disabled when the SPS/VPS/PPS/slice level flag indicates that transform and quantization bypass modes may be applied to one block within a sequence/view/picture/slice.
b. Optionally, signaling of LMCS-related syntax elements may be skipped.
c. At the sequence/picture/slice group/slice/brick level, the TransQuantBypass mode may be disabled when LMCS is enabled.
i. For example, the signaling to enable/disable TransQuantBypass may depend on the use of LMCS.
1) In one example, if LMCS is applied, the indication of TransQuantBypass is not signaled. For example, it is inferred that TransQuantBypass is not used.
2) Optionally, the signaling of the use of LMCS may depend on the enabling/disabling of TransQuantBypass.
d. In one example, the consistent bitstream should satisfy: TransQuantBypass and LMCS should not both be enabled for the same slice/tile group/picture.
18. After the signaling of the TS mode (e.g., transform_skip_flag), the side information of the QR-BDPCM may be signaled.
a. In one example, QR-BDPCM is considered a special case of TS mode.
i. Optionally, when QR-BDPCM is allowed for a block, the TS mode should also be enabled, i.e., the signaled/derived transform_skip_flag should be equal to 1.
ii. Optionally, the side information of the QR-BDPCM may be further signaled when the signaled/derived transform_skip_flag is equal to 1.
iii. Optionally, the side information of the QR-BDPCM may not be signaled when the signaled/derived transform_skip_flag is equal to 0.
b. In one example, QR-BDPCM is considered a mode different from the TS mode.
i. Optionally, when the signaled/derived transform_skip_flag is equal to 0, the side information of the QR-BDPCM may be further signaled.
ii. Optionally, when the signaled/derived transform_skip_flag is equal to 1, the side information of the QR-BDPCM may not be signaled.
19. The TransQuantBypass mode and/or TS mode and/or other codec methods (e.g., palette mode) that do not apply a transform may be enabled at the sub-block level instead of in the entire block (e.g., CU/PU).
a. In one example, for the dual-tree case, a chroma block may be divided into a plurality of sub-blocks, and each block may determine the use of a TransQuantBypass mode and/or a TS mode and/or other codec method that does not apply a transform from the codec information of the corresponding luma block.
20. Whether or not inverse shaping processing (i.e., conversion from the shaped domain to the original domain) is applied to the reconstructed block prior to the loop filtering processing may vary from block to block.
a. In one example, the inverse shaping process may not be applied to blocks coded with the TransQuantBypass mode, but applied to other blocks coded with non-TransQuantBypass modes.
21. Whether and/or how to apply the codec tools of decoder-side motion derivation, decoder-side intra mode decision, CIIP, or TPM may depend on whether an identity transform is applied to the block and/or whether no transform is applied to the block.
a. In one example, when TransQuantBypass/TS/QR-BDPCM/DPCM/PCM/other codec tools that have identity transforms applied and/or have no transforms applied are enabled for a block, the codec tools of decoder-side motion derivation, or decoder-side intra-mode decision, or CIIP, or TPM, may be disabled.
b. In one example, prediction refinement using optical flow (PROF), or CIIP, or TPM may be disabled for those blocks to which identity transformations are applied, or to which no transformations are applied.
c. In one example, bi-directional optical flow (BDOF), or CIIP, or TPM may be disabled for those blocks for which identity transformations are applied.
d. In one example, decode-side motion vector refinement (DMVR), or CIIP, or TPM, may be disabled for those blocks for which identity transformation is applied.
22. TransQuantBypass can be enabled at the video unit (e.g., picture/slice/tile/brick) level. That is, all blocks in the same video unit share the same on/off control of TransQuantBypass. The signaling of TransQuantBypass in smaller video blocks (e.g., CU/TU/PU) is then skipped.
a. Alternatively, in addition, a video unit level usage indication of TransQuantBypass may be signaled in the case of another tool X usage.
b. Alternatively, in addition, in the case of the video unit level use of TransQuantBypass, a use indication of another tool X may be signaled.
c. Optionally, in addition, the consistent bitstream should satisfy: when TransQuantBypass is enabled, another tool X should be disabled.
d. Optionally, in addition, the consistent bitstream should satisfy: when another tool X is enabled, transQuantBypass should be disabled.
e. For the above example, "another tool X" may be:
i.LMCS
decoder side motion derivation (e.g., DMVR, BDOF, PROF)
Decoder side intra mode decision
iv.CIIP
v.TPM
23. The side information of TransQuantBypass can be signaled at TU level, however, the use of TransQuantBypass can be controlled at higher levels (e.g., PU/CU).
a. In one example, when one CU/PU includes multiple TUs, side information of TransQuantBypass may be signaled once (e.g., associated with the first TU in the CU), and other TUs within the CU share the same side information.
24. The indication of the codec tool may be signaled in a video unit level that is larger than the CU, however, even if the indication tells that the tool is applied, it may not be applied to certain sample points within the video unit.
a. In one example, for sample points within a lossless codec block (e.g., using the TransQuantBypass mode), the codec tool may not be applied even if the indication tells that the tool is enabled.
b. In one example, a video unit may be a sequence/picture/view/slice/tile/sub-picture/CTB/CTU.
c. In one example, the codec tool may be a filtering method (e.g., ALF/clipping processing in a nonlinear ALF/SAO/bilateral filter/Hadamard transform domain filter)/scaling method/decoder-side derivation method, etc.
d. Optionally, the consistent bitstream should follow the following rules: if all or part of the sample points are lossless codec within a video unit, the indication of the codec tool should signal that the tool is disabled.
25. The above method can be applied to lossless codec blocks (e.g., transQuantBypass mode that bypasses transformation and quantization) or near lossless codec blocks.
a. In one example, a block is considered a near lossless codec block if it is coded with QPs within a certain range (e.g., [4,4 ]).
26. One or more separate flags (other than sps_mts_enabled_flag) that control whether the ISP/SBT allows non-DCT2 transforms may be signaled.
a. Optionally, in addition, a separate flag may be signaled when either the ISP or SBT is enabled.
i. Alternatively, and in addition, a separate flag may be signaled when both the ISP and SBT are enabled.
ii. Optionally, further, when the flag is equal to true, the ISP/SBT allows non-DCT2 transforms.
b. Optionally, in addition, when the ISP is enabled, a flag (e.g., sps_isp_implicit_transform) may be signaled indicating whether an ISP codec block allows non-DCT2 transforms (e.g., DST7/DCT8).
c. Optionally, further, a flag (e.g., sps_sbt_implicit_transform) indicating whether an SBT codec block allows non-DCT2 transforms (e.g., DST7/DCT8) may be signaled when SBT is enabled.
d. Optionally, in addition, sps_explicit_mts_intra_enabled_flag may control the use of explicit MTS (e.g., whether tu_mts_idx may be present in the bitstream) or the transform selection based on the intra block dimensions (e.g., implicit MTS applied to non-ISP codec intra blocks).
e. In one example, when sps_explicit_mts_intra_enabled_flag is disabled, ISP with non-DCT2 transforms may still be applied.
f. In one example, when sps_explicit_mts_inter_enabled_flag is disabled, SBT with non-DCT2 transforms may still be applied.
The above examples may be incorporated in the context of the methods described below, e.g., methods 1500, 1510, 1520, 1530, 1540, 1550, 1560, and 2100-3800, which may be implemented at a video decoder or a video encoder.
Fig. 15A shows a flow diagram of an exemplary method for video processing. The method 1500 includes: at step 1502, a bitstream representation of a current video block is configured for the current video block comprising a plurality of color components, wherein an indication to skip transform and quantization processes is separately signaled in the bitstream representation for at least two of the plurality of color components.
The method 1500 includes: at step 1504, based on the configuration, a conversion is performed between the current video block and a bitstream representation of the current video block. In some embodiments, the current video block comprises a luma component and a plurality of chroma components, and wherein the indication for at least one of the plurality of chroma components is based on the indication for the luma component.
In some embodiments, the indication to skip the transform and quantization process is denoted cu_transquant_bypass_flag. In one example, a dual-tree splitting process is enabled for the current video block, and the cu_transquant_bypass_flag is coded separately for the luma component and at least one chroma component of the current video block. In another example, the use of the skip transform and quantization process, denoted cu_transquant_bypass_flag, is coded based on context. In yet another example, the context is selected based on at least one of the plurality of color components.
In some embodiments, the signaling of the indication for the at least two color components, respectively, is based on at least one of a color format, a color component codec method, or a codec mode of the current video block.
In some embodiments, a first of the at least two components is a chroma component and wherein a second of the at least two components is a luma component. For example, the chrominance component is Cb or Cr.
In some embodiments, the use of the skipped transform and quantization process in the block of the first of the at least two color components is based on the use of the skipped transform and quantization process at all sample points in the region of the second of the at least two color components corresponding to the first of the at least two color components.
In some embodiments, the use of skipping the transform and quantization process in the block of the first of the at least two color components is based on skipping the use of the transform and quantization process at least one sample point in the region of the second of the at least two color components corresponding to the first of the at least two color components.
Fig. 15B shows a flow diagram of another exemplary method for video processing. The method 1510 includes: at step 1512, based on the characteristics of the current video block, a decision is made as to the mode of enabling application of the skipped transform and quantization process on the current video block.
The method 1510 includes: at step 1514, based on the determination, a conversion is performed between the current video block and a bitstream representation of the current video block.
In some embodiments, the characteristic is a size of a current video block, wherein the mode is enabled, and wherein the size of the current video block is greater than a size of a Virtual Pipe Data Unit (VPDU). In one example, the height or width of the current video block is greater than the height or width of the VPDU, respectively.
In some embodiments, a codec mode that does not apply a transform is enabled for the current video block. In one example, the coding mode is a transform skip mode, a Differential Pulse Coding Modulation (DPCM) mode, or a quantized residual DPCM mode.
Fig. 15C shows a flow diagram of an exemplary method for video processing. The method 1520 includes: at step 1522, a decision is made regarding enabling a first mode on the current video block that skips application of the transform and quantization process, and enabling a second mode for the current block that does not apply the transform, based on at least one dimension of the current video block.
The method 1520 includes: at step 1524, based on the determination, a conversion is performed between the current video block and a bitstream representation of the current video block. In some embodiments, the second mode is a Transform Skip (TS) mode. In other embodiments, the second mode is a quantized residual block differential pulse codec modulation (QR-BDPCM) mode.
In some embodiments, the maximum or minimum value of the at least one dimension of the current video block is signaled in a Sequence Parameter Set (SPS), a Video Parameter Set (VPS), a Picture Parameter Set (PPS), a slice header, or a slice group header.
In some embodiments, the allowable values for at least one dimension are the same for the first mode and the second mode. In one example, the second mode is one of a transform skip mode, a Block Differential Pulse Code Modulation (BDPCM) mode, or a quantized residual BDPCM mode.
Fig. 15D shows a flow diagram of an exemplary method for video processing. The method 1530 includes: at step 1532, it is determined that the current video block is codec using the first mode and the second mode that skip application of the transform and quantization process on the current video block.
The method 1530 includes: at step 1534, based on the determination, a conversion is performed between the current video block and a bitstream representation of the current video block. In some embodiments, the current video block includes a luma component and a chroma component, and wherein at least one of reshaping of the luma component or scaling of the chroma component is disabled.
In some embodiments, the first mode is an intra prediction mode. In other embodiments, the first mode is an Intra Block Copy (IBC) mode. In another embodiment, the first mode is a combined inter-frame intra prediction (CIIP) mode. In one example, the reference sample points used in the first mode are mapped from the reshaped domain to the original domain.
In some embodiments, the current video block includes a luma component and a chroma component, wherein the first mode does not apply a transform to the current video block, and wherein application of Luma Mapping and Chroma Scaling (LMCS) processing is disabled for the current video block.
Fig. 15E shows a flow diagram of an exemplary method for video processing. The method 1540 includes: at step 1542, a bitstream representation of the current video block is configured for the current video block, wherein an indication to skip transform and quantization processing is signaled in the bitstream representation prior to signaling syntax elements related to one or more codec tools related to the plurality of transforms.
The method 1540 includes: at step 1544, based on the configuring, a conversion is performed between the current video block and a bitstream representation of the current video block. In some embodiments, the one or more codec tools associated with the plurality of transforms include at least one of a transform skip mode, an explicit Multiple Transform Set (MTS) scheme, a simplified secondary transform (RST) mode, a sub-block transform (SBT) mode, or a quantized residual block differential pulse codec modulation (QR-BDPCM) mode.
Fig. 15F shows a flow diagram of an exemplary method for video processing. The method 1550 includes: at step 1552, it is determined that the current video block is being coded using a mode that skips application of transform and quantization processing on the current video block.
The method 1550 includes: at step 1554, based on the determination and as part of performing a transition between the current video block and the bitstream representation of the current video block, the filtering method is disabled.
In some embodiments, the filtering method comprises an Adaptive Loop Filtering (ALF) method or a non-linear ALF method.
In some embodiments, the filtering method uses at least one of a bilateral filter, a diffusion filter, or a post-reconstruction filter that modifies a reconstructed version of the current video block.
In some embodiments, the filtering method comprises a position dependent intra prediction combining (PDPC) method.
In some embodiments, the filtering method comprises a loop filtering method, and method 1550 further comprises the step of making a decision regarding the selective application of a reverse shaping process to the reconstructed version of the current video block prior to applying the filtering method.
Fig. 15G shows a flow diagram of an exemplary method for video processing. The method 1560 includes: at step 1562, it is determined that the current video block was coded using a mode that skips the application of transform and quantization processing on the current video block.
The method 1560 includes: at step 1564, intra-loop shaping (ILR) processing is disabled for (i) a current picture including the current video block or (ii) a portion of the current picture based on the determining and as part of performing a transition between the current video block and a bitstream representation of the current video block.
In some embodiments, the portion of the current picture is a Codec Unit (CU), a Codec Tree Unit (CTU), a slice, or a slice group.
In some embodiments, the indication of disabled is signaled in a Sequence Parameter Set (SPS), video Parameter Set (VPS), picture Parameter Set (PPS), or slice header.
In some embodiments, the bitstream representation of the current video block does not include signaling of syntax elements related to ILR processing.
Another method for video processing includes: configuring a bitstream representation of the current video block for the current video block, wherein an indication to skip transform and quantization processing is selectively signaled in the bitstream representation after signaling one or more indications of quantization parameters; and performing a conversion between the current video block and a bitstream representation of the current video block based on the configuration.
In some embodiments, when the variation of the quantization parameter is not equal to zero, an indication to skip the transform and quantization process is excluded from the bitstream representation.
In some embodiments, the indication to skip the transform and quantization process is selectively signaled based on a quantization parameter of a luma component of the current video block.
Another method for video processing includes: configuring a bitstream representation of a current video block for the current video block, wherein an indication to apply a codec tool to the current video block is signaled in the bitstream representation at a level of a video unit larger than a Codec Unit (CU); and based on the configuration, performing a conversion between the current video block and a bitstream representation of the current video block, wherein performing the transform comprises: although the application of the coding tool is indicated in the bitstream representation, the application of the coding tool is restricted to at least some sample points of the current video block.
In some embodiments, at least some sample points are within a lossless codec block.
In some embodiments, a video unit comprises a sequence, a picture, a view, a slice, a tile, a sub-picture, a Coding Tree Block (CTB), or a Coding Tree Unit (CTU).
In some embodiments, the codec tool includes one or more of a filtering method, a scaling method, or a decoder-side derivation method.
In some embodiments, the filtering method includes at least one of an Adaptive Loop Filter (ALF), a bilateral filter, a sample adaptive offset filter, or a Hadamard transform domain filter.
5. Example implementations of the disclosed technology
Fig. 16 is a block diagram of the video processing apparatus 1600. Apparatus 1600 may be used to implement one or more of the methods described herein. The apparatus 1600 may be implemented in a smartphone, tablet, computer, internet of things (IoT) receiver, and/or the like. The apparatus 1600 may include one or more processors 1602, one or more memories 1604, and video processing hardware 1606. Processor 1602 may be configured to implement one or more methods described herein (including, but not limited to, methods 1500, 1510, 1520, 1530, 1540, 1550, 1560, and 2100-3800). The memory (es) 1604 may be used to store data and code for implementing the methods and techniques described herein. Video processing hardware 1606 may be used to implement some of the techniques described herein in hardware circuits.
In some embodiments, the video codec method may be implemented using an apparatus implemented on a hardware platform as described with respect to fig. 16.
Some embodiments of the disclosed technology include: a decision or determination is made to enable a video processing tool or mode. In one example, when a video processing tool or mode is enabled, the codec will use or implement that tool or mode in the processing of video blocks, but does not necessarily modify the resulting bitstream based on the use of that tool or mode. That is, when a video processing tool or mode is enabled based on a decision or determination, the transformation of the bitstream representation from the video block to the video will use that video processing tool or mode. In another example, when a video processing tool or mode is enabled, the decoder will process the bitstream with the knowledge that the bitstream has been modified based on the video processing tool or mode. That is, the transformation from the bitstream representation of the video to the video blocks will be performed using video processing tools or modes that are enabled based on the decision or determination.
Some embodiments of the disclosed technology include: a decision or determination is made to disable the video processing tool or mode. In one example, when a video processing tool or mode is disabled, the codec will not use that tool or mode in transforming a video block to a bitstream representation of the video. In another example, when a video processing tool or mode is disabled, the decoder will process the bitstream with the knowledge that the bitstream was not modified using the video processing tool or mode that was enabled based on the decision or determination.
Fig. 17 is a block diagram illustrating an exemplary video codec system 100 that may utilize techniques of this disclosure. As shown in fig. 17, the video codec system 100 may include a source device 110 and a destination device 120. Source device 110 generates encoded video data, which may be referred to as a video encoding device. Destination device 120 may decode the encoded video data generated by source device 110, and source device 110 may be referred to as a video decoding device. The source device 110 may include a video source 112, a video encoder 114, and an input/output (I/O) interface 116.
The video source 112 may include sources such as a video capture device, an interface that receives video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of such sources. The video data may include one or more pictures. The video encoder 114 encodes video data from the video source 112 to generate a bitstream. The bitstream may comprise a sequence of bits forming a codec representation of the video data. The bitstream may include coded pictures and related data. A coded picture is a coded representation of a picture. The related data may include sequence parameter sets, picture parameter sets, and other syntax structures. The I/O interface 116 may include a modulator/demodulator (modem) and/or a transmitter. The encoded video data may be signaled directly to the destination device 120 via the I/O interface 116 over the network 130 a. The encoded video data may also be stored on storage medium/server 130b for access by destination device 120.
Destination device 120 may include I/O interface 126, video decoder 124, and display device 122.
I/O interface 126 may include a receiver and/or a modem. I/O interface 126 may obtain encoded video data from source device 110 or storage medium/server 130 b. Video decoder 124 may decode the encoded video data. The display device 122 may display the decoded video data to a user. The display device 122 may be integrated with the destination device 120 or may be external to the destination device 120 configured to interface with an external display device.
The video encoder 114 and the video decoder 124 may operate in accordance with video compression standards, such as the High Efficiency Video Codec (HEVC) standard, the Versatile Video Codec (VVC) standard, and other current and/or future standards.
Fig. 18 is a block diagram illustrating an example of a video encoder 200, which video encoder 200 may be the video encoder 114 in the system 100 shown in fig. 17.
Video encoder 200 may be configured to perform any or all of the techniques of this disclosure. In the example of fig. 18, the video encoder 200 includes a number of functional components. The techniques described in this disclosure may be shared among various components of the video encoder 200. In some examples, the processor may be configured to perform any or all of the techniques described in this disclosure.
The functional components of the video encoder 200 may include a partitioning unit 201, a prediction unit 202, which may include a mode selection unit 203, a motion estimation unit 204, a motion compensation unit 205, and an intra prediction unit 206, a residual generation unit 207, a transform unit 208, a quantization unit 209, an inverse quantization unit 210, an inverse transform unit 211, a reconstruction unit 212, a buffer 213, and an entropy coding unit 214.
In other examples, video encoder 200 may include more, fewer, or different functional components. In an example, the prediction unit 202 may include an Intra Block Copy (IBC) unit. The IBC unit may perform prediction in IBC mode, where the at least one reference picture is a picture in which the current video block is located.
Furthermore, some components (such as the motion estimation unit 204 and the motion compensation unit 205) may be highly integrated, but are separately represented in the example of fig. 18 for explanation purposes.
The partition unit 201 may partition a picture into one or more video blocks. The video encoder 200 and the video decoder 300 may support various video block sizes.
The mode selection unit 203 may, for example, select one of the coding modes (intra or inter) based on the error result, and supply the resulting intra or inter coded block to the residual generation unit 207 to generate residual block data, and to the reconstruction unit 212 to reconstruct the coded block to be used as a reference picture. In some examples, mode selection unit 203 may select a combined intra inter frame (CIIP) mode in which prediction is based on the inter prediction signal and the intra prediction signal. In the case of inter prediction, mode selection unit 203 may also select a resolution for the motion vector (e.g., sub-pixel or integer-pixel precision) of the block.
To perform inter prediction on the current video block, motion estimation unit 204 may generate motion information for the current video block by comparing one or more reference frames in buffer 213 with the current video block. Motion compensation unit 205 may determine a predictive video block for the current video block based on the motion information and decoded sample points of a picture from buffer 213 (other than the picture associated with the current video block).
For example, motion estimation unit 204 and motion compensation unit 205 may perform different operations on the current video block depending on whether the current video block is in an I-slice, a P-slice, or a B-slice.
In some examples, motion estimation unit 204 may perform uni-directional prediction on the current video block, and motion estimation unit 204 may search for a reference video block of the current video block in a list 0 or list 1 reference picture. Then, motion estimation unit 204 may generate a reference index indicating a reference picture in list 0 or list 1 that includes the reference video block and a motion vector indicating a spatial displacement between the current video block and the reference video block. Motion estimation unit 204 may output the reference index, the prediction direction indicator, and the motion vector as motion information of the current video block. The motion compensation unit 205 may generate a prediction video block of the current block based on a reference video block indicated by motion information of the current video block.
In other examples, motion estimation unit 204 may perform bi-prediction on the current video block, and motion estimation unit 204 may search for a reference video block of the current video block in a reference picture of list 0 and may also search for another reference video block of the current video block in a reference picture of list 1. Then, the motion estimation unit 204 may generate reference indexes indicating the reference pictures in list 0 and list 1 that include the reference video blocks, and motion vectors indicating spatial displacements between the reference video blocks and the current video block. Motion estimation unit 204 may output the reference indexes and the motion vectors of the current video block as motion information for the current video block. Motion compensation unit 205 may generate a prediction video block for the current video block based on the reference video blocks indicated by the motion information for the current video block.
In some examples, motion estimation unit 204 may output a full set of motion information for a decoding process of a decoder.
In some examples, motion estimation unit 204 may not output the full set of motion information for the current video block. Instead, motion estimation unit 204 may signal motion information for the current video block with reference to motion information of another video block. For example, motion estimation unit 204 may determine that the motion information of the current video block is sufficiently similar to the motion information of an adjacent video block.
In one example, motion estimation unit 204 may indicate a value in a syntax structure associated with the current video block that indicates to video decoder 300 that the current video block has the same motion information as another video block.
In another example, motion estimation unit 204 may identify another video block and a Motion Vector Difference (MVD) in a syntax structure associated with the current video block. The motion vector difference indicates a difference between the motion vector of the current video block and the motion vector of the indicated video block. The video decoder 300 may use the motion vector of the indicated video block and the motion vector difference to determine the motion vector of the current video block.
As described above, the video encoder 200 may predictively signal the motion vector. Two examples of prediction signaling techniques that may be implemented by video encoder 200 include Advanced Motion Vector Prediction (AMVP) and Merge mode signaling.
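As an illustration of the two signaling techniques above, the following minimal Python sketch shows how a decoder could reconstruct a motion vector in each case. The function and variable names are hypothetical and not taken from any codec specification.

```python
# Illustrative sketch (not a normative process): reconstructing a motion
# vector under the two predictive signaling schemes described above.

def reconstruct_mv(mode, mvp, mvd=None):
    """mvp: (x, y) motion vector predictor taken from another video block.
    mvd: (x, y) motion vector difference, present only in AMVP mode."""
    if mode == "merge":
        # Merge mode: inherit the indicated block's motion info unchanged.
        return mvp
    elif mode == "amvp":
        # AMVP: predictor plus an explicitly signaled difference.
        return (mvp[0] + mvd[0], mvp[1] + mvd[1])
    raise ValueError(f"unknown mode: {mode}")

# Example: predictor (12, -3) from a neighboring block, signaled MVD (2, 1).
assert reconstruct_mv("amvp", (12, -3), (2, 1)) == (14, -2)
assert reconstruct_mv("merge", (12, -3)) == (12, -3)
```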
The intra-prediction unit 206 may intra-predict the current video block. When intra-prediction unit 206 intra-predicts the current video block, intra-prediction unit 206 may generate prediction data for the current video block based on decoded sample points of other video blocks in the same picture. The prediction data for the current video block may include a predicted video block and various syntax elements.
Residual generation unit 207 may generate residual data for the current video block by subtracting (e.g., as indicated by a minus sign) the prediction video block of the current video block from the current video block. The residual data for the current video block may comprise residual video blocks that correspond to different sample components of the sample points in the current video block.
In other examples, for example in skip mode, there may be no residual data for the current video block, and the residual generation unit 207 may not perform the subtraction operation.
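A minimal sketch of this residual-generation step, assuming numpy arrays of 8-bit samples; in skip mode the subtraction is simply not performed:

```python
import numpy as np

def generate_residual(current_block, prediction_block, skip_mode=False):
    if skip_mode:
        return None  # no residual data; subtraction is not performed
    # Element-wise subtraction of the prediction from the current block,
    # widened to a signed type so negative differences survive.
    return current_block.astype(np.int16) - prediction_block.astype(np.int16)

cur = np.array([[128, 130], [127, 129]], dtype=np.uint8)
pred = np.array([[126, 131], [127, 128]], dtype=np.uint8)
print(generate_residual(cur, pred))                  # [[ 2 -1] [ 0  1]]
print(generate_residual(cur, pred, skip_mode=True))  # None
```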
Transform processing unit 208 may generate one or more transform coefficient video blocks for the current video block by applying one or more transforms to a residual video block associated with the current video block.
After transform processing unit 208 generates a transform coefficient video block associated with the current video block, quantization unit 209 may quantize the transform coefficient video block associated with the current video block based on one or more Quantization Parameter (QP) values associated with the current video block.
Inverse quantization unit 210 and inverse transform unit 211 may apply inverse quantization and inverse transform, respectively, to the transform coefficient video blocks to reconstruct residual video blocks from the transform coefficient video blocks. Reconstruction unit 212 may add the reconstructed residual video block to corresponding sample points of one or more prediction video blocks generated by prediction unit 202 to produce a reconstructed video block associated with the current block for storage in buffer 213.
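The quantization and inverse quantization steps above can be sketched as a scalar round trip. The step size below follows the widely cited relation Qstep ≈ 2^((QP − 4)/6) of HEVC-style codecs; the normative process uses integer scaling tables, so this floating-point version is only an approximation.

```python
import numpy as np

def qstep(qp):
    # HEVC-style step size: doubles every 6 QP steps (approximation).
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeffs, qp):
    return np.round(coeffs / qstep(qp)).astype(np.int32)

def dequantize(levels, qp):
    return levels * qstep(qp)

coeffs = np.array([100.0, -37.0, 5.0, 0.0])
levels = quantize(coeffs, qp=22)   # values carried in the bitstream
recon = dequantize(levels, qp=22)  # decoder-side reconstruction
print(levels, recon)               # lossy: recon only approximates coeffs
```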
After reconstruction unit 212 reconstructs the video blocks, a loop filtering operation may be performed to reduce video blocking artifacts in the video blocks.
Entropy encoding unit 214 may receive data from other functional components of video encoder 200. When entropy encoding unit 214 receives the data, entropy encoding unit 214 may perform one or more entropy encoding operations to generate entropy encoded data and output a bitstream that includes the entropy encoded data.
Fig. 19 is a block diagram illustrating an example of a video decoder 300 that may be the video decoder 124 in the system 100 shown in fig. 17.
Video decoder 300 may be configured to perform any or all of the techniques of this disclosure. In the example of fig. 19, the video decoder 300 includes a number of functional components. The techniques described in this disclosure may be shared among various components of the video decoder 300. In some examples, the processor may be configured to perform any or all of the techniques described in this disclosure.
In the example of fig. 19, the video decoder 300 includes an entropy decoding unit 301, a motion compensation unit 302, an intra prediction unit 303, an inverse quantization unit 304, an inverse transform unit 305, a reconstruction unit 306, and a buffer 307. In some examples, video decoder 300 may perform a decoding process that is generally the inverse of the encoding process described with respect to video encoder 200 (fig. 18).
The entropy decoding unit 301 may retrieve the encoded bitstream. The encoded bitstream may include entropy encoded video data (e.g., encoded blocks of video data). Entropy decoding unit 301 may decode entropy-encoded video data, and motion compensation unit 302 may determine motion information including motion vectors, motion vector precision, reference picture list indices, and other motion information from the entropy-decoded video data. For example, the motion compensation unit 302 may determine this information by performing AMVP and Merge modes.
The motion compensation unit 302 may generate a motion compensation block by interpolating based on an interpolation filter. An identifier of the interpolation filter used with sub-pixel precision may be included in the syntax element.
Motion compensation unit 302 may use interpolation filters used by video encoder 200 during encoding of the video block to calculate interpolated values for sub-integer pixels of a reference block. The motion compensation unit 302 may determine the interpolation filters used by the video encoder 200 according to the received syntax information, and generate a prediction block using the interpolation filters.
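The simplest possible instance of such sub-integer interpolation is a bilinear half-pel filter, sketched below. Real codecs use longer filters (e.g., 8-tap) whose coefficients are defined by the standard; this example only illustrates the idea of generating samples between integer positions.

```python
import numpy as np

def half_pel_interpolate(row):
    row = row.astype(np.int32)
    # Average each pair of neighboring integer-position samples,
    # with +1 for round-to-nearest.
    return (row[:-1] + row[1:] + 1) >> 1

print(half_pel_interpolate(np.array([100, 104, 98], dtype=np.uint8)))
# -> [102 101]
```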
The motion compensation unit 302 may use some syntax information to determine the size of blocks used to encode frames and/or slices of a coded video sequence, partition information describing how to partition each macroblock of a picture of the coded video sequence, a mode indicating how to encode each partition, one or more reference frames (and reference frame lists) for each inter coded block, and other information used to decode the coded video sequence.
The intra prediction unit 303 may form a prediction block from spatially neighboring blocks using, for example, an intra prediction mode received in the bitstream. The inverse quantization unit 304 inversely quantizes (i.e., dequantizes) the quantized video block coefficients provided in the bitstream and decoded by the entropy decoding unit 301. The inverse transform unit 305 applies an inverse transform.
The reconstruction unit 306 may add the residual block to the corresponding prediction block generated by the motion compensation unit 302 or the intra prediction unit 303 to form a decoded block. A deblocking filter may also be applied to filter the decoded blocks to remove blocking artifacts, if desired. The decoded video blocks are then stored in a buffer 307, which provides reference blocks for subsequent motion compensation/intra prediction, and also generates decoded video for presentation on a display device.
Fig. 20 is a block diagram illustrating an example video processing system 2000 in which various techniques disclosed herein may be implemented. Various implementations may include some or all of the components of system 2000. The system 2000 may include an input 2002 for receiving video content. The video content may be received in a raw or uncompressed format (e.g., 8 or 10 bit multi-component pixel values), or may be received in a compressed or codec format. The input 2002 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interfaces include wired interfaces such as ethernet, passive Optical Network (PON), and wireless interfaces such as Wi-Fi or cellular interfaces.
System 2000 may include a codec component 2004 that may implement various encoding or coding methods described herein. The codec component 2004 may reduce the average bit rate of the video from the input 2002 to the output of the codec component 2004 to produce a codec representation of the video. Thus, codec techniques are sometimes referred to as video compression or video transcoding techniques. The output of codec component 2004 may be stored, or transmitted via a communication connection, as represented by the component 2006. Component 2008 can use a stored or communicated bitstream (or codec) representation of the video received at input 2002 to generate pixel values or displayable video that is sent to display interface 2010. The process of generating user-viewable video from a bitstream representation is sometimes referred to as video decompression. Further, while certain video processing operations are referred to as "codec" operations or tools, it should be understood that encoding tools or operations are used at the encoder, and the corresponding decoding tools or operations that invert the results of the encoding will be performed by the decoder.
Examples of a peripheral bus interface or display interface may include a Universal Serial Bus (USB) or a high-definition multimedia interface (HDMI) or displayport, among others. Examples of storage interfaces include SATA (serial advanced technology attachment), PCI, IDE interfaces, and the like. The techniques described herein may be implemented in various electronic devices, such as mobile phones, laptop computers, smart phones, or other devices capable of performing digital data processing and/or video display.
Fig. 21 shows a flow diagram of an exemplary method for video processing. The method 2100 comprises: performing 2102 a conversion between a video comprising a plurality of color components and a bitstream representation of the video, wherein the bitstream representation conforms to a rule specifying that one or more syntax elements are included in the bitstream representation for two color components to indicate whether a transform quantization bypass mode is applicable to video blocks representing the two color components in the bitstream representation, and wherein, when the transform quantization bypass mode is applicable to the video blocks, the video blocks are represented in the bitstream representation without using transform and quantization processes or are obtained from the bitstream representation without using inverse transform and inverse quantization processes.
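As a rough illustration of the bypass behavior described above, the following Python sketch contrasts the two encoder-side paths selected by the per-block flag. `transform2d` and `quantize` are hypothetical stand-ins for the normative transform and quantization stages, not the codec's actual operations.

```python
import numpy as np

def transform2d(block):
    return np.fft.fft2(block).real  # placeholder for a DCT-style transform

def quantize(coeffs, qp):
    return np.round(coeffs / (2.0 ** ((qp - 4) / 6.0)))

def encode_residual(residual, cu_transquant_bypass_flag, qp=22):
    if cu_transquant_bypass_flag:
        # Bypass mode: residual carried losslessly, no transform, no quantization.
        return residual
    # Normal lossy path: transform, then quantize.
    return quantize(transform2d(residual), qp)
```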
In some embodiments of method 2100, the one or more syntax elements indicating whether the transform quantization bypass mode is applicable are represented by one or more cu_transquant_bypass_flag syntax elements. In some embodiments of method 2100, in response to enabling dual-tree partitioning processing for two video blocks, a first cu_transquant_bypass_flag of a first video block of a luma component and a second cu_transquant_bypass_flag of a second video block of at least one chroma component are coded separately, and the video blocks include the first video block and the second video block. In some embodiments of method 2100, the one or more cu_transquant_bypass_flag syntax elements are context coded. In some embodiments of method 2100, the context is selected based on at least one of the plurality of color components. In some embodiments of method 2100, one cu_transquant_bypass_flag from the one or more cu_transquant_bypass_flag syntax elements is predictively coded using another cu_transquant_bypass_flag from the one or more cu_transquant_bypass_flag syntax elements. In some embodiments of method 2100, the codec structure of the two video blocks determines whether a single cu_transquant_bypass_flag is indicated in the bitstream representation for the plurality of color components or whether multiple cu_transquant_bypass_flag syntax elements are indicated in the bitstream representation for the plurality of color components.
In some embodiments of method 2100, the codec structure comprises a single-tree codec structure or a dual-tree codec structure. In some embodiments of method 2100, whether the transform quantization bypass mode is applicable to a first video block of a first color component is determined based on whether the transform and quantization process or the inverse transform and inverse quantization process is not used on sample points located in a corresponding region of a second video block of a second color component, wherein the corresponding region of the second video block corresponds to a region of the first video block, and wherein the video blocks include the first video block and the second video block. In some embodiments of method 2100, the size of the corresponding region of the second video block and the coordinates of the top-left sample point of the second video block of the second color component depend on the color format. In some embodiments of method 2100, in response to the first video block having a size W × H and a top-left sample point located at (x0, y0), and in response to the color format being 4:2:0, the corresponding region of the second video block has a size of 2W × 2H and a top-left sample point located at (2*x0, 2*y0).
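The corresponding-region mapping just described can be sketched as follows; the subsampling factors per color format (4:2:0 → 2× in both directions, 4:2:2 → 2× horizontally, 4:4:4 → 1×) are the conventional ones and are listed here as assumptions.

```python
# (x0, y0, W, H) of a chroma block -> corresponding luma region.
SUBSAMPLING = {"4:2:0": (2, 2), "4:2:2": (2, 1), "4:4:4": (1, 1)}

def corresponding_luma_region(x0, y0, w, h, color_format):
    sx, sy = SUBSAMPLING[color_format]
    return (x0 * sx, y0 * sy, w * sx, h * sy)  # (x, y, width, height)

print(corresponding_luma_region(8, 4, 16, 8, "4:2:0"))  # (16, 8, 32, 16)
```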
In some embodiments of method 2100, the first color component is a chroma component. In some embodiments of method 2100, the chroma component is a blue-difference component or a red-difference component. In some embodiments of method 2100, the second color component is a luma component. In some embodiments of method 2100, the transform and quantization process or the inverse transform and inverse quantization process is not used for the first video block in response to all sample points located in the corresponding region of the second video block being either coded without use of the transform and quantization process or derived without use of the inverse transform and inverse quantization process. In some embodiments of method 2100, the transform and quantization process or the inverse transform and inverse quantization process is not used for the first video block of the first color component in response to at least one sample point located in the corresponding region of the second video block of the second color component being either coded without use of the transform and quantization process or derived without use of the inverse transform and inverse quantization process.
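The two alternative rules above ("all sample points" versus "at least one sample point") reduce to an all/any test over the corresponding luma region. The sketch below assumes a hypothetical per-sample boolean map `luma_bypass_mask` marking bypass-coded luma samples.

```python
import numpy as np

def chroma_bypass(luma_bypass_mask, region, rule="all"):
    x, y, w, h = region
    patch = luma_bypass_mask[y:y + h, x:x + w]
    # "all": every luma sample in the region must be bypass-coded;
    # "any": a single bypass-coded sample suffices.
    return bool(patch.all()) if rule == "all" else bool(patch.any())
```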
In some embodiments of method 2100, in response to not using the transform and quantization process or the inverse transform and inverse quantization process for the first video block or the second video block, the bitstream representation does not include a cu_transquant_bypass_flag for the first video block or the second video block. In some embodiments of method 2100, whether side information indicating use of the transform and quantization process or the inverse transform and inverse quantization process is included in the bitstream representation of the first video block of the first color component is determined based on whether the transform and quantization process or the inverse transform and inverse quantization process is not used for one or more video blocks in the corresponding region of the second color component. In some embodiments of method 2100, in response to not using the transform and quantization process or the inverse transform and inverse quantization process for all of the one or more video blocks in the corresponding region of the second color component, the side information of the transform and quantization process or the inverse transform and inverse quantization process is indicated in the bitstream representation of the first video block of the first color component.
In some embodiments of method 2100, in response to not using the transform and quantization process or the inverse transform and inverse quantization process for only some of the one or more video blocks in the corresponding region of the second color component, the side information of the transform and quantization process or the inverse transform and inverse quantization process is excluded from the bitstream representation of the first video block of the first color component. In some embodiments of method 2100, in response to not using the transform and quantization process or the inverse transform and inverse quantization process for one or more video blocks in the corresponding region of the second color component, the side information of the transform and quantization process or the inverse transform and inverse quantization process is indicated in the bitstream representation of the first video block of the first color component. In some embodiments of method 2100, the one or more video blocks comprise a video block covering a center location of the corresponding region of the second color component.
In some embodiments of method 2100, in response to not using the transform and quantization process or the inverse transform and inverse quantization process for one or more video blocks in the corresponding region of the second color component, the side information of the transform and quantization process or the inverse transform and inverse quantization process is excluded from the bitstream representation of the first video block of the first color component. In some embodiments of method 2100, a dual-tree codec structure is enabled for the first video block and the second video block. In some embodiments of method 2100, the plurality of color components comprises a luma component and a plurality of chroma components, wherein a first syntax element for at least one chroma component of the plurality of chroma components is based on a second syntax element of the luma component, and wherein the one or more syntax elements indicating whether the transform quantization bypass mode is applicable comprise the first syntax element and the second syntax element. In some embodiments of method 2100, in response to not using the transform and quantization process or the inverse transform and inverse quantization process for one or more video blocks associated with the luma component, the transform and quantization process or the inverse transform and inverse quantization process is not used for the video block of the at least one chroma component, and wherein at least one video block associated with the luma component corresponds to the video block associated with the at least one chroma component.
In some embodiments of method 2100, the one or more video blocks of the luma component comprise a Coding Unit (CU), a Prediction Unit (PU), or a Transform Unit (TU). In some embodiments of method 2100, the transform and quantization process or the inverse transform and inverse quantization process is not used for all of the one or more video blocks of the luma component. In some embodiments of method 2100, the video block of the at least one chroma component is divided into sub-blocks, wherein, in response to not using the transform and quantization process or the inverse transform and inverse quantization process for all of the one or more video blocks of the luma component, the transform and quantization process or the inverse transform and inverse quantization process is not used for a sub-block from the sub-blocks, and wherein the one or more video blocks associated with the luma component correspond to the sub-block associated with the at least one chroma component.
Fig. 22 shows a flow diagram of an exemplary method for video processing. The method 2200 comprises: determining 2202, based on a characteristic of a current video block of the video, whether a transform quantization bypass mode is applicable to the current video block, wherein, when the transform quantization bypass mode is applicable to the current video block, the current video block is represented in a bitstream representation without using transform and quantization processing or the current video block is obtained from the bitstream representation without using inverse transform and inverse quantization processing; and performing 2204 a conversion between the current video block and a bitstream representation of the video based on the determination.
In some embodiments of method 2200, the characteristic is a first size of the current video block, wherein the transform quantization bypass mode is applicable to the current video block in response to the first size of the current video block being greater than a second size of a Virtual Pipeline Data Unit (VPDU). In some embodiments of method 2200, the height or width of the current video block is greater than the height or width of the VPDU, respectively. In some embodiments of method 2200, the height and width of the current video block are greater than the height and width of the VPDU, respectively. In some embodiments of method 2200, a syntax element indicating that the transform and quantization process or the inverse transform and inverse quantization process is not used for the current video block is included in the bitstream representation. In some embodiments of method 2200, the method further comprises: partitioning the current video block into a plurality of VPDUs in response to a size of a Coding Tree Unit (CTU) of the current video block being greater than the second size of the VPDU. In some embodiments of method 2200, a size of a Coding Tree Unit (CTU) of the current video block is greater than the second size of the VPDU, wherein the current video block is not partitioned into VPDUs, wherein the transform quantization bypass mode, which does not use the transform and quantization process or the inverse transform and inverse quantization process on the current video block, applies to the current video block, and wherein a syntax element indicating that the transform and quantization process or the inverse transform and inverse quantization process is not used is not included in the bitstream representation.
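The VPDU-based gating just described can be sketched as a simple size test; both the "height or width" and the stricter "height and width" variants appear above. A 64×64 VPDU is assumed purely for illustration.

```python
VPDU_W, VPDU_H = 64, 64  # assumed VPDU dimensions (illustrative only)

def bypass_applicable(block_w, block_h, require_both=False):
    if require_both:
        return block_w > VPDU_W and block_h > VPDU_H
    return block_w > VPDU_W or block_h > VPDU_H

print(bypass_applicable(128, 32))        # True  (width exceeds the VPDU)
print(bypass_applicable(128, 32, True))  # False (height does not)
```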
In some embodiments of method 2200, the CTUs for the current video block allow intra prediction modes. In some embodiments of method 2200, the characteristic is a first size of the current video block, wherein the transform quantization bypass mode is applied to the current video block in response to the first size of the current video block being greater than a second size of the maximum allowed transform block. In some embodiments of method 2200, the height or width of the current video block is greater than the dimension of the maximum allowed transform block. In some embodiments of method 2200, the height and width of the current video block are greater than the dimension of the maximum allowed transform block. In some embodiments of method 2200, a syntax element is included in the bitstream representation that indicates that no transform and quantization or inverse transform and inverse quantization process is used for the current video block. In some embodiments of method 2200, the method further comprises: in response to a size of a Coding Tree Unit (CTU) of a current video block being greater than a second size of a maximum allowed transform block, the current video block is divided into a plurality of maximum allowed transform blocks.
In some embodiments of method 2200, wherein a size of a Coding Tree Unit (CTU) of the current video block is greater than a second size of the maximum allowed transform block, wherein the current video block is not divided into the plurality of maximum allowed transform blocks, wherein no transform and quantization process or inverse transform and inverse quantization process is used for the current video block, and wherein syntax elements indicating that no transform and quantization process or inverse transform and inverse quantization process is used are not included in the bitstream representation. In some embodiments of method 2200, the intra prediction mode is allowed for the CTU of the current video block. In some embodiments of method 2200, in response to the first size of the current video block being greater than the second size of the Virtual Pipeline Data Unit (VPDU), a codec mode is enabled that does not apply a transform to the current video block.
In some embodiments of method 2200, the height or width of the current video block is greater than the height or width of the VPDU, respectively. In some embodiments of method 2200, the height and width of the current video block are greater than the height and width of the VPDU, respectively. In some embodiments of method 2200, a syntax element indicating that the codec mode does not apply the transform to the current video block is included in the bitstream representation. In some embodiments of method 2200, the method further comprises: partitioning the current video block into a plurality of VPDUs in response to a size of a Coding Tree Unit (CTU) of the current video block being greater than the second size of the VPDU. In some embodiments of method 2200, a size of a Coding Tree Unit (CTU) of the current video block is greater than the second size of the VPDU, and wherein the current video block is not divided into the plurality of VPDUs, wherein no transform is applied to the current video block, and wherein a syntax element indicating that the codec mode does not apply a transform to the current video block is not included in the bitstream representation.
In some embodiments of method 2200, intra prediction modes are allowed for the CTU of the current video block. In some embodiments of method 2200, a codec mode that does not apply a transform to the current video block is enabled in response to the first size of the current video block being greater than the second size of the maximum allowed transform block. In some embodiments of method 2200, the height or width of the current video block is greater than the size of the maximum allowed transform block. In some embodiments of method 2200, the height and width of the current video block are greater than the size of the maximum allowed transform block. In some embodiments of method 2200, a syntax element is included in the bitstream representation that indicates that the codec mode is not to apply the transform to the current video block. In some embodiments of method 2200, the method further comprises: in response to a size of a Coding Tree Unit (CTU) of a current video block being greater than a second size of a maximum allowed transform block, the current video block is divided into a plurality of maximum allowed transform blocks.
In some embodiments of method 2200, wherein a size of a Coding Tree Unit (CTU) of the current video block is greater than a second size of the maximum allowed transform block, wherein the current video block is not divided into the plurality of maximum allowed transform blocks, wherein no transform and quantization process or inverse transform and inverse quantization process is used for the current video block, and wherein syntax elements indicating that the coding mode does not apply a transform to the current video block are not included in the bitstream representation. In some embodiments of method 2200, intra prediction modes are allowed for the CTU of the current video block. In some embodiments of method 2200, the coding mode is a transform skip mode, a Differential Pulse Code Modulation (DPCM) mode, or a quantized residual DPCM (QR-DPCM) mode, and wherein, in the QR-DPCM mode, a Differential Pulse Code Modulation (DPCM) representation is used to represent a difference between a quantized residual of an intra prediction of the current video block and a prediction of the quantized residual in a bitstream representation of the current video block.
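The quantized residual DPCM idea defined above can be sketched for the vertical direction as follows: instead of the quantized residual q[i][j] itself, the difference q[i][j] − q[i−1][j] is carried (row 0 is sent as-is), and the decoder undoes this with a running vertical sum. This mirrors the published QR-BDPCM formulation but is not the normative integer process.

```python
import numpy as np

def qr_dpcm_vertical(q):
    d = q.copy()
    d[1:, :] -= q[:-1, :]          # differences against the row above
    return d

def inverse_qr_dpcm_vertical(d):
    return np.cumsum(d, axis=0)    # running sum restores q exactly

q = np.array([[3, 1], [4, 1], [5, 9]])
assert (inverse_qr_dpcm_vertical(qr_dpcm_vertical(q)) == q).all()
```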
Fig. 23 shows a flow diagram of an exemplary method for video processing. The method 2300 comprises: based on a current video block of a video satisfying a dimensional constraint, determining 2302 that two or more codec modes are enabled to represent the current video block in a bitstream representation, wherein the dimensional constraint specifies that the same set of allowed dimensions is used for enabling or disabling the two or more codec modes for the current video block, and wherein, for an encoding operation, the two or more codec modes represent the current video block in the bitstream representation without using a transform operation on the current video block, or wherein, for a decoding operation, the current video block is obtained from the bitstream representation using the two or more codec modes without using an inverse transform operation; and performing 2304 a conversion between the current video block and the bitstream representation of the video based on one of the two or more codec modes.
In some embodiments of method 2300, the two or more codec modes include any two or more of a Transform Skip (TS) mode, a transform quantization bypass mode, a Block Differential Pulse Code Modulation (BDPCM) mode, and a quantized residual BDPCM (QR-BDPCM) mode, wherein, in the transform quantization bypass mode, no transform and quantization processing is applied to the current video block during an encoding operation and no inverse transform and inverse quantization processing is applied to obtain the current video block during a decoding operation, and wherein, in the QR-BDPCM mode, a difference between a quantized residual of an intra prediction of the current video block and a prediction of the quantized residual is represented in the bitstream representation. In some embodiments of method 2300, a single syntax element indicating an allowed maximum size value and/or an allowed minimum size value for the dimensions of the two or more codec modes is signaled in the bitstream representation. In some embodiments of method 2300, in response to one of the two or more codec modes being enabled, a syntax element indicating an allowed maximum size value and/or an allowed minimum size value for the dimensions of the two or more codec modes is signaled in the bitstream representation, and wherein the one codec mode does not depend on a non-identity transformation.
In some embodiments of method 2300, the syntax element is a log2_transform_skip_max_size_minus2 value. In some embodiments of method 2300, the one codec mode is a Transform Skip (TS) mode, a transform quantization bypass mode, or a QR-BDPCM mode. In some embodiments of method 2300, the one codec mode is enabled at a level other than the codec unit level. In some embodiments of method 2300, the one codec mode comprises a Transform Skip (TS) mode or a quantized residual block differential pulse code modulation (QR-BDPCM) mode, and wherein the syntax element is signaled as a log2_transform_skip_max_size_minus2 value in response to the TS mode or the QR-BDPCM mode being enabled.
In some embodiments of method 2300, the one codec mode comprises a Transform Skip (TS) mode or a transform bypass mode, and wherein the syntax element is signaled as a log2_transform_skip_max_size_minus2 value in response to the TS mode or the transform bypass mode being enabled. In some embodiments of method 2300, the enabled codec mode is from the two or more codec modes, and wherein an indication of use of the codec mode is included in the bitstream representation. In some embodiments of method 2300, the method further comprises: based on a current video block of the video not satisfying the dimensional constraint, determining not to enable the two or more codec modes for the current video block. In some embodiments of method 2300, in response to not enabling a codec mode for the current video block, an indication of use of the codec mode from the two or more codec modes is not included in the bitstream representation.
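The shared size bound of method 2300 can be sketched as follows. The derivation `1 << (v + 2)` follows the usual "_minus2" log2 convention of such syntax elements; treating one signaled value as governing TS, QR-BDPCM, and the bypass mode alike is the scenario described above, not a normative rule.

```python
def max_no_transform_size(log2_transform_skip_max_size_minus2):
    return 1 << (log2_transform_skip_max_size_minus2 + 2)

def modes_enabled(w, h, log2_minus2):
    m = max_no_transform_size(log2_minus2)
    ok = w <= m and h <= m
    # One shared dimension constraint enables (or disables) all modes together.
    return {"TS": ok, "QR-BDPCM": ok, "transquant_bypass": ok}

print(modes_enabled(16, 8, log2_minus2=2))  # bound = 16 -> all True
```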
Fig. 24 shows a flow diagram of an exemplary method for video processing. The method 2400 includes: determining 2402 to enable two codec modes to represent the current video block in a bitstream representation based on the current video block of the video satisfying a dimensional constraint, wherein the dimensional constraint specifies: using the same set of allowed dimensions for enabling two codec modes, and wherein for an encoding operation the two codec modes represent a current video block in a bitstream representation without using a transform operation on the current video block, or wherein for a decoding operation the two codec modes are used to obtain the current video block from the bitstream representation without using an inverse transform operation; and performing 2404 a conversion between the current video block and a bitstream representation of the video based on one of the two codec modes.
In some embodiments of method 2400, the two codec modes include a transform quantization bypass mode and a Transform Skip (TS) mode, wherein, in the transform quantization bypass mode, no transform and quantization processing is applied to the current video block during an encoding operation, and wherein, in the transform quantization bypass mode, no inverse transform and inverse quantization processing is applied to obtain the current video block during a decoding operation. In some embodiments of method 2400, the two codec modes include a transform quantization bypass mode and a quantized residual block differential pulse code modulation (QR-BDPCM) mode, wherein, in the transform quantization bypass mode, no transform and quantization processing is applied to the current video block during an encoding operation, wherein, in the transform quantization bypass mode, no inverse transform and inverse quantization processing is applied to obtain the current video block during a decoding operation, and wherein, in the QR-BDPCM mode, a difference between a quantized residual of an intra prediction of the current video block and a prediction of the quantized residual is represented in the bitstream representation. In some embodiments of method 2400, a set of dimensions for enabling the TS mode is different from the dimensions associated with the QR-BDPCM mode. In some embodiments of method 2400, when the TS mode is not allowed or is disabled for a video unit, the QR-BDPCM mode is enabled for the video unit.
In some embodiments of method 2400, the two codec modes include a quantized residual block differential pulse code modulation (QR-BDPCM) mode and a Transform Skip (TS) mode. In some embodiments of method 2400, whether a QR-BDPCM mode that does not apply a transform to the current video block is enabled is based on whether a Transform Skip (TS) mode is enabled for the current video block or on whether a transform quantization bypass mode that does not use a transform and quantization process or an inverse transform and inverse quantization process is enabled for the current video block. In some embodiments of method 2400, the method further comprises: performing a determination of whether the bitstream representation includes a syntax element indicating whether the QR-BDPCM mode is enabled for a current video unit of the video, wherein the determination is performed based on whether the TS mode is enabled for the current video block or whether the transform quantization bypass mode is enabled.
In some embodiments of method 2400, the current video unit comprises a sequence containing the current video block, a Transform Unit (TU), a Prediction Unit (PU), a Coding Unit (CU), a picture, or a slice. In some embodiments of method 2400, a QR-BDPCM mode that does not apply a transform to the current video block is enabled in response to enabling the transform quantization bypass mode for the current video block and in response to disabling the TS mode for the current video block. In some embodiments of method 2400, the allowed maximum or the allowed minimum of the dimensions of the two codec modes is signaled at sequence, view, picture, slice group, slice, Coding Tree Unit (CTU), or video unit level in the bitstream representation. In some embodiments of method 2400, in the bitstream representation, the allowed maximum or the allowed minimum for the dimensions of the two codec modes is signaled in a Sequence Parameter Set (SPS), a Video Parameter Set (VPS), a Picture Parameter Set (PPS), a slice header, or a slice group header. In some embodiments of method 2400, for the two codec modes enabled for the current video block, an indication of use of the two codec modes is represented in the bitstream representation.
In some embodiments of method 2400, the method further comprises: based on a current video block of the video not satisfying the dimensional constraint, determining not to enable the two codec modes for the current video block. In some embodiments of method 2400, in response to not enabling the two codec modes for the current video block, an indication of use of the two codec modes is not included in the bitstream representation.
Fig. 25 shows a flow diagram of an exemplary method for video processing. The method 2500 includes: determining 2502 to enable a codec mode to represent a current video block in a bitstream representation based on a current video block of the video satisfying a dimensional constraint, wherein the codec mode represents the current video block in the bitstream representation without using a transform operation on the current video block during an encoding operation, or wherein the current video block is obtained from the bitstream representation without using an inverse transform operation during a decoding operation, and wherein the dimensional constraint specifies: a first maximum transform block size of a current video block of the codec mode to which no transform operation or inverse transform operation is applied is different from a second maximum transform block size of a current video block using another codec tool to which a transform operation or inverse transform operation is applied; and performing 2504 a conversion between the current video block and a bitstream representation of the video based on the codec mode.
In some embodiments of method 2500, the codec mode comprises a Transform Skip (TS) mode, a transform quantization bypass mode, a quantized residual block differential pulse code modulation (QR-BDPCM) mode, or a Block Differential Pulse Code Modulation (BDPCM) mode, wherein, in the transform quantization bypass mode, transform and quantization processing are not applied to the current video block during an encoding operation, wherein, in the transform quantization bypass mode, inverse transform and inverse quantization processing are not applied to obtain the current video block during a decoding operation, and wherein, in the QR-BDPCM mode, a difference between a quantized residual of an intra prediction of the current video block and a prediction of the quantized residual is represented in the bitstream representation.
Fig. 26 shows a flow diagram of an exemplary method for video processing. The method 2600 comprises: performing 2602 a conversion between a current video block of a video and a bitstream representation of the video, wherein the current video block is coded in the bitstream representation using a quantized residual block differential pulse code modulation (QR-BDPCM) mode, in which a difference between a quantized residual of an intra prediction of the current video block and a prediction of the quantized residual is represented in the bitstream representation, wherein the bitstream representation complies with a format rule specifying whether side information of the QR-BDPCM mode and/or a syntax element indicating applicability of a Transform Skip (TS) mode to the current video block is included in the bitstream representation, and wherein the side information comprises at least one of an indication of use of the QR-BDPCM mode or a prediction direction of the QR-BDPCM mode.
In some embodiments of method 2600, the TS mode is applied to the current video block in response to the QR-BDPCM mode being used for the current video block. In some embodiments of method 2600, the syntax element is not present in the bitstream representation and is inferred to indicate that the TS mode is applied to the current video block. In some embodiments of method 2600, the format rule specifies that the side information is included in the bitstream representation after the syntax element. In some embodiments of method 2600, the syntax element indicates that the TS mode is applied to the current video block. In some embodiments of method 2600, the format rule specifies that the side information of the QR-BDPCM mode is included in the bitstream representation in response to applying the TS mode to the current video block. In some embodiments of method 2600, the format rule specifies that the side information of the QR-BDPCM mode is not included in the bitstream representation in response to not applying the TS mode to the current video block.
In some embodiments of method 2600, the format rule specifies that side information of a QR-BDPCM mode is included in the bitstream representation in response to not applying a TS mode to the current video block. In some embodiments of method 2600, the format rule specifies that side information of the QR-BDPCM mode is not to be included in the bitstream representation in response to applying the TS mode to the current video block.
Fig. 27 shows a flow diagram of an exemplary method for video processing. The method 2700 includes: determining 2702 to code a current video block of the video using a transform quantization bypass mode in which transform and quantization processing are not applied to the current video block; and based on the determination, performing 2704 a conversion between the current video block and a bitstream representation of the video by disabling Luma Mapping and Chroma Scaling (LMCS) processing, wherein disabling the LMCS processing disables the switching of sample points of the current video block between a reshaped domain and an original domain if the current video block is from a luma component, or wherein disabling the LMCS processing disables the scaling of a chroma residual of the current video block if the current video block is from a chroma component.
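The behavior described above can be sketched as follows: LMCS normally maps luma samples through a forward lookup table and scales chroma residuals, but both steps are skipped for a transform-quantization-bypass block so it stays in the original domain. `fwd_lut` and `chroma_scale` below are hypothetical placeholders, not the standard's derived mapping.

```python
import numpy as np

# Hypothetical forward-reshaping LUT (a real LMCS LUT is piecewise linear
# and derived from signaled parameters).
fwd_lut = np.clip(np.arange(256) * 1.1, 0, 255).astype(np.uint8)

def apply_lmcs_luma(samples, bypass):
    # Bypass block: no mapping between original and reshaped domains.
    return samples if bypass else fwd_lut[samples]

def apply_lmcs_chroma_residual(residual, chroma_scale, bypass):
    # Bypass block: no chroma residual scaling.
    return residual if bypass else np.round(residual * chroma_scale)
```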
In some embodiments of method 2700, wherein the intra prediction mode is applied to the current video block, and wherein a prediction signal or reference sample points for the intra prediction mode of the current video block are mapped from the reshaped domain to the original domain. In some embodiments of method 2700, wherein an Intra Block Copy (IBC) mode is applied to the current video block, and wherein prediction signals or reference sample points in the IBC mode for the current video block are mapped from the reshaped domain to the original domain. In some embodiments of method 2700, wherein a combined inter-intra prediction (CIIP) mode is applied to the current video block, and wherein prediction signals or reference sample points in the CIIP mode for the current video block are mapped from the reshaped domain to the original domain. In some embodiments of method 2700, wherein a combined inter-intra prediction (CIIP) mode is applied to the current video block, and wherein the mapping of the prediction signal used in the CIIP mode of the current video block from the original domain to the reshaped domain is not performed.
In some embodiments of method 2700, the method further comprises: generating, in palette mode and in the original domain, a palette table for the current video block. In some embodiments of method 2700, the method further comprises: allocating a first buffer and a second buffer to the current video block, wherein the first buffer is configured to store a sum of the prediction signal and the residual signal, and wherein the second buffer is configured to store a reshaped sum obtained by mapping the sum of the prediction signal and the residual signal from the original domain to the reshaped domain.
In some embodiments of method 2700, the method further comprises: allocating a buffer to the current video block, the buffer being configured to store a sum of the prediction signal and the residual signal in the original domain, wherein a reshaped sum is derived by mapping the sum of the prediction signal and the residual signal from the original domain to the reshaped domain. In some embodiments of method 2700, the method further comprises: applying, to the current video block, a codec mode that uses reference sample points within a current slice, slice group, or picture of the current video block.
Fig. 28 shows a flow diagram of an exemplary method for video processing. The method 2800 includes: performing 2802 a first determination of whether to codec a current video block of video using a first codec mode in which no transform operation is applied to the current video block; performing 2804 a second determination of whether to codec one or more video blocks of the video using a second codec mode, wherein the one or more video blocks include a reference sample point for the current video block; performing 2806 a third determination, based on the first determination and the second determination, whether a third coding mode related to the intra prediction process is applicable to the current video block; and based on the third determination, performing 2808 a conversion between the current video block and a bitstream representation of the video.
In some embodiments of method 2800, the second codec mode comprises a transform quantization bypass mode, a Transform Skip (TS) mode, a quantized residual block differential pulse code modulation (QR-BDPCM) mode, a Block Differential Pulse Code Modulation (BDPCM) mode, or a Pulse Code Modulation (PCM) mode, wherein, in the transform quantization bypass mode, no transform and quantization processing is applied to the current video block, and wherein, in the QR-BDPCM mode, a difference between a quantized residual of an intra prediction of the current video block and a prediction of the quantized residual is represented in the bitstream representation. In some embodiments of method 2800, wherein the first codec mode and the second codec mode are transform quantization bypass modes that do not apply transform and quantization processing to video blocks of the video, the third determination of whether the third codec mode is applicable to the current video block is performed based on whether the current video block is coded using the transform quantization bypass mode and based on whether one or more adjacent video blocks of the current video block that provide the reference sample points are coded using the transform quantization bypass mode. In some embodiments of method 2800, the third codec mode comprises an intra prediction mode, an Intra Block Copy (IBC) mode, or a combined inter-intra prediction (CIIP) mode.
In some embodiments of method 2800, wherein the first codec mode is a transform quantization bypass mode that does not apply transform and quantization processing to video blocks of the video, the prediction signal is derived using reference sample points that are not converted by the forward reshaping process or the inverse reshaping process, in response to the current video block being coded using the transform quantization bypass mode and in response to the reference sample points being located in the current video block. In some embodiments of method 2800, wherein the first codec mode is a transform quantization bypass mode that does not apply transform and quantization processing to video blocks of the video, the prediction signal is derived using converted reference sample points obtained by an inverse reshaping process that maps the reference sample points to the original domain, in response to the current video block being coded using the transform quantization bypass mode and in response to the reference sample points being outside the current video block. In some embodiments of method 2800, wherein the first codec mode is a transform quantization bypass mode that does not apply transform and quantization processing to video blocks of the video, the prediction signal is derived using reference sample points that are not converted by the forward reshaping process or the inverse reshaping process, in response to the current video block not being coded using the transform quantization bypass mode and in response to the reference sample points being outside the current video block.
In some embodiments of method 2800, wherein the first codec mode is a transform quantization bypass mode that does not apply transform and quantization processing to video blocks of the video, the prediction signal is derived using converted reference sample points obtained by a forward reshaping process that maps the reference sample points to the reshaped domain, in response to the current video block not being coded using the transform quantization bypass mode and in response to the reference sample points being located in the current video block. In some embodiments of method 2800, wherein the current video block is coded using the third codec mode, the reference sample points are from one or more video blocks located in the same slice, brick, tile, or picture as the current video block, and the intra prediction process is performed on the current video block using the third codec mode. In some embodiments of method 2800, the current video block is associated with a luma component or a green color component.
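One way to read the four cases above is as a domain-matching rule: reference sample points must be brought into the domain of the block being predicted, where bypass-coded blocks are reconstructed in the original domain and other blocks in the reshaped domain. The sketch below condenses the cases under that interpretation; it is a simplification, not the normative decision process.

```python
def reference_reshaping(cur_in_original_domain, ref_in_original_domain):
    """Return which reshaping map (if any) to apply to the reference
    sample points before prediction."""
    if cur_in_original_domain == ref_in_original_domain:
        return None        # domains already match: use references as-is
    if cur_in_original_domain:
        return "inverse"   # reshaped references -> original domain
    return "forward"       # original-domain references -> reshaped domain

assert reference_reshaping(True, True) is None       # both bypass-coded
assert reference_reshaping(True, False) == "inverse"
assert reference_reshaping(False, True) == "forward"
```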
Fig. 29 shows a flow diagram of an exemplary method for video processing. The method 2900 includes: performing 2902 a conversion between a current video block of a video and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that specifies whether a syntax element indicating whether the current video block is coded using a transform quantization bypass mode is included in the bitstream representation, and wherein the current video block is represented in the bitstream representation without using transform and quantization processing when the transform quantization bypass mode is applicable to the current video block.
In some embodiments of method 2900, the format rule specifies that the syntax element is included in the bitstream representation, and wherein the format rule specifies that the syntax element is included in the bitstream representation prior to the signaling of the use of one or more transform-matrix-related codec tools for the current video block. In some embodiments of method 2900, the one or more transform-matrix-related codec tools include a Transform Skip (TS) mode, an explicit Multiple Transform Set (MTS) scheme, a Reduced Secondary Transform (RST) mode, a sub-block transform (SBT) mode, or a quantized residual block differential pulse code modulation (QR-BDPCM) mode, wherein, in the QR-BDPCM mode, a difference between a quantized residual of an intra prediction of the current video block and a prediction of the quantized residual is represented in the bitstream representation. In some embodiments of method 2900, a second syntax element for a residual coding technique is included in the bitstream representation based on whether the current video block is coded using the transform quantization bypass mode. In some embodiments of method 2900, the residual coding technique applies or does not apply a transform to the current video block.
In some embodiments of method 2900, wherein the second syntax element indicates, in the bitstream representation, a residual coding technique to apply the transform to the current video block in response to the transform quantization bypass mode not being applicable to the current video block and in response to the Transform Skip (TS) mode not being applied to the current video block. In some embodiments of method 2900, wherein, in response to the transform quantization bypass mode being applicable to the current video block and in response to the Transform Skip (TS) mode being applied to the current video block, the second syntax element indicates in the bitstream representation a residual coding technique that does not apply the transform to the current video block. In some embodiments of method 2900, wherein, in response to the transform quantization bypass mode being applicable to the current video block, the bitstream representation does not include signaling for a Transform Skip (TS) mode for the current video block.
In some embodiments of method 2900, a residual coding technique that does not apply a transform is applied to the current video block. In some embodiments of method 2900, the bitstream representation includes side information associated with a codec tool associated with the transform matrix based on whether the transform quantization bypass mode is applicable to the current video block. In some embodiments of method 2900, in response to the transform quantization bypass mode being applicable to the current video block, the bitstream representation includes side information indicating use of a Transform Skip (TS) mode, a quantized residual block differential pulse code modulation (QR-BDPCM) mode, or a Block Differential Pulse Code Modulation (BDPCM) mode. In some embodiments of method 2900, in response to the transform quantization bypass mode being applicable to the current video block, the bitstream representation does not include side information indicating use of a sub-block transform (SBT) mode, a Reduced Secondary Transform (RST) mode, or an explicit Multiple Transform Set (MTS) scheme.
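A parsing-order sketch of the embodiments above: the bypass flag is read first, and its value decides which transform-tool side information is present. `reader` is a hypothetical bitstream reader; `read_flag()` and `read_ue()` stand in for bit-level and Exp-Golomb reads and are not any real library's API.

```python
def parse_block_tools(reader):
    tools = {"cu_transquant_bypass_flag": reader.read_flag()}
    if tools["cu_transquant_bypass_flag"]:
        # Per the embodiments above: TS/QR-BDPCM/BDPCM side information
        # may still be signaled, but SBT/RST/explicit-MTS is not.
        tools["transform_skip_flag"] = reader.read_flag()
        tools["bdpcm_flag"] = reader.read_flag()
    else:
        tools["transform_skip_flag"] = reader.read_flag()
        tools["mts_idx"] = reader.read_ue()
        tools["rst_idx"] = reader.read_ue()
        tools["sbt_flag"] = reader.read_flag()
    return tools
```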
In some embodiments of method 2900, wherein the format rule specifies that the syntax element is included in the bitstream representation, and wherein the format rule specifies that the syntax element is included in the bitstream representation after signaling one or more codec tools associated with the transform matrix. In some embodiments of method 2900, wherein the syntax element is coded in the bitstream representation when transform matrix dependent coding tools from the one or more transform matrix dependent coding tools are applied to the current video block. In some embodiments of method 2900, wherein the one or more codec tools associated with the transform matrix include a Transform Skip (TS) mode, a quantized residual block differential pulse codec modulation (QR-BDPCM) mode, or a Block Differential Pulse Codec Modulation (BDPCM) mode. In some embodiments of method 2900, wherein the format rule specifies that no syntax element is included in the bitstream representation in response to applying a sub-block transform (SBT) mode, a Reduced Secondary Transform (RST) mode, or an explicit Multiple Transform Set (MTS) scheme to the current video block.
Fig. 30 shows a flow diagram of an exemplary method for video processing. The method 3000 includes: determining 3002 that a transform quantization bypass mode is applicable to a current video block of the video, wherein in the transform quantization bypass mode, transform and quantization processing is not used on the current video block; disabling 3004 a filtering method for sample points of the current video block based on the determining; and performing 3006 a conversion between the current video block and a bitstream representation of the video based on the determining and the disabling.
In some embodiments of method 3000, the filtering method comprises an Adaptive Loop Filtering (ALF) method or a nonlinear ALF method. In some embodiments of method 3000, the filtering method comprises a bilateral filter, a diffusion filter, or a post-reconstruction filter that modifies reconstructed sample points of the current video block. In some embodiments of method 3000, the filtering method comprises a position dependent intra prediction combining (PDPC) method.
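As an illustrative sketch of method 3000 only, with placeholder functions standing in for the actual filters:

    def alf(samples):
        return samples  # placeholder for (nonlinear) adaptive loop filtering

    def post_reconstruction_filter(samples):
        return samples  # placeholder for bilateral or diffusion filtering

    def filter_reconstruction(samples, transquant_bypass: bool):
        # Every sample-modifying filter is disabled for a block coded with
        # transform quantization bypass, preserving the lossless samples.
        if transquant_bypass:
            return samples
        return post_reconstruction_filter(alf(samples))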
Fig. 31 shows a flow diagram of an exemplary method for video processing. The method 3100 comprises: performing 3102 a first determination that a transform quantization bypass mode is applicable to a current video block of the video, wherein in the transform quantization bypass mode, transform and quantization processing is not used on the current video block; performing 3104, in response to the first determination, a second determination that a transform selection mode in implicit Multiple Transform Set (MTS) processing is not applicable to the current video block; and performing 3106 a conversion between the current video block and a bitstream representation of the video based on the first determination and the second determination.
Fig. 32 shows a flow diagram of an exemplary method for video processing. The method 3200 includes: performing 3202 a first determination that a transform quantization bypass mode applies to a current video block of the video, wherein in the transform quantization bypass mode, transform and quantization processing is not used on the current video block, and wherein the current video block is associated with a chroma component; performing 3204, in response to the first determination, a second determination that sample points of the chroma component are not scaled in a Luma Mapping and Chroma Scaling (LMCS) process; and performing 3206 a conversion between the current video block and a bitstream representation of the video based on the first determination and the second determination.
Fig. 33 shows a flow diagram of an exemplary method for video processing. Method 3300 includes: determining 3302 that a transform quantization bypass mode is applicable to a current video block of the video, wherein in the transform quantization bypass mode, transform and quantization processing is not used on the current video block; disabling 3304, based on the determining, Luma Mapping and Chroma Scaling (LMCS) processing for a Coding Unit (CU), Coding Tree Unit (CTU), slice group, picture, or sequence of the current video block; and performing 3306, based on the determining and the disabling, a conversion between the current video block and a bitstream representation of the video, wherein disabling the LMCS processing disables switching of sample points of the current video block between a reshaped domain and an original domain if the current video block is from a luma component, or disables scaling of a chroma residual of the current video block if the current video block is from a chroma component.
In some embodiments of method 3300, a Sequence Parameter Set (SPS), Video Parameter Set (VPS), Picture Parameter Set (PPS), or slice header indicates that transform and quantization processing is not used for the current video block. In some embodiments of method 3300, the bitstream representation of the video does not include signaling of syntax elements related to LMCS processing.
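A compact Python sketch of the LMCS behavior of methods 3200 and 3300; the mapping functions below are placeholders for the actual LMCS operations:

    def luma_forward_map(samples):
        return samples  # placeholder: original domain -> reshaped domain

    def chroma_residual_scale(residual):
        return residual  # placeholder: chroma residual scaling

    def apply_lmcs(samples, is_luma: bool, transquant_bypass: bool):
        # LMCS is disabled for losslessly coded blocks: no domain switching
        # for luma samples and no residual scaling for chroma samples.
        if transquant_bypass:
            return samples
        return luma_forward_map(samples) if is_luma else chroma_residual_scale(samples)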
Fig. 34 shows a flow diagram of an exemplary method for video processing. The method 3400 comprises: performing 3402 a first determination of whether a current video block of the video is coded using a mode that applies an identity transform or no transform to the current video block; performing 3404, based on the first determination, a second determination of whether to apply a codec tool to the current video block; and performing 3406 a conversion between the current video block and a bitstream representation of the video based on the first determination and the second determination.
In some embodiments of the method 3400, the second determining comprises: determining not to apply the codec tool to the current video block based on a first determination to apply an identity transform or no transform to the current video block. In some embodiments of method 3400, the mode comprises a transform quantization bypass mode, a Transform Skip (TS) mode, a quantized residual block differential pulse codec modulation (QR-BDPCM) mode, a Differential Pulse Codec Modulation (DPCM) mode, or a Pulse Codec Modulation (PCM) mode, and the codec tool comprises a decoder-side motion derivation tool, a decoder-side intra mode decision tool, a combined inter-frame intra prediction (CIIP) mode, or a triangulation mode (TPM), wherein in the transform quantization bypass mode, no transform and quantization processing is applied to the current video block, and wherein in the QR-BDPCM mode, a difference between a quantized residual of an intra prediction of the current video block and a prediction of the quantized residual is represented in the bitstream representation. In some embodiments of the method 3400, the second determining comprises: determining not to apply a codec tool to the current video block based on a first determination to apply an identity transform or no transform to the current video block, wherein the codec tool comprises a prediction refinement tool using optical flow (PROF), a combined inter-frame intra prediction (CIIP) mode, or a triangulation mode (TPM).
In some embodiments of the method 3400, the second determining comprises: determining not to apply a codec tool to the current video block based on a first determination to apply an identity transform to the current video block, wherein the codec tool comprises a bi-directional optical flow (BDOF) tool, a combined inter-frame intra prediction (CIIP) mode, or a triangulation mode (TPM). In some embodiments of the method 3400, the second determining comprises: determining not to apply a codec tool to the current video block based on a first determination to apply an identity transform to the current video block, wherein the codec tool comprises a decoder-side motion vector refinement (DMVR) tool, a combined inter-frame intra prediction (CIIP) mode, or a triangulation mode (TPM).
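For illustration, a sketch of the gating recited for method 3400; the tool labels are shorthand for the codec tools listed above, and the exact set shown is one possible combination:

    def enabled_tools(identity_or_no_transform: bool) -> dict:
        # Decoder-side refinement tools are switched off for a block coded
        # with an identity transform or with no transform at all.
        tools = ["DMVR", "BDOF", "PROF", "CIIP", "TPM"]
        return {tool: not identity_or_no_transform for tool in tools}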
Fig. 35 shows a flow diagram of an exemplary method for video processing. The method 3500 comprises: performing 3502 a conversion between a current video block of the video and a bitstream representation of the video, wherein the bitstream representation includes a syntax element indicating whether a transform quantization bypass mode is applicable to the current video block, wherein the current video block is represented in the bitstream representation without using transform and quantization processing when the transform quantization bypass mode is applicable to the current video block, wherein the transform quantization bypass mode is applicable at a first video unit level of the current video block, and wherein the bitstream representation does not include signaling of the transform quantization bypass mode at a second video unit level, a video unit at the second video unit level being smaller than a video unit at the first video unit level.
In some embodiments of method 3500, the first video unit level comprises a picture level, slice level, or brick level of the current video block, and the second video unit level comprises a Coding Unit (CU), a Transform Unit (TU), or a Prediction Unit (PU). In some embodiments of method 3500, the bitstream representation includes signaling of whether the transform quantization bypass mode is applicable at the first video unit level based on whether a codec tool is applied to the current video block. In some embodiments of method 3500, the bitstream representation includes signaling of whether a codec tool is applied to the current video block based on whether the transform quantization bypass mode is applicable at the first video unit level. In some embodiments of method 3500, the codec tool is not applied to the current video block in response to the bitstream representation including signaling that the transform quantization bypass mode is applicable at the first video unit level. In some embodiments of method 3500, the transform quantization bypass mode is not applicable at the first video unit level in response to the bitstream representation indicating that the codec tool is to be applied to the current video block. In some embodiments of method 3500, the codec tool includes Luma Mapping and Chroma Scaling (LMCS) processing, decoder-side motion derivation, decoder-side intra mode decision, combined inter-frame intra prediction (CIIP) mode, or triangulation mode (TPM). In some embodiments of method 3500, the decoder-side motion derivation comprises a decoder-side motion vector refinement (DMVR) tool, a bi-directional optical flow (BDOF) tool, or a prediction refinement tool using optical flow (PROF).
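A sketch of the level-dependent signaling of method 3500, with hypothetical level names; no bitstream syntax is implied:

    FIRST_LEVELS = {"picture", "slice", "brick"}   # where the flag is coded
    SECOND_LEVELS = {"CU", "TU", "PU"}             # where it is never coded

    def bypass_flag_signaled(level: str) -> bool:
        # The transform quantization bypass flag appears only at the coarser
        # first video unit level; the finer levels inherit it unsignaled.
        if level in SECOND_LEVELS:
            return False
        return level in FIRST_LEVELS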
Fig. 36 shows a flow diagram of an exemplary method for video processing. The method 3600 includes: performing 3602 a conversion between a current video block of the video and a bitstream representation of the video, wherein the bitstream representation comprises a syntax element indicating whether a transform quantization bypass mode is applicable to the current video block, wherein the current video block is represented in the bitstream representation without using transform and quantization processing when the transform quantization bypass mode is applicable to the current video block, wherein the transform quantization bypass mode is applicable at a first video unit level of the current video block, and wherein the bitstream representation comprises side information used by the transform quantization bypass mode at a second video unit level of the current video block.
In some embodiments of method 3600, the first video unit level comprises a Prediction Unit (PU) or a Coding Unit (CU) of the current video block, and the second video unit level comprises a Transform Unit (TU) of the current video block. In some embodiments of method 3600, the side information is included once in the bitstream representation of the current video block in response to the PU or CU comprising a plurality of TUs. In some embodiments of method 3600, the side information is associated with a first TU in the CU, and the remaining one or more TUs share the side information.
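One way to picture the once-per-CU side information of method 3600; the TU objects here are plain dictionaries and purely illustrative:

    def share_side_info(tus: list, side_info: dict) -> list:
        # The side information is parsed once, with the first TU of the CU;
        # every remaining TU reuses the same values instead of re-signaling.
        for tu in tus:
            tu["side_info"] = side_info
        return tus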
Fig. 37 shows a flow diagram of an exemplary method for video processing. Method 3700 includes: determining 3702, for a current video block of the video, whether a mode in which a lossless codec technique is applied is applicable to the current video block; and based on the determination, performing 3704 a conversion between the current video block and a bitstream representation of the video, wherein the bitstream representation includes a syntax element indicating whether a codec tool is applicable at a video unit level for the current video block, wherein the video unit level is larger than a Coding Unit (CU), and wherein the codec tool is not applied to sample points within the video unit level.
In some embodiments of method 3700, the codec tool is not applied to sample points within the video unit level even when the syntax element of the codec tool indicates that the codec tool is applicable at the video unit level. In some embodiments of method 3700, the video unit level comprises a sequence, picture, view, slice, brick, sub-picture, Coding Tree Block (CTB), or Coding Tree Unit (CTU) of the current video block. In some embodiments of method 3700, the codec tool includes a filtering process including an Adaptive Loop Filtering (ALF) method, a nonlinear ALF method, Sample Adaptive Offset (SAO), a bilateral filter, or a Hadamard transform domain filter, a clipping process, a scaling method, or a decoder-side derivation technique. In some embodiments of method 3700, the syntax element of the codec tool indicates that the codec tool is not applicable at the video unit level in response to some or all of the sample points at the video unit level being coded using the applicable mode. In some embodiments of method 3700, the lossless codec technique comprises a TransQuantBypass mode that does not apply transform and quantization processing to the current video block, or the lossless codec technique comprises a near lossless codec technique. In some embodiments of method 3700, the near lossless codec technique comprises a quantization parameter of the current video block being within a particular range. In some embodiments of method 3700, the particular range is [4,4].
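As a sketch only, reading the recited range [4,4] literally as the single quantization parameter value 4:

    def unit_level_filters_enabled(lossless_used: bool, qp: int) -> bool:
        # A video-unit-level syntax element keeps filtering, clipping, scaling,
        # and decoder-side derivation disabled whenever lossless or
        # near-lossless coding is in use within the unit.
        near_lossless = (qp == 4)   # literal reading of the "[4,4]" range
        return not (lossless_used or near_lossless)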
Fig. 38 shows a flow diagram of an exemplary method for video processing. The method 3800 includes: determining 3802, for a current video block of a video, whether one or more syntax elements indicating whether an intra sub-block partition (ISP) mode or a sub-block transform (SBT) mode allows a non-Discrete Cosine Transform II (non-DCT2) transform are included in a bitstream representation of the current video block; and based on the determination, performing 3804 a conversion between the current video block and the bitstream representation of the video.
In some embodiments of method 3800, the one or more syntax elements are signaled in the bitstream representation in response to either of the ISP mode or the SBT mode being applied to the current video block. In some embodiments of method 3800, the one or more syntax elements are signaled in the bitstream representation in response to applying both the ISP mode and the SBT mode to the current video block. In some embodiments of method 3800, the non-DCT2 transform is allowed in response to the one or more syntax elements indicating a condition of "true". In some embodiments of method 3800, one syntax element is signaled in the bitstream representation in response to applying the ISP mode to the current video block. In some embodiments of method 3800, one syntax element is signaled in the bitstream representation in response to applying the SBT mode to the current video block. In some embodiments of method 3800, the bitstream representation includes another syntax element indicating use of explicit Multiple Transform Set (MTS) processing for the current video block or a dimension-based transform selection for an intra block of the current video block. In some embodiments of method 3800, the dimension-based transform selection comprises an implicit MTS applied to a non-ISP coded intra block of the current video block. In some embodiments of method 3800, the ISP mode using the non-DCT2 transform is applied to the current video block in response to the other syntax element indicating that explicit MTS processing is not applied to the current video block. In some embodiments of method 3800, the SBT mode using the non-DCT2 transform is applied to the current video block in response to the other syntax element indicating that explicit MTS processing is not applied to the current video block.
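A hypothetical parsing sketch for the syntax elements of method 3800; reader.read_flag is assumed and the flag names are invented for illustration:

    def parse_non_dct2_flags(reader, isp_used: bool, sbt_used: bool) -> dict:
        flags = {}
        # One syntax element may be signaled per applied mode; a value of
        # "true" allows transforms other than DCT-2 for that mode.
        if isp_used:
            flags["isp_allows_non_dct2"] = reader.read_flag("isp_non_dct2")
        if sbt_used:
            flags["sbt_allows_non_dct2"] = reader.read_flag("sbt_non_dct2")
        return flags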
From the foregoing, it will be appreciated that specific embodiments of the disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without departing from the scope of the disclosure. Accordingly, the disclosed technology is not limited except as by the appended claims.
Implementations of the subject matter and the functional operations described in this patent document may be implemented in various systems, digital electronic circuitry, or computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine readable storage device, a machine readable storage substrate, a memory device, a composition of matter effecting a machine readable propagated signal, or a combination of one or more of them. The term "data processing unit" or "data processing apparatus" includes all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or groups of computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
The specification and drawings are to be regarded as exemplary only, where exemplary means serving as an example. As used herein, the use of "or" is intended to include "and/or", unless the context clearly indicates otherwise.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various functions described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Also, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described herein should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples have been described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims (90)

1. A video processing method, comprising:
performing a first determination of whether to codec a current video block of video using a first codec mode in which no transform operation is applied to the current video block;
performing a second determination of whether to codec one or more video blocks of the video using a second codec mode, wherein the one or more video blocks include a reference sample point for the current video block;
performing a third determination based on the first determination and the second determination, determining whether a third coding mode related to intra prediction processing is applicable to the current video block; and
based on the third determination, performing a conversion between the current video block and a bitstream representation of the video,
wherein the second codec mode comprises a transform quantization bypass mode, a transform skip TS mode, a quantization residual block differential pulse codec modulation QR-BDPCM mode, a block differential pulse codec modulation BDPCM mode, or a pulse codec modulation PCM mode, wherein in the transform quantization bypass mode, no transform and quantization processing is applied to the one or more video blocks, and wherein in the QR-BDPCM mode, a difference between a quantization residual of an intra prediction of the one or more video blocks and a prediction of the quantization residual is represented in the bitstream representation.
2. The method of claim 1,
wherein the first codec mode and the second codec mode are transform quantization bypass modes that do not apply transform and quantization processing to video blocks of the video,
wherein the third determination of whether the third codec mode is applicable to the current video block is performed based on whether the current video block is coded using the transform quantization bypass mode and based on whether the one or more neighboring video blocks of the current video block that provide reference sample points are coded using the transform quantization bypass mode.
3. The method of claim 1, wherein the third coding mode comprises an intra prediction mode, an Intra Block Copy (IBC) mode, or a Combined Inter Intra Prediction (CIIP) mode.
4. The method of claim 1,
wherein the first codec mode is a transform quantization bypass mode that does not apply transform and quantization processing to video blocks of the video, and
wherein a prediction signal is derived using reference sample points that are not transformed in a forward shaping process or a reverse shaping process in response to the current video block being coded using the transform quantization bypass mode and in response to the reference sample points being located in the current video block.
5. The method of claim 1,
wherein the first codec mode is a transform quantization bypass mode that does not apply transform and quantization processing to video blocks of video, and
wherein a prediction signal is derived using a converted reference sample point obtained by an inverse shaping process that converts the reference sample point to an original domain in response to the current video block being coded using the transform quantization bypass mode and in response to a reference sample point being located outside the current video block.
6. The method of claim 1,
wherein the first codec mode is a transform quantization bypass mode that does not apply transform and quantization processing to video blocks of the video, and
wherein, in response to the current video block not being coded using the transform quantization bypass mode and in response to a reference sample point being outside the current video block, a prediction signal is derived using the reference sample point that was not transformed in a forward shaping process or a backward shaping process.
7. The method of claim 1,
wherein the first codec mode is a transform quantization bypass mode that does not apply transform and quantization processing to video blocks of the video, and
wherein, in response to the current video block not being coded using the transform quantization bypass mode and in response to a reference sample point being located in the current video block, a prediction signal is derived using a transformed reference sample point obtained by an inverse shaping process that converts the reference sample point to a shaping domain.
8. The method of any one of claims 2 to 7,
wherein the current video block is coded using the third coding mode, and
wherein the reference sample points are from the one or more video blocks located in the same slice, brick, or picture as the current video block, and wherein the intra-prediction process is performed on the current video block by using the third coding mode.
9. The method of any of claims 1-7, wherein the current video block is associated with a luma component or a green color component.
10. The method of claim 1, wherein,
encoding and decoding the current video block in the bitstream representation using the QR-BDPCM mode,
wherein the bitstream representation conforms to a format rule specifying whether side information of the QR-BDPCM mode and/or a syntax element indicating applicability of a transform skip TS mode to the current video block is included in the bitstream representation, and
wherein the side information includes at least one of a usage indication of the QR-BDPCM mode or a prediction direction of the QR-BDPCM mode.
11. The method of claim 10, wherein, in response to using the QR-BDPCM mode for the current video block, the TS mode is applied to the current video block.
12. The method of claim 11, wherein the syntax element is not present in the bitstream and is derived to apply the TS mode to the current video block.
13. The method of claim 10, wherein the format rule specifies that the side information is included in the bitstream representation after the syntax element.
14. The method of claim 10, wherein the syntax element indicates that the TS mode is applied to the current video block.
15. The method of claim 10, wherein the format rule specifies that the side information of the QR-BDPCM mode is to be included in the bitstream representation in response to applying the TS mode to the current video block.
16. The method of claim 10, wherein the format rule specifies that the side information of the QR-BDPCM mode is not to be included in the bitstream representation in response to not applying the TS mode to the current video block.
17. The method of claim 10, wherein the format rule specifies that the side information of the QR-BDPCM mode is to be included in the bitstream representation in response to not applying the TS mode to the current video block.
18. The method of claim 10, wherein the format rule specifies that the side information of the QR-BDPCM mode is not to be included in the bitstream representation in response to applying the TS mode to the current video block.
19. The method according to claim 1, wherein the first determining specifically comprises: determining to encode the current video block of the video using a transform quantization bypass mode that does not apply transform and quantization processing to the current video block;
the method further comprises the following steps: performing a conversion between the current video block and a bitstream representation of the video by disabling luma mapping and chroma scaling LMCS processing based on the first determination,
wherein the disabling the LMCS processing disables performance of switching of sample points of the current video block between a shaping domain and an original domain if the current video block is from a luma component, or
Wherein the disabling the LMCS processing disables scaling of a chroma residual of the current video block if the current video block is from a chroma component.
20. The method of claim 19,
wherein an intra prediction mode is applied to the current video block, and
wherein a prediction signal or reference sample point for the intra prediction mode for the current video block is mapped from a reshaped domain to an original domain.
21. The method of claim 19,
wherein an Intra Block Copy (IBC) mode is applied to the current video block, and
wherein a prediction signal or reference sample point in the IBC mode for the current video block is mapped from a reshaped domain to an original domain.
22. The method of claim 19,
wherein a combined inter-frame intra prediction CIIP mode is applied to the current video block, and
wherein a prediction signal or reference sample point in the CIIP mode for the current video block is mapped from a shaped domain to an original domain.
23. The method of claim 19,
wherein a combined inter-frame intra prediction CIIP mode is applied to the current video block, and
wherein mapping of prediction signals used in the CIIP mode of the current video block from an original domain to a shaped domain is not performed.
24. The method of claim 19, further comprising:
in palette mode and in the original domain, a palette table is generated for the current video block.
25. The method of claim 19, further comprising:
allocating a first buffer and a second buffer to the current video block,
wherein the first buffer is configured to store a sum of a prediction signal and a residual signal, and
wherein the second buffer is configured to store a shaped sum obtained by mapping the sum of the prediction signal and the residual signal from an original domain to a shaped domain.
26. The method of claim 19, further comprising:
allocating a buffer to the current video block, the buffer configured to store a sum of a prediction signal and a residual signal in an original domain,
wherein a shaped sum is derived by mapping the sum of the prediction signal and the residual signal from the original domain to the shaped domain.
27. The method of claim 19, further comprising:
applying, to the current video block, a coding mode that uses reference sample points within a current slice, slice group, or picture of the current video block.
28. The method of claim 1, wherein,
the bitstream representation conforms to a format rule that specifies whether syntax elements indicating whether the current video block is coded using transform quantization bypass mode are included in the bitstream representation, and
wherein, when the transform quantization bypass mode is applied to the current video block, the current video block is represented in the bit stream representation without using transform and quantization processes.
29. The method of claim 28,
wherein the format rule specifies that the syntax element is to be included in the bitstream representation, and
wherein the format rule specifies that the syntax element is to be included in the bitstream representation prior to signaling the current video block for use of one or more transform matrix-related codec tools.
30. The method of claim 29, wherein the one or more transform matrix-related codec tools include a transform skip TS mode, an explicit Multiple Transform Set (MTS) scheme, a simplified secondary transform (RST) mode, a sub-block transform (SBT) mode, or a quantized residual block differential pulse codec modulation (QR-BDPCM) mode, and
wherein, in the QR-BDPCM mode, a difference between a quantized residual of an intra prediction of the current video block and a prediction of the quantized residual is represented in the bitstream representation.
31. The method of claim 29, wherein a second syntax element for a residual coding technique is included in the bitstream representation based on whether the current video block is coded using the transform quantization bypass mode.
32. The method of claim 31, wherein the residual coding technique applies or does not apply a transform to the current video block.
33. The method of claim 31, wherein the second syntax element indicates, in the bitstream representation, the residual coding technique that applies a transform to the current video block in response to the transform quantization bypass mode not being applied to the current video block and in response to a transform skip TS mode not being applied to the current video block.
34. The method of claim 31, wherein the second syntax element indicates, in the bitstream representation, the residual coding technique that does not apply transforms to the current video block in response to the transform quantization bypass mode being applied to the current video block and in response to a transform skip TS mode being applied to the current video block.
35. The method of claim 29, wherein, in response to the transform quantization bypass mode being applicable to the current video block, the bitstream representation does not include signaling for a transform skip TS mode for the current video block.
36. The method of claim 35, wherein a residual coding technique that does not apply a transform is applied to the current video block.
37. The method of claim 29, wherein the bitstream representation comprises side information associated with a transform matrix related codec tool based on whether the transform quantization bypass mode is applicable to the current video block.
38. The method of claim 37, wherein, in response to the transform quantization bypass mode being applicable to the current video block, the bitstream representation comprises the side information indicating use of a transform skip TS mode, a quantized residual block differential pulse codec modulation (QR-BDPCM) mode, or a Block Differential Pulse Codec Modulation (BDPCM) mode.
39. The method of claim 37, wherein, in response to the transform quantization bypass mode being applicable to the current video block, the bitstream representation does not include the side information indicating use of a sub-block transform (SBT) mode, a simplified secondary transform (RST) mode, or an explicit Multiple Transform Set (MTS) scheme.
40. The method of claim 28,
wherein the format rule specifies that the syntax element is to be included in the bitstream representation, and
wherein the format rule specifies that the syntax element is to be included in the bitstream representation after signaling one or more codec tools associated with a transform matrix.
41. The method of claim 40, wherein the syntax element is encoded in the bitstream representation when transform matrix-dependent coding tools from the one or more transform matrix-dependent coding tools are applied to the current video block.
42. The method of claim 40 or 41, wherein the one or more transform matrix related codec tools comprise a transform skip TS mode, a quantized residual block differential pulse codec modulation QR-BDPCM mode, or a block differential pulse codec modulation BDPCM mode.
43. The method of claim 28, wherein the format rule specifies that the syntax element is not to be included in the bitstream representation in response to applying a sub-block transform (SBT) mode, a Reduced Secondary Transform (RST) mode, or an explicit Multiple Transform Set (MTS) scheme to the current video block.
44. The method of claim 1, wherein the first determining specifically comprises: determining that a transform quantization bypass mode applies to the current video block of the video, wherein in the transform quantization bypass mode, transform and quantization processing is not used on the current video block;
the method further comprises the following steps: disabling a filtering method for sample points of the current video block based on the first determination; and
based on the first determination and the disabling, performing a transition between the current video block and the bitstream representation of the video.
45. The method of claim 44, wherein the filtering method comprises an adaptive loop filtering ALF method or a non-linear ALF method.
46. The method of claim 44, wherein the filtering method comprises a bilateral filter, a diffusion filter, or a post-reconstruction filter that modifies reconstructed sample points of the current video block.
47. The method of claim 44, wherein the filtering method comprises a location-dependent intra prediction combining (PDPC) method.
48. The method of claim 1, further comprising:
performing a fourth determination that the transform quantization bypass mode applies to a second video block of the video,
wherein, in the transform quantization bypass mode, no transform and quantization processing is used on the second video block;
performing a fifth determination in response to the fourth determination that a transform selection mode in an implicit Multiple Transform Set (MTS) process is not applicable to the second video block; and
based on the fourth determination and the fifth determination, performing a conversion between the second video block and a bitstream representation of the video.
49. The method of claim 1, further comprising:
performing a sixth determination that the transform quantization bypass mode applies to a third video block of the video,
wherein, in the transform quantization bypass mode, no transform and quantization processing is used on the third video block, wherein the third video block is associated with a chroma component;
performing a seventh determination in response to the sixth determination that the sample points of the chroma components are not scaled in a Luma Mapping and Chroma Scaling (LMCS) process; and
based on the sixth determination and the seventh determination, performing a conversion between the third video block and a bitstream representation of the video.
50. The method according to claim 1, wherein the first determining specifically comprises: determining that a transform quantization bypass mode is applicable to the current video block of the video, wherein in the transform quantization bypass mode, transform and quantization processing is not used on the current video block;
the method further comprises the following steps:
disabling luma mapping and chroma scaling LMCS processing for a codec unit CU, a codec tree unit CTU, a slice group, a picture, or a sequence of the current video block based on the first determination; and
performing, based on the first determination, a conversion between the current video block and the bitstream representation of the video by disabling the luma mapping and chroma scaling LMCS processing for the codec unit CU, the codec tree unit CTU, the slice group, the picture, or the sequence of the current video block,
wherein the disabling the LMCS processing disables performance of switching of sample points of the current video block between a shaping domain and an original domain if the current video block is from a luma component, or
Wherein the disabling the LMCS processing disables scaling of a chroma residual of the current video block if the current video block is from a chroma component.
51. The method of claim 50, wherein a Sequence Parameter Set (SPS), a Video Parameter Set (VPS), a Picture Parameter Set (PPS), or a slice header indicates that the transform and quantization processing is not to be used for the current video block.
52. The method of claim 50, wherein the bitstream representation of the video does not include signaling of syntax elements related to the LMCS processing.
53. The method of claim 1, further comprising:
performing an eighth determination, determining whether a fourth video block of the video is coded using a mode that applies an identity transform or no transform to the fourth video block;
performing a ninth determination based on the eighth determination, determining whether to apply a coding tool to the fourth video block; and
based on the eighth determination and the ninth determination, performing a conversion between the fourth video block and a bitstream representation of the video.
54. The method of claim 53, wherein the ninth determination further comprises: determining not to apply the coding tool to the fourth video block based on the eighth determination to apply the identity transform or not apply a transform to the fourth video block.
55. The method of claim 53 or claim 54,
wherein the modes include a transform quantization bypass mode, a transform skip TS mode, a quantization residual block differential pulse codec modulation (QR-BDPCM) mode, a Differential Pulse Codec Modulation (DPCM) mode, or a Pulse Codec Modulation (PCM) mode,
wherein the coding and decoding tools comprise a decoder side motion derivation tool, a decoder side intra-frame mode decision tool, a combined inter-frame intra-frame prediction CIIP mode, a triangulation mode TPM,
wherein, in the transform quantization bypass mode, transform and quantization processing is not applied to the fourth video block, and
wherein, in the QR-BDPCM mode, a difference between a quantized residual of an intra prediction of the fourth video block and a prediction of the quantized residual is represented in the bitstream representation.
56. The method of claim 55,
wherein the ninth determination further comprises: determining not to apply the coding tool to the fourth video block based on the eighth determination to apply the identity transform or not apply a transform to the fourth video block,
wherein the encoding and decoding tools comprise a prediction refinement tool using optical flow PROF, a combined inter-frame intra prediction CIIP mode or a triangulation mode TPM.
57. The method of claim 55,
wherein the ninth determination further comprises: determining not to apply the coding tool to the fourth video block based on the eighth determination to apply the identity transform to the fourth video block, and
wherein the coding and decoding tool comprises a bidirectional optical flow BDOF tool, a combined inter-frame intra-frame prediction CIIP mode or a triangulation mode TPM.
58. The method of claim 53,
wherein the ninth determination further comprises: determining not to apply the coding tool to the fourth video block based on the eighth determination to apply the identity transform to the fourth video block, and
wherein the coding and decoding tools comprise a decoding side motion vector refinement DMVR tool, a combined inter-frame intra-frame prediction CIIP mode or a triangulation mode TPM.
59. The method of claim 1, wherein,
the bitstream representation includes a syntax element indicating whether a transform quantization bypass mode is applicable to represent the current video block,
wherein when the transform quantization bypass mode is applied to the current video block, the current video block is represented in the bit stream representation without using transform and quantization processes,
wherein the transform quantization bypass mode is applicable to a first video unit level of the current video block, and
wherein the bitstream representation does not include signaling of the transform quantization bypass mode at a second video unit level, a video unit at the second video unit level being smaller than a video unit at the first video unit level.
60. The method of claim 59,
wherein the first video unit level comprises a picture level, a slice level, or a brick level of the current video block, and
wherein the second video unit level comprises a Coding Unit (CU), a Transform Unit (TU), or a Prediction Unit (PU).
61. The method of claim 59, wherein the bitstream representation comprises signaling whether the transform quantization bypass mode is applicable at the first video unit level based on whether a coding tool is applied to the current video block.
62. The method of claim 59, wherein the bitstream representation comprises signaling whether to apply a coding tool to the current video block based on whether the transform quantization bypass mode is applicable for the first video unit level.
63. The method of claim 59, wherein in response to the bitstream representation comprising signaling that the transform quantization bypass mode is applicable at the first video unit level, applying no coding tools to the current video block.
64. The method of claim 59, wherein, in response to the bitstream representation indicating that a coding tool is to be applied to the current video block, the transform quantization bypass mode is not applicable at the first video unit level.
65. The method of any one of claims 61 to 64, wherein the coding tools include luma mapping and chroma scaling LMCS processing, decoder side motion derivation, decoder side intra mode decision, combined inter-frame intra prediction CIIP mode or triangulation mode TPM.
66. The method of claim 65, wherein the decoder-side motion derivation comprises a decoder-side motion vector refinement (DMVR) tool, a bi-directional optical flow (BDOF) tool, or a prediction refinement tool using optical flow (PROF).
67. The method of claim 1, wherein,
the bitstream representation includes a syntax element indicating whether transform quantization bypass mode is applicable to represent the current video block,
wherein when the transform quantization bypass mode is applied to the current video block, the current video block is represented in the bit stream representation without using transform and quantization processes,
wherein the transform quantization bypass mode is applicable to a third video unit level of the current video block, and
wherein the bitstream representation includes side information used by the transform quantization bypass mode at a fourth video unit level of the current video block.
68. The method of claim 67, wherein the third video unit level comprises a Prediction Unit (PU) or a Coding Unit (CU) of the current video block, and wherein the fourth video unit level comprises a Transform Unit (TU) of the current video block.
69. The method of claim 68, wherein the side information is included once in the bit stream representation of the current video block in response to the PU or the CU comprising a plurality of TUs.
70. The method of claim 69, wherein the side information is associated with a first TU in the CU, and wherein remaining one or more TUs share the side information.
71. The method of claim 1, further comprising:
determining, for a current video block of the video, whether a mode in which lossless coding techniques are applied applies to the current video block; and
performing a conversion between the current video block and the bitstream representation of the video based on the determination of whether the mode of the lossless codec technique is applicable,
wherein the bitstream representation comprises a syntax element at a video unit level indicating whether a coding tool is applicable to the current video block,
wherein the video unit level is larger than a coding unit, CU, and
wherein the coding tools are not applied to sample points within the video unit level.
72. The method of claim 71, wherein the coding tool is not applied to sample points within the video unit level even when the syntax element of the coding tool indicates that the coding tool is applicable at the video unit level.
73. The method of claim 71, wherein the video unit level comprises a sequence, picture, view, slice, brick, sub-picture, coding Tree Block (CTB), or Coding Tree Unit (CTU) of the current video block.
74. The method of claim 71, wherein the codec tool includes a filtering process including an Adaptive Loop Filtering (ALF) method, a nonlinear ALF method, Sample Adaptive Offset (SAO), a bilateral filter, or a Hadamard transform domain filter, a clipping process, a scaling method, or a decoder-side derivation technique.
75. The method of claim 71, wherein, in response to coding some or all sample points at the video unit level using the applicable mode, the syntax element of the coding tool indicates that the coding tool is not applicable at the video unit level.
76. The method of any of claims 71-75, wherein the lossless codec technique comprises a transform quantization bypass TransQuantBypass mode that does not apply transform and quantization processing to the current video block, or wherein the lossless codec technique comprises a near lossless codec technique.
77. The method of claim 76, wherein the near lossless codec technique comprises a quantization parameter of the current video block being within a particular range.
78. The method of claim 77, wherein the particular range is [4,4].
79. The method of claim 1, further comprising:
determining, for a sixth video block of the video, whether one or more syntax elements indicating whether an intra sub-block partition ISP mode or a sub-block transform SBT mode allows a non-Discrete Cosine Transform II (non-DCT2) transform are included in the bitstream representation of the sixth video block; and
based on the determination of the sixth video block, performing a conversion between the sixth video block and the bitstream representation of the video.
80. The method of claim 79, wherein the one or more syntax elements are signaled in the bitstream representation in response to either the ISP mode or the SBT mode being applied to the sixth video block.
81. The method of claim 79, wherein the one or more syntax elements are signaled in the bitstream representation in response to applying both the ISP mode and the SBT mode to the sixth video block.
82. The method of claim 79, wherein the non-DCT2 transform is allowed in response to the one or more syntax elements indicating a condition of "true".
83. The method of claim 79, wherein one syntax element is signaled in the bitstream representation in response to applying the ISP mode to the sixth video block.
84. The method of claim 79, wherein one syntax element is signaled in the bitstream representation in response to applying the SBT mode to the sixth video block.
85. The method of claim 79, wherein the bitstream representation comprises another syntax element indicating use of explicit Multiple Transform Set (MTS) processing for the sixth video block or a dimension-based transform selection for an intra block of the sixth video block.
86. The method of claim 85, wherein the dimension-based transform selection comprises an implicit MTS applied to a non-ISP coded intra block of the sixth video block.
87. The method of claim 85, wherein the ISP mode using a non-DCT2 transform is applied to the sixth video block in response to the other syntax element indicating that the explicit MTS processing is not applied to the sixth video block.
88. The method of claim 85, wherein the SBT mode using a non-DCT2 transform is applied to the sixth video block in response to the other syntax element indicating that the explicit MTS processing is not applied to the sixth video block.
89. An apparatus in a video system comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the method of any of claims 1-88.
90. A non-transitory computer-readable medium having code stored thereon, which, when executed by a processor, causes the processor to implement the method of any one of claims 1-88.
CN202080036237.9A 2019-05-13 2020-05-13 Interaction between transform skip mode and other codec tools Active CN113826398B (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
CN2019086656 2019-05-13
CNPCT/CN2019/086656 2019-05-13
CN2019093330 2019-06-27
CNPCT/CN2019/093330 2019-06-27
CN2019107144 2019-09-21
CNPCT/CN2019/107144 2019-09-21
PCT/CN2020/089938 WO2020228718A1 (en) 2019-05-13 2020-05-13 Interaction between transform skip mode and other coding tools

Publications (2)

Publication Number Publication Date
CN113826398A CN113826398A (en) 2021-12-21
CN113826398B true CN113826398B (en) 2022-11-29

Family

ID=73288969

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202080036237.9A Active CN113826398B (en) 2019-05-13 2020-05-13 Interaction between transform skip mode and other codec tools
CN202080036158.8A Active CN113826405B (en) 2019-05-13 2020-05-13 Use of transform quantization bypass modes for multiple color components

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202080036158.8A Active CN113826405B (en) 2019-05-13 2020-05-13 Use of transform quantization bypass modes for multiple color components

Country Status (2)

Country Link
CN (2) CN113826398B (en)
WO (2) WO2020228716A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022222989A1 (en) * 2021-04-21 2022-10-27 Beijing Bytedance Network Technology Co., Ltd. Method, device, and medium for video processing
WO2023193721A1 (en) * 2022-04-05 2023-10-12 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus, and medium for video processing


Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11184623B2 (en) * 2011-09-26 2021-11-23 Texas Instruments Incorporated Method and system for lossless coding mode in video coding
GB2501535A (en) * 2012-04-26 2013-10-30 Sony Corp Chrominance Processing in High Efficiency Video Codecs
US20130294524A1 (en) * 2012-05-04 2013-11-07 Qualcomm Incorporated Transform skipping and lossless coding unification
US9706200B2 (en) * 2012-06-18 2017-07-11 Qualcomm Incorporated Unification of signaling lossless coding mode and pulse code modulation (PCM) mode in video coding
CN103634608B * 2013-12-04 2015-03-25 University of Science and Technology of China Residual error transformation method of high-performance video coding lossless mode
US10271052B2 (en) * 2014-03-14 2019-04-23 Qualcomm Incorporated Universal color-space inverse transform coding
EP3120561B1 (en) * 2014-03-16 2023-09-06 VID SCALE, Inc. Method and apparatus for the signaling of lossless video coding
GB2531004A (en) * 2014-10-06 2016-04-13 Canon Kk Residual colour transform signalled at sequence level for specific coding modes
US9918105B2 (en) * 2014-10-07 2018-03-13 Qualcomm Incorporated Intra BC and inter unification
CN106664405B (en) * 2015-06-09 2020-06-09 微软技术许可有限责任公司 Robust encoding/decoding of escape-coded pixels with palette mode

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108712649A (en) * 2012-06-29 2018-10-26 韩国电子通信研究院 Video encoding/decoding method, method for video coding and computer-readable medium
WO2014197691A1 (en) * 2013-06-05 2014-12-11 Qualcomm Incorporated Residual differential pulse code modulation (dpcm) extensions and harmonization with transform skip, rotation, and scans
WO2015043501A1 (en) * 2013-09-27 2015-04-02 Qualcomm Incorporated Residual coding for depth intra prediction modes

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"CE8-related: Quantized residual BDPCM";Marta Karczewicz;《Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 14th Meeting: Geneva, CH, 19–27 Mar. 2019》;20190327;第1-7页 *

Also Published As

Publication number Publication date
CN113826398A (en) 2021-12-21
WO2020228718A1 (en) 2020-11-19
WO2020228716A1 (en) 2020-11-19
CN113826405B (en) 2023-06-23
CN113826405A (en) 2021-12-21

Similar Documents

Publication Publication Date Title
CN113812154B (en) Multiple quadratic transform matrices for video processing
KR20210145754A (en) Calculations in matrix-based intra prediction
CN113826383B (en) Block dimension setting for transform skip mode
CN113950828A (en) Conditional signaling for simplified quadratic transformation in video bitstreams
CN114009024B (en) Selective enablement of adaptive intra-loop color space transforms in video codecs
CN113812155A (en) Interaction between multiple interframe coding and decoding methods
US11546595B2 (en) Sub-block based use of transform skip mode
WO2021088951A1 (en) Quantization properties of adaptive in-loop color-space transform for video coding
CN113853791A (en) Transform bypass codec residual block in digital video
CN113826398B (en) Interaction between transform skip mode and other codec tools
CN113841410B (en) Coding and decoding of multiple intra prediction methods
CN116965035A (en) Transformation on non-binary blocks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant