WO2024099024A1 - Methods and apparatus of arbitrary block partition in video coding - Google Patents


Info

Publication number
WO2024099024A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
scaling
partition
power
current block
Prior art date
Application number
PCT/CN2023/124124
Other languages
French (fr)
Inventor
Hong-Hui Chen
Chia-Ming Tsai
Tzu-Der Chuang
Chih-Wei Hsu
Original Assignee
Mediatek Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mediatek Inc. filed Critical Mediatek Inc.
Publication of WO2024099024A1 publication Critical patent/WO2024099024A1/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/46 Embedding additional information in the video signal during the compression process

Definitions

  • the present invention is a non-Provisional Application of and claims priority to U.S. Provisional Patent Application No. 63/383,277, filed on November 11, 2022.
  • the U.S. Provisional Patent Application is hereby incorporated by reference in its entirety.
  • the present invention relates to block partition in a video coding system.
  • the present invention relates to partitioning with an exception region identified for restricted partition modes.
  • VVC (Versatile Video Coding) is a video coding standard developed by the Joint Video Experts Team (JVET) of the ITU-T and the ISO/IEC Moving Picture Experts Group (MPEG). It is specified in ISO/IEC 23090-3:2021, Information technology - Coded representation of immersive media - Part 3: Versatile video coding, published Feb. 2021.
  • VVC is developed based on its predecessor HEVC (High Efficiency Video Coding) by adding more coding tools to improve coding efficiency and also to handle various types of video sources including 3-dimensional (3D) video signals.
  • Fig. 1A illustrates an exemplary adaptive Inter/Intra video coding system incorporating loop processing.
  • Intra Prediction the prediction data is derived based on previously coded video data in the current picture.
  • Motion Estimation (ME) is performed at the encoder side and Motion Compensation (MC) is performed based on the result of ME to provide prediction data derived from other picture(s) and motion data.
  • Switch 114 selects Intra Prediction 110 or Inter-Prediction 112 and the selected prediction data is supplied to Adder 116 to form prediction errors, also called residues.
  • the prediction error is then processed by Transform (T) 118 followed by Quantization (Q) 120.
  • the transformed and quantized residues are then coded by Entropy Encoder 122 to be included in a video bitstream corresponding to the compressed video data.
  • the bitstream associated with the transform coefficients is then packed with side information such as motion and coding modes associated with Intra prediction and Inter prediction, and other information such as parameters associated with loop filters applied to underlying image area.
  • the side information associated with Intra Prediction 110, Inter Prediction 112 and in-loop filter 130 is provided to Entropy Encoder 122 as shown in Fig. 1A. When an Inter-prediction mode is used, a reference picture or pictures have to be reconstructed at the encoder end as well.
  • the transformed and quantized residues are processed by Inverse Quantization (IQ) 124 and Inverse Transformation (IT) 126 to recover the residues.
  • the residues are then added back to prediction data 136 at Reconstruction (REC) 128 to reconstruct video data.
  • the reconstructed video data may be stored in Reference Picture Buffer 134 and used for prediction of other frames.
  • incoming video data undergoes a series of processing in the encoding system.
  • the reconstructed video data from REC 128 may be subject to various impairments due to a series of processing.
  • in-loop filter 130 is often applied to the reconstructed video data before the reconstructed video data are stored in the Reference Picture Buffer 134 in order to improve video quality.
  • deblocking filter (DF) may be used.
  • SAO Sample Adaptive Offset
  • ALF Adaptive Loop Filter
  • the loop filter information may need to be incorporated in the bitstream so that a decoder can properly recover the required information. Therefore, loop filter information is also provided to Entropy Encoder 122 for incorporation into the bitstream.
  • Loop filter 130 is applied to the reconstructed video before the reconstructed samples are stored in the reference picture buffer 134.
  • the system in Fig. 1A is intended to illustrate an exemplary structure of a typical video encoder. It may correspond to the High Efficiency Video Coding (HEVC) system, VP8, VP9, H.264 or VVC.
  • the decoder can use similar functional blocks, or a portion of the same functional blocks, as the encoder, except for Transform 118 and Quantization 120, since the decoder only needs Inverse Quantization 124 and Inverse Transform 126.
  • the decoder uses an Entropy Decoder 140 to decode the video bitstream into quantized transform coefficients and needed coding information (e.g. ILPF information, Intra prediction information and Inter prediction information) .
  • the Intra prediction 150 at the decoder side does not need to perform the mode search. Instead, the decoder only needs to generate Intra prediction according to Intra prediction information received from the Entropy Decoder 140.
  • the decoder only needs to perform motion compensation (MC 152) according to Inter prediction information received from the Entropy Decoder 140 without the need for motion estimation.
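  • The residual coding loop shared between the encoder of Fig. 1A and the decoder of Fig. 1B can be sketched as follows. This is a toy model, not the standard's design: it assumes a uniform quantizer with an illustrative step size and omits the transform (T 118 / IT 126) and entropy coding stages.

```python
QSTEP = 4  # illustrative quantization step size (an assumption, not from any standard)

def encode_block(block, pred):
    """Form residues (Adder 116) and quantize them (Q 120)."""
    residues = [b - p for b, p in zip(block, pred)]
    return [round(r / QSTEP) for r in residues]

def reconstruct_block(levels, pred):
    """Dequantize (IQ 124) and add the prediction back (REC 128).

    The same routine runs at both ends, which is why the encoder must
    reconstruct reference pictures exactly as the decoder will.
    """
    return [p + lvl * QSTEP for lvl, p in zip(levels, pred)]

block = [100, 104, 97, 90]
pred = [98, 98, 98, 98]
levels = encode_block(block, pred)
recon_enc = reconstruct_block(levels, pred)  # stored in Reference Picture Buffer 134
recon_dec = reconstruct_block(levels, pred)  # decoder obtains identical samples
```

  • Because both sides reconstruct from the same quantized levels, there is no encoder/decoder drift; the only loss is the quantization error itself.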
  • an input picture is partitioned into non-overlapped square block regions referred to as CTUs (Coding Tree Units), similar to HEVC.
  • Each CTU can be partitioned into one or multiple smaller size coding units (CUs) .
  • the resulting CU partitions can be in square or rectangular shapes.
  • VVC divides a CTU into prediction units (PUs) as units for applying a prediction process, such as Inter prediction, Intra prediction, etc.
  • the VVC standard incorporates various new coding tools to further improve the coding efficiency over the HEVC standard.
  • among the various new coding tools, some coding tools relevant to the present invention are reviewed as follows.
  • the CTU concept is the same as that in HEVC.
  • a CTU consists of an N×N block of luma samples together with two corresponding blocks of chroma samples.
  • Fig. 2 shows an example of a picture 210 divided into CTUs (shown as small squares) with 9 rows and 11 columns.
  • the maximum allowed size of the luma block in a CTU is specified to be 128×128 (although the maximum size of the luma transform blocks is 64×64).
  • a CTU is split into CUs by using a quaternary-tree or quadtree (QT) structure denoted as a coding tree to adapt to various local characteristics.
  • the decision whether to code a picture area using inter-picture (temporal) or intra-picture (spatial) prediction is made at the leaf CU level.
  • Each leaf CU can be further split into one, two or four PUs according to the PU splitting type. Inside one PU, the same prediction process is applied and the relevant information is transmitted to the decoder on a PU basis.
  • after obtaining the residual block by applying the prediction process based on the PU splitting type, a leaf CU can be partitioned into transform units (TUs) according to another quaternary-tree structure similar to the coding tree for the CU.
  • One key feature of the HEVC structure is that it has multiple partition concepts including CU, PU, and TU.
  • in VVC, a quadtree with nested Multi-Type Tree (MTT) segmentation structure using binary and ternary splits replaces the concepts of multiple partition unit types, i.e. it removes the separation of the CU, PU and TU concepts except as needed for CUs that have a size too large for the maximum transform length, and supports more flexibility for CU partition shapes.
  • a CU can have either a square or rectangular shape.
  • a coding tree unit (CTU) is first partitioned by a quaternary tree (a.k.a. quadtree) structure. Then the quaternary tree leaf nodes can be further partitioned by a multi-type tree structure, as shown in Fig. 3.
  • the multi-type tree leaf nodes are called coding units (CUs) , and unless the CU is too large for the maximum transform length, this segmentation is used for prediction and transform processing without any further partitioning. This means that, in most cases, the CU, PU and TU have the same block size in the quadtree with nested multi-type tree coding block structure. The exception occurs when maximum supported transform length is smaller than the width or height of the colour component of the CU.
  • Fig. 4 illustrates the signalling mechanism of the partition splitting information in quadtree with nested multi-type tree. If the block is not allowed to split, a leaf node is reached and the block is a CU (i.e., leaf 420). If the block is allowed to split, then a syntax split_cu_flag is signalled to indicate whether a split is applied to the block. If the block is not split (i.e., path “0”), a leaf node is reached and the block is a CU (e.g. leaf 430).
  • a coding tree unit (CTU) 410 is treated as the root of a quaternary tree and is first partitioned by a quaternary tree structure.
  • if syntax split_cu_flag indicates the block is split (i.e., path “1”), a syntax split_qt_flag is signalled to indicate whether the quadtree split is used. If syntax split_qt_flag indicates the quadtree is used (i.e., path “1”), then the block is split into 4 sub-blocks 440. Each quaternary tree leaf node (when sufficiently large to allow it) is then further partitioned by a multi-type tree structure.
  • a first flag (mtt_split_cu_flag) is signalled to indicate whether the node is further partitioned; when a node is further partitioned, a second flag (mtt_split_cu_vertical_flag) is signalled to indicate the splitting direction, and then a third flag (mtt_split_cu_binary_flag) is signalled to indicate whether the split is a binary split or a ternary split.
  • if the ternary split is applied, the block is split into 3 sub-blocks 450.
  • if the binary split is used, the block is split into 2 sub-blocks 460.
  • the multi-type tree splitting mode (MttSplitMode) of a CU is derived as shown in Table 1.
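  • The decision tree of Fig. 4 together with the flag-to-mode mapping can be sketched as below. This is a simplified sketch: the mtt_split_cu_flag step is folded into a single split_cu_flag argument, and the (vertical, binary) flag pair maps to MttSplitMode as in the standard's derivation table.

```python
def decode_split_mode(split_cu_flag, split_qt_flag=None,
                      mtt_vertical=None, mtt_binary=None):
    """Derive the split mode of one node from the parsed flags."""
    if split_cu_flag == 0:
        return "NO_SPLIT"        # leaf node reached: the block is a CU
    if split_qt_flag == 1:
        return "SPLIT_QT"        # quadtree split into four sub-blocks
    # Multi-type tree: direction flag and binary/ternary flag (Table 1)
    table1 = {
        (0, 0): "SPLIT_TT_HOR",  # horizontal ternary split
        (0, 1): "SPLIT_BT_HOR",  # horizontal binary split
        (1, 0): "SPLIT_TT_VER",  # vertical ternary split
        (1, 1): "SPLIT_BT_VER",  # vertical binary split
    }
    return table1[(mtt_vertical, mtt_binary)]
```

  • For example, flags (1, 0, 1, 1) decode to a vertical binary split of the current node.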
  • Fig. 5 shows a CTU divided into multiple CUs with a quadtree and nested multi-type tree coding block structure, where the bold block edges represent quadtree partitioning and the remaining edges represent multi-type tree partitioning.
  • the quadtree with nested multi-type tree partition provides a content-adaptive coding tree structure comprised of CUs.
  • the size of the CU may be as large as the CTU or as small as 4×4 in units of luma samples.
  • the maximum chroma CB size is 64×64 and the minimum size chroma CB consists of 16 chroma samples.
  • the maximum supported luma transform size is 64×64 and the maximum supported chroma transform size is 32×32.
  • when the width or height of the CB is larger than the maximum transform width or height, the CB is automatically split in the horizontal and/or vertical direction to meet the transform size restriction in that direction.
  • CTU size the root node size of a quaternary tree
  • MinQTSize the minimum allowed quaternary tree leaf node size
  • MaxBtSize the maximum allowed binary tree root node size
  • MaxTtSize the maximum allowed ternary tree root node size
  • MaxMttDepth the maximum allowed hierarchy depth of multi-type tree splitting from a quadtree leaf
  • MinBtSize the minimum allowed binary tree leaf node size
  • MinTtSize the minimum allowed ternary tree leaf node size
  • the CTU size is set as 128×128 luma samples with two corresponding 64×64 blocks of 4:2:0 chroma samples
  • the MinQTSize is set as 16×16
  • the MaxBtSize is set as 128×128
  • MaxTtSize is set as 64×64
  • the MinBtSize and MinTtSize (for both width and height) are set as 4×4
  • the MaxMttDepth is set as 4.
  • the quaternary tree partitioning is applied to the CTU first to generate quaternary tree leaf nodes.
  • the quaternary tree leaf nodes may have a size from 16×16 (i.e., the MinQTSize) to 128×128 (i.e., the CTU size). If the leaf QT node is 128×128, it will not be further split by the binary tree since the size exceeds the MaxBtSize and MaxTtSize (i.e., 64×64). Otherwise, the leaf quadtree node could be further partitioned by the multi-type tree. Therefore, the quaternary tree leaf node is also the root node for the multi-type tree and it has multi-type tree depth (mttDepth) of 0.
  • TT split is forbidden when either width or height of a luma coding block is larger than 64, as shown in Fig. 6, where block 610 represents a 128×128 block. TT split is also forbidden when either width or height of a chroma coding block is larger than 32.
  • Block 620 corresponds to a vertical binary split and block 630 corresponds to a horizontal binary split.
  • Block 640 corresponds to the case where the 128×128 block is split into four 64×64 blocks and the vertical TT split is applied to the upper-left 64×64 block.
  • Block 650 corresponds to the case where the 128×128 block is split into four 64×64 blocks and the horizontal TT split is applied to the upper-left 64×64 block.
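  • The TT prohibition illustrated in Fig. 6 can be expressed as a small legality check; the limits (64 for a luma coding block, 32 for a chroma coding block) are the ones quoted above.

```python
def tt_split_allowed(width, height, is_chroma=False):
    """TT split is forbidden when either dimension exceeds the limit."""
    limit = 32 if is_chroma else 64
    return width <= limit and height <= limit

# A 128x128 luma block (block 610) must first be split by other means;
# TT only becomes available on the resulting 64x64 blocks (blocks 640/650).
```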
  • the coding tree scheme supports the ability for the luma and chroma to have a separate block tree structure.
  • for P and B slices, the luma and chroma CTBs in one CTU have to share the same coding tree structure.
  • for I slices, the luma and chroma can have separate block tree structures.
  • luma CTB is partitioned into CUs by one coding tree structure
  • the chroma CTBs are partitioned into chroma CUs by another coding tree structure.
  • a CU in an I slice may consist of a coding block of the luma component or coding blocks of two chroma components, and a CU in a P or B slice always consists of coding blocks of all three colour components unless the video is monochrome.
  • GPM Geometric Partitioning Mode
  • a Geometric Partitioning Mode (GPM) is supported for inter prediction as described in JVET-W2002 (Adrian Browne, et al., Algorithm description for Versatile Video Coding and Test Model 14 (VTM 14), ITU-T/ISO/IEC Joint Video Exploration Team (JVET), 23rd Meeting, by teleconference, 7–16 July 2021, document JVET-W2002).
  • the geometric partitioning mode is signalled using a CU-level flag as one kind of merge mode, with other merge modes including the regular merge mode, the MMVD mode, the CIIP mode and the subblock merge mode.
  • the GPM mode can be applied to skip or merge CUs having a size within the above limit and having at least two regular merge modes.
  • when this mode is used, a CU is split into two parts by a geometrically located straight line in certain angles.
  • in VVC, there are a total of 20 angles and 4 offset distances used for GPM, which has been reduced from 24 angles in an earlier draft. The location of the splitting line is mathematically derived from the angle and offset parameters of a specific partition.
  • in VVC, there are a total of 64 partitions as shown in Fig. 7, where the partitions are grouped according to their angles and dashed lines indicate redundant partitions.
  • Each part of a geometric partition in the CU is inter-predicted using its own motion; only uni-prediction is allowed for each partition, that is, each part has one motion vector and one reference index.
  • each line corresponds to the boundary of one partition.
  • partition group 710 consists of two vertical GPM partitions (i.e., 90°) .
  • Partition group 720 consists of four slant GPM partitions with a small angle from the vertical direction.
  • partition group 730 consists of two vertical GPM partitions (i.e., 270°) similar to those of group 710, but with an opposite direction.
  • the uni-prediction motion constraint is applied to ensure that only two motion compensated predictions are needed for each CU, the same as in conventional bi-prediction.
  • the uni-prediction motion for each partition is derived using the process described later.
  • a geometric partition index indicating the selected partition mode of the geometric partition (angle and offset) , and two merge indices (one for each partition) are further signalled.
  • the maximum GPM candidate list size is signalled explicitly in the SPS (Sequence Parameter Set) and specifies the syntax binarization for GPM merge indices.
  • the uni-prediction candidate list is derived directly from the merge candidate list constructed according to the extended merge prediction process.
  • denote n as the index of the uni-prediction motion in the geometric uni-prediction candidate list. The LX motion vector of the n-th extended merge candidate, with X equal to the parity of n, is used as the n-th uni-prediction motion vector for geometric partitioning mode.
  • These motion vectors are marked with “x” in Fig. 8.
  • In case a corresponding LX motion vector of the n-th extended merge candidate does not exist, the L(1-X) motion vector of the same candidate is used instead as the uni-prediction motion vector for geometric partitioning mode.
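  • A minimal sketch of this uni-prediction MV derivation, representing each merge candidate as a dict with optional L0/L1 motion vectors (an illustrative data layout, not the codec's internal one): for candidate index n, the LX motion vector with X equal to the parity of n is tried first, with fallback to L(1-X).

```python
def gpm_uni_mv(n, merge_candidates):
    """Return (motion vector, list index) for the n-th GPM uni-prediction."""
    cand = merge_candidates[n]
    x = n & 1                         # parity of n selects list L0 or L1
    if cand.get(f"L{x}") is not None:
        return cand[f"L{x}"], x
    return cand[f"L{1 - x}"], 1 - x   # fallback when LX does not exist

candidates = [
    {"L0": (3, -1), "L1": (2, 2)},    # bi-prediction candidate
    {"L0": (5, 0), "L1": None},       # uni-prediction candidate, L0 only
]
```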
  • blending is applied to the two prediction signals to derive samples around geometric partition edge.
  • the blending weights for each position of the CU are derived based on the distance between the individual position and the partition edge.
  • the two integer blending matrices (W 0 and W 1 ) are utilized for the GPM blending process.
  • the weights in the GPM blending matrices lie in the value range [0, 8] and are derived based on the displacement from a sample position to the GPM partition boundary.
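  • The blending of the two prediction signals can be sketched per sample as below; the [0, 8] weight range and the rounded right-shift follow the description above, while the weight value itself would come from the sample's displacement to the partition boundary (not modelled here).

```python
def gpm_blend(p0, p1, w0):
    """Blend one sample pair; w0 weights partition 0, 8 - w0 weights partition 1."""
    w0 = max(0, min(8, w0))                       # weights lie in [0, 8]
    return (w0 * p0 + (8 - w0) * p1 + 4) >> 3     # rounded fixed-point blend

# Far inside partition 0 the weight saturates at 8 (pure P0 sample); far
# inside partition 1 it is 0 (pure P1); near the edge the samples mix.
```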
  • a method and apparatus for video coding using arbitrary partition are disclosed.
  • input data associated with a current block in a current picture are received, wherein the input data comprise pixel data for the current block to be encoded at an encoder side or encoded data associated with the current block to be decoded at a decoder side.
  • the current block is partitioned into multiple partitions using one or more partition lines, wherein at least one horizontal or vertical partition line divides the current block into two parts and at least one of the two parts is a non-power-of-2 block, or at least one tilted partition line divides the current block into two non-rectangular shapes comprising at least one trapezoid.
  • the multiple partitions are encoded or decoded separately, wherein said encoding the multiple partitions comprises applying scaling to convert the non-power-of-2 block or a target trapezoid to a converted power-of-2 block, or said decoding the multiple partitions comprises applying inverse scaling to convert the converted power-of-2 block back to the non-power-of-2 block or the target trapezoid.
  • the inverse scaling is applied to reconstructed residue of the converted power-of-2 block at the decoder side. In another embodiment, the inverse scaling is applied to reconstructed data of the converted power-of-2 block at the decoder side.
  • the scaling is applied to residues of the non-power-of-2 block or the target non-rectangular shape at the encoder side. In another embodiment, the scaling is applied to the pixel data and prediction data of the non-power-of-2 block or the target non-rectangular shape to form residues of the non-power-of-2 block or the target non-rectangular shape at the encoder side.
  • the scaling corresponds to scaling up, scaling down or both.
  • the scaling corresponds to projective transformation.
  • multiple scaling schemes are allowed for the scaling.
  • a syntax can be signalled or parsed to indicate a target scaling scheme selected for the current block.
  • the scaling is also applied to a power-of-2 block.
  • the method further comprises signalling or parsing information regarding an exception region in the current picture, wherein only a partial split mode set is allowed for the exception region.
  • the partial split mode set corresponds to no split.
  • the information regarding the exception region is signalled or parsed at a picture level.
  • the information regarding the exception region corresponds to a CTU identification list in the picture level.
  • the CTU ID list corresponds to a bit map for CTUs of the current picture, wherein a bit value equal to 1 indicates a corresponding CTU in the exception region and the bit value equal to 0 indicates the corresponding CTU not in the exception region.
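  • A minimal sketch of such a CTU bit map, assuming raster-scan CTU IDs (an illustrative assumption):

```python
def build_ctu_bitmap(num_ctus, exception_ctu_ids):
    """One bit per CTU; 1 marks a CTU inside the exception region."""
    bitmap = [0] * num_ctus
    for ctu_id in exception_ctu_ids:
        bitmap[ctu_id] = 1
    return bitmap

def parse_exception_ctus(bitmap):
    """Decoder side: recover the exception CTU IDs from the bit map."""
    return [i for i, bit in enumerate(bitmap) if bit == 1]

# Example: a 9x11-CTU picture (as in Fig. 2) with three exception CTUs.
bitmap = build_ctu_bitmap(9 * 11, [0, 11, 12])
```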
  • the method further comprises collecting partition statistics from one or more pictures during a first pass and evaluating syntax element adjustment associated with one or more possible exception regions for bitrate reduction during a second pass at the encoder side.
  • the partition statistics are collected from a previous picture and said one or more possible exception regions are selected for the current picture.
  • Fig. 1A illustrates an exemplary adaptive Inter/Intra video coding system incorporating loop processing.
  • Fig. 1B illustrates a corresponding decoder for the encoder in Fig. 1A.
  • Fig. 2 illustrates an example of partitioning a picture into Coding Tree Units (CTUs) .
  • Fig. 3 illustrates examples of a multi-type tree structure corresponding to vertical binary splitting (SPLIT_BT_VER) , horizontal binary splitting (SPLIT_BT_HOR) , vertical ternary splitting (SPLIT_TT_VER) , and horizontal ternary splitting (SPLIT_TT_HOR) .
  • Fig. 4 illustrates an example of the signalling mechanism of the partition splitting information in quadtree with nested multi-type tree coding tree structure.
  • Fig. 5 shows an example of a CTU divided into multiple CUs with a quadtree and nested multi-type tree coding block structure, where the bold block edges represent quadtree partitioning and the remaining edges represent multi-type tree partitioning.
  • Fig. 6 shows some examples where TT split is forbidden when either width or height of a luma coding block is larger than 64.
  • Fig. 7 illustrates an example of the 64 partitions used in the VVC standard, where the partitions are grouped according to their angles and dashed lines indicate redundant partitions.
  • Fig. 8 illustrates an example of uni-prediction MV selection for the geometric partitioning mode.
  • Fig. 9 illustrates an example of arbitrary vertical binary partition according to an embodiment of the present invention.
  • Fig. 10A illustrates an example of arbitrary QT partition according to an embodiment of the present invention.
  • Fig. 10B illustrates an example of arbitrary TT partition according to an embodiment of the present invention.
  • Fig. 11A illustrates an example of arbitrary GPM-like partition according to an embodiment of the present invention.
  • Fig. 11B illustrates an example of scaling two partitions from the arbitrary GPM-like partition to power-of-2 blocks according to an embodiment of the present invention.
  • Fig. 12 illustrates an example of an exception region with no split.
  • Fig. 13 illustrates a flowchart of an exemplary video coding system, which incorporates arbitrary partition according to one embodiment of the present invention.
  • an arbitrary partition scheme is disclosed to improve the coding performance.
  • the new BT can have any kind of splitting boundary position, as shown in Fig. 9.
  • the partition position (e.g. the N value as before) can be derived at the decoder side by some template-matching method.
  • This new form of BT will be beneficial for many contents, because an object boundary is commonly not at the exact middle of the parent CU, where the conventional VBT and HBT cases will not perfectly match.
  • for the arbitrary boundary of BT, in one sub-embodiment, this kind of splitting can be turned on or off for different CTU/CTU-row/Tile/Slice/Picture/GOP (group of pictures) to save the syntax overhead.
  • some splitting dimensions are not allowed for intra-coding. For example, if the 32x32 block is split into 7x32 and 25x32, intra coding is not allowed. However, if the 32x32 block is split into 16x32 and 16x32, intra-coding is allowed.
  • a scaling process is first applied to the width and/or the height of the partitioned block before transform in the residual domain (i.e., uncompressed CU samples subtracted by CU prediction), which makes the partitioned block suitable for transform. For example, if the 32x32 CU is split into 7x32 and 25x32 CUs, the 7x32 CU can be scaled to 8x32 and the 25x32 CU can be scaled to 32x32 before transform. These CUs are encoded by the normal CU encoding flow. After inverse transform, the CUs can be scaled back to 7x32 and 25x32 to get the decoded samples.
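  • A sketch of the width scaling in this example (7x32 scaled to 8x32 for transform coding, then back to 7x32 after the inverse transform), using simple per-row linear resampling; the interpolation filter is an illustrative assumption, since the description does not fix a particular resampler.

```python
def resample_row(row, new_len):
    """Linearly resample a list of samples to new_len samples."""
    if len(row) == 1 or new_len == 1:
        return [row[0]] * new_len
    out = []
    step = (len(row) - 1) / (new_len - 1)
    for i in range(new_len):
        pos = i * step
        lo = min(int(pos), len(row) - 2)
        frac = pos - lo
        out.append(row[lo] * (1 - frac) + row[lo + 1] * frac)
    return out

def scale_block_width(block, new_width):
    """Scale every row of a block to a new width (e.g. 7 -> 8)."""
    return [resample_row(row, new_width) for row in block]

# 7x32 residual block (32 rows of 7 samples) -> 8x32 for transform coding,
# then back to 7x32 to restore the original partition dimensions.
block_7x32 = [[1.0] * 7 for _ in range(32)]
scaled = scale_block_width(block_7x32, 8)
restored = scale_block_width(scaled, 7)
```

  • Note the round trip is lossless only for simple content (e.g. constant rows); in general the resampling introduces a small distortion on top of the quantization error.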
  • a scaling process is first applied to the width and/or the height of the partitioned blocks where the blocks are formed by uncompressed CU samples and CU prediction individually. The difference between these two blocks is then encoded by a normal CU encoding flow, where transform kernels for power-of-two width and height can be applied as usual.
  • the decoder scales the CU prediction and adds the received residual to CU prediction to construct the decoded CU.
  • the decoded CU is then scaled with a proper ratio for restoring the CU size defined by the arbitrary BT boundary.
  • the scaling method for non-power-of-2 blocks can be applied on other types of block partition, including arbitrary QT (Fig. 10A) and arbitrary TT (Fig. 10B) .
  • the parent CU is partitioned into 2 parts (e.g. partition A and partition B) , the partition boundary can be tilted (i.e., not horizontal boundary or vertical boundary) , as shown in Fig. 11A.
  • the partition method is similar to GPM mode in VVC.
  • partition A and partition B are different CUs, each applying a separate coding flow, such as coding mode, candidate list, or residual coding.
  • it can use shape adaptive DCT for the transform part.
  • the example in Fig. 11A illustrates a case of two trapezoids as a result of a tilted partition line.
  • the two partitions could be a triangle and a trapezoid if the tilted partition line cuts through two adjacent sides of the block.
  • the tilted line partitions the block into two non-rectangular partitions, which may comprise two trapezoids, or one triangle and one trapezoid.
  • the encoder uses a Picture/Tile/Slice/CTU-row/CTU header to turn the GPM-partition type on or off.
  • the encoder can perform a picture analysis (e.g. MV field analysis) to determine whether to turn on the GPM CU-partition or not.
  • partition B can go through the transform stage of the kernel with power-of-2 length. Per-row or per-column scaling can be applied. In one embodiment, the scaling ratio is changed along a certain direction for scaling the shape back to a rectangle. Taking Fig. 11B as an example, partition A can be scaled along the vertical direction 1110 to make it become a rectangle suitable for transform coding. Similarly, partition B can be scaled along the vertical direction 1120 to make it become a rectangle suitable for transform coding. In another embodiment, a general projective transform can be used to make the partition become suitable for transform coding. As is known in the field of 2D projective geometry, the projective transform is the composition of a pair of perspective projections.
  • the projective transform is also a type of linear transform of positions, similar to the affine transform. However, while parallel lines remain parallel for the affine transform, parallel lines do not necessarily remain parallel for the projective transform.
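  • The per-row scaling with a varying ratio can be sketched as below: each row of a trapezoid partition holds a different number of valid samples, and every row is resampled to a common power-of-2 width so that a rectangular transform applies. Nearest-neighbour resampling is an illustrative assumption.

```python
def rectangularize(trapezoid_rows, target_width):
    """Resample rows of varying length (>= 1 sample each) to a fixed width."""
    out = []
    for row in trapezoid_rows:
        # Nearest-neighbour pick: map each target column back into the row.
        out.append([row[min(int(i * len(row) / target_width), len(row) - 1)]
                    for i in range(target_width)])
    return out

# A small trapezoid whose row widths shrink from 4 to 1, scaled to width 4.
trapezoid = [[1, 2, 3, 4], [1, 2, 3], [1, 2], [1]]
rect = rectangularize(trapezoid, 4)
```

  • The inverse operation at the decoder would resample each row back to its original length, restoring the trapezoid shape.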
  • the scaling method can be applied to a normal CU for seeking coding gain. For example, for a CU with size 16x8 or 8x16 (width x height), the scaling process can be applied first to convert the CU to one with size 8x8 after scaling. In one embodiment, an additional flag is signalled to indicate that the scaling mode is enabled.
  • the scaling strategy can implicitly be one of the following conditions:
  • rate distortion optimization in the encoder chooses the best mode with the lowest cost to be the final encoding mode.
  • the inverse scaling process is applied to the CU after the CU is reconstructed.
  • the scaling strategy is signalled in the SPS (Sequence Parameter Set) or PH (Picture Header), or signalled by CTU-level or CU-level syntax elements.
  • syntax elements related to block partition can be reduced.
  • extra syntax elements are signalled through picture header (PH) to notify the decoder regarding the region with block partition exceptions.
  • These syntax elements are the top left CTU coordinate, horizontal CTU span in unit of CTU count, vertical CTU span in unit of CTU count, and the exception types.
  • the exception type can be one of the following types:
  • the above exception type list is not an exhaustive list and other possible types can also be applied.
  • an extra syntax flag can be signalled to indicate the block partition exception being applied to the whole picture and the information related to the region’s coordinate and size can be skipped.
  • there can be several methods to implement this block partition exception mode.
  • multiple-pass coding is adopted: in the first pass, the block split information is collected; then, in the second pass, bit-stream reduction opportunities with the block partition exception are identified.
  • the second pass is a quick pass without real decoding and encoding processing, only performing syntax element adjustment to seek a shorter bit-stream.
  • the block split information is collected while encoding and the information is used to influence the encoding of the following pictures. Therefore, the encoding process is still one pass while the information used to decide the block partition exception is from previously encoded frames.
  • Fig. 13 illustrates a flowchart of an exemplary video coding system, which incorporates arbitrary partition according to one embodiment of the present invention.
  • the steps shown in the flowchart may be implemented as program codes executable on one or more processors (e.g., one or more CPUs) at the encoder side.
  • the steps shown in the flowchart may also be implemented based on hardware, such as one or more electronic devices or processors arranged to perform the steps in the flowchart.
  • input data associated with a current block in a current picture are received in step 1310, wherein the input data comprise pixel data for the current block to be encoded at an encoder side or encoded data associated with the current block to be decoded at a decoder side.
  • the current block is partitioned into multiple partitions using one or more partition lines in step 1320, wherein at least one horizontal or vertical partition line divides the current block into two parts and at least one of the two parts is a non-power-of-2 block, or at least one tilted partition line divides the current block into two non-rectangular shapes comprising at least one trapezoid.
  • the multiple partitions are encoded or decoded separately in step 1330, wherein said encoding the multiple partitions comprises applying scaling to convert the non-power-of-2 block or a target non-rectangular shape to a converted power-of-2 block, or said decoding the multiple partitions comprises applying inverse scaling to convert the converted power-of-2 block back to the non-power-of-2 block or the target non-rectangular shape.
  • Embodiments of the present invention as described above may be implemented in various hardware, software codes, or a combination of both.
  • an embodiment of the present invention can be one or more circuits integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein.
  • An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein.
  • the invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA) .
  • These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
  • the software code or firmware code may be developed in different programming languages and different formats or styles.
  • the software code may also be compiled for different target platforms.
  • different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
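One hypothetical way to implement the quick second pass described in the bullets above is to scan the per-CTU split decisions collected in the first pass and flag runs of unsplit CTUs as candidate exception regions. The function name, the data layout, and the selection criterion (a run of unsplit CTUs within a CTU row) below are illustrative assumptions for a sketch, not the normative procedure:

```python
def find_no_split_regions(split_map):
    """Pass 2: given per-CTU split decisions collected in pass 1
    (split_map[row][col] is True if that CTU was split), return maximal
    runs of unsplit CTUs per CTU row as (row, col_start, ctu_span)
    candidate exception regions."""
    regions = []
    for r, row in enumerate(split_map):
        c = 0
        while c < len(row):
            if not row[c]:
                start = c
                while c < len(row) and not row[c]:
                    c += 1
                regions.append((r, start, c - start))
            else:
                c += 1
    return regions

# Toy 2x4 CTU grid of first-pass split decisions
split_map = [
    [True, False, False, False],
    [False, False, True, True],
]
print(find_no_split_regions(split_map))  # -> [(0, 1, 3), (1, 0, 2)]
```

The encoder would then only signal an exception for regions where doing so actually shortens the bit-stream.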

Abstract

A method and apparatus for video coding using arbitrary partition. According to the method, a current block is partitioned into multiple partitions using one or more partition lines, wherein at least one horizontal or vertical partition line divides the current block into two parts and at least one of the two parts is a non-power-of-2 block, or at least one tilted partition line divides the current block into two non-rectangular shapes comprising at least one trapezoid. The multiple partitions are encoded or decoded separately, wherein said encoding the multiple partitions comprises applying scaling to convert the non-power-of-2 block or a target trapezoid to a converted power-of-2 block, or said decoding the multiple partitions comprises applying inverse scaling to convert the converted power-of-2 block back to the non-power-of-2 block or the target trapezoid.

Description

METHODS AND APPARATUS OF ARBITRARY BLOCK PARTITION IN VIDEO CODING
CROSS REFERENCE TO RELATED APPLICATIONS
The present invention is a non-Provisional Application of and claims priority to U.S. Provisional Patent Application No. 63/383,277, filed on November 11, 2022. The U.S. Provisional Patent Application is hereby incorporated by reference in its entirety.
FIELD OF THE INVENTION
The present invention relates to block partition in a video coding system. In particular, the present invention relates to partitioning with an exception region identified for restricted partition modes.
BACKGROUND
Versatile video coding (VVC) is the latest international video coding standard developed by the Joint Video Experts Team (JVET) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) . The standard has been published as an ISO standard: ISO/IEC 23090-3: 2021, Information technology -Coded representation of immersive media -Part 3: Versatile video coding, published Feb. 2021. VVC is developed based on its predecessor HEVC (High Efficiency Video Coding) by adding more coding tools to improve coding efficiency and also to handle various types of video sources including 3-dimensional (3D) video signals.
Fig. 1A illustrates an exemplary adaptive Inter/Intra video coding system incorporating loop processing. For Intra Prediction, the prediction data is derived based on previously coded video data in the current picture. For Inter Prediction 112, Motion Estimation (ME) is performed at the encoder side and Motion Compensation (MC) is performed based on the result of ME to provide prediction data derived from other picture (s) and motion data. Switch 114 selects Intra Prediction 110 or Inter-Prediction 112 and the selected prediction data is supplied to Adder 116 to form prediction errors, also called residues. The prediction error is then processed by Transform (T) 118 followed by Quantization (Q) 120. The transformed and quantized residues are then coded by Entropy Encoder 122 to be included in a video bitstream corresponding to the compressed video data. The bitstream associated with the transform coefficients is then packed with side information such as motion and coding modes associated with Intra prediction and Inter prediction, and other information such as parameters associated with loop filters applied to the underlying image area. The side information associated with Intra Prediction 110, Inter prediction 112 and in-loop filter 130 is provided to Entropy Encoder 122 as shown in Fig. 1A. When an Inter-prediction mode is used, a reference picture or pictures have to be reconstructed at the encoder end as well. Consequently, the transformed and quantized residues are processed by Inverse Quantization (IQ) 124 and Inverse Transformation (IT) 126 to recover the residues. The residues are then added back to prediction data 136 at Reconstruction (REC) 128 to reconstruct video data. The reconstructed video data may be stored in Reference Picture Buffer 134 and used for prediction of other frames.
As shown in Fig. 1A, incoming video data undergoes a series of processing in the encoding system. The reconstructed video data from REC 128 may be subject to various impairments due to a series of processing. Accordingly, in-loop filter 130 is often applied to the reconstructed video data before the reconstructed video data are stored in the Reference Picture Buffer 134 in order to improve video quality. For example, deblocking filter (DF) , Sample Adaptive Offset (SAO) and Adaptive Loop Filter (ALF) may be used. The loop filter information may need to be incorporated in the bitstream so that a decoder can properly recover the required information. Therefore, loop filter information is also provided to Entropy Encoder 122 for incorporation into the bitstream. In Fig. 1A, Loop filter 130 is applied to the reconstructed video before the reconstructed samples are stored in the reference picture buffer 134. The system in Fig. 1A is intended to illustrate an exemplary structure of a typical video encoder. It may correspond to the High Efficiency Video Coding (HEVC) system, VP8, VP9, H. 264 or VVC.
The decoder, as shown in Fig. 1B, can use similar or a portion of the same functional blocks as the encoder, except for Transform 118 and Quantization 120, since the decoder only needs Inverse Quantization 124 and Inverse Transform 126. Instead of Entropy Encoder 122, the decoder uses an Entropy Decoder 140 to decode the video bitstream into quantized transform coefficients and needed coding information (e.g. ILPF information, Intra prediction information and Inter prediction information) . The Intra prediction 150 at the decoder side does not need to perform the mode search. Instead, the decoder only needs to generate Intra prediction according to the Intra prediction information received from the Entropy Decoder 140. Furthermore, for Inter prediction, the decoder only needs to perform motion compensation (MC 152) according to the Inter prediction information received from the Entropy Decoder 140 without the need for motion estimation.
According to VVC, an input picture is partitioned into non-overlapped square block regions referred to as CTUs (Coding Tree Units) , similar to HEVC. Each CTU can be partitioned into one or multiple smaller size coding units (CUs) . The resulting CU partitions can be in square or rectangular shapes. Also, VVC divides a CTU into prediction units (PUs) as a unit to apply the prediction process, such as Inter prediction, Intra prediction, etc.
The VVC standard incorporates various new coding tools to further improve the coding efficiency over the HEVC standard. Among various new coding tools, some coding tools relevant to the present invention are reviewed as follows.
Partitioning of the CTUs Using a Tree Structure
Pictures are divided into a sequence of coding tree units (CTUs) . The CTU concept is the same as that of HEVC. For a picture that has three sample arrays, a CTU consists of an N×N block of luma samples together with two corresponding blocks of chroma samples. Fig. 2 shows an example of a picture 210 divided into CTUs (shown as small squares) with 9 rows and 11 columns.
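The CTU grid dimensions follow from the picture size by ceiling division, since partial CTUs at the right and bottom borders still count. The picture dimensions below are hypothetical examples (e.g., a 1408x1152 picture would produce the 11-column, 9-row grid of Fig. 2 with 128x128 CTUs):

```python
def ctu_grid(pic_width, pic_height, ctu_size=128):
    """Number of CTU columns and rows covering the picture (partial CTUs count)."""
    cols = (pic_width + ctu_size - 1) // ctu_size   # ceiling division
    rows = (pic_height + ctu_size - 1) // ctu_size
    return cols, rows

print(ctu_grid(1408, 1152))  # -> (11, 9)
print(ctu_grid(1920, 1080))  # -> (15, 9), the bottom CTU row is partial
```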
The maximum allowed size of the luma block in a CTU is specified to be 128×128 (although the maximum size of the luma transform blocks is 64×64) .
In HEVC, a CTU is split into CUs by using a quaternary-tree or quadtree (QT) structure denoted as a coding tree to adapt to various local characteristics. The decision whether to code a picture area using inter-picture (temporal) or intra-picture (spatial) prediction is made at the leaf CU level. Each leaf CU can be further split into one, two or four PUs according to the PU splitting type. Inside one PU, the same prediction process is applied and the relevant information is transmitted to the decoder on a PU basis. After obtaining the residual block by applying the prediction process based on the PU splitting type, a leaf CU can be partitioned into transform units (TUs) according to another quaternary-tree structure similar to the coding tree for the CU. One key feature of the HEVC structure is that it has multiple partition concepts including CU, PU, and TU.
In VVC, a quadtree with nested Multi-Type Tree (MTT) segmentation structure using binary and ternary splits replaces the concepts of multiple partition unit types, i.e. it removes the separation of the CU, PU and TU concepts except as needed for CUs that have a size too large for the maximum transform length, and supports more flexibility for CU partition shapes. In the coding tree structure, a CU can have either a square or rectangular shape. A coding tree unit (CTU) is first partitioned by a quaternary tree (a.k.a. quadtree) structure. Then the quaternary tree leaf nodes can be further partitioned by a multi-type tree structure. As shown in Fig. 3, there are four splitting types in the multi-type tree structure: vertical binary splitting (SPLIT_BT_VER 310) , horizontal binary splitting (SPLIT_BT_HOR 320) , vertical ternary splitting (SPLIT_TT_VER 330) , and horizontal ternary splitting (SPLIT_TT_HOR 340) . The multi-type tree leaf nodes are called coding units (CUs) , and unless the CU is too large for the maximum transform length, this segmentation is used for prediction and transform processing without any further partitioning. This means that, in most cases, the CU, PU and TU have the same block size in the quadtree with nested multi-type tree coding block structure. The exception occurs when the maximum supported transform length is smaller than the width or height of the colour component of the CU.
Fig. 4 illustrates the signalling mechanism of the partition splitting information in the quadtree with nested multi-type tree. If the block is not allowed to split, a leaf node is reached and the block is a CU (i.e., leaf 420) . If the block is allowed to split, then a syntax split_cu_flag is signalled to indicate whether a split is applied to the block. If the block is not split (i.e., path “0” ) , a leaf node is reached and the block is a CU (e.g. leaf 430) . A coding tree unit (CTU) 410 is treated as the root of a quaternary tree and is first partitioned by a quaternary tree structure. When syntax split_cu_flag indicates the block is split (i.e., path “1” ) , syntax split_qt_flag is signalled to indicate whether the quadtree is used. If syntax split_qt_flag indicates the quadtree is used (i.e., path “1” ) , then the block is split into 4 sub-blocks 440. Each quaternary tree leaf node (when sufficiently large to allow it) is then further partitioned by a multi-type tree structure. In the multi-type tree structure, a first flag (mtt_split_cu_flag) is signalled to indicate whether the node is further partitioned; when a node is further partitioned, a second flag (mtt_split_cu_vertical_flag) is signalled to indicate the splitting direction, and then a third flag (mtt_split_cu_binary_flag) is signalled to indicate whether the split is a binary split or a ternary split. When the ternary split is applied, the block is split into 3 sub-blocks 450. When the binary split is used, the block is split into 2 sub-blocks 460. Based on the values of mtt_split_cu_vertical_flag and mtt_split_cu_binary_flag, the multi-type tree splitting mode (MttSplitMode) of a CU is derived as shown in Table 1.
Table 1 - MttSplitMode derivation based on multi-type tree syntax elements

MttSplitMode    mtt_split_cu_vertical_flag    mtt_split_cu_binary_flag
SPLIT_TT_HOR    0                             0
SPLIT_BT_HOR    0                             1
SPLIT_TT_VER    1                             0
SPLIT_BT_VER    1                             1
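Assuming the standard VVC mapping of these two flags (vertical flag selects the direction, binary flag selects binary vs. ternary), the derivation of Table 1 can be transcribed directly:

```python
def mtt_split_mode(vertical_flag, binary_flag):
    """MttSplitMode from mtt_split_cu_vertical_flag and mtt_split_cu_binary_flag."""
    table = {
        (0, 0): "SPLIT_TT_HOR",  # horizontal ternary split
        (0, 1): "SPLIT_BT_HOR",  # horizontal binary split
        (1, 0): "SPLIT_TT_VER",  # vertical ternary split
        (1, 1): "SPLIT_BT_VER",  # vertical binary split
    }
    return table[(vertical_flag, binary_flag)]

print(mtt_split_mode(1, 1))  # -> SPLIT_BT_VER
```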
Fig. 5 shows a CTU divided into multiple CUs with a quadtree and nested multi-type tree coding block structure, where the bold block edges represent quadtree partitioning and the remaining edges represent multi-type tree partitioning. The quadtree with nested multi-type tree partition provides a content-adaptive coding tree structure comprised of CUs. The size of the CU may be as large as the CTU or as small as 4×4 in units of luma samples. For the case of the 4: 2: 0 chroma format, the maximum chroma CB size is 64×64 and the minimum size chroma CB consists of 16 chroma samples.
In VVC, the maximum supported luma transform size is 64×64 and the maximum supported chroma transform size is 32×32. When the width or height of the CB is larger than the maximum transform width or height, the CB is automatically split in the horizontal and/or vertical direction to meet the transform size restriction in that direction.
The following parameters are defined and specified by SPS syntax elements for the quadtree with nested multi-type tree coding tree scheme.
– CTU size: the root node size of a quaternary tree
– MinQTSize: the minimum allowed quaternary tree leaf node size
– MaxBtSize: the maximum allowed binary tree root node size
– MaxTtSize: the maximum allowed ternary tree root node size
– MaxMttDepth: the maximum allowed hierarchy depth of multi-type tree splitting from a quadtree leaf
– MinBtSize: the minimum allowed binary tree leaf node size
– MinTtSize: the minimum allowed ternary tree leaf node size
In one example of the quadtree with nested multi-type tree coding tree structure, the CTU size is set as 128×128 luma samples with two corresponding 64×64 blocks of 4: 2: 0 chroma samples, the MinQTSize is set as 16×16, the MaxBtSize is set as 128×128 and MaxTtSize is set as 64×64, the MinBtSize and MinTtSize (for both width and height) are set as 4×4, and the MaxMttDepth is set as 4. The quaternary tree partitioning is applied to the CTU first to generate quaternary tree leaf nodes. The quaternary tree leaf nodes may have a size from 16×16 (i.e., the MinQTSize) to 128×128 (i.e., the CTU size) . If the leaf QT node is 128×128, it will not be further split by the binary tree since the size exceeds the MaxBtSize and MaxTtSize (i.e., 64×64) . Otherwise, the leaf quadtree node could be further partitioned by the multi-type tree. Therefore, the quaternary tree leaf node is also the root node for the multi-type tree and it has a multi-type tree depth (mttDepth) of 0. When the multi-type tree depth reaches MaxMttDepth (i.e., 4) , no further splitting is considered. When the multi-type tree node has a width equal to MinBtSize and smaller than or equal to 2 *MinTtSize, no further horizontal splitting is considered. Similarly, when the multi-type tree node has a height equal to MinBtSize and smaller than or equal to 2 *MinTtSize, no further vertical splitting is considered.
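A simplified legality check for multi-type tree splits under such parameter settings can be sketched as below. This covers only the size and depth thresholds discussed in this paragraph (binary split needs each half to reach the minimum leaf size, ternary split needs each quarter to reach it); the normative conditions involve further constraints, so treat this as an illustrative sketch:

```python
def allowed_splits(width, height, mtt_depth,
                   max_bt=128, max_tt=64, min_bt=4, min_tt=4, max_mtt_depth=4):
    """Return the multi-type-tree splits still permitted for a node
    (size/depth rules only; a simplified sketch of the constraints)."""
    splits = []
    if mtt_depth < max_mtt_depth:
        if width <= max_bt and width >= 2 * min_bt:
            splits.append("BT_VER")   # halves must be at least min_bt wide
        if height <= max_bt and height >= 2 * min_bt:
            splits.append("BT_HOR")
        if width <= max_tt and width >= 4 * min_tt:
            splits.append("TT_VER")   # quarter-size side parts need min_tt
        if height <= max_tt and height >= 4 * min_tt:
            splits.append("TT_HOR")
    return splits

print(allowed_splits(64, 64, 0))  # all four splits permitted
print(allowed_splits(8, 4, 0))    # -> ['BT_VER'], the height is already minimal
print(allowed_splits(64, 64, 4))  # -> [], MaxMttDepth reached
```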
To allow the 64×64 luma block and 32×32 chroma pipelining design in VVC hardware decoders, TT split is forbidden when either the width or height of a luma coding block is larger than 64, as shown in Fig. 6, where block 610 represents a 128×128 block. TT split is also forbidden when either the width or height of a chroma coding block is larger than 32. Block 620 corresponds to a vertical binary split and block 630 corresponds to a horizontal binary split. Block 640 corresponds to the case where the 128×128 block is split into four 64×64 blocks and the vertical TT split is applied to the upper-left 64×64 block. Block 650 corresponds to the case where the 128×128 block is split into four 64×64 blocks and the horizontal TT split is applied to the upper-left 64×64 block.
In VVC, the coding tree scheme supports the ability for the luma and chroma to have a separate block tree structure. For P and B slices, the luma and chroma CTBs in one CTU have to share the same coding tree structure. However, for I slices, the luma and chroma can have separate block tree structures. When the separate block tree mode is applied, luma CTB is partitioned into CUs by one coding tree structure, and the chroma CTBs are partitioned into chroma CUs by another coding tree structure. This means that a CU in an I slice may consist of a coding block of the luma component or coding blocks of two chroma components, and a CU in a P or B slice always consists of coding blocks of all three colour components unless the video is monochrome.
Geometric Partitioning Mode (GPM)
In VVC, a Geometric Partitioning Mode (GPM) is supported for inter prediction as described in JVET-W2002 (Adrian Browne, et al., Algorithm description for Versatile Video Coding and Test Model 14 (VTM 14) , ITU-T/ISO/IEC Joint Video Exploration Team (JVET) , 23rd Meeting, by teleconference, 7–16 July 2021, document JVET-W2002) . The geometric partitioning mode is signalled using a CU-level flag as one kind of merge mode, with other merge modes including the regular merge mode, the MMVD mode, the CIIP mode and the subblock merge mode. A total of 64 partitions are supported by the geometric partitioning mode for each possible CU size, w×h = 2^m×2^n with m, n ∈ {3…6} excluding 8x64 and 64x8. The GPM mode can be applied to skip or merge CUs having a size within the above limit and having at least two regular merge modes.
When this mode is used, a CU is split into two parts by a geometrically located straight line at certain angles. In VVC, there are a total of 20 angles and 4 offset distances used for GPM, which has been reduced from 24 angles in an earlier draft. The location of the splitting line is mathematically derived from the angle and offset parameters of a specific partition. In VVC, there are a total of 64 partitions as shown in Fig. 7, where the partitions are grouped according to their angles and dashed lines indicate redundant partitions. Each part of a geometric partition in the CU is inter-predicted using its own motion; only uni-prediction is allowed for each partition, that is, each part has one motion vector and one reference index. In Fig. 7, each line corresponds to the boundary of one partition. The partitions are grouped according to their angles. For example, partition group 710 consists of two vertical GPM partitions (i.e., 90°) . Partition group 720 consists of four slant GPM partitions with a small angle from the vertical direction. Also, partition group 730 consists of two vertical GPM partitions (i.e., 270°) similar to those of group 710, but with an opposite direction. The uni-prediction motion constraint is applied to ensure that only two motion compensated predictions are needed for each CU, same as the conventional bi-prediction. The uni-prediction motion for each partition is derived using the process described later.
If the geometric partitioning mode is used for the current CU, then a geometric partition index indicating the selected partition mode of the geometric partition (angle and offset) , and two merge indices (one for each partition) are further signalled. The maximum number of GPM merge candidates is signalled explicitly in the SPS (Sequence Parameter Set) and specifies the syntax binarization for the GPM merge indices. After predicting each part of the geometric partition, the sample values along the geometric partition edge are adjusted using a blending processing with adaptive weights using the process described later. This is the prediction signal for the whole CU, and the transform and quantization process will be applied to the whole CU as in other prediction modes. Finally, the motion field of a CU predicted using the geometric partition modes is stored using the process described later.
Uni-Prediction Candidate List Construction
The uni-prediction candidate list is derived directly from the merge candidate list constructed according to the extended merge prediction process. Denote n as the index of the uni-prediction motion in the geometric uni-prediction candidate list. The LX motion vector of the n-th extended merge candidate (X = 0 or 1, i.e., LX = L0 or L1) , with X equal to the parity of n, is used as the n-th uni-prediction motion vector for the geometric partitioning mode. These motion vectors are marked with “x” in Fig. 8. In case a corresponding LX motion vector of the n-th extended merge candidate does not exist, the L (1 -X) motion vector of the same candidate is used instead as the uni-prediction motion vector for the geometric partitioning mode.
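The parity rule above can be written down directly. The list-of-dicts data layout below is a hypothetical representation of merge candidates (each entry may hold 'L0' and/or 'L1' motion); only the parity/fallback logic reflects the text:

```python
def gpm_uni_candidate(merge_list, n):
    """Pick the uni-prediction motion for GPM candidate n.

    X = parity of n selects list LX; fall back to L(1-X) if LX is absent."""
    x = n & 1                       # parity of the candidate index
    cand = merge_list[n]
    key = f"L{x}"
    if cand.get(key) is not None:
        return key, cand[key]
    other = f"L{1 - x}"             # fallback to the other reference list
    return other, cand[other]

merge_list = [
    {"L0": (3, -1), "L1": (2, 2)},  # candidate 0: both lists available
    {"L0": (5, 0), "L1": None},     # candidate 1: only L0 available
]
print(gpm_uni_candidate(merge_list, 0))  # parity 0 -> L0 motion
print(gpm_uni_candidate(merge_list, 1))  # parity 1, L1 missing -> falls back to L0
```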
Blending Along the Geometric Partitioning Edge
After predicting each part of a geometric partition using its own motion, blending is applied to the two prediction signals to derive samples around the geometric partition edge. The blending weights for each position of the CU are derived based on the distance between the individual position and the partition edge.
The two integer blending matrices (W0 and W1) are utilized for the GPM blending process. The weights in the GPM blending matrices fall within the value range [0, 8] and are derived based on the displacement from a sample position to the GPM partition boundary.
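A blending of this kind can be sketched as a clipped ramp on the signed displacement to the partition edge. The ramp width, the rounding offset, and the integer arithmetic below are illustrative choices, not the normative VVC weight derivation:

```python
def blend_weights(disp):
    """Map signed displacement from the GPM edge to (w0, w1), each in [0, 8].

    Positive disp means the sample lies on the partition-0 side."""
    w0 = max(0, min(8, disp + 4))   # clipped linear ramp, 8 levels wide
    return w0, 8 - w0               # weights always sum to 8

def blend(p0, p1, disp):
    """Weighted average of the two uni-prediction samples (weights sum to 8)."""
    w0, w1 = blend_weights(disp)
    return (w0 * p0 + w1 * p1 + 4) >> 3  # +4 for rounding before the shift

print(blend_weights(0))    # on the edge -> (4, 4)
print(blend_weights(10))   # deep inside partition 0 -> (8, 0)
print(blend(100, 60, 0))   # -> 80, an even mix on the edge
```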
In the present invention, a method and apparatus of arbitrary partition are disclosed to improve the efficiency of partition signalling.
BRIEF SUMMARY OF THE INVENTION
A method and apparatus for video coding using arbitrary partition are disclosed. According to the method, input data associated with a current block in a current picture are received, wherein the input data comprise pixel data for the current block to be encoded at an encoder side or encoded data associated with the current block to be decoded at a decoder side. The current block is partitioned into multiple partitions using one or more partition lines, wherein at least one horizontal or vertical partition line divides the current block into two parts and at least one of the two parts is a non-power-of-2 block, or at least one tilted partition line divides the current block into two non-rectangular shapes comprising at least one trapezoid. The multiple partitions are encoded or decoded separately, wherein said encoding the multiple partitions comprises applying scaling to convert the non-power-of-2 block or a target trapezoid to a converted power-of-2 block, or said decoding the multiple partitions comprises applying inverse scaling to convert the converted power-of-2 block back to the non-power-of-2 block or the target trapezoid.
In one embodiment, the inverse scaling is applied to reconstructed residue of the converted power-of-2 block at the decoder side. In another embodiment, the inverse scaling is applied to reconstructed data of the converted power-of-2 block at the decoder side.
In one embodiment, the scaling is applied to residues of the non-power-of-2 block or the target non-rectangular shape at the encoder side. In another embodiment, the scaling is applied to the pixel data and prediction data of the non-power-of-2 block or the target non-rectangular shape to form residues of the non-power-of-2 block or the target non-rectangular shape at the encoder side.
In one embodiment, the scaling corresponds to scaling up, scaling down or both. In another embodiment, the scaling corresponds to projective transformation. In yet another embodiment, multiple scaling schemes are allowed for the scaling. Furthermore, a syntax can be signalled or parsed to indicate a target scaling scheme selected for the current block.
In one embodiment, the scaling is also applied to a power-of-2 block.
In one embodiment, the method further comprises signalling or parsing information regarding an exception region in the current picture, wherein only a partial split mode set is allowed for the exception region. In one embodiment, the partial split mode set corresponds to no split. In one embodiment, the information regarding the exception region is signalled or parsed at a picture level. In one embodiment, the information regarding the exception region corresponds to a CTU identification list in the picture level. In another embodiment, the CTU ID list corresponds to a bit map for CTUs of the current picture, wherein a bit value equal to 1 indicates a corresponding CTU in the exception region and the bit value equal to 0 indicates the corresponding CTU not in the exception region.
In one embodiment, the method further comprises collecting partition statistics from one or more pictures during a first pass and evaluating syntax element adjustment associated with one or more possible exception regions for bitrate reduction during a second pass at the encoder side. In one embodiment, the partition statistics are collected from a previous picture and said one or more possible exception regions are selected for the current picture.
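The CTU bit map mentioned above is straightforward to pack into bytes. The row-major CTU ordering and MSB-first bit packing below are assumptions chosen for illustration:

```python
def pack_exception_bitmap(flags):
    """Pack per-CTU exception flags (row-major, 1 = CTU is in the exception
    region) into bytes, 8 CTUs per byte, MSB first."""
    out = bytearray((len(flags) + 7) // 8)
    for i, f in enumerate(flags):
        if f:
            out[i // 8] |= 0x80 >> (i % 8)
    return bytes(out)

flags = [1, 0, 0, 0, 1, 1, 0, 0, 1]        # 9 CTUs -> 2 bytes
print(pack_exception_bitmap(flags).hex())  # -> '8c80'
```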
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1A illustrates an exemplary adaptive Inter/Intra video coding system incorporating loop processing.
Fig. 1B illustrates a corresponding decoder for the encoder in Fig. 1A.
Fig. 2 illustrates an example of partitioning a picture into Coding Tree Units (CTUs) .
Fig. 3 illustrates examples of a multi-type tree structure corresponding to vertical binary splitting (SPLIT_BT_VER) , horizontal binary splitting (SPLIT_BT_HOR) , vertical ternary splitting (SPLIT_TT_VER) , and horizontal ternary splitting (SPLIT_TT_HOR) .
Fig. 4 illustrates an example of the signalling mechanism of the partition splitting information in quadtree with nested multi-type tree coding tree structure.
Fig. 5 shows an example of a CTU divided into multiple CUs with a quadtree and nested multi-type tree coding block structure, where the bold block edges represent quadtree partitioning and the remaining edges represent multi-type tree partitioning.
Fig. 6 shows some examples of TT split forbidden when either width or height of a luma coding block is larger than 64.
Fig. 7 illustrates an example of the of 64 partitions used in the VVC standard, where the partitions are grouped according to their angles and dashed lines indicate redundant partitions.
Fig. 8 illustrates an example of uni-prediction MV selection for the geometric partitioning mode.
Fig. 9 illustrates an example of arbitrary vertical binary partition according to an embodiment of the present invention.
Fig. 10A illustrates an example of arbitrary QT partition according to an embodiment of the present invention.
Fig. 10B illustrates an example of arbitrary TT partition according to an embodiment of the present invention.
Fig. 11A illustrates an example of arbitrary GPM-like partition according to an  embodiment of the present invention.
Fig. 11B illustrates an example of scaling two partitions from the arbitrary GPM-like partition to power-of-2 blocks according to an embodiment of the present invention.
Fig. 12 illustrates an example of an exception region with no split.
Fig. 13 illustrates a flowchart of an exemplary video coding system, which incorporates arbitrary partition according to one embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the systems and methods of the present invention, as represented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. References throughout this specification to “one embodiment, ” “an embodiment, ” or similar language mean that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, etc. In other instances, well-known structures, or operations are not shown or described in detail to avoid obscuring aspects of the invention. The illustrated embodiments of the invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of apparatus and methods that are consistent with the invention as claimed herein.
In the present invention, an arbitrary partition scheme is disclosed to improve the coding performance.
Arbitrary Boundary of QT/BT/TT with Arbitrary Scaling Ratio
In this proposed method, a new type of BT (Binary Splitting) is proposed. Instead of the conventional BT, the new BT can have any splitting boundary position, as shown in Fig. 9. For example, if the CU is 32x32, the arbitrary horizontal BT will partition the block into 2 sub-CUs, where one is 32xN and the other is 32x (32-N) , in which N is a value from 1 to 31. In one sub-embodiment, the partition position (e.g. the N value as before) can be signalled from the encoder to the decoder. In another sub-embodiment, it can be derived at the decoder side by some template-matching method.
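The arbitrary horizontal BT above admits 31 boundary positions for a 32x32 CU; the enumeration is a direct transcription (sizes are given as width x height, matching the text):

```python
def arbitrary_hbt_sizes(width, height):
    """All (top, bottom) sub-CU sizes for an arbitrary horizontal binary split."""
    return [((width, n), (width, height - n)) for n in range(1, height)]

sizes = arbitrary_hbt_sizes(32, 32)
print(len(sizes))  # -> 31 possible boundary positions (N = 1..31)
print(sizes[6])    # -> ((32, 7), (32, 25)), i.e. a 32x7 and a 32x25 sub-CU
```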
This new form of BT will be beneficial for many types of content, because an object boundary rarely falls exactly in the middle of the parent CU, where the conventional VBT and HBT cases will not match it perfectly. With the arbitrary boundary of BT, in one sub-embodiment, this kind of splitting can be turned on or off for different CTU/CTU-Row/Tile/Slice/Picture/GOP (group of pictures) levels to save the syntax overhead.
In one sub-embodiment, the "Arbitrary Boundary of BT" can be restricted to the skip mode only in order to simplify the transform algorithm. In another sub-embodiment, the "Arbitrary Boundary of BT" can be restricted to the inter mode only in order to simplify the intra-prediction algorithm. In another embodiment, some splitting dimensions are not allowed for intra coding. For example, if the 32x32 block is split into 7x32 and 25x32, intra coding is not allowed. However, if the 32x32 block is split into 16x32 and 16x32, intra coding is allowed.
In another embodiment, a scaling process is first applied to the width and/or the height of the partitioned block before the transform in the residual domain (i.e., the uncompressed CU samples subtracted by the CU prediction), which makes the partitioned block suitable for the transform. For example, if the 32x32 CU is split into 7x32 and 25x32 CUs, the 7x32 CU can be scaled to 8x32 and the 25x32 CU can be scaled to 32x32 before the transform. These CUs are then encoded by the normal CU encoding flow. After the inverse transform, the CUs can be scaled back to 7x32 and 25x32 to obtain the decoded samples.
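The size conversion can be sketched as follows. A real codec would use a proper interpolation filter for the resampling; the nearest-neighbour picking below is only a placeholder to show the flow of scaling a non-power-of-2 width to the next power of 2 before the transform and scaling back afterwards (all names are illustrative):

```python
def next_pow2(n):
    """Smallest power of 2 that is >= n (e.g. 7 -> 8, 25 -> 32)."""
    p = 1
    while p < n:
        p *= 2
    return p

def scale_width(block, new_w):
    """Resample each row of a 2-D block to new_w samples by
    nearest-neighbour picking (stand-in for a real interpolation filter)."""
    old_w = len(block[0])
    return [[row[(x * old_w) // new_w] for x in range(new_w)] for row in block]

# A 7x32 residual block (32 rows of 7 samples, width x height) is scaled to
# 8x32 so a power-of-2 transform kernel applies; after the inverse transform
# it is scaled back to 7x32.  Note this nearest-neighbour round trip is not
# lossless; it only illustrates the size conversion.
residual = [list(range(7)) for _ in range(32)]
w2 = next_pow2(7)                      # 8
scaled = scale_width(residual, w2)     # 8x32, transform-friendly
restored = scale_width(scaled, 7)      # back to the original 7x32 size
```

The 25x32 partition would be handled the same way with `next_pow2(25) == 32`.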
In another embodiment, a scaling process is first applied to the width and/or the height of the partitioned blocks, where the blocks are formed by the uncompressed CU samples and the CU prediction individually. The difference between these two blocks is then encoded by a normal CU encoding flow, where transform kernels for power-of-2 width and height can be applied as usual. When decoding, the decoder scales the CU prediction and adds the received residual to the CU prediction to construct the decoded CU. The decoded CU is then scaled with a proper ratio to restore the CU size divided by the arbitrary boundary of BT. In addition to BT, the scaling method for non-power-of-2 blocks can be applied to other types of block partition, including arbitrary QT (Fig. 10A) and arbitrary TT (Fig. 10B).
GPM-Like Partition with Arbitrary Scaling Ratio
In this method, the parent CU is partitioned into 2 parts (e.g. partition A and partition B), and the partition boundary can be tilted (i.e., neither a horizontal boundary nor a vertical boundary), as shown in Fig. 11A. The partition method is similar to the GPM mode in VVC. Partitions A and B are different CUs and apply separate coding flows, such as coding mode, candidate list, or residual coding. For example, a shape-adaptive DCT can be used for the transform part. While the example in Fig. 11A illustrates a case of two trapezoids resulting from a tilted partition line, the two partitions could be a triangle and a trapezoid if the tilted partition line cuts through two adjacent sides of the block. In other words, the tilted line partitions the block into two non-rectangular partitions, which may comprise two trapezoids, or one triangle and one trapezoid.
For content-dependent characteristics, in one embodiment, a Picture/Tile/Slice/CTU-row/CTU header is used to control on or off for the GPM partition type. For example, the encoder can perform a picture analysis (e.g. MV field analysis) to determine whether to turn on the GPM CU partition or not. Moreover, it is further proposed to have a GPM-TT partition, which means the splitting boundary can be tilted for the TT partition.
Partition A and partition B can each go through the transform stage with a kernel of power-of-2 length. Per-row or per-column scaling can be applied. In one embodiment, the scaling ratio is changed along a certain direction for scaling the shape back to a rectangle. Taking Fig. 11B as an example, partition A can be scaled along the vertical direction 1110 to become a rectangle suitable for transform coding. Similarly, partition B can be scaled along the vertical direction 1120 to become a rectangle suitable for transform coding. In another embodiment, a general projective transform can be used to make the partition suitable for transform coding. As is known in the field of 2D projective geometry, the projective transform is the composition of a pair of perspective projections. It describes what happens to the perceived positions of observed objects when the point of view of the observer changes. The projective transform is also a type of linear transform of positions, similar to the affine transform. However, while parallel lines remain parallel under the affine transform, parallel lines do not necessarily remain parallel under the projective transform.
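The per-column scaling idea above can be sketched as stretching each column of a trapezoidal partition, whose valid height varies along the tilted boundary, to a common power-of-2 height. Nearest-neighbour picking again stands in for a real resampling filter, and the data layout is illustrative only:

```python
def column_stretch(columns, target_h):
    """Per-column vertical scaling: each column (a list of valid samples,
    possibly of different lengths) is stretched to target_h samples,
    turning a trapezoidal partition into a rectangle whose height suits a
    power-of-2 transform kernel."""
    out = []
    for col in columns:
        h = len(col)
        out.append([col[(y * h) // target_h] for y in range(target_h)])
    return out

# A tilted boundary leaves columns of heights 8, 7, 6, 5; stretching each
# to 8 samples yields a 4x8 rectangle (width x height).
trapezoid = [list(range(h)) for h in (8, 7, 6, 5)]
rectangle = column_stretch(trapezoid, 8)
```

The inverse operation at the decoder would shrink each column back to its original height, which the tilted partition line determines, so no per-column lengths need to be signalled.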
CU-based scaling down and scaling up
The scaling method, as disclosed in the section entitled Arbitrary Boundary of QT/BT/TT with Arbitrary Scaling Ratio, can be applied to a normal CU to seek coding gain. For example, for a CU with size 16x8 or 8x16 (width x height), the scaling process can be applied first to convert the CU to one with size 8x8 after scaling. In one embodiment, an additional flag is signalled to indicate that the scaling mode is enabled. The scaling strategy can implicitly be one of the following conditions:
a) If the CU is not a square CU, always scale the longer side down to the length of the shorter side,
b) If the CU is not a square CU, always scale the shorter side up to the length of the longer side,
c) If the CU is a square CU with size 2Nx2N, scale the CU to NxN.
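A small sketch of how the implicit strategies determine the scaled size (the function is hypothetical; which strategy applies would follow from the signalled configuration, and strategy (b) is shown only as a comment):

```python
def implicit_scaling_target(w, h):
    """Return the (width, height) a CU is scaled to under the implicit
    strategies: (a) a non-square CU shrinks its longer side to the shorter
    side; (c) a square 2Nx2N CU is scaled to NxN.  Strategy (b), the
    symmetric alternative to (a), would instead return
    (max(w, h), max(w, h)) for a non-square CU."""
    if w != h:
        s = min(w, h)        # strategy (a): 16x8 or 8x16 -> 8x8
        return s, s
    return w // 2, h // 2    # strategy (c): 2Nx2N -> NxN

new_w, new_h = implicit_scaling_target(16, 8)
```

Under strategy (a), both the 16x8 and 8x16 examples from the text map to an 8x8 CU, so the decoder can derive the scaled size without any extra dimension syntax.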
After CU scaling, the rate-distortion optimization in the encoder chooses the best mode with the lowest cost as the final encoding mode. When decoding a CU with this mode flag set to true, the inverse scaling process is applied to the CU after reconstructing the CU. In another embodiment, the scaling strategy is signalled in the SPS (Sequence Parameter Set) or PH (Picture Header), or signalled by CTU-level or CU-level syntax elements. The scaling of the CU can be done with one of the following procedures:
a) At the encoder side, the original pixels of the CU are scaled to the new size, and the prediction pixels of the CU are scaled to the new size. The CU residue is calculated by subtracting the scaled prediction pixels from the scaled original pixels.
b) At the encoder side, the CU residue is first calculated by subtracting the prediction pixels from the original pixels. The residue is then scaled to the new size.
When different methods for scaling the CU at the encoder side are used, the inverse scaling process at the decoder side should be changed accordingly.
Only allowing partial split mode inside one region or picture
As depicted in Fig. 12, there is a region highlighted by the thick-line rectangle 1210, where only the no-split mode is selected for all the CTUs in this region. Therefore, if this region is signalled to the decoder, the syntax elements related to block partition can be reduced. In one embodiment, extra syntax elements are signalled through the picture header (PH) to notify the decoder of the region with block partition exceptions. These syntax elements are the top-left CTU coordinate, the horizontal CTU span in units of CTU count, the vertical CTU span in units of CTU count, and the exception type. The exception type can be one of the following types:
(a) Region with no split
(b) Region without MTT
(c) Region without VBT and TT (only QT and HBT allowed)
(d) Region with only QT and VBT
(e) Region with only BT
(f) Region with only HBT
(g) Region with only VBT
The above exception type list is not an exhaustive list and other possible types can also be applied. In another embodiment, if all the pictures can be encoded with a certain block partition exception type, an extra syntax flag can be signalled to indicate that the block partition exception is applied to the whole picture, and the information related to the region’s coordinate and size can be skipped.
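A hypothetical decoder-side helper can make the signalling concrete: given the exception region parameters described above (top-left CTU coordinate plus horizontal and vertical spans in CTU units) and the exception type, it decides whether a given split mode may be parsed for a CTU. The type names and allowed-mode sets below are illustrative stand-ins, not normative syntax:

```python
# Allowed split modes per exception type (illustrative subset of the list
# above; "NONE" means no further split).
ALLOWED = {
    "no_split":    {"NONE"},                  # type (a)
    "qt_hbt_only": {"NONE", "QT", "HBT"},     # type (c): no VBT and no TT
    "hbt_only":    {"NONE", "HBT"},           # type (f)
}

def split_allowed(ctu_x, ctu_y, region, split_mode):
    """Return True if split_mode may be used for the CTU at (ctu_x, ctu_y).
    region = (x0, y0, span_x, span_y, exception_type), all in CTU units."""
    x0, y0, span_x, span_y, exc_type = region
    inside = (x0 <= ctu_x < x0 + span_x) and (y0 <= ctu_y < y0 + span_y)
    if not inside:
        return True                 # no restriction outside the region
    return split_mode in ALLOWED[exc_type]

# A 4x3-CTU no-split region with its top-left CTU at (2, 1).
region = (2, 1, 4, 3, "no_split")
```

Because both encoder and decoder evaluate the same rule, split-mode syntax elements that the rule forbids simply need not be transmitted for CTUs inside the region, which is where the bit savings come from.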
For the encoder, there can be several methods to implement this block partition exception mode. In one embodiment, multiple-pass coding is adopted: in the first pass, the block split information is collected; then, in the second pass, opportunities for bit-stream reduction with the block partition exception are identified. The second pass is a quick pass without real decoding and encoding processing, performing only syntax element adjustment to seek a shorter bit-stream.
In another embodiment, for a certain picture, the block split information is collected while encoding, and this information is used to influence the encoding of the following pictures. Therefore, the encoding process is still one pass, while the information used to decide the block partition exception comes from previously encoded frames.
Fig. 13 illustrates a flowchart of an exemplary video coding system, which incorporates arbitrary partition according to one embodiment of the present invention. The steps shown in the flowchart may be implemented as program codes executable on one or more processors (e.g., one or more CPUs) at the encoder side. The steps shown in the flowchart may also be implemented based on hardware, such as one or more electronic devices or processors arranged to perform the steps in the flowchart. According to the method, input data associated with a current block in a current picture are received in step 1310, wherein the input data comprise pixel data for the current block to be encoded at an encoder side or encoded data associated with the current block to be decoded at a decoder side. The current block is partitioned into multiple partitions using one or more partition lines in step 1320, wherein at least one horizontal or vertical partition line divides the current block into two parts and at least one of the two parts is a non-power-of-2 block, or at least one tilted partition line divides the current block into two non-rectangular shapes comprising at least one trapezoid. The multiple partitions are encoded or decoded separately in step 1330, wherein said encoding the multiple partitions comprises applying scaling to convert the non-power-of-2 block or a target non-rectangular shape to a converted power-of-2 block, or said decoding the multiple partitions comprises applying inverse scaling to convert the converted power-of-2 block back to the non-power-of-2 block or the target non-rectangular shape.
The flowchart shown is intended to illustrate an example of video coding according to the present invention. A person skilled in the art may modify each step, re-arrange the steps, split a step, or combine steps to practice the present invention without departing from the spirit of the present invention. In the disclosure, specific syntax and semantics have been used to illustrate examples to implement embodiments of the present invention. A skilled person may practice the present invention by substituting the syntax and semantics with equivalent syntax and semantics without departing from the spirit of the present invention.
The above description is presented to enable a person of ordinary skill in the art to practice the present invention as provided in the context of a particular application and its requirement. Various modifications to the described embodiments will be apparent to those with skill in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed. In the above detailed description, various specific details are illustrated in order to provide a thorough understanding of the present invention. Nevertheless, it will be understood by those skilled in the art that the present invention may be practiced without such specific details.
Embodiments of the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be one or more circuits integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (18)

  1. A method of video coding, the method comprising:
    receiving input data associated with a current block in a current picture, wherein the input data comprise pixel data for the current block to be encoded at an encoder side or encoded data associated with the current block to be decoded at a decoder side;
    partitioning the current block into multiple partitions using one or more partition lines, wherein at least one horizontal or vertical partition line divides the current block into two parts and at least one of the two parts is a non-power-of-2 block, or at least one tilted partition line divides the current block into two non-rectangular shapes comprising at least one trapezoid; and
    encoding or decoding the multiple partitions separately, wherein said encoding the multiple partitions comprises applying scaling to convert the non-power-of-2 block or a target non-rectangular shape to a converted power-of-2 block, or said decoding the multiple partitions comprises applying inverse scaling to convert the converted power-of-2 block back to the non-power-of-2 block or the target non-rectangular shape.
  2. The method of Claim 1, wherein the inverse scaling is applied to reconstructed residue of the converted power-of-2 block at the decoder side.
  3. The method of Claim 1, wherein the inverse scaling is applied to reconstructed data of the converted power-of-2 block at the decoder side.
  4. The method of Claim 1, wherein the scaling is applied to residues of the non-power-of-2 block or the target non-rectangular shape at the encoder side.
  5. The method of Claim 1, wherein the scaling is applied to the pixel data and prediction data of the non-power-of-2 block or the target non-rectangular shape to form residues of the non-power-of-2 block or the target non-rectangular shape at the encoder side.
  6. The method of Claim 1, wherein the scaling corresponds to scaling up, scaling down or both.
  7. The method of Claim 1, wherein the scaling corresponds to projective transformation.
  8. The method of Claim 1, wherein multiple scaling schemes are allowed for the scaling.
  9. The method of Claim 8, wherein a syntax is signalled or parsed to indicate a target scaling scheme selected for the current block.
  10. The method of Claim 1, wherein the scaling is also applied to a power-of-2 block.
  11. The method of Claim 1 further comprising signalling or parsing information regarding an exception region in the current picture, wherein only a partial split mode set is allowed for the exception region.
  12. The method of Claim 11, wherein the partial split mode set corresponds to no split.
  13. The method of Claim 11, wherein the information regarding the exception region is signalled or parsed at a picture level.
  14. The method of Claim 13, wherein the information regarding the exception region corresponds to a CTU identification list in the picture level.
  15. The method of Claim 14, wherein the CTU ID list corresponds to a bit map for CTUs of the current picture, wherein a bit value equal to 1 indicates that a corresponding CTU is in the exception region and the bit value equal to 0 indicates that the corresponding CTU is not in the exception region.
  16. The method of Claim 1 further comprising collecting partition statistics from one or more pictures during a first pass and evaluating syntax element adjustment associated with one or more possible exception regions for bitrate reduction during a second pass at the encoder side.
  17. The method of Claim 16, wherein the partition statistics is collected from a previous picture and said one or more possible exception regions are selected for the current picture.
  18. An apparatus for video coding, the apparatus comprising one or more electronics or processors arranged to:
    receive input data associated with a current block, wherein the input data comprise pixel data for the current block to be encoded at an encoder side or encoded data associated with the current block to be decoded at a decoder side;
    partition the current block into multiple partitions using one or more partition lines, wherein at least one horizontal or vertical partition line divides the current block into two parts and at least one of the two parts is a non-power-of-2 block, or at least one tilted partition line divides the current block into two non-rectangular shapes comprising at least one trapezoid; and
    encode or decode the multiple partitions separately, wherein encoding process of the multiple partitions comprises applying scaling to convert the non-power-of-2 block or a target non-rectangular shape to a converted power-of-2 block, or decoding process of the multiple partitions comprises applying inverse scaling to convert the converted power-of-2 block to the non-power-of-2 block or the target non-rectangular shape.
PCT/CN2023/124124 2022-11-11 2023-10-12 Methods and apparatus of arbitrary block partition in video coding WO2024099024A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263383277P 2022-11-11 2022-11-11
US63/383,277 2022-11-11

Publications (1)

Publication Number Publication Date
WO2024099024A1 true WO2024099024A1 (en) 2024-05-16

Family

ID=91031901

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/124124 WO2024099024A1 (en) 2022-11-11 2023-10-12 Methods and apparatus of arbitrary block partition in video coding

Country Status (1)

Country Link
WO (1) WO2024099024A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3562154A1 (en) * 2016-12-26 2019-10-30 Nec Corporation Image encoding method, image decoding method, image encoding device, image decoding device and program
US20200014946A1 (en) * 2018-07-09 2020-01-09 Tencent America LLC Method and apparatus for block partition with non-uniform quad split
WO2020251421A2 (en) * 2019-10-03 2020-12-17 Huawei Technologies Co., Ltd. Method and apparatus of high-level syntax for non-rectangular partitioning modes
US20210377531A1 (en) * 2019-02-15 2021-12-02 Beijing Bytedance Network Technology Co., Ltd. Transform parameter derivation based on block partition
CN113892264A (en) * 2019-06-05 2022-01-04 高通股份有限公司 Using non-rectangular prediction modes to reduce motion field storage for video data prediction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
J. MA (FRAUNHOFER), A. WIECKOWSKI (FRAUNHOFER), V. GEORGE, T. HINZ, J. BRANDENBURG, S. DE LUXAN HERNANDEZ, H. KIRCHHOFFER, R. SKUP: "Quadtree plus binary tree with shifting (including software)", 10. JVET MEETING; 20180410 - 20180420; SAN DIEGO; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), no. JVET-J0035, 11 April 2018 (2018-04-11), XP030248139 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23887716

Country of ref document: EP

Kind code of ref document: A1