CN113994672A - Method and apparatus for video encoding and decoding using triangle prediction

Info

Publication number
CN113994672A
Authority
CN
China
Prior art keywords
list
prediction
motion vector
candidate
merge
Legal status
Granted
Application number
CN202080042822.XA
Other languages
Chinese (zh)
Other versions
CN113994672B (en)
Inventor
王祥林
陈漪纹
修晓宇
马宗全
朱弘正
叶水明
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Publication of CN113994672A
Application granted
Publication of CN113994672B


Classifications

    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/52 Processing of motion vectors by encoding by predictive encoding
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards


Abstract

Methods and apparatus for video encoding and decoding are provided. The method includes: partitioning a video picture into a plurality of Coding Units (CUs), wherein at least one CU of the plurality of CUs is further partitioned into two Prediction Units (PUs) including at least one geometrically shaped PU; constructing a first merge list including a plurality of candidates, wherein each candidate includes one or more motion vectors; and obtaining a uni-directional prediction merge list for the geometrically shaped PU by directly selecting one or more motion vectors from the first merge list.

Description

Method and apparatus for video encoding and decoding using triangle prediction
Cross Reference to Related Applications
This application claims priority to U.S. provisional application No. 62/838,935, entitled "Video Coding with Triangle Prediction," filed on April 25, 2019, the entire contents of which are incorporated herein by reference for all purposes.
Technical Field
The present application relates generally to video coding and compression, and in particular, but not exclusively, to methods and apparatus for motion compensated prediction using triangle prediction units (i.e., a special case of geometrically partitioned prediction units) in video coding.
Background
Various electronic devices, such as digital televisions, laptop or desktop computers, tablet computers, digital cameras, digital recording devices, digital media players, video game consoles, smart phones, video teleconferencing devices, video streaming devices, and the like, support digital video. Electronic devices transmit, receive, encode, decode, and/or store digital video data by implementing video compression/decompression. Digital video devices implement video coding techniques, such as those described in the standards defined by Versatile Video Coding (VVC), the Joint Exploration test Model (JEM), MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding (AVC), ITU-T H.265/High Efficiency Video Coding (HEVC), and extensions of such standards.
Video coding typically uses prediction methods (e.g., inter-prediction, intra-prediction) that exploit redundancy present in video images or sequences. An important goal of video codec techniques is to compress video data into a form that uses a lower bit rate while avoiding or minimizing degradation of video quality. As ever-evolving video services become available, there is a need for codec techniques with better codec efficiency.
Video compression typically includes performing spatial (intra) prediction and/or temporal (inter) prediction to reduce or remove redundancy inherent in video data. For block-based video coding, a video frame is divided into one or more slices, each slice having a plurality of video blocks, which may also be referred to as Coding Tree Units (CTUs). Each CTU may contain one Coding Unit (CU) or be recursively divided into smaller CUs until a predefined minimum CU size is reached. Each CU (also referred to as a leaf CU) contains one or more Transform Units (TUs) and each CU also contains one or more Prediction Units (PUs). Each CU may be coded in intra, inter, or IBC mode. Video blocks in an intra-coded (I) slice of a video frame are encoded using spatial prediction with respect to reference samples in neighboring blocks within the same video frame. Video blocks in an inter-coded (P or B) slice of a video frame may use spatial prediction with respect to reference samples in neighboring blocks within the same video frame or temporal prediction with respect to reference samples in other previous and/or future reference video frames.
The prediction block for the current video block to be encoded is derived based on spatial prediction or temporal prediction of a reference block (e.g., a neighboring block) that has been previously encoded. The process of finding the reference block may be accomplished by a block matching algorithm. Residual data representing pixel differences between the current block to be encoded and the prediction block is referred to as a residual block or prediction error. The inter-coded block is encoded according to the residual block and a motion vector pointing to a reference block forming a prediction block in a reference frame. The process of determining motion vectors is commonly referred to as motion estimation. And encoding the intra-coded block according to the intra-frame prediction mode and the residual block. For further compression, the residual block is transformed from the pixel domain to a transform domain (e.g., frequency domain), resulting in residual transform coefficients, which may then be quantized. The quantized transform coefficients, initially arranged in a two-dimensional array, may be scanned to produce one-dimensional vectors of transform coefficients, and then entropy encoded into a video bitstream to achieve even greater compression.
The encoded video bitstream is then saved in a computer-readable storage medium (e.g., flash memory) to be accessed by another electronic device having digital video capabilities, or transmitted directly to the electronic device over a wired or wireless connection. The electronic device then performs video decompression (which is the inverse of the video compression described above), e.g., by parsing the encoded video bitstream to obtain syntax elements from the bitstream and reconstructing the digital video data to its original format based at least in part on the syntax elements obtained from the bitstream, and renders the reconstructed digital video data on a display of the electronic device.
As the digital video quality changes from high definition to 4K × 2K or even 8K × 4K, the amount of video data to be encoded/decoded grows exponentially. It is a long-standing challenge how to encode/decode video data more efficiently while maintaining the image quality of the decoded video data.
At a meeting of the Joint Video Experts Team (JVET), JVET defined the first draft of Versatile Video Coding (VVC) and the VVC Test Model 1 (VTM1) encoding method. The decision included using a quadtree with a nested multi-type tree of binary and ternary splits as the coding block structure, as an initial new coding feature of VVC. Since then, the reference software VTM implementing the encoding method and the draft VVC decoding process have been developed during the JVET meetings.
Disclosure of Invention
In general, this disclosure describes examples of techniques related to motion compensated prediction using geometrically shaped prediction units in video coding.
According to a first aspect of the present disclosure, there is provided a method for video coding with geometric prediction, including: partitioning a video picture into a plurality of Coding Units (CUs), wherein at least one CU of the plurality of CUs is further partitioned into two Prediction Units (PUs) including at least one geometrically shaped PU; constructing a first merge list including a plurality of candidates based on a merge list construction process for regular merge prediction, wherein each candidate of the plurality of candidates is a motion vector including a list 0 motion vector, or a list 1 motion vector, or both; receiving a signaled first index value indicating a first candidate selected from the first merge list; receiving a signaled second index value indicating a second candidate selected from the first merge list; receiving a signaled first binary flag indicating whether to select the list 0 motion vector or the list 1 motion vector of the first candidate for a first PU of the geometric prediction; and inferring, based on the first binary flag and on whether the current picture uses backward prediction, a second binary flag indicating whether to select the list 0 motion vector or the list 1 motion vector of the second candidate for a second PU of the geometric prediction.
According to a second aspect of the present disclosure, there is provided a method for video coding with geometric prediction, including: partitioning a video picture into a plurality of Coding Units (CUs), wherein at least one CU of the plurality of CUs is further partitioned into two Prediction Units (PUs) including at least one geometrically shaped PU; constructing a first merge list including a plurality of candidates based on a merge list construction process for regular merge prediction, wherein each candidate of the plurality of candidates is a motion vector including a list 0 motion vector, or a list 1 motion vector, or both; receiving a signaled first index value indicating a first candidate selected from the first merge list; receiving a signaled second index value indicating a second candidate selected from the first merge list; inferring whether to select the list 0 motion vector or the list 1 motion vector of the first candidate for a first PU of the geometric prediction; and inferring whether to select the list 0 motion vector or the list 1 motion vector of the second candidate for a second PU of the geometric prediction.
According to a third aspect of the present disclosure, there is provided an apparatus for video coding with geometric prediction, including: one or more processors; and a memory configured to store instructions executable by the one or more processors; wherein the one or more processors, upon execution of the instructions, are configured to: partition a video picture into a plurality of Coding Units (CUs), wherein at least one CU of the plurality of CUs is further partitioned into two Prediction Units (PUs) including at least one geometrically shaped PU; construct a first merge list including a plurality of candidates based on a merge list construction process for regular merge prediction, wherein each candidate of the plurality of candidates is a motion vector including a list 0 motion vector, or a list 1 motion vector, or both; receive a signaled first index value indicating a first candidate selected from the first merge list; receive a signaled second index value indicating a second candidate selected from the first merge list; receive a signaled first binary flag indicating whether to select the list 0 motion vector or the list 1 motion vector of the first candidate for a first PU of the geometric prediction; and infer, based on the first binary flag and on whether the current picture uses backward prediction, a second binary flag indicating whether to select the list 0 motion vector or the list 1 motion vector of the second candidate for a second PU of the geometric prediction.
Drawings
A more detailed description of examples of the present disclosure is provided by reference to specific examples illustrated in the appended drawings. Given that these drawings depict only some examples and are therefore not to be considered limiting in scope, the examples will be described and explained with additional specificity and detail through use of the accompanying drawings.
Fig. 1 is a block diagram illustrating an exemplary video encoder according to some embodiments of the present disclosure.
Fig. 2 is a block diagram illustrating an exemplary video decoder according to some embodiments of the present disclosure.
Fig. 3 is a schematic diagram illustrating a quadtree plus binary tree (QTBT) structure, according to some embodiments of the present disclosure.
Fig. 4 is a schematic diagram illustrating an example of a picture divided into CTUs according to some embodiments of the present disclosure.
FIG. 5 is a schematic diagram illustrating a multi-type tree partitioning scheme according to some embodiments of the present disclosure.
Fig. 6 is a schematic diagram illustrating partitioning of a CU into triangle prediction units, according to some embodiments of the present disclosure.
Fig. 7 is a schematic diagram illustrating the location of neighboring blocks according to some embodiments of the present disclosure.
Fig. 8 is a schematic diagram illustrating locations of spatial merge candidates according to some embodiments of the present disclosure.
Fig. 9 is a schematic diagram illustrating motion vector scaling of temporal merging candidates according to some embodiments of the present disclosure.
Fig. 10 is a schematic diagram illustrating candidate locations of temporal merging candidates according to some embodiments of the present disclosure.
Fig. 11A is a schematic diagram illustrating one example of uni-directional prediction Motion Vector (MV) selection for a triangle prediction mode according to some embodiments of the present disclosure.
Fig. 11B is a schematic diagram illustrating another example of uni-directional prediction Motion Vector (MV) selection for a triangle prediction mode according to some embodiments of the present disclosure.
Fig. 12A is a schematic diagram illustrating one example of uni-directional predictive MV selection for triangle prediction mode according to some embodiments of the present disclosure.
Fig. 12B is a schematic diagram illustrating another example of uni-directional predictive MV selection for triangle prediction mode, according to some embodiments of the present disclosure.
Fig. 12C is a schematic diagram illustrating another example of uni-directional predictive MV selection for triangle prediction mode, according to some embodiments of the present disclosure.
Fig. 12D is a schematic diagram illustrating another example of uni-directional predictive MV selection for triangle prediction mode, according to some embodiments of the present disclosure.
Fig. 13 is a schematic diagram illustrating an example of uni-directional predictive MV selection for triangle prediction mode according to some embodiments of the present disclosure.
Fig. 14 is a block diagram illustrating an example apparatus for video codec according to some embodiments of the present disclosure.
Fig. 15 is a flow diagram illustrating an exemplary process for video coding using motion compensated prediction of geometric prediction units according to some embodiments of the present disclosure.
Detailed Description
Reference will now be made in detail to the present embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous non-limiting specific details are set forth in order to provide an understanding of the subject matter presented herein. It will be apparent to those of ordinary skill in the art that various alternatives may be used. For example, it will be apparent to one of ordinary skill in the art that the subject matter presented herein may be implemented on many types of electronic devices having digital video capabilities.
Reference throughout this specification to "one embodiment," "an example," "some embodiments," "some examples," or similar language means that a particular feature, structure, or characteristic described is included in at least one embodiment or example. Features, structures, elements, or characteristics described in connection with one or some embodiments may be applicable to other embodiments as well, unless expressly stated otherwise.
Throughout the disclosure, unless explicitly stated otherwise, the terms "first," "second," "third," and the like are used merely as labels to refer to relevant elements (e.g., devices, components, compositions, steps, etc.) and do not indicate any spatial or temporal order. For example, "first device" and "second device" may refer to two separately formed devices or two parts, components, or operating states of the same device, and may be arbitrarily named.
As used herein, the terms "if" and "when" may be understood to mean "upon" or "in response to," depending on the context. If these terms appear in the claims, they may not indicate that the associated limitation or feature is conditional or optional.
The terms "module," "sub-module," "circuit," "sub-circuit," "circuitry," "sub-circuitry," "unit" or "subunit" may include memory (shared, dedicated, or combined) that stores code or instructions executable by one or more processors. A module may comprise one or more circuits, with or without stored code or instructions. A module or circuit may include one or more components connected directly or indirectly. These components may or may not be physically attached to each other or positioned adjacent to each other.
The units or modules may be implemented purely in software, purely in hardware or in a combination of hardware and software. In a purely software implementation, a unit or module may comprise functionally related code blocks or software components linked together, directly or indirectly, for performing specific functions, for example.
Fig. 1 is a block diagram illustrating an exemplary block-based hybrid video encoder 100 that may be used in connection with many video coding standards using block-based processing. In the encoder 100, a video frame is partitioned into a plurality of video blocks for processing. For each given video block, a prediction is formed based on either an inter prediction method or an intra prediction method. In inter prediction, one or more predictors are formed through motion estimation and motion compensation based on pixels from previously reconstructed frames. In intra prediction, predictors are formed based on reconstructed pixels in the current frame. Through mode decision, the best predictor may be selected to predict the current block.
The prediction residual, which represents the difference between the current video block and its predictor, is sent to the transform circuit 102. The transform coefficients are then sent from transform circuit 102 to quantization circuit 104 for entropy reduction. The quantized coefficients are then fed to an entropy coding circuit 106 to generate a compressed video bitstream. As shown in fig. 1, prediction related information 110 (such as video block partitioning information, motion vectors, reference picture indices, and intra prediction modes) from inter prediction circuitry and/or intra prediction circuitry 112 is also fed through entropy encoding circuitry 106 and saved into a compressed video bitstream 114.
In the encoder 100, decoder-related circuitry is also needed in order to reconstruct pixels for prediction purposes. First, the prediction residual is reconstructed through inverse quantization 116 and inverse transform circuit 118. This reconstructed prediction residual is combined with the block predictor 120 to generate unfiltered reconstructed pixels for the current video block.
Spatial prediction (or "intra prediction") uses pixels from already coded samples (called reference samples) of neighboring blocks in the same video frame as the current video block to predict the current video block.
Temporal prediction (also referred to as "inter prediction") uses reconstructed pixels from already-coded video pictures to predict the current video block. Temporal prediction reduces the temporal redundancy inherent in video signals. The temporal prediction signal for a given Coding Unit (CU) or coding block is usually signaled by one or more Motion Vectors (MVs), which indicate the amount and direction of motion between the current CU and its temporal reference. Furthermore, if multiple reference pictures are supported, a reference picture index is additionally sent to identify from which reference picture in the reference picture memory the temporal prediction signal comes.
After spatial and/or temporal prediction is performed, the intra/inter mode decision circuit 121 in the encoder 100 selects the best prediction mode, e.g., based on a rate-distortion optimization method. The block predictor 120 is then subtracted from the current video block, and the resulting prediction residual is decorrelated using the transform circuit 102 and the quantization circuit 104. The resulting quantized residual coefficients are dequantized by the dequantization circuit 116 and inverse transformed by the inverse transform circuit 118 to form the reconstructed residual, which is then added back to the prediction block to form the reconstructed signal of the CU. A loop filter 115, such as a deblocking filter, Sample Adaptive Offset (SAO), and/or Adaptive Loop Filter (ALF), may further be applied to the reconstructed CU before the reconstructed CU is placed in the reference picture memory of the picture buffer 117 and used to code subsequent video blocks. To form the output video bitstream 114, the coding mode (inter or intra), prediction mode information, motion information, and quantized residual coefficients are all sent to the entropy coding unit 106 to be further compressed and packed into the bitstream.
For example, deblocking filters are available in current versions of AVC, HEVC, and VVC. In HEVC, an additional loop filter definition, known as Sample Adaptive Offset (SAO), is used to further improve coding efficiency. In the current version of the VVC standard, an additional loop filter called an Adaptive Loop Filter (ALF) is being actively studied, and it is highly likely to be included in the final standard.
These loop filter operations are optional. Performing these operations helps to improve coding efficiency and visual quality. They may also be turned off based on the decisions presented by the encoder 100 to save computational complexity.
It should be noted that intra-prediction is typically based on unfiltered reconstructed pixels, while inter-prediction is based on filtered reconstructed pixels (in case the encoder 100 turns on these filter options).
Fig. 2 is a block diagram illustrating an exemplary block-based video decoder 200 that may be used in connection with many video coding standards. The decoder 200 is similar to the reconstruction-related parts residing in the encoder 100 of fig. 1. In the decoder 200, an input video bitstream 201 is first decoded through entropy decoding 202 to derive quantized coefficient levels and prediction-related information. The quantized coefficient levels are then processed through inverse quantization 204 and inverse transform 206 to obtain the reconstructed prediction residual. A block predictor mechanism implemented in the intra/inter mode selector 212 is configured to perform either intra prediction 208 or motion compensation 210 based on the decoded prediction information. A set of unfiltered reconstructed pixels is obtained by summing the reconstructed prediction residual from the inverse transform 206 and the prediction output generated by the block predictor mechanism, using the summer 214.
The reconstructed block may further go through a loop filter 209 before it is stored in the picture buffer 213, which serves as the reference picture memory. The reconstructed video in the picture buffer 213 may be sent to drive a display device and used to predict subsequent video blocks. With the loop filter 209 turned on, a filtering operation is performed on these reconstructed pixels to derive the final reconstructed video output 222.
The video encoding/decoding standards mentioned above, such as VVC, JEM, HEVC, and MPEG-4 Part 10, are conceptually similar. For example, they all use block-based processing. Block partitioning schemes in some standards are set forth below.
HEVC is based on a hybrid block-based motion-compensated transform coding architecture. The basic unit for compression is termed a Coding Tree Unit (CTU). For the 4:2:0 chroma format, the maximum CTU size is defined as up to 64 by 64 luma samples and two corresponding 32 by 32 blocks of chroma samples. Each CTU may contain one Coding Unit (CU) or be recursively split into four smaller CUs until the predefined minimum CU size is reached. Each CU (also termed a leaf CU) contains one or more Prediction Units (PUs) and a tree of Transform Units (TUs).
In general, except for monochrome content, a CTU may include one luma Coding Tree Block (CTB) and two corresponding chroma CTBs; a CU may include one luma Coding Block (CB) and two corresponding chroma CBs; a PU may include one luma Prediction Block (PB) and two corresponding chroma PBs; and a TU may include one luma Transform Block (TB) and two corresponding chroma TBs. However, exceptions may occur because the minimum TB size is 4 × 4 for both luma and chroma (i.e., 2 × 2 chroma TBs are not supported for the 4:2:0 color format), and each intra chroma CB always has only one intra chroma PB regardless of the number of intra luma PBs in the corresponding intra luma CB.
For an intra CU, the luma CB may be predicted by one or four luma PBs, and each of the two chroma CBs is always predicted by one chroma PB, where each luma PB has one intra luma prediction mode and the two chroma PBs share one intra chroma prediction mode. Moreover, for an intra CU, the TB size cannot be larger than the PB size. In each PB, intra prediction is applied to predict the samples of each TB inside the PB from neighboring reconstructed samples of the TB. For each PB, in addition to 33 directional intra prediction modes, a DC mode and a planar mode are also supported to predict flat regions and gradually varying regions, respectively.
For each inter PU, one of three prediction modes, including inter, skip, and merge, may be selected. In general, a Motion Vector Competition (MVC) scheme is introduced to select a motion candidate from a given candidate set that includes spatial and temporal motion candidates. Multiple references for motion estimation allow the best reference to be found among two possible reconstructed reference picture lists (namely, list 0 and list 1). For the inter mode (termed AMVP mode, where AMVP stands for advanced motion vector prediction), an inter prediction indicator (list 0, list 1, or bi-prediction), a reference index, a motion candidate index, a Motion Vector Difference (MVD), and the prediction residual are transmitted. For the skip mode and the merge mode, only a merge index is transmitted, and the current PU inherits the inter prediction indicator, the reference index, and the motion vector from a neighboring PU referenced by the coded merge index. In the case of a skip-coded CU, the residual signal is also omitted.
A joint exploration test model (JEM) is built on top of the HEVC test model. The basic encoding and decoding flow of HEVC remains unchanged in JEM; however, the design elements of the most important modules (including the modules of block structure, intra and inter prediction, residual transformation, loop filter and entropy coding) are slightly modified and additional coding tools are added. The JEM includes the following new codec features.
In HEVC, CTUs are partitioned into CUs by using a quadtree structure represented as a coding tree to accommodate various local characteristics. The decision whether to codec a picture region using inter-picture (temporal) prediction or intra-picture (spatial) prediction is made at the CU level. Each CU may be further divided into one, two, or four PUs according to PU division types. Within one PU, the same prediction process is applied and the relevant information is sent to the decoder on a PU basis. After obtaining the residual block by applying a prediction process based on the PU partition type, the CU may be partitioned into Transform Units (TUs) according to another quadtree structure of the CU similar to a coding tree. One of the key features of the HEVC structure is that it has multiple partitioning concepts including CU, PU and TU.
Fig. 3 is a schematic diagram illustrating a quadtree plus binary tree (QTBT) structure, according to some embodiments of the present disclosure.
The QTBT structure removes the concept of multiple partition types, i.e., it removes the separation of the CU, PU, and TU concepts and supports more flexibility for CU partition shapes. In the QTBT block structure, a CU may have either a square or a rectangular shape. As shown in fig. 3, a Coding Tree Unit (CTU) is first partitioned by a quadtree structure. The quadtree leaf nodes may be further partitioned by a binary tree structure. There are two split types in binary tree splitting: symmetric horizontal splitting and symmetric vertical splitting. The binary tree leaf nodes are called Coding Units (CUs), and that segmentation is used for prediction and transform processing without any further partitioning. This means that the CU, PU, and TU have the same block size in the QTBT coding block structure. In JEM, a CU sometimes consists of Coding Blocks (CBs) of different color components, e.g., in the case of P and B slices of the 4:2:0 chroma format, one CU contains one luma CB and two chroma CBs; and a CU sometimes consists of a CB of a single component, e.g., in the case of I slices, one CU contains only one luma CB or only two chroma CBs.
The following parameters are defined for the QTBT segmentation scheme.
-CTU size: the root node size of the quadtree, the same as the concept in HEVC;
-MinQTSize: allowed minimum quadtree leaf node size;
-MaxBTSize: the allowed maximum binary tree root node size;
-MaxBTDepth: maximum allowed binary tree depth;
-MinBTSize: allowed minimum binary tree leaf node size.
In one example of the QTBT partitioning structure, the CTU size is set to 128 × 128 luma samples with two corresponding 64 × 64 blocks of chroma samples (with the 4:2:0 chroma format), MinQTSize is set to 16 × 16, MaxBTSize is set to 64 × 64, MinBTSize (for both width and height) is set to 4 × 4, and MaxBTDepth is set to 4. Quadtree partitioning is applied to the CTU first to generate quadtree leaf nodes. The quadtree leaf nodes may have a size from 16 × 16 (i.e., MinQTSize) to 128 × 128 (i.e., the CTU size). If a quadtree leaf node is 128 × 128, it will not be further split by the binary tree, since its size exceeds MaxBTSize (i.e., 64 × 64). Otherwise, the quadtree leaf node may be further partitioned by the binary tree. Therefore, a quadtree leaf node is also the root node for the binary tree, with a binary tree depth of 0. When the binary tree depth reaches MaxBTDepth (i.e., 4), no further splitting is considered. When a binary tree node has a width equal to MinBTSize (i.e., 4), no further horizontal splitting is considered. Similarly, when a binary tree node has a height equal to MinBTSize, no further vertical splitting is considered. The leaf nodes of the binary tree are further processed by the prediction and transform processing without any further partitioning. In JEM, the maximum CTU size is 256 × 256 luma samples.
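As a concrete illustration of these constraints, the following sketch (with hypothetical helper names and simplified rules, not code from the JEM reference software) checks which further splits the above parameters permit for a given node:

```python
# Illustrative sketch of the QTBT split rules described above. All names are
# hypothetical; this is not the JEM/VTM reference implementation.

CTU_SIZE = 128      # root quadtree node size (luma samples)
MIN_QT_SIZE = 16    # minimum allowed quadtree leaf node size
MAX_BT_SIZE = 64    # maximum allowed binary tree root node size
MAX_BT_DEPTH = 4    # maximum allowed binary tree depth
MIN_BT_SIZE = 4     # minimum allowed binary tree leaf node size

def allowed_splits(width, height, bt_depth, is_qt_node):
    """Return the set of splits the QTBT parameters permit for this node."""
    splits = set()
    # Quadtree splitting continues only above the minimum quadtree leaf size,
    # and only while the node is still part of the quadtree.
    if is_qt_node and width == height and width > MIN_QT_SIZE:
        splits.add("QT")
    # A node may be split by the binary tree only if it does not exceed
    # MaxBTSize and the maximum binary tree depth has not been reached.
    if width <= MAX_BT_SIZE and height <= MAX_BT_SIZE and bt_depth < MAX_BT_DEPTH:
        if width > MIN_BT_SIZE:
            splits.add("BT_VERTICAL")    # halve the width
        if height > MIN_BT_SIZE:
            splits.add("BT_HORIZONTAL")  # halve the height
    return splits

# A 128 x 128 quadtree leaf exceeds MaxBTSize, so only quadtree splitting is allowed.
print(allowed_splits(128, 128, 0, True))  # only 'QT'
print(allowed_splits(64, 64, 0, True))    # 'QT' plus both binary tree splits
```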
An example of block partitioning by using the QTBT scheme, and the corresponding tree representation, is shown in fig. 3. The solid lines indicate quadtree splitting and the dashed lines indicate binary tree splitting. As shown in fig. 3, the Coding Tree Unit (CTU) 300 is first partitioned by a quadtree structure, and three of the four quadtree leaf nodes 302, 304, 306, 308 are further partitioned by either a quadtree structure or a binary tree structure. For example, the quadtree leaf node 306 is further partitioned by quadtree splitting; the quadtree leaf node 304 is further partitioned into two leaf nodes 304a, 304b by binary tree splitting; and the quadtree leaf node 302 is also further partitioned by binary tree splitting. In each splitting (i.e., non-leaf) node of the binary tree, one flag is signaled to indicate which splitting type (i.e., horizontal or vertical) is used, where 0 indicates horizontal splitting and 1 indicates vertical splitting. For example, for the quadtree leaf node 304, 0 is signaled to indicate horizontal splitting, and for the quadtree leaf node 302, 1 is signaled to indicate vertical splitting. For the quadtree splitting, there is no need to indicate the splitting type, since quadtree splitting always splits a block both horizontally and vertically to produce 4 sub-blocks of equal size.
In addition, the QTBT scheme supports the ability for the luma and chroma to have separate QTBT structures. Currently, for P and B slices, the luma and chroma CTBs in one CTU share the same QTBT structure. However, for an I slice, the luma CTB is partitioned into CUs by a QTBT structure, and the chroma CTBs are partitioned into chroma CUs by another QTBT structure. This means that a CU in an I slice consists of a coding block of the luma component or coding blocks of the two chroma components, and a CU in a P or B slice consists of coding blocks of all three color components.
At a meeting of the Joint Video Experts Team (JVET), JVET defined the first draft of Versatile Video Coding (VVC) and the VVC Test Model 1 (VTM1) encoding method. The decision included using a quadtree with a nested multi-type tree of binary and ternary splits as the coding block structure, as an initial new coding feature of VVC.
In VVC, the picture partitioning structure divides the input video into blocks called Coding Tree Units (CTUs). A CTU is split into Coding Units (CUs) using a quadtree with a nested multi-type tree structure, with a leaf Coding Unit (CU) defining a region sharing the same prediction mode (e.g., intra or inter). Here, the term "unit" defines a region of an image covering all components; the term "block" is used to define a region covering a particular component (e.g., luma), and the blocks of different components may differ in spatial location when a chroma sampling format such as 4:2:0 is considered.
Segmenting pictures into CTUs
Fig. 4 is a schematic diagram illustrating an example of a picture divided into CTUs according to some embodiments of the present disclosure.
In VVC, pictures are divided into CTU sequences, and the CTU concept is the same as that of HEVC. For a picture with three sample arrays, the CTU consists of a block of N × N luma samples and two corresponding blocks of chroma samples. Fig. 4 shows an example of a picture 400 divided into CTUs 402.
The maximum allowable size of a luminance block in the CTU is designated as 128 × 128 (although the maximum size of a luminance transform block is 64 × 64).
Partitioning CTUs using tree structures
FIG. 5 is a schematic diagram illustrating a multi-type tree partitioning scheme according to some embodiments of the present disclosure.
In HEVC, the CTUs are divided into CUs by using a quadtree structure represented as a coding tree to accommodate various local characteristics. The decision whether to codec a picture region using inter-picture (temporal) prediction or intra-picture (spatial) prediction is made at the leaf-CU level. Each leaf-CU may be further divided into one, two, or four PUs according to the PU division type. Within one PU, the same prediction process is applied and the relevant information is sent to the decoder on a PU basis. After obtaining the residual block by applying a prediction process based on the PU partition type, the leaf-CU may be partitioned into Transform Units (TUs) according to another quadtree structure of the CU similar to a coding tree. One of the key features of the HEVC structure is that it has multiple partitioning concepts including CU, PU and TU.
In VVC, a quadtree with a nested multi-type tree using binary and ternary splits replaces the concept of multiple partition unit types, i.e., it removes the separation of the CU, PU, and TU concepts (except for CUs whose size is too large for the maximum transform length) and supports more flexibility for CU partition shapes. In the coding tree structure, a CU may have either a square or a rectangular shape. A Coding Tree Unit (CTU) is first partitioned by a quadtree structure. The quadtree leaf nodes may then be further partitioned by a multi-type tree structure. As shown in fig. 5, there are four splitting types in the multi-type tree structure: vertical binary splitting 502 (SPLIT_BT_VER), horizontal binary splitting 504 (SPLIT_BT_HOR), vertical ternary splitting 506 (SPLIT_TT_VER), and horizontal ternary splitting 508 (SPLIT_TT_HOR). The multi-type tree leaf nodes are called Coding Units (CUs), and unless a CU is too large for the maximum transform length, this segmentation is used for prediction and transform processing without any further partitioning. This means that, in most cases, the CU, PU, and TU have the same block size in the quadtree with nested multi-type tree coding block structure. An exception occurs when the maximum supported transform length is smaller than the width or height of the color component of the CU. In VTM1, a CU consists of Coding Blocks (CBs) of different color components; e.g., one CU contains one luma CB and two chroma CBs (unless the video is monochrome, i.e., has only one color component).
Partitioning a CU into multiple prediction units
In VVC, for each CU partitioned based on the structure illustrated above, prediction of the block content may be performed either on the entire CU block or in a sub-block manner as explained in the following paragraphs. The operating unit of such prediction is called a prediction unit (or PU).
In the case of intra prediction, the size of a PU is typically equal to the size of the CU. In other words, prediction is performed on the entire CU block. For inter prediction, the size of a PU may be equal to or smaller than the size of the CU. In other words, there are cases where a CU may be split into multiple PUs for prediction.
Some examples of a PU size being smaller than the CU size include the affine prediction mode, the advanced temporal motion vector prediction (ATMVP) mode, the triangle prediction mode, and so on.
In the affine prediction mode, a CU may be split into multiple 4 × 4 PUs for prediction. Motion vectors may be derived for each 4 × 4 PU, and motion compensation may be performed on the 4 × 4 PU accordingly. In the ATMVP mode, a CU may be split into one or multiple 8 × 8 PUs for prediction. Motion vectors are derived for each 8 × 8 PU, and motion compensation may be performed on the 8 × 8 PU accordingly. In the triangle prediction mode, a CU may be split into two triangle-shaped prediction units. Motion vectors are derived for each PU, and motion compensation is performed accordingly. The triangle prediction mode is supported for inter prediction. More details of the triangle prediction mode are set forth below.
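As a small illustration of such sub-block partitioning, the sketch below (with hypothetical names, not reference software code) enumerates the sub-PU positions inside a CU for an assumed sub-PU size:

```python
# Illustrative sketch: enumerate the top-left positions of fixed-size sub-PUs
# inside a CU. Helper name and interface are assumptions for illustration.

def sub_pu_grid(cu_width, cu_height, pu_size):
    """Return (x, y) top-left positions of pu_size x pu_size PUs inside a CU."""
    return [(x, y) for y in range(0, cu_height, pu_size)
                   for x in range(0, cu_width, pu_size)]

# Per the paragraph above: affine mode uses 4 x 4 PUs, ATMVP uses 8 x 8 PUs.
print(len(sub_pu_grid(16, 16, 4)))  # 16 sub-PUs for affine
print(len(sub_pu_grid(16, 16, 8)))  # 4 sub-PUs for ATMVP
```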
Triangle prediction mode (or triangle division mode)
Fig. 6 is a schematic diagram illustrating partitioning of a CU into triangle prediction units, according to some embodiments of the present disclosure.
The concept of the triangle prediction mode is to introduce triangular partitions for motion-compensated prediction. The triangle prediction mode may also be called the triangle prediction unit mode or the triangle partition mode. As shown in fig. 6, a CU 602 or 604 is split into two triangle prediction units PU1 and PU2, in either the diagonal or the anti-diagonal direction (i.e., splitting from the top-left corner to the bottom-right corner as shown in CU 602, or splitting from the top-right corner to the bottom-left corner as shown in CU 604). Each triangle prediction unit in the CU is inter-predicted using its own uni-directional prediction motion vector and reference frame index, which are derived from a uni-directional prediction candidate list. After the triangle prediction units are predicted, an adaptive weighting process is performed on the diagonal edge. Then, the transform and quantization process is applied to the entire CU. It is noted that this mode is only applied to the skip mode and the merge mode in the current VVC. Although in fig. 6 the CU is shown as a square block, the triangle prediction mode may also be applied to non-square (i.e., rectangular) CUs.
The unidirectional prediction candidate list may include one or more candidates, and each candidate may be a motion vector. Thus, throughout this disclosure, the terms "uni-directional prediction candidate list", "uni-directional prediction motion vector candidate list", and "uni-directional prediction merge list" may be used interchangeably; and the terms "uni-directionally predicted merge candidate" and "uni-directionally predicted motion vector" may also be used interchangeably.
Uni-directional prediction motion vector candidate list
Fig. 7 is a schematic diagram illustrating the location of neighboring blocks according to some embodiments of the present disclosure.
In some examples, the uni-directional prediction motion vector candidate list may include two to five uni-directional prediction motion vector candidates. In some other examples, other numbers may be possible. The list is derived from neighboring blocks. As shown in fig. 7, the uni-directional prediction motion vector candidate list is derived from seven neighboring blocks, including five spatially neighboring blocks (1 to 5) and two temporally co-located blocks (6 to 7). The motion vectors of the seven neighboring blocks are collected into a first merge list. Then, the uni-directional prediction candidate list is formed based on the motion vectors in the first merge list according to a specific order. Based on the order, the uni-directional prediction motion vectors from the first merge list are placed in the uni-directional prediction motion vector candidate list first, followed by the reference picture list 0 (or L0) motion vectors of the bi-directional prediction motion vectors, followed by the reference picture list 1 (or L1) motion vectors of the bi-directional prediction motion vectors, followed by the averages of the L0 and L1 motion vectors of the bi-directional prediction motion vectors. At that point, if the number of candidates is still smaller than the target number (which is 5 in the current VVC), zero motion vectors are added to the list to meet the target number.
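The following sketch illustrates this ordering under an assumed candidate representation (a dictionary holding an 'L0' motion vector, an 'L1' motion vector, or both). It is a simplified illustration, not the VVC reference implementation; the pruning discussed later in this disclosure is omitted:

```python
# Hypothetical sketch of the candidate ordering described above. Motion
# vectors are (x, y) integer tuples; names and rounding are assumptions.

def build_uni_pred_list(first_merge_list, target_num=5):
    """first_merge_list: candidates, each a dict with 'L0' and/or 'L1' MVs."""
    uni_list = []
    # 1) uni-directional prediction motion vectors from the first merge list
    for cand in first_merge_list:
        if ('L0' in cand) != ('L1' in cand):
            uni_list.append(cand.get('L0', cand.get('L1')))
    # 2) L0 motion vectors of the bi-directional prediction candidates
    for cand in first_merge_list:
        if 'L0' in cand and 'L1' in cand:
            uni_list.append(cand['L0'])
    # 3) L1 motion vectors of the bi-directional prediction candidates
    for cand in first_merge_list:
        if 'L0' in cand and 'L1' in cand:
            uni_list.append(cand['L1'])
    # 4) averages of the L0 and L1 motion vectors of the bi-directional candidates
    for cand in first_merge_list:
        if 'L0' in cand and 'L1' in cand:
            uni_list.append(((cand['L0'][0] + cand['L1'][0]) // 2,
                             (cand['L0'][1] + cand['L1'][1]) // 2))
    # 5) pad with zero motion vectors up to the target number (5 in current VVC)
    while len(uni_list) < target_num:
        uni_list.append((0, 0))
    return uni_list[:target_num]
```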
A predictor is derived for each triangular PU based on its motion vector. Notably, the derived predictors cover a larger area than the actual triangle PU, such that there is an overlapping area of the two predictors along the shared diagonal edge of the two triangle PUs. A weighting process is applied to the diagonal edge region between the two predictors to derive the final prediction of the CU. The weighting factors currently used for the luma and chroma samples are {7/8,6/8,5/8,4/8,3/8,2/8,1/8} and {6/8,4/8,2/8}, respectively.
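A minimal sketch of this weighting for luma samples is given below, assuming the weight set {7/8, ..., 1/8} above. The exact mapping of weights to sample positions relative to the diagonal is an assumption made here for illustration, not the normative blending of the VVC specification:

```python
# Illustrative sketch: blend two triangle-PU predictors along the
# top-left-to-bottom-right diagonal of a square block. Chroma would use the
# shorter weight set {6/8, 4/8, 2/8} analogously (not shown).

def blend_diagonal(p1, p2, size):
    """p1, p2: size x size predictors (lists of lists) for the two PUs."""
    weights = [7, 6, 5, 4, 3, 2, 1]  # weight of p1, in eighths
    out = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            d = x - y  # signed distance from the diagonal (assumed metric)
            if -3 <= d <= 3:
                w1 = weights[3 - d]  # 7/8 nearest the p1 side, 1/8 nearest p2
                out[y][x] = (w1 * p1[y][x] + (8 - w1) * p2[y][x] + 4) >> 3
            elif d > 3:
                out[y][x] = p1[y][x]  # pure PU1 region
            else:
                out[y][x] = p2[y][x]  # pure PU2 region
    return out
```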
Triangle prediction mode semantics and signaling
Here, the triangle prediction mode is signaled using a triangle prediction flag. When a CU is coded in skip mode or merge mode, a triangle prediction flag is signaled. For a given CU, if the value of the triangle prediction flag is 1, it indicates that the corresponding CU is coded using the triangle prediction mode. Otherwise, the CU is coded using a prediction mode other than the triangle prediction mode.
For example, the triangle prediction flag is conditionally signaled in the skip mode or the merge mode. First, a triangle prediction tool enabling/disabling flag is signaled in the sequence parameter set (or SPS). Only when this flag is true is the triangle prediction flag signaled at the CU level. Second, the triangle prediction tool is only allowed in B slices. Therefore, the triangle prediction flag is signaled at the CU level only in B slices. Third, the triangle prediction mode is signaled only for a CU whose size is equal to or larger than a certain threshold. If the size of the CU is smaller than that threshold, the triangle prediction flag is not signaled. Fourth, the triangle prediction flag is signaled for a CU only if the CU is not coded in the sub-block merge mode, which includes both the affine mode and the ATMVP mode. In the four cases listed above, when the triangle prediction flag is not signaled, it is inferred to be 0 at the decoder side.
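The four signaling conditions can be summarized as a single predicate, sketched below with hypothetical parameter names (e.g., how the CU size is measured against the threshold is left abstract here):

```python
# Illustrative sketch of the four conditions above. When the predicate is
# false, the flag is not signaled and is inferred to be 0 at the decoder.

def triangle_flag_signaled(sps_enabled, is_b_slice, cu_size, size_threshold,
                           is_subblock_merge):
    return (sps_enabled            # 1) tool enabled in the SPS
            and is_b_slice         # 2) only allowed in B slices
            and cu_size >= size_threshold  # 3) CU large enough
            and not is_subblock_merge)     # 4) not affine / ATMVP merge
```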
When the triangle prediction flag is signaled, it is signaled with certain context using the Context-Adaptive Binary Arithmetic Coding (CABAC) entropy coder. The context is formed based on the triangle prediction flag values of the top block and the left block of the current CU.
To code (i.e., encode or decode) the triangle prediction flag of the current block (or current CU), the triangle prediction flags from both the top block (or CU) and the left block (or CU) are derived and their values are summed. This results in three possible contexts, corresponding to the following cases:
1) both the left block and the top block have a triangle prediction flag of 0;
2) both the left block and the top block have a triangle prediction flag of 1;
3) otherwise.
Separate probabilities are maintained for each of the three contexts. Once the context value is determined for the current block, the triangle prediction flag of the current block is coded using the CABAC probability model corresponding to that context value.
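A sketch of this context derivation is given below; treating an unavailable neighbor as having a flag of 0 is an assumption made here for illustration:

```python
# Illustrative sketch: the context index is the sum of the two neighboring
# triangle prediction flags, which yields exactly the three cases above
# (0: both flags 0; 2: both flags 1; 1: otherwise).

def triangle_flag_context(left_flag, top_flag):
    """left_flag/top_flag: triangle prediction flag (0 or 1) of the left/top
    block, taken as 0 when the neighbor is unavailable (an assumption)."""
    return left_flag + top_flag  # 0, 1, or 2 -> three CABAC probability models
```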
If the triangle prediction flag is true, a triangle split direction flag is signaled to indicate whether the split direction is from top left to bottom right or top right to bottom left.
Then, two merge index values are signaled to indicate the merge index values of the first and second uni-directional prediction merge candidates, respectively, used for triangle prediction. These two merge index values are used to locate two merge candidates from the uni-directional prediction motion vector candidate list, for the first and second partitions, respectively. For triangle prediction, the two merge index values are required to be different, so that the two predictors for the two triangle partitions can differ from each other. As a result, the first merge index value is signaled directly. For the second merge index value, if it is smaller than the first merge index value, it is signaled directly. Otherwise, 1 is subtracted from it before it is signaled to the decoder. At the decoder side, the first merge index is decoded and used directly. To decode the second merge index value, a value denoted as "idx" is first decoded from the CABAC engine. If idx is smaller than the first merge index value, the second merge index value is equal to idx. Otherwise, the second merge index value is equal to (idx + 1).
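The index coding rule can be illustrated as follows, with plain integers standing in for the CABAC-coded values (function names are hypothetical):

```python
# Illustrative sketch of the second-merge-index coding rule described above.

def encode_second_index(first_idx, second_idx):
    # The two indices must differ; values above first_idx are shifted down by 1.
    assert first_idx != second_idx
    return second_idx if second_idx < first_idx else second_idx - 1

def decode_second_index(first_idx, idx):
    # 'idx' stands in for the value parsed from the CABAC engine.
    return idx if idx < first_idx else idx + 1

# Encoder/decoder symmetry:
assert decode_second_index(2, encode_second_index(2, 4)) == 4
assert decode_second_index(2, encode_second_index(2, 0)) == 0
```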
Conventional merge mode motion vector candidate list
According to the current VVC, in a conventional merge mode in which an entire CU is predicted without being divided into more than one PU, a motion vector candidate list or a merge candidate list is constructed using a different process from that for the triangle prediction mode.
Fig. 8 is a schematic diagram illustrating the positions of spatial merge candidates according to some embodiments of the present disclosure. First, spatial motion vector candidates are selected based on motion vectors from neighboring blocks, as illustrated in fig. 8. In the derivation of the spatial merge candidates for the current block 802, up to four merge candidates are selected among candidates located in the positions depicted in fig. 8. The order of derivation is A1 → B1 → B0 → A0 → (B2). Position B2 is considered only when any PU at position A1, B1, B0, or A0 is not available or is intra-coded.
Next, a temporal merge candidate is derived. In the derivation of the temporal merge candidate, a scaled motion vector is derived based on the co-located PU belonging to the picture that has the smallest Picture Order Count (POC) difference from the current picture within a given reference picture list. The reference picture list to be used for deriving the co-located PU is explicitly signaled in the slice header. The scaled motion vector of the temporal merge candidate is obtained as indicated by the dashed line in fig. 9, which illustrates the motion vector scaling of the temporal merge candidate according to some embodiments of the present disclosure. The scaled motion vector of the temporal merge candidate is scaled from the motion vector of the co-located PU col_PU using the POC distances tb and td, where tb is defined as the POC difference between the reference picture curr_ref of the current picture and the current picture curr_pic, and td is defined as the POC difference between the reference picture col_ref of the co-located picture and the co-located picture col_pic. The reference picture index of the temporal merge candidate is set equal to zero. A practical implementation of the scaling process is described in the HEVC draft specification. For a B slice, two motion vectors (one for reference picture list 0 and the other for reference picture list 1) are obtained and combined to form the bi-predictive merge candidate.
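The scaling in fig. 9 reduces to the ratio of the two POC distances. The sketch below illustrates it in floating point; the actual HEVC/VVC specifications use a fixed-point formulation with clipping, which is omitted here:

```python
# Illustrative sketch of temporal MV scaling by POC distances (tb / td).

def scale_temporal_mv(col_mv, curr_poc, curr_ref_poc, col_poc, col_ref_poc):
    tb = curr_poc - curr_ref_poc  # POC distance: current picture to its reference
    td = col_poc - col_ref_poc    # POC distance: co-located picture to its reference
    scale = tb / td
    return (round(col_mv[0] * scale), round(col_mv[1] * scale))

# Example: co-located MV (8, -4) with tb = 2 and td = 4 scales to (4, -2).
print(scale_temporal_mv((8, -4), curr_poc=10, curr_ref_poc=8,
                        col_poc=12, col_ref_poc=8))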
Fig. 10 is a schematic diagram illustrating candidate locations of temporal merging candidates according to some embodiments of the present disclosure.
As depicted in fig. 10, the location of the co-located PU is selected between two candidate locations C3 and H. If the PU at position H is not available, or is intra coded, or is outside the current CTU, position C3 is used to derive a temporal merging candidate. Otherwise, position H is used to derive a temporal merging candidate.
After inserting both spatial and temporal motion vectors into the merge candidate list as described above, history-based merge candidates are added. So-called history-based merge candidates include those motion vectors from previously coded CUs, which are maintained in a separate motion vector list and managed based on certain rules.
After the history-based candidates are inserted, the pairwise average motion vector candidates are further added to the list if the merge candidate list is not full. This type of candidate is constructed by averaging the candidates already in the current list, as the name implies. More specifically, based on a particular order or rule, two candidates in the merge candidate list are taken at a time and the average motion vector of the two candidates is appended to the current list.
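A sketch of this pairwise averaging is given below. The candidate representation (one motion vector per candidate) and the particular index-pair order are assumptions made here for illustration:

```python
# Illustrative sketch: append pairwise-average candidates until the merge
# list is full. The pair order below is an assumption, not normative.

def append_pairwise_averages(merge_list, max_size):
    pairs = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
    for i, j in pairs:
        if len(merge_list) >= max_size:
            break
        if i < len(merge_list) and j < len(merge_list):
            a, b = merge_list[i], merge_list[j]
            merge_list.append(((a[0] + b[0]) // 2, (a[1] + b[1]) // 2))
    return merge_list
```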
After inserting the pairwise average motion vector, if the merge candidate list is still not full, zero motion vectors will be added to fill the list.
Constructing a first merge list for triangle prediction using a conventional merge list construction process
The triangle prediction mode in the current VVC shares some similarities with the regular merge prediction mode in its overall procedure of forming predictors. For example, under both prediction modes, a merge list needs to be constructed based at least on the spatial motion vectors of neighboring blocks and the co-located motion vectors of the current CU. At the same time, the triangle prediction mode also has certain aspects that differ from the regular merge prediction mode.
For example, although the merge lists need to be constructed in the triangle prediction mode and the conventional merge prediction mode, the detailed process of obtaining such lists is different.
These differences incur additional cost in codec implementations because additional logic is required. The process and logic of constructing the merge list may instead be unified and shared between the triangle prediction mode and the regular merge prediction mode.
In some examples, in forming the uni-directional prediction (also termed uni-prediction) merge list for the triangle prediction mode, a new motion vector is fully pruned against those already in the list before it is added to the merge list. In other words, the new motion vector is compared with every motion vector already in the uni-directional prediction merge list, and it is added to the list only when it differs from every one of those motion vectors. Otherwise, the new motion vector is not added to the list.
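A minimal sketch of this full pruning check, with hypothetical names:

```python
# Illustrative sketch: a new MV enters the uni-prediction merge list only if
# it differs from every MV already in the list (full pruning).

def add_with_full_pruning(uni_list, new_mv, target_num=5):
    if len(uni_list) < target_num and all(mv != new_mv for mv in uni_list):
        uni_list.append(new_mv)
    return uni_list
```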
According to some examples of the present disclosure, in the triangle prediction mode, a uni-directional prediction merge list may be obtained or constructed from a regular merge mode motion vector candidate list (which may be referred to as a regular merge list).
More specifically, to construct a merge candidate list for the triangle prediction mode, a first merge list is first constructed based on a merge list construction process for conventional merge prediction. The first merge list includes a plurality of candidates, each candidate being a motion vector. The motion vectors in the first merge list are then used to further construct or derive a uni-directional prediction merge list for the triangle prediction mode.
It should be noted that the first merge list constructed in this case may choose a list size different from that of the general merge mode (or regular merge mode). In one example of the present disclosure, the first merge list has the same size as the list of the general merge mode. In another example of the present disclosure, the constructed first merge list has a list size different from that of the general merge mode.
Constructing a uni-directional predictive merge list from a first merge list
According to some examples of the disclosure, a uni-directional prediction merge list for the triangle prediction mode may be constructed or derived from the first merge list based on one of the following methods.
In an example of the present disclosure, to construct or derive the uni-directional prediction merge list, the prediction list 0 motion vectors of the candidates in the first merge list are first checked and selected into the uni-directional prediction merge list. If the uni-directional prediction merge list is not full after this process (e.g., the number of candidates in the list is still less than the target number), the prediction list 1 motion vectors of the candidates in the first merge list are checked and selected into the uni-directional prediction merge list. If the uni-directional prediction merge list is still not full, prediction list 0 zero motion vectors are added. If the list is still not full, prediction list 1 zero motion vectors are added.
In another example of the present disclosure, for each candidate in the first merge list, its prediction list 0 motion vector and prediction list 1 motion vector are added to the uni-directional prediction merge list in an interleaved manner. More specifically, for each candidate in the first merge list, if the candidate is a uni-directionally predicted motion vector, it is added directly to the uni-directional prediction merge list. Otherwise, if the candidate is a bi-directionally predicted motion vector, its prediction list 0 motion vector is added first, followed by its prediction list 1 motion vector. Once all motion vector candidates in the first merge list have been checked and added, if the uni-directional prediction merge list is still not full, uni-directionally predicted zero motion vectors may be added. For example, for each reference frame index, a prediction list 0 zero motion vector and a prediction list 1 zero motion vector may be added separately to the uni-directional prediction merge list until the list is full.
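As a rough illustration of this interleaved derivation, the following sketch models each first-list candidate as a dict that may carry a 'list0' and/or a 'list1' motion vector. The names, target list size, and zero-padding scheme (which omits per-reference-index bookkeeping) are assumptions for illustration.

```python
# Sketch of the interleaved uni-prediction list derivation described
# above. Candidate layout and the target list size are assumptions;
# per-reference-index bookkeeping for zero padding is omitted.

def build_uni_list_interleaved(first_merge_list, target_size=5):
    uni_list = []
    for cand in first_merge_list:
        # A uni-directional candidate contributes its single motion
        # vector; a bi-directional candidate contributes its list 0
        # motion vector first, then its list 1 motion vector.
        for key in ('list0', 'list1'):
            mv = cand.get(key)
            if mv is not None and len(uni_list) < target_size:
                uni_list.append((key, mv))
    # Pad with uni-directional zero motion vectors, alternating between
    # prediction list 0 and prediction list 1, until the list is full.
    pad = 0
    while len(uni_list) < target_size:
        uni_list.append(('list0' if pad == 0 else 'list1', (0, 0)))
        pad ^= 1
    return uni_list
```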
In yet another example of the present disclosure, the uni-directionally predicted motion vectors from the first merge list are first selected into the uni-directional prediction merge list. If the uni-directional prediction merge list is not full after this process, then for each bi-directionally predicted motion vector in the first merge list, its prediction list 0 motion vector is added first, followed by its prediction list 1 motion vector. After this process, if the uni-directional prediction merge list is still not full, uni-directionally predicted zero motion vectors may be added. For example, for each reference frame index, a prediction list 0 zero motion vector and a prediction list 1 zero motion vector may be added separately to the uni-directional prediction merge list until the list is full.
In the above description, when uni-directionally predicted motion vectors are added to the uni-directional prediction merge list, a motion vector pruning process may be performed to ensure that each new motion vector differs from those already in the list. Such a pruning process may also be performed partially to reduce complexity, e.g., by checking a new motion vector against only some, but not all, of the motion vectors already in the list. In the extreme case, no motion vector pruning (i.e., no motion vector comparison operation) is performed.
Constructing a uni-directional prediction merge list from a first merge list based on a picture prediction configuration
In some examples of the disclosure, the uni-directional prediction merge list may be built adaptively based on whether the current picture uses backward prediction. For example, different construction methods may be used depending on whether the current picture uses backward prediction. If the Picture Order Count (POC) values of all reference pictures are not greater than the POC value of the current picture, the current picture does not use backward prediction.
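This backward-prediction test reduces to a simple POC comparison; a minimal sketch, assuming the reference picture POC values are available as a list:

```python
# The current picture uses backward prediction exactly when at least one
# reference picture has a POC value greater than the current picture's.

def uses_backward_prediction(current_poc, reference_pocs):
    return any(ref_poc > current_poc for ref_poc in reference_pocs)
```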
In an example of the present disclosure, when the current picture does not use backward prediction, or after it is determined that the current picture does not use backward prediction, the prediction list 0 motion vectors of the candidates in the first merge list are first checked and selected into the uni-directional prediction merge list, followed by the prediction list 1 motion vectors of those candidates; if the uni-directional prediction merge list is not yet full, uni-directional prediction zero motion vectors may be added. Otherwise, if the current picture uses backward prediction, the prediction list 0 and prediction list 1 motion vectors of each candidate in the first merge list may be checked and selected into the uni-directional prediction merge list in an interleaved manner, i.e., the prediction list 0 motion vector of the first candidate in the first merge list, then the prediction list 1 motion vector of the first candidate, then the prediction list 0 motion vector of the second candidate, then the prediction list 1 motion vector of the second candidate, and so on. At the end of the process, if the uni-directional prediction merge list is not yet full, uni-directional prediction zero motion vectors may be added.

In another example of the present disclosure, if the current picture does not use backward prediction, the prediction list 1 motion vectors of the candidates in the first merge list are first checked and selected into the uni-directional prediction merge list, followed by the prediction list 0 motion vectors of those candidates; if the uni-directional prediction merge list is not yet full, uni-directional prediction zero motion vectors may be added. Otherwise, if the current picture uses backward prediction, the motion vectors are checked and selected in the interleaved manner described above, and uni-directional prediction zero motion vectors may be added at the end if the list is not yet full.

In yet another example of the present disclosure, if the current picture does not use backward prediction, only the prediction list 0 motion vectors of the candidates in the first merge list are checked first and selected into the uni-directional prediction merge list, with uni-directional prediction zero motion vectors added if the list is not yet full. Otherwise, if the current picture uses backward prediction, the motion vectors are checked and selected in the interleaved manner described above, and uni-directional prediction zero motion vectors may be added at the end if the list is not yet full.

In yet another example of the present disclosure, if the current picture does not use backward prediction, only the prediction list 1 motion vectors of the candidates in the first merge list are checked first and selected into the uni-directional prediction merge list, with uni-directional prediction zero motion vectors added if the list is not yet full. Otherwise, if the current picture uses backward prediction, the motion vectors are checked and selected in the interleaved manner described above, and uni-directional prediction zero motion vectors may be added at the end if the list is not yet full.
In another example of the present disclosure, when the current picture does not use backward prediction, the prediction list 0 motion vectors of the candidates in the first merge list are used as uni-directional prediction merge candidates, indexed in the same order as in the first merge list. Otherwise, if the current picture uses backward prediction, the list 0 and list 1 motion vectors of each candidate in the first merge list are used as uni-directional prediction merge candidates, indexed in the interleaved manner described above, i.e., the list 0 motion vector of the first candidate in the first merge list, then the list 1 motion vector of the first candidate, then the list 0 motion vector of the second candidate, then the list 1 motion vector of the second candidate, and so on. If a candidate in the first merge list is a uni-directional motion vector, a zero motion vector is indexed immediately after that candidate's motion vector in the uni-directional prediction merge list. This ensures that, when the current picture uses backward prediction, each candidate in the first merge list (whether a bi-directionally or uni-directionally predicted motion vector) provides two uni-directional motion vectors as uni-directional prediction merge candidates.
In another example of the present disclosure, when the current picture does not use backward prediction, the prediction list 0 motion vectors of the candidates in the first merge list are used as uni-directional prediction merge candidates, indexed in the same order as in the first merge list. Otherwise, if the current picture uses backward prediction, the list 0 and list 1 motion vectors of each candidate in the first merge list are used as uni-directional prediction merge candidates, indexed in the interleaved manner described above. If a candidate in the first merge list is a uni-directional motion vector, the same motion vector plus a certain motion offset is indexed immediately after that candidate's motion vector in the uni-directional prediction merge list.
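The last two examples share the rule that, when backward prediction is used, every first-list candidate contributes exactly two uni-directional entries. A sketch of that rule, with the filler for a uni-directional candidate chosen as either a zero motion vector or the same motion vector plus an offset (the offset value here is an arbitrary placeholder):

```python
# Two uni-directional entries per first-list candidate, as described
# above. The offset value is an arbitrary placeholder.

def two_uni_entries(cand, use_offset=False, offset=(1, 0)):
    mv0, mv1 = cand.get('list0'), cand.get('list1')
    if mv0 is not None and mv1 is not None:  # bi-directional candidate
        return [mv0, mv1]
    mv = mv0 if mv0 is not None else mv1     # uni-directional candidate
    filler = (mv[0] + offset[0], mv[1] + offset[1]) if use_offset else (0, 0)
    return [mv, filler]
```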
In the above description, motion vectors are described as being selected from the first merge list into the uni-directional prediction merge list; in practice, however, the method may be implemented in different ways, with or without physically forming a uni-directional prediction merge list. For example, the first merge list may be used directly without physically creating a uni-directional prediction merge list: the list 0 and/or list 1 motion vectors of each candidate in the first merge list may simply be indexed based on a particular order and accessed directly from the first merge list. The indexing order may follow any of the selection orders described in the examples above. This means that, given the merge index of a PU coded in triangle prediction mode, its corresponding uni-directional prediction merge candidate can be obtained directly from the first merge list without physically forming a uni-directional prediction merge list.
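Under the interleaved order, this direct-access view amounts to simple index arithmetic. A sketch, assuming every first-list candidate carries both a list 0 and a list 1 motion vector; the substitution rules for uni-directional candidates described above would be layered on top:

```python
# Direct access without physically forming the uni-directional
# prediction merge list, assuming the interleaved indexing order.

def uni_candidate(first_merge_list, uni_merge_index):
    cand = first_merge_list[uni_merge_index // 2]           # which candidate
    key = 'list0' if uni_merge_index % 2 == 0 else 'list1'  # which list
    return cand[key]
```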
In this process, the pruning may be performed fully or partially when checking for new motion vectors to be added to the list. When performed partially, this means that the new motion vector is compared to some motion vectors already in the uni-directional prediction merge list, instead of all motion vectors. In the extreme case, no motion vector pruning (i.e., motion vector comparison operation) is performed in the process.
Such motion vector pruning may also be performed adaptively based on whether the current picture uses backward prediction when forming the uni-directional prediction merge list. For example, for all examples of the disclosure in this section described above, the motion vector pruning operation is performed in whole or in part when the current picture does not use backward prediction. When the current picture uses backward prediction, the motion vector pruning operation is not performed when forming the uni-directional prediction merge list.
Triangle prediction using a first merge list without creating a uni-directional prediction merge list
In the above examples, the uni-directional prediction merge list for triangle prediction is constructed by selecting motion vectors from the first merge list into the uni-directional prediction merge list. In practice, however, the method may be implemented with or without physically forming a uni-directional prediction (or uni-prediction) merge list. In some examples, the first merge list may be used directly without physically creating a uni-directional prediction merge list. For example, the list 0 and/or list 1 motion vectors of each candidate in the first merge list may simply be indexed based on a particular order and accessed directly from the first merge list.
For example, the first merge list may be obtained from a decoder or other electronic device/component. In other examples, after constructing a first merge list including a plurality of candidates (each candidate being one or more motion vectors) based on a merge list construction process for conventional merge prediction, a uni-directional prediction merge list is not constructed, but a pre-defined index list including a plurality of reference indices (each reference index being a reference to a motion vector of a candidate in the first merge list) is used to derive uni-directional merge candidates for the triangle prediction mode. The index list may be considered as a representation of a uni-directional prediction merge list for triangle prediction, and the uni-directional prediction merge list includes at least a subset of candidates in the first merge list corresponding to the reference index. It should be noted that the order of the indices may follow any of the selection orders described in the examples of building unidirectional predictive merge lists. In practice, such an index list may be implemented in different ways. For example, it may be explicitly implemented as a list. In other examples, it may also be implemented or obtained in specific logic and/or program functions without explicitly forming any lists.
In some examples of the disclosure, the index list may be adaptively determined based on whether the current picture uses backward prediction. For example, the reference indices in the index list may be arranged according to whether the current picture uses backward prediction, i.e., based on a comparison of Picture Order Count (POC) of the current picture with POC of the reference picture. If the Picture Order Count (POC) values of all reference pictures are not greater than the POC value of the current picture, it indicates that the current picture does not use backward prediction.
In one example of the present disclosure, when the current picture does not use backward prediction, the prediction list 0 motion vectors of the candidates in the first merge list are used as uni-directional prediction merge candidates, indexed in the same order as in the first merge list. That is, upon determining that the POC of the current picture is greater than or equal to each of the POCs of the reference pictures, the reference indices are arranged according to the order of the list 0 motion vectors of the candidates in the first merge list. Otherwise, if the current picture uses backward prediction, the list 0 and list 1 motion vectors of each candidate in the first merge list are used as uni-directional prediction merge candidates, indexed in an interleaved manner, i.e., the list 0 motion vector of the first candidate in the first merge list, then the list 1 motion vector of the first candidate, then the list 0 motion vector of the second candidate, then the list 1 motion vector of the second candidate, and so on. That is, upon determining that the POC of the current picture is less than at least one of the POCs of the reference pictures, and where each candidate in the first merge list is a bi-directionally predicted motion vector, the reference indices are arranged according to the interleaving of the list 0 and list 1 motion vectors of each candidate. Where a candidate in the first merge list is a uni-directional motion vector, a zero motion vector is indexed as the uni-directional prediction merge candidate following that candidate's motion vector. This ensures that, when the current picture uses backward prediction, each candidate in the first merge list (whether a bi-directionally or uni-directionally predicted motion vector) provides two uni-directional motion vectors as uni-directional prediction merge candidates.
In another example of the present disclosure, when the current picture does not use backward prediction, the prediction list 0 motion vectors of the candidates in the first merge list are used as uni-directional prediction merge candidates, indexed in the same order as in the first merge list. Otherwise, if the current picture uses backward prediction, the list 0 and list 1 motion vectors of each candidate in the first merge list are used as uni-directional prediction merge candidates, indexed in the interleaved manner described above, i.e., the list 0 motion vector of the first candidate in the first merge list, followed by the list 1 motion vector of the first candidate, followed by the list 0 motion vector of the second candidate, followed by the list 1 motion vector of the second candidate, and so on. Where a candidate in the first merge list is a uni-directional motion vector, that motion vector plus a certain motion offset is indexed as the uni-directional prediction merge candidate following the candidate's motion vector.
Thus, where a candidate in the first merge list is a uni-directional motion vector, upon determining that the POC of the current picture is less than at least one of the POCs of the reference pictures, the reference indices are arranged in the following interleaved manner: the motion vector of each candidate in the first merge list, followed by either a zero motion vector or that motion vector plus an offset.
In the above process, the pruning may be performed fully or partially when checking for a new motion vector to be added to the uni-directional prediction merge list. When performed partially, this means that the new motion vector is compared to some motion vectors already in the uni-directional prediction merge list, but not all motion vectors. In the extreme case, no motion vector pruning (i.e., motion vector comparison operations) is performed in the process.
In forming the uni-directional prediction merge list, motion vector pruning may also be adaptively performed based on whether the current picture uses backward prediction. For example, for examples of the present disclosure relating to index list determination based on picture prediction configuration, a motion vector pruning operation is performed, in whole or in part, when the current picture does not use backward prediction. When the current picture uses backward prediction, no motion vector pruning operation is performed.
Selecting uni-directional prediction merge candidates for triangle prediction mode
In addition to the above examples, other ways of uni-directional prediction merge list construction or uni-directional prediction merge candidate selection are disclosed.
In one example of the present disclosure, once the first merge list for the normal merge mode is constructed, uni-directional prediction merge candidates may be selected for triangle prediction according to the following rules:
for a motion vector candidate in the first merge list, one and only one of its list 0 motion vector or list 1 motion vector is used for triangle prediction;
for a given motion vector candidate in the first merge list, if the merge index value of that motion vector candidate in the list is even, its list 0 motion vector is used for triangle prediction if available, and in case this motion vector candidate does not have a list 0 motion vector, its list 1 motion vector is used for triangle prediction; and
for a given motion vector candidate in the first merge list, if its merge index value in the list is odd, its list 1 motion vector is used for triangle prediction if available, and its list 0 motion vector is used for triangle prediction if this motion vector candidate does not have a list 1 motion vector.
Fig. 11A shows an example of uni-directional prediction motion vector (MV) selection (or uni-directional prediction merge candidate selection) for the triangle prediction mode. In the example, the first five merge MV candidates derived in the first merge list are indexed from 0 to 4; each row has two columns, representing the list 0 and list 1 motion vectors, respectively, of a candidate in the first merge list. Each candidate in the list may be either uni-directionally or bi-directionally predicted. A uni-directional prediction candidate has either a list 0 motion vector or a list 1 motion vector, but not both; a bi-directional prediction candidate has both. In Fig. 11A, for each merge index, the motion vectors labeled "x" are those checked first for triangle prediction, if available. If a motion vector labeled "x" is not available, the unlabeled motion vector corresponding to the same merge index is used for triangle prediction. In other words, given the merge index value of a PU coded in triangle prediction mode, this index value can be used directly to locate the merge candidate in the first merge list; then, depending on the parity of the index value (i.e., whether it is even or odd), either the list 0 or the list 1 motion vector of that merge candidate is selected for the PU based on the above rules. There is no need to physically form the uni-directional prediction merge list in this process.
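A minimal sketch of this parity rule, with the same illustrative candidate layout as before:

```python
# Parity rule of Fig. 11A: an even merge index prefers the candidate's
# list 0 motion vector, an odd index prefers list 1, with fallback to
# the other list when the preferred motion vector is absent.

def select_uni_mv_parity(first_merge_list, merge_index):
    cand = first_merge_list[merge_index]
    if merge_index % 2 == 0:
        preferred, fallback = 'list0', 'list1'
    else:
        preferred, fallback = 'list1', 'list0'
    mv = cand.get(preferred)
    return mv if mv is not None else cand[fallback]
```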
The concepts described above may be extended to other examples. Fig. 11B illustrates another example of uni-directional prediction Motion Vector (MV) selection for the triangle prediction mode. According to fig. 11B, the rule for selecting uni-directional prediction merging candidates for triangle prediction is as follows:
for a motion vector candidate in the first merge list, one and only one of its list 0 motion vector or list 1 motion vector is used for triangle prediction;
for a given motion vector candidate in the first merge list, if the merge index value of that motion vector candidate in the list is even, its list 1 motion vector is used for triangle prediction if available, and in case this motion vector candidate does not have a list 1 motion vector, its list 0 motion vector is used for triangle prediction; and
for a given motion vector candidate in the first merge list, if the merge index value of that motion vector candidate in the list is odd, its list 0 motion vector is used for triangle prediction if available, and in case this motion vector candidate does not have a list 0 motion vector, its list 1 motion vector is used for triangle prediction.
In some examples, other different orders may be defined and used to select uni-directional prediction merge candidates for triangle prediction from those in the first merge list. More specifically, for a given motion vector candidate in the first merge list, the decision whether to first use its list 0 or list 1 motion vector when the motion vector candidate is available for triangle prediction does not necessarily depend on the parity of the index values of the candidates in the first merge list, as described above. For example, the following rules may also be used:
for a motion vector candidate in the first merge list, one and only one of its list 0 motion vector or list 1 motion vector is used for triangle prediction;
based on some predefined pattern, for several motion vector candidates in the first merge list, their list 0 motion vectors are used for triangle prediction if available, and in case no list 0 motion vector is present, the corresponding list 1 motion vector is used for triangle prediction; and
based on the same predefined pattern, for the remaining motion vector candidates in the first merge list, their list 1 motion vectors are used for triangle prediction if available, and in case no list 1 motion vector is present, the corresponding list 0 motion vector is used for triangle prediction.
Figs. 12A to 12D show some examples of predefined patterns for uni-directional prediction motion vector (MV) selection in the triangle prediction mode. For each merge index, the motion vectors labeled "x" are those checked first for triangle prediction, if available. If a motion vector labeled "x" is not available, the unlabeled motion vector corresponding to the same merge index is used for triangle prediction.
In Fig. 12A, for the first three motion vector candidates in the first merge list, their list 0 motion vectors are checked first; only when a list 0 motion vector is not available is the corresponding list 1 motion vector used for triangle prediction. For the fourth and fifth motion vector candidates in the first merge list, their list 1 motion vectors are checked first; only when a list 1 motion vector is not available is the corresponding list 0 motion vector used. Figs. 12B to 12D show three other patterns for selecting uni-directional prediction merge candidates from the first merge list. The examples shown in the figures are not limiting, and further examples exist: for instance, horizontally and/or vertically mirrored versions of the patterns shown in Figs. 12A to 12D may also be used.
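A sketch of one such predefined pattern, mirroring Fig. 12A; the split point after the first three candidates is taken from the figure, and other patterns work the same way:

```python
# Predefined pattern in the spirit of Fig. 12A: merge indices 0-2 check
# list 0 first, the remaining indices check list 1 first.

LIST0_FIRST = {0, 1, 2}

def select_uni_mv_pattern(first_merge_list, merge_index):
    cand = first_merge_list[merge_index]
    order = (('list0', 'list1') if merge_index in LIST0_FIRST
             else ('list1', 'list0'))
    for key in order:
        if cand.get(key) is not None:
            return cand[key]
    return None  # no motion vector available for this candidate
```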
The selected uni-directional prediction merge candidates may be indexed and accessed directly from the first merge list, or they may be put into a uni-directional prediction merge list for triangle prediction. The derived uni-directional prediction merge list comprises a plurality of uni-directional prediction merge candidates, each comprising one motion vector of the corresponding candidate in the first merge list. According to some examples of the disclosure, each candidate in the first merge list includes at least one of a list 0 motion vector and a list 1 motion vector, and each uni-directional prediction merge candidate may be one of the list 0 and list 1 motion vectors of the corresponding candidate in the first merge list. Each uni-directional prediction merge candidate is associated with an integer-valued merge index, and the list 0 or list 1 motion vector is selected for it based on a preset rule.
In one example, for each uni-directional prediction merge candidate having an even merge index value, the list 0 motion vector of the corresponding candidate in the first merge list having the same merge index is selected; and for each uni-directional prediction merge candidate having an odd merge index value, the list 1 motion vector of the corresponding candidate having the same merge index is selected. In another example, for each uni-directional prediction merge candidate having an even merge index value, the list 1 motion vector of the corresponding candidate having the same merge index is selected; and for each uni-directional prediction merge candidate having an odd merge index value, the list 0 motion vector of the corresponding candidate having the same merge index is selected.
In yet another example, for each uni-directional prediction merge candidate, if the list 1 motion vector of the respective candidate in the first merge list is determined to be available, that list 1 motion vector is selected as the uni-directional prediction merge candidate; if the list 1 motion vector is determined not to be available, the list 0 motion vector of the respective candidate is selected.
In yet another example, for each uni-directional prediction merge candidate having a merge index value within a first range, the list 0 motion vector of the respective candidate in the first merge list is selected as the uni-directional prediction merge candidate; and for each uni-directional prediction merge candidate having a merge index value within a second range, the list 1 motion vector of the respective candidate is selected.
Selecting uni-directional prediction merge candidates directly from a first merge list for triangle prediction mode
Fig. 13 illustrates another example of the present disclosure. Once the first merge list for the regular merge mode is constructed, uni-directional prediction motion vectors are selected directly from that list for triangle prediction. To indicate a particular list 0 or list 1 motion vector for triangle prediction, an index value is first signaled to indicate which candidate is selected from the first merge list. A binary reference list indication flag, referred to as L0L1_flag in the following description, is then signaled to indicate whether the list 0 or the list 1 motion vector of the candidate selected from the first merge list is used for the first partition of the triangle prediction. The same signaling method is used to indicate which list 0 or list 1 motion vector is used for the second partition. For example, the signaled semantics for a CU coded in triangle mode may include index1, L0L1_flag1, index2, and L0L1_flag2. Here, index1 and index2 are the merge index values of the two candidates selected from the first merge list for the first and second partitions, respectively. L0L1_flag1 is a binary flag for the first partition, indicating whether the list 0 or the list 1 motion vector of the candidate selected based on index1 is used. L0L1_flag2 is a binary flag for the second partition, indicating whether the list 0 or the list 1 motion vector of the candidate selected based on index2 is used. It is worth mentioning that different signaling orders of the above semantics may be used in the method of the present disclosure. In one example, the signaling order may follow index1 → L0L1_flag1 → index2 → L0L1_flag2; in another example, the signaling order may follow index1 → index2 → L0L1_flag1 → L0L1_flag2; and so on. Thus, the order in which the signaled semantics are described here should not be interpreted as the only signaling order; other signaling orders of those semantics may be used and are understood to also be encompassed by the methods of the present disclosure.
With the above signaling method, in the triangle prediction mode, each list 0 and/or list 1 motion vector indicated by the symbol "x" in the rectangular box in Fig. 13 may be indicated/signaled to the decoder for deriving the prediction of the first partition, and likewise for deriving the prediction of the second partition. The selection of uni-directional prediction motion vectors from the first merge list therefore becomes very flexible: given a first merge list of size N candidates, up to 2N uni-directional prediction motion vectors are available for each of the two triangular partitions. The two merge index values of the two partitions in the triangle prediction mode need not differ from each other; in other words, they may take the same value. The index values are signaled directly without adjustment prior to signaling. More specifically, unlike what is currently defined in VVC, the second index value is signaled directly to the decoder without any adjustment being performed on it prior to signaling.
In another example of the present disclosure, when the two index values are the same, it is not necessary to signal the binary flag L0L1_flag2 of the second partition. Instead, L0L1_flag2 is inferred to have the opposite value of the binary flag L0L1_flag1 of the first partition; in other words, in this case, L0L1_flag2 takes the value (1 - L0L1_flag1).
In another example of the present disclosure, L0L1_ flag1 and L0L1_ flag2 may be coded as CABAC context binary bits. The context for the L0L1_ flag1 may be separated from the context for the L0L1_ flag 2. The CABAC probability for each context may be initialized at the beginning of the video sequence and/or at the beginning of the picture and/or at the beginning of the parallel block group.
In another example of the present disclosure, when a motion vector indicated by a merge index value and an associated L0L1_ flag is not present, a uni-directional predicted zero motion vector may be used instead.
In another example of the present disclosure, when a motion vector indicated by a merge index value and an associated L0L1_ flag is not present, a corresponding motion vector indicated by the same merge index value but from another list (i.e., list (1-L0L1_ flag)) may instead be used.
In another example of the present disclosure, for a CU coded in triangle mode, the second L0L1_flag (i.e., L0L1_flag2) associated with the second index (i.e., index2) is not signaled but is always inferred; the index1, L0L1_flag1, and index2 semantics still need to be signaled. In one approach, the inference of L0L1_flag2 is based on the value of L0L1_flag1 and on whether the current picture uses backward prediction. More specifically, for a CU coded in triangle mode, if the current picture uses backward prediction, the value of L0L1_flag2 is inferred to be the opposite binary value of L0L1_flag1 (i.e., 1 - L0L1_flag1); if the current picture does not use backward prediction, the value of L0L1_flag2 is inferred to be the same as L0L1_flag1. In addition, if the current picture does not use backward prediction, the value of index2 may further be forced to differ from the value of index1, since both motion vectors (one for each triangle partition) come from the same prediction list. If the value of index2 were equal to the value of index1, the same motion vector would be used for both triangle partitions, which is useless from a coding efficiency point of view. In this case, when signaling the value of index2, a corresponding adjustment of the value may be performed prior to index binarization, as is done for signaling index2 in the current VVC design. For example, when the actual value of index1 is smaller than the actual value of index2, the value of index2 is signaled using the CABAC binarized codeword corresponding to (index2 - 1); otherwise, it is signaled using the CABAC binarized codeword corresponding to index2. Accordingly, at the decoder side, if the signaled index2 value is less than the signaled index1 value, the actual value of index2 is set equal to the signaled value; otherwise, the actual value of index2 is set equal to the signaled value plus one. Based on this example, forcing index2 to differ from index1, together with the same index2 value adjustment for CABAC binarization, may optionally also be applied when the current picture uses backward prediction.
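The index2 adjustment described above is symmetric between the encoder and the decoder; a minimal sketch (function names are illustrative):

```python
# index2 adjustment for the case where index2 is forced to differ from
# index1. The encoder shrinks the codeword space by one; the decoder
# reverses the adjustment.

def encoder_signaled_index2(index1, index2):
    assert index1 != index2
    return index2 - 1 if index1 < index2 else index2

def decoder_actual_index2(signaled_index1, signaled_index2):
    return (signaled_index2 if signaled_index2 < signaled_index1
            else signaled_index2 + 1)
```

For example, with index1 = 2 and index2 = 4, the encoder signals 3, and the decoder maps the signaled 3 back to 4.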
In another example of the present disclosure, no L0L1_ flag is signaled for a CU of delta mode codec. Instead, they are inferred. In this case, it is still necessary to signal the index1 semantic and the index2 semantic, where the index1 semantic and the index2 semantic represent the merge index values of the two candidates selected from the first merge list for the first partition and the second partition, respectively. Given a merge candidate index value, a particular method may be defined when determining whether to select a list 0 or a list 1 motion vector for a respective merge candidate from the first list for triangle mode prediction. In one approach, for index1, the mode shown in FIG. 11A is used to determine from which prediction list the motion vector of the merge candidate is selected for triangle mode prediction; and for index2, the mode illustrated in fig. 11B is used to determine from which prediction list the motion vector of the merge candidate is selected for triangle mode prediction. In other words, if index1 is an even value, a list 0 motion vector is selected, and if index1 is an odd value, a list 1 motion vector is selected. For index2, if index2 is an even value, a list 1 motion vector is selected, and if index2 is an odd value, a list 0 motion vector is selected. In case a motion vector corresponding to a particular prediction list is not present, some default motion vector may instead be used, e.g. a zero motion vector, or a corresponding candidate motion vector from another prediction list, etc. It is worth mentioning that the present disclosure also covers the case where the mode shown in fig. 11A is used for index2 and the mode shown in fig. 11B is used for index1 when determining from which prediction list the motion vector of the merge candidate is selected for triangle mode prediction.
In the above processes, although a first merge list containing five merge candidates is used for illustration throughout the examples of the present disclosure, in practice the size of the first merge list may be defined differently, e.g., 6 or 4 or some other value. All methods described in this disclosure apply equally when the first merge list has a size other than 5.
In the above processes, motion vector pruning may also be performed. Such pruning may be done fully or partially. When performed partially, a new motion vector is compared against some, but not all, of the motion vectors already in the uni-directional prediction merge list. It may also mean that only some, but not all, new motion vectors are checked for pruning before being used as merge candidates for triangle prediction. One particular example is to check only the second motion vector against the first motion vector for pruning, rather than checking it against all other motion vectors, before using the second motion vector as a merge candidate for triangle prediction. In the extreme case, no motion vector pruning (i.e., no motion vector comparison operation) is performed.
Although the methods of forming a uni-directional prediction merge list in the present disclosure are described with respect to the triangle prediction mode, they are applicable to other prediction modes of a similar kind. For example, in a more general geometric partitioning prediction mode in which a CU is partitioned into two PUs along a line that is not a full diagonal, the two PUs may have geometric shapes such as triangles, wedges, or trapezoids. In such cases, the prediction for each PU is formed in a manner similar to the triangle prediction mode, and the methods described herein are equally applicable.
Fig. 14 is a block diagram illustrating an apparatus for video codec according to some embodiments of the present disclosure. The apparatus 1400 may be a terminal, such as a mobile phone, a tablet computer, a digital broadcast terminal, or a personal digital assistant.
As shown in fig. 14, the apparatus 1400 may include one or more of the following components: a processing component 1402, a memory 1404, a power component 1406, a multimedia component 1408, an audio component 1410, an input/output (I/O) interface 1412, a sensor component 1414, and a communication component 1416.
The processing component 1402 generally controls the overall operation of the device 1400, such as operations related to display, telephone calls, data communications, camera operations, and recording operations. Processing component 1402 may include one or more processors 1420 for executing instructions to perform all or a portion of the steps of the above-described methods. Further, processing component 1402 can include one or more modules for facilitating interaction between processing component 1402 and other components. For example, the processing component 1402 can include a multimedia module for facilitating interaction between the multimedia component 1408 and the processing component 1402.
The memory 1404 is configured to store different types of data to support the operation of the apparatus 1400. Examples of such data include instructions for any application or method operating on the device 1400, contact data, phonebook data, messages, pictures, videos, and so on. The memory 1404 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power supply component 1406 provides power to the various components of the device 1400. The power components 1406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 1400.
The multimedia component 1408 includes a screen that provides an output interface between the device 1400 and a user. In some examples, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen that receives an input signal from a user. The touch panel may include one or more touch sensors for sensing touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but may also detect the duration and pressure associated with the touch or slide operation. In some examples, the multimedia component 1408 may include a front camera and/or a rear camera. The front camera and/or the back camera may receive external multimedia data when the device 1400 is in an operating mode, such as a shooting mode or a video mode.
The audio component 1410 is configured to output and/or input audio signals. For example, audio component 1410 includes a Microphone (MIC). When the device 1400 is in an operational mode (such as a call mode, a recording mode, and a voice recognition mode), the microphone is configured to receive external audio signals. The received audio signals may further be stored in the memory 1404 or transmitted via the communication component 1416. In some examples, audio component 1410 also includes a speaker for outputting audio signals.
I/O interface 1412 provides an interface between processing component 1402 and peripheral interface modules. The peripheral interface module can be a keyboard, a click wheel, a button and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 1414 includes one or more sensors for providing status assessments of various aspects of the apparatus 1400. For example, the sensor component 1414 may detect the on/off state of the device 1400 and the relative positioning of components, such as the display and keyboard of the device 1400. The sensor component 1414 may also detect a change in position of the device 1400 or a component of the device 1400, the presence or absence of user contact with the device 1400, the orientation or acceleration/deceleration of the device 1400, and a change in temperature of the device 1400. The sensor component 1414 may include a proximity sensor configured to detect the presence of nearby objects without any physical touch. The sensor component 1414 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some examples, the sensor component 1414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1416 is configured to facilitate wired or wireless communication between the apparatus 1400 and other devices. The device 1400 may access the wireless network based on a communication standard such as WiFi, 4G, or a combination thereof. In an example, the communication component 1416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an example, the communication component 1416 can also include a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an example, the apparatus 1400 may be implemented by one or more of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components to perform the above-described methods.
The non-transitory computer-readable storage medium may be, for example, a Hard Disk Drive (HDD), a Solid State Drive (SSD), flash memory, a hybrid drive or Solid State Hybrid Drive (SSHD), Read Only Memory (ROM), compact disc read only memory (CD-ROM), magnetic tape, a floppy disk, and the like.
Fig. 15 is a flow diagram illustrating an exemplary process for video coding using motion compensated prediction of geometric prediction units according to some embodiments of the present disclosure.
In step 1501, processor 1420 partitions the video picture into multiple Coding Units (CUs), wherein at least one of the multiple CUs is further partitioned into two Prediction Units (PUs). The two PUs may comprise at least one geometrically shaped PU. For example, the geometrically shaped PUs may include a pair of triangular shaped PUs, a pair of wedge shaped PUs, or other geometrically shaped PUs.
At step 1502, processor 1420 constructs a first merge list including a plurality of candidates, where each candidate includes one or more motion vectors, i.e., a list 0 motion vector, a list 1 motion vector, or both. For example, processor 1420 may construct the first merge list based on the merge list construction process for conventional merge prediction. Processor 1420 may also obtain the first merge list from another electronic device or from memory.
At step 1503, the processor 1420 signals a first index value indicating a first candidate selected from the first merge list.
At step 1504, the processor 1420 signals a second index value indicating a second candidate selected from the first merge list.
At step 1505, processor 1420 signals a first binary flag to indicate whether a list 0 motion vector of the first candidate or a list 1 motion vector of the first candidate is selected for the first PU of the geometric prediction.
At step 1506, processor 1420 infers, based on the first binary flag and based on whether the current picture uses backward prediction, a second binary flag indicating whether a list 0 motion vector of the second candidate or a list 1 motion vector of the second candidate is selected for the second PU of the geometric prediction.
In some examples, an apparatus for video coding is provided. The apparatus includes a processor 1420; and a memory 1404 configured to store instructions executable by the processor; wherein the processor, when executing the instructions, is configured to perform the method as shown in figure 15.
In some other examples, a non-transitory computer-readable storage medium 1404 having instructions stored therein is provided. When executed by the processor 1420, the instructions cause the processor to perform the method shown in fig. 15.
The description of the present disclosure has been presented for purposes of illustration and is not intended to be exhaustive or to limit the disclosure. Many modifications, variations, and alternative embodiments will become apparent to those of ordinary skill in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings.
The examples were chosen and described in order to explain the principles of the disclosure, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments, with the best mode of practicing the disclosure, and with various modifications as are suited to the particular use contemplated. Therefore, it is to be understood that the scope of the disclosure is not to be limited to the specific examples of the embodiments disclosed, and that modifications and other embodiments are intended to be included within the scope of the disclosure.

Claims (13)

1. A method for video coding with geometric prediction, comprising:
partitioning a video picture into a plurality of Coding Units (CUs), wherein at least one CU of the plurality of CUs is further partitioned into two Prediction Units (PUs), the two PUs comprising at least one geometric-shaped PU;
constructing a first merge list comprising a plurality of candidates based on a merge list construction process for conventional merge prediction, wherein each candidate of the plurality of candidates comprises a list 0 motion vector, a list 1 motion vector, or both a list 0 motion vector and a list 1 motion vector;
receiving a signaled first index value indicating a first candidate selected from the first merge list;
receiving a signaled second index value indicating a second candidate selected from the first merge list;
receiving a first binary flag signaled to indicate whether to select a list 0 motion vector of the first candidate or a list 1 motion vector of the first candidate for a first PU of the geometric prediction;
inferring, based on the first binary flag and based on whether a current picture uses backward prediction, a second binary flag indicating whether to select a list 0 motion vector of the second candidate or a list 1 motion vector of the second candidate for a second PU of the geometric prediction.
2. The method for video coding with geometric prediction according to claim 1, wherein inferring, based on the first binary flag and based on whether a current picture uses backward prediction, the second binary flag indicating whether to select a list 0 motion vector of the second candidate or a list 1 motion vector of the second candidate for the second PU of the geometric prediction further comprises:
inferring the second binary flag to be an opposite binary value of the first binary flag in response to the current picture using backward prediction; and
inferring the second binary flag to be the same value as the first binary flag in response to the current picture not using backward prediction.
3. The method for video coding with geometric prediction according to claim 1, wherein, in response to the current picture not using backward prediction, the second index value is different from the first index value.
4. The method for video coding with geometric prediction according to claim 1, further comprising:
in response to the current picture not using backward prediction and in response to the signaled second index value being equal to or greater than the signaled first index value, adjusting the second index value to be equal to the signaled second index value plus one.
5. A method for video coding with geometric prediction, comprising:
partitioning a video picture into a plurality of Coding Units (CUs), wherein at least one CU of the plurality of CUs is further partitioned into two Prediction Units (PUs), the two PUs comprising at least one geometric-shaped PU;
constructing a first merge list comprising a plurality of candidates based on a merge list construction process for conventional merge prediction, wherein each candidate of the plurality of candidates comprises a list 0 motion vector, a list 1 motion vector, or both a list 0 motion vector and a list 1 motion vector;
receiving a signaled first index value indicating a first candidate selected from the first merge list;
receiving a signaled second index value indicating a second candidate selected from the first merge list;
inferring whether a list 0 motion vector of the first candidate or a list 1 motion vector of the first candidate is selected for the geometrically predicted first PU;
inferring whether to select a list 0 motion vector of the second candidate or a list 1 motion vector of the second candidate for a second PU of the geometric prediction.
6. The method for video coding with geometric prediction according to claim 5, further comprising:
selecting a list 0 motion vector of the first candidate for the first PU of the geometric prediction when the first index value is determined to be even;
selecting a list 1 motion vector of the first candidate for the first PU of the geometric prediction when the first index value is determined to be odd;
selecting a list 1 motion vector of the second candidate for the second PU of the geometric prediction when the second index value is determined to be even; and
selecting a list 0 motion vector of the second candidate for the second PU of the geometric prediction when the second index value is determined to be odd.
7. The method for video coding with geometric prediction according to claim 6, further comprising:
in response to determining that the respective selected list 0 or list 1 motion vector of the first candidate does not exist, selecting the motion vector of the first candidate from the other prediction list for the first PU of the geometric prediction; and
in response to determining that the respective selected list 0 or list 1 motion vector of the second candidate does not exist, selecting the motion vector of the second candidate from the other prediction list for the second PU of the geometric prediction.
8. The method for video coding with geometric prediction according to claim 5, further comprising:
selecting a list 1 motion vector of the first candidate for the first PU of the geometric prediction when the first index value is determined to be even;
selecting a list 0 motion vector of the first candidate for the first PU of the geometric prediction when the first index value is determined to be odd;
selecting a list 0 motion vector of the second candidate for the second PU of the geometric prediction when the second index value is determined to be even; and
selecting a list 1 motion vector of the second candidate for the second PU of the geometric prediction when the second index value is determined to be odd.
9. The method for video coding with geometric prediction according to claim 8, further comprising:
in response to determining that the respectively selected list 0 or list 1 motion vector of the first candidate does not exist, selecting a motion vector of the first candidate from the other prediction list for the first PU of the geometric prediction; and
in response to determining that the respectively selected list 0 or list 1 motion vector of the second candidate does not exist, selecting a motion vector of the second candidate from the other prediction list for the second PU of the geometric prediction.
10. An apparatus for video coding with geometric prediction, comprising:
one or more processors; and
a memory configured to store instructions executable by the one or more processors;
wherein the one or more processors, when executing the instructions, are configured to:
partition a video picture into a plurality of Coding Units (CUs), wherein at least one CU of the plurality of CUs is further partitioned into two Prediction Units (PUs), the two PUs comprising at least one geometric-shaped PU;
construct a first merge list comprising a plurality of candidates based on a merge list construction process for conventional merge prediction, wherein each candidate of the plurality of candidates comprises a list 0 motion vector, a list 1 motion vector, or both a list 0 motion vector and a list 1 motion vector;
receive a signaled first index value indicating a first candidate selected from the first merge list;
receive a signaled second index value indicating a second candidate selected from the first merge list;
receive a signaled first binary flag indicating whether to select a list 0 motion vector of the first candidate or a list 1 motion vector of the first candidate for a first PU of the geometric prediction; and
infer, based on the first binary flag and on whether a current picture uses backward prediction, a second binary flag indicating whether to select a list 0 motion vector of the second candidate or a list 1 motion vector of the second candidate for a second PU of the geometric prediction.
11. The apparatus for video coding with geometric prediction according to claim 10, wherein the one or more processors are further configured to:
infer the second binary flag to be the opposite binary value of the first binary flag in response to the current picture using backward prediction; and
infer the second binary flag to be the same binary value as the first binary flag in response to the current picture not using backward prediction.
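(Illustrative note, not claim language: a minimal sketch of the inference in claim 11, with hypothetical function and parameter names and an assumed flag convention of 0 for list 0 and 1 for list 1. With backward prediction the two PUs take motion vectors from opposite lists; otherwise both take the same list, and distinctness is instead guaranteed by the index constraint of claims 12 and 13.)

```python
def infer_second_flag(first_flag: int, uses_backward_prediction: bool) -> int:
    """Infer the unsignaled second binary flag (0 = list 0, 1 = list 1)."""
    if uses_backward_prediction:
        return 1 - first_flag   # opposite binary value of the first flag
    return first_flag           # same binary value as the first flag
```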
12. The apparatus for video coding with geometric prediction according to claim 10, wherein the signaled second index value is different from the signaled first index value in response to the current picture not using backward prediction.
13. The apparatus for video coding with geometric prediction according to claim 10, wherein the one or more processors are further configured to:
in response to the current picture not using backward prediction and in response to the signaled second index value being equal to or greater than the signaled first index value, adjust the second index value to be equal to the signaled second index value plus one.
CN202080042822.XA 2019-04-25 2020-04-27 Method and apparatus for video encoding and decoding using triangle prediction Active CN113994672B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201962838935P 2019-04-25 2019-04-25
US62/838,935 2019-04-25
PCT/US2020/030124 WO2020220037A1 (en) 2019-04-25 2020-04-27 Methods and apparatuses for video coding with triangle prediction

Publications (2)

Publication Number Publication Date
CN113994672A true CN113994672A (en) 2022-01-28
CN113994672B CN113994672B (en) 2023-07-25

Family

ID=72941866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080042822.XA Active CN113994672B (en) 2019-04-25 2020-04-27 Method and apparatus for video encoding and decoding using triangle prediction

Country Status (2)

Country Link
CN (1) CN113994672B (en)
WO (1) WO2020220037A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
PL3429205T3 (en) * 2010-05-04 2021-02-08 Lg Electronics Inc. Method and apparatus for processing a video signal
WO2015190078A1 (en) * 2014-06-12 2015-12-17 日本電気株式会社 Video encoding device, video encoding method, and recording medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100208818A1 (en) * 2007-10-12 2010-08-19 Thomson Licensing Methods and apparatus for video encoding and decoding geometrically partitioned bii-predictive mode partitions
WO2018026118A1 (en) * 2016-08-01 2018-02-08 한국전자통신연구원 Image encoding/decoding method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BENJAMIN BROSS: "Versatile Video Coding (Draft 5)", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-N1001, 12 April 2019 (2019-04-12), page 16 *
JINGYA LI: "CE4: Triangle merge candidate list simplification (Test 4.4.7)", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-N0197, 12 March 2019 (2019-03-12), pages 1-2 *
TZU-DER CHUANG, CHING-YEH CHEN, et al.: "CE10-related: Simplification of triangle merging candidate list derivation", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-M0184-v1, pages 1-2 *
XIANGLIN WANG: "CE4: Triangle prediction merge list construction (CE4-4.3)", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-N0322, 19 March 2019 (2019-03-19), pages 1-2 *
XIANGLIN WANG: "CE4-related: An improved method for triangle merge list construction", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-N0340, 20 March 2019 (2019-03-20), pages 1-3 *

Also Published As

Publication number Publication date
CN113994672B (en) 2023-07-25
WO2020220037A1 (en) 2020-10-29

Similar Documents

Publication Publication Date Title
CN113824959B (en) Method, apparatus and storage medium for video encoding
CN114071135A (en) Method and apparatus for signaling merge mode in video coding
CN113545050B (en) Video encoding and decoding method and device using triangle prediction
US20230089782A1 (en) Methods and apparatuses for video coding using geometric partition
US20220239902A1 (en) Methods and apparatuses for video coding using triangle partition
US20220070445A1 (en) Methods and apparatuses for video coding with triangle prediction
US20220014780A1 (en) Methods and apparatus of video coding for triangle prediction
CN113994672B (en) Method and apparatus for video encoding and decoding using triangle prediction
CN117041594B (en) Method, apparatus, storage medium and program product for encoding video
CN114982230A (en) Method and apparatus for video coding and decoding using triangle partitions
CN114080807A (en) Method and device for video coding and decoding by utilizing triangular partition
CN113841406A (en) Method and apparatus for video coding and decoding using triangle partitioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant