CN112997496A - Improvement of affine prediction mode - Google Patents

Improvement of affine prediction mode

Info

Publication number
CN112997496A
Authority
CN
China
Prior art keywords
affine
candidates
candidate list
candidate
merge
Prior art date
Legal status
Granted
Application number
CN201980074330.6A
Other languages
Chinese (zh)
Other versions
CN112997496B (en)
Inventor
张莉
张凯
刘鸿彬
许继征
王悦
Current Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Original Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd and ByteDance Inc
Publication of CN112997496A
Application granted
Publication of CN112997496B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/537: Motion estimation other than block-based
    • H04N19/54: Motion estimation other than block-based using feature points or meshes
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors
    • H04N19/517: Processing of motion vectors by encoding
    • H04N19/52: Processing of motion vectors by encoding by predictive encoding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Improvements to affine prediction modes are described. In one example, a method for video processing is disclosed. The method comprises the following steps: generating an affine candidate list for the current block by inserting affine candidates into the affine candidate list based on an insertion order, the insertion order depending on an affine model type of at least one affine candidate in the affine candidate list; and performing video processing on the current block based on the generated affine candidate list.

Description

Improvement of affine prediction mode
Cross Reference to Related Applications
Under the applicable patent law and/or rules pursuant to the Paris Convention, this application is made to timely claim the priority to and benefit of International Patent Application No. PCT/CN2018/115354, filed on November 14, 2018. The entire disclosure of the above application is incorporated by reference as part of the disclosure of this patent.
Technical Field
This patent document relates to image and video encoding and decoding.
Background
Digital video accounts for the largest bandwidth usage on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video grows, the bandwidth demand for digital video usage is expected to continue to grow.
Disclosure of Invention
The disclosed techniques may be used by a video decoder or encoder embodiment during video decoding or encoding using affine motion prediction or compensation tools.
In one example aspect, a method for video processing is disclosed. The method comprises the following steps: generating an affine candidate list for the current block by inserting affine candidates into the affine candidate list based on an insertion order, the insertion order depending on an affine model type of at least one affine candidate in the affine candidate list; and performing video processing on the current block based on the generated affine candidate list.
In another example aspect, a method for video processing is disclosed. The method comprises the following steps: generating an affine candidate list for the current block, wherein at least one affine candidate in the affine candidate list is reordered during generation of the affine candidate list; performing video processing on the current block based on the generated affine candidate list.
In another example aspect, a video processing apparatus is disclosed. The video processing apparatus includes a processor configured to implement the methods described herein.
In another example aspect, a computer program product stored on a non-transitory computer readable medium is disclosed. The computer program product comprises program code for performing the methods described herein.
In yet another example aspect, a video encoder apparatus is disclosed. The video encoder apparatus includes a processor configured to implement the methods described herein.
In yet another example aspect, a video decoder apparatus is disclosed. The video decoder apparatus includes a processor configured to implement the methods described herein.
In yet another aspect, a computer-readable medium having code stored thereon is disclosed. When executed by a processor, the code causes the processor to implement the methods described herein.
These and other aspects are described herein.
Drawings
Figure 1 is an example of a derivation process for the Merge candidate list construction.
Fig. 2 shows example positions of spatial domain Merge candidates.
Fig. 3 shows an example of a candidate pair considering redundancy check for spatial domain Merge candidates.
Fig. 4A-4B show example positions of N × 2N and 2N × N partitioned second PUs (Prediction units).
Fig. 5 is an example illustration of motion vector scaling of temporal Merge candidates.
FIG. 6 shows example candidate locations for the time domain Merge candidates C0 and C1.
Fig. 7 shows an example of combined bidirectional predictive Merge candidates.
Fig. 8 shows an example derivation process of a motion vector prediction candidate.
Fig. 9 is an example illustration of motion vector scaling of spatial motion vector candidates.
Fig. 10 shows an example of Alternative Temporal Motion Vector Prediction (ATMVP) motion prediction for a CU (Coding Unit).
Fig. 11 shows an example of one CU with four sub-blocks (a-D) and their neighboring blocks (a-D).
Fig. 12 is a flowchart of an example of encoding with different MV (Motion Vector) precisions.
Figs. 13A-13B are diagrams of partitioning a CU into two triangular prediction units (two partition modes): the 135-degree partition type (splitting from the top-left corner to the bottom-right corner) and the 45-degree partition type.
Fig. 14 shows an example of the positions of adjacent blocks.
Fig. 15 shows an example of the upper block and the left block.
Fig. 16A-16B show examples of 2 Control Point Motion Vectors (CPMVs) and 3 CPMVs.
Fig. 17 shows an example of two CPMVs.
Figs. 18A-18B illustrate examples of the 4-parameter and 6-parameter affine models.
Fig. 19 shows the MVP (Motion Vector Predictor) for AF_INTER of inherited affine candidates.
Fig. 20 shows an example of building an affine motion predictor in AF_INTER.
Figs. 21A-21B show examples of control point motion vectors for affine coding under AF_MERGE.
Fig. 22 shows an example of candidate positions of the affine Merge mode.
Fig. 23 shows an example of an intra picture block copy operation.
Fig. 24 shows candidate positions of the affine Merge mode.
Figure 25 shows a modified Merge list construction process.
Fig. 26 is a block diagram of an example of a video processing apparatus.
Fig. 27 is a flowchart of an example of a video processing method.
Fig. 28 is a flowchart of another example of a video processing method.
Detailed Description
This document provides various techniques that a decoder of a video bitstream can use to improve the quality of decompressed or decoded digital video. In addition, the video encoder may also implement these techniques during the course of encoding in order to reconstruct the decoded frames for further encoding.
For ease of understanding, section headings are used in this document and do not limit embodiments and techniques to the corresponding sections. As such, embodiments from one section may be combined with embodiments from other sections.
1. Overview
This document is related to video coding technologies. Specifically, it is related to affine prediction modes in video coding. It may be applied to existing video coding standards, such as HEVC, or to the standard to be finalized (e.g., Versatile Video Coding (VVC)). It may also be applicable to future video coding standards or video codecs.
In this document, the term "video processing" may refer to video encoding, video decoding, video compression, or video decompression. For example, a video compression algorithm may be applied during the conversion from a pixel representation of the video to a corresponding bitstream representation, and vice versa.
2. Introductory notes
Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/HEVC standards. Since H.262, video coding standards have been based on the hybrid video coding structure, in which temporal prediction plus transform coding is utilized. To explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM). In April 2018, the Joint Video Experts Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the Versatile Video Coding (VVC) standard, targeting a 50% bitrate reduction compared to HEVC.
2.1 HEVC/H.265 inter prediction
Each inter-predicted PU has motion parameters for one or two reference picture lists. The motion parameters include a motion vector and a reference picture index. The use of one of the two reference picture lists may also be signaled using inter_pred_idc. Motion vectors may be explicitly coded as deltas relative to predictors.
When a CU is encoded in skip mode, one PU is associated with the CU and there are no significant residual coefficients, encoded motion vector deltas, or reference picture indices. A Merge mode is specified whereby the motion parameters of the current PU are obtained from neighboring PUs that include spatial and temporal candidates. The Merge mode may be applied to any inter-predicted PU, not just for the skip mode. An alternative to the Merge mode is the explicit transmission of motion parameters, where each PU explicitly signals a motion vector (more precisely, a Motion Vector Difference (MVD) compared to a motion vector predictor), a corresponding reference picture index for each reference picture list, and the use of reference picture lists. In this disclosure, this mode is referred to as Advanced Motion Vector Prediction (AMVP).
When the signaling indicates that one of the two reference picture lists is to be used, the PU is produced from one block of samples. This is referred to as "uni-prediction". Uni-prediction is available for both P slices and B slices.
When the signaling indicates that both reference picture lists are to be used, the PU is generated from two blocks of samples. This is called "bi-prediction". Bi-prediction can only be used for B slices.
The following text provides detailed information about inter prediction modes specified in HEVC. The description will start with the Merge mode.
2.1.1 reference Picture List
In HEVC, the term inter prediction is used to denote a prediction derived from data elements (e.g., sample values or motion vectors) of reference pictures other than the currently decoded picture. Pictures can be predicted from multiple reference pictures, as in h.264/AVC. Reference pictures used for inter prediction are organized in one or more reference picture lists. The reference index identifies which reference picture in the list should be used to create the prediction signal.
A single reference picture list (list 0) is used for P slices and two reference picture lists (list 0 and list 1) are used for B slices. It should be noted that the reference pictures included in the list 0/1 may be pictures from the past and future in terms of capture/display order.
2.1.2 Merge mode
2.1.2.1 derivation of candidates for Merge mode
When predicting a PU using the Merge mode, the index pointing to an entry in the Merge candidate list is parsed from the bitstream and used to retrieve motion information. The construction of this list is specified in the HEVC standard and can be summarized in the following sequence of steps:
step 1: initial candidate derivation
o step 1.1: spatial domain candidate derivation
o step 1.2: redundancy check of spatial domain candidates
o step 1.3: time domain candidate derivation
Step 2: additional candidate insertions
o step 2.1: creating bi-directional prediction candidates
o step 2.2: inserting zero motion candidates
These steps are also schematically depicted in Fig. 1. For spatial Merge candidate derivation, a maximum of four Merge candidates are selected among candidates that are located in five different positions. For temporal Merge candidate derivation, a maximum of one Merge candidate is selected among two candidates. Since a constant number of candidates for each PU is assumed at the decoder, additional candidates are generated when the number of candidates obtained from step 1 does not reach the maximum number of Merge candidates (MaxNumMergeCand), which is signaled in the slice header. Since the number of candidates is constant, the index of the best Merge candidate is encoded using truncated unary binarization (TU). If the size of the CU is equal to 8, all PUs of the current CU share a single Merge candidate list, which is identical to the Merge candidate list of the 2N×2N prediction unit.
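By way of illustration, the following Python sketch mirrors the two-step construction order summarized above; the candidate derivations themselves are supplied by the caller, the redundancy check is simplified to a comparison against the full list rather than the arrow-linked pairs of Fig. 3, and the candidate encoding is illustrative rather than the codec's actual data structures.

def build_merge_list(spatial_cands, temporal_cand, combined_bipred, max_num):
    candidates = []
    # Step 1.1/1.2: up to four spatial candidates with a redundancy check
    # (simplified here to a full comparison against the list so far).
    for cand in spatial_cands:
        if len(candidates) < 4 and cand not in candidates:
            candidates.append(cand)
    # Step 1.3: at most one temporal candidate.
    if temporal_cand is not None:
        candidates.append(temporal_cand)
    # Step 2.1: combined bi-predictive candidates (B slices only).
    for cand in combined_bipred(candidates):
        if len(candidates) >= max_num:
            break
        candidates.append(cand)
    # Step 2.2: zero-motion candidates with an increasing reference index.
    ref_idx = 0
    while len(candidates) < max_num:
        candidates.append(('zero', (0, 0), ref_idx))
        ref_idx += 1
    return candidates[:max_num]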
Hereinafter, operations associated with the above steps will be described in detail.
2.1.2.2 spatial domain candidate derivation
In the derivation of spatial Merge candidates, a maximum of four Merge candidates are selected among candidates located at the positions depicted in Fig. 2. The order of derivation is A1, B1, B0, A0 and B2. Position B2 is considered only when any PU of positions A1, B1, B0, A0 is not available (e.g., because it belongs to another slice or tile) or is intra coded. After the candidate at position A1 is added, redundancy checks are performed on the addition of the remaining candidates, which ensures that candidates with the same motion information are excluded from the list, so that coding efficiency is improved. To reduce computational complexity, not all possible candidate pairs are considered in the mentioned redundancy check. Instead, only the pairs linked with an arrow in Fig. 3 are considered, and a candidate is only added to the list if the corresponding candidate used for the redundancy check does not have the same motion information. Another source of duplicate motion information is the "second PU" associated with partitions other than 2N×2N. As an example, Figs. 4A and 4B depict the second PU for the N×2N and 2N×N cases, respectively. When the current PU is partitioned as N×2N, the candidate at position A1 is not considered for list construction. In fact, adding this candidate would lead to two prediction units having the same motion information, which is redundant to just having one PU in a coding unit. Similarly, position B1 is not considered when the current PU is partitioned as 2N×N.
2.1.2.3 time-domain candidate derivation
In this step, only one candidate is added to the list. Specifically, in the derivation of the temporal region Merge candidate, the scaled motion vector is derived based on co-located (co-located) PUs belonging to pictures within a given reference Picture list that have the smallest POC (Picture Order Count) difference with the current Picture. The reference picture list to be used for deriving the co-located PU is explicitly signaled in the slice header. As indicated by the dashed line in fig. 5, a scaled motion vector for the temporal Merge candidate is obtained, which is scaled from the motion vector of the co-located PU using POC distances tb and td, where tb is defined as the POC difference between the reference picture of the current picture and the current picture, and td is defined as the POC difference between the reference picture of the co-located picture and the co-located picture. The reference picture index of the temporal region Merge candidate is set equal to zero. A practical implementation of the scaling process is described in the HEVC specification. For B slices, two motion vectors (one for reference picture list0 and the other for reference picture list 1) are obtained and combined to generate a bi-predictive Merge candidate.
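The scaling indicated by the dashed line in Fig. 5 can be sketched as follows; the fixed-point form loosely follows the HEVC specification, and the truncating division is an assumption carried over from its C description.

def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def scale_temporal_mv(mv_col, poc_cur, poc_cur_ref, poc_col, poc_col_ref):
    # tb: POC distance between the current picture and its reference picture;
    # td: POC distance between the co-located picture and its reference picture.
    tb = clip3(-128, 127, poc_cur - poc_cur_ref)
    td = clip3(-128, 127, poc_col - poc_col_ref)
    if td == tb or td == 0:
        return mv_col
    tx = int((16384 + (abs(td) >> 1)) / td)          # C-style truncating division
    scale = clip3(-4096, 4095, (tb * tx + 32) >> 6)
    def scale_comp(c):
        s = scale * c
        return clip3(-32768, 32767, (1 if s >= 0 else -1) * ((abs(s) + 127) >> 8))
    return (scale_comp(mv_col[0]), scale_comp(mv_col[1]))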
In the co-located PU (Y) belonging to the reference frame, the position for the temporal candidate is selected between candidates C0 and C1, as depicted in Fig. 6. If the PU at position C0 is not available, is intra coded, or is outside the current coding tree unit (CTU, also known as LCU, largest coding unit) row, position C1 is used. Otherwise, position C0 is used in the derivation of the temporal Merge candidate.
2.1.2.4 additional candidate insertions
In addition to spatial and temporal Merge candidates, there are two additional types of Merge candidates: a combined bi-directional predicted Merge candidate and zero Merge candidate. A combined bidirectional predictive Merge candidate is generated by using both spatial and temporal Merge candidates. The combined bi-directionally predicted Merge candidates are for B slices only. A combined bi-directional prediction candidate is generated by combining a first reference picture list motion parameter of an initial candidate with a second reference picture list motion parameter of another initial candidate. If these two tuples provide different motion hypotheses, they will form new bi-directional prediction candidates. As an example, fig. 7 shows when two candidates in the original list (on the left) with mvL0 and refIdxL0 or mvL1 and refIdxL1 are used to create a combined bi-predictive Merge candidate that is added to the final list (on the right). There are many rules on combinations that are considered to generate these additional Merge candidates.
Zero motion candidates are inserted to fill the remaining entries in the Merge candidate list and hence hit the MaxNumMergeCand capacity. These candidates have zero spatial displacement and a reference picture index that starts from zero and is increased every time a new zero motion candidate is added to the list. The number of reference frames used by these candidates is one and two for uni-directional and bi-directional prediction, respectively. Finally, no redundancy check is performed on these candidates.
2.1.3 AMVP
AMVP exploits the spatio-temporal correlation of a motion vector with neighboring PUs, which is used for explicit transmission of motion parameters. For each reference picture list, a motion vector candidate list is constructed by first checking the availability of left and above temporally neighboring PU positions, removing redundant candidates, and adding zero vectors to make the candidate list a constant length. The encoder can then select the best predictor from the candidate list and transmit the corresponding index indicating the chosen candidate. Similarly to Merge index signaling, the index of the best motion vector candidate is encoded using a truncated unary. The maximum value to be encoded in this case is 2 (see Fig. 8). In the following sections, details about the derivation process of motion vector prediction candidates are provided.
2.1.3.1 derivation of AMVP candidates
Fig. 8 outlines the derivation of motion vector prediction candidates.
In motion vector prediction, two types of motion vector candidates are considered: spatial motion vector candidates and temporal motion vector candidates. For spatial motion vector candidate derivation, two motion vector candidates are finally derived based on the motion vector of each PU located at five different positions depicted as shown in fig. 2.
For temporal motion vector candidate derivation, one motion vector candidate is selected from two candidates derived based on two different co-located positions. After generating the first spatio-temporal candidate list, duplicate motion vector candidates in the list are removed. If the number of potential candidates is greater than two, the motion vector candidate within the associated reference picture list whose reference picture index is greater than 1 is removed from the list. If the number of spatio-temporal motion vector candidates is less than two, additional zero motion vector candidates are added to the list.
2.1.3.2 spatial motion vector candidates
In the derivation of spatial motion vector candidates, a maximum of two candidates are considered among five potential candidates, which are derived from PUs located at the positions depicted in Fig. 2; those positions are the same as those of motion Merge. The order of derivation for the left side of the current PU is defined as A0, A1, scaled A0, scaled A1. The order of derivation for the above side of the current PU is defined as B0, B1, B2, scaled B0, scaled B1, scaled B2. For each side there are therefore four cases that can be used as a motion vector candidate, with two cases not requiring spatial scaling and two cases where spatial scaling is used. The four different cases are summarized as follows:
without spatial scaling
- (1) identical reference picture list, and identical reference picture index (identical POC)
- (2) different reference picture lists, but the same reference picture (same POC)
Spatial scaling
- (3) same reference picture list, but different reference pictures (different POCs)
- (4) different reference picture lists, and different reference pictures (different POCs)
The no-spatial-scaling cases are checked first, followed by the spatial scaling cases. Spatial scaling is considered when the POC differs between the reference picture of the neighboring PU and that of the current PU, regardless of the reference picture list. If all PUs of the left candidates are not available or are intra coded, scaling for the above motion vector is allowed to help the parallel derivation of left and above MV candidates. Otherwise, spatial scaling is not allowed for the above motion vector.
As depicted in fig. 9, in the spatial scaling process, the motion vectors of neighboring PUs are scaled in a similar manner as the temporal scaling. The main difference is that the reference picture list and the index of the current PU are given as input; the actual scaling procedure is the same as the time domain scaling procedure.
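A sketch of how the four cases above map onto a check order might look as follows; the neighbor representation, the check order across lists, and the simplified POC-ratio scaling are illustrative assumptions rather than the normative procedure.

def spatial_amvp_candidate(neigh, cur_ref_poc, cur_poc):
    # Cases (1)/(2): an MV pointing to the same reference picture (same POC)
    # is reused directly, whichever list it comes from.
    for lst in ('L0', 'L1'):
        entry = neigh.get(lst)            # (mv, ref_poc) or None
        if entry is not None and entry[1] == cur_ref_poc:
            return entry[0]
    # Cases (3)/(4): otherwise scale an available MV by the POC-distance
    # ratio, analogous to the temporal scaling of Fig. 9 (simplified here).
    for lst in ('L0', 'L1'):
        entry = neigh.get(lst)
        if entry is not None:
            mv, ref_poc = entry
            tb = cur_poc - cur_ref_poc
            td = cur_poc - ref_poc
            if td != 0:
                return (mv[0] * tb // td, mv[1] * tb // td)
    return None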
2.1.3.3 temporal motion vector candidates
Apart from the reference picture index derivation, all processes for the derivation of temporal motion vector candidates are the same as for the derivation of temporal Merge candidates (see Fig. 6). The reference picture index is signaled to the decoder.
2.2 Sub-CU based motion vector prediction methods in JEM
In JEM with QTBT (quadtree plus binary tree), each CU can have at most one set of motion parameters for each prediction direction. Two sub-CU level motion vector prediction methods are considered in the encoder by splitting a large CU into sub-CUs and deriving motion information for all the sub-CUs of the large CU. The Alternative Temporal Motion Vector Prediction (ATMVP) method allows each CU to fetch multiple sets of motion information from multiple blocks smaller than the current CU in the co-located reference picture. In the Spatial-Temporal Motion Vector Prediction (STMVP) method, motion vectors of the sub-CUs are derived recursively by using the temporal motion vector predictor and spatial neighboring motion vectors.
In order to maintain a more accurate motion field for sub-CU motion prediction, motion compression of the reference frame is currently disabled.
2.2.1 Alternative temporal motion vector prediction
In the Alternative Temporal Motion Vector Prediction (ATMVP) method, the temporal motion vector prediction (TMVP) is modified by fetching multiple sets of motion information (including motion vectors and reference indices) from blocks smaller than the current CU. As an example, a sub-CU is a square N×N block (N is set to 4 by default). Fig. 10 shows an example of ATMVP motion prediction for a CU.
ATMVP predicts motion vectors of sub-CUs within a CU in two steps. The first step is to identify the corresponding block in the reference picture with a so-called temporal vector. The reference picture is also referred to as a motion source picture. The second step is to divide the current CU into sub-CUs and obtain a motion vector and a reference index for each sub-CU from a block corresponding to each sub-CU, as an example.
In the first step, the reference picture and the corresponding block are determined by the motion information of the spatially neighboring blocks of the current CU. To avoid the repetitive scanning process of neighboring blocks, the first Merge candidate in the Merge candidate list of the current CU is used. The first available motion vector and its associated reference index are set to be the temporal vector and the index to the motion source picture. This way, in ATMVP, the corresponding block may be more accurately identified than in TMVP, wherein the corresponding block (sometimes called a collocated block) is always in a bottom-right or center position relative to the current CU.
In the second step, a corresponding block of each sub-CU is identified by the temporal vector in the motion source picture, by adding the temporal vector to the coordinates of the current CU. For each sub-CU, the motion information of its corresponding block (the smallest motion grid that covers the center sample) is used to derive the motion information for the sub-CU. After the motion information of a corresponding N×N block is identified, it is converted to the motion vectors and reference indices of the current sub-CU, in the same way as the TMVP of HEVC, wherein motion scaling and other procedures apply. For example, the decoder checks whether the low-delay condition is fulfilled (i.e., the POCs of all reference pictures of the current picture are smaller than the POC of the current picture) and possibly uses the motion vector MV_x (the motion vector corresponding to reference picture list X) to predict the motion vector MV_y for each sub-CU (with X being equal to 0 or 1 and Y being equal to 1−X).
2.2.2 spatio-temporal motion vector prediction (STMVP)
In this method, the motion vectors of the sub-CUs are recursively derived in raster scan order. Fig. 11 illustrates this concept. Consider an 8 × 8 CU, which contains 4 × 4 sub-CUs: A. b, C and D. The adjacent 4 x 4 blocks in the current frame are labeled a, b, c, and d.
The motion derivation for sub-CU A starts by identifying its two spatial neighbors. The first neighbor is the N×N block above sub-CU A (block c). If this block c is not available or is intra coded, the other N×N blocks above sub-CU A are checked (from left to right, starting at block c). The second neighbor is the block to the left of sub-CU A (block b). If block b is not available or is intra coded, the other blocks to the left of sub-CU A are checked (from top to bottom, starting at block b). The motion information obtained from the neighboring blocks for each list is scaled to the first reference frame for the given list. Next, the temporal motion vector predictor (TMVP) of sub-block A is derived by following the same procedure of TMVP derivation as specified in HEVC. The motion information of the co-located block at position D is fetched and scaled accordingly. Finally, after retrieving and scaling the motion information, all available motion vectors (up to 3) are averaged separately for each reference list. The averaged motion vector is assigned as the motion vector of the current sub-CU.
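The final averaging step can be sketched as below; integer division stands in for whatever rounding the reference software actually applies.

def average_stmvp(above_mv, left_mv, tmvp_mv):
    # Average the up-to-three available MVs (already scaled to the first
    # reference frame of the list) for one reference list.
    avail = [mv for mv in (above_mv, left_mv, tmvp_mv) if mv is not None]
    if not avail:
        return None
    n = len(avail)
    return (sum(mv[0] for mv in avail) // n, sum(mv[1] for mv in avail) // n)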
2.2.3 sub-CU motion prediction mode signaling
The sub-CU modes are enabled as additional Merge candidates and no additional syntax elements are needed to signal these modes. Two additional Merge candidates are added to the Merge candidate list of each CU to represent ATMVP mode and STMVP mode. Up to seven Merge candidates may be used if the sequence parameter set indicates ATMVP and STMVP are enabled. The coding logic of the additional Merge candidates is the same as the coding logic of the Merge candidates in the HM, which means that for each CU in a P-slice or a B-slice two RD checks are also needed for two additional Merge candidates.
In JEM, all bins of the Merge index are context coded by CABAC, whereas in HEVC only the first bin is context coded and the remaining bins are bypass coded.
2.3 inter-frame prediction method in VVC
There are several new coding tools for inter prediction improvement, such as adaptive motion vector difference resolution (AMVR) for signaling the MVD, the affine prediction mode, the Triangle Prediction Mode (TPM), ATMVP, Generalized Bi-Prediction (GBI), and Bi-directional Optical flow (BIO).
2.3.1 adaptive motion vector difference resolution
In HEVC, when use_integer_mv_flag is equal to 0 in the slice header, a motion vector difference (MVD) (between the motion vector and the predicted motion vector of a PU) is signaled in units of quarter luma samples. In VVC, a locally adaptive motion vector resolution (LAMVR) is introduced. In VVC, the MVD can be coded in units of quarter luma samples, integer luma samples or four luma samples (i.e., 1/4-pel, 1-pel, 4-pel). The MVD resolution is controlled at the coding unit (CU) level, and MVD resolution flags are conditionally signaled for each CU that has at least one non-zero MVD component.
For a CU with at least one non-zero MVD component, a first flag is signaled to indicate whether quarter luma sample MV precision is used in the CU. When the first flag (equal to 1) indicates that quarter-luma sample MV precision is not used, another flag is signaled to indicate whether integer-luma sample MV precision or four-luma sample MV precision is used.
The quarter-luma sample MV resolution is used for a CU when the first MVD resolution flag of the CU is zero or not coded for the CU (meaning all MVDs in the CU are zero). When a CU uses integer luma sample MV precision or four luma sample MV precision, the MVP in the CU's AMVP candidate list is rounded to the corresponding precision.
In the encoder, a CU-level RD check is used to determine which MVD resolution is to be used for a CU. That is, the CU-level RD check is performed three times, once for each MVD resolution. To accelerate the encoder speed, the following encoding schemes are applied in JEM:
during the RD check of a CU with normal quarter-luminance sample MVD resolution, the motion information of the current CU (integer luminance sample accuracy) is stored. The stored motion information (after rounding) is used as a starting point for further small-range motion vector refinement during RD-checking for the same CU with integer luma samples and 4 luma sample MVD resolution, so that the time-consuming motion estimation process is not repeated three times.
Conditionally invoke the RD check of CUs with 4 luma samples MVD resolution. For a CU, when the RD cost for the integer luma sample MVD resolution is much greater than the RD cost for the quarter-luma sample MVD resolution, the RD check for the 4 luma sample MVD resolution of the CU is skipped.
The encoding process is shown in Fig. 12. First, the 1/4-pel MV is tested and the RD cost is calculated and denoted as RDCost0; then the integer MV is tested and the RD cost is denoted as RDCost1. If RDCost1 < th × RDCost0 (where th is a positive value), the 4-pel MV is tested; otherwise, the 4-pel MV is skipped. Basically, the motion information, RD cost, etc. of the 1/4-pel MV are already known when checking the integer or 4-pel MV, and they can be reused to speed up the encoding process of the integer or 4-pel MV.
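The skip condition can be sketched as follows; rd_check is a hypothetical callable returning the RD cost of a full motion-estimation-and-check pass at the given resolution.

def choose_mvd_resolution(rd_check, th):
    rd_cost0 = rd_check('quarter_pel')    # also stores MVs reused by later checks
    rd_cost1 = rd_check('integer_pel')
    costs = {'quarter_pel': rd_cost0, 'integer_pel': rd_cost1}
    if rd_cost1 < th * rd_cost0:          # otherwise the 4-pel check is skipped
        costs['four_pel'] = rd_check('four_pel')
    return min(costs, key=costs.get)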
2.3.2 triangle prediction mode
The concept of the Triangle Prediction Mode (TPM) is to introduce a new triangular partition for motion compensated prediction. As shown in Figs. 13A-13B, it splits a CU into two triangular prediction units, in either the diagonal or the inverse diagonal direction. Each triangular prediction unit in the CU is inter-predicted using its own uni-prediction motion vector and reference frame index, which are derived from a single uni-prediction candidate list. An adaptive weighting process is performed on the diagonal edge after the triangular prediction units are predicted. Then, the transform and quantization process is applied to the whole CU. It is noted that this mode is only applied to the Merge mode (note: the skip mode is treated as a special Merge mode).
Figs. 13A-13B are diagrams of splitting a CU into two triangular prediction units (two partition modes); Fig. 13A: the 135-degree partition type (splitting from the top-left corner to the bottom-right corner), and Fig. 13B: the 45-degree partition type.
2.3.2.1 unidirectional prediction candidate list for TPM
The unidirectional prediction candidate list, referred to as the TPM motion candidate list, includes five unidirectional prediction motion vector candidates. As shown in fig. 14, it is derived from seven neighboring blocks including five spatially neighboring blocks (1 to 5) and two temporally co-located blocks (6 to 7). The motion vectors of seven neighboring blocks are collected and put into a unidirectional prediction candidate list in the order of a unidirectional prediction motion vector, an L0 motion vector of a bidirectional prediction motion vector, an L1 motion vector of a bidirectional prediction motion vector, and an average motion vector of an L0 motion vector and an L1 motion vector of a bidirectional prediction motion vector. If the number of candidates is less than five, a zero motion vector is added to the list. The motion candidates added to the TPM list are referred to as TPM candidates, and the motion information derived from the spatial/temporal blocks is referred to as regular motion candidates (regular motion candidates).
More specifically, it relates to the following steps:
1) Obtain regular motion candidates from A1, B1, B0, A0, B2, Col and Col2 (corresponding to blocks 1-7 in Fig. 14) without any pruning operations.
2) Set the variable numCurrMergeCand = 0.
3) For each regular motion candidate derived from A1, B1, B0, A0, B2, Col and Col2, with numCurrMergeCand less than 5: if the regular motion candidate is uni-prediction (either from list 0 or list 1), it is directly added to the Merge list as a TPM candidate, with numCurrMergeCand increased by 1. Such a TPM candidate is named an "originally uni-predicted candidate".
Full pruning is applied.
4) For each regular motion candidate derived from A1, B1, B0, A0, B2, Col and Col2, with numCurrMergeCand less than 5: if the regular motion candidate is bi-prediction, the motion information from list 0 is added to the TPM Merge list as a new TPM candidate (that is, modified to be uni-prediction from list 0), and numCurrMergeCand is increased by 1. Such a TPM candidate is named a "Truncated List0-predicted candidate".
Full pruning is applied.
5) For each regular motion candidate derived from A1, B1, B0, A0, B2, Col and Col2, with numCurrMergeCand less than 5: if the regular motion candidate is bi-prediction, the motion information from list 1 is added to the TPM Merge list (that is, modified to be uni-prediction from list 1), and numCurrMergeCand is increased by 1. Such a TPM candidate is named a "Truncated List1-predicted candidate".
Full pruning is applied.
6) For each regular motion candidate derived from A1, B1, B0, A0, B2, Col and Col2, with numCurrMergeCand less than 5, if the regular motion candidate is bi-prediction:
If the slice QP of the list 0 reference picture is smaller than the slice QP of the list 1 reference picture, the motion information of list 1 is first scaled to the list 0 reference picture, and the average of the two MVs (one from the original list 0 and the other the scaled MV from list 1) is added to the TPM Merge list; such a candidate is called an "averaged uni-prediction from List 0 motion candidate", and numCurrMergeCand is increased by 1.
Otherwise, the motion information of list 0 is first scaled to the list 1 reference picture, and the average of the two MVs (one from the original list 1 and the other the scaled MV from list 0) is added to the TPM Merge list; such a TPM candidate is called an "averaged uni-prediction from List 1 motion candidate", and numCurrMergeCand is increased by 1.
Full pruning is applied.
7) If numCurrMergeCand is less than 5, zero motion vector candidates are added.
When inserting a candidate into the list, this process is called full pruning if it has to be compared to all the candidates previously added to see if it is the same as one of them.
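Putting steps 1-7 together, a simplified sketch follows (the QP-based scaling of step 6 is omitted, and candidates are encoded as plain dicts with optional 'L0'/'L1' motion vectors purely for illustration).

def build_tpm_list(regular_cands):
    tpm = []

    def try_add(cand):
        # Full pruning: compare against all candidates added so far.
        if len(tpm) < 5 and cand not in tpm:
            tpm.append(cand)

    def is_uni(c):
        return (c['L0'] is None) != (c['L1'] is None)

    def is_bi(c):
        return c['L0'] is not None and c['L1'] is not None

    for c in regular_cands:        # step 3: originally uni-predicted candidates
        if is_uni(c):
            try_add(c['L0'] if c['L0'] is not None else c['L1'])
    for c in regular_cands:        # step 4: truncated List0-predicted candidates
        if is_bi(c):
            try_add(c['L0'])
    for c in regular_cands:        # step 5: truncated List1-predicted candidates
        if is_bi(c):
            try_add(c['L1'])
    for c in regular_cands:        # step 6: averaged uni-prediction candidates
        if is_bi(c):               # (the QP-based scaling to one list is omitted)
            try_add(((c['L0'][0] + c['L1'][0]) // 2,
                     (c['L0'][1] + c['L1'][1]) // 2))
    while len(tpm) < 5:            # step 7: zero motion vector candidates
        tpm.append((0, 0))
    return tpm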
2.3.2.2 adaptive weighting process
After predicting each triangle prediction unit, an adaptive weighting process is applied to the diagonal edges between the two triangle prediction units to derive the final prediction of the entire CU. Two weight factor sets are defined as follows:
first set of weighting factors: {7/8,6/8,4/8,2/8,1/8} and {7/8,4/8,1/8} for luma and chroma samples, respectively;
second set of weighting factors: {7/8,6/8,5/8,4/8,3/8,2/8,1/8} and {6/8,4/8,2/8} are used for luma and chroma samples, respectively.
The set of weighting factors is selected based on a comparison of the motion vectors of the two triangular prediction units. The second weight factor set is used when the reference pictures of the two triangle prediction units are different from each other or their motion vector differences are greater than 16 pixels. Otherwise, the first set of weighting factors is used. An example is shown in fig. 15.
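The selection rule translates into a small routine like the following; reading the 16-pixel threshold per MV component is an assumption of this sketch.

def select_weight_set(ref_poc_a, mv_a, ref_poc_b, mv_b):
    # Second set when the reference pictures differ or the MVs differ by
    # more than 16 pixels (interpreted here per component).
    use_second = (ref_poc_a != ref_poc_b
                  or abs(mv_a[0] - mv_b[0]) > 16
                  or abs(mv_a[1] - mv_b[1]) > 16)
    if use_second:
        return [7/8, 6/8, 5/8, 4/8, 3/8, 2/8, 1/8], [6/8, 4/8, 2/8]
    return [7/8, 6/8, 4/8, 2/8, 1/8], [7/8, 4/8, 1/8]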
2.3.2.3 Triangle Prediction Mode (TPM) signaling
A one bit flag indicating whether to use the TPM may be signaled first. Thereafter, the indication of the two partitioning modes (as depicted in fig. 13) is further signaled, as well as the Merge index selected for each of the two partitions.
2.3.2.3.1 Signaling of the TPM flag
Let W and H denote the width and height of a luma block, respectively. If W × H < 64, the triangle prediction mode is disabled.
When a block is encoded with affine mode, the triangle prediction mode is also disabled.
When a block is encoded with a Merge mode, a one bit flag may be signaled to indicate whether the triangle prediction mode is enabled or disabled for the block.
The flag is encoded with 3 contexts based on the following equation.
Ctx index = ((left block L available && L is coded with TPM?) 1 : 0) + ((above block A available && A is coded with TPM?) 1 : 0);
2.3.2.3.2 Signaling of an indication of the two partition modes (as depicted in Fig. 13) and of the Merge index selected for each of the two partitions
It is noted that the partition mode and the Merge indices of the two partitions are jointly coded. As an example, it is restricted that the two partitions cannot use the same reference index; therefore, there are 2 (partition modes) × N (maximum number of Merge candidates) × (N − 1) possibilities, where N is set to 5. One indication is coded, and the mapping between the partition mode, the two Merge indices and the coded indication is derived from the array defined as follows:
const uint8_t g_TriangleCombination[TRIANGLE_MAX_NUM_CANDS][3] = {
{0,1,0}, {1,0,1}, {1,0,2}, {0,0,1}, {0,2,0}, {1,0,3}, {1,0,4}, {1,1,0},
{0,3,0}, {0,4,0}, {0,0,2}, {0,1,2}, {1,1,2}, {0,0,4}, {0,0,3}, {0,1,3},
{0,1,4}, {1,1,4}, {1,1,3}, {1,2,1}, {1,2,0}, {0,2,1}, {0,4,3}, {1,3,0},
{1,3,2}, {1,3,4}, {1,4,0}, {1,3,1}, {1,2,3}, {1,4,1}, {0,4,1}, {0,2,3},
{1,4,2}, {0,3,2}, {1,4,3}, {0,3,1}, {0,2,4}, {1,2,4}, {0,4,2}, {0,3,4}};
Partition mode (45-degree or 135-degree) = g_TriangleCombination[signaled indication][0];
Merge index of candidate A = g_TriangleCombination[signaled indication][1];
Merge index of candidate B = g_TriangleCombination[signaled indication][2];
once the two motion candidates a and B are derived, the motion information of the two partitions (PU1 and PU2) may be set according to a or B. Whether the PU1 uses the motion information of the Merge candidate a or B depends on the prediction directions of the two motion candidates. Table 1 shows the relationship between two derived motion candidates a and B and two partitions.
Table 1: deriving motion information for a partition from the derived two Merge candidates (A, B)
Predicted direction of A Predicted direction of B Motion information of PU1 Motion information of PU2
L0 L0 A(L0) B(L0)
L1 L1 B(L1) A(L1)
L0 L1 A(L0) B(L1)
L1 L0 B(L0) A(L1)
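Table 1 translates directly into a small selection routine; the data representation below is illustrative.

def assign_pu_motion(dir_a, motion_a, dir_b, motion_b):
    # dir_* is 'L0' or 'L1'; motion_* maps a list name to motion information.
    if dir_a == 'L0' and dir_b == 'L0':
        return motion_a['L0'], motion_b['L0']   # PU1 = A(L0), PU2 = B(L0)
    if dir_a == 'L1' and dir_b == 'L1':
        return motion_b['L1'], motion_a['L1']   # PU1 = B(L1), PU2 = A(L1)
    if dir_a == 'L0' and dir_b == 'L1':
        return motion_a['L0'], motion_b['L1']   # PU1 = A(L0), PU2 = B(L1)
    return motion_b['L0'], motion_a['L1']       # PU1 = B(L0), PU2 = A(L1)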
2.3.2.3.3 Entropy coding of the indication (denoted by merge_triangle_idx)
merge_triangle_idx is within the range [0, 39], inclusive. An Exponential Golomb (EG) code of order K is used for the binarization of merge_triangle_idx, where K is set to 1.
K-th order EG
To encode larger numbers in fewer bits (at the expense of using more bits to encode smaller numbers), this can be generalized using a non-negative integer parameter k. To encode a non-negative integer x with an order-k exponential Golomb code:
1. Encode $\lfloor x/2^{k} \rfloor$ using the above-mentioned order-0 exponential Golomb code, then
2. Encode $x \bmod 2^{k}$ in binary with k bits.
Table 2: exponential golomb-k encoding examples
Figure BDA0003060792930000151
Figure BDA0003060792930000161
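For concreteness, a small encoder reproducing the codewords in Table 2 follows (merge_triangle_idx uses k = 1).

def exp_golomb_k(x, k):
    # Order-0 EG code of floor(x / 2^k): (bit_length(q + 1) - 1) zeros,
    # followed by q + 1 in binary.
    q = x >> k
    m = q + 1
    prefix = '0' * (m.bit_length() - 1) + bin(m)[2:]
    # Then the k least-significant bits of x in binary.
    suffix = format(x & ((1 << k) - 1), '0%db' % k) if k > 0 else ''
    return prefix + suffix

# e.g. exp_golomb_k(3, 1) == '0101' and exp_golomb_k(4, 2) == '01000',
# matching Table 2.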
2.3.3 affine motion compensated prediction
In HEVC, only a translational motion model is applied for motion compensation prediction (MCP), while in the real world there are many kinds of motion, e.g., zoom in/out, rotation, perspective motions and other irregular motions. In VVC, a simplified affine transform motion compensation prediction is applied with a 4-parameter affine model and a 6-parameter affine model. As shown in Figs. 16A-16B, the affine motion field of a block is described by two control point motion vectors (CPMVs) for the 4-parameter affine model (Fig. 16A) and by three CPMVs for the 6-parameter affine model (Fig. 16B).
The motion vector field (MVF) of a block is described by the following equations, with the 4-parameter affine model in equation (1), in which the 4 parameters are defined as the variables a, b, e and f, and the 6-parameter affine model in equation (2), in which the 6 parameters are defined as the variables a, b, c, d, e and f:

$$\begin{cases} mv^{h}(x,y)=ax-by+e=\dfrac{(mv_{1}^{h}-mv_{0}^{h})}{w}x-\dfrac{(mv_{1}^{v}-mv_{0}^{v})}{w}y+mv_{0}^{h}\\ mv^{v}(x,y)=bx+ay+f=\dfrac{(mv_{1}^{v}-mv_{0}^{v})}{w}x+\dfrac{(mv_{1}^{h}-mv_{0}^{h})}{w}y+mv_{0}^{v}\end{cases}\tag{1}$$

$$\begin{cases} mv^{h}(x,y)=ax+cy+e=\dfrac{(mv_{1}^{h}-mv_{0}^{h})}{w}x+\dfrac{(mv_{2}^{h}-mv_{0}^{h})}{h}y+mv_{0}^{h}\\ mv^{v}(x,y)=bx+dy+f=\dfrac{(mv_{1}^{v}-mv_{0}^{v})}{w}x+\dfrac{(mv_{2}^{v}-mv_{0}^{v})}{h}y+mv_{0}^{v}\end{cases}\tag{2}$$
where $(mv_{0}^{h}, mv_{0}^{v})$ is the motion vector of the top-left corner control point, $(mv_{1}^{h}, mv_{1}^{v})$ is the motion vector of the top-right corner control point, and $(mv_{2}^{h}, mv_{2}^{v})$ is the motion vector of the bottom-left corner control point; all three motion vectors are called control point motion vectors (CPMVs). (x, y) represents the coordinate of a representative point relative to the top-left sample within the current block, and $(mv^{h}(x,y), mv^{v}(x,y))$ is the motion vector derived for the sample located at (x, y). The CP motion vectors may be signaled (as in the affine AMVP mode) or derived on-the-fly (as in the affine Merge mode). w and h are the width and height of the current block. In practice, the division is implemented by a right-shift with a rounding operation. In the VTM, the representative point is defined to be the center position of a sub-block; e.g., when the coordinate of the top-left corner of a sub-block relative to the top-left sample within the current block is (xs, ys), the coordinate of the representative point is defined to be (xs + 2, ys + 2). For each sub-block (i.e., 4×4 in the VTM), the representative point is utilized to derive the motion vector for the whole sub-block.
To further simplify the motion compensated prediction, sub-block based affine transform prediction is applied. To derive the motion vector of each M × N (in the current VVC, M and N are both set to 4) subblock, the motion vector of the center sample point of each subblock is calculated according to equations (1) and (2) and rounded to a fractional precision of 1/16, as shown in fig. 17. Then, a motion compensated interpolation filter of 1/16 pixels is applied to generate a prediction for each sub-block with the derived motion vector. The affine mode introduces an interpolation filter of 1/16 pixels.
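A floating-point sketch of this per-sub-block derivation follows; the VTM implements the divisions with right-shifts and rounding, as noted above, so the arithmetic here is a simplification.

def affine_subblock_mv(cpmv, w, h, xs, ys):
    # cpmv: (mv0, mv1) for the 4-parameter model or (mv0, mv1, mv2) for the
    # 6-parameter model, in 1/16-pel units; (xs, ys) is the sub-block's
    # top-left corner, and (xs + 2, ys + 2) its representative center point.
    (mv0x, mv0y), (mv1x, mv1y) = cpmv[0], cpmv[1]
    x, y = xs + 2, ys + 2
    a = (mv1x - mv0x) / w
    b = (mv1y - mv0y) / w
    if len(cpmv) == 2:            # equation (1): c = -b, d = a
        c, d = -b, a
    else:                         # equation (2): c and d come from mv2
        mv2x, mv2y = cpmv[2]
        c = (mv2x - mv0x) / h
        d = (mv2y - mv0y) / h
    return (round(a * x + c * y + mv0x),   # rounded to 1/16-pel accuracy
            round(b * x + d * y + mv0y))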
After MCP, the high precision motion vector of each sub-block is rounded and saved to the same precision as the normal motion vector.
2.3.3.1 Signaling of affine prediction
Similar to the translational motion model, there are also two modes for signaling the side information for affine prediction: the AFFINE_INTER and AFFINE_MERGE modes.
2.3.3.2 AF_INTER mode
For CUs with both width and height larger than 8, the AF_INTER mode can be applied. An affine flag at the CU level is signaled in the bitstream to indicate whether the AF_INTER mode is used.
In this mode, for each reference picture list (list 0 or list 1), an affine AMVP candidate list is constructed with three types of affine motion predictors in the following order, where each candidate includes the estimated CPMVs of the current block. The differences between the best CPMVs found at the encoder side (such as mv0, mv1, mv2 in Fig. 20) and the estimated CPMVs are signaled. In addition, the index of the affine AMVP candidate from which the estimated CPMVs are derived is further signaled.
1) Inherited affine motion predictor
The checking order is similar to that of spatial MVPs in HEVC AMVP list construction. First, a left inherited affine motion predictor is derived from the first block in {A1, A0} that is affine coded and has the same reference picture as the current block. Second, an above inherited affine motion predictor is derived from the first block in {B1, B0, B2} that is affine coded and has the same reference picture as the current block. The five blocks A1, A0, B1, B0, B2 are depicted in Fig. 19.
Once a neighboring block is found to be coded with the affine mode, the CPMVs of the coding unit covering the neighboring block are used to derive the predictors of the CPMVs of the current block. For example, if A1 is coded with a non-affine mode and A0 is coded with the 4-parameter affine mode, the left inherited affine MV predictor will be derived from A0. In this case, the CPMVs of the CU covering A0, denoted by $\overline{mv}_{0}^{A}$ for the top-left CPMV and $\overline{mv}_{1}^{A}$ for the top-right CPMV (as in Fig. 21B), are utilized to derive the estimated CPMVs of the current block, denoted by $\overline{mv}_{0}^{C}$, $\overline{mv}_{1}^{C}$, $\overline{mv}_{2}^{C}$ for the top-left (with coordinate (x0, y0)), top-right (with coordinate (x1, y1)) and bottom-right (with coordinate (x2, y2)) positions of the current block.
2) Constructed affine motion predictor
As shown in Fig. 20, a constructed affine motion predictor consists of control point motion vectors (CPMVs) that are derived from neighboring inter-coded blocks with the same reference picture. The number of CPMVs is 2 if the current affine motion model is 4-parameter affine, and 3 if the current affine motion model is 6-parameter affine. The top-left CPMV $\overline{mv}_{0}$ is derived from the MV at the first block in the set {A, B, C} that is inter coded and has the same reference picture as the current block. The top-right CPMV $\overline{mv}_{1}$ is derived from the MV at the first block in the set {D, E} that is inter coded and has the same reference picture as the current block. The bottom-left CPMV $\overline{mv}_{2}$ is derived from the MV at the first block in the set {F, G} that is inter coded and has the same reference picture as the current block.
If the current affine motion model is 4-parameter affine, a constructed affine motion predictor is inserted into the candidate list only if both $\overline{mv}_{0}$ and $\overline{mv}_{1}$ are found, that is, $\overline{mv}_{0}$ and $\overline{mv}_{1}$ are used as the estimated CPMVs for the top-left (with coordinate (x0, y0)) and top-right (with coordinate (x1, y1)) positions of the current block.
If the current affine motion model is 6-parameter affine, a constructed affine motion predictor is inserted into the candidate list only if $\overline{mv}_{0}$, $\overline{mv}_{1}$ and $\overline{mv}_{2}$ are all found, that is, $\overline{mv}_{0}$, $\overline{mv}_{1}$ and $\overline{mv}_{2}$ are used as the estimated CPMVs for the top-left (with coordinate (x0, y0)), top-right (with coordinate (x1, y1)) and bottom-right (with coordinate (x2, y2)) positions of the current block.
No pruning process is applied when a constructed affine motion predictor is inserted into the candidate list.
When the constructed affine motion predictor is inserted into the candidate list, no pruning process is applied.
3) Normal AMVP motion predictors
The following applies until the number of affine motion predictors reaches the maximum value:
1) Derive an affine motion predictor by setting all CPMVs equal to $\overline{mv}_{2}$, if available.
2) Derive an affine motion predictor by setting all CPMVs equal to $\overline{mv}_{1}$, if available.
3) Derive an affine motion predictor by setting all CPMVs equal to $\overline{mv}_{0}$, if available.
4) Derive an affine motion predictor by setting all CPMVs equal to the HEVC TMVP, if available.
5) Derive an affine motion predictor by setting all CPMVs to the zero MV.
Note that $\overline{mv}_{0}$, $\overline{mv}_{1}$ and $\overline{mv}_{2}$ have already been derived in the constructed affine motion predictor.
Fig. 18A-18B show a 4-parameter affine model and a 6-parameter affine model, respectively.
Fig. 19 shows an example of the MVP for AF_INTER of inherited affine candidates.
Fig. 20 shows an example of the MVP for AF_INTER of constructed affine candidates.
In the AF_INTER mode, when the 4/6-parameter affine mode is used, 2/3 control points are required, and therefore 2/3 MVDs need to be coded for these control points, as shown in Figs. 18A-18B. In an example, it is proposed to derive the MVs in such a way that mvd1 and mvd2 are predicted from mvd0:

$$mv_{0}=\overline{mv}_{0}+mvd_{0}$$
$$mv_{1}=\overline{mv}_{1}+mvd_{1}+mvd_{0}$$
$$mv_{2}=\overline{mv}_{2}+mvd_{2}+mvd_{0}$$

where $\overline{mv}_{i}$, $mvd_{i}$ and $mv_{i}$ are the predicted motion vector, the motion vector difference and the motion vector of the top-left pixel (i = 0), the top-right pixel (i = 1) or the bottom-left pixel (i = 2), respectively, as shown in Fig. 18B. Note that the addition of two motion vectors (e.g., mvA(xA, yA) and mvB(xB, yB)) is equal to the summation of the two components separately; that is, newMV = mvA + mvB, with the two components of newMV set to (xA + xB) and (yA + yB), respectively.
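The reconstruction implied by these equations, with the component-wise vector addition just noted, can be sketched as follows.

def reconstruct_affine_cpmvs(pred, mvd):
    # pred[i] and mvd[i] are (x, y) pairs for control points i = 0, 1[, 2];
    # mvd[1] and mvd[2] were predicted from mvd[0], so mvd[0] is added back.
    def add(a, b):
        return (a[0] + b[0], a[1] + b[1])    # component-wise addition
    mvs = [add(pred[0], mvd[0])]             # mv0 = pred0 + mvd0
    for i in range(1, len(pred)):            # mv_i = pred_i + mvd_i + mvd0
        mvs.append(add(add(pred[i], mvd[i]), mvd[0]))
    return mvs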
2.3.3.3 AF_MERGE mode
When a CU is applied in AF_MERGE mode, it gets the first block coded with the affine mode from the valid neighboring reconstructed blocks. The selection order for the candidate block is from left, above, above-right, left-bottom to above-left, as shown in Fig. 21A (denoted in order by A, B, C, D, E). For example, if the neighboring left-bottom block is coded in the affine mode, as denoted by A0 in Fig. 21B, the Control Point (CP) motion vectors $mv_{0}^{N}$, $mv_{1}^{N}$ and $mv_{2}^{N}$ of the top-left corner, top-right corner and bottom-left corner of the neighboring CU/PU that contains block A are fetched. The motion vectors $mv_{0}^{C}$, $mv_{1}^{C}$ and $mv_{2}^{C}$ of the top-left corner/top-right corner/bottom-left corner of the current CU/PU are calculated based on $mv_{0}^{N}$, $mv_{1}^{N}$ and $mv_{2}^{N}$ ($mv_{2}^{C}$ is used only for the 6-parameter affine model). It should be noted that in VTM-2.0, if the current block is affine coded, the sub-block located at the top-left corner (e.g., the 4×4 block in VTM) stores mv0, and the sub-block located at the top-right corner stores mv1. If the current block is coded with the 6-parameter affine model, the sub-block located at the bottom-left corner stores mv2; otherwise (with the 4-parameter affine model), LB stores mv2'. The other sub-blocks store the MVs used for MC.
After the CPMVs $mv_{0}^{C}$, $mv_{1}^{C}$ and $mv_{2}^{C}$ of the current CU are derived, the MVF of the current CU is generated according to the simplified affine motion model in equations (1) and (2). In order to identify whether the current CU is coded with the AF_MERGE mode, an affine flag is signaled in the bitstream when there is at least one neighboring block coded in the affine mode.
Figs. 21A-21B show the candidates for AF_MERGE (five neighboring blocks) and the CPMV predictor derivation, respectively.
In an example, the affine Merge candidate list is constructed by the following steps:
1) inserting inherited affine candidates
Inherited affine candidates refer to candidates derived from the affine motion model of a valid neighboring affine-coded block. At most two inherited affine candidates are derived from the affine motion models of the neighboring blocks and inserted into the candidate list. For the left predictor, the scan order is {A0, A1}; for the above predictor, the scan order is {B0, B1, B2}.
2) Insert constructed affine candidates
If the number of candidates in the affine Merge candidate list is less than MaxNumAffineCand (set to 5), constructed affine candidates are inserted into the candidate list. A constructed affine candidate means that the candidate is constructed by combining the neighboring motion information of each control point.
The motion information for the control points is first derived from the specified spatial neighbors and temporal neighbor shown in Fig. 22. CPk (k = 1, 2, 3, 4) represents the k-th control point. A0, A1, A2, B0, B1, B2 and B3 are spatial positions for predicting CPk (k = 1, 2, 3); T is the temporal position for predicting CP4.
The coordinates of CP1, CP2, CP3 and CP4 are (0, 0), (W, 0), (0, H) and (W, H), respectively, where W and H are the width and height of the current block.
Motion information for each control point is obtained according to the following priority order:
for CP1, the checking priority is B2->B3->A2. If B is present2Can be used, then B is used2. Otherwise, if B2Not available, then B is used3. If B is present2And B3Are all unusable, use A2. If all three candidates are not available, no motion information for CP1 can be obtained.
For CP2, the checking priority is B1->B0
For CP3, the inspection priority is A1->A0
For CP4, T is used.
Next, affine Merge candidates are constructed using combinations of control points.
Motion information of three control points is needed to construct a 6-parameter affine candidate. The three control points can be selected from one of the following four combinations: {CP1, CP2, CP4}, {CP1, CP2, CP3}, {CP2, CP3, CP4}, {CP1, CP3, CP4}. The combinations {CP1, CP2, CP4}, {CP2, CP3, CP4}, {CP1, CP3, CP4} will be converted to a 6-parameter motion model represented by the top-left, top-right and bottom-left control points.
Motion information of two control points is required to construct a 4-parameter affine candidate. The two control points may be selected from one of the following two combinations: {CP1, CP2}, {CP1, CP3}. The two combinations will be converted into a 4-parameter motion model represented by the top-left and top-right control points.
The combination of constructed affine candidates is inserted into the candidate list in the following order:
{CP1, CP2, CP3}, {CP1, CP2, CP4}, {CP1, CP3, CP4}, {CP2, CP3, CP4}, {CP1, CP2}, {CP1, CP3}.
An available combination of the motion information of the CPs is added to the affine Merge list only when the CPs have the same reference index.
3) Insert zero motion vectors
If the number of candidates in the affine Merge candidate list is less than 5, a zero motion vector with a zero reference index is inserted into the candidate list until the list is full.
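A combined sketch of the three construction steps (inherited candidates, constructed combinations filtered by the same-reference-index rule, then zero-MV padding) follows; the candidate and control-point data layouts are illustrative assumptions.

```python
COMBOS = [("CP1", "CP2", "CP3"), ("CP1", "CP2", "CP4"),
          ("CP1", "CP3", "CP4"), ("CP2", "CP3", "CP4"),
          ("CP1", "CP2"), ("CP1", "CP3")]
MAX_NUM_AFFINE_CAND = 5

def build_affine_merge_list(inherited, cp_info):
    """inherited: candidates from affine-coded neighbors (at most two used);
    cp_info: dict CP name -> (mv, ref_idx), or None when unavailable."""
    cand_list = list(inherited[:2])              # step 1: inherited candidates
    for combo in COMBOS:                         # step 2: constructed candidates
        if len(cand_list) >= MAX_NUM_AFFINE_CAND:
            break
        infos = [cp_info.get(cp) for cp in combo]
        if any(i is None for i in infos):
            continue                             # a CP lacks motion information
        if len({ref for _, ref in infos}) == 1:  # same reference index only
            cand_list.append({"cpmvs": [mv for mv, _ in infos],
                              "ref": infos[0][1]})
    while len(cand_list) < MAX_NUM_AFFINE_CAND:  # step 3: zero-MV padding
        cand_list.append({"cpmvs": [(0, 0), (0, 0)], "ref": 0})
    return cand_list
```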
2.3.4 Current picture referencing
Intra block copy (IBC), also referred to as current picture referencing (CPR) or intra-picture block compensation, is used in HEVC Screen Content Coding (SCC). This tool is very efficient for coding screen-content video, since repeated patterns in text- and graphics-rich content occur frequently within the same picture. Using a previously reconstructed block with the same or a similar pattern as the predictor can effectively reduce the prediction error and thereby improve coding efficiency. Fig. 23 shows an example of intra block compensation.
Similar to the CPR design in HEVC SCC, in VVC the use of IBC mode is signaled at both the sequence and the picture level. When IBC mode is enabled in the Sequence Parameter Set (SPS), it may be enabled at the picture level. When IBC mode is enabled at the picture level, the current reconstructed picture is treated as a reference picture. Thus, no block-level syntax change is needed to signal the use of IBC mode on top of the existing VVC inter mode.
The main characteristics are as follows:
- It is treated as a normal inter mode. Therefore, Merge and skip modes are also available for IBC mode. The Merge candidate list construction is unified, containing Merge candidates from neighboring positions coded either in IBC mode or in HEVC inter mode. Depending on the selected Merge index, the current block in Merge or skip mode may merge into an IBC-coded neighbor, or otherwise into a normally inter-coded neighbor that uses a different picture as reference picture.
- The block vector prediction and coding schemes for IBC mode reuse the schemes used for motion vector prediction and coding in HEVC inter mode (AMVP and MVD coding).
- Motion vectors for IBC mode, also referred to as block vectors, are coded with integer-pel precision but stored in memory with 1/16-pel precision after decoding, since quarter-pel precision is required in the interpolation and deblocking stages. When used for motion vector prediction in IBC mode, the stored vector predictor is right-shifted by 4 (see the sketch after this list).
- Search range: restricted to the current CTU.
- CPR is not allowed when affine mode/triangle mode/GBI/weighted prediction is enabled.
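The block-vector precision handling noted above (integer-pel coding, 1/16-pel internal storage, a right shift by 4 when reused as a predictor) can be sketched as follows; the function names are illustrative assumptions.

```python
MV_FRAC_BITS = 4  # 1/16-pel internal MV storage

def store_block_vector(bv_int_pel):
    # A decoded integer-pel BV is converted to 1/16-pel units for storage.
    return (bv_int_pel[0] << MV_FRAC_BITS, bv_int_pel[1] << MV_FRAC_BITS)

def bv_as_predictor(stored_bv):
    # The stored predictor is shifted back to integer-pel units when it
    # predicts another IBC block vector.
    return (stored_bv[0] >> MV_FRAC_BITS, stored_bv[1] >> MV_FRAC_BITS)

assert bv_as_predictor(store_block_vector((-7, 3))) == (-7, 3)
```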
2.3.5 Merge list design in VVC
Three different Merge list construction processes are supported in the VVC:
1) Sub-block Merge candidate list: it includes ATMVP and affine Merge candidates. Affine mode and ATMVP mode share one Merge list construction process, and the ATMVP and affine Merge candidates may be added in order. The sub-block Merge list size is signaled in the slice header, with a maximum value of 5.
2) Uni-prediction TPM Merge list: for the triangle prediction mode, the two partitions share one Merge list construction process, even though each partition can select its own Merge candidate index. When building this Merge list, the spatially neighboring blocks and two temporal blocks of the block are checked. The motion information derived from the spatial neighbors and temporal blocks is referred to here as regular motion candidates, which are further used to derive multiple TPM candidates. Note that the transform is performed at the whole-block level, even though the two partitions may use different motion vectors to generate their own prediction blocks.
The unidirectional prediction TPM Merge list size is fixed to 5.
3) Regular Merge list: the remaining coding blocks share one Merge list construction process. Here, the spatial/temporal/HMVP candidates, pairwise combined bi-prediction Merge candidates and zero motion candidates may be inserted in order. The regular Merge list size is signaled in the slice header, with a maximum value of 6.
2.3.5.1 Sub-block Merge candidate list
It is proposed to put all sub-block related motion candidates into a separate Merge list, in addition to the regular Merge list for non-sub-block Merge candidates.
The sub-block related motion candidates are put into a separate Merge list named the "sub-block Merge candidate list".
In one example, the sub-block Merge candidate list includes affine Merge candidates, and ATMVP candidates and/or sub-block-based STMVP candidates.
In an example, the ATMVP Merge candidate in the normal Merge list is moved to the first position of the affine Merge list, so that all Merge candidates in the new list (i.e., the sub-block based Merge candidate list) are based on sub-block coding tools.
2.3.5.2 Regular Merge list
In contrast to the previous Merge list design, a history-based motion vector prediction (HMVP) method is employed in VVC.
In HMVP, previously coded motion information is stored. The motion information of a previously coded block is defined as an HMVP candidate. Multiple HMVP candidates are stored in a table named the HMVP table, which is maintained on the fly during the encoding/decoding process. The HMVP table is emptied when encoding/decoding of a new slice begins. Whenever there is an inter-coded block, the associated motion information is added to the last entry of the table as a new HMVP candidate. The overall coding flow is shown in Fig. 24.
HMVP candidates may be used in both the AMVP and the Merge candidate list construction processes. Fig. 25 depicts the modified Merge candidate list construction process (highlighted in grey). When the Merge candidate list is not full after TMVP candidate insertion, the HMVP candidates stored in the HMVP table may be used to fill the Merge candidate list. Considering that a block usually has higher motion-information correlation with its nearest neighboring blocks, the HMVP candidates in the table are inserted in descending order of index: the last entry in the table is added to the list first, and the first entry is added last. Redundancy removal is likewise applied to the HMVP candidates. Once the total number of available Merge candidates reaches the signaled maximum number of allowed Merge candidates, the Merge candidate list construction process is terminated.
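The table maintenance and list-filling behavior described above can be sketched as follows; the table size and the exact pruning strategy are illustrative assumptions (the text specifies only the FIFO-style update, the newest-first insertion order and redundancy removal).

```python
from collections import deque

HMVP_TABLE_SIZE = 6                          # assumed table capacity
hmvp_table = deque(maxlen=HMVP_TABLE_SIZE)   # emptied at the start of a slice

def hmvp_update(motion_info):
    """Add the motion info of an inter-coded block as the newest entry."""
    if motion_info in hmvp_table:            # redundancy removal
        hmvp_table.remove(motion_info)
    hmvp_table.append(motion_info)           # oldest entry evicted when full

def fill_merge_list(merge_list, max_num_merge):
    """Fill the Merge list with HMVP candidates, newest table entry first."""
    for cand in reversed(hmvp_table):        # descending order of index
        if len(merge_list) >= max_num_merge:
            break                            # list construction terminates
        if cand not in merge_list:           # prune duplicated candidates
            merge_list.append(cand)
    return merge_list
```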
3. Examples of problems solved by the disclosed embodiments
In the current VVC design, the affine prediction mode can achieve significant coding gain for sequences with affine motion. However, it may have the following problems:
1) for the bi-predictive affine mode, the correlation of affine motion information between two reference picture lists is not considered.
2) In the affine Merge candidate derivation process, the affine model type (4-parameter or 6-parameter) is inherited directly from neighboring blocks, which requires additional line-buffer size to store the affine model type.
4. Description of various technologies
The following detailed description is to be considered as an example to explain the general concepts. These inventions should not be construed narrowly. Furthermore, these inventions may be combined in any manner.
1. It is proposed that candidates added to one reference picture list may be used to predict the CPMVs of another reference picture list.
a. In one example, the CPMV of one reference picture may be used to predict the CPMV of another reference picture.
b. In one example, the encoded MVD of one reference picture may be used (scaled if needed) to predict the MVD of another reference picture.
2. A symmetric affine coding mode is proposed, in which the motion information of one reference picture list (list X) is signaled, while the signaling of the motion information of the other reference picture list (list Y, where Y is unequal to X) is always skipped (see the sketch after this list).
a. In one example, motion information (such as CPMV) for a reference picture list (list Y) without signaling may be derived from the motion information for the reference picture list (list X).
b. In one example, the prediction direction of this mode is also set to bi-prediction.
c. In one example, it is added as a new coding mode. Alternatively, it may be used to replace the uni-prediction affine coding mode.
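A minimal sketch of this symmetric derivation follows. The mirroring-by-POC-distance rule is an illustrative assumption; the text above states only that the list-Y motion information is derived from the signaled list-X motion information.

```python
def derive_list_y_cpmvs(cpmvs_x, poc_cur, poc_ref_x, poc_ref_y):
    """Derive list-Y CPMVs from the signaled list-X CPMVs by scaling with
    the ratio of POC distances (an assumed derivation rule)."""
    td_x = poc_cur - poc_ref_x
    td_y = poc_cur - poc_ref_y
    scale = td_y / td_x              # -1 for symmetric reference pictures
    return [(mv_x * scale, mv_y * scale) for (mv_x, mv_y) in cpmvs_x]

# Example: current POC 8, list-X ref POC 4, list-Y ref POC 12 -> scale = -1,
# so each list-Y CPMV is the mirrored list-X CPMV.
cpmvs_y = derive_list_y_cpmvs([(3, 1), (4, 1)],
                              poc_cur=8, poc_ref_x=4, poc_ref_y=12)
```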
3. It is proposed that the affine model type (e.g., 4-parameter or 6-parameter) may be used to decide the insertion order of affine candidates when constructing affine candidate lists (e.g., the affine AMVP/Merge candidate list, the sub-block Merge candidate list); a sketch follows this list.
a. For the affine AMVP candidate list, neighboring blocks with the same affine model type as the current block may be given higher priority. For example, the motion information of a neighboring block having the same affine model type may be added to the AMVP list before the motion information of a second neighboring block having a different affine model type.
b. In one example, the affine type may be further signaled for affine Merge mode.
c. For the affine and/or sub-block Merge candidate lists, neighboring blocks with the same affine model type may be given higher priority.
i. In one example, the motion information of a neighboring block having the same affine model type as the first affine candidate may be added to the Merge list before the motion information of a second neighboring block having a different affine model type.
ii. In one example, the combinations of constructed affine candidates may be reordered, with 4-parameter affine candidates (2 CPMVs) added before 6-parameter affine candidates.
d. For the affine Merge candidate list and/or the sub-block Merge candidate list, more constructed affine candidates having the same affine model type as that of the selected Merge candidate may be constructed.
i. In one example, the selected Merge candidate is the first available affine Merge candidate.
ii. In one example, the selected Merge candidate is an affine Merge candidate associated with a particular location of a spatially neighboring block.
e. For the affine Merge candidate list and/or the sub-block Merge candidate list, the order of the constructed affine candidates may depend on the affine model type of the selected affine Merge candidate.
i. In one example, the selected Merge candidate is the first available affine Merge candidate.
ii. In one example, the selected Merge candidate is an affine Merge candidate associated with a particular location of a spatially neighboring block.
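A minimal sketch of the model-type-aware ordering in item 3 follows; the candidate representation and the choice of the first available candidate as the "selected" one (per 3.d.i/3.e.i) are illustrative assumptions.

```python
def order_by_model_type(candidates):
    """candidates: dicts with a 'model' key ('4-param' or '6-param'),
    given in their original scan order. Candidates whose model type
    matches the selected (first available) candidate come first; the
    relative scan order is preserved within each group."""
    if not candidates:
        return []
    selected_type = candidates[0]["model"]
    same = [c for c in candidates if c["model"] == selected_type]
    diff = [c for c in candidates if c["model"] != selected_type]
    return same + diff

# Example: the first candidate uses a 4-parameter model, so both
# 4-parameter candidates precede the 6-parameter one after ordering.
ordered = order_by_model_type([{"model": "4-param", "id": 0},
                               {"model": "6-param", "id": 1},
                               {"model": "4-param", "id": 2}])
```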
4. It is proposed that the affine model type associated with a block is not stored and is not used for coding subsequent blocks.
a. Alternatively, such information may be stored, but used only for coding subsequent blocks within the current CTU, within the same M×N region, or within the same row of the current CTU. In one example, a picture/slice may be divided into non-overlapping regions with size equal to M×N (e.g., 64×64).
b. In one example, instead of storing 2 CPMVs (from the top-left and top-right positions), 3 CPMVs (from the top-left, top-right and bottom-left positions) may be stored after decoding an AMVP affine-coded block with a 4-parameter affine model.
i. In one example, the CPMV at the lower left can be derived using the CPMV at the upper left and upper right.
c. In one example, for each affine Merge candidate, a 6-parameter affine model is utilized. Alternatively, for each affine Merge candidate, a 4-parameter affine model is utilized.
5. It is proposed that affine candidates may be reordered instead of using a fixed insertion order (see the sketch after this list).
a. In one example, the reordering depends on the MVs derived at representative neighboring positions of the current block. Each affine candidate is used to derive motion vectors at several representative neighboring positions, and the differences between the derived MVs and the decoded MVs associated with those representative positions are then calculated. Finally, the affine candidates are reordered in ascending order of the difference.
b. In one example, the difference metric is MSE (mean squared error).
c. Alternatively, in addition, the derived MVs may be further scaled before the difference is calculated if the affine candidate has a reference picture different from that of the representative neighboring block.
d. Alternatively, in addition, both the derived MVs and the MVs of the representative neighbors may be scaled to certain selected reference pictures before the difference is calculated.
e. In one example, only some affine candidates are reordered. For example, only neighboring affine candidates are reordered. They may always be inserted before the constructed affine candidates.
f. In one example, only the constructed affine candidates are reordered. They may always be inserted after the neighboring affine candidates.
g. In one example, only the top N affine candidates are reordered.
h. In one example, only the first N reordered affine Merge candidates are inserted into the sub-block Merge list.
i. In one example, if such reordering is performed, the maximum length of the sub-block Merge list is reduced by K. For example, K = 2.
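The reordering in item 5 can be sketched as follows; the derivation callback and the data layout are illustrative assumptions, with MSE (per 5.b) as the difference metric.

```python
def mse(mv_a, mv_b):
    return (mv_a[0] - mv_b[0]) ** 2 + (mv_a[1] - mv_b[1]) ** 2

def reorder_affine_candidates(candidates, positions, decoded_mvs, derive_mv):
    """derive_mv(cand, pos): the MV that the candidate's affine model implies
    at a representative neighboring position; decoded_mvs[pos]: the MV
    actually decoded at that position. Candidates are reordered in
    ascending order of the accumulated difference."""
    def cost(cand):
        return sum(mse(derive_mv(cand, p), decoded_mvs[p]) for p in positions)
    return sorted(candidates, key=cost)  # stable sort: ties keep scan order
```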
6. It is proposed to apply the reordering method described in item 5 above to affine AMVP list construction.
a. In one example, no affine AMVP index is signaled, and only the first of the reordered affine AMVP candidates is used as a predictor.
7. It is proposed that multiple (e.g., 2) affine candidates may be averaged to generate a new affine candidate (see the sketch after this list).
a. In one example, only affine candidates having the same reference picture are used to generate the average affine candidate.
b. In one example, affine candidates with different reference pictures may be used to generate an average affine candidate, and all affine candidates are scaled to the same reference picture.
i. In one example, the reference picture of any of these affine candidates may be used as the reference picture of the average affine candidate.
ii. In one example, the reference picture of the average affine candidate may be defined for each CU/slice/picture/video and may be signaled in the slice header/PPS/VPS/SPS.
iii. In one example, the reference picture is implicitly predefined at both the encoder and the decoder.
iv. In one example, no scaling is performed.
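A minimal sketch of the same-reference averaging case (item 7.a) follows; the candidate layout is an illustrative assumption.

```python
def average_affine_candidates(cand_a, cand_b):
    """Average the CPMVs of two affine candidates to form a new candidate;
    per item 7.a, only candidates with the same reference picture are used."""
    if cand_a["ref"] != cand_b["ref"]:
        return None
    avg = [((ax + bx) / 2, (ay + by) / 2)
           for (ax, ay), (bx, by) in zip(cand_a["cpmvs"], cand_b["cpmvs"])]
    return {"cpmvs": avg, "ref": cand_a["ref"]}

new_cand = average_affine_candidates({"cpmvs": [(2, 0), (4, 0)], "ref": 0},
                                     {"cpmvs": [(0, 2), (2, 2)], "ref": 0})
# -> {'cpmvs': [(1.0, 1.0), (3.0, 1.0)], 'ref': 0}
```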
Fig. 26 is a block diagram of a video processing apparatus 2600. The apparatus 2600 may be used to implement one or more methods described herein. The apparatus 2600 may be embodied in a smartphone, tablet, computer, Internet-of-Things (IoT) receiver, and so on. The apparatus 2600 may include one or more processors 2602, one or more memories 2604, and video processing hardware 2606. The processor(s) 2602 may be configured to implement one or more methods described in this document. The memory (or memories) 2604 may be used to store data and code used for implementing the methods and techniques described herein. The video processing hardware 2606 may be used to implement, in hardware circuitry, some of the techniques described in this document.
Fig. 27 is a flow diagram of an example method 2700 of video processing. The method may be performed by a video encoder in its decoding loop, or by a video decoder. The method 2700 includes: generating (2702) an affine candidate list for the current block by inserting affine candidates into the affine candidate list based on an insertion order, the insertion order depending on an affine model type of at least one affine candidate in the affine candidate list; and performing (2704) video processing on the current block based on the generated affine candidate list.
Fig. 28 is a flow diagram of an example method 2800 of video processing. The method may be performed by a video encoder in its decoding loop, or by a video decoder. The method 2800 includes: generating (2802) an affine candidate list for the current block, wherein during the generating of the affine candidate list, at least one affine candidate in the affine candidate list is reordered; performing (2804) video processing on the current block based on the generated affine candidate list.
It should be appreciated that several techniques have been disclosed that benefit video encoder and decoder embodiments incorporated in video processing devices such as smart phones, laptops, desktops, and similar devices by allowing the use of affine models in video compression and decompression as described by many of the techniques and embodiments in this document.
Some embodiments may be described using the following examples.
1. A method for video processing, comprising:
generating an affine candidate list for the current block by inserting affine candidates into the affine candidate list based on an insertion order, the insertion order depending on an affine model type of at least one affine candidate in the affine candidate list; and
performing video processing on the current block based on the generated affine candidate list.
2. The method of example 1, wherein the affine candidate list comprises at least one of an affine Advanced Motion Vector Prediction (AMVP) candidate list, an affine Merge candidate list, and a subblock Merge candidate list.
3. The method of examples 1 or 2, wherein the affine model type corresponds to a 4-parameter affine model or a 6-parameter affine model.
4. The method of any of examples 1-3, wherein affine candidates from neighboring blocks having a same affine model type are given a higher priority in insertion order than affine candidates having a different affine model type.
5. The method of any of examples 1-4, wherein affine candidates from neighboring blocks encoded with a 4-parameter affine model are inserted into the affine candidate list before affine candidates encoded with a 6-parameter affine model.
6. The method of any of examples 1-4, wherein an affine model type is signaled for the affine Merge mode.
7. The method of example 2, further comprising, for the affine Merge candidate list and/or the sub-block Merge candidate list:
constructing at least one affine candidate having the same affine model type as the selected Merge candidate.
8. The method of example 2 or 7, wherein,
the insertion order of the at least one constructed affine candidate depends on the affine model type of the selected Merge candidate.
9. The method according to example 7 or 8, wherein the selected Merge candidate is a first available affine Merge candidate in the Merge candidate list or an affine Merge candidate associated with a certain position of a spatially neighboring block.
10. The method of any of examples 1-9, wherein the insertion order is fixed.
11. A video processing method, comprising:
generating an affine candidate list for a current block, wherein at least one affine candidate in the affine candidate list is reordered during generation of the affine candidate list;
performing video processing on the current block based on the generated affine candidate list.
12. The method of example 11, wherein the at least one affine candidate is reordered based on motion vectors (MVs) derived at representative neighboring positions of the current block, wherein the derived MVs are derived based on affine candidates in the affine candidate list.
13. The method of example 11, further comprising:
deriving MVs of representative neighboring blocks based on affine candidates in the affine candidate list;
calculating a difference between the derived MV and decoded MVs associated with the representative neighboring block; and
reordering at least one affine candidate in the affine candidate list based on a particular order of the differences.
14. The method of example 13, wherein the particular order is an ascending order of the differences.
15. The method of example 13 or 14, wherein the difference is based on a mean square error, MSE.
16. The method of any of examples 13-15, further comprising:
prior to computing the difference, scaling the derived MV if the affine candidate has a reference picture different from that of the representative neighboring position.
17. The method of any of examples 13-16, further comprising:
scaling the derived MVs and the decoded MVs associated with the representative neighboring blocks to at least one selected reference picture prior to computing the difference.
18. The method of any of examples 11-17, wherein only some of the affine candidates in the list of affine candidates are reordered.
19. The method of example 18, wherein only affine candidates derived from neighboring blocks are reordered, and the reordered affine candidates are inserted into the affine candidate list before the constructed affine candidates.
20. The method of example 18, wherein only the constructed affine candidates are reordered, and the reordered affine candidates are inserted into the affine candidate list after affine candidates derived from neighboring blocks.
21. The method of example 18, wherein only the top N affine candidates in the affine candidate list are reordered, N representing a positive integer.
22. The method of example 18, wherein the first N of the reordered affine Merge candidates are inserted into the sub-block Merge candidate list, N representing a positive integer.
23. The method of any one of examples 11 to 22, wherein after performing the reordering, a maximum length of the subblock Merge candidate list is reduced by K, K representing a positive integer.
24. The method of example 23, wherein K = 2.
25. The method of any of examples 11-24, wherein the affine candidate list is an AMVP candidate list.
26. The method of example 25, wherein a first candidate of the reordered affine AMVP candidates is used as the predictor for the current block without signaling an affine AMVP index.
27. The method of any of examples 1-26, wherein the video processing comprises at least one of: encoding the video block into a bit stream representation of the video block, and decoding the video block from the bit stream representation of the video block.
28. A video processing apparatus comprising a processor configured to implement the method of any of examples 1 to 27.
29. A computer program product stored on a non-transitory computer readable medium, the computer program product comprising program code for performing the method of any of examples 1 to 27.
The disclosed and other solutions, examples, embodiments, modules, and functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments may be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer-readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described herein can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer does not necessarily require such a device. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any subject matter or claim, but rather as descriptions of features specific to particular embodiments of particular technologies. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few embodiments and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims (29)

1. A method for video processing, comprising:
generating an affine candidate list for the current block by inserting affine candidates into the affine candidate list based on an insertion order, the insertion order depending on an affine model type of at least one affine candidate in the affine candidate list; and
performing video processing on the current block based on the generated affine candidate list.
2. The method of claim 1, wherein the affine candidate list comprises at least one of an affine Advanced Motion Vector Prediction (AMVP) candidate list, an affine Merge candidate list, and a subblock Merge candidate list.
3. The method of claim 1 or 2, wherein the affine model type corresponds to a 4-parameter affine model or a 6-parameter affine model.
4. The method of any of claims 1-3, wherein affine candidates from neighboring blocks having the same affine model type are given higher priority in insertion order than affine candidates having different affine model types.
5. The method of any of claims 1-4, wherein affine candidates from neighboring blocks encoded with a 4-parameter affine model are inserted into the affine candidate list before affine candidates encoded with a 6-parameter affine model.
6. The method according to any of claims 1-4, wherein an affine model type is signaled for the affine Merge mode.
7. The method according to claim 2, further comprising, for the affine Merge candidate list and/or the sub-block Merge candidate list:
constructing at least one affine candidate having the same affine model type as the selected Merge candidate.
8. The method of claim 2 or 7,
the insertion order of the at least one constructed affine candidate depends on the affine model type of the selected Merge candidate.
9. The method according to claim 7 or 8, wherein the selected Merge candidate is the first available affine Merge candidate in the Merge candidate list or an affine Merge candidate associated with a certain position of a spatially neighboring block.
10. The method of any of claims 1-9, wherein the insertion order is fixed.
11. A video processing method, comprising:
generating an affine candidate list for a current block, wherein at least one affine candidate in the affine candidate list is reordered during generation of the affine candidate list;
performing video processing on the current block based on the generated affine candidate list.
12. The method of claim 11, wherein the at least one affine candidate is reordered based on motion vectors (MVs) derived at representative neighboring positions of the current block, wherein the derived MVs are derived based on affine candidates in the affine candidate list.
13. The method of claim 11, further comprising:
deriving MVs of representative neighboring blocks based on affine candidates in the affine candidate list;
calculating a difference between the derived MV and decoded MVs associated with the representative neighboring block; and
reordering at least one affine candidate in the affine candidate list based on a particular order of the differences.
14. The method of claim 13, wherein the particular order is an ascending order of the differences.
15. The method according to claim 13 or 14, wherein the difference is based on a mean square error, MSE.
16. The method according to any one of claims 13-15, further including:
prior to computing the difference, scaling the derived MV if the affine candidate has a reference picture different from that of the representative neighboring position.
17. The method according to any one of claims 13-16, further including:
scaling the derived MVs and the decoded MVs associated with the representative neighboring blocks to at least one selected reference picture prior to computing the difference.
18. The method of any of claims 11-17, wherein only some of the affine candidates in the list of affine candidates are reordered.
19. The method of claim 18, wherein only affine candidates derived from neighboring blocks are reordered, and the reordered affine candidates are inserted into the affine candidate list before the constructed affine candidates.
20. The method of claim 18, wherein only the constructed affine candidates are reordered, and the reordered affine candidates are inserted into the affine candidate list after affine candidates derived from neighboring blocks.
21. The method of claim 18, wherein only the first N affine candidates in the affine candidate list are reordered, N representing a positive integer.
22. The method of claim 18, wherein the first N of the reordered affine Merge candidates are inserted into the sub-block Merge candidate list, N representing a positive integer.
23. The method according to any of claims 11 to 22, wherein after performing the reordering, the maximum length of the subblock Merge candidate list is reduced by K, K representing a positive integer.
24. The method of claim 23, wherein K = 2.
25. The method of any of claims 11-24, wherein the affine candidate list is an AMVP candidate list.
26. The method of claim 25, wherein a first candidate of the reordered affine AMVP candidates is used as a predictor for the current block without signaling an affine AMVP index.
27. The method of any of claims 1-26, wherein the video processing comprises at least one of: encoding the video block into a bit stream representation of the video block, and decoding the video block from the bit stream representation of the video block.
28. A video processing apparatus comprising a processor configured to implement the method of any of claims 1 to 27.
29. A computer program product stored on a non-transitory computer readable medium, the computer program product comprising program code for performing the method of any of claims 1 to 27.
CN201980074330.6A 2018-11-14 2019-11-14 Affine prediction mode improvement Active CN112997496B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN2018115354 2018-11-14
CNPCT/CN2018/115354 2018-11-14
PCT/CN2019/118531 WO2020098753A1 (en) 2018-11-14 2019-11-14 Improvements of Affine Prediction Mode

Publications (2)

Publication Number Publication Date
CN112997496A true CN112997496A (en) 2021-06-18
CN112997496B CN112997496B (en) 2024-05-14

Family

ID=70730961

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201980074330.6A Active CN112997496B (en) 2018-11-14 2019-11-14 Affine prediction mode improvement
CN201980074269.5A Pending CN113273208A (en) 2018-11-14 2019-11-14 Improvement of affine prediction mode

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201980074269.5A Pending CN113273208A (en) 2018-11-14 2019-11-14 Improvement of affine prediction mode

Country Status (2)

Country Link
CN (2) CN112997496B (en)
WO (2) WO2020098752A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240031592A1 (en) * 2022-07-21 2024-01-25 Tencent America LLC Temporal motion vector predictor with displacement

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003256875A (en) * 2002-02-27 2003-09-12 Fuji Heavy Ind Ltd Positional deviation adjusting device amd method for stereoscopic image, and stereoscopic type monitoring device
CN102461058A (en) * 2009-03-10 2012-05-16 爱迪德有限责任公司 White-box cryptographic system with input dependent encodings
CN202795630U (en) * 2012-09-10 2013-03-13 南京恩博科技有限公司 Thermal imagery video double-identification forest fire recognition system
CN103688240A (en) * 2011-05-20 2014-03-26 梦芯片技术股份有限公司 Method for transmitting digital scene description data and transmitter and receiver scene processing device
CN106097381A (en) * 2016-05-27 2016-11-09 北京理工大学 A kind of method for tracking target differentiating Non-negative Matrix Factorization based on manifold
US20170214932A1 (en) * 2014-07-18 2017-07-27 Mediatek Singapore Pte. Ltd Method of Motion Vector Derivation for Video Coding
CN107211156A (en) * 2015-01-26 2017-09-26 高通股份有限公司 Traveling time motion vector prediction based on sub- predicting unit
US20170332095A1 (en) * 2016-05-16 2017-11-16 Qualcomm Incorporated Affine motion prediction for video coding
US20170332099A1 (en) * 2016-05-13 2017-11-16 Qualcomm Incorporated Merge candidates for motion vector prediction for video coding
US20180098063A1 (en) * 2016-10-05 2018-04-05 Qualcomm Incorporated Motion vector prediction for affine motion models in video coding
CN108432250A (en) * 2016-01-07 2018-08-21 联发科技股份有限公司 The method and device of affine inter-prediction for coding and decoding video
GB201815444D0 (en) * 2018-09-21 2018-11-07 Canon Kk Video coding and decoding

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102946536B (en) * 2012-10-09 2015-09-30 华为技术有限公司 The method of candidate vector list builder and device
US9438910B1 (en) * 2014-03-11 2016-09-06 Google Inc. Affine motion prediction in video coding
WO2017147765A1 (en) * 2016-03-01 2017-09-08 Mediatek Inc. Methods for affine motion compensation
US10631002B2 (en) * 2016-09-30 2020-04-21 Qualcomm Incorporated Frame rate up-conversion coding mode
US10701390B2 (en) * 2017-03-14 2020-06-30 Qualcomm Incorporated Affine motion information derivation
US10805630B2 (en) * 2017-04-28 2020-10-13 Qualcomm Incorporated Gradient based matching for motion search and derivation
CN110574377B (en) * 2017-05-10 2021-12-28 联发科技股份有限公司 Method and apparatus for reordering motion vector prediction candidate set for video coding

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003256875A (en) * 2002-02-27 2003-09-12 Fuji Heavy Ind Ltd Positional deviation adjusting device amd method for stereoscopic image, and stereoscopic type monitoring device
CN102461058A (en) * 2009-03-10 2012-05-16 爱迪德有限责任公司 White-box cryptographic system with input dependent encodings
CN103688240A (en) * 2011-05-20 2014-03-26 梦芯片技术股份有限公司 Method for transmitting digital scene description data and transmitter and receiver scene processing device
CN202795630U (en) * 2012-09-10 2013-03-13 南京恩博科技有限公司 Thermal imagery video double-identification forest fire recognition system
US20170214932A1 (en) * 2014-07-18 2017-07-27 Mediatek Singapore Pte. Ltd Method of Motion Vector Derivation for Video Coding
CN107211156A (en) * 2015-01-26 2017-09-26 高通股份有限公司 Traveling time motion vector prediction based on sub- predicting unit
CN108432250A (en) * 2016-01-07 2018-08-21 联发科技股份有限公司 The method and device of affine inter-prediction for coding and decoding video
US20170332099A1 (en) * 2016-05-13 2017-11-16 Qualcomm Incorporated Merge candidates for motion vector prediction for video coding
US20170332095A1 (en) * 2016-05-16 2017-11-16 Qualcomm Incorporated Affine motion prediction for video coding
CN106097381A (en) * 2016-05-27 2016-11-09 北京理工大学 A kind of method for tracking target differentiating Non-negative Matrix Factorization based on manifold
US20180098063A1 (en) * 2016-10-05 2018-04-05 Qualcomm Incorporated Motion vector prediction for affine motion models in video coding
GB201815444D0 (en) * 2018-09-21 2018-11-07 Canon Kk Video coding and decoding

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FENG WU等: "Description of SDR video coding technology proposal by University of Science and Technology of China, Peking University, Harbin Institute of Technology, and Wuhan University (IEEE 1857.10 Study Group)", 《JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 10TH MEETING: SAN DIEGO, US, 10–20 APR. 2018,JVET-J0032-V2》, pages 1 - 48 *
HUANBANG CHEN等: "CE4: Common base for affine merge mode (Test 4.2.1)", 《JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 12TH MEETING: MACAO, CN, 3–12 OCT. 2018,JVET-L0366-V1》, pages 1 - 4 *
LI LI等: "An Efficient Four-Parameter Affine Motion Model for Video Coding", 《IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY》, 31 August 2018 (2018-08-31) *

Also Published As

Publication number Publication date
CN113273208A (en) 2021-08-17
WO2020098753A1 (en) 2020-05-22
WO2020098752A1 (en) 2020-05-22
CN112997496B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN111147850B (en) Table maintenance for history-based motion vector prediction
CN113170099B (en) Interaction between intra copy mode and inter prediction tools
CN110620923B (en) Generalized motion vector difference resolution
CN113170182B (en) Pruning method under different prediction modes
CN112970253B (en) Motion candidate list construction for prediction
CN113170183A (en) Pruning method for inter-frame prediction with geometric partitioning
CN112997493B (en) Construction method for single type motion candidate list
CN112868240A (en) Collocated localized illumination compensation and modified inter-frame prediction codec
CN112219400A (en) Location dependent storage of motion information
CN113287317A (en) Collocated local illumination compensation and modified interframe coding and decoding tool
CN113261290A (en) Motion prediction based on modification history
CN113424525A (en) Size selective application of decoder-side refinement tools
CN113316935A (en) Motion candidate list using local illumination compensation
CN113366839B (en) Refinement quantization step in video codec
CN112997496B (en) Affine prediction mode improvement
CN113557720A (en) Adaptive weights in multi-hypothesis prediction in video coding
CN113812165A (en) Improvements to HMVP tables

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant