WO2020140908A1 - Mapping between distance index and distance in merge with MVD - Google Patents


Info

Publication number
WO2020140908A1
Authority
WO
WIPO (PCT)
Prior art keywords
pel
distance
mmvd
video block
index
Prior art date
Application number
PCT/CN2019/130725
Other languages
English (en)
Inventor
Kai Zhang
Li Zhang
Hongbin Liu
Jizheng Xu
Yue Wang
Original Assignee
Beijing Bytedance Network Technology Co., Ltd.
Bytedance Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Bytedance Network Technology Co., Ltd., Bytedance Inc. filed Critical Beijing Bytedance Network Technology Co., Ltd.
Priority to CN201980087392.0A priority Critical patent/CN113261295A/zh
Publication of WO2020140908A1 publication Critical patent/WO2020140908A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/137 Motion inside a coding unit, e.g. average field, frame or block difference
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H04N19/517 Processing of motion vectors by encoding
    • H04N19/52 Processing of motion vectors by encoding by predictive encoding

Definitions

  • the present document relates to video and image coding and decoding.
  • Digital video accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
  • the present document discloses video coding tools that, in one example aspect, improve the signaling of motion vectors for video and image coding.
  • a method for video processing comprising: determining, for a current video block which is coded using a merge with motion vector difference (MMVD) mode, a first relationship between a distance and a distance index (DI) , wherein the distance is a distance between a motion vector of the current video block and a base candidate selected from a merge candidate list; and performing, based on the first relationship, a conversion between the current video block and a bitstream representation of the current video block.
  • a method for video processing comprising: performing a conversion between a current video block and a bitstream representation of the current video block, wherein the current video block is coded using a merge with motion vector difference (MMVD) mode; wherein the conversion comprises parsing MMVD side information from or writing the MMVD side information into the bitstream representation, wherein the MMVD side information comprises at least one of an MMVD flag indicating whether MMVD syntaxes are parsed, a first syntax element indicating a distance of MMVD between a motion vector of the current video block and a base candidate selected from a merge candidate list, a second syntax element indicating a direction of MMVD representing a direction of motion vector difference (MVD) relative to the base candidate.
  • a method for video processing comprising: determining at least one distance for motion vector difference (MVD) associated with a current video block, which is coded in a merge with motion vector difference (MMVD) mode, from a first distance with a rough granularity and one or more distances with fine granularities; and performing a conversion between a current video block and a bitstream representation of the current video block based on the distance for MVD.
  • an apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to implement the method in any one of the examples described above.
  • a computer program product stored on a non-transitory computer readable medium, the computer program product including program code for carrying out the method in any one of the examples described above.
  • the above-described method may be implemented by a video encoder apparatus or a video decoder apparatus that comprises a processor.
  • these methods may be embodied in the form of processor-executable instructions and stored on a computer-readable program medium.
  • FIG. 1 shows an example of simplified affine motion model.
  • FIG. 2 shows an example of affine motion vector field (MVF) per sub-block.
  • FIG. 3A-3B show 4 and 6 parameter affine models, respectively.
  • FIG. 4 shows an example of motion vector predictor (MVP) for AF_INTER.
  • FIG. 5A-5B show examples of candidates for AF_MERGE.
  • FIG. 6 shows an example of candidate positions for affine merge mode.
  • FIG. 7 shows an example of distance index and distance offset mapping.
  • FIG. 8 shows an example of ultimate motion vector expression (UMVE) search process.
  • FIG. 9 shows an example of UMVE search point.
  • FIG. 10 is a flowchart for an example method for video processing.
  • FIG. 11 is a flowchart for another example method for video processing.
  • FIG. 12 is a flowchart for yet another example method for video processing.
  • FIG. 13 shows an example of a hardware platform for implementing a technique described in the present document.
  • the present document provides various techniques that can be used by a decoder of video bitstreams to improve the quality of decompressed or decoded digital video. Furthermore, a video encoder may also implement these techniques during the process of encoding in order to reconstruct decoded frames used for further encoding.
  • Section headings are used in the present document for ease of understanding and do not limit the embodiments and techniques to the corresponding sections. As such, embodiments from one section can be combined with embodiments from other sections.
  • This patent document is related to video coding technologies. Specifically, it is related to motion compensation in video coding. It may be applied to existing video coding standards like HEVC, or to the standard to be finalized (Versatile Video Coding). It may also be applicable to future video coding standards or video codecs.
  • Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards.
  • the ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/HEVC standards.
  • the video coding standards are based on the hybrid video coding structure wherein temporal prediction plus transform coding are utilized.
  • The Joint Video Exploration Team (JVET) was founded jointly by VCEG and MPEG in 2015.
  • affine transform motion compensation prediction is applied. As shown in FIG. 1, the affine motion field of the block is described by two control point motion vectors.
  • the motion vector field (MVF) of a block is described by the following equation:
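The equation itself did not survive extraction here. For reference, a hedged reconstruction of the widely cited 4-parameter affine motion vector field from JEM, with (v0x, v0y) and (v1x, v1y) the motion vectors of the top-left and top-right control points and w the block width, is:

```latex
v_x = \frac{v_{1x}-v_{0x}}{w}\,x - \frac{v_{1y}-v_{0y}}{w}\,y + v_{0x},
\qquad
v_y = \frac{v_{1y}-v_{0y}}{w}\,x + \frac{v_{1x}-v_{0x}}{w}\,y + v_{0y}
```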
  • sub-block based affine transform prediction is applied.
  • the sub-block size M×N is derived as in Equation 2, where MvPre is the motion vector fraction accuracy (1/16 in JEM) and (v2x, v2y) is the motion vector of the bottom-left control point, calculated according to Equation 1.
  • After derivation by Equation 2, M and N should be adjusted downward if necessary to make them divisors of w and h, respectively.
  • the motion vector of the center sample of each sub-block is calculated according to Equation 1, and rounded to 1/16 fraction accuracy.
  • the high accuracy motion vector of each sub-block is rounded and saved as the same accuracy as the normal motion vector.
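The per-sub-block derivation above can be sketched as follows. This is an illustrative Python sketch assuming the standard JEM 4-parameter model (Equation 1) and a fixed 4×4 sub-block size; the function names are hypothetical:

```python
# Hedged sketch of per-sub-block affine MV derivation; the exact fixed-point
# arithmetic in a real codec differs from this floating-point illustration.

def affine_mv(x, y, v0, v1, w):
    """4-parameter affine model: MV at position (x, y) from the top-left (v0)
    and top-right (v1) control-point MVs of a block of width w."""
    a = (v1[0] - v0[0]) / w
    b = (v1[1] - v0[1]) / w
    return (a * x - b * y + v0[0], b * x + a * y + v0[1])

def subblock_mvs(width, height, v0, v1, M=4, N=4):
    """MV of the centre sample of each MxN sub-block, rounded to 1/16-pel."""
    mvs = {}
    for sy in range(0, height, N):
        for sx in range(0, width, M):
            cx, cy = sx + M / 2, sy + N / 2  # centre of the sub-block
            vx, vy = affine_mv(cx, cy, v0, v1, width)
            mvs[(sx, sy)] = (round(vx * 16) / 16, round(vy * 16) / 16)
    return mvs
```

With identical control-point MVs the model degenerates to pure translation, so every sub-block receives the same motion vector.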
  • There are two affine motion modes: AF_INTER mode and AF_MERGE mode.
  • AF_INTER mode can be applied.
  • An affine flag in CU level is signaled in the bitstream to indicate whether AF_INTER mode is used.
  • v0 is selected from the motion vectors of blocks A, B or C.
  • the motion vector from the neighbour block is scaled according to the reference list and the relationship among the POC of the reference for the neighbour block, the POC of the reference for the current CU and the POC of the current CU. The approach to select v1 from the neighbour blocks D and E is similar. If the number of candidates in the list is smaller than 2, the list is padded with motion vector pairs composed by duplicating each of the AMVP candidates. When the candidate list is larger than 2, the candidates are first sorted according to the consistency of the neighbouring motion vectors (the similarity of the two motion vectors in a pair candidate) and only the first two candidates are kept. An RD cost check is used to determine which motion vector pair candidate is selected as the control point motion vector prediction (CPMVP) of the current CU.
  • an index indicating the position of the CPMVP in the candidate list is signaled in the bitstream.
  • FIG. 3A shows an example of a 4-parameter affine model.
  • FIG. 3B shows an example of a 6-parameter affine model.
  • In AF_INTER mode, when the 4/6-parameter affine mode is used, 2/3 control points are required, and therefore 2/3 MVDs need to be coded for these control points, as shown in FIG. 3A.
  • For two motion vectors, e.g., mvA(xA, yA) and mvB(xB, yB), newMV = mvA + mvB, i.e., the two components of newMV are set to (xA + xB) and (yA + yB), respectively.
  • The MVs of 2 or 3 control points need to be determined jointly. Directly searching the multiple MVs jointly is computationally complex. A fast affine ME algorithm is proposed and adopted into VTM/BMS.
  • The fast affine ME algorithm is described for the 4-parameter affine model; the idea can be extended to the 6-parameter affine model.
  • the motion vectors can be rewritten in vector form as:
  • MVD of AF_INTER are derived iteratively.
  • Denote by MVi(P) the MV derived in the i-th iteration for position P, and by dMVCi the delta updated for MVC in the i-th iteration.
  • When a CU is applied in AF_MERGE mode, it gets the first block coded with affine mode from the valid neighbour reconstructed blocks. The selection order for the candidate block is from left, above, above right, left bottom to above left, as shown in FIG. 5A. If the neighbour left-bottom block A is coded in affine mode as shown in FIG. 5B, the motion vectors v2, v3 and v4 of the top-left corner, above-right corner and left-bottom corner of the CU which contains block A are derived. The motion vector v0 of the top-left corner of the current CU is calculated according to v2, v3 and v4. Secondly, the motion vector v1 of the above-right of the current CU is calculated.
  • the MVF of the current CU is generated.
  • an affine flag is signaled in the bitstream when at least one neighbour block is coded in affine mode.
  • an affine merge candidate list is constructed with following steps:
  • Inherited affine candidate means that the candidate is derived from the affine motion model of its valid neighbor affine coded block.
  • the scan order for the candidate positions is: A1, B1, B0, A0 and B2.
  • A full pruning process is performed to check whether the same candidate has already been inserted into the list. If the same candidate exists, the derived candidate is discarded.
  • Constructed affine candidate means the candidate is constructed by combining the neighbor motion information of each control point.
  • the motion information for the control points is derived firstly from the specified spatial neighbors and temporal neighbor shown in FIG. 5B.
  • T is temporal position for predicting CP4.
  • the coordinates of CP1, CP2, CP3 and CP4 are (0, 0), (W, 0), (0, H) and (W, H), respectively, where W and H are the width and height of the current block.
  • FIG. 6 shows an example of candidate positions for affine merge mode.
  • the motion information of each control point is obtained according to the following priority order:
  • the checking priority is B2->B3->A2.
  • B2 is used if it is available. Otherwise, if B3 is available, B3 is used. If both B2 and B3 are unavailable, A2 is used. If all three candidates are unavailable, the motion information of CP1 cannot be obtained.
  • the checking priority is B1->B0.
  • the checking priority is A1->A0.
  • Motion information of three control points is needed to construct a 6-parameter affine candidate.
  • the three control points can be selected from one of the following four combinations ( ⁇ CP1, CP2, CP4 ⁇ , ⁇ CP1, CP2, CP3 ⁇ , ⁇ CP2, CP3, CP4 ⁇ , ⁇ CP1, CP3, CP4 ⁇ ) .
  • Combinations ⁇ CP1, CP2, CP3 ⁇ , ⁇ CP2, CP3, CP4 ⁇ , ⁇ CP1, CP3, CP4 ⁇ will be converted to a 6-parameter motion model represented by top-left, top-right and bottom-left control points.
  • Motion information of two control points is needed to construct a 4-parameter affine candidate.
  • the two control points can be selected from one of the following six combinations ( ⁇ CP1, CP4 ⁇ , ⁇ CP2, CP3 ⁇ , ⁇ CP1, CP2 ⁇ , ⁇ CP2, CP4 ⁇ , ⁇ CP1, CP3 ⁇ , ⁇ CP3, CP4 ⁇ ) .
  • Combinations ⁇ CP1, CP4 ⁇ , ⁇ CP2, CP3 ⁇ , ⁇ CP2, CP4 ⁇ , ⁇ CP1, CP3 ⁇ , ⁇ CP3, CP4 ⁇ will be converted to a 4-parameter motion model represented by top-left and top-right control points.
  • reference index X (X being 0 or 1) of a combination
  • the reference index with the highest usage ratio among the control points is selected as the reference index of list X, and motion vectors pointing to a different reference picture will be scaled.
  • A full pruning process is performed to check whether the same candidate has already been inserted into the list. If the same candidate exists, the derived candidate is discarded.
  • UMVE is extended to affine merge mode; we will call this UMVE affine mode hereafter.
  • the proposed method selects the first available affine merge candidate as a base predictor. Then it applies a motion vector offset to each control point’s motion vector value from the base predictor. If there’s no affine merge candidate available, this proposed method will not be used.
  • the selected base predictor's inter prediction direction, and the reference index of each direction, are used without change.
  • If the current block's affine model is assumed to be a 4-parameter model, only 2 control points need to be derived. Thus, only the first 2 control points of the base predictor will be used as control point predictors.
  • a zero_MVD flag is used to indicate whether the control point of the current block has the same MV value as the corresponding control point predictor. If the zero_MVD flag is true, no other signaling is needed for the control point. Otherwise, a distance index and an offset direction index are signaled for the control point.
  • a distance offset table with a size of 5 is used, as shown in the table below.
  • Distance index is signaled to indicate which distance offset to use.
  • the mapping of distance index and distance offset values is shown in FIG. 7.
  • the direction index can represent four directions as shown below, where only the x or the y direction may have an MV difference, but not both.
  • the signaled distance offset is applied on the offset direction for each control point predictor. Results will be the MV value of each control point.
  • MV(vx, vy) = MVP(vpx, vpy) + MV(x-dir-factor * distance-offset, y-dir-factor * distance-offset);
  • the signaled distance offset is applied on the signaled offset direction for control point predictor’s L0 motion vector; and the same distance offset with opposite direction is applied for control point predictor’s L1 motion vector. Results will be the MV values of each control point, on each inter prediction direction.
  • MV_L0(v0x, v0y) = MVP_L0(v0px, v0py) + MV(x-dir-factor * distance-offset, y-dir-factor * distance-offset);
  • MV_L1(v0x, v0y) = MVP_L1(v0px, v0py) + MV(-x-dir-factor * distance-offset, -y-dir-factor * distance-offset);
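The offset application above can be sketched as follows. The direction-factor table and function names are illustrative assumptions, not the specification's syntax; the sketch shows the signaled offset applied to L0 and mirrored for L1 when the prediction directions differ:

```python
# Direction factors for the four-direction table: +x, -x, +y, -y.
# The index-to-direction assignment here is an assumption for illustration.
DIR_FACTORS = {0: (1, 0), 1: (-1, 0), 2: (0, 1), 3: (0, -1)}

def apply_offset_uni(mvp, direction_idx, distance_offset):
    """Uni-prediction: add the signaled offset to the control-point MVP."""
    fx, fy = DIR_FACTORS[direction_idx]
    return (mvp[0] + fx * distance_offset, mvp[1] + fy * distance_offset)

def apply_offset_bi(mvp_l0, mvp_l1, direction_idx, distance_offset):
    """Bi-prediction with different directions: L0 gets the signaled offset,
    L1 gets the same offset applied in the opposite direction."""
    fx, fy = DIR_FACTORS[direction_idx]
    mv_l0 = (mvp_l0[0] + fx * distance_offset, mvp_l0[1] + fy * distance_offset)
    mv_l1 = (mvp_l1[0] - fx * distance_offset, mvp_l1[1] - fy * distance_offset)
    return mv_l0, mv_l1
```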
  • UMVE re-uses the same merge candidates as those included in the regular merge candidate list in VVC.
  • a base candidate can be selected, and is further expanded by the proposed motion vector expression method.
  • UMVE provides a new motion vector difference (MVD) representation method, in which a starting point, a motion magnitude and a motion direction are used to represent an MVD.
  • FIG. 8 shows an example of UMVE Search Process.
  • FIG. 9 shows examples of UMVE Search Points.
  • This proposed technique uses the merge candidate list as it is, but only candidates of the default merge type (MRG_TYPE_DEFAULT_N) are considered for UMVE's expansion.
  • Base candidate index defines the starting point.
  • Base candidate index indicates the best candidate among candidates in the list as follows.
  • Base candidate IDX is not signaled.
  • Distance index is motion magnitude information.
  • Distance index indicates the pre-defined distance from the starting point information. Pre-defined distance is as follows:
  • the distance IDX is binarized in bins with the truncated unary code in the entropy coding procedure as:
  • the first bin is coded with a probability context, and the following bins are coded with the equal-probability model, a.k.a. by-pass coding.
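The truncated unary binarization described above can be sketched as follows; this is a hedged illustration in which the maximum index of 7 assumes an 8-entry MMVD distance table, and the function names are hypothetical:

```python
def truncated_unary(idx, max_val=7):
    """Truncated unary bins for idx: idx ones, then a terminating zero
    unless idx equals the maximum value (which needs no terminator)."""
    bins = [1] * idx
    if idx < max_val:
        bins.append(0)
    return bins

# Per the text, only the first bin is coded with a probability context;
# all following bins use the equal-probability (bypass) model.
def bin_coding_modes(bins):
    return ['context' if i == 0 else 'bypass' for i in range(len(bins))]
```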
  • Direction index represents the direction of the MVD relative to the starting point.
  • the direction index can represent the four directions shown below.
  • The UMVE flag is signaled right after sending a skip flag or merge flag. If the skip or merge flag is true, the UMVE flag is parsed. If the UMVE flag is equal to 1, UMVE syntaxes are parsed; otherwise, the AFFINE flag is parsed. If the AFFINE flag is equal to 1, AFFINE mode is used; otherwise, the skip/merge index is parsed for VTM's skip/merge mode.
  • either the first or the second merge candidate in the merge candidate list could be selected as the base candidate.
  • UMVE is known as Merge with MVD (MMVD) .
  • P_TraditionalBiPred is the final predictor for conventional bi-prediction; P_L0 and P_L1 are predictors from L0 and L1, respectively; RoundingOffset and shiftNum are used to normalize the final predictor.
  • GBI Generalized Bi-prediction
  • P_GBi is the final predictor of GBi. (1-w1) and w1 are the selected GBi weights applied to the predictors of L0 and L1, respectively. RoundingOffset_GBi and shiftNum_GBi are used to normalize the final predictor in GBi.
  • the supported values of w1 are {-1/4, 3/8, 1/2, 5/8, 5/4}.
  • One equal-weight set and four unequal-weight sets are supported.
  • the process to generate the final predictor is exactly the same as that in the conventional bi-prediction mode.
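A plausible integer-arithmetic sketch of the GBi weighted average: representing w1 ∈ {-1/4, 3/8, 1/2, 5/8, 5/4} in eighths gives a rounding offset of 4 and a shift of 3. This illustrates the structure of the formula, not the normative derivation:

```python
# w1 values {-1/4, 3/8, 1/2, 5/8, 5/4} expressed with denominator 8.
GBI_W1_EIGHTHS = [-2, 3, 4, 5, 10]

def gbi_predict(p_l0, p_l1, w1_8):
    """GBi weighted average: ((1-w1)*P_L0 + w1*P_L1), normalized by a
    rounding offset of 4 and a shift of 3 (the eighth-denominator case)."""
    assert w1_8 in GBI_W1_EIGHTHS
    return ((8 - w1_8) * p_l0 + w1_8 * p_l1 + 4) >> 3
```

With w1_8 = 4 (the equal-weight set) the result reduces to the conventional bi-prediction average.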
  • the number of candidate weight sets is reduced to three.
  • the weight selection in GBI is explicitly signaled at CU-level if this CU is coded by bi-prediction.
  • the weight selection is inherited from the merge candidate.
  • GBI supports DMVR to generate the weighted average of template as well as the final predictor for BMS-1.0.
  • With locally adaptive motion vector resolution (LAMVR), motion vector differences (MVDs) can be coded in units of quarter luma samples, integer luma samples or four luma samples.
  • the MVD resolution is controlled at the coding unit (CU) level, and MVD resolution flags are conditionally signaled for each CU that has at least one non-zero MVD component.
  • a first flag is signaled to indicate whether quarter luma sample MV precision is used in the CU.
  • When the first flag (equal to 1) indicates that quarter luma sample MV precision is not used, another flag is signaled to indicate whether integer luma sample or four luma sample MV precision is used.
  • Otherwise, quarter luma sample MV resolution is used for the CU.
  • the MVPs in the AMVP candidate list for the CU are rounded to the corresponding precision.
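The two-flag resolution signaling and the MVP rounding above can be sketched as follows (illustrative names; units are quarter-luma samples):

```python
def decode_mvd_resolution(first_flag, second_flag=None):
    """Returns the MVD granularity in quarter-luma-sample units:
    1 = quarter-pel, 4 = integer-pel, 16 = four-pel."""
    if first_flag == 0:
        return 1                       # quarter luma sample precision
    return 4 if second_flag == 0 else 16

def round_mvp(mvp, granularity):
    """Round an AMVP predictor (in quarter-pel units) to the chosen
    precision, as the text requires for the candidate list."""
    return tuple(granularity * round(c / granularity) for c in mvp)
```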
  • the first MVD resolution flag is coded with one of three probability contexts: C0, C1 or C2; while the second MVD resolution flag is coded with a fourth probability context: C3.
  • the probability context Cx for the first MVD resolution flag is derived as (L represents the left neighbouring block and A represents the above neighbouring block) :
  • xL is set equal to 1; otherwise, xL is set equal to 0.
  • xA is set equal to 1; otherwise, xA is set equal to 0.
  • x is set equal to xL+xA.
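The context selection x = xL + xA can be sketched as below. The conditions that set xL and xA are elided in this extract, so the sketch assumes each is 1 when the corresponding neighbour uses a non-quarter-pel MVD resolution; that assumption is for illustration only:

```python
def first_flag_context(left_uses_imv, above_uses_imv):
    """Pick the probability context for the first MVD resolution flag from
    the left (L) and above (A) neighbours: x = xL + xA selects C0/C1/C2."""
    xL = 1 if left_uses_imv else 0
    xA = 1 if above_uses_imv else 0
    return ['C0', 'C1', 'C2'][xL + xA]
```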
  • CU-level RD checks are used to determine which MVD resolution is to be used for a CU. That is, the CU-level RD check is performed three times for each CU, once per MVD resolution.
  • the following encoding schemes are applied in the JEM.
  • the motion information of the current CU (integer luma sample accuracy) is stored.
  • the stored motion information (after rounding) is used as the starting point for further small range motion vector refinement during the RD check for the same CU with integer luma sample and 4 luma sample MVD resolution so that the time-consuming motion estimation process is not duplicated three times.
  • RD check of a CU with 4 luma sample MVD resolution is conditionally invoked.
  • When the RD cost of integer luma sample MVD resolution is much larger than that of quarter luma sample MVD resolution, the RD check of 4 luma sample MVD resolution for the CU is skipped.
  • LAMVR is also known as Integer Motion Vector (IMV) .
  • the current (partially) decoded picture is considered as a reference picture.
  • This current picture is put in the last position of reference picture list 0. Therefore, for a slice using the current picture as the only reference picture, its slice type is considered as a P slice.
  • the bitstream syntax in this approach follows the same syntax structure for inter coding while the decoding process is unified with inter coding. The only outstanding difference is that the block vector (which is the motion vector pointing to the current picture) always uses integer-pel resolution.
  • both block width and height are smaller than or equal to 16.
  • Enable adaptive motion vector resolution (AMVR) for CPR mode when the SPS flag is on.
  • a block vector can switch between 1-pel integer and 4-pel integer resolutions at block level.
  • the encoder performs RD check for blocks with either width or height no larger than 16.
  • the block vector search is performed using hash-based search first. If no valid candidate is found by hash search, block-matching-based local search will be performed.
  • In hash-based search, hash key matching (32-bit CRC) between the current block and a reference block is extended to all allowed block sizes.
  • the hash key calculation for every position in current picture is based on 4x4 blocks.
  • A hash key match to a reference block happens when all of its 4×4 blocks match the hash keys in the corresponding reference locations. If multiple reference blocks are found to match the current block with the same hash key, the block vector cost of each candidate is calculated and the one with minimum cost is selected.
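The 4×4-based hash matching described above can be sketched with a toy Python example using a 32-bit CRC per 4×4 block. This is an illustration only; a real encoder precomputes and caches keys rather than recomputing them per comparison:

```python
import zlib

def crc_4x4(picture, x, y):
    """32-bit CRC over the 4x4 luma block with top-left corner (x, y);
    `picture` is a 2D list of sample values in [0, 255]."""
    data = bytes(picture[y + j][x + i] for j in range(4) for i in range(4))
    return zlib.crc32(data)

def block_matches(picture, cur_xy, ref_xy, w, h):
    """A current w x h block matches a reference position only when every
    one of its 4x4 sub-blocks matches the corresponding reference key."""
    cx, cy = cur_xy
    rx, ry = ref_xy
    return all(
        crc_4x4(picture, cx + i, cy + j) == crc_4x4(picture, rx + i, ry + j)
        for j in range(0, h, 4) for i in range(0, w, 4)
    )
```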
  • the search range is set to 64 pixels to the left of and on top of the current block, and the search range is restricted to be within the current CTU.
  • Sub-block merge candidate list: it includes ATMVP and affine merge candidates.
  • One merge list construction process is shared for both affine modes and ATMVP mode. Here, the ATMVP and affine merge candidates may be added in order.
  • Sub-block merge list size is signaled in slice header, and maximum value is 5.
  • Uni-prediction TPM merge list: size is fixed to 5.
  • Regular merge list: for remaining coding blocks, one merge list construction process is shared. Here, the spatial/temporal/HMVP, pairwise combined bi-prediction merge candidates and zero motion candidates may be inserted in order. Regular merge list size is signaled in the slice header, and the maximum value is 6.
  • The sub-block related motion candidates are put in a separate merge list, named the 'sub-block merge candidate list'.
  • the sub-block merge candidate list includes affine merge candidates, an ATMVP candidate, and/or a sub-block based STMVP candidate.
  • the ATMVP merge candidate in the normal merge list is moved to the first position of the affine merge list.
  • All the merge candidates in the new list (i.e., the sub-block based merge candidate list) are based on sub-block coding tools.
  • An affine merge candidate list is constructed with following steps:
  • Inherited affine candidate means that the candidate is derived from the affine motion model of its valid neighbor affine coded block.
  • At most two inherited affine candidates are derived from the affine motion model of the neighboring blocks and inserted into the candidate list.
  • For the left predictor, the scan order is {A0, A1}; for the above predictor, the scan order is {B0, B1, B2}.
  • Constructed affine candidate means the candidate is constructed by combining the neighbor motion information of each control point.
  • the motion information for the control points is derived firstly from the specified spatial neighbors and temporal neighbor shown in Fig. 7.
  • T is temporal position for predicting CP4.
  • the coordinates of CP1, CP2, CP3 and CP4 are (0, 0), (W, 0), (0, H) and (W, H), respectively, where W and H are the width and height of the current block.
  • the motion information of each control point is obtained according to the following priority order:
  • the checking priority is B2->B3->A2.
  • B2 is used if it is available. Otherwise, if B3 is available, B3 is used. If both B2 and B3 are unavailable, A2 is used. If all three candidates are unavailable, the motion information of CP1 cannot be obtained.
  • the checking priority is B1->B0.
  • the checking priority is A1->A0.
  • Motion information of three control points is needed to construct a 6-parameter affine candidate.
  • the three control points can be selected from one of the following four combinations ( ⁇ CP1, CP2, CP4 ⁇ , ⁇ CP1, CP2, CP3 ⁇ , ⁇ CP2, CP3, CP4 ⁇ , ⁇ CP1, CP3, CP4 ⁇ ) .
  • Combinations ⁇ CP1, CP2, CP3 ⁇ , ⁇ CP2, CP3, CP4 ⁇ , ⁇ CP1, CP3, CP4 ⁇ will be converted to a 6-parameter motion model represented by top-left, top-right and bottom-left control points.
  • Motion information of two control points is needed to construct a 4-parameter affine candidate.
  • the two control points can be selected from one of the two combinations ( ⁇ CP1, CP2 ⁇ , ⁇ CP1, CP3 ⁇ ) .
  • the two combinations will be converted to a 4-parameter motion model represented by top-left and top-right control points.
  • the available combination of motion information of CPs is only added to the affine merge list when the CPs have the same reference index.
  • The MMVD idea is applied to affine merge candidates (named affine merge with prediction offset). An MVD (also named a "distance" or "offset") is signaled after an affine merge candidate is signaled. All CPMVs are added with the MVD to get the new CPMVs.
  • the distance table is specified as
  • a POC distance based offset mirroring method is used for Bi-prediction.
  • the offset applied to L0 is as signaled, and the offset on L1 depends on the temporal position of the reference pictures on list 0 and list 1.
  • If both reference pictures are on the same temporal side of the current picture, the same distance offset and same offset directions are applied for CPMVs of both L0 and L1.
  • Otherwise, the CPMVs of L1 will have the distance offset applied in the opposite offset direction.
  • the coding/parsing procedure for UMVE information may not be efficient, since it uses a truncated unary binarization method for coding distance (MVD precision) information and fixed length with bypass coding for the direction index. This is based on the assumption that the 1/4-pel precision takes the highest percentage; however, that is not true for all kinds of sequences.
  • DI may be binarized with fixed-length code, Exponential-Golomb code, truncated Exponential-Golomb code, Rice code, or any other codes.
  • the distance index may be signaled with more than one syntax element.
  • the distances may be divided into K sub-sets, where K is larger than 1.
  • a sub-set index (1st syntax) is first signaled, followed by a distance index (2nd syntax) within the sub-set.
  • mmvd_distance_subset_idx is first signaled, followed by mmvd_distance_idx_in_subset.
  • the mmvd_distance_idx_in_subset can be binarized with unary code, truncated unary code, fixed-length code, Exponential-Golomb code, truncated Exponential-Golomb code, Rice code, or any other codes.
  • mmvd_distance_idx_in_subset can be binarized as a flag if there are only two possible distances in the sub-set.
  • mmvd_distance_idx_in_subset is not signaled if there is only one possible distance in the sub-set.
• the maximum value is set to be the number of possible distances in the sub-set minus 1 if mmvd_distance_idx_in_subset is binarized as a truncated code.
  • one sub-set includes all fractional MVD precisions (e.g., 1/4-pel, 1/2-pel) .
  • the other sub-set includes all integer MVD precisions (e.g., 1-pel, 2-pel, 4-pel, 8-pel, 16-pel, 32-pel) .
  • one sub-set may only have one distance (e.g., 1/2-pel) , and the other sub-set has all the remaining distances.
  • a first sub-set includes fractional MVD precisions (e.g., 1/4-pel, 1/2-pel) ; a second sub-set includes integer MVD precisions less than 4-pel (e.g. 1-pel, 2-pel) ; and a third sub-set includes all other MVD precisions (e.g. 4-pel, 8-pel, 16-pel, 32-pel) .
• there are K sub-sets, and K is set equal to the number of allowed MVD precisions in LAMVR.
• the signaling of the sub-set index may reuse what is done for LAMVR (e.g., reusing the way to derive the context offset index; reusing the contexts, etc.)
  • Distances within a sub-set may be decided by the associated LAMVR index (e.g., AMVR_mode in specifications) .
  • how to define sub-sets and/or how many sub-sets may be pre-defined or adaptively changed on-the-fly.
  • the 1 st syntax may be coded with truncated unary, fixed-length code, Exponential-Golomb code, truncated Exponential-Golomb code, Rice code, or any other codes.
  • the 2 nd syntax may be coded with truncated unary, fixed-length code, Exponential-Golomb code, truncated Exponential-Golomb code, Rice code, or any other codes.
  • the sub-set index (e.g., 1 st syntax) may be not explicitly coded in the bitstream.
  • the sub-set index may be derived on-the-fly, e.g., based on coded information of current block (e.g., block dimension) and/or previously coded blocks.
  • the distance index within a sub-set (e.g., 2 nd syntax) may be not explicitly coded in the bitstream.
  • the 2 nd syntax may be derived on-the-fly, e.g., based on coded information of current block (e.g., block dimension) and/or previously coded blocks.
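As an illustration of the two-syntax signaling above, a minimal sketch using the fractional/integer partition from the examples (the sub-set contents and function names are illustrative; the syntax names mirror mmvd_distance_subset_idx and mmvd_distance_idx_in_subset):

```python
# Hypothetical sub-set partition following the examples above:
SUBSETS = [
    [0.25, 0.5],           # fractional MVD precisions (in pel)
    [1, 2, 4, 8, 16, 32],  # integer MVD precisions (in pel)
]

def encode_distance(distance):
    """Return (mmvd_distance_subset_idx, mmvd_distance_idx_in_subset)."""
    for subset_idx, subset in enumerate(SUBSETS):
        if distance in subset:
            return subset_idx, subset.index(distance)
    raise ValueError("distance not in any sub-set")

def decode_distance(subset_idx, idx_in_subset):
    return SUBSETS[subset_idx][idx_in_subset]
```

When a sub-set has only two entries the inner index degenerates to a flag, and with a single entry it need not be signaled at all, matching the bullets above.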
  • a first resolution bin is signaled to indicate whether DI is smaller than a predefined number T or not.
  • a first resolution bin is signaled to indicate whether the distance is smaller than a predefined number or not.
  • mmvd_resolution_flag is first signaled followed by mmvd_distance_idx_in_subset.
• mmvd_resolution_flag is first signaled, followed by mmvd_short_distance_idx_in_subset when mmvd_resolution_flag is equal to 0, or mmvd_long_distance_idx_in_subset when mmvd_resolution_flag is equal to 1.
  • the distance index number T corresponds to 1-Pel distance.
• T is set equal to 2 with the Table 2a defined in VTM-3.0.
  • the distance index number T corresponds to 1/2-Pel distance.
• T is set equal to 1 with the Table 2a defined in VTM-3.0.
  • the distance index number T corresponds to W-Pel distance.
• T is set equal to 3, corresponding to 2-Pel distance, with the Table 2a defined in VTM-3.0.
  • the first resolution bin is equal to 0 if DI is smaller than T.
  • the first resolution bin is equal to 1 if DI is smaller than T.
• a code for the short distance index is further signaled after the first resolution bin to indicate the value of DI.
  • DI is signaled.
  • DI can be binarized with unary code, truncated unary code, fixed-length code, Exponential-Golomb code, truncated Exponential-Golomb code, Rice code, or any other codes.
  • T-1-DI can be binarized with unary code, truncated unary code, fixed-length code, Exponential-Golomb code, truncated Exponential-Golomb code, Rice code, or any other codes.
  • T-1-DI is binarized as a truncated code such as truncated unary code
  • the maximum coded value is T-1.
• DI is reconstructed as T-S-1, where S is the signaled value.
• a code for the long distance index is further signaled after the first resolution bin to indicate the value of DI.
  • DI-T can be binarized with unary code, truncated unary code, fixed-length code, Exponential-Golomb code, truncated Exponential-Golomb code, Rice code, or any other codes.
  • DI-T is binarized as a truncated code such as truncated unary code
  • the maximum coded value is DMax –T, where DMax is the maximum allowed distance, such as 7 in VTM-3.0.
• DI is reconstructed as B+T, where B is the signaled value.
• B’ = DMax-DI is signaled, where DMax is the maximum allowed distance, such as 7 in VTM-3.0.
  • DMax-DI can be binarized with unary code, truncated unary code, fixed-length code, Exponential-Golomb code, truncated Exponential-Golomb code, Rice code, or any other codes.
  • DMax-DI is binarized as a truncated code such as truncated unary code
  • the maximum coded value is DMax –T, where DMax is the maximum allowed distance, such as 7 in VTM-3.0.
• DI is reconstructed as DMax-B’.
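One concrete combination of the options listed above can be sketched as follows, taking the conventions that the first resolution bin equals 1 when DI < T, the short code is DI itself, and the long code is DI-T, with T = 2 and DMax = 7 as in the VTM-3.0 examples (this is one of several listed variants, not the only one):

```python
T = 2     # threshold: indices 0..1 are "short" (T = 2 with Table 2a)
DMAX = 7  # maximum allowed distance index in VTM-3.0

def encode_di(di):
    """Return (resolution_bin, remainder): bin = 1 when DI < T,
    short code = DI (max T-1), long code = DI - T (max DMAX - T)."""
    if di < T:
        return 1, di
    return 0, di - T

def decode_di(resolution_bin, remainder):
    return remainder if resolution_bin == 1 else remainder + T
```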
  • a first resolution bin is signaled to indicate whether DI is greater than a predefined number T or not.
  • a first resolution bin is signaled to indicate whether the distance is greater than a predefined number or not.
  • the distance index number T corresponds to 1-Pel distance.
• T is set equal to 2 with the Table 2a defined in VTM-3.0.
  • the distance index number T corresponds to 1/2-Pel distance.
• T is set equal to 1 with the Table 2a defined in VTM-3.0.
  • the distance index number T corresponds to W-Pel distance.
• T is set equal to 3, corresponding to 2-Pel distance, with the Table 2a defined in VTM-3.0.
  • the first resolution bin is equal to 0 if DI is greater than T.
  • the first resolution bin is equal to 1 if DI is greater than T.
• a code for the short distance index is further signaled after the first resolution bin to indicate the value of DI.
  • DI is signaled.
  • DI can be binarized with unary code, truncated unary code, fixed-length code, Exponential-Golomb code, truncated Exponential-Golomb code, Rice code, or any other codes.
  • T-DI can be binarized with unary code, truncated unary code, fixed-length code, Exponential-Golomb code, truncated Exponential-Golomb code, Rice code, or any other codes.
  • T-DI is binarized as a truncated code such as truncated unary code
  • the maximum coded value is T.
• a code for the long distance index is further signaled after the first resolution bin to indicate the value of DI.
  • DI-1-T can be binarized with unary code, truncated unary code, fixed-length code, Exponential-Golomb code, truncated Exponential-Golomb code, Rice code, or any other codes.
  • DI-1-T is binarized as a truncated code such as truncated unary code
  • the maximum coded value is DMax -1–T, where DMax is the maximum allowed distance, such as 7 in VTM-3.0.
• DI is reconstructed as B+T+1, where B is the signaled value.
• B’ = DMax-DI is signaled, where DMax is the maximum allowed distance, such as 7 in VTM-3.0.
  • DMax-DI can be binarized with unary code, truncated unary code, fixed-length code, Exponential-Golomb code, truncated Exponential-Golomb code, Rice code, or any other codes.
  • DMax-DI is binarized as a truncated code such as truncated unary code
  • the maximum coded value is DMax –1-T, where DMax is the maximum allowed distance, such as 7 in VTM-3.0.
• DI is reconstructed as DMax-B’.
  • the 1 st syntax is the first resolution bin mentioned above.
  • which probability context is used is derived from the first resolution bins of neighbouring blocks.
  • which probability context is used is derived from the LAMVR values of neighbouring blocks (e.g., AMVR_mode values) .
  • the 2 nd syntax is the short distance index mentioned above.
  • the first bin to code the short distance index is coded with a probability context, and other bins are by-pass coded.
  • the first N bins to code the short distance index are coded with probability contexts, and other bins are by-pass coded.
  • all bins to code the short distance index are coded with probability contexts.
  • different bins may have different probability contexts.
  • which probability context is used is derived from the short distance indices of neighbouring blocks.
  • the 2 nd syntax is long distance index mentioned above.
  • the first bin to code the long distance index is coded with a probability context, and other bins are by-pass coded.
  • the first N bins to code the long distance index are coded with probability contexts, and other bins are by-pass coded.
  • all bins to code the long distance index are coded with probability contexts.
  • different bins may have different probability contexts.
  • which probability context is used is derived from the long distance indices of neighbouring blocks.
  • the 1 st syntax (e.g., first resolution bin) is coded depending on the probability models used to code the LAMVR information.
  • the first resolution bin is coded with the same manner (e.g., shared context, or same context index derivation method but with neighboring blocks’ LAMVR information replaced by MMVD information) as coding the first MVD resolution flag.
  • which probability context is used to code the first resolution bin is derived from the LAMVR information of neighbouring blocks.
  • which probability context is used to code the first resolution bin is derived from the first MVD resolution flags of neighbouring blocks.
• the first MVD resolution flag is coded and serves as the first resolution bin when the distance index is coded.
  • which probability model is used to code the first resolution bin may depend on the coded LAMVR information.
  • which probability model is used to code the first resolution bin may depend on the MV resolution of neighbouring blocks.
  • the first bin to code the short distance index is coded with a probability context.
  • the first bin to code the short distance index is coded with the same manner (e.g., shared context, or same context index derivation method but with neighboring blocks’ LAMVR information replaced by MMVD information) as coding the second MVD resolution flag.
• the second MVD resolution flag is coded and serves as the first bin to code the short distance index when the distance index is coded.
  • which probability model is used to code the first bin to code the short distance index may depend on the coded LAMVR information.
  • which probability model is used to code the first bin to code the short distance index may depend on the MV resolution of neighbouring blocks.
  • the first bin to code the long distance index is coded with a probability context.
  • the first bin to code the long distance index is coded with the same manner (e.g., shared context, or same context index derivation method but with neighboring blocks’ LAMVR information replaced by MMVD information) as coding the second MVD resolution flag.
• the second MVD resolution flag is coded and serves as the first bin to code the long distance index when the distance index is coded.
  • which probability model is used to code the first bin to code the long distance index may depend on the coded LAMVR information.
  • which probability model is used to code the first bin to code the long distance index may depend on the MV resolution of neighbouring blocks.
• the first MVD resolution flag is coded with one of three probability contexts: C0, C1 or C2; while the second MVD resolution flag is coded with a fourth probability context: C3. Examples to derive the probability context to code the distance index are described below.
  • xL is set equal to 1; otherwise, xL is set equal to 0.
  • xA is set equal to 1; otherwise, xA is set equal to 0.
  • x is set equal to xL+xA.
  • the probability context for the first bin to code the long distance index is C3.
  • the probability context for the first bin to code the short distance index is C3.
  • a short distance index is signaled to indicate the MMVD distance in a first sub-set.
  • the short distance index can be 0 or 1, to represent the MMVD distance to be 1/4-pel or 1/2 -pel, respectively.
  • a medium distance index is signaled to indicate the MMVD distance in a second sub-set.
  • the medium distance index can be 0 or 1, to represent the MMVD distance to be 1-pel or 2-pel, respectively.
  • a long distance index is signaled to indicate the MMVD distance.
• the long distance index can be X, to represent the MMVD distance to be (4&lt;&lt;X)-pel.
  • a sub-set distance index may refer to a short distance index, or a medium distance index, or a long distance index.
  • the sub-set distance index can be binarized with unary code, truncated unary code, fixed-length code, Exponential-Golomb code, truncated Exponential-Golomb code, Rice code, or any other codes.
  • a sub-set distance index can be binarized as a flag if there are only two possible distances in the sub-set.
  • a sub-set distance index is not signaled if there is only one possible distance in the sub-set.
• the maximum value is set to be the number of possible distances in the sub-set minus 1 if a sub-set distance index is binarized as a truncated code.
  • the first bin to code the sub-set distance index is coded with a probability context, and other bins are by-pass coded.
  • the first N bins to code the sub-set distance index are coded with probability contexts, and other bins are by-pass coded.
  • all bins to code the sub-set distance index are coded with probability contexts.
  • different bins may have different probability contexts.
• the distances signaled in the short distance sub-set must be sub-pel, not integer-pel.
• 5/4-pel, 3/2-pel and 7/4-pel may be in the short distance sub-set, but 3-pel cannot be in the short distance sub-set.
  • the distances signaled in the medium distance sub-set must be integer-pel but not in a form of 4N, where N is an integer.
  • 3-pel, 5-pel may be in the medium distance sub-set, but 24-pel cannot be in the medium distance sub-set.
  • the distances signaled in the long distance sub-set must be integer-pel in a form of 4N, where N is an integer.
  • 4-pel, 8-pel, 16-pel or 24-pel may be in the long distance sub-set.
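A sketch of how a decoder could turn a sub-set distance index into an MMVD distance under the constraints above; the concrete sub-set contents and the (4&lt;&lt;X)-pel form for the long sub-set follow the examples given, and the function name is illustrative:

```python
def mmvd_distance_from_subset(subset, idx):
    """subset: 0 = short, 1 = medium, 2 = long; idx: index within sub-set."""
    if subset == 0:              # short: sub-pel distances only
        return [0.25, 0.5][idx]  # 1/4-pel or 1/2-pel
    if subset == 1:              # medium: integer-pel, not a multiple of 4
        return [1, 2][idx]       # 1-pel or 2-pel
    return 4 << idx              # long: (4 << X)-pel -> 4, 8, 16, 32, ...
```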
  • a variable to store the MV resolution of the current block may be decided by the UMVE distance.
  • T1 and T2 can be any numbers.
  • a variable to store the MV resolution of the current block may be decided by the UMVE distance index.
• the variable to store the MV resolution of UMVE coded blocks may be utilized for coding following blocks which are coded with LAMVR mode.
• the variable to store the MV resolution of UMVE coded blocks may be utilized for coding following blocks which are coded with UMVE mode.
  • the MV precisions of LAMVR-coded blocks may be utilized for coding following UMVE-coded blocks.
  • the above bullets may be also applicable to coding the direction index.
• the mapping may be piece-wise.
  • distance table size may be larger than 8, such as 9, 10, 12, 16.
  • distance shorter than 1/4-pel may be included in the distance table, such as 1/8-pel, 1/16-pel or 3/8-pel.
• distances not in the form of 2^X-pel may be included in the distance table, such as 3-pel, 5-pel, 6-pel, etc.
  • the distance table may be different for different directions.
  • the parsing process for the distance index may be different for different directions.
  • two x-directions with the direction index 0 and 1 have the same distance table.
  • two y-directions with the direction index 2 and 3 have the same distance table.
  • x-directions and y-directions may have two different distance tables.
  • the parsing process for the distance index may be different for x-directions and y-directions.
• the distance table for y-directions may have fewer possible distances than the distance table for x-directions.
  • the shortest distance in the distance table for y-directions may be shorter than the shortest distance in the distance table for x-directions.
  • the longest distance in the distance table for y-directions may be shorter than the longest distance in the distance table for x-directions.
  • different distance tables may be used for different block width when the direction is along the x-axis.
  • different distance tables may be used for different block height when the direction is along the y-axis.
  • POC difference is calculated as
  • the delta of two distances (MVD precisions) with consecutive indices may be fixed for all indices.
  • the delta of two distances (MVD precisions) with consecutive indices may be different for different indices.
  • the ratio of two distances (MVD precisions) with consecutive indices may be different for different indices.
  • the set of distances such as ⁇ 1-pel, 2-pel, 4-pel, 8-pel, 16-pel, 32-pel, 48-pel, 64-pel ⁇ may be used.
  • the set of distances such as ⁇ 1-pel, 2-pel, 4-pel, 8-pel, 16-pel, 32-pel, 64-pel, 96-pel ⁇ may be used.
  • the set of distances such as ⁇ 1-pel, 2-pel, 3-pel, 4-pel, 5-pel, 16-pel, 32-pel ⁇ may be used.
  • the signaling of MMVD side information may be done in the following way:
  • a MMVD flag may be firstly signaled, followed by a sub-set index of distance, a distance index within the sub-set, a direction index.
  • MMVD is treated as a different mode from merge mode.
  • a MMVD flag may be further signaled, followed by a sub-set index of distance, a distance index within the sub-set, a direction index.
  • MMVD is treated as a special merge mode.
  • the direction of MMVD and the distance of MMVD may be signaled jointly.
  • whether and how to signal MMVD distance may depend on MMVD direction.
  • whether and how to signal MMVD direction may depend on MMVD distance.
  • a joint codeword is signaled with one or more syntax elements.
  • the MMVD distance and MMVD direction can be derived from the code word.
  • the codeword is equal to MMVD distance index + MMVD direction index *7.
• a MMVD codeword table is designed. Each codeword corresponds to a unique combination of MMVD distance and MMVD direction.
• the distance table size is 9:
• the distance table size is 10:
• the distance table size is 12:
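The joint codeword in the example above (codeword = MMVD distance index + MMVD direction index * 7) can be inverted with a modulo/division pair; the table size 7 is taken from that formula, and the function names are illustrative:

```python
TABLE_SIZE = 7  # distance-table size assumed by the codeword formula above

def pack(distance_idx, direction_idx):
    """Joint codeword = distance index + direction index * table size."""
    assert 0 <= distance_idx < TABLE_SIZE
    return distance_idx + direction_idx * TABLE_SIZE

def unpack(codeword):
    """Recover (MMVD distance index, MMVD direction index)."""
    return codeword % TABLE_SIZE, codeword // TABLE_SIZE
```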
  • MMVD distance may be signaled in a granularity-signaling method. The distance is firstly signaled by index with a rough granularity, followed by one or more indices with finer granularities.
• a first index F1 represents distances in an ordered set M1; a second index F2 represents distances in an ordered set M2.
• the final distance is calculated from the indices, such as M1[F1] + M2[F2].
• a first index F1 represents distances in an ordered set M1; a second index F2 represents distances in an ordered set M2; and so on, until an nth index Fn represents distances in an ordered set Mn.
• the final distance is calculated as M1[F1] + M2[F2] + ... + Mn[Fn]
• the signaling or binarization of Fk may depend on the signaled Fk-1.
• the entries in Mk[Fk] may depend on the signaled Fk-1.
• M1 = {1/4-pel, 1-pel, 4-pel, 8-pel, 16-pel, 32-pel, 64-pel, 128-pel},
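The granularity-signaling above can be sketched with M1 as given in the bullet and a hypothetical finer-granularity set M2 (M2's contents are an assumption for illustration; the sum formula follows the bullets):

```python
# M1 is taken from the bullet above; M2 is a hypothetical finer set.
M1 = [0.25, 1, 4, 8, 16, 32, 64, 128]  # rough granularity (pel)
M2 = [0, 0.25, 0.5, 0.75]              # finer refinement (assumed)

def mmvd_distance(f1, f2):
    """Final distance = M1[F1] + M2[F2], per the formula above."""
    return M1[f1] + M2[f2]
```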
  • MMVD side information e.g., MMVD distance
  • interpret the signaled MMVD side information e.g., distance index to the distance
  • a code table index is signaled or inferred at a higher level.
  • a specific code table is determined by the table index.
  • the distance index may be signaled with the methods disclosed in item 1-26. Then the distance is derived by querying the entry with the signaled distance index in the specific code table.
  • a parameter X is signaled or inferred at a higher level.
  • the distance index may be signaled with the methods disclosed in item 1-26.
  • valid MV resolution is signaled or inferred at a higher level. Only the MMVD distances with valid MV resolution can be signaled.
  • the signaling method of MMVD information at CU level may depend on the valid MV resolutions signaled at a higher level.
  • the signaling method of MMVD distance resolution information at CU level may depend on the valid MV resolutions signaled at a higher level.
  • the number of distance sub-sets may depend on the valid MV resolutions signaled at a higher level.
  • each sub-set may depend on the valid MV resolutions signaled at a higher level.
  • a minimum MV resolution (such as 1/4-pel, or 1-pel, or 4-pel) is signaled.
  • the short distance sub-set represented by the short distance index is redefined as the very-long distance sub-set.
  • two distances that can be signaled within this very-long sub-set are 64-pel and 128-pel.
  • the encoder may decide whether the slice/tile/picture/sequence/group of CTUs/group of blocks is screen content or not by checking the ratio of the block that has one or more similar or identical blocks within the same slice/tile/picture/sequence/group of CTUs/group of blocks.
• if the ratio is larger than a threshold, it is considered as screen content.
• if the ratio is larger than a first threshold and smaller than a second threshold, it is considered as screen content.
  • the slice/tile/picture/sequence/group of CTUs/group of blocks may be split into MxN non-overlapped blocks.
• for each MxN block, the encoder checks whether there is another (or more) MxN block similar to or identical to it. For example, MxN is equal to 4x4.
• In one example, only part of the blocks is checked when calculating the ratio. For example, only blocks in even rows and even columns are checked.
• a key value (e.g., a cyclic redundancy check (CRC) code) may be calculated for each block and used to check whether two blocks are identical.
  • the key value may be generated using only some color components of the block.
  • the key value is generated by using luma component only.
  • the key value may be generated using only some pixels of the block. For example, only even rows of the block are used.
  • SAD/SATD/SSE or mean-removed SAD/SATD/SSE may be used to measure the similarity of two blocks.
  • SAD/SATD/SSE or mean-removed SAD/SATD/SSE may be only calculated for some pixels.
  • SAD/SATD/SSE or mean-removed SAD/SATD/SSE is only calculated for the even rows.
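The encoder-side screen-content decision above (split into MxN blocks, find duplicates via a key value, compare the duplicate ratio to a threshold) can be sketched as below. This is a minimal sketch assuming 4x4 blocks, a luma-only CRC-32 key via zlib.crc32, and exact-match duplicates only; function names and the default threshold are illustrative:

```python
import zlib

def screen_content_ratio(luma, width, height, m=4, n=4):
    """Ratio of MxN blocks having at least one identical block elsewhere,
    using a CRC key on the luma samples (a sketch of the bullets above)."""
    counts = {}
    block_keys = []
    for y in range(0, height - n + 1, n):
        for x in range(0, width - m + 1, m):
            block = bytes(luma[(y + j) * width + x + i]
                          for j in range(n) for i in range(m))
            key = zlib.crc32(block)
            counts[key] = counts.get(key, 0) + 1
            block_keys.append(key)
    dup = sum(1 for k in block_keys if counts[k] > 1)
    return dup / len(block_keys) if block_keys else 0.0

def is_screen_content(luma, width, height, threshold=0.5):
    return screen_content_ratio(luma, width, height) > threshold
```

A real encoder would confirm CRC matches with a full sample comparison, since distinct blocks can share a CRC value.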
  • the indication of usage of affine MMVD may be signaled only when the affine mode is enabled.
  • the indication of usage of affine MMVD may be signaled only when the affine mode is enabled and there are more than 1 base affine candidate.
  • the MMVD method can also be applied to other sub-block based coding tools in addition to affine mode, such as ATMVP mode.
  • MMVD on/off flag is set to be 1
  • MMVD is applied to ATMVP.
• one set of MMVD side information may be applied to all sub-blocks; in this case, one set of MMVD side information is signaled.
  • different sub-blocks may choose different sets, in this case, multiple sets of MMVD side information may be signaled.
  • the MV of each sub-block is added with the signaled MVD (a.k.a. offset or distance)
  • the method to signal the MMVD information when the sub-block merge candidate is an ATMVP merge candidate is the same to the method when the sub-block merge candidate is an affine merge candidate.
  • a POC distance based offset mirroring method is used for Bi-prediction to add the MVD on the MV of each sub-block when the sub-block merge candidate is an ATMVP merge candidate.
  • the MV of each sub-block is added with the signaled MVD (a.k.a. offset or distance) when the sub-block merge candidate is an affine merge candidate.
  • the LAMVR information used to signal the MMVD information of affine MMVD mode may be different from the LAMVR information used to signal the MMVD information of non-affine MMVD mode.
  • the LAMVR information used to signal the MMVD information of affine MMVD mode are also used to signal the MV precision used in affine inter-mode; but the LAMVR information used to signal the MMVD information of non-affine MMVD mode is used to signal the MV precision used in non-affine inter-mode.
  • the MVD information in MMVD mode for sub-block merge candidates should be signaled in the same way as the MVD information in MMVD mode for regular merge candidates.
• For example, they share the same mapping between a distance index and a distance.
  • the MMVD side information signalling may be dependent on the coded mode, such as affine or normal merge or triangular merge mode or ATMVP mode.
  • the pre-defined MMVD side information may be dependent on the coded mode, such as affine or normal merge or triangular merge mode or ATMVP mode.
  • the pre-defined MMVD side information may be dependent on the color subsampling method (e.g., 4: 2: 0, 4: 2: 2, 4: 4: 4) , and/or color component.
  • MMVD can be applied on triangular prediction mode.
• After a TPM merge candidate is signaled, the MMVD information is signaled. The signaled TPM merge candidate is treated as the base merge candidate.
  • the MMVD information is signaled with the same signaling method as the MMVD for regular merge;
  • the MMVD information is signaled with the same signaling method as the MMVD for affine merge or other kinds of sub-block merge;
  • the MMVD information is signaled with a signaling method different to that of the MMVD for regular merge, affine merge or other kinds of sub-block merge;
  • the MV of each triangle partition is added with the signaled MVD;
• the MV of one triangle partition is added with the signaled MVD, and the MV of the other triangle partition is added with f (signaled MVD), where f is any function.
  • f depends on the reference picture POCs or reference indices of the two triangle partitions.
• f (MVD) = -MVD if the reference picture of one triangle partition is before the current picture in display order and the reference picture of the other triangle partition is after the current picture in display order.
  • the MMVD side information may include e.g., the offset table (distance) , and direction information.
  • This section shows some embodiments for the improved MMVD design.
  • Embodiment #1 (MMVD distance index coding)
  • a first resolution bin is coded. For example, it may be coded with the same probability context as the first flag of MV resolution.
  • a following flag is coded. For example, it may be coded with another probability context to indicate the short distance index. If the flag is 0, the index is 0; if the flag is 1, the index is 1.
• the long distance index L is coded as a truncated unary code, with the maximum value MaxDI - 2, where MaxDI is the maximum possible distance index, equal to 7 in the embodiment. After parsing out L, the distance index is reconstructed as L+2.
• the first bin of the long distance index is coded with a probability context, and other bins are by-pass coded.
  • the mmvd_distance_subset_idx represents the resolution index as mentioned above
  • mmvd_distance_idx_in_subset represents the short or long distance index according to the resolution index.
• Truncated unary may be used to code mmvd_distance_idx_in_subset.
• the embodiment can achieve 0.15% coding gain on average and 0.34% gain on UHD sequences (class A1).
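The parsing in Embodiment #1 can be sketched as follows; bin reading is abstracted behind a callback, context modelling is omitted, and the convention that a first resolution bin of 0 means "short" is an assumption (the embodiment does not fix the bin polarity):

```python
MAX_DI = 7  # maximum possible distance index in the embodiment

def parse_mmvd_distance_idx(read_bin):
    """Sketch of Embodiment #1 parsing. read_bin() returns the next
    decoded bin. Short path: one flag gives index 0 or 1. Long path:
    truncated unary L with maximum MaxDI - 2, then index = L + 2."""
    if read_bin() == 0:      # first resolution bin -> short distance
        return read_bin()    # following flag: index 0 or 1
    L = 0
    while L < MAX_DI - 2 and read_bin() == 1:
        L += 1
    return L + 2
```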
  • Embodiment #2 (MMVD side information coding)
  • MMVD is treated as a separate mode which is not treated as a merge mode. Therefore, MMVD flag may be further coded only when merge flag is 0.
  • the MMVD information is signaled as:
  • mmvd_distance_idx_in_subset [x0] [y0] is binarized as a truncated unary code.
• the maximum value of the truncated unary code is 1 if amvr_mode [x0] [y0] is less than 2; otherwise (amvr_mode [x0] [y0] is equal to 2), the maximum value is set to be 3.
• mmvd_distance_idx [x0] [y0] is set equal to mmvd_distance_idx_in_subset [x0] [y0] + 2*amvr_mode [x0] [y0] .
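The index reconstruction in Embodiment #2 can be sketched as below: amvr_mode selects the sub-set (assumed here to take values 0, 1 or 2), the in-subset index has maximum 1 when amvr_mode is less than 2 and 3 otherwise, and the final index is idx_in_subset + 2*amvr_mode:

```python
def mmvd_distance_idx(amvr_mode, idx_in_subset):
    """Embodiment #2: sub-set selected by amvr_mode (0, 1 or 2);
    final index = idx_in_subset + 2 * amvr_mode."""
    max_in_subset = 1 if amvr_mode < 2 else 3
    assert 0 <= idx_in_subset <= max_in_subset
    return idx_in_subset + 2 * amvr_mode
```

This covers indices 0..7: amvr_mode 0 yields 0-1, amvr_mode 1 yields 2-3, and amvr_mode 2 yields 4-7.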
  • Embodiment #3 (MMVD slice level control)
  • a syntax element mmvd_integer_flag is signaled.
• sps_fracmmvd_enabled_flag equal to 1 specifies that slice_fracmmvd_flag is present in the slice header syntax for B slices and P slices.
• sps_fracmmvd_enabled_flag equal to 0 specifies that slice_fracmmvd_flag is not present in the slice header syntax for B slices and P slices.
  • slice_fracmmvd_flag specifies the distance table used to derive MmvdDistance [x0] [y0] .
• When not present, the value of slice_fracmmvd_flag is inferred to be 1.
  • the MMVD information is signaled as:
  • mmvd_distance_idx_in_subset [x0] [y0] is binarized as a truncated unary code.
• the maximum value of the truncated unary code is 1 if amvr_mode [x0] [y0] is less than 2; otherwise (amvr_mode [x0] [y0] is equal to 2), the maximum value is set to be 3.
• mmvd_distance_idx [x0] [y0] is set equal to mmvd_distance_idx_in_subset [x0] [y0] + 2*amvr_mode [x0] [y0] .
• the probability context used by mmvd_distance_idx_in_subset [x0] [y0] depends on amvr_mode [x0] [y0] .
  • the array indices x0, y0 specify the location (x0, y0) of the top-left luma sample of the considered coding block relative to the top-left luma sample of the picture.
• the mapping between mmvd_distance_idx [x0] [y0] and MmvdDistance [x0] [y0] is as follows:
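The referenced mapping table is elided above; in VTM-3.0 the default mapping is exponential. The sketch below assumes the fractional table {1/4, 1/2, 1, 2, 4, 8, 16, 32} pel for indices 0..7, and interprets slice_fracmmvd_flag equal to 0 as scaling the whole table by 4 (an interpretation consistent with Embodiment #3, not a quotation of the specification):

```python
def mmvd_distance_in_pel(mmvd_distance_idx, fracmmvd=True):
    """Exponential index-to-distance mapping: 2^idx quarter-pel,
    i.e. 1/4, 1/2, 1, ..., 32 pel. With fracmmvd False the table
    is assumed to be scaled by 4 (integer-only distances)."""
    d = 2 ** mmvd_distance_idx / 4.0
    return d if fracmmvd else d * 4
```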
  • FIG. 10 is a flowchart for an example method 1000 for video processing.
• the method 1000 includes determining (1002) , for a current video block which is coded using a merge with motion vector difference (MMVD) mode, a first relationship between a distance and a distance index (DI) , wherein the distance is a distance between a motion vector of the current video block and a base candidate selected from a merge candidate list; and performing (1004) , based on the first relationship, a conversion between the current video block and a bitstream representation of the current video block.
  • FIG. 11 is a flowchart for an example method 1100 for video processing.
  • the method 1100 includes performing (1102) a conversion between a current video block and a bitstream representation of the current video block, wherein the current video block is coded using a merge with motion vector difference (MMVD) mode; wherein the conversion comprises parsing MMVD side information from or writing the MMVD side information into the bitstream representation, wherein the MMVD side information comprises at least one of a MMVD flag indicating whether MMVD syntaxes are parsed, a first syntax element indicating a distance of MMVD between a motion vector of the current video block and a base candidate selected from a merge candidate list, a second syntax element indicating a direction of MMVD representing a direction of motion vector difference (MVD) relative to the base candidate.
  • FIG. 12 is a flowchart for an example method 1200 for video processing.
  • the method 1200 includes determining (1202) at least one distance for motion vector difference (MVD) associated with a current video block, which is coded in a merge with motion vector difference (MMVD) mode, from a first distance with a rough granularity and one or more distances with fine granularities; and performing (1204) a conversion between a current video block and a bitstream representation of the current video block based on the distance for MVD.
  • a method for video processing comprising: determining, for a current video block which is coded using a merge with motion vector difference (MMVD) mode, a first relationship between a distance and a distance index (DI) , wherein the distance is a distance between a motion vector of the current video block and a base candidate selected from a merge candidate list; and performing, based on the first relationship, a conversion between the current video block and a bitstream representation of the current video block.
  • the first relationship is different from a single exponential relationship.
  • the single exponential relationship is specified as:
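A single exponential relationship of the kind referred to above doubles the distance with each increment of the distance index. The sketch below assumes the common default mapping distance = 2^DI / 4 pel, which yields the familiar {1/4, 1/2, 1, 2, 4, 8, 16, 32}-pel set; the function name is ours:

```python
# Single-exponential MMVD distance mapping (illustrative): the distance
# doubles with each increment of the distance index DI.
def exponential_distance(di: int) -> float:
    """Map a distance index DI to an MMVD distance in pel: 2^DI / 4."""
    return (1 << di) / 4.0

# DI = 0..7 yields the common {1/4, 1/2, 1, 2, 4, 8, 16, 32}-pel set.
distances = [exponential_distance(di) for di in range(8)]
```

The claims above contrast this single exponential with first relationships that cannot be expressed this way, such as piece-wise mappings or explicit distance tables.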
  • the first relationship is based on a piece-wise mapping.
  • the piece-wise mapping is specified as:
  • the first relationship is represented as at least one distance table including at least one distance, as an entry, indicated with a distance index.
  • the distance table includes more than 8 entries.
  • the distance table comprises 9, 10, 12 or 16 entries.
  • the distance table comprises one or more entries shorter than 1/4-pel.
  • the one or more entries have one of 1/8-pel, 1/16-pel and 3/8-pel precision.
  • the distance table comprises one or more entries not in the form of 2^X-pel, wherein X is an integer.
  • the one or more entries have one of 3-pel, 5-pel and 6-pel precision.
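Representing the first relationship as an explicit distance table makes entries finer than 1/4-pel and entries not of the form 2^X-pel straightforward; the 10-entry table below is a hypothetical illustration, not a table mandated by the claims:

```python
# Hypothetical 10-entry MMVD distance table (in pel) illustrating the
# options above: entries finer than 1/4-pel (1/16, 1/8) and entries that
# are not powers of two (3, 6).
DISTANCE_TABLE = [1/16, 1/8, 1/4, 1/2, 1, 2, 3, 4, 6, 8]

def distance_from_index(di: int) -> float:
    """Look up the MMVD distance for a given distance index."""
    if not 0 <= di < len(DISTANCE_TABLE):
        raise ValueError("distance index out of range")
    return DISTANCE_TABLE[di]
```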
  • different distance tables are used for different directions with different direction indexes, wherein the direction represents a direction of motion vector difference (MVD) relative to the base candidate.
  • MVD motion vector difference
  • the different directions are two horizontal directions with different direction indexes or two vertical directions with different direction indexes.
  • a same distance table is shared by two horizontal directions with different direction indexes or shared by two vertical directions with different direction indexes.
  • the distance table for a vertical direction has a smaller size than the distance table for a horizontal direction.
  • a minimum entry in the distance table for a vertical direction is smaller than a minimum entry in the distance table for a horizontal direction.
  • a maximum entry in the distance table for a vertical direction is smaller than a maximum entry in the distance table for a horizontal direction.
  • a first distance table is used for the current video block, and a second distance table different from the first distance table is used for a subsequent video block with dimensions different from the current video block.
  • the subsequent video block has a width different from that of the current video block in a horizontal direction.
  • the subsequent video block has a height different from that of the current video block in a vertical direction.
  • a first distance table is used for the current video block, and a second distance table different from the first distance table is used for a subsequent video block with different picture order count (POC) distance.
  • POC picture order count
  • different distance tables are used for different base candidates for the current video block.
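Choosing among several distance tables, per direction, per block dimensions, or per POC distance, reduces to a selection function. The tables and the direction-index layout below are hypothetical; they only illustrate the options above (a smaller vertical table with a smaller maximum entry, shared by the two vertical directions):

```python
# Hypothetical per-direction distance tables (in pel). The vertical table
# is smaller and has a smaller maximum entry than the horizontal one.
HOR_TABLE = [1/4, 1/2, 1, 2, 4, 8, 16, 32]
VER_TABLE = [1/4, 1/2, 1, 2, 4, 8]

def table_for_direction(direction_index: int) -> list:
    """Assumed layout: direction indexes 0/1 are horizontal, 2/3 vertical;
    each pair of opposite directions shares a single table."""
    return HOR_TABLE if direction_index in (0, 1) else VER_TABLE
```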
  • a ratio of two entries with consecutive distance indexes in the distance table is fixed to M, and M is not equal to 2.
  • M = 4.
  • a delta of two entries with consecutive distance indexes in the distance table is fixed for all distance indexes.
  • a ratio of two entries with consecutive distance indexes in the distance table is different for different distance indexes.
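The fixed-ratio and fixed-delta constructions above can be sketched as table generators; the first entry, the ratio M, the delta and the table size are assumed parameters, not values fixed by the claims:

```python
def ratio_table(first: float, m: int, size: int) -> list:
    """Distance table whose consecutive entries have a fixed ratio M
    (e.g. M = 4 rather than the usual 2)."""
    return [first * (m ** i) for i in range(size)]

def delta_table(first: float, delta: float, size: int) -> list:
    """Distance table whose consecutive entries have a fixed delta."""
    return [first + delta * i for i in range(size)]

# With M = 4 the table reaches large distances in far fewer entries
# than a doubling table starting from the same first entry.
```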
  • the distance table has one set of entries:
  • the distance table has a size of 9 entries:
  • the distance table has a size of 10 entries:
  • the distance table has a size of 11 entries:
  • a method for video processing comprising:
  • MMVD merge with motion vector difference
  • the conversion comprises parsing MMVD side information from or writing the MMVD side information into the bitstream representation, wherein the MMVD side information comprises at least one of an MMVD flag indicating whether MMVD syntaxes are parsed, a first syntax element indicating a distance of MMVD between a motion vector of the current video block and a base candidate selected from a merge candidate list, and a second syntax element indicating a direction of MMVD representing a direction of motion vector difference (MVD) relative to the base candidate.
  • distances of MMVD allowed for the current video block are classified into a plurality of subsets, and the first syntax element comprises a subset index and a distance index of MMVD indicating a distance within a subset with the subset index, and the second syntax element comprises a direction index indicating the direction.
  • the MMVD flag may be signaled first, followed by the subset index of the distance, the distance index within the subset, and a direction index.
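With distances partitioned into subsets, the decoder first resolves the subset index and then the distance index within that subset. The two subsets below (a short-distance subset and a long-distance subset) are hypothetical examples of such a classification:

```python
# Hypothetical classification of the allowed MMVD distances (in pel)
# into two subsets: short distances and long distances.
SUBSETS = [
    [1/4, 1/2, 1, 2],  # subset 0: short distances
    [4, 8, 16, 32],    # subset 1: long distances
]

def distance_from_subset(subset_index: int, distance_index: int) -> float:
    """Distance signaled as a subset index plus an index within the subset."""
    return SUBSETS[subset_index][distance_index]
```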
  • one of the first and second syntax elements is parsed from or written into the bitstream representation based on the other of the first and second syntax elements.
  • a combination of the first and second syntax elements is represented with at least one codeword.
  • the combination of the first and second syntax elements is the sum of the distance index of MMVD and the direction index of MMVD multiplied by 7.
  • the at least one codeword is included in a codeword table which comprises a plurality of codewords, each of which corresponds to a unique combination of the first and second syntax elements.
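Joint coding of the two syntax elements as a single codeword, computed as distance index + 7 × direction index per the combination above, can be sketched as follows; the assumption that each direction allows 7 distance indexes is ours and makes the mapping invertible:

```python
NUM_DISTANCES = 7  # assumed number of distance indexes per direction

def combine(distance_index: int, direction_index: int) -> int:
    """Joint codeword: distance index plus direction index * 7."""
    assert 0 <= distance_index < NUM_DISTANCES
    return distance_index + direction_index * NUM_DISTANCES

def split(codeword: int):
    """Recover (distance_index, direction_index) from the joint codeword."""
    return codeword % NUM_DISTANCES, codeword // NUM_DISTANCES
```

Each codeword corresponds to exactly one (distance index, direction index) pair, matching the codeword-table description above.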
  • a method for video processing comprising:
  • the at least one distance is determined as the sum of the first distance with a rough granularity and the one or more distances with fine granularities: M1[F1] + M2[F2] + ... + Mi[Fi] + ... + Mn[Fn], wherein M1 represents a first set comprising at least one distance with a rough granularity as an entry, F1 indicates an entry in M1, Mi represents a set comprising at least one distance with a fine granularity as an entry, Fi indicates an entry in Mi, and i is an integer from 2 to n.
  • Mk[Fk] < Mk-1[Fk-1 + 1] - Mk-1[Fk-1], wherein k is an integer from 2 to n.
  • a binarization of Fk depends on that of Fk-1.
  • entries in Mk[Fk] depend on Fk-1, wherein k is an integer from 2 to n.
  • n = 2
  • M1 = {1/4-pel, 1-pel, 4-pel, 8-pel, 16-pel, 32-pel, 64-pel, 128-pel},
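With n = 2, the distance is M1[F1] + M2[F2]. M1 below is the rough-granularity set listed above; the fine-granularity set M2 is a hypothetical example whose entries stay below the smallest gap between consecutive M1 entries:

```python
# Rough-granularity set M1 (in pel), as listed above.
M1 = [1/4, 1, 4, 8, 16, 32, 64, 128]
# Hypothetical fine-granularity set M2; every entry is smaller than the
# smallest gap between consecutive M1 entries (M1[1] - M1[0] = 3/4).
M2 = [0, 1/4, 1/2]

def mmvd_distance(f1: int, f2: int) -> float:
    """Distance as the sum of a rough and a fine component: M1[F1] + M2[F2]."""
    return M1[f1] + M2[f2]
```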
  • the conversion includes encoding the current video block into the bitstream representation of the current video block and decoding the current video block from the bitstream representation of the current video block.
  • an apparatus in a video system comprising a processor and a non-transitory memory with instructions stored thereon, wherein the instructions, upon execution by the processor, cause the processor to implement the method in any one of the examples described above.
  • a computer program product stored on a non-transitory computer-readable medium, the computer program product including program code for carrying out the method in any one of the examples described above.
  • FIG. 13 is a block diagram of a video processing apparatus 1300.
  • the apparatus 1300 may be used to implement one or more of the methods described herein.
  • the apparatus 1300 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on.
  • the apparatus 1300 may include one or more processors 1302, one or more memories 1304 and video processing hardware 1306.
  • the processor (s) 1302 may be configured to implement one or more methods described in the present document.
  • the memory (memories) 1304 may be used for storing data and code used for implementing the methods and techniques described herein.
  • the video processing hardware 1306 may be used to implement, in hardware circuitry, some techniques described in the present document, and may partly or completely be a part of the processors 1302 (e.g., a graphics processor core (GPU) or other signal processing circuitry).
  • video processing may refer to video encoding, video decoding, video compression or video decompression.
  • video compression algorithms may be applied during conversion from pixel representation of a video to a corresponding bitstream representation or vice versa.
  • the bitstream representation of a current video block may, for example, correspond to bits that are either co-located or spread in different places within the bitstream, as is defined by the syntax.
  • a macroblock may be encoded in terms of transformed and coded error residual values and also using bits in headers and other fields in the bitstream.
  • the disclosed and other solutions, examples, embodiments, modules and the functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them.
  • the disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus.
  • the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
  • data processing apparatus encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) .
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random-access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Systems, methods and devices for video processing are described. An example video processing method comprises: determining, for a current video block that is coded using a merge with motion vector difference (MMVD) mode, a first relationship between a distance and a distance index (DI), wherein the distance is a distance between a motion vector of the current video block and a base candidate selected from a merge candidate list; and performing, based on the first relationship, a conversion between the current video block and a bitstream representation of the current video block.
PCT/CN2019/130725 2018-12-31 2019-12-31 Mappage entre indice de distance et distance fusionnée par mvd WO2020140908A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201980087392.0A CN113261295A (zh) 2018-12-31 2019-12-31 具有MVD的Merge中距离索引与距离之间的映射

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN2018126066 2018-12-31
CNPCT/CN2018/126066 2018-12-31
CNPCT/CN2019/070636 2019-01-07
CN2019070636 2019-01-07
CNPCT/CN2019/071159 2019-01-10
CN2019071159 2019-01-10

Publications (1)

Publication Number Publication Date
WO2020140908A1 true WO2020140908A1 (fr) 2020-07-09

Family

ID=71406598

Family Applications (3)

Application Number Title Priority Date Filing Date
PCT/CN2019/130712 WO2020140906A1 (fr) 2018-12-31 2019-12-31 Procédé d'analyse d'indice de distance en fusion avec une mvd
PCT/CN2019/130725 WO2020140908A1 (fr) 2018-12-31 2019-12-31 Mappage entre indice de distance et distance fusionnée par mvd
PCT/CN2019/130723 WO2020140907A1 (fr) 2018-12-31 2019-12-31 Interaction entre fusion avec mvd et amvr

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/130712 WO2020140906A1 (fr) 2018-12-31 2019-12-31 Procédé d'analyse d'indice de distance en fusion avec une mvd

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/130723 WO2020140907A1 (fr) 2018-12-31 2019-12-31 Interaction entre fusion avec mvd et amvr

Country Status (2)

Country Link
CN (3) CN113348667B (fr)
WO (3) WO2020140906A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022262694A1 (fr) * 2021-06-15 2022-12-22 Beijing Bytedance Network Technology Co., Ltd. Procédé, dispositif et support de traitement vidéo

Citations (3)

Publication number Priority date Publication date Assignee Title
US20170006302A1 (en) * 2014-03-19 2017-01-05 Kt Corporation Method and apparatus for processing multiview video signals
US20170339425A1 (en) * 2014-10-31 2017-11-23 Samsung Electronics Co., Ltd. Video encoding device and video decoding device using high-precision skip encoding and method thereof
CN108886618A (zh) * 2016-03-24 2018-11-23 Lg 电子株式会社 视频编码系统中的帧间预测方法和装置

Family Cites Families (29)

Publication number Priority date Publication date Assignee Title
US7286710B2 (en) * 2003-10-01 2007-10-23 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Coding of a syntax element contained in a pre-coded video signal
US7599435B2 (en) * 2004-01-30 2009-10-06 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Video frame encoding and decoding
US7469070B2 (en) * 2004-02-09 2008-12-23 Lsi Corporation Method for selection of contexts for arithmetic coding of reference picture and motion vector residual bitstream syntax elements
CN101389021B (zh) * 2007-09-14 2010-12-22 华为技术有限公司 视频编解码方法及装置
CN101257625B (zh) * 2008-04-01 2011-04-20 海信集团有限公司 视频编解码中的位置索引方法及视频解码器
US20110013853A1 (en) * 2009-07-17 2011-01-20 Himax Technologies Limited Approach for determining motion vector in frame rate up conversion
WO2011128268A1 (fr) * 2010-04-13 2011-10-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Codeur et décodeur de partitionnement d'intervalles de probabilité
CN102148990B (zh) * 2011-04-28 2012-10-10 北京大学 一种运动矢量预测装置和方法
US20130003823A1 (en) * 2011-07-01 2013-01-03 Kiran Misra System for initializing an arithmetic coder
US20130070855A1 (en) * 2011-09-17 2013-03-21 Qualcomm Incorporated Hybrid motion vector coding modes for video coding
CN102447902B (zh) * 2011-09-30 2014-04-16 广州柯维新数码科技有限公司 选择参考场及获取时域运动矢量的方法
US20130114686A1 (en) * 2011-11-08 2013-05-09 Sharp Laboratories Of America, Inc. Video decoder with enhanced cabac motion vector decoding
ES2864591T3 (es) * 2011-12-21 2021-10-14 Sun Patent Trust Selección de contexto para codificación por entropía de coeficientes de transformada
EP2952003B1 (fr) * 2013-01-30 2019-07-17 Intel Corporation Partitionnement adaptatif de contenu pour une prédiction et un codage pour une vidéo de prochaine génération
WO2016034058A1 (fr) * 2014-09-01 2016-03-10 Mediatek Inc. Procédé de copie de bloc d'image de type intra à des fins de codage de contenu d'écran et vidéo
US9918105B2 (en) * 2014-10-07 2018-03-13 Qualcomm Incorporated Intra BC and inter unification
KR101782154B1 (ko) * 2015-06-05 2017-09-26 인텔렉추얼디스커버리 주식회사 움직임 벡터 차분치를 이용하는 영상 부호화 및 복호화 방법과 영상 복호화 장치
US20180176596A1 (en) * 2015-06-05 2018-06-21 Intellectual Discovery Co., Ltd. Image encoding and decoding method and image decoding device
WO2017052009A1 (fr) * 2015-09-24 2017-03-30 엘지전자 주식회사 Appareil et procédé de codage d'image fondé sur une plage de vecteurs de mouvement adaptative (amvr) dans un système de codage d'image
EP3357245A4 (fr) * 2015-11-05 2019-03-13 MediaTek Inc. Procédé et appareil d'inter prédiction utilisant un vecteur de mouvement moyen pour le codage vidéo
GB2561507B (en) * 2016-01-07 2021-12-22 Mediatek Inc Method and apparatus for affine merge mode prediction for video coding system
US10142652B2 (en) * 2016-05-05 2018-11-27 Google Llc Entropy coding motion vector residuals obtained using reference motion vectors
US10721489B2 (en) * 2016-09-06 2020-07-21 Qualcomm Incorporated Geometry-based priority for the construction of candidate lists
US10462462B2 (en) * 2016-09-29 2019-10-29 Qualcomm Incorporated Motion vector difference coding technique for video coding
EP3301918A1 (fr) * 2016-10-03 2018-04-04 Thomson Licensing Procédé et appareil de codage et de décodage d'informations de mouvement
US10979732B2 (en) * 2016-10-04 2021-04-13 Qualcomm Incorporated Adaptive motion vector precision for video coding
CN116320476A (zh) * 2016-12-22 2023-06-23 株式会社Kt 对视频进行解码或编码的方法和发送视频数据的方法
US10750181B2 (en) * 2017-05-11 2020-08-18 Mediatek Inc. Method and apparatus of adaptive multiple transforms for video coding
US10602180B2 (en) * 2017-06-13 2020-03-24 Qualcomm Incorporated Motion vector prediction

Non-Patent Citations (2)

Title
JEONG SEUNGSOO: "Proposed WD for CE4 Ultimate motion vector expression (Test 4.5.4)", JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 3 October 2018 (2018-10-03), pages 1 - 12, XP030195378 *
SEUNGSOO JEONG , MIN WOO PARK , YINJI PIAO , MONSOO PARK , KIHO CHOI: "CE4 Ultimate motion vector expression (Test 4.5.4)", JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 12 October 2018 (2018-10-12), Macao CN, pages 1 - 8, XP030192242 *

Also Published As

Publication number Publication date
CN113348667A (zh) 2021-09-03
WO2020140906A1 (fr) 2020-07-09
CN113348667B (zh) 2023-06-20
WO2020140907A1 (fr) 2020-07-09
CN113273189A (zh) 2021-08-17
CN113261295A (zh) 2021-08-13

Similar Documents

Publication Publication Date Title
KR102613889B1 (ko) 적응적 움직임 벡터 해상도를 갖는 움직임 벡터 수정
US11457226B2 (en) Side information signaling for inter prediction with geometric partitioning
KR102635047B1 (ko) 적응적 움직임 벡터 해상도를 가지는 어파인 모드에 대한 구문 재사용
WO2020098810A1 (fr) Fusion avec différences de vecteurs de mouvement dans un traitement vidéo
WO2020143774A1 (fr) Fusion avec mvd basée sur une partition de géométrie
WO2020125750A1 (fr) Précision de vecteurs de mouvement en mode fusion avec différence de vecteurs de mouvement
WO2020156516A1 (fr) Contexte pour coder une résolution de vecteur de mouvement adaptatif en mode affine
US11128860B2 (en) Affine mode calculations for different video block sizes
US11863771B2 (en) Updating of history based motion vector prediction tables
WO2020143643A1 (fr) Procédé de commande pour fusion avec mvd
WO2020156525A1 (fr) Éléments de syntaxe multiples pour une résolution adaptative de vecteurs de mouvement
WO2020140908A1 (fr) Mappage entre indice de distance et distance fusionnée par mvd
CN112997496B (zh) 仿射预测模式的改进

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19906903

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19906903

Country of ref document: EP

Kind code of ref document: A1