WO2021043166A1 - Improvement of merge candidates

Info

Publication number
WO2021043166A1
Authority
WIPO (PCT)
Application number
PCT/CN2020/113024
Other languages
French (fr)
Inventor
Na Zhang
Hongbin Liu
Li Zhang
Kai Zhang
Yue Wang
Original Assignee
Beijing Bytedance Network Technology Co., Ltd.
Bytedance Inc.
Application filed by Beijing Bytedance Network Technology Co., Ltd. and Bytedance Inc.
Priority to CN202080061697.7A (publication CN114365494A)
Publication of WO2021043166A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/523 Motion estimation or motion compensation with sub-pixel accuracy

Definitions

  • This patent document relates to video coding and decoding.
  • a method of video processing includes determining to use, for a conversion between a video block of video and a coded representation of the video, an interpolation filter for motion candidate interpolation using a rule; and performing the conversion based on the determining, wherein the interpolation filter is one of a default interpolation filter and an alternative half-pel interpolation filter.
  • another method of video processing includes determining, for a conversion between a coded representation of a video block and pixel values of the video block, whether a coding tool is used in the conversion, based on an information of an associated candidate used for generating a pairwise merge candidate during the conversion or information of a selected motion candidate in a merge list before adding the pairwise merge candidate to the merge list; and performing the conversion based on the determining.
  • another method of video processing includes determining, for a conversion between a coded representation of a video block and pixel values of the video block, that a bi-prediction mode is used in generating motion candidates, including a default motion candidate, and determining, based on a condition, whether to use unequal weights in calculating the default motion candidate; and performing the conversion based on the determining.
  • another method of video processing includes performing a conversion between a coded representation of a video region and pixel values of the video region, wherein the conversion uses a list of motion candidates representing candidates for motion information of the video region, and wherein the list of motion candidates uses one or more motion candidates with motion vectors pointing to half-pel locations.
  • another method of video processing includes performing a conversion between a coded representation of a video block and pixel values of the video block using a rule that specifies that the coded representation omits signaling of an alternative half-pel filter for merge candidate calculations during the conversion, because no motion vector of the current block has a horizontal or a vertical half-pel resolution.
  • another method of video processing includes determining, during a conversion between a coded representation of a video block and pixel values of the video block, whether a re-ordering of a merge candidate list is performed based on usage of an alternative half-pel interpolation filter during the conversion or a coding condition; and performing the conversion based on the determining.
  • another method of video processing includes determining, for a conversion between a video block of a video and a bitstream representation of the video block, whether an alternative luma half-pel interpolation filter is applied to all pairwise average candidates based on a flag which is used to represent whether alternative luma half-pel interpolation filter is employed or not; and performing the conversion based on the determination.
  • another method of video processing includes determining, for a conversion between a video block of the video and a bitstream representation of the video block, whether to enable or disable a coding tool for a pairwise average candidate based on information of associated candidates used for generating the pairwise average candidate; and performing the conversion based on the determination.
  • another method of video processing includes determining, for a conversion between a video block of a video and a bitstream representation of the video block, whether to enable or disable unequal weights in bi-prediction weight for default motion candidates in a merge candidate list associated with the video block based on one or more conditions; and performing the conversion based on the determination.
  • another method of video processing includes deriving, for a conversion between a video block of a video and a bitstream representation of the video block, a merge candidate list associated with the video block; adding one or more half-pel motion vector (MV) candidates with motion vectors pointing to half-pel to the merge candidate list; and performing the conversion based on the merge candidate list.
  • another method of video processing includes determining, for a conversion between a video block of a video and a bitstream representation of the video block, whether to enable or disable half-pel interpolation filter for default motion candidates in a merge candidate list associated with the video block based on one or more conditions; and performing the conversion based on the determination.
  • another method of video processing includes determining, for a conversion between a video block of the video and a bitstream representation of the video block, whether to enable or disable a coding tool for a pairwise average candidate based on information of all or selected motion candidates in a merge candidate list before adding the pairwise average candidate to the merge candidate list; and performing the conversion based on the determination.
  • another method of video processing includes determining, for a conversion between a video block of a video and a bitstream representation of the video block, a value of a flag which is used to represent whether alternative luma half-pel interpolation filter is employed or not based on motion vector (MV) of the video block; and performing the conversion based on the determination.
  • another method of video processing includes performing, for a conversion between a video block of a video and a bitstream representation of the video block, a re-ordering process on motion candidates in a merge candidate list associated with the video block based on usage of an alternative half-pel interpolation filter; and performing the conversion based on the re-ordered merge candidate list.
  • the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
  • a device that is configured or operable to perform the above-described method.
  • the device may include a processor that is programmed to implement this method.
  • a video decoder apparatus may implement a method as described herein.
  • a computer program product stored on a non-transitory computer-readable medium, the computer program product including program code for carrying out the method as described herein.
  • a non-transitory computer-readable medium having recorded thereon program code for carrying out the method as described herein.
  • a non-transitory computer-readable recording medium stores a bitstream representation which is generated by the method as described herein performed by a video processing apparatus.
  • FIG. 1A-1B show example tables used for signaling.
  • FIG. 1A shows the VTM-5.0 table.
  • FIG. 1B shows the proposed table.
  • FIG. 2 shows a graphical example of filter characteristics.
  • FIG. 3 is a block diagram of an example implementation of a hardware platform for video processing.
  • FIG. 4 is a flowchart for an example method of video processing.
  • FIG. 5 is a block diagram of an example video processing system in which disclosed techniques may be implemented.
  • FIG. 6 is a flowchart for an example method of video processing.
  • FIG. 7 is a flowchart for an example method of video processing.
  • FIG. 8 is a flowchart for an example method of video processing.
  • FIG. 9 is a flowchart for an example method of video processing.
  • FIG. 10 is a flowchart for an example method of video processing.
  • FIG. 11 is a flowchart for an example method of video processing.
  • FIG. 12 is a flowchart for an example method of video processing.
  • FIG. 13 is a flowchart for an example method of video processing.
  • Embodiments of the disclosed technology may be applied to existing video coding standards (e.g., HEVC, H.265) and future standards to improve compression performance. Section headings are used in the present document to improve readability of the description and do not in any way limit the discussion or the embodiments (and/or implementations) to the respective sections only.
  • This document is related to video coding technologies. Specifically, it is related to merge candidates in video coding. It may be applied to the existing video coding standard like HEVC, or to the Versatile Video Coding (VVC) standard to be finalized. It may also be applicable to future video coding standards or video codecs.
  • Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards.
  • the ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/High Efficiency Video Coding (HEVC) standards.
  • the video coding standards are based on the hybrid video coding structure wherein temporal prediction plus transform coding are utilized.
  • the Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015; its reference software is known as the Joint Exploration Model (JEM).
  • Pairwise average candidates are generated by averaging predefined pairs of candidates in the existing merge candidate list, and the predefined pairs are defined as {(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)}, where the numbers denote the merge indices into the merge candidate list.
  • the averaged motion vectors are calculated separately for each reference list. If both motion vectors are available in one list, these two motion vectors are averaged even when they point to different reference pictures; if only one motion vector is available, it is used directly; if no motion vector is available, the list is kept invalid.
  • the zero MVPs are inserted at the end until the maximum merge candidate number is reached.
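As a rough sketch, the per-list averaging rules above might look like the following. The dict-based candidate representation and the simple truncating average are illustrative assumptions; the VVC specification operates on refIdxLX / predFlagLX / mvLX variables with its own rounding.

```python
# Illustrative sketch of pairwise-average candidate derivation.
# Candidate format (assumed): {"L0": {"mv": (x, y), "ref": i}, "L1": {...}},
# with a list key absent when that prediction list is invalid.

PAIRS = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]  # merge-index pairs

def pairwise_average(merge_list):
    new_cands = []
    for i, j in PAIRS:
        if max(i, j) >= len(merge_list):
            continue
        c0, c1 = merge_list[i], merge_list[j]
        avg = {}
        for lx in ("L0", "L1"):  # average each reference list separately
            m0, m1 = c0.get(lx), c1.get(lx)
            if m0 and m1:
                # Average even when the two MVs point to different reference
                # pictures; keeping the first candidate's reference index and
                # the truncating average are simplifications for illustration.
                avg[lx] = {"mv": ((m0["mv"][0] + m1["mv"][0]) // 2,
                                  (m0["mv"][1] + m1["mv"][1]) // 2),
                           "ref": m0["ref"]}
            elif m0 or m1:
                avg[lx] = dict(m0 or m1)  # only one available: use it directly
            # neither available: keep this list invalid (key stays absent)
        if avg:
            new_cands.append(avg)
    return new_cands
```

A pair is only formed when both merge indices exist in the list, so a two-entry list yields at most one pairwise candidate.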
  • the bi-prediction signal is generated by averaging two prediction signals obtained from two different reference pictures and/or using two different motion vectors.
  • VTM6 the bi-prediction mode is extended beyond simple averaging to allow weighted averaging of the two prediction signals.
  • five weights are allowed in the weighted averaging bi-prediction, w ∈ {-2, 3, 4, 5, 10}.
  • the weight w is determined in one of two ways: 1) for a non-merge CU, the weight index is signalled after the motion vector difference; 2) for a merge CU, the weight index is inferred from neighbouring blocks based on the merge candidate index. Weighted averaging bi-prediction is only applied to CUs with 256 or more luma samples (i.e., CU width times CU height is greater than or equal to 256). For low-delay pictures, all 5 weights are used. For non-low-delay pictures, only 3 weights (w ∈ {3, 4, 5}) are used.
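The weighted averaging itself is a one-line blend of the two prediction samples, in the commonly described ((8 − w)·P0 + w·P1 + 4) >> 3 form:

```python
# BCW weighted averaging of two prediction samples:
#   P = ((8 - w) * P0 + w * P1 + 4) >> 3
BCW_WEIGHTS = (-2, 3, 4, 5, 10)   # low-delay pictures: all five weights
BCW_WEIGHTS_NON_LD = (3, 4, 5)    # non-low-delay pictures: three weights

def bcw_blend(p0, p1, w):
    """Blend one sample pair with BCW weight w (w = 4 is the plain average)."""
    assert w in BCW_WEIGHTS
    return ((8 - w) * p0 + w * p1 + 4) >> 3
```

Note that w = -2 and w = 10 extrapolate beyond the two predictions rather than interpolate between them.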
  • when combined with affine, affine motion estimation (ME) will be performed for unequal weights if and only if the affine mode is selected as the current best mode.
  • the BCW weight index is coded using one context coded bin followed by bypass coded bins.
  • the first context coded bin indicates if equal weight is used; and if unequal weight is used, additional bins are signalled using bypass coding to indicate which unequal weight is used.
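That binarization can be pictured as follows; the context-coded first bin is as described above, while the truncated-unary shape of the bypass bins here is an illustrative assumption, not quoted from the specification.

```python
# Sketch of BCW weight-index binarization: one context-coded flag bin for
# "equal weight", then bypass bins selecting among the unequal weights.
# The truncated-unary bypass pattern is an assumption for illustration.
EQUAL_IDX = 2  # assumed index of w = 4 in (-2, 3, 4, 5, 10)

def bcw_index_bins(idx, num_weights=5):
    if idx == EQUAL_IDX:
        return [0]                       # context bin only: equal weight used
    rest = [i for i in range(num_weights) if i != EQUAL_IDX]
    k = rest.index(idx)
    bins = [1] + [1] * k                 # context bin + bypass prefix
    if k < len(rest) - 1:
        bins.append(0)                   # terminating bypass bin
    return bins
```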
  • Weighted prediction (WP) is a coding tool supported by the H.264/AVC and HEVC standards to efficiently code video content with fading. Support for WP was also added into the VVC standard. WP allows weighting parameters (weight and offset) to be signalled for each reference picture in each of the reference picture lists L0 and L1. Then, during motion compensation, the weight(s) and offset(s) of the corresponding reference picture(s) are applied. WP and BCW are designed for different types of video content. In order to avoid interactions between WP and BCW, which will complicate VVC decoder design, if a CU uses WP, then the BCW weight index is not signalled, and w is inferred to be 4 (i.e., equal weight is applied).
  • the weight index is inferred from neighbouring blocks based on the merge candidate index. This can be applied to both normal merge mode and inherited affine merge mode.
  • the affine motion information is constructed based on the motion information of up to 3 blocks. The following process is used to derive the BCW index for a CU using the constructed affine merge mode.
  • BCW is also known as generalized bi-prediction (GBi) .
  • an alternative luma half-pel interpolation filter is used for a non-affine and non-merge inter-coded CU which uses half-pel motion vector accuracy (i.e., the half-pel AMVR mode) .
  • the Gauss luma half-pel interpolation filter is used for test 1.2.
  • a switching between the two alternative half-pel interpolation filters is made based on the value of a new syntax element hpel_if_idx.
  • the syntax element hpel_if_idx is only signaled in case of half-pel AMVR mode as follows:
  • the information about which interpolation filter is applied for the half-pel position is inherited from the neighbouring block.
  • the pairwise average candidate can be further improved, for example, in how its GBi index is set and whether the alternative luma half-pel interpolation filter is used.
  • the GBi index of a pairwise average candidate is set equal to GBI_DEFAULT (i.e., equal weights for two prediction blocks are used) .
  • if the alternative luma half-pel interpolation filter flags of the two merge candidates used for generating a pairwise merge candidate are equal, the alternative luma half-pel interpolation filter flag of the pairwise merge candidate is set equal to that of the merge candidate with the smaller merge index; otherwise, it is set equal to false.
  • the half sample interpolation filter index hpelIfIdx of every new candidate being added is set equal to 0.
  • the bi-prediction weight index (i.e., GBi index) of every new candidate being added is set equal to GBI_DEFAULT.
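Taken together, the current behaviour described above reduces to a few lines. The boolean flags, field names, and the GBI_DEFAULT placeholder are illustrative assumptions standing in for the spec variables.

```python
GBI_DEFAULT = 2  # assumed index of the equal-weight entry

def vtm_pairwise_settings(flag1, flag2):
    """Current behaviour as described above: the pairwise candidate inherits
    the alternative half-pel flag only when both source candidates agree,
    and its GBi index is always the default (equal weights)."""
    alt_hpel = flag1 if flag1 == flag2 else False
    return {"use_alt_hpel": alt_hpel, "gbi_idx": GBI_DEFAULT}

def default_candidate_settings():
    # Every newly added default candidate: hpelIfIdx = 0, GBi index = default.
    return {"hpel_if_idx": 0, "gbi_idx": GBI_DEFAULT}
```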
  • GBiIdx is used to represent the GBi index that indicates the weighting factor used in BCW (a.k.a., GBi).
  • UseAltHpelIf is used to represent whether alternative luma half-pel interpolation filter is employed or not (e.g., when UseAltHpelIf is equal to 1, the alternative luma half-pel interpolation filter is used; otherwise, the alternative luma half-pel interpolation filter is not used)
  • Cand1 and Cand2 are used to denote the two merge candidates used for generating a pairwise merge candidate.
  • UseAltHpelIf flag of a pairwise average candidate may be set equal to 0, i.e., the default interpolation filter may always be used.
  • Whether to enable or disable a coding tool (e.g., BCW/alternative half-pel interpolation filter) for a pairwise average candidate may depend on the information of the associated candidates used for generating the pairwise merge candidate.
  • different pairwise merge candidates may enable or disable the usage of the coding tool on-the-fly.
  • GBiIdx of a pairwise average candidate may depend on the GBiIdx of only one candidate of the pair.
  • GBiIdx of a pairwise average candidate may be set equal to the GBiIdx of Cand1.
  • GBiIdx of a pairwise average candidate may be set equal to the GBiIdx of Cand2.
  • GBiIdx of a pairwise average candidate (denoted as GBiIdxC) may be derived as a function of the GBiIdxs of the two candidates of the pair (denoted as GBiIdx1 and GBiIdx2).
  • GBiIdxC may be set equal to the smaller GBiIdx of Cand1 and Cand2.
  • GBiIdxC may be set equal to the larger GBiIdx of Cand1 and Cand2.
  • GBiIdxC may be set equal to the mean of GBiIdx of Cand1 and Cand2.
  • BCW may be disabled for a pairwise average candidate when the GBiIdx of Cand1 is not equal to the GBiIdx of Cand2.
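The alternatives above can be sketched as one selectable function. The mode names are invented for illustration, and GBI_DEFAULT is an assumed placeholder for the equal-weight index used when BCW is effectively disabled.

```python
GBI_DEFAULT = 2  # assumed index of the equal-weight entry

def pairwise_gbi_idx(gbi1, gbi2, mode):
    """Alternatives listed above for deriving GBiIdxC of a pairwise candidate."""
    if mode == "cand1":
        return gbi1                      # inherit from Cand1 only
    if mode == "cand2":
        return gbi2                      # inherit from Cand2 only
    if mode == "min":
        return min(gbi1, gbi2)           # smaller of the two indices
    if mode == "max":
        return max(gbi1, gbi2)           # larger of the two indices
    if mode == "mean":
        return (gbi1 + gbi2) // 2        # integer mean; rounding is assumed
    if mode == "disable_on_mismatch":
        # BCW disabled (default weights) when the two indices differ
        return gbi1 if gbi1 == gbi2 else GBI_DEFAULT
    raise ValueError(mode)
```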
  • UseAltHpelIf flag of a pairwise average candidate may depend on the UseAltHpelIf flag of only one candidate of the pair.
  • UseAltHpelIf flag of a pairwise average candidate may be set equal to that associated with one candidate, e.g., Cand1 or Cand2.
  • UseAltHpelIf flag may be set to false when the UseAltHpelIf of Cand1 is not equal to the UseAltHpelIf of Cand2.
  • UseAltHpelIf flag of a pairwise average candidate may depend on the UseAltHpelIf flags of the two candidates of the pair.
  • UseAltHpelIf flag of a pairwise average candidate may be set equal to 1 if the UseAltHpelIf flag of Cand1 and Cand2 are both equal to 1.
  • UseAltHpelIf flag of a pairwise average candidate may be set equal to 1 if UseAltHpelIf flag of Cand1 or Cand2 is equal to 1.
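Likewise, the UseAltHpelIf alternatives above can be pictured as selectable combination rules; the mode names are illustrative inventions.

```python
def pairwise_use_alt_hpel(f1, f2, mode):
    """Alternatives listed above for the UseAltHpelIf flag of a pairwise
    candidate, given the flags f1 (Cand1) and f2 (Cand2)."""
    if mode == "cand1":
        return f1                        # depend on Cand1 only
    if mode == "cand2":
        return f2                        # depend on Cand2 only
    if mode == "both":
        return 1 if (f1 and f2) else 0   # 1 only if both candidates use it
    if mode == "either":
        return 1 if (f1 or f2) else 0    # 1 if either candidate uses it
    if mode == "match":
        return f1 if f1 == f2 else 0     # false whenever the flags differ
    raise ValueError(mode)
```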
  • whether to enable/disable unequal weights may depend on the index of default motion candidates.
  • whether to enable/disable unequal weights may depend on slice/picture type.
  • whether to enable/disable unequal weights may depend on all or a subset of the existing merge candidates in the merge list before default motion candidates are added.
  • whether to enable/disable unequal weights may depend on the usage of unequal weights from spatial/temporal neighboring (adjacent or non-adjacent) blocks.
  • half-pel MV candidates may be added to merge candidate lists right after the derivation of pairwise merge candidates/combined bi-predictive merge candidates.
  • half-pel MV candidates may be added to merge candidate lists after the derivation of HMVP candidates.
  • whether to add half-pel MV candidate or zero MV candidate may be changed from block to block, such as based on decoded information from previously coded blocks and/or based on the merge candidates before adding these default candidates.
  • half-pel MV candidate and zero MV candidate may be both added to the motion candidate list.
  • they may be added in an interleaved way.
  • half-pel MV candidates may be added before all zero MV candidates.
  • half-pel MV candidates may be added after all zero MV candidates.
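The ordering options above can be sketched as follows. The candidate representation and the choice of half-pel MVs to add are assumptions for illustration (e.g. (8, 8) as a half-pel position in assumed 1/16-pel MV units).

```python
def default_candidates(half_pel_cands, num_needed, mode):
    """Order half-pel MV defaults relative to zero-MV defaults, then truncate
    to the number of remaining merge-list slots."""
    zeros = [{"mv": (0, 0)} for _ in range(num_needed)]
    if mode == "half_pel_first":
        pool = half_pel_cands + zeros
    elif mode == "zero_first":
        # note: zeros alone can fill the list, pushing half-pel cands out
        pool = zeros + half_pel_cands
    else:  # "interleaved"
        pool = []
        for h, z in zip(half_pel_cands, zeros):
            pool += [h, z]
        pool += zeros[len(half_pel_cands):]
    return pool[:num_needed]
```

A block-adaptive scheme, as suggested above, would simply pick the mode per block from previously decoded information.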
  • whether to enable/disable half-pel interpolation filter may depend on the index of default motion candidates.
  • whether to enable/disable half-pel interpolation filter may depend on slice/picture type.
  • whether to enable/disable half-pel interpolation filter may depend on all or a subset of the existing merge candidates in the merge list before default motion candidates are added.
  • whether to enable/disable half-pel interpolation filter may depend on the usage of unequal weights from spatial/temporal neighboring (adjacent or non-adjacent) blocks.
  • Whether to enable or disable a coding tool for a pairwise average candidate may depend on the information of all or a subset of the motion candidates (named selected motion candidates) in the merge list before adding the pairwise average candidate.
  • the selected motion candidates may be those spatial merge candidates
  • the selected motion candidates may be those HMVP candidates in the merge candidate list
  • the subset of motion candidates may be one or multiple HMVP candidates in the HMVP table
  • UseAltHpelIf flag and/or BCW index may depend on a function of those information associated with the selected motion candidates.
  • UseAltHpelIf may be set to 1 (or 0) .
  • whether to enable/disable a tool may depend on the usage of the tool from spatial/temporal neighboring (adjacent or non-adjacent) blocks.
  • UseAltHpelIf flag is set equal to zero if no MV of a block refers to a horizontal and/or vertical half-pixel position.
  • UseAltHpelIf flag is set equal to zero if no MV of a block refers to a horizontal and/or vertical half-pixel position when the current block is coded with a pairwise average candidate.
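The half-pel check above can be sketched as a component-wise test. The 1/16-pel MV storage precision is an assumption (as used in VVC), under which a half-pel offset is 8 units.

```python
MV_PREC = 16            # assumed 1/16-pel MV storage (as in VVC)
HALF = MV_PREC // 2     # a half-pel offset is 8 in 1/16-pel units

def refers_to_half_pel(mvs):
    """True if any MV has a horizontal or vertical component on a half-pel
    position; mvs is an illustrative list of (mv_x, mv_y) tuples."""
    return any(mv[0] % MV_PREC == HALF or mv[1] % MV_PREC == HALF for mv in mvs)

def derive_use_alt_hpel(signalled_flag, mvs):
    # Rule above: force the flag to zero when no MV refers to a half-pel position.
    return signalled_flag if refers_to_half_pel(mvs) else 0
```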
  • the ‘pairwise average candidate’ mentioned above may be replaced by other new kinds of motion candidates that are derived from existing candidates added before the new kinds of motion candidates, such as combined bi-predictive merge candidates.
  • Motion candidates in the merge candidate list may be re-ordered based on the usage of the alternative half-pel interpolation filter.
  • candidates with the alternative half-pel interpolation filter enabled may be put before those with alternative half-pel interpolation filter disabled.
  • candidates with the alternative half-pel interpolation filter enabled may be put after those with alternative half-pel interpolation filter disabled.
  • the order of candidates with the alternative half-pel interpolation filter enabled and disabled may be adaptively changed based on the decoded information, such as usage of the alternative half-pel interpolation filter in neighboring (adjacent or non-adjacent) blocks.
  • the proposed method may be only applied to spatial merge candidates.
  • the proposed method may be only applied to spatial merge candidates and HMVP candidates.
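The re-ordering above amounts to a stable partition of the candidate list by the filter flag; the 'alt_hpel' field name is an illustrative stand-in for the per-candidate UseAltHpelIf flag.

```python
def reorder_by_alt_hpel(cands, alt_first=True):
    """Stable partition of a merge candidate list by the alternative
    half-pel interpolation filter flag; relative order within each
    group is preserved."""
    with_alt = [c for c in cands if c.get("alt_hpel")]
    without = [c for c in cands if not c.get("alt_hpel")]
    return with_alt + without if alt_first else without + with_alt
```

Restricting the method to spatial (or spatial plus HMVP) candidates, as suggested above, would simply apply this partition to that sub-range of the list, and an adaptive variant would pick alt_first from neighbouring blocks' decoded information.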
  • Deleted parts are struck through and newly added parts are highlighted in grey.
  • UseAltHpelIf flag of a pairwise average candidate may be set equal to false.
  • the prediction list utilization flags predFlagL0avgCand and predFlagL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process
  • variable numRefLists is derived as follows:
  • mergeCandList [numCurrMergeCand] is set equal to avgCand
  • the reference indices, the prediction list utilization flags and the motion vectors of avgCand are derived as follows and numCurrMergeCand is incremented by 1:
  • predFlagLXp0Cand is equal to 1 and predFlagLXp1Cand is equal to 1
  • the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , and mvLXavgCand [1] are derived as follows:
  • predFlagLXp0Cand is equal to 1 and predFlagLXp1Cand is equal to 0
  • the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
  • refIdxLXavgCand = refIdxLXp0Cand (8-349)
  • predFlagLXp0Cand is equal to 0 and predFlagLXp1Cand is equal to 1
  • the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
  • refIdxLXavgCand = refIdxLXp1Cand (8-353)
  • predFlagLXp0Cand is equal to 0 and predFlagLXp1Cand is equal to 0
  • the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
  • predFlagL1avgCand = 0 (8-362)
  • the half sample interpolation filter index hpelIfIdxavgCand is derived as follows:
  • hpelIfIdxavgCand is set equal to 0.
  • UseAltHpelIf flag of a pairwise average candidate may be set equal to the UseAltHpelIf flag of Cand1.
  • the half sample interpolation filter index hpelIfIdxavgCand is derived as follows:
  • hpelIfIdxavgCand is set equal to hpelIfIdxp0Cand.
  • UseAltHpelIf flag of a pairwise average candidate may be set equal to the UseAltHpelIf flag of Cand2.
  • the half sample interpolation filter index hpelIfIdxavgCand is derived as follows:
  • hpelIfIdxavgCand is set equal to hpelIfIdxp1Cand.
  • GBiIdx of a pairwise average candidate may be set equal to the GBiIdx of Cand1.
  • the half sample interpolation filter index hpelIfIdxavgCand is derived as follows:
  • if hpelIfIdxp0Cand is equal to hpelIfIdxp1Cand, hpelIfIdxavgCand is set equal to hpelIfIdxp0Cand.
  • otherwise, hpelIfIdxavgCand is set equal to 0.
  • the bi-prediction weight index bcwIdxavgCand is derived as follows:
  • bcwIdxavgCand is set equal to bcwIdxp0Cand.
  • GBiIdx of a pairwise average candidate may be set equal to the GBiIdx of Cand2.
  • the half sample interpolation filter index hpelIfIdxavgCand is derived as follows:
  • if hpelIfIdxp0Cand is equal to hpelIfIdxp1Cand, hpelIfIdxavgCand is set equal to hpelIfIdxp0Cand.
  • otherwise, hpelIfIdxavgCand is set equal to 0.
  • the bi-prediction weight index bcwIdxavgCand is derived as follows:
  • bcwIdxavgCand is set equal to bcwIdxp1Cand.
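The per-list derivation above can be sketched as follows. This is a minimal illustration, not specification text: the candidate structure, field names, the +1 rounding in the motion-vector average, and the choice to inherit bcwIdx from p0Cand (the text lists both a p0Cand and a p1Cand variant) are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Cand:
    pred_flag: list   # [predFlagL0, predFlagL1]
    ref_idx: list     # [refIdxL0, refIdxL1]
    mv: list          # [[mvxL0, mvyL0], [mvxL1, mvyL1]]
    hpel_if_idx: int = 0
    bcw_idx: int = 0

def avg_mv(a, b):
    # component-wise average with +1 rounding (the exact rounding rule is an assumption)
    return [(a[0] + b[0] + 1) >> 1, (a[1] + b[1] + 1) >> 1]

def derive_pairwise_avg(p0, p1):
    avg = Cand(pred_flag=[0, 0], ref_idx=[-1, -1], mv=[[0, 0], [0, 0]])
    for X in (0, 1):
        f0, f1 = p0.pred_flag[X], p1.pred_flag[X]
        if f0 and f1:
            # both candidates predict from list X: average the MVs, take p0's reference
            avg.pred_flag[X] = 1
            avg.ref_idx[X] = p0.ref_idx[X]
            avg.mv[X] = avg_mv(p0.mv[X], p1.mv[X])
        elif f0:
            avg.pred_flag[X], avg.ref_idx[X], avg.mv[X] = 1, p0.ref_idx[X], list(p0.mv[X])
        elif f1:
            avg.pred_flag[X], avg.ref_idx[X], avg.mv[X] = 1, p1.ref_idx[X], list(p1.mv[X])
        # else: list X stays unused for avgCand (predFlagLXavgCand remains 0)
    # hpelIfIdxavgCand: inherited only when both candidates agree, otherwise default (0)
    avg.hpel_if_idx = p0.hpel_if_idx if p0.hpel_if_idx == p1.hpel_if_idx else 0
    avg.bcw_idx = p0.bcw_idx  # p0Cand variant chosen here; p1Cand is the other option
    return avg
```

For example, averaging a bi-predictive candidate with a uni-predictive (list 0) candidate averages the list-0 MVs and copies the list-1 motion from the bi-predictive candidate.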
  • FIG. 5 is a block diagram showing an example video processing system 1900 in which various techniques disclosed herein may be implemented.
  • the system 1900 may include input 1902 for receiving video content.
  • the video content may be received in a raw or uncompressed format, e.g., 8 or 10 bit multi-component pixel values, or may be in a compressed or encoded format.
  • the input 1902 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interface include wired interfaces such as Ethernet, passive optical network (PON) , etc. and wireless interfaces such as Wi-Fi or cellular interfaces.
  • the system 1900 may include a coding component 1904 that may implement the various coding or encoding methods described in the present document.
  • the coding component 1904 may reduce the average bitrate of video from the input 1902 to the output of the coding component 1904 to produce a coded representation of the video.
  • the coding techniques are therefore sometimes called video compression or video transcoding techniques.
  • the output of the coding component 1904 may be either stored, or transmitted via a communication connection, as represented by the component 1906.
  • the stored or communicated bitstream (or coded) representation of the video received at the input 1902 may be used by the component 1908 for generating pixel values or displayable video that is sent to a display interface 1910.
  • the process of generating user-viewable video from the bitstream representation is sometimes called video decompression.
  • certain video processing operations are referred to as “coding” operations or tools, it will be appreciated that the coding tools or operations are used at an encoder and corresponding decoding tools or operations that reverse the results of the coding will be performed by a decoder.
  • peripheral bus interface or a display interface may include universal serial bus (USB) or high definition multimedia interface (HDMI) or Displayport, and so on.
  • storage interfaces include SATA (serial advanced technology attachment) , PCI, IDE interface, and the like.
  • FIG. 3 is a block diagram of a video processing apparatus 300.
  • the apparatus 300 may be used to implement one or more of the methods described herein.
  • the apparatus 300 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on.
  • the apparatus 300 may include one or more processors 302, one or more memories 304 and video processing hardware 306.
  • the processor (s) 302 may be configured to implement one or more methods described in the present document.
  • the memory (memories) 304 may be used for storing data and code used for implementing the methods and techniques described herein.
  • the video processing hardware 306 may be used to implement, in hardware circuitry, some techniques described in the present document.
  • a method of video processing (e.g., method 400 depicted in FIG. 4) , comprising determining (402) to use, for a conversion between a video block of video and a coded representation of the video, an interpolation filter for motion candidate interpolation using a rule; and performing (404) the conversion based on the determining, wherein the interpolation filter is one of a default interpolation filter and an alternative half-pel interpolation filter.
  • a method of video processing comprising determining, for a conversion between a coded representation of a video block and pixel values of the video block, whether a coding tool is used in the conversion, based on an information of an associated candidate used for generating a pairwise merge candidate during the conversion or information of a selected motion candidate in a merge list before adding the pairwise merge candidate to the merge list; and performing the conversion based on the determining.
  • a method of video processing comprising determining, for a conversion between a coded representation of a video block and pixel values of the video block, that a bi-prediction mode is used in generating motion candidates, including a default motion candidate, whether to use unequal weights in calculating the default motion candidate based on a condition; and performing the conversion based on the determining.
  • a method of video processing comprising performing a conversion between a coded representation of a video region and pixel values of the video region, wherein the conversion uses a list of motion candidates representing candidates for motion information of the video region, and wherein the list of motion candidates uses one or more motion candidates with motion vectors pointing to half-pel locations.
  • a method of video processing comprising performing a conversion between a coded representation of a video block and pixel values of the video block using a rule that specifies that the coded representation omits signaling of an alternative half-pel filter for merge candidate calculations during the conversion, due to no motion vector of the current block having a horizontal or a vertical half pel resolution.
  • a method of video processing comprising determining, during a conversion between a coded representation of a video block and pixel values of the video block, whether a re-ordering of a merge candidate list is performed based on usage of an alternative half-pel interpolation filter during the conversion or a coding condition; and performing the conversion based on the determining.
  • a video decoding apparatus comprising a processor configured to implement a method recited in one or more of solutions 1 to 31.
  • a video encoding apparatus comprising a processor configured to implement a method recited in one or more of solutions 1 to 31.
  • a computer program product having computer code stored thereon, the code, when executed by a processor, causes the processor to implement a method recited in any of solutions 1 to 31.
  • FIG. 6 shows a flowchart of an example method for video processing.
  • the method includes determining (602) , for a conversion between a video block of a video and a bitstream representation of the video block, whether an alternative luma half-pel interpolation filter is applied to all pairwise average candidates based on a flag which is used to represent whether alternative luma half-pel interpolation filter is employed or not; and performing (604) the conversion based on the determination.
  • the alternative luma half-pel interpolation filter is applied to all pairwise average candidates.
  • a default interpolation filter is applied to all pairwise average candidates.
  • the first value is 1 and the second value is 0, and the flag is UseAltHpelIf flag.
  • the method further comprises: determining whether half-pel motion vector interpolation is applied to all pairwise average candidates based on a second flag which is used to represent whether half-pel motion vector interpolation is employed or not.
  • FIG. 7 shows a flowchart of an example method for video processing.
  • the method includes determining (702) , for a conversion between a video block of the video and a bitstream representation of the video block, whether to enable or disable a coding tool for a pairwise average candidate based on information of associated candidates used for generating the pairwise average candidate; and performing (704) the conversion based on the determination.
  • the coding tool including at least one of Bi-prediction with CU-level weight (BCW) and alternative half-pel interpolation filter.
  • different pairwise average candidates may enable or disable the usage of the coding tool on-the-fly.
  • GBiIdx of the pairwise average candidate depends on the GBiIdx of only one candidate of a pair of candidates used for generating the pairwise average candidate, where GBiIdx is used to represent the generalized bi-prediction (GBi) index that indicates the used weighting factor in BCW.
  • GBiIdx of the pairwise average candidate is set equal to the GBiIdx of the first candidate of the pair candidates.
  • GBiIdx of the pairwise average candidate is set equal to the GBiIdx of the second candidate of the pair candidates.
  • GBiIdx of the pairwise average candidate, denoted as GBiIdxC, is derived as a function of the GBiIdxs of the pair of candidates used for generating the pairwise average candidate, which are denoted as GBiIdx1 and GBiIdx2 respectively.
  • GBiIdxC is set equal to the smaller one of GBiIdx1 and GBiIdx2.
  • GBiIdxC is set equal to the larger one of GBiIdx1 and GBiIdx2.
  • GBiIdxC is set equal to the mean of GBiIdx1 and GBiIdx2.
  • BCW is disabled for the pairwise average candidate when GBiIdx1 is not equal to GBiIdx2.
  • UseAltHpelIf flag which is used to represent whether alternative luma half-pel interpolation filter is employed or not, depends on two UseAltHpelIf flags of the pair of candidates.
  • the UseAltHpelIf flag of the pairwise average candidate is set equal to 1 if the UseAltHpelIf flags of both of the pair of candidates are equal to 1.
  • the UseAltHpelIf flag of the pairwise average candidate is set equal to 1 if the UseAltHpelIf flag of one of the pair of candidates is equal to 1.
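The GBiIdxC and UseAltHpelIf combination rules listed above can be sketched as follows. The rule names, the GBI_DEFAULT value, and the rounding of the integer mean are illustrative assumptions, not taken from any specification.

```python
GBI_DEFAULT = 0  # assumed index meaning equal bi-prediction weights

def pairwise_gbi_idx(gbi1, gbi2, rule="first"):
    """Derive GBiIdxC from GBiIdx1 and GBiIdx2 under one of the listed rules."""
    if rule == "first":
        return gbi1
    if rule == "second":
        return gbi2
    if rule == "min":
        return min(gbi1, gbi2)
    if rule == "max":
        return max(gbi1, gbi2)
    if rule == "mean":
        return (gbi1 + gbi2) // 2  # integer mean; rounding direction is an assumption
    if rule == "disable_if_unequal":
        # BCW falls back to equal weights when the two indices differ
        return gbi1 if gbi1 == gbi2 else GBI_DEFAULT
    raise ValueError(f"unknown rule: {rule}")

def pairwise_alt_hpel_flag(flag1, flag2, mode="and"):
    # "and": alternative filter kept only when both candidates use it;
    # "or":  kept when either candidate uses it
    return int(flag1 and flag2) if mode == "and" else int(flag1 or flag2)
```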
  • FIG. 8 shows a flowchart of an example method for video processing.
  • the method includes determining (802) , for a conversion between a video block of a video and a bitstream representation of the video block, whether to enable or disable unequal weights in bi-prediction weight for default motion candidates in a merge candidate list associated with the video block based on one or more conditions; and performing (804) the conversion based on the determination.
  • the one or more conditions include index of the default motion candidates.
  • the one or more conditions include slice or picture type.
  • the one or more conditions include all or some of the existing merge candidates in the merge list before the default motion candidates are added.
  • the one or more conditions include usage of unequal weights from spatial and/or temporal neighboring adjacent or non-adjacent blocks.
  • FIG. 9 shows a flowchart of an example method for video processing.
  • the method includes deriving (902) , for a conversion between a video block of a video and a bitstream representation of the video block, a merge candidate list associated with the video block; adding (904) one or more half-pel motion vector (MV) candidates with motion vectors pointing to half-pel to the merge candidate list; and performing (906) the conversion based on the merge candidate list.
  • the one or more half-pel MV candidates are added to the merge candidate list right after derivation of pairwise merge candidates and/or combined bi-predictive merge candidates.
  • the one or more half-pel MV candidates are added to the merge candidate list right after derivation of history motion vector prediction (HMVP) merge candidates.
  • whether to add one or more half-pel MV candidates or one or more zero MV candidates to the merge candidate list is changed from block to block based on decoded information from previously coded blocks and/or based on the merge candidates in the merge candidate list before adding these candidates.
  • one or more half-pel MV candidate and one or more zero MV candidate are both added to the merge candidate list.
  • the one or more half-pel MV candidate and the one or more zero MV candidate are added to the merge candidate list in an interleaved way.
  • the one or more half-pel MV candidates are added before all zero MV candidates.
  • the one or more half-pel MV candidates are added after all zero MV candidates.
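The insertion orders described above (interleaved, half-pel before all zero MV candidates, half-pel after all zero MV candidates) can be sketched as follows. The half-pel candidate set, the assumed 1/16-pel MV precision, and the tuple representation of an MV are illustrative assumptions.

```python
HALF = 8  # half-pel offset, assuming MVs are stored in 1/16-pel units

def pad_merge_list(merge_list, max_cands, order="interleaved"):
    """Pad a merge list to max_cands with half-pel MV and zero-MV candidates."""
    half_pel = [(HALF, 0), (0, HALF), (HALF, HALF)]  # illustrative candidate set
    need = max_cands - len(merge_list)
    zeros = [(0, 0)] * need
    if order == "before":        # all half-pel candidates first, then zero MVs
        pad = half_pel + zeros
    elif order == "after":       # zero MVs first, then half-pel candidates
        pad = zeros + half_pel
    else:                        # interleaved: alternate half-pel and zero-MV entries
        pad = []
        for h, z in zip(half_pel, zeros):
            pad += [h, z]
        pad += zeros             # fall back to zeros once half-pel candidates run out
    return (list(merge_list) + pad)[:max_cands]
```

A usage sketch: with one existing candidate and a list size of four, the interleaved order appends a half-pel candidate, a zero-MV candidate, and another half-pel candidate.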
  • FIG. 10 shows a flowchart of an example method for video processing.
  • the method includes determining (1002) , for a conversion between a video block of a video and a bitstream representation of the video block, whether to enable or disable half-pel interpolation filter for default motion candidates in a merge candidate list associated with the video block based on one or more conditions; and performing (1004) the conversion based on the determination.
  • the one or more conditions include index of the default motion candidates.
  • the one or more conditions include slice or picture type.
  • the one or more conditions include all or some of the existing merge candidates in the merge list before the default motion candidates are added.
  • the one or more conditions include usage of unequal weights from spatial and/or temporal neighboring adjacent or non-adjacent blocks.
  • FIG. 11 shows a flowchart of an example method for video processing.
  • the method includes determining (1102) , for a conversion between a video block of the video and a bitstream representation of the video block, whether to enable or disable a coding tool for a pairwise average candidate based on information of all or selected motion candidates in a merge candidate list before adding the pairwise average candidate to the merge candidate list; and performing (1104) the conversion based on the determination.
  • the selected motion candidates are those spatial merge candidates in the merge candidate list.
  • the selected motion candidates may be those history motion vector prediction (HMVP) candidates in the merge candidate list.
  • the selected motion candidates are one or multiple HMVP candidates in the HMVP table.
  • UseAltHpelIf flag and/or Bi-prediction with CU-level weight (BCW) index for the pairwise average candidate depend on a function of the information associated with the selected motion candidates.
  • the UseAltHpelIf for the pairwise average candidate is set to 1 or 0.
  • whether to enable/disable a tool depends on the usage of the tool from spatial or temporal neighboring adjacent or non-adjacent blocks.
  • FIG. 12 shows a flowchart of an example method for video processing.
  • the method includes determining (1202) , for a conversion between a video block of a video and a bitstream representation of the video block, a value of a flag which is used to represent whether alternative luma half-pel interpolation filter is employed or not based on motion vector (MV) of the video block; and performing (1204) the conversion based on the determination.
  • the flag is UseAltHpelIf flag.
  • UseAltHpelIf flag is set equal to zero if no MV of the video block refers to a horizontal and/or vertical half-pixel position.
  • UseAltHpelIf flag is set equal to zero if no MV of the video block refers to a horizontal and/or vertical half-pixel position when the video block is coded with a pairwise average candidate.
  • the method is applied to one or more new kinds of motion candidates that are derived from existing candidates added before the new kinds of motion candidates in a merge candidate list.
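The rule above, forcing UseAltHpelIf to zero when no MV of the block refers to a horizontal and/or vertical half-pixel position, can be sketched as follows. The 1/16-pel MV storage precision and the function name are assumptions for illustration.

```python
FULL_PEL = 16  # assumes MVs are stored in 1/16-pel units
HALF_PEL = 8   # fractional part equal to 8 means a half-pixel position

def clamp_alt_hpel_flag(inherited_flag, mvs):
    """Return the inherited UseAltHpelIf flag, or 0 when no MV component
    of the block sits on a horizontal or vertical half-pixel position."""
    has_half = any(x % FULL_PEL == HALF_PEL or y % FULL_PEL == HALF_PEL
                   for x, y in mvs)
    return inherited_flag if has_half else 0
```

Note that Python's `%` yields a non-negative remainder, so negative MV components such as -8 are correctly recognized as half-pel positions.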
  • FIG. 13 shows a flowchart of an example method for video processing.
  • the method includes performing (1302) , for a conversion between a video block of a video and a bitstream representation of the video block, a re-order process on motion candidates in a merge candidate list associated with the video block based on usage of alternative half-pel interpolation filter; and performing (1304) the conversion based on the determination.
  • candidates with the alternative half-pel interpolation filter enabled are put before those with alternative half-pel interpolation filter disabled.
  • candidates with the alternative half-pel interpolation filter enabled are put after those with alternative half-pel interpolation filter disabled.
  • order of candidates with the alternative half-pel interpolation filter enabled and disabled is adaptively changed based on decoded information including usage of the alternative half-pel interpolation filter in neighboring adjacent or non-adjacent blocks.
  • the re-order process is only performed to spatial merge candidates.
  • the re-order process is only performed to spatial merge candidates and history motion vector prediction (HMVP) merge candidates.
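The re-order process above can be sketched as a stable partition of the merge candidate list by alternative half-pel interpolation filter usage. The candidate representation and the `limit` mechanism for restricting the re-ordering to spatial (or spatial plus HMVP) candidates are illustrative assumptions.

```python
def reorder_by_alt_hpel(cands, enabled_first=True, limit=None):
    """Stable partition of merge candidates by alternative half-pel filter usage.

    enabled_first: put candidates with the alternative filter enabled before
    (True) or after (False) those with it disabled.
    limit: restrict the re-ordering to the first `limit` entries, e.g. only
    the spatial candidates; the remainder keeps its original order.
    """
    head = list(cands) if limit is None else list(cands[:limit])
    tail = [] if limit is None else list(cands[limit:])
    on = [c for c in head if c["alt_hpel"]]
    off = [c for c in head if not c["alt_hpel"]]
    return (on + off if enabled_first else off + on) + tail
```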
  • the conversion generates the video block of video from the bitstream representation.
  • the conversion generates the bitstream representation from the video block of video.
  • the performing the conversion includes using the results of a previous decision step (e.g., using or not using certain coding or decoding steps) during the encoding or decoding operation to arrive at the conversion results.
  • the disclosed and other solutions, examples, embodiments, modules and the functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them.
  • the disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus.
  • the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
  • data processing apparatus encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) .
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random-access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

Abstract

Improvements of merge candidates are disclosed. One example is a method for processing video, comprising: determining, for a conversion between a video block of a video and a bitstream representation of the video block, whether an alternative luma half-pel interpolation filter is applied to all pairwise average candidates based on a flag which is used to represent whether alternative luma half-pel interpolation filter is employed or not; and performing the conversion based on the determination.

Description

IMPROVEMENT OF MERGE CANDIDATES
CROSS-REFERENCE TO RELATED APPLICATION
Under the applicable patent law and/or rules pursuant to the Paris Convention, this application is made to timely claim the priority to and benefits of International Patent Application No. PCT/CN2019/103963, filed on September 2, 2019. The entire disclosures of International Patent Application No. PCT/CN2019/103963 are incorporated by reference as part of the disclosure of this application.
TECHNICAL FIELD
This patent document relates to video coding and decoding.
BACKGROUND
In spite of the advances in video compression, digital video still accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
SUMMARY
Devices, systems and methods related to digital video coding, and specifically, to video and image coding and decoding in which merge candidates are used during video encoding or decoding.
In one example aspect, a method of video processing is disclosed. The method includes determining to use, for a conversion between a video block of video and a coded representation of the video, an interpolation filter for motion candidate interpolation using a rule; and performing the conversion based on the determining, wherein the interpolation filter is one of a default interpolation filter and an alternative half-pel interpolation filter.
In another example aspect, another method of video processing is disclosed. The  method includes determining, for a conversion between a coded representation of a video block and pixel values of the video block, whether a coding tool is used in the conversion, based on an information of an associated candidate used for generating a pairwise merge candidate during the conversion or information of a selected motion candidate in a merge list before adding the pairwise merge candidate to the merge list; and performing the conversion based on the determining.
In another example aspect, another method of video processing is disclosed. The method includes determining, for a conversion between a coded representation of a video block and pixel values of the video block, that a bi-prediction mode is used in generating motion candidates, including a default motion candidate, whether to use unequal weights in calculating the default motion candidate based on a condition; and performing the conversion based on the determining.
In another example aspect, another method of video processing is disclosed. The method includes performing a conversion between a coded representation of a video region and pixel values of the video region, wherein the conversion uses a list of motion candidates representing candidates for motion information of the video region, and wherein the list of motion candidates uses one or more motion candidates with motion vectors pointing to half-pel locations.
In another example aspect, another method of video processing is disclosed. The method includes performing a conversion between a coded representation of a video block and pixel values of the video block using a rule that specifies that the coded representation omits signaling of an alternative half-pel filter for merge candidate calculations during the conversion, due to no motion vector of the current block having a horizontal or a vertical half pel resolution.
In another example aspect, another method of video processing is disclosed. The method includes determining, during a conversion between a coded representation of a video block and pixel values of the video block, whether a re-ordering of a merge candidate list is performed based on usage of an alternative half-pel interpolation filter during the conversion or a coding condition; and performing the conversion based on the determining.
In another example aspect, another method of video processing is disclosed. The method includes determining, for a conversion between a video block of a video and a bitstream  representation of the video block, whether an alternative luma half-pel interpolation filter is applied to all pairwise average candidates based on a flag which is used to represent whether alternative luma half-pel interpolation filter is employed or not; and performing the conversion based on the determination.
In another example aspect, another method of video processing is disclosed. The method includes determining, for a conversion between a video block of the video and a bitstream representation of the video block, whether to enable or disable a coding tool for a pairwise average candidate based on information of associated candidates used for generating the pairwise average candidate; and performing the conversion based on the determination.
In another example aspect, another method of video processing is disclosed. The method includes determining, for a conversion between a video block of a video and a bitstream representation of the video block, whether to enable or disable unequal weights in bi-prediction weight for default motion candidates in a merge candidate list associated with the video block based on one or more conditions; and performing the conversion based on the determination.
In another example aspect, another method of video processing is disclosed. The method includes deriving, for a conversion between a video block of a video and a bitstream representation of the video block, a merge candidate list associated with the video block; adding one or more half-pel motion vector (MV) candidates with motion vectors pointing to half-pel to the merge candidate list; and performing the conversion based on the merge candidate list.
In another example aspect, another method of video processing is disclosed. The method includes determining, for a conversion between a video block of a video and a bitstream representation of the video block, whether to enable or disable half-pel interpolation filter for default motion candidates in a merge candidate list associated with the video block based on one or more conditions; and performing the conversion based on the determination.
In another example aspect, another method of video processing is disclosed. The method includes determining, for a conversion between a video block of the video and a bitstream representation of the video block, whether to enable or disable a coding tool for a pairwise average candidate based on information of all or selected motion candidates in a merge candidate list before adding the pairwise average candidate to the merge candidate list; and performing the conversion based on the determination.
In another example aspect, another method of video processing is disclosed. The method includes determining, for a conversion between a video block of a video and a bitstream representation of the video block, a value of a flag which is used to represent whether alternative luma half-pel interpolation filter is employed or not based on motion vector (MV) of the video block; and performing the conversion based on the determination.
In another example aspect, another method of video processing is disclosed. The method includes performing, for a conversion between a video block of a video and a bitstream representation of the video block, a re-order process on motion candidates in a merge candidate list associated with the video block based on usage of alternative half-pel interpolation filter; and performing the conversion based on the determination.
In yet another representative aspect, the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
In yet another representative aspect, a device that is configured or operable to perform the above-described method is disclosed. The device may include a processor that is programmed to implement this method.
In yet another representative aspect, a video decoder apparatus may implement a method as described herein.
In yet another representative aspect, a computer program product stores on a non-transitory computer readable media, the computer program product including program code for carrying out the method as described herein.
In yet another representative aspect, a non-transitory computer readable medium has recorded thereon program code for carrying out the method as described herein.
In yet another representative aspect, a non-transitory computer-readable recording medium stores a bitstream representation which is generated by the method as described herein performed by a video processing apparatus.
The above and other aspects and features of the disclosed technology are described in greater detail in the drawings, the description and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A-1B show example tables used for signaling. FIG. 1A shows the VTM-5.0  table, and FIG. 1B shows the proposed table.
FIG. 2 shows a graphical example of filter characteristics.
FIG. 3 is a block diagram of an example implementation of a hardware platform for video processing.
FIG. 4 is a flowchart for an example method of video processing.
FIG. 5 is a block diagram of an example video processing system in which disclosed techniques may be implemented.
FIG. 6 is a flowchart for an example method of video processing.
FIG. 7 is a flowchart for an example method of video processing.
FIG. 8 is a flowchart for an example method of video processing.
FIG. 9 is a flowchart for an example method of video processing.
FIG. 10 is a flowchart for an example method of video processing.
FIG. 11 is a flowchart for an example method of video processing.
FIG. 12 is a flowchart for an example method of video processing.
FIG. 13 is a flowchart for an example method of video processing.
DETAILED DESCRIPTION
Embodiments of the disclosed technology may be applied to existing video coding standards (e.g., HEVC, H.265) and future standards to improve compression performance. Section headings are used in the present document to improve readability of the description and do not in any way limit the discussion or the embodiments (and/or implementations) to the respective sections only.
1. Summary
This document is related to video coding technologies. Specifically, it is related to merge candidates in video coding. It may be applied to an existing video coding standard like HEVC, or to the Versatile Video Coding (VVC) standard to be finalized. It may also be applicable to future video coding standards or video codecs.
2. Background
Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/HEVC standards. Since H.262, video coding standards have been based on a hybrid video coding structure in which temporal prediction plus transform coding is utilized. To explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded jointly by VCEG and MPEG in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM). In April 2018, the Joint Video Expert Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC standard, targeting a 50% bitrate reduction compared to HEVC.
2.1. Pair-wise average merge candidates derivation
Pairwise average candidates are generated by averaging predefined pairs of candidates in the existing merge candidate list. The predefined pairs are defined as {(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)}, where the numbers denote the merge indices into the merge candidate list. The averaged motion vectors are calculated separately for each reference list. If both motion vectors are available in one list, they are averaged even when they point to different reference pictures; if only one motion vector is available, it is used directly; if no motion vector is available, the list is kept invalid.
When the merge list is not full after pairwise average merge candidates are added, zero MVPs are inserted at the end until the maximum merge candidate number is reached.
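The per-list averaging rule above can be sketched as follows. This is a minimal illustration with hypothetical names: motion vectors are plain integer (x, y) tuples, None marks an unavailable MV, and the exact rounding offset applied by the specification when averaging is ignored here.

```python
def pairwise_average_mv(mv0, mv1):
    """Combine two MVs for one reference list, per the pairwise rule.

    If both MVs are available they are averaged (even if they point to
    different reference pictures); a single available MV is used directly;
    if neither is available the list remains invalid (None).
    """
    if mv0 is not None and mv1 is not None:
        # Simplified average: the spec invokes its MV rounding process here.
        return ((mv0[0] + mv1[0]) >> 1, (mv0[1] + mv1[1]) >> 1)
    if mv0 is not None:
        return mv0
    if mv1 is not None:
        return mv1
    return None  # list kept invalid
```
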
2.2. Bi-prediction with CU-level weight (BCW)
In HEVC, the bi-prediction signal is generated by averaging two prediction signals obtained from two different reference pictures and/or using two different motion vectors. In VTM6, the bi-prediction mode is extended beyond simple averaging to allow weighted averaging of the two prediction signals.
P_bi-pred = ((8 − w) * P_0 + w * P_1 + 4) >> 3      (3-1)
Five weights are allowed in the weighted averaging bi-prediction, w ∈ {−2, 3, 4, 5, 10}. For each bi-predicted CU, the weight w is determined in one of two ways: 1) for a non-merge CU, the weight index is signalled after the motion vector difference; 2) for a merge CU, the weight index is inferred from neighbouring blocks based on the merge candidate index. Weighted averaging bi-prediction is only applied to CUs with 256 or more luma samples (i.e., CU width times CU height is greater than or equal to 256). For low-delay pictures, all 5 weights are used. For non-low-delay pictures, only 3 weights (w ∈ {3, 4, 5}) are used.
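The weighted average of equation (3-1) can be written out directly. This is a per-sample sketch with a hypothetical function name; integer sample values are assumed.

```python
def bcw_bi_pred_sample(p0, p1, w):
    """Weighted bi-prediction per equation (3-1): ((8-w)*P0 + w*P1 + 4) >> 3."""
    assert w in (-2, 3, 4, 5, 10), "the five allowed BCW weights"
    return ((8 - w) * p0 + w * p1 + 4) >> 3

# With w = 4 the expression reduces to the ordinary (rounded) average
# used by HEVC bi-prediction:
# bcw_bi_pred_sample(100, 60, 4) == 80
```
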
– At the encoder, fast search algorithms are applied to find the weight index without significantly increasing the encoder complexity. These algorithms are summarized as follows. For further details readers are referred to the VTM software and document. When combined with AMVR, unequal weights are only conditionally checked for 1-pel and 4-pel motion vector precisions if the current picture is a low-delay picture.
– When combined with affine, affine ME will be performed for unequal weights if and only if the affine mode is selected as the current best mode.
– When the two reference pictures in bi-prediction are the same, unequal weights are only conditionally checked.
– Unequal weights are not searched when certain conditions are met, depending on the POC distance between current picture and its reference pictures, the coding QP, and the temporal level.
The BCW weight index is coded using one context coded bin followed by bypass coded bins. The first context coded bin indicates if equal weight is used; and if unequal weight is used, additional bins are signalled using bypass coding to indicate which unequal weight is used.
Weighted prediction (WP) is a coding tool supported by the H.264/AVC and HEVC standards to efficiently code video content with fading. Support for WP was also added into the VVC standard. WP allows weighting parameters (weight and offset) to be signalled for each reference picture in each of the reference picture lists L0 and L1. Then, during motion compensation, the weight(s) and offset(s) of the corresponding reference picture(s) are applied. WP and BCW are designed for different types of video content. In order to avoid interactions between WP and BCW, which would complicate VVC decoder design, if a CU uses WP, then the BCW weight index is not signalled, and w is inferred to be 4 (i.e., equal weight is applied). For a merge CU, the weight index is inferred from neighbouring blocks based on the merge candidate index. This can be applied to both normal merge mode and inherited affine merge mode. For constructed affine merge mode, the affine motion information is constructed based on the motion information of up to 3 blocks. The following process is used to derive the BCW index for a CU using the constructed affine merge mode.
1. Divide the range of BCW indices {0, 1, 2, 3, 4} into three groups {0}, {1, 2, 3} and {4}. If all of the control points' BCW indices are from the same group, the BCW index is derived as in step 2; otherwise, the BCW index is set to 2.
2. If at least two control points have the same BCW index, then this BCW index value is assigned to the candidate; otherwise the BCW index of the current constructed candidate is set to 2.
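The two steps above can be sketched as one small function. The function name is illustrative; the input is the list of BCW indices of the (up to 3) control points.

```python
def bcw_index_for_constructed_affine(cp_bcw_indices):
    """Derive the BCW index of a constructed affine merge candidate.

    Step 1: indices are grouped into {0}, {1, 2, 3}, {4}; if the control
    points' indices do not all fall into the same group, return 2.
    Step 2: if at least two control points share the same index, use it;
    otherwise return the default index 2.
    """
    def group(idx):
        return 0 if idx == 0 else (1 if idx in (1, 2, 3) else 2)

    if len({group(i) for i in cp_bcw_indices}) != 1:
        return 2
    for i in cp_bcw_indices:
        if cp_bcw_indices.count(i) >= 2:
            return i
    return 2
```
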
BCW is also known as generalized bi-prediction (GBi) .
2.3. Switchable interpolation filter
2.3.1 Half-pel AMVR mode (CE4-1.1)
An additional AMVR mode for non-affine non-merge inter-coded CUs is proposed which allows signaling of motion vector differences at half-pel accuracy. The existing AMVR scheme of the current VVC draft is extended in a straightforward way: directly following the syntax element amvr_flag, if amvr_flag == 1, there is a new context-modeled binary syntax element hpel_amvr_flag which indicates usage of the new half-pel AMVR mode if hpel_amvr_flag == 1. Otherwise, i.e., if hpel_amvr_flag == 0, the selection between full-pel and 4-pel AMVR mode is indicated by the syntax element amvr_precision_flag as in the current VVC draft.
The AMVR signalling is extended as shown in FIG. 1A-1B. FIG. 1A shows the VTM-5.0 table, and FIG. 1B shows the proposed table.
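The extended signalling can be summarized as a small decision function. This is a sketch of the flag semantics described above; the mapping of amvr_precision_flag == 0 to full-pel is an assumption, and the function name is illustrative.

```python
def mv_precision(amvr_flag, hpel_amvr_flag=0, amvr_precision_flag=0):
    """Map the AMVR flags to the MVD precision in luma samples."""
    if amvr_flag == 0:
        return 0.25          # quarter-pel, the default precision
    if hpel_amvr_flag == 1:
        return 0.5           # the new half-pel AMVR mode
    # Assumption: amvr_precision_flag == 0 selects full-pel, 1 selects 4-pel.
    return 1 if amvr_precision_flag == 0 else 4
```
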
2.3.2 Alternative luma half-pel interpolation filters with merging (CE4-1.2, CE4-1.3)
Alternative luma half-pel interpolation filters are proposed where the following two 6-tap filters with smoothing characteristics, denoted as FlatTop and Gauss, are tested in CE4-1 (see FIG. 2) .
Figure PCTCN2020113024-appb-000001
Explicit signalling
For a non-affine and non-merge inter-coded CU which uses half-pel motion vector accuracy (i.e., the half-pel AMVR mode), an alternative luma half-pel interpolation filter is used. For test 1.2, the Gauss luma half-pel interpolation filter is used. For test 1.3, a switch between the two alternative half-pel interpolation filters is made based on the value of a new syntax element hpel_if_idx. The syntax element hpel_if_idx is only signaled in the case of half-pel AMVR mode, as follows:
AMVR mode          hpel_if_idx                    Interpolation filter
QPEL, FPEL, 4PEL   not present (inferred to 2)    HEVC
HPEL               0                              FlatTop
HPEL               1                              Gauss
Implicit Signalling
In the case of skip/merge mode using a spatial merging candidate, the information about which interpolation filter is applied for the half-pel position is inherited from the neighbouring block.
3. Examples of technical problems solved by technical solutions disclosed herein
The current design of the pairwise average candidate can be further improved, for example, with respect to how to set its GBi index and whether to use the alternative luma half-pel interpolation filter.
In VTM-6.0, the GBi index of a pairwise average candidate is set equal to GBI_DEFAULT (i.e., equal weights for the two prediction blocks are used). If the alternative luma half-pel interpolation filter flags of the two merge candidates used for generating a pairwise merge candidate are equal, the alternative luma half-pel interpolation filter flag of the pairwise merge candidate is set equal to that of the merge candidate with the smaller merge index; otherwise, it is set equal to false.
For default merge candidates (i.e., zero motion candidates) , the half sample interpolation filter index hpelIfIdx of every new candidate being added is set equal to 0. The bi-prediction weight index (i.e., GBi index) of every new candidate being added is set equal to GBI_DEFAULT.
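For reference, the VTM-6.0 behavior described in the two paragraphs above amounts to the following sketch. The function name is illustrative, and GBI_DEFAULT = 2 is an assumption consistent with the default index 2 used in the constructed affine merge rule of section 2.2.

```python
GBI_DEFAULT = 2  # assumed index of the equal-weight (w = 4) entry

def pairwise_flags_vtm6(use_alt_hpel_1, use_alt_hpel_2):
    """UseAltHpelIf flag and GBi index of a pairwise candidate in VTM-6.0.

    Equal source flags are inherited (by equality this is just the shared
    value); unequal flags fall back to the default filter. The GBi index
    of a pairwise candidate is always GBI_DEFAULT.
    """
    use_alt_hpel = use_alt_hpel_1 if use_alt_hpel_1 == use_alt_hpel_2 else False
    return use_alt_hpel, GBI_DEFAULT
```

Default (zero motion) candidates likewise always use hpelIfIdx = 0 and GBI_DEFAULT, which is exactly the restriction items 3 and 5 in the next section propose to relax.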
4. A listing of embodiments and techniques
The listing of items below should be considered as examples to explain general concepts. These items should not be interpreted in a narrow way. Furthermore, these items can be combined in any manner.
Hereinafter, GBiIdx is used to represent the GBi index that indicates the weighting factor used in BCW (a.k.a. GBi), and UseAltHpelIf is used to represent whether the alternative luma half-pel interpolation filter is employed or not (e.g., when UseAltHpelIf is equal to 1, the alternative luma half-pel interpolation filter is used; otherwise, it is not used). Cand1 and Cand2 denote the two merge candidates used for generating a pairwise merge candidate.
1. The alternative luma half-pel interpolation filter may be applied to all pairwise candidates.
a. Alternatively, the UseAltHpelIf flag of a pairwise average candidate may be set equal to 0, i.e., the default interpolation filter may always be used.
b. Alternatively, furthermore, the above methods may be applied to half-pel motion vector interpolation.
2. Whether to enable or disable a coding tool (e.g., BCW/alternative half-pel interpolation filter) for a pairwise average candidate may depend on the information of the associated candidates used for generating the pairwise merge candidate.
a. In one example, different pairwise merge candidates may determine whether to enable or disable the coding tool on-the-fly.
b. In one example, GBiIdx of a pairwise average candidate may depend on the GBiIdx of only one candidate of the pair.
i. In one example, GBiIdx of a pairwise average candidate may be set equal to the GBiIdx of Cand1.
ii. In one example, GBiIdx of a pairwise average candidate may be set equal to the GBiIdx of Cand2.
c. In one example, GBiIdx of a pairwise average candidate (denoted as GBiIdxC) may be derived as a function of the GBiIdxs of the two candidates of the pair (denoted as GBiIdx1 and GBiIdx2) .
i. In one example, GBiIdxC may be set equal to the smaller GBiIdx of Cand1 and Cand2.
ii. In one example, GBiIdxC may be set equal to the larger GBiIdx of Cand1 and Cand2.
iii. In one example, GBiIdxC may be set equal to the mean of GBiIdx of Cand1 and Cand2.
iv. In one example, BCW may be disabled for a pairwise average candidate when the GBiIdx of Cand1 is not equal to the GBiIdx of Cand2.
v. In one example, GBiIdxC = (GBiIdx1 == GBiIdx2 ? GBiIdx1 : GBI_DEFAULT).
d. In one example, UseAltHpelIf flag of a pairwise average candidate may depend on the UseAltHpelIf flag of only one candidate of the pair.
i. In one example, UseAltHpelIf flag of a pairwise average candidate may be set equal to that associated with one candidate, e.g., Cand1 or Cand2.
ii. In one example, UseAltHpelIf flag may be set to false when the UseAltHpelIf of Cand1 is not equal to the UseAltHpelIf of Cand2.
e. In one example, UseAltHpelIf flag of a pairwise average candidate may depend on the UseAltHpelIf flags of the two candidates of the pair.
i. In one example, UseAltHpelIf flag of a pairwise average candidate may be set equal to 1 if the UseAltHpelIf flag of Cand1 and Cand2 are both equal to 1.
ii. In one example, UseAltHpelIf flag of a pairwise average candidate may be set equal to 1 if UseAltHpelIf flag of Cand1 or Cand2 is equal to 1.
3. Instead of always disabling unequal weights in GBi (i.e., bi-prediction weight index being equal to GBI_DEFAULT) in default motion candidates, it is proposed to allow unequal weights for default motion candidates.
a. In one example, whether to enable/disable unequal weights may depend on the index of default motion candidates.
b. In one example, whether to enable/disable unequal weights may depend on slice/picture type.
c. In one example, whether to enable/disable unequal weights may depend on all or some of the existing merge candidates in the merge list before the default motion candidates are added.
d. In one example, whether to enable/disable unequal weights may depend on the usage of unequal weights from spatial/temporal neighboring (adjacent or non-adjacent) blocks.
4. It is proposed to add motion candidates with motion vectors pointing to half-pel positions.
a. In one example, half-pel MV candidates may be added to merge candidate lists right after the derivation of pairwise merge candidates/combined bi-predictive merge candidates.
b. In one example, half-pel MV candidates may be added to merge candidate lists after the derivation of HMVP candidates.
c. In one example, whether to add half-pel MV candidate or zero MV candidate (the default ones in current design) may be changed from block to block, such as based on decoded information from previously coded blocks and/or based on the merge candidates before adding these default candidates.
d. In one example, half-pel MV candidate and zero MV candidate (the default ones in current design) may be both added to the motion candidate list.
i. In one example, they may be added in an interleaved way.
ii. In one example, half-pel MV candidates may be added before all zero MV candidates.
iii. In one example, half-pel MV candidates may be added after all zero MV candidates.
5. Instead of always disabling the alternative half-pel interpolation filter for default motion candidates, it is proposed to allow the alternative half-pel interpolation filter for default motion candidates.
a. In one example, whether to enable/disable half-pel interpolation filter may depend on the index of default motion candidates.
b. In one example, whether to enable/disable half-pel interpolation filter may depend on slice/picture type.
c. In one example, whether to enable/disable the half-pel interpolation filter may depend on all or some of the existing merge candidates in the merge list before the default motion candidates are added.
d. In one example, whether to enable/disable the half-pel interpolation filter may depend on the usage of the alternative half-pel interpolation filter in spatial/temporal neighboring (adjacent or non-adjacent) blocks.
6. Whether to enable or disable a coding tool for a pairwise average candidate may depend on the information of all or some of the motion candidates (named selected motion candidates) in the merge list before adding the pairwise average candidate.
a. In one example, the selected motion candidates may be those spatial merge candidates;
b. In one example, the selected motion candidates may be those HMVP candidates in the merge candidate list;
c. In one example, the subset of motion candidates may be one or multiple HMVP candidates in the HMVP table;
d. In one example, the UseAltHpelIf flag and/or BCW index may depend on a function of the information associated with the selected motion candidates.
i. In one example, if there are more candidates with UseAltHpelIf equal to 1 than the remaining candidates, then UseAltHpelIf may be set to 1 (or 0) .
e. In one example, whether to enable/disable a tool may depend on the usage of the tool from spatial/temporal neighboring (adjacent or non-adjacent) blocks.
7. The UseAltHpelIf flag is set equal to zero if no MV of a block refers to a horizontal and/or vertical half-pel position.
a. In one example, the UseAltHpelIf flag is set equal to zero if no MV of a block refers to a horizontal and/or vertical half-pel position when the current block is coded with a pairwise average candidate.
8. The ‘pairwise average candidate’ mentioned above may be replaced by other new kinds of motion candidates that are derived from existing candidates added before the new kinds of motion candidates, such as combined bi-predictive merge candidates.
9. Motion candidates in the merge candidate list may be re-ordered based on the usage of the alternative half-pel interpolation filter.
a. In one example, candidates with the alternative half-pel interpolation filter enabled may be put before those with alternative half-pel interpolation filter disabled.
b. In one example, candidates with the alternative half-pel interpolation filter enabled may be put after those with alternative half-pel interpolation filter disabled.
c. In one example, the order of candidates with the alternative half-pel interpolation filter enabled and disabled may be adaptively changed based on the decoded information, such as usage of the alternative half-pel interpolation filter in neighboring (adjacent or non-adjacent) blocks.
d. Alternatively, furthermore, the proposed method may be only applied to spatial merge candidates.
e. Alternatively, furthermore, the proposed method may be only applied to spatial merge candidates and HMVP candidates.
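Several of the alternatives in items 2.c and 2.e above can be sketched as selectable policies. Function names and the policy strings are illustrative, and GBI_DEFAULT = 2 is an assumption consistent with the default index 2 used in section 2.2.

```python
GBI_DEFAULT = 2  # assumed default (equal-weight) BCW index

def pairwise_gbi_idx(g1, g2, policy="equal_or_default"):
    """Candidate derivations of GBiIdxC from item 2.c."""
    if policy == "min":
        return min(g1, g2)                       # item 2.c.i
    if policy == "max":
        return max(g1, g2)                       # item 2.c.ii
    if policy == "mean":
        return (g1 + g2) // 2                    # item 2.c.iii
    # item 2.c.v: GBiIdxC = (GBiIdx1 == GBiIdx2 ? GBiIdx1 : GBI_DEFAULT)
    return g1 if g1 == g2 else GBI_DEFAULT

def pairwise_use_alt_hpel(f1, f2, policy="and"):
    """Candidate derivations of UseAltHpelIf from item 2.e."""
    # "and": both candidates must use the filter (item 2.e.i);
    # "or": either candidate suffices (item 2.e.ii).
    return (f1 and f2) if policy == "and" else (f1 or f2)
```
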
5. Embodiment
Deleted parts are highlighted with strikethrough
Figure PCTCN2020113024-appb-000002
and newly added parts are highlighted in grey.
5.1. Embodiment #1 on UseAltHpelIf
The UseAltHpelIf flag of a pairwise average candidate may be set equal to false.
8.5.2.4 Derivation process for pairwise average merging candidate
Inputs to this process are:
– a merging candidate list mergeCandList,
– the reference indices refIdxL0N and refIdxL1N of every candidate N in mergeCandList,
– the prediction list utilization flags predFlagL0N and predFlagL1N of every candidate N in mergeCandList,
– the motion vectors in 1/16 fractional-sample accuracy mvL0N and mvL1N of every candidate N in mergeCandList,
Figure PCTCN2020113024-appb-000003
– the number of elements numCurrMergeCand within mergeCandList.
Outputs of this process are:
– the merging candidate list mergeCandList,
– the number of elements numCurrMergeCand within mergeCandList,
– the reference indices refIdxL0avgCand and refIdxL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the prediction list utilization flags predFlagL0avgCand and predFlagL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the motion vectors in 1/16 fractional-sample accuracy mvL0avgCand and mvL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the half sample interpolation filter index hpelIfIdxavgCand of every candidate avgCand added into mergeCandList during the invocation of this process.
The variable numRefLists is derived as follows:
numRefLists = (slice_type = = B) ? 2: 1                                       (8-344)
The following assignments are made, with p0Cand being the candidate at position 0 and p1Cand being the candidate at position 1 in the merging candidate list mergeCandList:
p0Cand = mergeCandList [0]                                           (8-345)
p1Cand = mergeCandList [1]                                                     (8-346)
The candidate avgCand is added at the end of mergeCandList, i.e., mergeCandList [numCurrMergeCand] is set equal to avgCand, and the reference indices, the prediction list utilization flags and the motion vectors of avgCand are derived as follows and numCurrMergeCand is incremented by 1:
– For each reference picture list LX with X ranging from 0 to (numRefLists -1) , the following applies:
– If predFlagLXp0Cand is equal to 1 and predFlagLXp1Cand is equal to 1, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , and mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp0Cand                                    (8-347)
predFlagLXavgCand = 1                                             (8-348)
– The rounding process for motion vectors as specified in clause 8.5.2.14 is invoked with mvX set equal to mvLXp0Cand [0] + mvLXp1Cand [0] , rightShift set equal to 1, and leftShift set equal to 0 as inputs and the rounded mvLXavgCand [0] as output.
– The rounding process for motion vectors as specified in clause 8.5.2.14 is invoked with mvX set equal to mvLXp0Cand [1] + mvLXp1Cand [1] , rightShift set equal to 1, and leftShift set equal to 0 as inputs and the rounded mvLXavgCand [1] as output.
– Otherwise, if predFlagLXp0Cand is equal to 1 and predFlagLXp1Cand is equal to 0, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp0Cand                                              (8-349)
predFlagLXavgCand = 1                                                          (8-350)
mvLXavgCand [0] = mvLXp0Cand [0]                                            (8-351)
mvLXavgCand [1] = mvLXp0Cand [1]                                            (8-352)
– Otherwise, if predFlagLXp0Cand is equal to 0 and predFlagLXp1Cand is equal to 1, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp1Cand                                  (8-353)
predFlagLXavgCand = 1                                                (8-354)
mvLXavgCand [0] = mvLXp1Cand [0]                                           (8-355)
mvLXavgCand [1] = mvLXp1Cand [1]                                            (8-356)
– Otherwise, if predFlagLXp0Cand is equal to 0 and predFlagLXp1Cand is equal to 0, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = -1                                                          (8-357)
predFlagLXavgCand = 0                                                         (8-358)
mvLXavgCand [0] = 0                                                          (8-359)
mvLXavgCand [1] = 0                                                          (8-360)
– When numRefLists is equal to 1, the following applies:
refIdxL1avgCand = -1                                                           (8-361)
predFlagL1avgCand = 0                                                         (8-362)
– The half sample interpolation filter index hpelIfIdxavgCand is derived as follows:
– hpelIfIdxavgCand is set equal to 0.
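Embodiment #1 can be sketched end-to-end for one reference list LX. Names are illustrative, and the rounding helper assumes the symmetric round-half-away-from-zero behavior of the MV rounding process in clause 8.5.2.14.

```python
def round_mv(mv_x, right_shift=1, left_shift=0):
    """MV rounding in the style of clause 8.5.2.14 (assumed symmetric)."""
    offset = 0 if right_shift == 0 else 1 << (right_shift - 1)
    if mv_x >= 0:
        rounded = (mv_x + offset) >> right_shift
    else:
        rounded = -((-mv_x + offset) >> right_shift)
    return rounded << left_shift

def avg_cand_for_list(pred0, mv0, pred1, mv1, ref_idx0, ref_idx1):
    """Derive (predFlag, refIdx, mv) of avgCand for one reference list LX."""
    if pred0 and pred1:
        # Both available: average via the rounding process (rightShift = 1).
        mv = tuple(round_mv(a + b) for a, b in zip(mv0, mv1))
        return 1, ref_idx0, mv
    if pred0:
        return 1, ref_idx0, mv0
    if pred1:
        return 1, ref_idx1, mv1
    return 0, -1, (0, 0)

# Per Embodiment #1, the filter index of avgCand is simply the default:
HPEL_IF_IDX_AVG_CAND = 0
```
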
5.2. Embodiment #2 on UseAltHpelIf
The UseAltHpelIf flag of a pairwise average candidate may be set equal to the UseAltHpelIf flag of Cand1.
8.5.2.4 Derivation process for pairwise average merging candidate
Inputs to this process are:
– a merging candidate list mergeCandList,
– the reference indices refIdxL0N and refIdxL1N of every candidate N in mergeCandList,
– the prediction list utilization flags predFlagL0N and predFlagL1N of every candidate N in mergeCandList,
– the motion vectors in 1/16 fractional-sample accuracy mvL0N and mvL1N of every candidate N in mergeCandList,
– the half sample interpolation filter index hpelIfIdxN of every candidate N in mergeCandList,
– the number of elements numCurrMergeCand within mergeCandList.
Outputs of this process are:
– the merging candidate list mergeCandList,
– the number of elements numCurrMergeCand within mergeCandList,
– the reference indices refIdxL0avgCand and refIdxL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the prediction list utilization flags predFlagL0avgCand and predFlagL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the motion vectors in 1/16 fractional-sample accuracy mvL0avgCand and mvL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the half sample interpolation filter index hpelIfIdxavgCand of every candidate avgCand added into mergeCandList during the invocation of this process.
The variable numRefLists is derived as follows:
numRefLists = (slice_type = = B) ? 2: 1                                        (8-344)
The following assignments are made, with p0Cand being the candidate at position 0 and p1Cand being the candidate at position 1 in the merging candidate list mergeCandList:
p0Cand = mergeCandList [0]                                                     (8-345)
p1Cand = mergeCandList [1]                                                     (8-346)
The candidate avgCand is added at the end of mergeCandList, i.e., mergeCandList [numCurrMergeCand] is set equal to avgCand, and the reference indices, the prediction list utilization flags and the motion vectors of avgCand are derived as follows and numCurrMergeCand is incremented by 1:
– For each reference picture list LX with X ranging from 0 to (numRefLists -1) , the following applies:
– If predFlagLXp0Cand is equal to 1 and predFlagLXp1Cand is equal to 1, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , and mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp0Cand                                              (8-347)
predFlagLXavgCand = 1                                                          (8-348)
– The rounding process for motion vectors as specified in clause 8.5.2.14 is invoked with mvX set equal to mvLXp0Cand [0] + mvLXp1Cand [0] , rightShift set equal to 1, and leftShift set equal to 0 as inputs and the rounded mvLXavgCand [0] as output.
– The rounding process for motion vectors as specified in clause 8.5.2.14 is invoked with mvX set equal to mvLXp0Cand [1] + mvLXp1Cand [1] , rightShift set equal to 1, and leftShift set equal to 0 as inputs and the rounded mvLXavgCand [1] as output.
– Otherwise, if predFlagLXp0Cand is equal to 1 and predFlagLXp1Cand is equal to 0, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp0Cand                                              (8-349)
predFlagLXavgCand = 1                                                          (8-350)
mvLXavgCand [0] = mvLXp0Cand [0]                                            (8-351)
mvLXavgCand [1] = mvLXp0Cand [1]                                            (8-352)
– Otherwise, if predFlagLXp0Cand is equal to 0 and predFlagLXp1Cand is equal to 1, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp1Cand                                              (8-353)
predFlagLXavgCand = 1                                                          (8-354)
mvLXavgCand [0] = mvLXp1Cand [0]                                            (8-355)
mvLXavgCand [1] = mvLXp1Cand [1]                                            (8-356)
– Otherwise, if predFlagLXp0Cand is equal to 0 and predFlagLXp1Cand is equal to 0, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = -1                                                          (8-357)
predFlagLXavgCand = 0                                                         (8-358)
mvLXavgCand [0] = 0                                                          (8-359)
mvLXavgCand [1] = 0                                                          (8-360)
– When numRefLists is equal to 1, the following applies:
refIdxL1avgCand = -1                                                           (8-361)
predFlagL1avgCand = 0                                                         (8-362)
– The half sample interpolation filter index hpelIfIdxavgCand is derived as follows:
– hpelIfIdxavgCand is set equal to hpelIfIdxp0Cand.
5.3. Embodiment #3 on UseAltHpelIf
The UseAltHpelIf flag of a pairwise average candidate may be set equal to the UseAltHpelIf flag of Cand2.
8.5.2.4 Derivation process for pairwise average merging candidate
Inputs to this process are:
– a merging candidate list mergeCandList,
– the reference indices refIdxL0N and refIdxL1N of every candidate N in mergeCandList,
– the prediction list utilization flags predFlagL0N and predFlagL1N of every candidate N in mergeCandList,
– the motion vectors in 1/16 fractional-sample accuracy mvL0N and mvL1N of every candidate N in mergeCandList,
– the half sample interpolation filter index hpelIfIdxN of every candidate N in mergeCandList,
– the number of elements numCurrMergeCand within mergeCandList.
Outputs of this process are:
– the merging candidate list mergeCandList,
– the number of elements numCurrMergeCand within mergeCandList,
– the reference indices refIdxL0avgCand and refIdxL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the prediction list utilization flags predFlagL0avgCand and predFlagL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the motion vectors in 1/16 fractional-sample accuracy mvL0avgCand and mvL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the half sample interpolation filter index hpelIfIdxavgCand of every candidate avgCand added into mergeCandList during the invocation of this process.
The variable numRefLists is derived as follows:
numRefLists = (slice_type = = B) ? 2: 1                                        (8-344)
The following assignments are made, with p0Cand being the candidate at position 0 and p1Cand being the candidate at position 1 in the merging candidate list mergeCandList:
p0Cand = mergeCandList [0]                                                     (8-345)
p1Cand = mergeCandList [1]                                                     (8-346)
The candidate avgCand is added at the end of mergeCandList, i.e., mergeCandList [numCurrMergeCand] is set equal to avgCand, and the reference indices, the prediction list utilization flags and the motion vectors of avgCand are derived as follows and numCurrMergeCand is incremented by 1:
– For each reference picture list LX with X ranging from 0 to (numRefLists -1) , the following applies:
– If predFlagLXp0Cand is equal to 1 and predFlagLXp1Cand is equal to 1, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , and mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp0Cand                                              (8-347)
predFlagLXavgCand = 1                                                          (8-348)
– The rounding process for motion vectors as specified in clause 8.5.2.14 is invoked with mvX set equal to mvLXp0Cand [0] + mvLXp1Cand [0] , rightShift set equal to 1, and leftShift set equal to 0 as inputs and the rounded mvLXavgCand [0] as output.
– The rounding process for motion vectors as specified in clause 8.5.2.14 is invoked with mvX set equal to mvLXp0Cand [1] + mvLXp1Cand [1] , rightShift set equal to 1, and leftShift set equal to 0 as inputs and the rounded mvLXavgCand [1] as output.
– Otherwise, if predFlagLXp0Cand is equal to 1 and predFlagLXp1Cand is equal to 0, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp0Cand                                              (8-349)
predFlagLXavgCand = 1                                                          (8-350)
mvLXavgCand [0] = mvLXp0Cand [0]                                            (8-351)
mvLXavgCand [1] = mvLXp0Cand [1]                                            (8-352)
– Otherwise, if predFlagLXp0Cand is equal to 0 and predFlagLXp1Cand is equal to 1, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp1Cand                                              (8-353)
predFlagLXavgCand = 1                                                          (8-354)
mvLXavgCand [0] = mvLXp1Cand [0]                                            (8-355)
mvLXavgCand [1] = mvLXp1Cand [1]                                            (8-356)
– Otherwise, if predFlagLXp0Cand is equal to 0 and predFlagLXp1Cand is equal to 0, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = -1                                                          (8-357)
predFlagLXavgCand = 0                                                         (8-358)
mvLXavgCand [0] = 0                                                          (8-359)
mvLXavgCand [1] = 0                                                          (8-360)
– When numRefLists is equal to 1, the following applies:
refIdxL1avgCand = -1                                                           (8-361)
predFlagL1avgCand = 0                                                          (8-362)
– The half sample interpolation filter index hpelIfIdxavgCand is derived as follows:
– hpelIfIdxavgCand is set equal to hpelIfIdxp1Cand.
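The branch structure of the derivation process above can be sketched compactly in Python. This is an illustrative sketch, not normative text: the rounding formulation used for clause 8.5.2.14 is one common draft-text formulation and is an assumption here, as are the dictionary field names.

```python
# Sketch of the per-list pairwise average derivation above.
# MV components are in 1/16 fractional-sample units.

def round_mv(mv, right_shift=1, left_shift=0):
    """Rounding process of clause 8.5.2.14 (one common draft formulation)."""
    offset = 0 if right_shift == 0 else 1 << (right_shift - 1)
    if mv >= 0:
        mv = (mv + offset) >> right_shift
    else:
        mv = -((-mv + offset) >> right_shift)
    return mv << left_shift

def average_lx(p0, p1):
    """p0/p1: dicts with 'predFlag', 'refIdx' and 'mv' (an (x, y) pair) for list LX."""
    if p0["predFlag"] and p1["predFlag"]:
        # Both candidates predict from LX: average the MVs with rounding,
        # inherit the reference index of p0Cand.
        mv = tuple(round_mv(a + b) for a, b in zip(p0["mv"], p1["mv"]))
        return {"predFlag": 1, "refIdx": p0["refIdx"], "mv": mv}
    if p0["predFlag"]:
        # Only p0Cand predicts from LX: copy it.
        return {"predFlag": 1, "refIdx": p0["refIdx"], "mv": p0["mv"]}
    if p1["predFlag"]:
        # Only p1Cand predicts from LX: copy it.
        return {"predFlag": 1, "refIdx": p1["refIdx"], "mv": p1["mv"]}
    # Neither candidate predicts from LX.
    return {"predFlag": 0, "refIdx": -1, "mv": (0, 0)}
```

For example, averaging MVs (4, -3) and (7, -3) yields (6, -3) under this rounding.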
5.4. Embodiment #4 on GBiIdx
GBiIdx of a pairwise average candidate may be equal to the GBiIdx of Cand1.
8.5.2.4 Derivation process for pairwise average merging candidate
Inputs to this process are:
– a merging candidate list mergeCandList,
– the reference indices refIdxL0N and refIdxL1N of every candidate N in mergeCandList,
– the prediction list utilization flags predFlagL0N and predFlagL1N of every candidate N in mergeCandList,
– the motion vectors in 1/16 fractional-sample accuracy mvL0N and mvL1N of every candidate N in mergeCandList,
– the half sample interpolation filter index hpelIfIdxN of every candidate N in mergeCandList,
– the bi-prediction weight index bcwIdxN of every candidate N in mergeCandList,
– the number of elements numCurrMergeCand within mergeCandList.
Outputs of this process are:
– the merging candidate list mergeCandList,
– the number of elements numCurrMergeCand within mergeCandList,
– the reference indices refIdxL0avgCand and refIdxL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the prediction list utilization flags predFlagL0avgCand and predFlagL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the motion vectors in 1/16 fractional-sample accuracy mvL0avgCand and mvL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the half sample interpolation filter index hpelIfIdxavgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the bi-prediction weight index bcwIdxavgCand of candidate avgCand added into mergeCandList during the invocation of this process.
The variable numRefLists is derived as follows:
numRefLists = (slice_type == B) ? 2 : 1                                        (8-344)
The following assignments are made, with p0Cand being the candidate at position 0 and p1Cand being the candidate at position 1 in the merging candidate list mergeCandList:
p0Cand = mergeCandList [0]                                                     (8-345)
p1Cand = mergeCandList [1]                                                     (8-346)
The candidate avgCand is added at the end of mergeCandList, i.e., mergeCandList [numCurrMergeCand] is set equal to avgCand, and the reference indices, the prediction list utilization flags and the motion vectors of avgCand are derived as follows and numCurrMergeCand is incremented by 1:
– For each reference picture list LX with X ranging from 0 to (numRefLists -1) , the following applies:
– If predFlagLXp0Cand is equal to 1 and predFlagLXp1Cand is equal to 1, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , and mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp0Cand                                              (8-347)
predFlagLXavgCand = 1                                                          (8-348)
– The rounding process for motion vectors as specified in clause 8.5.2.14 is invoked with mvX set equal to mvLXp0Cand [0] + mvLXp1Cand [0] , rightShift set equal to 1, and leftShift set equal to 0 as inputs and the rounded mvLXavgCand [0] as output.
– The rounding process for motion vectors as specified in clause 8.5.2.14 is invoked with mvX set equal to mvLXp0Cand [1] + mvLXp1Cand [1] , rightShift set equal to 1, and leftShift set equal to 0 as inputs and the rounded mvLXavgCand [1] as output.
– Otherwise, if predFlagLXp0Cand is equal to 1 and predFlagLXp1Cand is equal to 0, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp0Cand                                              (8-349)
predFlagLXavgCand = 1                                                          (8-350)
mvLXavgCand [0] = mvLXp0Cand [0]                                            (8-351)
mvLXavgCand [1] = mvLXp0Cand [1]                                            (8-352)
– Otherwise, if predFlagLXp0Cand is equal to 0 and predFlagLXp1Cand is equal to 1, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp1Cand                                              (8-353)
predFlagLXavgCand = 1                                                          (8-354)
mvLXavgCand [0] = mvLXp1Cand [0]                                            (8-355)
mvLXavgCand [1] = mvLXp1Cand [1]                                            (8-356)
– Otherwise, if predFlagLXp0Cand is equal to 0 and predFlagLXp1Cand is equal to 0, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = -1                                                          (8-357)
predFlagLXavgCand = 0                                                         (8-358)
mvLXavgCand [0] = 0                                                          (8-359)
mvLXavgCand [1] = 0                                                          (8-360)
– When numRefLists is equal to 1, the following applies:
refIdxL1avgCand = -1                                                           (8-361)
predFlagL1avgCand = 0                                                         (8-362)
– The half sample interpolation filter index hpelIfIdxavgCand is derived as follows:
– If hpelIfIdxp0Cand is equal to hpelIfIdxp1Cand, hpelIfIdxavgCand is set equal to hpelIfIdxp0Cand.
– Otherwise, hpelIfIdxavgCand is set equal to 0.
– The bi-prediction weight index bcwIdxavgCand is derived as follows:
bcwIdxavgCand is set equal to bcwIdxp0Cand.
5.5. Embodiment #5 on GBiIdx
GBiIdx of a pairwise average candidate may be equal to the GBiIdx of Cand2.
8.5.2.4 Derivation process for pairwise average merging candidate
Inputs to this process are:
– a merging candidate list mergeCandList,
– the reference indices refIdxL0N and refIdxL1N of every candidate N in mergeCandList,
– the prediction list utilization flags predFlagL0N and predFlagL1N of every candidate N in mergeCandList,
– the motion vectors in 1/16 fractional-sample accuracy mvL0N and mvL1N of every candidate N in mergeCandList,
– the half sample interpolation filter index hpelIfIdxN of every candidate N in mergeCandList,
– the bi-prediction weight index bcwIdxN of every candidate N in mergeCandList,
– the number of elements numCurrMergeCand within mergeCandList.
Outputs of this process are:
– the merging candidate list mergeCandList,
– the number of elements numCurrMergeCand within mergeCandList,
– the reference indices refIdxL0avgCand and refIdxL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the prediction list utilization flags predFlagL0avgCand and predFlagL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the motion vectors in 1/16 fractional-sample accuracy mvL0avgCand and mvL1avgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the half sample interpolation filter index hpelIfIdxavgCand of candidate avgCand added into mergeCandList during the invocation of this process,
– the bi-prediction weight index bcwIdxavgCand of candidate avgCand added into mergeCandList during the invocation of this process.
The variable numRefLists is derived as follows:
numRefLists = (slice_type == B) ? 2 : 1                                         (8-344)
The following assignments are made, with p0Cand being the candidate at position 0 and p1Cand being the candidate at position 1 in the merging candidate list mergeCandList:
p0Cand = mergeCandList [0]                                                      (8-345)
p1Cand = mergeCandList [1]                                                      (8-346)
The candidate avgCand is added at the end of mergeCandList, i.e., mergeCandList [numCurrMergeCand] is set equal to avgCand, and the reference indices, the prediction list utilization flags and the motion vectors of avgCand are derived as follows and numCurrMergeCand is incremented by 1:
– For each reference picture list LX with X ranging from 0 to (numRefLists -1) , the following applies:
– If predFlagLXp0Cand is equal to 1 and predFlagLXp1Cand is equal to 1, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , and mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp0Cand                                               (8-347)
predFlagLXavgCand = 1                                                          (8-348)
– The rounding process for motion vectors as specified in clause 8.5.2.14 is invoked with mvX set equal to mvLXp0Cand [0] + mvLXp1Cand [0] , rightShift set equal to 1, and leftShift set equal to 0 as inputs and the rounded mvLXavgCand [0] as output.
– The rounding process for motion vectors as specified in clause 8.5.2.14 is invoked with mvX set equal to mvLXp0Cand [1] + mvLXp1Cand [1] , rightShift set equal to 1, and leftShift set equal to 0 as inputs and the rounded mvLXavgCand [1] as output.
– Otherwise, if predFlagLXp0Cand is equal to 1 and predFlagLXp1Cand is equal to 0, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp0Cand                                               (8-349)
predFlagLXavgCand = 1                                                          (8-350)
mvLXavgCand [0] = mvLXp0Cand [0]                                             (8-351)
mvLXavgCand [1] = mvLXp0Cand [1]                                             (8-352)
– Otherwise, if predFlagLXp0Cand is equal to 0 and predFlagLXp1Cand is equal to 1, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = refIdxLXp1Cand                                           (8-353)
predFlagLXavgCand = 1                                                      (8-354)
mvLXavgCand [0] = mvLXp1Cand [0]                                         (8-355)
mvLXavgCand [1] = mvLXp1Cand [1]                                         (8-356)
– Otherwise, if predFlagLXp0Cand is equal to 0 and predFlagLXp1Cand is equal to 0, the variables refIdxLXavgCand, predFlagLXavgCand, mvLXavgCand [0] , mvLXavgCand [1] are derived as follows:
refIdxLXavgCand = -1                                                       (8-357)
predFlagLXavgCand = 0                                                      (8-358)
mvLXavgCand [0] = 0                                                       (8-359)
mvLXavgCand [1] = 0                                                       (8-360)
– When numRefLists is equal to 1, the following applies:
refIdxL1avgCand = -1                                                       (8-361)
predFlagL1avgCand = 0                                                      (8-362)
– The half sample interpolation filter index hpelIfIdxavgCand is derived as follows:
– If hpelIfIdxp0Cand is equal to hpelIfIdxp1Cand, hpelIfIdxavgCand is set equal to hpelIfIdxp0Cand.
– Otherwise, hpelIfIdxavgCand is set equal to 0.
– The bi-prediction weight index bcwIdxavgCand is derived as follows:
bcwIdxavgCand is set equal to bcwIdxp1Cand.
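Embodiments #4 and #5 differ only in which source candidate the bi-prediction weight index of avgCand is inherited from. A minimal sketch of that difference, together with the half sample interpolation filter rule the two embodiments share (helper names are hypothetical, not normative text):

```python
# Illustrative comparison of Embodiments #4 and #5 (hypothetical helpers).

def derive_hpel_if_idx(hpel0, hpel1):
    # Shared rule: inherit the filter index when both source candidates
    # agree, otherwise fall back to the default index 0.
    return hpel0 if hpel0 == hpel1 else 0

def derive_bcw_idx(bcw0, bcw1, embodiment):
    # Embodiment #4 inherits bcwIdx from p0Cand, Embodiment #5 from p1Cand.
    return bcw0 if embodiment == 4 else bcw1
```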
FIG. 5 is a block diagram showing an example video processing system 1900 in which various techniques disclosed herein may be implemented. Various implementations may include some or all of the components of the system 1900. The system 1900 may include input 1902 for receiving video content. The video content may be received in a raw or uncompressed format, e.g., 8 or 10 bit multi-component pixel values, or may be in a compressed or encoded format. The input 1902 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interface include wired interfaces such as Ethernet, passive optical network (PON) , etc. and wireless interfaces such as Wi-Fi or cellular interfaces.
The system 1900 may include a coding component 1904 that may implement the various coding or encoding methods described in the present document. The coding component 1904 may reduce the average bitrate of video from the input 1902 to the output of the coding component 1904 to produce a coded representation of the video. The coding techniques are therefore sometimes called video compression or video transcoding techniques. The output of the coding component 1904 may be either stored, or transmitted via a communication connection, as represented by the component 1906. The stored or communicated bitstream (or coded) representation of the video received at the input 1902 may be used by the component 1908 for generating pixel values or displayable video that is sent to a display interface 1910. The process of generating user-viewable video from the bitstream representation is sometimes called video decompression. Furthermore, while certain video processing operations are referred to as “coding” operations or tools, it will be appreciated that the coding tools or operations are used at an encoder and corresponding decoding tools or operations that reverse the results of the coding will be performed by a decoder.
Examples of a peripheral bus interface or a display interface may include universal serial bus (USB) or high definition multimedia interface (HDMI) or DisplayPort, and so on. Examples of storage interfaces include SATA (serial advanced technology attachment) , PCI, IDE interface, and the like. The techniques described in the present document may be embodied in various electronic devices such as mobile phones, laptops, smartphones or other devices that are capable of performing digital data processing and/or video display.
FIG. 3 is a block diagram of a video processing apparatus 300. The apparatus 300 may be used to implement one or more of the methods described herein. The apparatus 300 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on. The apparatus 300 may include one or more processors 302, one or more memories 304 and video processing hardware 306. The processor (s) 302 may be configured to implement one or more methods described in the present document. The memory (memories) 304 may be used for storing data and code used for implementing the methods and techniques described herein. The video processing hardware 306 may be used to implement, in hardware circuitry, some techniques described in the present document.
The following solutions may be implemented as preferred solutions in some embodiments.
The following solutions may be implemented together with additional techniques  described in items listed in the previous section (e.g., item 1) .
1. A method of video processing (e.g., method 400 depicted in FIG. 4) , comprising determining (402) to use, for a conversion between a video block of video and a coded representation of the video, an interpolation filter for motion candidate interpolation using a rule; and performing (404) the conversion based on the determining, wherein the interpolation filter is one of a default interpolation filter and an alternative half-pel interpolation filter.
2. The method of solution 1, wherein the rule specifies to use the alternative half-pel interpolation filter for the video block due to the video block being in a video region.
3. The method of any of solutions 1-2, wherein the determining corresponds to a flag in the coded representation, wherein a first value of the flag indicates use of the default interpolation filter and a second value of the flag indicates use of the alternative interpolation filter.
The following solutions may be implemented together with additional techniques described in items listed in the previous section (e.g., item 2, 6) .
4. A method of video processing, comprising determining, for a conversion between a coded representation of a video block and pixel values of the video block, whether a coding tool is used in the conversion, based on information of an associated candidate used for generating a pairwise merge candidate during the conversion or information of a selected motion candidate in a merge list before adding the pairwise merge candidate to the merge list; and performing the conversion based on the determining.
5. The method of solution 4, wherein the coding tool comprises use of generalized bi-prediction weights for generating the pairwise merge candidate.
6. The method of solution 4, wherein the coding tool comprises using an alternative half-pel interpolation filter for generating the pairwise merge candidate.
7. The method of any of solutions 4-6, wherein an index to the pairwise merge candidate is derived as a function of indexes to candidates used for generating the pairwise merge candidate.
8. The method of any of solutions 4-6, wherein whether half pixel calculations are used for generating the merge candidate depends on use of half pixel calculations in determining one or both candidates in the pair.
9. The method of solution 8, wherein a flag in the coded representation is included to  indicate use of half pixel calculations.
10. The method of solution 4, wherein the selected motion candidate includes a spatial merge candidate.
11. The method of solution 4, wherein the selected motion candidate includes a history based motion vector predictor candidate.
The following solutions may be implemented together with additional techniques described in items listed in the previous section (e.g., item 3) .
12. A method of video processing, comprising determining, for a conversion between a coded representation of a video block and pixel values of the video block in which a bi-prediction mode is used in generating motion candidates, including a default motion candidate, whether to use unequal weights in calculating the default motion candidate based on a condition; and performing the conversion based on the determining.
13. The method of solution 12, wherein the condition depends on an index of the default motion candidate.
14. The method of solution 12, wherein the condition depends on a slice or a type of picture containing the video block.
15. The method of solution 12, wherein the condition depends on already existing merge candidates in a list before calculating the default motion candidate.
The following solutions may be implemented together with additional techniques described in items listed in the previous section (e.g., item 4) .
16. A method of video processing, comprising performing a conversion between a coded representation of a video region and pixel values of the video region, wherein the conversion uses a list of motion candidates representing candidates for motion information of the video region, and wherein the list of motion candidates includes one or more motion candidates with motion vectors pointing to half-pel locations.
17. The method of solution 16, wherein the one or more motion candidates are added to a merge candidate list immediately after derivation of pairwise merge candidates or combined bi-predictive merge candidates.
18. The method of solution 16, wherein the one or more motion candidates are added to a merge candidate list immediately after history based motion vector predictor candidates.
19. The method of solution 16, wherein the one or more motion candidates are added along with zero motion vector candidates to the list.
The following solutions may be implemented together with additional techniques described in items listed in the previous section (e.g., item 5) .
20. The method of any of solutions 1-19, wherein the default motion candidate is enabled for using the alternative half-pel interpolation filter based on a coding condition.
21. The method of solution 20, wherein the coding condition includes an index of the default motion candidate.
22. The method of solution 20, wherein the coding condition includes a position of the video block.
The following solutions may be implemented together with additional techniques described in items listed in the previous section (e.g., item 7) .
23. A method of video processing, comprising performing a conversion between a coded representation of a video block and pixel values of the video block using a rule that specifies that the coded representation omits signaling of an alternative half-pel filter for merge candidate calculations during the conversion, due to no motion vector of the current block having a horizontal or a vertical half pel resolution.
The following solutions may be implemented together with additional techniques described in items listed in the previous section (e.g., item 8) .
24. The method of any of solutions 1-23, wherein the pairwise average candidate comprises a candidate calculated using a pre-defined motion candidate calculation scheme.
The following solutions may be implemented together with additional techniques described in items listed in the previous section (e.g., item 9) .
25. A method of video processing, comprising determining, during a conversion between a coded representation of a video block and pixel values of the video block, whether a re-ordering of a merge candidate list is performed based on usage of an alternative half-pel interpolation filter during the conversion or a coding condition; and performing the conversion based on the determining.
26. The method of solution 25, wherein the reordering puts candidates with the alternative half-pel interpolation filter enabled before those with alternative half-pel interpolation  filter disabled.
27. The method of solution 25, wherein the reordering puts candidates with the alternative half-pel interpolation filter enabled after those with alternative half-pel interpolation filter disabled.
28. The method of any of solutions 25-27, wherein the coding condition specifies re-ordering only of spatial merge candidates.
29. The method of any of solutions 25-28, wherein the coding condition specifies re-ordering only of spatial merge candidates and history based motion vector predictor candidates.
30. The method of any of solutions 1 to 29, wherein the conversion comprises encoding the video into the coded representation.
31. The method of any of solutions 1 to 29, wherein the conversion comprises decoding the coded representation to generate pixel values of the video.
32. A video decoding apparatus comprising a processor configured to implement a method recited in one or more of solutions 1 to 31.
33. A video encoding apparatus comprising a processor configured to implement a method recited in one or more of solutions 1 to 31.
34. A computer program product having computer code stored thereon, the code, when executed by a processor, causes the processor to implement a method recited in any of solutions 1 to 31.
35. A method, apparatus or system described in the present document.
FIG. 6 shows a flowchart of an example method for video processing. The method includes determining (602) , for a conversion between a video block of a video and a bitstream representation of the video block, whether an alternative luma half-pel interpolation filter is applied to all pairwise average candidates based on a flag which is used to represent whether alternative luma half-pel interpolation filter is employed or not; and performing (604) the conversion based on the determination.
In some examples, when the flag has a first value, the alternative luma half-pel interpolation filter is applied to all pairwise average candidates.
In some examples, when the flag has a second value different from the first value, a default interpolation filter is applied to all pairwise average candidates.
In some examples, the first value is 1 and the second value is 0, and the flag is UseAltHpelIf flag.
In some examples, the method further comprises: determining whether half-pel motion vector interpolation is applied to all pairwise average candidates based on a second flag which is used to represent whether half-pel motion vector interpolation is employed or not.
FIG. 7 shows a flowchart of an example method for video processing. The method includes determining (702) , for a conversion between a video block of the video and a bitstream representation of the video block, whether to enable or disable a coding tool for a pairwise average candidate based on information of associated candidates used for generating the pairwise average candidate; and performing (704) the conversion based on the determination.
In some examples, the coding tool includes at least one of Bi-prediction with CU-level weight (BCW) and an alternative half-pel interpolation filter.
In some examples, different pairwise average candidates may enable or disable the usage of the coding tool on the fly.
In some examples, GBiIdx of the pairwise average candidate depends on the GBiIdx of only one candidate of a pair of candidates used for generating the pairwise average candidate, where GBiIdx is used to represent the generalized bi-prediction (GBi) index that indicates the weighting factor used in BCW.
In some examples, GBiIdx of the pairwise average candidate is set equal to the GBiIdx of the first candidate of the pair of candidates.
In some examples, GBiIdx of the pairwise average candidate is set equal to the GBiIdx of the second candidate of the pair of candidates.
In some examples, GBiIdx of the pairwise average candidate, which is denoted as GBiIdxC, is derived as a function of the GBiIdxs of a pair of candidates used for generating the pairwise average candidate, which are denoted as GBiIdx1 and GBiIdx2 respectively.
In some examples, GBiIdxC is set equal to the smaller one of GBiIdx1 and GBiIdx2.
In some examples, GBiIdxC is set equal to the larger one of GBiIdx1 and GBiIdx2.
In some examples, GBiIdxC is set equal to the mean of GBiIdx1 and GBiIdx2.
In some examples, BCW is disabled for the pairwise average candidate when GBiIdx1 is not equal to GBiIdx2.
In some examples, GBiIdxC = (GBiIdx1 == GBiIdx2 ? GBiIdx1 : GBI_DEFAULT) , where GBI_DEFAULT indicates that equal weights for two prediction blocks are used.
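The GBiIdxC alternatives listed above can be sketched as one function; the mode names and the numeric value chosen for GBI_DEFAULT are assumptions for illustration only.

```python
# Sketch of the alternative GBiIdxC derivations described above.
GBI_DEFAULT = 2  # index signalling equal weights; actual value is codec-specific

def gbi_idx_pairwise(gbi1, gbi2, mode):
    if mode == "min":
        return min(gbi1, gbi2)            # smaller of the two source indices
    if mode == "max":
        return max(gbi1, gbi2)            # larger of the two source indices
    if mode == "mean":
        return (gbi1 + gbi2) // 2         # integer mean; exact rounding unspecified
    # "same_or_default": keep a shared index, else fall back to equal weights
    return gbi1 if gbi1 == gbi2 else GBI_DEFAULT
```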
In some examples, UseAltHpelIf flag, which is used to represent whether alternative luma half-pel interpolation filter is employed or not, depends on two UseAltHpelIf flags of the pair of candidates.
In some examples, the UseAltHpelIf flag of the pairwise average candidate is set equal to 1 if the UseAltHpelIf flags of both of the pair of candidates are equal to 1.
In some examples, the UseAltHpelIf flag of the pairwise average candidate is set equal to 1 if the UseAltHpelIf flag of one of the pair of candidates is equal to 1.
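The two alternatives above amount to an AND or an OR of the two source flags; a minimal sketch, with a hypothetical parameter selecting between them:

```python
# Sketch of the two UseAltHpelIf derivation rules for a pairwise average
# candidate (the require_both parameter is hypothetical).

def use_alt_hpel_if(flag0, flag1, require_both=True):
    if require_both:
        # Set to 1 only if both source candidates use the alternative filter.
        return 1 if (flag0 == 1 and flag1 == 1) else 0
    # Set to 1 if either source candidate uses the alternative filter.
    return 1 if (flag0 == 1 or flag1 == 1) else 0
```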
FIG. 8 shows a flowchart of an example method for video processing. The method includes determining (802) , for a conversion between a video block of a video and a bitstream representation of the video block, whether to enable or disable unequal weights in bi-prediction weight for default motion candidates in a merge candidate list associated with the video block based on one or more conditions; and performing (804) the conversion based on the determination.
In some examples, the one or more conditions include an index of the default motion candidates.
In some examples, the one or more conditions include slice or picture type.
In some examples, the one or more conditions include all or part of the existing merge candidates in the merge list before the default motion candidates are added.
In some examples, the one or more conditions include usage of unequal weights from spatially and/or temporally adjacent or non-adjacent neighboring blocks.
FIG. 9 shows a flowchart of an example method for video processing. The method includes deriving (902) , for a conversion between a video block of a video and a bitstream representation of the video block, a merge candidate list associated with the video block; adding (904) one or more half-pel motion vector (MV) candidates with motion vectors pointing to half-pel to the merge candidate list; and performing (906) the conversion based on the merge candidate list.
In some examples, the one or more half-pel MV candidates are added to the merge candidate list right after derivation of pairwise merge candidates and/or combined bi-predictive merge candidates.
In some examples, the one or more half-pel MV candidates are added to the merge  candidate list right after derivation of history motion vector prediction (HMVP) merge candidates.
In some examples, whether to add one or more half-pel MV candidates or one or more zero MV candidates to the merge candidate list is changed from block to block based on decoded information from previously coded blocks and/or based on the merge candidates in the merge candidate list before adding these candidates.
In some examples, one or more half-pel MV candidates and one or more zero MV candidates are both added to the merge candidate list.
In some examples, the one or more half-pel MV candidates and the one or more zero MV candidates are added to the merge candidate list in an interleaved way.
In some examples, the one or more half-pel MV candidates are added before all zero MV candidates.
In some examples, the one or more half-pel MV candidates are added after all zero MV candidates.
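The three insertion orders described above (interleaved, half-pel candidates first, zero MV candidates first) can be sketched as list padding. The function and parameter names are hypothetical, not taken from any reference implementation.

```python
# Sketch of padding a merge candidate list with half-pel MV candidates and
# zero MV candidates in the three orders described above.

def pad_merge_list(merge_list, half_pel, zeros, max_size, order="interleaved"):
    if order == "half_pel_first":
        extra = half_pel + zeros
    elif order == "zero_first":
        extra = zeros + half_pel
    else:
        # Alternate between the two kinds of padding candidates.
        extra = []
        for h, z in zip(half_pel, zeros):
            extra += [h, z]
        # Append the leftovers of the longer list.
        longer = half_pel if len(half_pel) > len(zeros) else zeros
        extra += longer[min(len(half_pel), len(zeros)):]
    return (merge_list + extra)[:max_size]
```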
FIG. 10 shows a flowchart of an example method for video processing. The method includes determining (1002) , for a conversion between a video block of a video and a bitstream representation of the video block, whether to enable or disable half-pel interpolation filter for default motion candidates in a merge candidate list associated with the video block based on one or more conditions; and performing (1004) the conversion based on the determination.
In some examples, the one or more conditions include an index of the default motion candidates.
In some examples, the one or more conditions include slice or picture type.
In some examples, the one or more conditions include all or part of the existing merge candidates in the merge list before the default motion candidates are added.
In some examples, the one or more conditions include usage of unequal weights from spatially and/or temporally adjacent or non-adjacent neighboring blocks.
FIG. 11 shows a flowchart of an example method for video processing. The method includes determining (1102) , for a conversion between a video block of the video and a bitstream representation of the video block, whether to enable or disable a coding tool for a pairwise average candidate based on information of all or selected motion candidates in a merge candidate list before adding the pairwise average candidate to the merge candidate list; and performing (1104) the  conversion based on the determination.
In some examples, the selected motion candidates are those spatial merge candidates in the merge candidate list.
In some examples, the selected motion candidates are the history motion vector prediction (HMVP) candidates in the merge candidate list.
In some examples, the selected motion candidates are one or multiple HMVP candidates in the HMVP table.
In some examples, the UseAltHpelIf flag and/or the Bi-prediction with CU-level weight (BCW) index for the pairwise average candidate depends on a function of the information associated with the selected motion candidates.
In some examples, if more candidates have UseAltHpelIf equal to 1 than the remaining candidates, the UseAltHpelIf for the pairwise average candidate is set to 1 or 0.
In some examples, whether to enable or disable a tool depends on the usage of the tool in spatially or temporally neighboring adjacent or non-adjacent blocks.
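The majority-style derivation described above can be sketched as a small function over the flags of the selected candidates. The tie-breaking choice (returning 0 on a tie) is an assumption; the document allows setting the flag to either 1 or 0 in that situation.

```python
# Sketch of deriving UseAltHpelIf for a pairwise average candidate as a
# majority function of the UseAltHpelIf flags of selected motion candidates
# (e.g., the spatial or HMVP candidates) already in the merge list.

def pairwise_use_alt_hpel_if(selected_flags):
    """selected_flags: list of UseAltHpelIf values (0/1) of the selected
    motion candidates. Returns the flag for the pairwise average candidate."""
    ones = sum(selected_flags)
    zeros = len(selected_flags) - ones
    # If more candidates have UseAltHpelIf == 1 than the remaining ones,
    # enable the alternative filter; otherwise (including ties) disable it.
    return 1 if ones > zeros else 0
```

The same majority pattern could, under the same caveats, be applied to the BCW index.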
FIG. 12 shows a flowchart of an example method for video processing. The method includes determining (1202) , for a conversion between a video block of a video and a bitstream representation of the video block, a value of a flag which is used to represent whether an alternative luma half-pel interpolation filter is employed, based on a motion vector (MV) of the video block; and performing (1204) the conversion based on the determination.
In some examples, the flag is UseAltHpelIf flag.
In some examples, the UseAltHpelIf flag is set equal to zero if no MV of the video block refers to a horizontal and/or vertical half-pixel position.
In some examples, the UseAltHpelIf flag is set equal to zero if no MV of the video block refers to a horizontal and/or vertical half-pixel position when the video block is coded with a pairwise average candidate.
In some examples, the method is applied to one or more new kinds of motion candidates that are derived from existing candidates added before the new kinds of motion candidates in a merge candidate list.
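The half-pel check above can be sketched with a simple bitmask test. The sketch assumes MV components are stored in 1/16-luma-sample units (as in VVC), so a half-pel component has fractional part 8; the function names are illustrative.

```python
# Sketch: clear UseAltHpelIf when no MV of the block points to a half-pel
# position. Assumes 1/16-pel MV storage, so the low 4 bits hold the
# fractional part and a value of 8 means exactly one half sample.

HALF = 8
FRAC_MASK = 15  # low 4 bits = fractional 1/16-pel part

def is_half_pel(mv):
    mvx, mvy = mv
    return (mvx & FRAC_MASK) == HALF or (mvy & FRAC_MASK) == HALF

def derive_use_alt_hpel_if(signalled_flag, mvs):
    """mvs: list of (mvx, mvy) pairs for the block (one per prediction
    direction). Returns 0 if no MV refers to a horizontal or vertical
    half-pel position; otherwise keeps the inherited/signalled flag."""
    return signalled_flag if any(is_half_pel(mv) for mv in mvs) else 0
```

Note that bitwise AND handles negative components correctly in Python, so an MV of (-8, 0) is still recognised as half-pel.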
FIG. 13 shows a flowchart of an example method for video processing. The method includes performing (1302) , for a conversion between a video block of a video and a bitstream representation of the video block, a re-order process on motion candidates in a merge candidate list associated with the video block based on usage of an alternative half-pel interpolation filter; and performing (1304) the conversion based on the re-ordered merge candidate list.
In some examples, candidates with the alternative half-pel interpolation filter enabled are placed before those with the alternative half-pel interpolation filter disabled.
In some examples, candidates with the alternative half-pel interpolation filter enabled are placed after those with the alternative half-pel interpolation filter disabled.
In some examples, the order of candidates with the alternative half-pel interpolation filter enabled and disabled is adaptively changed based on decoded information, including usage of the alternative half-pel interpolation filter in neighboring adjacent or non-adjacent blocks.
In some examples, the re-order process is performed only on spatial merge candidates.
In some examples, the re-order process is performed only on spatial merge candidates and history motion vector prediction (HMVP) merge candidates.
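A stable partition captures the re-ordering described above: the relative order within the enabled and disabled groups is preserved, and only the grouping changes. The candidate representation and function name are assumptions; restricting the re-order to spatial or HMVP candidates would amount to applying it to a prefix of the list.

```python
# Sketch of re-ordering merge candidates by alternative half-pel interpolation
# filter usage. The partition is stable: within each group, the original
# candidate order is kept.

def reorder_merge_list(candidates, enabled_first=True):
    """candidates: list of dicts with a "UseAltHpelIf" entry (0/1).
    Moves candidates with the filter enabled before (or after) the rest."""
    enabled = [c for c in candidates if c["UseAltHpelIf"]]
    disabled = [c for c in candidates if not c["UseAltHpelIf"]]
    return enabled + disabled if enabled_first else disabled + enabled
```

The `enabled_first` switch models the two fixed orders in the examples above; an adaptive variant could choose it from decoded information of neighboring blocks.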
In some examples, the conversion generates the video block of video from the bitstream representation.
In some examples, the conversion generates the bitstream representation from the video block of video.
In the above solutions, performing the conversion includes using the results of a previous decision step (e.g., using or not using certain coding or decoding steps) during the encoding or decoding operation to arrive at the conversion results.
The disclosed and other solutions, examples, embodiments, modules and the functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) . A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random-access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any subject matter or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular techniques. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims (62)

  1. A method for processing video, comprising:
    determining, for a conversion between a video block of a video and a bitstream representation of the video block, whether an alternative luma half-pel interpolation filter is applied to all pairwise average candidates based on a flag which is used to represent whether alternative luma half-pel interpolation filter is employed or not; and
    performing the conversion based on the determination.
  2. The method of claim 1, wherein when the flag has a first value, the alternative luma half-pel interpolation filter is applied to all pairwise average candidates.
  3. The method of claim 2, wherein when the flag has a second value different from the first value, a default interpolation filter is applied to all pairwise average candidates.
  4. The method of claim 3, wherein the first value is 1 and the second value is 0, and the flag is UseAltHpelIf flag.
  5. The method of any of claims 1-4, further comprising:
    determining whether half-pel motion vector interpolation is applied to all pairwise average candidates based on a second flag which is used to represent whether half-pel motion vector interpolation is employed or not.
  6. A method for processing video, comprising:
    determining, for a conversion between a video block of the video and a bitstream representation of the video block, whether to enable or disable a coding tool for a pairwise average candidate based on information of associated candidates used for generating the pairwise average candidate; and
    performing the conversion based on the determination.
  7. The method of claim 6, wherein the coding tool including at least one of Bi-prediction with CU-level weight (BCW) and alternative half-pel interpolation filter.
  8. The method of claim 6 or 7, wherein different pairwise average candidates enable or disable the usage of the coding tool on the fly.
  9. The method of any of claims 6-8, wherein GBiIdx of the pairwise average candidate depends on the GBiIdx of only one candidate of a pair of candidates used for generating the pairwise average candidate, where GBiIdx is used to represent the generalized bi-prediction (GBi) index that indicates the used weighting factor in BCW.
  10. The method of claim 9, wherein GBiIdx of the pairwise average candidate is set equal to the GBiIdx of the first candidate of the pair of candidates.
  11. The method of claim 9, wherein GBiIdx of the pairwise average candidate is set equal to the GBiIdx of the second candidate of the pair of candidates.
  12. The method of any of claims 6-8, wherein GBiIdx of the pairwise average candidate, which is denoted as GBiIdxC, is derived as a function of the GBiIdxs of a pair of candidates used for generating the pairwise average candidate, which are denoted as GBiIdx1 and GBiIdx2 respectively.
  13. The method of claim 12, wherein GBiIdxC is set equal to the smaller one of GBiIdx1 and GBiIdx2.
  14. The method of claim 12, wherein GBiIdxC is set equal to the larger one of GBiIdx1 and GBiIdx2.
  15. The method of claim 12, wherein GBiIdxC is set equal to the mean of GBiIdx1 and GBiIdx2.
  16. The method of claim 12, wherein BCW is disabled for the pairwise average candidate when GBiIdx1 is not equal to GBiIdx2.
  17. The method of claim 12, wherein GBiIdxC = (GBiIdx1 == GBiIdx2 ? GBiIdx1 : GBI_DEFAULT), where GBI_DEFAULT indicates that equal weights for two prediction blocks are used.
  18. The method of any of claims 9-17, wherein the UseAltHpelIf flag of the pairwise average candidate, which is used to represent whether an alternative luma half-pel interpolation filter is employed, depends on the two UseAltHpelIf flags of the pair of candidates.
  19. The method of claim 18, wherein the UseAltHpelIf flag of the pairwise average candidate is set equal to 1 if the UseAltHpelIf flags of both of the pair of candidates are equal to 1.
  20. The method of claim 18, wherein the UseAltHpelIf flag of the pairwise average candidate is set equal to 1 if the UseAltHpelIf flag of one of the pair of candidates is equal to 1.
  21. A method for processing video, comprising:
    determining, for a conversion between a video block of a video and a bitstream representation of the video block, whether to enable or disable unequal weights in bi-prediction weight for default motion candidates in a merge candidate list associated with the video block based on one or more conditions; and
    performing the conversion based on the determination.
  22. The method of claim 21, wherein the one or more conditions include index of the default motion candidates.
  23. The method of claim 21, wherein the one or more conditions include slice or picture type.
  24. The method of claim 21, wherein the one or more conditions include all or part of the existing merge candidates in the merge candidate list before the default motion candidates are added.
  25. The method of claim 21, wherein the one or more conditions include usage of unequal weights from spatial and/or temporal neighboring adjacent or non-adjacent blocks.
  26. A method for processing video, comprising:
    deriving, for a conversion between a video block of a video and a bitstream representation of the video block, a merge candidate list associated with the video block;
    adding one or more half-pel motion vector (MV) candidates with motion vectors pointing to half-pel to the merge candidate list; and
    performing the conversion based on the merge candidate list.
  27. The method of claim 26, wherein the one or more half-pel MV candidates are added to the merge candidate list right after derivation of pairwise merge candidates and/or combined bi-predictive merge candidates.
  28. The method of claim 26, wherein the one or more half-pel MV candidates are added to the merge candidate list right after derivation of history motion vector prediction (HMVP) merge candidates.
  29. The method of any of claims 26-28, wherein whether to add one or more half-pel MV candidates or one or more zero MV candidates to the merge candidate list is changed from block to block based on decoded information from previously coded blocks and/or based on the merge candidates already in the merge candidate list before these candidates are added.
  30. The method of any of claims 26-28, wherein one or more half-pel MV candidates and one or more zero MV candidates are both added to the merge candidate list.
  31. The method of claim 30, wherein the one or more half-pel MV candidates and the one or more zero MV candidates are added to the merge candidate list in an interleaved manner.
  32. The method of claim 30, wherein the one or more half-pel MV candidates are added before all zero MV candidates.
  33. The method of claim 30, wherein the one or more half-pel MV candidates are added after all zero MV candidates.
  34. A method for processing video, comprising:
    determining, for a conversion between a video block of a video and a bitstream representation of the video block, whether to enable or disable half-pel interpolation filter for default motion candidates in a merge candidate list associated with the video block based on one or more conditions; and
    performing the conversion based on the determination.
  35. The method of claim 34, wherein the one or more conditions include index of the default motion candidates.
  36. The method of claim 34, wherein the one or more conditions include slice or picture type.
  37. The method of claim 34, wherein the one or more conditions include all or part of the existing merge candidates in the merge candidate list before the default motion candidates are added.
  38. The method of claim 34, wherein the one or more conditions include usage of unequal weights from spatial and/or temporal neighboring adjacent or non-adjacent blocks.
  39. A method for processing video, comprising:
    determining, for a conversion between a video block of the video and a bitstream representation of the video block, whether to enable or disable a coding tool for a pairwise average candidate based on information of all or selected motion candidates in a merge candidate list before adding the pairwise average candidate to the merge candidate list; and
    performing the conversion based on the determination.
  40. The method of claim 39, wherein the selected motion candidates are those spatial merge candidates in the merge candidate list.
  41. The method of claim 39, wherein the selected motion candidates are the history motion vector prediction (HMVP) candidates in the merge candidate list.
  42. The method of claim 39, wherein the selected motion candidates are one or multiple HMVP candidates in the HMVP table.
  43. The method of any of claims 39-42, wherein the UseAltHpelIf flag and/or the Bi-prediction with CU-level weight (BCW) index for the pairwise average candidate depends on a function of the information associated with the selected motion candidates.
  44. The method of claim 43, wherein if more candidates have UseAltHpelIf equal to 1 than the remaining candidates, the UseAltHpelIf for the pairwise average candidate is set to 1 or 0.
  45. The method of any of claims 1-44, wherein whether to enable/disable a tool depends on the usage of the tool from spatial or temporal neighboring adjacent or non-adjacent blocks.
  46. A method for processing video, comprising:
    determining, for a conversion between a video block of a video and a bitstream representation of the video block, a value of a flag which is used to represent whether alternative luma half-pel interpolation filter is employed or not based on motion vector (MV) of the video block; and
    performing the conversion based on the determination.
  47. The method of claim 46, wherein the flag is UseAltHpelIf flag.
  48. The method of claim 47, wherein UseAltHpelIf flag is set equal to zero if no MV of the video block refers to a horizontal and/or vertical half-pixel position.
  49. The method of claim 47, wherein UseAltHpelIf flag is set equal to zero if no MV of the video block refers to a horizontal and/or vertical half-pixel position when the video block is coded with a pairwise average candidate.
  50. The method of any of claims 1-49, wherein the method is applied to one or more new kinds of motion candidates that are derived from existing candidates added before the new kinds of motion candidates in a merge candidate list.
  51. A method for processing video, comprising:
    performing, for a conversion between a video block of a video and a bitstream representation of the video block, a re-order process on motion candidates in a merge candidate list associated with the video block based on usage of alternative half-pel interpolation filter; and
    performing the conversion based on the re-ordered merge candidate list.
  52. The method of claim 51, wherein candidates with the alternative half-pel interpolation filter enabled are put before those with alternative half-pel interpolation filter disabled.
  53. The method of claim 51, wherein candidates with the alternative half-pel interpolation filter enabled are put after those with alternative half-pel interpolation filter disabled.
  54. The method of claim 51, wherein order of candidates with the alternative half-pel interpolation filter enabled and disabled is adaptively changed based on decoded information including usage of the alternative half-pel interpolation filter in neighboring adjacent or non-adjacent blocks.
  55. The method of any of claims 51-54, wherein the re-order process is only performed to spatial merge candidates.
  56. The method of any of claims 51-54, wherein the re-order process is only performed to spatial merge candidates and history motion vector prediction (HMVP) merge candidates.
  57. The method of any of claims 1-56, wherein the conversion generates the video block of video from the bitstream representation.
  58. The method of any one of claims 1-56, wherein the conversion generates the bitstream representation from the video block of video.
  59. An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of claims 1 to 58.
  60. A computer program product stored on a non-transitory computer readable medium, the computer program product including program code for carrying out the method in any one of claims 1 to 58.
  61. A non-transitory computer readable medium having recorded thereon program code for carrying out the method in any one of claims 1 to 58.
  62. A non-transitory computer-readable recording medium storing a bitstream representation which is generated by a method in any one of claims 1 to 58 performed by a video processing apparatus.
PCT/CN2020/113024 2019-09-02 2020-09-02 Improvement of merge candidates WO2021043166A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202080061697.7A CN114365494A (en) 2019-09-02 2020-09-02 Improvement of MERGE candidates

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019103963 2019-09-02
CNPCT/CN2019/103963 2019-09-02

Publications (1)

Publication Number Publication Date
WO2021043166A1 true WO2021043166A1 (en) 2021-03-11

Family

ID=74852437

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/113024 WO2021043166A1 (en) 2019-09-02 2020-09-02 Improvement of merge candidates

Country Status (2)

Country Link
CN (1) CN114365494A (en)
WO (1) WO2021043166A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023132556A1 (en) * 2022-01-04 2023-07-13 엘지전자 주식회사 Image encoding/decoding method and device, and recording medium on which bitstream is stored

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1964493A (en) * 2006-12-01 2007-05-16 清华大学 A motion compensation interpolation method for H.264 decoder
US20160191946A1 (en) * 2014-12-31 2016-06-30 Microsoft Technology Licensing, Llc Computationally efficient motion estimation
US20170230673A1 (en) * 2016-02-05 2017-08-10 Blackberry Limited Rolling intra prediction for image and video coding

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A ROBERT , T POIRIER , F LE LEANNEE , F GALPIN : "non-CE4: BCW for pairwise candidate", 15. JVET MEETING; 20190703 - 20190712; GOTHENBURG; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 )​, no. JVET-O0516, 25 June 2019 (2019-06-25), pages 1 - 7, XP030219786 *
A. HENKEL (FRAUNHOFER), I. ZUPANCIC (FRAUNHOFER), B. BROSS (FRAUNHOFER), M. WINKEN (FRAUNHOFER), H. SCHWARZ (FRAUNHOFER), D. MARPE: "CE4: Switchable interpolation filter", 15. JVET MEETING; 20190703 - 20190712; GOTHENBURG; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 9 July 2019 (2019-07-09), XP030218557 *
N. ZHANG (BYTEDANCE), H. LIU (BYTEDANCE), L. ZHANG (BYTEDANCE), K. ZHANG (BYTEDANCE), Y. WANG (BYTEDANCE): "Non-CE4: Usage of half-pel switchable interpolation filter for pairwise candidate", 16. JVET MEETING; 20191001 - 20191011; GENEVA; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), no. JVET-P0461, 25 September 2019 (2019-09-25), XP030217376 *
Y.-L. HSIAO, T.-D. CHUANG, C.-Y. CHEN, C.-W. HSU, Y.-W. HUANG, S.-M. LEI (MEDIATEK): "CE4.4.12: Pairwise average candidates", 12. JVET MEETING; 20181003 - 20181012; MACAO; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 1 October 2018 (2018-10-01), XP030194130 *

Also Published As

Publication number Publication date
CN114365494A (en) 2022-04-15

Similar Documents

Publication Publication Date Title
US11323697B2 (en) Using interpolation filters for history based motion vector prediction
WO2020103877A1 (en) Coding and decoding of video coding modes
WO2020200269A1 (en) Decoder side motion vector derivation
WO2020098808A1 (en) Construction of merge with motion vector difference candidates
WO2020221258A1 (en) Symmetric motion vector difference coding
WO2020216381A1 (en) Restrictions on motion vector difference
WO2020125750A1 (en) Motion vector precision in merge with motion vector difference mode
US11503288B2 (en) Selective use of alternative interpolation filters in video processing
WO2020221256A1 (en) Symmetric motion vector difference coding
WO2021129685A1 (en) Spatial-temporal motion vector prediction
WO2021204190A1 (en) Motion vector difference for block with geometric partition
EP4011077A1 (en) Weighted sample bi-prediction in video coding
WO2020259681A1 (en) Restrictions on motion vector difference
WO2021043166A1 (en) Improvement of merge candidates
WO2021036990A1 (en) Initialization of history-based motion vector predictor tables

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20861792

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22/07/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20861792

Country of ref document: EP

Kind code of ref document: A1