WO2020003273A1 - Extended merge mode - Google Patents

Extended merge mode

Info

Publication number
WO2020003273A1
Authority
WO
WIPO (PCT)
Prior art keywords
candidates
list
candidate
motion
emm
Prior art date
Application number
PCT/IB2019/055579
Other languages
English (en)
French (fr)
Inventor
Hongbin Liu
Li Zhang
Kai Zhang
Yue Wang
Original Assignee
Beijing Bytedance Network Technology Co., Ltd.
Bytedance Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Bytedance Network Technology Co., Ltd. and Bytedance Inc.
Publication of WO2020003273A1 publication Critical patent/WO2020003273A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 … using adaptive coding
    • H04N19/102 … characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/134 … characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/169 … characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 … the unit being an image region, e.g. an object
    • H04N19/176 … the region being a block, e.g. a macroblock
    • H04N19/184 … the unit being bits, e.g. of the compressed video stream
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 … involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H04N19/517 Processing of motion vectors by encoding
    • H04N19/52 Processing of motion vectors by encoding by predictive encoding
    • H04N19/521 Processing of motion vectors for estimating the reliability of the determined motion vectors or motion vector field, e.g. for smoothing the motion vector field or for correcting motion vectors
    • H04N19/523 Motion estimation or motion compensation with sub-pixel accuracy
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • This document is related to video coding and decoding technologies.
  • Digital video accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
  • The disclosed techniques may be used by video decoder or encoder embodiments to apply an extended merge mode in which some motion information is inherited while other motion information is signaled.
  • In one example aspect, a method of video processing includes constructing a list of extended merge mode (EMM) candidates; determining, based on a first set of bits in a bitstream representation of a current block, motion information inherited by the current block from the list; determining, based on a second set of bits in the bitstream representation, signaled motion information of the current block; and performing, based on the list of EMM candidates and the signaled motion information, a conversion between the current block and the bitstream representation.
  • In another example aspect, the above-described method may be implemented by a video decoder apparatus that comprises a processor.
  • In yet another example aspect, the above-described method may be implemented by a video encoder apparatus comprising a processor for decoding encoded video during the video encoding process.
  • These methods may be embodied in the form of processor-executable instructions and stored on a computer-readable program medium.
  • FIG. 1 shows an example of a derivation process for merge candidates list construction.
  • FIG. 2 shows example positions of spatial merge candidates.
  • FIG. 3 shows examples of candidate pairs considered for redundancy check of spatial merge candidates.
  • FIG. 4A and FIG. 4B show example positions for the second PU of N×2N and 2N×N partitions.
  • FIG. 5 is an example illustration of motion vector scaling for the temporal merge candidate.
  • FIG. 6 shows examples of candidate positions for temporal merge candidates C0 and C1.
  • FIG. 7 shows an example of combined bi-predictive merge candidate.
  • FIG. 8 shows an example derivation process for motion vector prediction candidates.
  • FIG. 9 shows an example illustration of motion vector scaling for spatial motion vector candidate.
  • FIG. 10 shows an example of neighboring samples used for deriving IC parameters.
  • FIG. 11 shows an example of a simplified affine motion model.
  • FIG. 12 shows an example of affine MVF per sub-block.
  • FIG. 13 shows an example of MVP for AF_INTER.
  • FIGS. 14A and 14B show examples of candidates for AF_MERGE.
  • FIG. 15 shows an example of bilateral matching.
  • FIG. 16 shows an example of template matching.
  • FIG. 17 shows an example of unilateral ME in FRUC.
  • FIG. 18 shows an example of DMVR based on bilateral template matching.
  • FIG. 19 shows examples of non-adjacent merge candidates.
  • FIG. 20 shows examples of non-adjacent merge candidates.
  • FIG. 21 shows examples of non-adjacent merge candidates.
  • FIG. 22 and FIG. 23 depict examples of the ultimate motion vector expression (UMVE) technique in video coding.
  • FIG. 24 is a flowchart for an example of a video bitstream processing method.
  • FIG. 25 is a block diagram of an example of a video processing apparatus.
  • the present document provides various techniques that can be used by a decoder of video bitstreams to improve the quality of decompressed or decoded digital video. Furthermore, a video encoder may also implement these techniques during the process of encoding in order to reconstruct decoded frames used for further encoding.
  • Section headings are used in the present document for ease of understanding and do not limit the embodiments and techniques to the corresponding sections. As such, embodiments from one section can be combined with embodiments from other sections.
  • Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards.
  • The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/HEVC standards. Since H.262, video coding standards have been based on the hybrid video coding structure, wherein temporal prediction plus transform coding is utilized.
  • The Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM).
  • Each inter-predicted PU has motion parameters for one or two reference picture lists.
  • Motion parameters include a motion vector and a reference picture index. Usage of one of the two reference picture lists may also be signaled using inter_pred_idc.
  • Motion vectors may be explicitly coded as deltas relative to predictors.
  • A merge mode is specified whereby the motion parameters for the current PU are obtained from neighboring PUs, including spatial and temporal candidates.
  • The merge mode can be applied to any inter-predicted PU, not only to skip mode.
  • The alternative to merge mode is the explicit transmission of motion parameters, where the motion vector (more precisely, the motion vector difference relative to a motion vector predictor), the corresponding reference picture index for each reference picture list, and the reference picture list usage are signaled explicitly for each PU.
  • Such a mode is named advanced motion vector prediction (AMVP) in this document.
  • When signaling indicates that one of the two reference picture lists is to be used, the PU is produced from one block of samples. This is referred to as 'uni-prediction'. Uni-prediction is available both for P-slices and B-slices.
  • When signaling indicates that both of the reference picture lists are to be used, the PU is produced from two blocks of samples. This is referred to as 'bi-prediction'. Bi-prediction is available for B-slices only.
  • Step 1.1 Spatial candidates derivation
  • Step 1.2 Redundancy check for spatial candidates
  • FIG. 4A-4B depict the second PU for the case of Nx2N and 2NxN, respectively.
  • For the second PU of an N×2N partition, the candidate at position A1 is not considered for list construction; adding this candidate would lead to two prediction units having the same motion information, which is redundant with simply having one PU in the coding unit.
  • Similarly, position B1 is not considered when the current PU is partitioned as 2N×N.
  • In the derivation of the temporal merge candidate, a scaled motion vector is derived based on the co-located PU belonging to the picture which has the smallest POC difference with the current picture within the given reference picture list.
  • The reference picture list to be used for derivation of the co-located PU is explicitly signalled in the slice header.
  • The scaled motion vector for the temporal merge candidate is obtained as illustrated by the dotted line in FIG. 5.
  • tb is defined as the POC difference between the reference picture of the current picture and the current picture.
  • td is defined as the POC difference between the reference picture of the co-located picture and the co-located picture.
  • The reference picture index of the temporal merge candidate is set equal to zero. A sketch of this scaling is given below.
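  • As an illustration, the following Python sketch mirrors this POC-distance-based scaling using the fixed-point arithmetic of the HEVC specification; the clip ranges and rounding constants follow HEVC, but the function is a simplified model rather than the normative process.

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def scale_mv(mv, tb, td):
    """Scale an MV by the POC-distance ratio tb/td (HEVC-style fixed point).

    tb: POC(current) - POC(reference of current picture)
    td: POC(co-located) - POC(reference of co-located PU)
    """
    tb = clip3(-128, 127, tb)
    td = clip3(-128, 127, td)
    tx = int((16384 + (abs(td) >> 1)) / td)        # truncating division, as in C
    dist_scale = clip3(-4096, 4095, (tb * tx + 32) >> 6)

    def scale_component(v):
        s = dist_scale * v
        return clip3(-32768, 32767, ((abs(s) + 127) >> 8) * (1 if s >= 0 else -1))

    return (scale_component(mv[0]), scale_component(mv[1]))

# Example: halving the temporal distance halves the vector.
assert scale_mv((12, -8), tb=2, td=4) == (6, -4)
```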
  • The position for the temporal candidate is selected between candidates C0 and C1, as depicted in FIG. 6. If the PU at position C0 is not available, is intra coded, or is outside of the current CTU row, position C1 is used. Otherwise, position C0 is used in the derivation of the temporal merge candidate.
  • Besides spatial and temporal merge candidates, there are two additional types of merge candidates: combined bi-predictive merge candidates and zero merge candidates.
  • Combined bi-predictive merge candidates are generated by utilizing spatial and temporal merge candidates.
  • Combined bi-predictive merge candidates are used for B-slices only.
  • The combined bi-predictive candidates are generated by combining the first reference picture list motion parameters of an initial candidate with the second reference picture list motion parameters of another. If these two tuples provide different motion hypotheses, they will form a new bi-predictive candidate, as illustrated in FIG. 7.
  • Zero motion candidates are inserted to fill the remaining entries in the merge candidates list and thereby reach the MaxNumMergeCand capacity. These candidates have zero spatial displacement and a reference picture index which starts from zero and increases every time a new zero motion candidate is added to the list. The number of reference frames used by these candidates is one and two for uni- and bi-directional prediction, respectively. Finally, no redundancy check is performed on these candidates. A minimal sketch of this padding follows.
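  • The sketch below shows only the uni-directional case; the dictionary-based candidate representation is a placeholder, not an actual codec data structure.

```python
def pad_with_zero_candidates(merge_list, max_num, num_ref):
    """Pad the merge list with zero-MV candidates; the reference index starts
    at zero and increases with each added candidate, capped at num_ref - 1."""
    ref_idx = 0
    while len(merge_list) < max_num:
        merge_list.append({"mv": (0, 0), "ref_idx": min(ref_idx, num_ref - 1)})
        ref_idx += 1
    return merge_list
```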
  • To speed up the encoding process, motion estimation can be performed in parallel, whereby the motion vectors for all prediction units inside a given region are derived simultaneously.
  • HEVC defines the motion estimation region (MER), whose size is signalled in the picture parameter set using the log2_parallel_merge_level_minus2 syntax element.
  • AMVP exploits the spatio-temporal correlation of a motion vector with neighbouring PUs, which is used for explicit transmission of motion parameters.
  • A motion vector candidate list is constructed by first checking the availability of left and above spatially neighbouring PU positions and of temporally neighbouring PU positions, removing redundant candidates, and adding zero vectors to make the candidate list a constant length. Then, the encoder can select the best predictor from the candidate list and transmit the corresponding index indicating the chosen candidate. As with merge index signalling, the index of the best motion vector candidate is encoded using truncated unary binarization. The maximum value to be encoded in this case is 2 (see FIG. 8). In the following sections, details about the derivation process of motion vector prediction candidates are provided.
  • FIG. 8 summarizes the derivation process for motion vector prediction candidates.
  • Two types of motion vector candidates are considered: spatial motion vector candidates and temporal motion vector candidates.
  • For spatial motion vector candidate derivation, two motion vector candidates are eventually derived based on the motion vectors of PUs located in the five different positions depicted in FIG. 2.
  • For temporal motion vector candidate derivation, one motion vector candidate is selected from two candidates, which are derived based on two different co-located positions. After the first list of spatio-temporal candidates is made, duplicated motion vector candidates in the list are removed. If the number of potential candidates is larger than two, motion vector candidates whose reference picture index within the associated reference picture list is larger than 1 are removed from the list. If the number of spatio-temporal motion vector candidates is smaller than two, additional zero motion vector candidates are added to the list. The list finalization is sketched below.
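  • A simplified sketch of the pruning and padding described above; candidates are plain MV tuples, and the reference-index-based removal rule is omitted for brevity.

```python
def finalize_amvp_list(candidates, list_size=2):
    """Remove duplicated MV candidates, trim to the fixed list size, and pad
    with zero vectors, as in the AMVP list construction described above."""
    pruned = []
    for mv in candidates:
        if mv not in pruned:                 # drop duplicates
            pruned.append(mv)
    pruned = pruned[:list_size]              # keep at most list_size entries
    while len(pruned) < list_size:           # pad with zero MVs
        pruned.append((0, 0))
    return pruned

assert finalize_amvp_list([(3, 1), (3, 1)]) == [(3, 1), (0, 0)]
```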
  • The no-spatial-scaling cases are checked first, followed by the spatial scaling cases. Spatial scaling is considered when the POC differs between the reference picture of the neighboring PU and that of the current PU, regardless of the reference picture list. If all PUs of the left candidates are not available or are intra coded, scaling for the above motion vector is allowed to help parallel derivation of left and above MV candidates. Otherwise, spatial scaling is not allowed for the above motion vector.
  • In a spatial scaling process, the motion vector of the neighbouring PU is scaled in a similar manner as for temporal scaling, as depicted in FIG. 9.
  • The main difference is that the reference picture list and index of the current PU are given as input; the actual scaling process is the same as that of temporal scaling.
  • Motion vector differences (between the motion vector and the predicted motion vector of a PU) are signalled in units of quarter luma samples when use_integer_mv_flag is equal to 0 in the slice header.
  • In the JEM, a locally adaptive motion vector resolution (LAMVR) is introduced.
  • The MVD can be coded in units of quarter luma samples, integer luma samples, or four luma samples.
  • The MVD resolution is controlled at the coding unit (CU) level, and MVD resolution flags are conditionally signalled for each CU that has at least one non-zero MVD component.
  • A first flag is signalled to indicate whether quarter luma sample MV precision is used in the CU.
  • When the first flag (equal to 1) indicates that quarter luma sample MV precision is not used, another flag is signalled to indicate whether integer luma sample MV precision or four luma sample MV precision is used.
  • Otherwise, quarter luma sample MV resolution is used for the CU.
  • When a CU uses integer luma sample MV precision or four luma sample MV precision, the MVPs in the AMVP candidate list for the CU are rounded to the corresponding precision. A sketch of this flag parsing follows.
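  • A hedged sketch of the two-flag signalling described above; read_flag is a hypothetical callable standing in for the entropy decoder.

```python
def parse_mvd_resolution(read_flag):
    """Decode the CU-level MVD resolution from up to two flags; read_flag is a
    hypothetical callable returning the next decoded bin (0 or 1)."""
    if read_flag() == 0:        # first flag == 0: quarter luma sample precision
        return "quarter"
    if read_flag() == 0:        # second flag == 0: integer luma sample precision
        return "integer"
    return "four"               # otherwise: four luma sample precision
```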
  • At the encoder, CU-level RD checks are used to determine which MVD resolution is to be used for a CU; that is, the CU-level RD check is performed three times, once for each MVD resolution.
  • To accelerate the encoder, the following encoding schemes are applied in the JEM.
  • During the RD check of a CU with normal quarter luma sample MVD resolution, the motion information of the current CU (at integer luma sample accuracy) is stored.
  • The stored motion information (after rounding) is used as the starting point for further small-range motion vector refinement during the RD check for the same CU with integer luma sample and four luma sample MVD resolutions, so that the time-consuming motion estimation process is not duplicated three times.
  • In HEVC, motion vector accuracy is one-quarter pel (one-quarter luma sample and one-eighth chroma sample for 4:2:0 video).
  • In the JEM, the accuracy for the internal motion vector storage and the merge candidate increases to 1/16 pel.
  • The higher motion vector accuracy (1/16 pel) is used in motion compensated inter prediction for CUs coded with skip/merge mode.
  • SHVC upsampling interpolation filters, which have the same filter length and normalization factor as HEVC motion compensation interpolation filters, are used as motion compensation interpolation filters for the additional fractional-pel positions.
  • Since the chroma component motion vector accuracy is 1/32 sample in the JEM, the additional interpolation filters for the 1/32-pel fractional positions are derived by using the average of the filters of the two neighbouring 1/16-pel fractional positions.
  • Local Illumination Compensation (LIC) is based on a linear model for illumination changes, using a scaling factor a and an offset b, and is enabled or disabled adaptively for each inter-mode coded coding unit (CU).
  • A least squares error method is employed to derive the parameters a and b by using the neighbouring samples of the current CU and their corresponding reference samples. More specifically, as illustrated in FIG. 10, the subsampled (2:1 subsampling) neighbouring samples of the CU and the corresponding samples (identified by motion information of the current CU or sub-CU) in the reference picture are used. The IC parameters are derived and applied for each prediction direction separately. The fit is sketched below.
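  • The following floating-point sketch illustrates the least-squares fit of the LIC model; a real codec would use a fixed-point derivation, so this is a model of the idea rather than the JEM's exact arithmetic.

```python
def derive_lic_params(cur_neigh, ref_neigh):
    """Least-squares fit of cur ≈ a * ref + b over the (subsampled)
    neighbouring sample pairs of the current CU and its reference block."""
    n = len(cur_neigh)
    sx = sum(ref_neigh)
    sy = sum(cur_neigh)
    sxx = sum(r * r for r in ref_neigh)
    sxy = sum(r * c for r, c in zip(ref_neigh, cur_neigh))
    denom = n * sxx - sx * sx
    if denom == 0:
        return 1.0, 0.0                      # degenerate case: identity model
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b
```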
  • When a CU is coded with merge mode, the LIC flag is copied from neighbouring blocks, in a way similar to motion information copy in merge mode; otherwise, an LIC flag is signalled for the CU to indicate whether LIC applies or not.
  • When LIC is enabled for a picture, an additional CU-level RD check is needed to determine whether LIC is applied or not for a CU.
  • When LIC is enabled for a CU, the mean-removed sum of absolute difference (MR-SAD) and the mean-removed sum of absolute Hadamard-transformed difference (MR-SATD) are used, instead of SAD and SATD, for integer-pel motion search and fractional-pel motion search, respectively.
  • LIC is disabled for the entire picture when there is no obvious illumination change between a current picture and its reference pictures. To identify this situation, histograms of a current picture and every reference picture of the current picture are calculated at the encoder. If the histogram difference between the current picture and every reference picture of the current picture is smaller than a given threshold, LIC is disabled for the current picture; otherwise, LIC is enabled for the current picture.

2.2.4 Affine motion compensation prediction
  • To simplify the motion compensation prediction, sub-block based affine transform prediction is applied.
  • The sub-block size M×N is derived as in Equation 2, where MvPre is the motion vector fractional accuracy (1/16 in the JEM) and (v2x, v2y) is the motion vector of the bottom-left control point, calculated according to Equation 1.
  • After derivation by Equation 2, M and N should be adjusted downward, if necessary, to make them divisors of w and h, respectively.
  • To derive the motion vector of each M×N sub-block, the motion vector of the center sample of each sub-block is calculated according to Equation 1 and rounded to 1/16 fractional accuracy. Then the motion compensation interpolation filters are applied to generate the prediction of each sub-block with the derived motion vector. A sketch of this per-sub-block derivation follows.
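  • The sketch below evaluates the 4-parameter affine model (Equation 1) at each sub-block centre and rounds to 1/16-pel accuracy; the control-point MVs are (x, y) tuples in luma-sample units, and the fixed sub-block size is a simplifying assumption in place of Equation 2.

```python
def affine_subblock_mvs(v0, v1, w, h, sub=4):
    """Evaluate the 4-parameter affine motion model at each sub-block centre.

    v0, v1: control-point MVs of the top-left and top-right corners.
    Returns a dict mapping sub-block origin -> MV rounded to 1/16 pel.
    """
    ax = (v1[0] - v0[0]) / w                 # horizontal gradient of the model
    ay = (v1[1] - v0[1]) / w                 # vertical gradient of the model
    mvs = {}
    for y in range(0, h, sub):
        for x in range(0, w, sub):
            cx, cy = x + sub / 2.0, y + sub / 2.0
            mvx = ax * cx - ay * cy + v0[0]
            mvy = ay * cx + ax * cy + v0[1]
            # round to 1/16-pel accuracy, as in the JEM
            mvs[(x, y)] = (round(mvx * 16) / 16.0, round(mvy * 16) / 16.0)
    return mvs
```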
  • In the JEM, there are two affine motion modes: AF_INTER mode and AF_MERGE mode. An affine flag at the CU level is signalled in the bitstream to indicate whether AF_INTER mode is used.
  • v0 is selected from the motion vectors of blocks A, B, or C.
  • The motion vector from the neighbouring block is scaled according to the reference list and the relationship among the POC of the reference for the neighbouring block, the POC of the reference for the current CU, and the POC of the current CU. The approach to select v1 from the neighbouring blocks D and E is similar. If the number of candidates in the list is smaller than 2, the list is padded with motion vector pairs composed by duplicating each of the AMVP candidates. When the candidate list is larger than 2, the candidates are first sorted according to the consistency of the neighbouring motion vectors (similarity of the two motion vectors in a pair candidate), and only the first two candidates are kept.
  • An RD cost check is used to determine which motion vector pair candidate is selected as the control point motion vector prediction (CPMVP) of the current CU, and an index indicating the position of the CPMVP in the candidate list is signalled in the bitstream. After the CPMVP of the current affine CU is determined, affine motion estimation is applied and the control point motion vector (CPMV) is found. Then the difference between the CPMV and the CPMVP is signalled in the bitstream.
  • a CU When a CU is applied in AF_MERGE mode, it gets the first block coded with affine mode from the valid neighbour reconstmcted blocks. And the selection order for the candidate block is from left, above, above right, left bottom to above left as shown in FIG. 14A. If the neighbour left bottom block A is coded in affine mode as shown in FIG. 14B, the motion vectors v 2 , v 3 and v 4 of the top left corner, above right comer and left bottom comer of the CU which contains the block A are derived. And the motion vector v 0 of the top left comer on the current CU is calculated according to v 2 , v 3 and v 4 . Secondly, the motion vector v 4 of the above right of the current CU is calculated.
  • Pattern matched motion vector derivation (PMMVD) mode is a special merge mode based on Frame-Rate Up Conversion (FRUC) techniques. With this mode, motion information of a block is not signalled but derived at the decoder side.
  • A FRUC flag is signalled for a CU when its merge flag is true.
  • When the FRUC flag is false, a merge index is signalled and the regular merge mode is used.
  • When the FRUC flag is true, an additional FRUC mode flag is signalled to indicate which method (bilateral matching or template matching) is to be used to derive motion information for the block.
  • The decision on whether to use FRUC merge mode for a CU is based on RD cost selection, as done for normal merge candidates. That is, the two matching modes (bilateral matching and template matching) are both checked for a CU by using RD cost selection. The one leading to the minimal cost is further compared to other CU modes. If a FRUC matching mode is the most efficient one, the FRUC flag is set to true for the CU and the related matching mode is used.
  • The motion derivation process in FRUC merge mode has two steps.
  • A CU-level motion search is performed first, followed by a sub-CU level motion refinement.
  • At the CU level, an initial motion vector is derived for the whole CU based on bilateral matching or template matching.
  • First, a list of MV candidates is generated, and the candidate which leads to the minimum matching cost is selected as the starting point for further CU-level refinement.
  • Then a local search based on bilateral matching or template matching around the starting point is performed, and the MV that results in the minimum matching cost is taken as the MV for the whole CU.
  • Subsequently, the motion information is further refined at the sub-CU level with the derived CU motion vectors as the starting points.
  • For example, the following derivation process is performed for the motion information derivation of a W×H CU.
  • At the first stage, the MV for the whole W×H CU is derived.
  • At the second stage, the CU is further split into M×M sub-CUs.
  • The value of M is calculated as in Equation (3), where D is a predefined splitting depth which is set to 3 by default in the JEM.
  • Then the MV for each sub-CU is derived.
  • Bilateral matching is used to derive motion information of the current CU by finding the closest match between two blocks along the motion trajectory of the current CU in two different reference pictures.
  • Under the assumption of a continuous motion trajectory, the motion vectors MV0 and MV1 pointing to the two reference blocks shall be proportional to the temporal distances, i.e., TD0 and TD1, between the current picture and the two reference pictures.
  • When the current picture is temporally between the two reference pictures and the temporal distances from the current picture to the two reference pictures are the same, the bilateral matching becomes a mirror-based bi-directional MV.
  • Template matching is used to derive motion information of the current CU by finding the closest match between a template (top and/or left neighbouring blocks of the current CU) in the current picture and a block (same size as the template) in a reference picture. Besides the aforementioned FRUC merge mode, template matching is also applied to AMVP mode.
  • In the JEM, AMVP has two candidates. With the template matching method, a new candidate is derived. If the newly derived candidate from template matching is different from the first existing AMVP candidate, it is inserted at the very beginning of the AMVP candidate list and then the list size is set to two (meaning the second existing AMVP candidate is removed).
  • When applied to AMVP mode, only the CU-level search is applied.
  • The MV candidate set at the CU level consists of the following.
  • In bilateral matching, each valid MV of a merge candidate is used as an input to generate an MV pair under the assumption of bilateral matching.
  • For example, suppose one valid MV of a merge candidate is (MVa, refa) in reference list A.
  • Then the reference picture refb of its paired bilateral MV is found in the other reference list B, so that refa and refb are temporally on different sides of the current picture. If such a refb is not available in reference list B, refb is determined as a reference which is different from refa and whose temporal distance to the current picture is the minimal one in list B.
  • After refb is determined, MVb is derived by scaling MVa based on the temporal distances between the current picture and refa and refb, respectively.
  • MVs from the interpolated MV field are also added to the CU-level candidate list. More specifically, the interpolated MVs at the positions (0, 0), (W/2, 0), (0, H/2), and (W/2, H/2) of the current CU are added.
  • The MV candidate set at the sub-CU level consists of the following.
  • The scaled MVs from reference pictures are derived as follows: all the reference pictures in both lists are traversed, and the MVs at the collocated position of the sub-CU in a reference picture are scaled to the reference of the starting CU-level MV.
  • ATMVP and STMVP candidates are limited to the first four.
  • Before coding a frame, an interpolated motion field is generated for the whole picture based on unilateral ME. The motion field may then be used later as CU-level or sub-CU-level MV candidates.
  • The motion field of each reference picture in both reference lists is traversed at the 4×4 block level.
  • The motion of the reference block is scaled to the current picture according to the temporal distances TD0 and TD1 (in the same way as the MV scaling of TMVP in HEVC), and the scaled motion is assigned to the block in the current frame. If no scaled MV is assigned to a 4×4 block, the block's motion is marked as unavailable in the interpolated motion field.
  • The matching cost is computed slightly differently at different steps.
  • When selecting the candidate from the candidate set at the CU level, the matching cost is the sum of absolute differences (SAD) of bilateral matching or template matching.
  • After the starting MV is determined, the matching cost C of bilateral matching at the sub-CU level search is calculated as C = SAD + w * (|MVx - MVsx| + |MVy - MVsy|), where w is a weighting factor (empirically set to 4 in the JEM), and MV = (MVx, MVy) and MVs = (MVsx, MVsy) indicate the current MV and the starting MV, respectively.
  • SAD is still used as the matching cost of template matching at the sub-CU level search.
  • In FRUC mode, the MV is derived by using luma samples only. The derived motion will be used for both luma and chroma in MC inter prediction. After the MV is decided, the final MC is performed using an 8-tap interpolation filter for luma and a 4-tap interpolation filter for chroma.
  • MV refinement is a pattern-based MV search with the criterion of bilateral matching cost or template matching cost.
  • In the JEM, two search patterns are supported: an unrestricted center-biased diamond search (UCBDS) and an adaptive cross search for MV refinement at the CU level and sub-CU level, respectively.
  • For both CU-level and sub-CU-level MV refinement, the MV is directly searched at quarter luma sample MV accuracy, and this is followed by one-eighth luma sample MV refinement.
  • The search range of MV refinement for the CU and sub-CU steps is set equal to 8 luma samples.
  • In the template matching FRUC merge mode, the encoder can choose among uni-prediction from list0, uni-prediction from list1, or bi-prediction for a CU. The selection is based on the template matching cost as follows: if costBi <= factor * min(cost0, cost1), bi-prediction is used; otherwise, if cost0 <= cost1, uni-prediction from list0 is used; otherwise, uni-prediction from list1 is used.
  • Here, cost0 is the SAD of the list0 template matching, cost1 is the SAD of the list1 template matching, and costBi is the SAD of the bi-prediction template matching.
  • The value of factor is equal to 1.25, which means that the selection process is biased toward bi-prediction.
  • The inter prediction direction selection is only applied to the CU-level template matching process. The rule is sketched below.
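  • The selection rule above amounts to a few lines; cost0, cost1, and costBi are assumed to be precomputed template matching SADs.

```python
def select_inter_direction(cost0, cost1, cost_bi, factor=1.25):
    """Choose the prediction direction from template matching costs; a
    factor > 1 biases the decision toward bi-prediction."""
    if cost_bi <= factor * min(cost0, cost1):
        return "bi"
    return "list0" if cost0 <= cost1 else "list1"
```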
  • In the bi-prediction operation, for the prediction of one block region, two prediction blocks, formed using a motion vector (MV) of list0 and an MV of list1, respectively, are combined to form a single prediction signal.
  • In the decoder-side motion vector refinement (DMVR) method, the two motion vectors of the bi-prediction are further refined by a bilateral template matching process.
  • The bilateral template matching is applied in the decoder to perform a distortion-based search between a bilateral template and the reconstruction samples in the reference pictures, in order to obtain a refined MV without transmission of additional motion information.
  • In DMVR, a bilateral template is generated as the weighted combination (i.e., average) of the two prediction blocks, from the initial MV0 of list0 and MV1 of list1, respectively, as shown in FIG. 18.
  • The template matching operation consists of calculating cost measures between the generated template and the sample region (around the initial prediction block) in the reference picture. For each of the two reference pictures, the MV that yields the minimum template cost is considered as the updated MV of that list to replace the original one.
  • In the JEM, nine MV candidates are searched for each list. The nine MV candidates include the original MV and eight surrounding MVs with a one luma sample offset from the original MV in either the horizontal or vertical direction, or both.
  • Finally, the two new MVs, i.e., MV0' and MV1' as shown in FIG. 18, are used for generating the final bi-prediction results.
  • A sum of absolute differences (SAD) is used as the cost measure.
  • DMVR is applied for the merge mode of bi-prediction with one MV from a reference picture in the past and another from a reference picture in the future, without the transmission of additional syntax elements.
  • In the JEM, when LIC, affine motion, FRUC, or a sub-CU merge candidate is enabled for a CU, DMVR is not applied. The nine-point search is sketched below.
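  • A minimal sketch of the nine-point refinement; cost is a hypothetical callable returning the SAD between the bilateral template and the reference block addressed by a candidate MV.

```python
def dmvr_refine(mv_init, cost):
    """Return the MV with minimum bilateral-template cost among the original
    MV and its eight one-luma-sample neighbours."""
    candidates = [(mv_init[0] + dx, mv_init[1] + dy)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return min(candidates, key=cost)
```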
  • Tencent proposes to derive additional spatial merge candidates from positions in an outer reference area which has an offset of (-96, -96) relative to the current block.
  • Each candidate B(i, j) or C(i, j) has an offset of 16 in the vertical direction compared to its previous B or C candidate.
  • Each candidate A(i, j) or D(i, j) has an offset of 16 in the horizontal direction compared to its previous A or D candidate.
  • Each E(i, j) has an offset of 16 in both the horizontal and vertical directions compared to its previous E candidate. The candidates are checked from the inside to the outside.
  • The order of the candidates is A(i, j), B(i, j), C(i, j), D(i, j), and E(i, j).
  • These candidates are added after the TMVP candidates in the merge candidate list.
  • In another proposal, the extended spatial positions from 6 to 27, as in FIG. 21, are checked according to their numerical order after the temporal candidate.
  • All the spatial candidates are restricted to within two CTU lines.
  • Ultimate motion vector expression (UMVE), presented in J0024, can be used for either skip or direct (or merge) modes, which use the proposed motion vector expression method with neighboring motion information.
  • UMVE also builds a candidate list from neighboring motion information. Among the candidates in the list, an MV candidate is selected and is further expanded by the new motion vector expression method.
  • FIG. 22 shows an example of a UMVE search process.
  • FIG. 23 shows an example of UMVE search points.
  • UMVE provides a new motion vector expression with simplified signaling.
  • The expression method includes a starting point, a motion magnitude, and a motion direction.
  • The base candidate index defines the starting point.
  • The base candidate index indicates the best candidate among the candidates in the list.
  • The distance index is the motion magnitude information.
  • The distance index indicates a pre-defined distance from the starting point.
  • The direction index represents the direction of the MVD relative to the starting point.
  • The direction index can represent one of four directions.

3. Discussion of drawbacks in existing implementations
  • In merge mode, the motion information of a merge candidate is inherited by the current block, including the motion vector, reference pictures, prediction direction, LIC flag, etc. Only a merge index is signaled, which is efficient in many cases. However, the inherited motion information, especially the motion vector, may not be good enough.
  • In AMVP mode, all motion information is signaled, including the motion vector (i.e., MVP index and MVD), reference pictures (i.e., reference index), prediction direction, LIC flag, MVD precision, etc., which consumes many bits.
  • In UMVE, the MVD can only have a non-zero component in either the horizontal direction or the vertical direction, but not in both directions. Meanwhile, UMVE also signals the MVD information, i.e., the distance index or motion magnitude information. The expansion is sketched below.
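  • The index-to-MVD expansion can be sketched as follows. The distance and direction tables were dropped from this page, so the values below are assumptions based on the J0024 proposal; note that the resulting MVD is non-zero along exactly one axis, which is the constraint discussed above.

```python
# Assumed tables (the originals were omitted from this page).
UMVE_DISTANCES = (0.25, 0.5, 1, 2, 4, 8, 16, 32)      # in luma samples
UMVE_DIRECTIONS = ((1, 0), (-1, 0), (0, 1), (0, -1))  # +x, -x, +y, -y

def umve_mvd(distance_idx, direction_idx):
    """Expand the UMVE distance/direction indices into an MVD; exactly one
    component is non-zero, which is the limitation noted above."""
    d = UMVE_DISTANCES[distance_idx]
    sx, sy = UMVE_DIRECTIONS[direction_idx]
    return (sx * d, sy * d)
```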
4. Extended merge mode (EMM)

  • In the following, motion information refers to information such as the prediction direction, reference indices/pictures, motion vectors, LIC flag, affine flag, Intra Block Copy (IBC) flag, MVD precision, and MVD values.
  • In the extended merge mode, an EMM list is constructed, and an index is signaled to indicate which candidate's first part of motion information is inherited by the current block (e.g., PU/CU). Meanwhile, additional information (i.e., the second part of the motion information), such as the MVD, is further signaled.
  • In one example, the first part of the motion information includes all or some of the following information: prediction direction, reference pictures, motion vectors, LIC flag, MVD precision, etc.
  • The second part can be coded with predictive coding. A hypothetical decoding flow is sketched below.
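  • The following sketch illustrates the proposed inherit-then-refine flow; the bitstream reader methods and the dictionary-based candidate layout are placeholders, not syntax defined by this document.

```python
def decode_emm_block(reader, emm_list):
    """Hypothetical EMM decoding flow: inherit the first part of the motion
    information via a signalled index, then apply the signalled second part."""
    idx = reader.read_index()                # which EMM candidate to inherit
    inherited = dict(emm_list[idx])          # prediction direction, reference
                                             # pictures, MVs, LIC flag,
                                             # MVD precision, ...
    mvd = reader.read_mvd()                  # second part, e.g. the MVD
    mvx, mvy = inherited["mv"]
    inherited["mv"] = (mvx + mvd[0], mvy + mvd[1])
    return inherited
```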
  • The motion information candidate list is constructed by inserting the motion information of spatial neighboring blocks, temporal neighboring blocks, or non-adjacent blocks.
  • In one example, the candidate list is constructed in the same way as in merge mode.
  • In one example, the motion information of non-adjacent blocks is inserted into the candidate list.
  • In one example, a PU/CU-based FRUC candidate is inserted into the candidate list.
  • In one example, the MVD precision is set to 1/4 or any arbitrary valid MVD precision.
  • In one example, the LIC flag is set to false.
  • In one example, uni-directional candidates are generated from bi-directional candidates (if available) and inserted into the candidate list.
  • The LIC flag and MVD precision are copied from the corresponding bi-directional candidates.
  • L(1-X) directional candidates are generated by scaling the MV of LX directional candidates (if available). The LIC flag and MVD precision are copied from the corresponding LX directional candidates. In one example, the first entry of the L(1-X) reference picture list is selected as the reference picture for the L(1-X) direction.
  • In another example, the symmetric reference picture of the LX reference picture, if available, is selected as the reference picture for the L(1-X) direction. The uni-from-bi generation is sketched below.
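  • A sketch of the uni-from-bi candidate generation; the field names are illustrative, and the L(1-X) scaling variant is omitted for brevity.

```python
def uni_candidates_from_bi(bi_cand):
    """Split a bi-directional candidate into an L0 and an L1 uni-directional
    candidate, copying the LIC flag and MVD precision from the source."""
    common = {"lic": bi_cand["lic"], "mvd_prec": bi_cand["mvd_prec"]}
    return [
        {"dir": "L0", "mv": bi_cand["mv0"], "ref": bi_cand["ref0"], **common},
        {"dir": "L1", "mv": bi_cand["mv1"], "ref": bi_cand["ref1"], **common},
    ]
```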
  • In one example, the prediction direction is not inherited and is explicitly signaled. In this case, it is proposed to construct two or multiple motion information candidate lists.
  • For each reference picture list, a motion information candidate list is constructed, wherein the first part of the motion information (which, compared to the above examples, excludes the reference picture list index) may be inherited from one of the motion information candidate lists.
  • Here the first part of the motion information may include all or some of the following information: reference pictures, motion vectors, LIC flag, MVD precision, etc.
  • Alternatively, only one motion information candidate list is constructed, as described in the above examples; however, two indices may be signaled to indicate which candidates are inherited for each reference picture list in the bi-prediction case.
  • The proposed methods may be applied to certain block sizes/shapes, and/or certain sub-block sizes.
  • The proposed methods may be applied to certain modes, such as conventional translational motion (i.e., when affine mode is disabled).
  • FIG. 24 is a flowchart for an example method 2400 of processing a video bitstream.
  • The method 2400 may be implemented at a video decoder or a video encoder.
  • The method 2400 includes constructing (2402) a list of extended merge mode (EMM) candidates; determining (2404), based on a first set of bits in a bitstream representation of a current block, motion information inherited by the current block from the list; determining (2406), based on a second set of bits in the bitstream representation, signaled motion information of the current block; and performing (2408), based on the list of EMM candidates and the signaled motion information, a conversion between the current block and the bitstream representation.
  • A method of video processing, comprising: constructing a list of extended merge mode (EMM) candidates; determining, based on a first set of bits in a bitstream representation of a current block, motion information inherited by the current block from the list; determining, based on a second set of bits in the bitstream representation, signaled motion information of the current block; and performing, based on the list of EMM candidates and the signaled motion information, a conversion between the current block and the bitstream representation.
  • An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to implement the method in any one of examples 1 to 20.
  • JEM-7.0: https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/tags/HM-16.6-JEM-7.0.
  • FIG. 25 is a block diagram of a video processing apparatus 2500.
  • the apparatus 2500 may be used to implement one or more of the methods described herein.
  • the apparatus 2500 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on.
  • the apparatus 2500 may include one or more processors 2502, one or more memories 2504 and video processing hardware 2506.
  • the processor(s) 2502 may be configured to implement one or more methods (including, but not limited to, method 2400) described in the present document.
  • the memory (memories) 2504 may be used for storing data and code used for implementing the methods and techniques described herein.
  • the video processing hardware 2506 may be used to implement, in hardware circuitry, some techniques described in the present document.
  • the video coding methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to FIG. 25.
  • the disclosed and other solutions, examples, embodiments, modules and the functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them.
  • The disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus.
  • The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
  • data processing apparatus encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random-access memory or both.
  • The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical, or optical disks.
  • a computer need not have such devices.
  • Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
PCT/IB2019/055579 2018-06-29 2019-07-01 Extended merge mode WO2020003273A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2018093646 2018-06-29
CNPCT/CN2018/093646 2018-06-29

Publications (1)

Publication Number Publication Date
WO2020003273A1 true WO2020003273A1 (en) 2020-01-02

Family

ID=67253944

Family Applications (3)

Application Number Title Priority Date Filing Date
PCT/IB2019/055579 WO2020003273A1 (en) 2018-06-29 2019-07-01 Extended merge mode
PCT/IB2019/055590 WO2020003281A1 (en) 2018-06-29 2019-07-01 Video bitstream processing using an extended merge mode and signaled motion information of a block
PCT/IB2019/055583 WO2020003276A1 (en) 2018-06-29 2019-07-01 Emm mode signaling

Family Applications After (2)

Application Number Title Priority Date Filing Date
PCT/IB2019/055590 WO2020003281A1 (en) 2018-06-29 2019-07-01 Video bitstream processing using an extended merge mode and signaled motion information of a block
PCT/IB2019/055583 WO2020003276A1 (en) 2018-06-29 2019-07-01 Emm mode signaling

Country Status (3)

Country Link
CN (3) CN110662041B (zh)
TW (3) TWI731362B (zh)
WO (3) WO2020003273A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021139806A1 (en) * 2020-01-12 2021-07-15 Beijing Bytedance Network Technology Co., Ltd. Constraints for video coding and decoding
US11523108B2 (en) 2019-08-10 2022-12-06 Beijing Bytedance Network Technology Co., Ltd. Position restriction for inter coding mode
US11539950B2 (en) 2019-10-02 2022-12-27 Beijing Bytedance Network Technology Co., Ltd. Slice level signaling in video bitstreams that include subpictures
US11956432B2 (en) 2019-10-18 2024-04-09 Beijing Bytedance Network Technology Co., Ltd Interplay between subpictures and in-loop filtering

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11051025B2 (en) * 2018-07-13 2021-06-29 Tencent America LLC Method and apparatus for video coding
CN112602322B (zh) * 2018-08-28 2023-08-22 鸿颖创新有限公司 编码视频数据的装置和方法
WO2021192892A1 (ja) * 2020-03-27 2021-09-30 株式会社コナミデジタルエンタテインメント Video distribution system, video distribution control method, and computer program
CN117321994A (zh) * 2021-04-09 2023-12-29 抖音视界有限公司 用于视频处理的方法、设备和介质
CN117581539A (zh) * 2021-04-10 2024-02-20 抖音视界有限公司 Gpm运动细化

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180098087A1 (en) * 2016-09-30 2018-04-05 Qualcomm Incorporated Frame rate up-conversion coding mode
EP3468194A1 (en) * 2017-10-05 2019-04-10 Thomson Licensing Decoupled mode inference and prediction

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9819963B2 (en) * 2011-07-12 2017-11-14 Electronics And Telecommunications Research Institute Inter prediction method and apparatus for same
CN106851311B (zh) * 2011-08-29 2019-08-13 苗太平洋控股有限公司 视频译码装置
US9357214B2 (en) * 2012-12-07 2016-05-31 Qualcomm Incorporated Advanced merge/skip mode and advanced motion vector prediction (AMVP) mode for 3D video
KR101854003B1 (ko) * 2013-07-02 2018-06-14 경희대학교 산학협력단 복수의 레이어를 포함하는 영상의 부호화 및 복호화 방법
CN103561263B (zh) * 2013-11-06 2016-08-24 北京牡丹电子集团有限责任公司数字电视技术中心 基于运动矢量约束和加权运动矢量的运动补偿预测方法
EP3111641A4 (en) * 2014-04-01 2017-11-08 MediaTek Inc. Method of motion information coding
US10958927B2 (en) * 2015-03-27 2021-03-23 Qualcomm Incorporated Motion information derivation mode determination in video coding
US10812791B2 (en) * 2016-09-16 2020-10-20 Qualcomm Incorporated Offset vector identification of temporal motion vector predictor
WO2018070632A1 (ko) * 2016-10-11 2018-04-19 엘지전자 주식회사 영상 코딩 시스템에서 영상 디코딩 방법 및 장치
CN107396106A (zh) * 2017-06-26 2017-11-24 深圳市亿联智能有限公司 一种基于h.265编码标准的视频加密算法
CN107396102B (zh) * 2017-08-30 2019-10-08 中南大学 一种基于Merge技术运动矢量的帧间模式快速选择方法及装置

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180098087A1 (en) * 2016-09-30 2018-04-05 Qualcomm Incorporated Frame rate up-conversion coding mode
EP3468194A1 (en) * 2017-10-05 2019-04-10 Thomson Licensing Decoupled mode inference and prediction

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Algorithm description of Joint Exploration Test Model 2 (JEM2)", 114. MPEG MEETING;22-2-2016 - 26-2-2016; SAN DIEGO; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11),, no. N16066, 4 April 2016 (2016-04-04), XP030022739 *
A. ALSHIN, E. ALSHINA: "Description of SDR, HDR and 360° video coding technology proposal by Samsung, Huawei, GoPro, and HiSilicon - mobile application scenario", JVET-J0024, April 2018 (2018-04-01)
C. ROSEWARNE, B. BROSS, M. NACCARI, K. SHARMAN, G. SULLIVAN: "High Efficiency Video Coding (HEVC) Test Model 16 (HM 16) Improved Encoder Description Update 7", JCTVC-Y1002, October 2016 (2016-10-01)
J. CHEN, E. ALSHINA, G. J. SULLIVAN, J.-R. OHM, J. BOYCE: "Algorithm description of Joint Exploration Test Model 7 (JEM7)", JVET-G1001, August 2017 (2017-08-01)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11523108B2 (en) 2019-08-10 2022-12-06 Beijing Bytedance Network Technology Co., Ltd. Position restriction for inter coding mode
US11533513B2 (en) 2019-08-10 2022-12-20 Beijing Bytedance Network Technology Co., Ltd. Subpicture size definition in video processing
US11553177B2 (en) 2019-08-10 2023-01-10 Beijing Bytedance Network Technology Co., Ltd. Buffer management in subpicture decoding
US11539950B2 (en) 2019-10-02 2022-12-27 Beijing Bytedance Network Technology Co., Ltd. Slice level signaling in video bitstreams that include subpictures
US11546593B2 (en) 2019-10-02 2023-01-03 Beijing Bytedance Network Technology Co., Ltd. Syntax for subpicture signaling in a video bitstream
US11956432B2 (en) 2019-10-18 2024-04-09 Beijing Bytedance Network Technology Co., Ltd Interplay between subpictures and in-loop filtering
US11962771B2 (en) 2019-10-18 2024-04-16 Beijing Bytedance Network Technology Co., Ltd Syntax constraints in parameter set signaling of subpictures
WO2021139806A1 (en) * 2020-01-12 2021-07-15 Beijing Bytedance Network Technology Co., Ltd. Constraints for video coding and decoding

Also Published As

Publication number Publication date
TWI722467B (zh) 2021-03-21
TW202002651A (zh) 2020-01-01
WO2020003276A1 (en) 2020-01-02
CN110662046A (zh) 2020-01-07
TWI731362B (zh) 2021-06-21
CN110662041A (zh) 2020-01-07
CN110662041B (zh) 2022-07-29
WO2020003281A1 (en) 2020-01-02
TWI736923B (zh) 2021-08-21
CN110662055B (zh) 2022-07-05
CN110662055A (zh) 2020-01-07
CN110662046B (zh) 2022-03-25
TW202017370A (zh) 2020-05-01
TW202002650A (zh) 2020-01-01

Similar Documents

Publication Publication Date Title
US12022087B2 (en) Mode dependent motion vector difference precision set
US11956465B2 (en) Difference calculation based on partial position
US11240531B2 (en) Size selective application of decoder side refining tools
WO2019234673A1 (en) Chroma dmvr
WO2020114405A1 (en) Indication method of maximum number of candidates
CN110662041B (zh) 视频比特流处理的方法和装置,存储视频比特流的方法和非暂时性计算机可读记录介质
WO2020177683A1 (en) Enabling bio based on the information in the picture header
US11729377B2 (en) Affine mode in video coding and decoding
WO2020151765A1 (en) Interpolation for bi-prediction with cu-level weight
WO2020003262A1 (en) Symmetric bi-prediction mode for video coding
US11863771B2 (en) Updating of history based motion vector prediction tables
WO2020070729A1 (en) Size restriction based on motion information
WO2020182187A1 (en) Adaptive weight in multi-hypothesis prediction in video coding
US12034964B2 (en) Selective application of decoder side refining tools

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19739404

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.04.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19739404

Country of ref document: EP

Kind code of ref document: A1