US20210144400A1 - Difference calculation based on partial position - Google Patents
- Publication number: US20210144400A1
- Application number: US 17/154,485
- Authority: US (United States)
- Prior art keywords: tool, block, motion vector, sub, motion
- Prior art date
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- All classifications fall under H—Electricity; H04—Electric communication technique; H04N—Pictorial communication, e.g. television; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals:
- H04N19/51—Motion estimation or motion compensation
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/107—Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
- H04N19/109—Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/16—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter, for a given display mode, e.g. for interlaced or progressive display mode
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/184—Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
- H04N19/593—Predictive coding involving spatial prediction techniques
- H04N19/70—Characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- This patent document relates to video coding techniques, devices and systems.
- Motion compensation is a technique in video processing to predict a frame in a video, given the previous and/or future frames by accounting for motion of the camera and/or objects in the video. Motion compensation can be used in the encoding/decoding of video data for video compression.
- This document discloses methods, systems, and devices related to the use of motion compensation in video coding and decoding.
- A method for video processing comprises: calculating, during a conversion between a current block of video and a bitstream representation of the current block, differences between two reference blocks associated with the current block, or differences between two reference sub-blocks associated with a sub-block within the current block, based on representative positions of the reference blocks or representative positions of the reference sub-blocks; and performing the conversion based on the differences.
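- As an illustration of the idea above (not the claimed implementation; the helper name and the threshold are hypothetical), a difference check over representative positions only, e.g. every other row, might look like the following sketch in Python:

```python
import numpy as np

def partial_position_sad(ref0: np.ndarray, ref1: np.ndarray, row_step: int = 2) -> int:
    """Sum of absolute differences computed over representative positions
    only (every `row_step`-th row), instead of over every sample."""
    assert ref0.shape == ref1.shape
    sub0 = ref0[::row_step, :].astype(np.int64)
    sub1 = ref1[::row_step, :].astype(np.int64)
    return int(np.abs(sub0 - sub1).sum())

# Early-termination style usage: skip further refinement when the
# partial-position difference is already small (threshold illustrative).
ref0 = np.random.randint(0, 256, (16, 16))
ref1 = np.random.randint(0, 256, (16, 16))
if partial_position_sad(ref0, ref1) < 2 * 16 * 8:
    pass  # e.g. terminate a DMVR/BIO-style refinement early
```

Using only half of the rows roughly halves the cost of the difference calculation while still tracking the full-block difference.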
- A method for video processing comprises: making a decision, based on a determination that a current block of a video is coded using a specific coding mode, regarding a selective enablement of a decoder side motion vector derivation (DMVD) tool for the current block, wherein the DMVD tool derives a refinement of motion information signaled in a bitstream representation of the video; and performing, based on the decision, a conversion between the current block and the bitstream representation.
- A video processing method includes generating, using a multi-step refinement process, multiple refinement values of motion vector information based on decoded motion information from a bitstream representation of a current video block, and reconstructing the current video block or decoding other video blocks based on the multiple refinement values.
- Another video processing method includes performing, for a conversion between a current block and a bitstream representation of the current block, a multi-step refinement process for a sub-block of the current block and a temporal gradient modification process between two prediction blocks of the sub-block, wherein the multi-step refinement process generates multiple refinement values of motion vector information based on decoded motion information from the bitstream representation, and performing the conversion between the current block and the bitstream representation based on the refinement values.
- Another video processing method includes determining, using a multi-step decoder-side motion vector refinement process for a current video block, a final motion vector, and performing the conversion between the current block and the bitstream representation using the final motion vector.
- Another method of video processing includes applying, during a conversion between a current video block and a bitstream representation of the current video block, multiple different motion vector refinement processes to different sub-blocks of the current video block, and performing the conversion using a final motion vector for the current video block generated from the multiple different motion vector refinement processes.
- Another method of video processing includes performing a conversion between a current video block and a bitstream representation of the current video block using a rule that limits the maximum number of sub-blocks into which a coding unit or a prediction unit may be divided in case that the current video block is coded using a sub-block based coding tool, wherein the sub-block based coding tool includes one or more of affine coding, advanced temporal motion vector predictor, bi-directional optical flow or a decoder-side motion vector refinement coding tool.
- Another method of video processing includes performing a conversion between a current video block and a bitstream representation of the current video block using a rule that specifies using different partitioning for the chroma components of the current video block than for the luma component of the current video block in case that the current video block is coded using a sub-block based coding tool, wherein the sub-block based coding tool includes one or more of affine coding, advanced temporal motion vector predictor, bi-directional optical flow or a decoder-side motion vector refinement coding tool.
- Another method of video processing includes determining, in an early termination stage of a bi-directional optical flow (BIO) technique or a decoder-side motion vector refinement (DMVR) technique, differences between reference video blocks associated with a current video block, and performing further processing of the current video block based on the differences.
- The various techniques described herein may be embodied as a computer program product stored on a non-transitory computer readable medium.
- The computer program product includes program code for carrying out the methods described herein.
- A video decoder apparatus may implement a method as described herein.
- FIG. 1 shows an example of a derivation process for merge candidates list construction.
- FIG. 2 shows example positions of spatial merge candidates.
- FIG. 3 shows examples of candidate pairs considered for the redundancy check of spatial merge candidates.
- FIG. 4 shows example positions for the second PU of N×2N and 2N×N partitions.
- FIG. 5 is an illustration of motion vector scaling for the temporal merge candidate.
- FIG. 6 shows examples of candidate positions for the temporal merge candidate, C0 and C1.
- FIG. 7 shows an example of a combined bi-predictive merge candidate.
- FIG. 8 shows an example of a derivation process for motion vector prediction candidates.
- FIG. 9 is an example illustration of motion vector scaling for spatial motion vector candidate.
- FIG. 10 illustrates an example of advanced temporal motion vector predictor (ATMVP) for a Coding Unit (CU).
- FIG. 11 shows an example of one CU with four sub-blocks (A-D) and its neighbouring blocks (a-d).
- FIG. 12 is an example illustration of sub-blocks where OBMC applies.
- FIG. 13 shows an example of neighbouring samples used for deriving IC parameters.
- FIG. 14 shows an example of a simplified affine motion model.
- FIG. 15 shows an example of affine MVF per sub-block.
- FIG. 16 shows an example of a motion vector predictor (MVP) for AF_INTER mode.
- FIGS. 17A-17B show examples of candidates for AF_MERGE mode.
- FIG. 18 shows an example process of bilateral matching.
- FIG. 19 shows an example process of template matching.
- FIG. 20 illustrates an implementation of unilateral motion estimation (ME) in frame rate upconversion (FRUC).
- FIG. 21 illustrates an embodiment of an Ultimate Motion Vector Expression (UMVE) search process.
- FIG. 22 shows examples of UMVE search points.
- FIG. 23 shows an example of distance index and distance offset mapping.
- FIG. 24 shows an example of an optical flow trajectory.
- FIGS. 25A-25B show examples of Bi-directional Optical flow (BIO) without block extension: (a) access positions outside of the block; (b) padding used in order to avoid extra memory access and calculation.
- FIG. 26 illustrates an example of using Decoder-side motion vector refinement (DMVR) based on bilateral template matching.
- FIG. 27 shows an example of interweaved prediction.
- FIG. 28 shows an example of iterative motion vector refinement for BIO.
- FIG. 29 is a block diagram of a hardware platform for implementing the video coding or decoding techniques described in the present document.
- FIG. 30 shows an example of a hardware platform for implementing methods and techniques described in the present document.
- FIG. 31 is a flowchart of an example method of video processing.
- FIG. 32 is a flowchart of an example method of video processing.
- FIG. 33 is a flowchart of an example method of video processing.
- The present document is related to video coding technologies. Specifically, it is related to motion compensation in video coding.
- The disclosed techniques may be applied to existing video coding standards like HEVC, or to the standard Versatile Video Coding (VVC) to be finalized. They may also be applicable to future video coding standards or video codecs.
- Herein, video processing may refer to video encoding, video decoding, video compression or video decompression.
- For example, video compression algorithms may be applied during conversion from a pixel representation of a video to a corresponding bitstream representation, or vice versa.
- Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards.
- the ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video and H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/HEVC standards.
- The video coding standards are based on the hybrid video coding structure, wherein temporal prediction plus transform coding are utilized.
- The Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named the Joint Exploration Model (JEM).
- Each inter-predicted PU has motion parameters for one or two reference picture lists.
- Motion parameters include a motion vector and a reference picture index. Usage of one of the two reference picture lists may also be signalled using inter_pred_idc. Motion vectors may be explicitly coded as deltas relative to predictors.
- A merge mode is specified whereby the motion parameters for the current PU are obtained from neighbouring PUs, including spatial and temporal candidates.
- The merge mode can be applied to any inter-predicted PU, not only to skip mode.
- The alternative to merge mode is the explicit transmission of motion parameters, where the motion vector (to be more precise, the motion vector difference compared to a motion vector predictor), the corresponding reference picture index for each reference picture list and the reference picture list usage are signalled explicitly for each PU.
- Such a mode is named Advanced motion vector prediction (AMVP) in this disclosure.
- When signalling indicates that one of the two reference picture lists is to be used, the PU is produced from one block of samples. This is referred to as 'uni-prediction'. Uni-prediction is available both for P-slices and B-slices.
- When signalling indicates that both of the reference picture lists are to be used, the PU is produced from two blocks of samples. This is referred to as 'bi-prediction'. Bi-prediction is available for B-slices only.
- Step 1: Initial candidates derivation
- Step 2: Additional candidates insertion
- These steps are also schematically depicted in FIG. 1.
- For spatial merge candidate derivation, a maximum of four merge candidates are selected among candidates that are located in five different positions.
- For temporal merge candidate derivation, a maximum of one merge candidate is selected among two candidates. Since a constant number of candidates for each PU is assumed at the decoder, additional candidates are generated when the number of candidates obtained from step 1 does not reach the maximum number of merge candidates (MaxNumMergeCand), which is signalled in the slice header. Since the number of candidates is constant, the index of the best merge candidate is encoded using truncated unary binarization (TU), sketched below. If the size of the CU is equal to 8, all the PUs of the current CU share a single merge candidate list, which is identical to the merge candidate list of the 2N×2N prediction unit.
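- As a side note, truncated unary binarization can be sketched as follows (illustrative Python, not part of the patent text): an index is coded as that many '1' bits plus a terminating '0', and the terminator is dropped for the largest possible index.

```python
def truncated_unary(index: int, max_index: int) -> str:
    """Truncated unary binarization of `index` given the largest
    possible value `max_index`."""
    assert 0 <= index <= max_index
    bins = "1" * index
    return bins if index == max_index else bins + "0"

# With MaxNumMergeCand = 5 (max_index = 4), indices 0..4 map to:
# "0", "10", "110", "1110", "1111"
codes = [truncated_unary(i, 4) for i in range(5)]
```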
- In spatial merge candidate derivation, a maximum of four merge candidates are selected among candidates located in the positions depicted in FIG. 2.
- The order of derivation is A1, B1, B0, A0 and B2.
- Position B2 is considered only when any PU of positions A1, B1, B0, A0 is not available (e.g. because it belongs to another slice or tile) or is intra coded.
- After the candidate at position A1 is added, the addition of the remaining candidates is subject to a redundancy check, which ensures that candidates with the same motion information are excluded from the list, so that coding efficiency is improved.
- To reduce computational complexity, not all possible candidate pairs are considered in the mentioned redundancy check. Instead, only the pairs linked with an arrow in FIG. 3 are considered, and a candidate is only added to the list if the corresponding candidate used for the redundancy check does not have the same motion information.
- FIG. 4 depicts the second PU for the cases of N×2N and 2N×N, respectively. When the current PU is partitioned as N×2N, the candidate at position A1 is not considered for list construction; adding this candidate would lead to two prediction units having the same motion information, which is redundant compared to just having one PU in a coding unit.
- Similarly, position B1 is not considered when the current PU is partitioned as 2N×N.
- In the derivation of the temporal merge candidate, a scaled motion vector is derived based on the co-located PU belonging to the picture which has the smallest POC difference with the current picture within the given reference picture list.
- The reference picture list to be used for derivation of the co-located PU is explicitly signalled in the slice header.
- The scaled motion vector for the temporal merge candidate is obtained as illustrated by the dotted line in FIG. 5. It is scaled from the motion vector of the co-located PU using the POC distances tb and td, where tb is defined as the POC difference between the reference picture of the current picture and the current picture, and td is defined as the POC difference between the reference picture of the co-located picture and the co-located picture. The reference picture index of the temporal merge candidate is set equal to zero.
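- A minimal sketch of this POC-distance-based scaling, in the style of HEVC's fixed-point TMVP scaling (function name hypothetical; assumes td != 0):

```python
def scale_mv(mv, tb, td):
    """Scale a co-located motion vector by the POC-distance ratio tb/td
    using HEVC-style fixed-point arithmetic."""
    tx = (16384 + abs(td) // 2) // td            # approximates 1/td in Q14
    dist_scale = max(-4096, min(4095, (tb * tx + 32) >> 6))
    def comp(v):
        x = dist_scale * v
        s = (abs(x) + 127) >> 8
        return max(-32768, min(32767, s if x >= 0 else -s))
    return (comp(mv[0]), comp(mv[1]))

# tb: POC distance between the current picture and its reference;
# td: POC distance between the co-located picture and its reference.
scaled = scale_mv((12, -8), tb=2, td=4)          # roughly halves the MV
```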
- The position for the temporal candidate is selected between candidates C0 and C1, as depicted in FIG. 6. If the PU at position C0 is not available, is intra coded, or is outside of the current CTU row, position C1 is used. Otherwise, position C0 is used in the derivation of the temporal merge candidate.
- Besides spatial and temporal merge candidates, there are two additional types of merge candidates: the combined bi-predictive merge candidate and the zero merge candidate. Combined bi-predictive merge candidates are generated by utilizing spatial and temporal merge candidates, and are used for B-slices only. The combined bi-predictive candidates are generated by combining the first reference picture list motion parameters of an initial candidate with the second reference picture list motion parameters of another. If these two tuples provide different motion hypotheses, they will form a new bi-predictive candidate; FIG. 7 depicts an example.
- Zero motion candidates are inserted to fill the remaining entries in the merge candidates list and therefore hit the MaxNumMergeCand capacity. These candidates have zero spatial displacement and a reference picture index which starts from zero and increases every time a new zero motion candidate is added to the list. The number of reference frames used by these candidates is one and two for uni- and bi-directional prediction, respectively. Finally, no redundancy check is performed on these candidates.
- HEVC defines the motion estimation region (MER), whose size is signalled in the picture parameter set using the "log2_parallel_merge_level_minus2" syntax element. When a MER is defined, merge candidates falling in the same region are marked as unavailable and therefore not considered in the list construction.
- AMVP exploits the spatio-temporal correlation of a motion vector with neighbouring PUs, which is used for explicit transmission of motion parameters.
- A motion vector candidate list is constructed by first checking the availability of left, above and temporally neighbouring PU positions, removing redundant candidates and adding a zero vector to make the candidate list a constant length. Then, the encoder can select the best predictor from the candidate list and transmit the corresponding index indicating the chosen candidate. Similarly to merge index signalling, the index of the best motion vector candidate is encoded using truncated unary. The maximum value to be encoded in this case is 2 (see FIG. 8). In the following sections, details about the derivation process of motion vector prediction candidates are provided.
- FIG. 8 summarizes the derivation process for motion vector prediction candidates.
- For motion vector candidates, two types are considered: spatial motion vector candidates and temporal motion vector candidates.
- For spatial motion vector candidate derivation, two motion vector candidates are eventually derived based on the motion vectors of each PU located in the five different positions depicted in FIG. 2.
- For temporal motion vector candidate derivation, one motion vector candidate is selected from two candidates, which are derived based on two different co-located positions. After the first list of spatio-temporal candidates is made, duplicated motion vector candidates in the list are removed. If the number of potential candidates is larger than two, motion vector candidates whose reference picture index within the associated reference picture list is larger than 1 are removed from the list. If the number of spatio-temporal motion vector candidates is smaller than two, additional zero motion vector candidates are added to the list.
- In spatial motion vector candidate derivation, a maximum of two candidates are considered among five potential candidates, which are derived from PUs located in the positions depicted in FIG. 2, those positions being the same as those of motion merge.
- The order of derivation for the left side of the current PU is defined as A0, A1, scaled A0, scaled A1.
- The order of derivation for the above side of the current PU is defined as B0, B1, B2, scaled B0, scaled B1, scaled B2.
- The no-spatial-scaling cases are checked first, followed by the spatial scaling. Spatial scaling is considered when the POC differs between the reference picture of the neighbouring PU and that of the current PU, regardless of reference picture list. If all PUs of the left candidates are not available or are intra coded, scaling for the above motion vector is allowed to help parallel derivation of left and above MV candidates. Otherwise, spatial scaling is not allowed for the above motion vector.
- In a spatial scaling process, the motion vector of the neighbouring PU is scaled in a similar manner as for temporal scaling, as depicted in FIG. 9.
- The main difference is that the reference picture list and index of the current PU are given as input; the actual scaling process is the same as that of temporal scaling.
- In the JEM, each CU can have at most one set of motion parameters for each prediction direction.
- Two sub-CU level motion vector prediction methods are considered in the encoder by splitting a large CU into sub-CUs and deriving motion information for all the sub-CUs of the large CU.
- The alternative temporal motion vector prediction (ATMVP) method allows each CU to fetch multiple sets of motion information from multiple blocks smaller than the current CU in the collocated reference picture.
- In the spatial-temporal motion vector prediction (STMVP) method, motion vectors of the sub-CUs are derived recursively by using the temporal motion vector predictor and spatial neighbouring motion vectors.
- To preserve a more accurate motion field for sub-CU motion prediction, the motion compression for the reference frames is currently disabled.
- In the alternative temporal motion vector prediction (ATMVP) method, the temporal motion vector prediction (TMVP) is modified by fetching multiple sets of motion information (including motion vectors and reference indices) from blocks smaller than the current CU.
- The sub-CUs are square N×N blocks (N is set to 4 by default).
- ATMVP predicts the motion vectors of the sub-CUs within a CU in two steps.
- The first step is to identify the corresponding block in a reference picture with a so-called temporal vector.
- The reference picture is called the motion source picture.
- The second step is to split the current CU into sub-CUs and obtain the motion vectors as well as the reference indices of each sub-CU from the block corresponding to each sub-CU, as shown in FIG. 10.
- In the first step, a reference picture and the corresponding block are determined by the motion information of the spatially neighbouring blocks of the current CU.
- To avoid the repetitive scanning process of neighbouring blocks, the first merge candidate in the merge candidate list of the current CU is used.
- The first available motion vector as well as its associated reference index are set to be the temporal vector and the index to the motion source picture. This way, in ATMVP, the corresponding block may be more accurately identified, compared with TMVP, wherein the corresponding block (sometimes called the collocated block) is always in a bottom-right or center position relative to the current CU.
- In the second step, a corresponding block of the sub-CU is identified by the temporal vector in the motion source picture, by adding the temporal vector to the coordinates of the current CU.
- For each sub-CU, the motion information of its corresponding block (the smallest motion grid that covers the center sample) is used to derive the motion information for the sub-CU.
- After the motion information of a corresponding N×N block is identified, it is converted to the motion vectors and reference indices of the current sub-CU, in the same way as TMVP of HEVC, wherein motion scaling and other procedures apply.
- For example, the decoder checks whether the low-delay condition (i.e., the POCs of all reference pictures of the current picture are smaller than the POC of the current picture) is fulfilled and possibly uses motion vector MVx (the motion vector corresponding to reference picture list X) to predict motion vector MVy (with X being equal to 0 or 1 and Y being equal to 1−X) for each sub-CU.
- FIG. 11 illustrates this concept. Let us consider an 8 ⁇ 8 CU which contains four 4 ⁇ 4 sub-CUs A, B, C, and D. The neighbouring 4 ⁇ 4 blocks in the current frame are labelled as a, b, c, and d.
- The motion derivation for sub-CU A starts by identifying its two spatial neighbours.
- The first neighbour is the N×N block above sub-CU A (block c). If this block c is not available or is intra coded, the other N×N blocks above sub-CU A are checked (from left to right, starting at block c).
- The second neighbour is a block to the left of sub-CU A (block b). If block b is not available or is intra coded, other blocks to the left of sub-CU A are checked (from top to bottom, starting at block b).
- The motion information obtained from the neighbouring blocks for each list is scaled to the first reference frame for a given list.
- Next, the temporal motion vector predictor (TMVP) of sub-block A is derived by following the same procedure of TMVP derivation as specified in HEVC.
- The motion information of the collocated block at location D is fetched and scaled accordingly.
- Finally, all available motion vectors (up to 3) are averaged separately for each reference list; the averaged motion vector is assigned as the motion vector of the current sub-CU (see the sketch below).
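- A small sketch of this per-list averaging (hypothetical helper; the integer rounding is simplified relative to the JEM's exact derivation):

```python
def stmvp_average(candidates):
    """Average the available motion vectors (spatial above, spatial left,
    temporal; up to three) for one reference list. Unavailable entries
    are passed as None."""
    mvs = [mv for mv in candidates if mv is not None]
    if not mvs:
        return None
    n = len(mvs)
    return (round(sum(x for x, _ in mvs) / n), round(sum(y for _, y in mvs) / n))

mv_a = stmvp_average([(8, 4), None, (4, 0)])   # -> (6, 2)
```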
- The sub-CU modes are enabled as additional merge candidates, and there is no additional syntax element required to signal the modes.
- Two additional merge candidates are added to the merge candidates list of each CU to represent the ATMVP mode and the STMVP mode. Up to seven merge candidates are used, if the sequence parameter set indicates that ATMVP and STMVP are enabled.
- The encoding logic of the additional merge candidates is the same as for the merge candidates in the HM, which means that for each CU in a P or B slice, two more RD checks are needed for the two additional merge candidates.
- In the JEM, a locally adaptive motion vector resolution (LAMVR) is introduced, whereby motion vector differences (MVDs) can be coded in units of quarter luma samples, integer luma samples or four luma samples.
- The MVD resolution is controlled at the coding unit (CU) level, and MVD resolution flags are conditionally signalled for each CU that has at least one non-zero MVD component.
- For such a CU, a first flag is signalled to indicate whether quarter luma sample MV precision is used in the CU.
- When the first flag (equal to 1) indicates that quarter luma sample MV precision is not used, another flag is signalled to indicate whether integer luma sample MV precision or four luma sample MV precision is used.
- Otherwise, the quarter luma sample MV resolution is used for the CU.
- When a CU uses integer luma sample MV precision or four luma sample MV precision, the MVPs in the AMVP candidate list for the CU are rounded to the corresponding precision.
- In the encoder, CU-level RD checks are used to determine which MVD resolution is to be used for a CU. That is, the CU-level RD check is performed three times, once for each MVD resolution.
- To avoid always performing this triple check, the following encoding schemes are applied in the JEM.
- In HEVC, motion vector accuracy is one-quarter pel (one-quarter luma sample and one-eighth chroma sample for 4:2:0 video).
- In the JEM, the accuracy for the internal motion vector storage and the merge candidate increases to 1/16 pel.
- The higher motion vector accuracy (1/16 pel) is used in motion compensation inter prediction for a CU coded with skip/merge mode.
- The integer-pel or quarter-pel motion is used for a CU coded with normal AMVP mode.
- SHVC upsampling interpolation filters, which have the same filter length and normalization factor as HEVC motion compensation interpolation filters, are used as motion compensation interpolation filters for the additional fractional pel positions.
- Since the chroma component motion vector accuracy is 1/32 sample in the JEM, the additional interpolation filters for the 1/32-pel fractional positions are derived by using the average of the filters of the two neighbouring 1/16-pel fractional positions (see the sketch below).
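- The filter averaging rule can be sketched as follows (illustrative Python; the table layout with 17 phases 0..16, phase 16 being the zero-phase filter of the next full sample, and the rounding (a+b+1)>>1 are assumptions, not the normative derivation):

```python
def chroma_filter_1_32(filters_1_16, pos_1_32):
    """Derive the interpolation filter for a 1/32-pel position: even
    positions reuse a 1/16-pel filter, odd positions average the two
    neighbouring 1/16-pel filters tap by tap."""
    if pos_1_32 % 2 == 0:
        return list(filters_1_16[pos_1_32 // 2])
    lo = filters_1_16[pos_1_32 // 2]
    hi = filters_1_16[pos_1_32 // 2 + 1]
    return [(a + b + 1) >> 1 for a, b in zip(lo, hi)]
```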
- In the JEM, Overlapped Block Motion Compensation (OBMC) is used. OBMC can be switched on and off using syntax at the CU level.
- When OBMC is used, it is performed for all motion compensation (MC) block boundaries except the right and bottom boundaries of a CU. Moreover, it is applied for both the luma and chroma components.
- In the JEM, an MC block corresponds to a coding block.
- When a CU is coded with sub-CU mode (which includes sub-CU merge, affine and FRUC modes), each sub-block of the CU is an MC block.
- To process CU boundaries in a uniform fashion, OBMC is performed at the sub-block level for all MC block boundaries, where the sub-block size is set equal to 4×4, as illustrated in FIG. 12.
- When OBMC applies to the current sub-block, besides the current motion vector, the motion vectors of four connected neighbouring sub-blocks, if available and not identical to the current motion vector, are also used to derive the prediction block for the current sub-block.
- These multiple prediction blocks based on multiple motion vectors are combined to generate the final prediction signal of the current sub-block.
- The prediction block based on the motion vectors of a neighbouring sub-block is denoted as PN, with N indicating an index for the neighbouring above, below, left and right sub-blocks, and the prediction block based on the motion vectors of the current sub-block is denoted as PC.
- When PN is based on the motion information of a neighbouring sub-block that contains the same motion information as the current sub-block, the OBMC is not performed from PN. Otherwise, every sample of PN is added to the same sample in PC, i.e., four rows/columns of PN are added to PC.
- The weighting factors {1/4, 1/8, 1/16, 1/32} are used for PN and the weighting factors {3/4, 7/8, 15/16, 31/32} are used for PC.
- The exceptions are small MC blocks (i.e., when the height or width of the coding block is equal to 4, or when a CU is coded with sub-CU mode), for which only two rows/columns of PN are added to PC.
- In this case, weighting factors {1/4, 1/8} are used for PN and weighting factors {3/4, 7/8} are used for PC.
- For PN generated based on the motion vectors of a vertically (horizontally) neighbouring sub-block, samples in the same row (column) of PN are added to PC with a same weighting factor.
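- A sketch of the row blending for a top-neighbour prediction (floating-point for clarity; the codec itself uses integer weights, and only two rows for the small-block exception noted above):

```python
import numpy as np

W_PN = [1 / 4, 1 / 8, 1 / 16, 1 / 32]    # weights applied to PN
W_PC = [3 / 4, 7 / 8, 15 / 16, 31 / 32]  # complementary weights for PC

def obmc_blend_top(pc: np.ndarray, pn: np.ndarray, rows: int = 4) -> np.ndarray:
    """Blend the first `rows` rows of PN (prediction using the above
    neighbour's MV) into PC with the OBMC weighting factors."""
    out = pc.astype(np.float64)
    for r in range(rows):
        out[r, :] = W_PC[r] * out[r, :] + W_PN[r] * pn[r, :]
    return out
```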
- A CU-level flag is signalled to indicate whether OBMC is applied or not for the current CU; where no flag is signalled, OBMC is applied by default.
- At the encoder, when OBMC is applied for a CU, its impact is taken into account during motion estimation: the prediction signal formed by OBMC using motion information of the top neighbouring block and the left neighbouring block is used to compensate the top and left boundaries of the original signal of the current CU, and then the normal motion estimation process is applied.
- Local Illumination Compensation (LIC) is based on a linear model for illumination changes, using a scaling factor a and an offset b, and it is enabled or disabled adaptively for each inter-mode coded coding unit (CU).
- When LIC applies for a CU, a least square error method is employed to derive the parameters a and b by using the neighbouring samples of the current CU and their corresponding reference samples. More specifically, as illustrated in FIG. 13, the subsampled (2:1 subsampling) neighbouring samples of the CU and the corresponding samples (identified by motion information of the current CU or sub-CU) in the reference picture are used. The IC parameters are derived and applied for each prediction direction separately.
- When a CU is coded with merge mode, the LIC flag is copied from neighbouring blocks, in a way similar to motion information copy in merge mode; otherwise, an LIC flag is signalled for the CU to indicate whether LIC applies or not.
- When LIC is enabled for a picture, an additional CU-level RD check is needed to determine whether LIC is applied or not for a CU.
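- The least-squares derivation of a and b can be sketched as follows (plain ordinary least squares in Python; the JEM's integer implementation differs in detail):

```python
import numpy as np

def derive_lic_params(neigh_cur: np.ndarray, neigh_ref: np.ndarray):
    """Fit cur ~= a * ref + b from the subsampled neighbouring samples of
    the current CU and their counterparts in the reference picture."""
    x = neigh_ref.astype(np.float64).ravel()
    y = neigh_cur.astype(np.float64).ravel()
    n = x.size
    denom = n * (x * x).sum() - x.sum() ** 2
    if denom == 0:                  # flat neighbourhood: fall back to offset-only
        return 1.0, float(y.mean() - x.mean())
    a = (n * (x * y).sum() - x.sum() * y.sum()) / denom
    b = (y.sum() - a * x.sum()) / n
    return float(a), float(b)
```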
- To reduce the encoding complexity when LIC is enabled, the mean-removed sum of absolute difference (MR-SAD) and the mean-removed sum of absolute Hadamard-transformed difference (MR-SATD), instead of SAD and SATD, are used for integer-pel motion search and fractional-pel motion search, respectively.
- In High Efficiency Video Coding (HEVC), only a translation motion model is applied for motion compensation prediction (MCP), while in the real world there are many kinds of motion, e.g., zoom in/out, rotation, perspective motions and other irregular motions.
- In the JEM, a simplified affine transform motion compensation prediction is applied. As shown in FIG. 14, the affine motion field of the block is described by two control point motion vectors.
- The motion vector field (MVF) of a block is described by the following equation:
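- The referenced equation is the four-parameter affine model of the JEM, reproduced here for reference, where (v0x, v0y) and (v1x, v1y) are the motion vectors of the top-left and top-right control points and w is the block width:

$$
\begin{cases}
v_x = \dfrac{v_{1x}-v_{0x}}{w}\,x \;-\; \dfrac{v_{1y}-v_{0y}}{w}\,y \;+\; v_{0x},\\[6pt]
v_y = \dfrac{v_{1y}-v_{0y}}{w}\,x \;+\; \dfrac{v_{1x}-v_{0x}}{w}\,y \;+\; v_{0y}.
\end{cases}
$$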
- In order to further simplify the motion compensation prediction, sub-block based affine transform prediction is applied.
- The sub-block size M×N is derived as in Equation 2, where MvPre is the motion vector fraction accuracy (1/16 in the JEM) and (v2x, v2y) is the motion vector of the bottom-left control point, calculated according to Equation 1.
- After being derived by Equation 2, M and N should be adjusted downward, if necessary, to make them divisors of w and h, respectively.
- To derive the motion vector of each sub-block, the motion vector of the center sample of each sub-block is calculated according to Equation 1 and rounded to 1/16 fraction accuracy. Then the motion compensation interpolation filters mentioned in the previous section are applied to generate the prediction of each sub-block with the derived motion vector.
- After MCP, the high accuracy motion vector of each sub-block is rounded and saved with the same accuracy as the normal motion vector.
- In the JEM, there are two affine motion modes: AF_INTER mode and AF_MERGE mode.
- For CUs with both width and height larger than 8, AF_INTER mode can be applied.
- An affine flag at the CU level is signalled in the bitstream to indicate whether AF_INTER mode is used.
- In AF_INTER mode, v0 is selected from the motion vectors of block A, B or C.
- The motion vector from the neighbour block is scaled according to the reference list and the relationship among the POC of the reference for the neighbour block, the POC of the reference for the current CU and the POC of the current CU; the approach to select v1 from the neighbour blocks D and E is similar. If the number of candidates in the candidate list is smaller than 2, the list is padded with motion vector pairs composed by duplicating each of the AMVP candidates. When the candidate list is larger than 2, the candidates are first sorted according to the consistency of the neighbouring motion vectors (the similarity of the two motion vectors in a pair candidate) and only the first two candidates are kept. An RD cost check is used to determine which motion vector pair candidate is selected as the control point motion vector prediction (CPMVP) of the current CU.
- An index indicating the position of the CPMVP in the candidate list is signalled in the bitstream.
- When a CU is coded in AF_MERGE mode, it gets the first block coded with affine mode from the valid neighbour reconstructed blocks, and the selection order for the candidate block is from left, above, above right, left bottom to above left, as shown in FIG. 17A. If the neighbour left-bottom block A is coded in affine mode, as shown in FIG. 17B, the motion vectors v2, v3 and v4 of the top-left corner, above-right corner and left-bottom corner of the CU which contains block A are derived, and the motion vector v0 of the top-left corner of the current CU is calculated according to v2, v3 and v4. Secondly, the motion vector v1 of the above-right corner of the current CU is calculated.
- After the control point motion vectors v0 and v1 of the current CU are derived, the MVF of the current CU is generated according to the simplified affine motion model of Equation 1.
- In order to identify whether the current CU is coded with AF_MERGE mode, an affine flag is signalled in the bitstream when there is at least one neighbour block coded in affine mode.
- Pattern matched motion vector derivation (PMMVD) mode is a special merge mode based on Frame-Rate Up Conversion (FRUC) techniques. With this mode, motion information of a block is not signalled but derived at decoder side.
- A FRUC flag is signalled for a CU when its merge flag is true.
- When the FRUC flag is false, a merge index is signalled and the regular merge mode is used.
- When the FRUC flag is true, an additional FRUC mode flag is signalled to indicate which method (bilateral matching or template matching) is to be used to derive motion information for the block.
- At the encoder side, the decision on whether to use FRUC merge mode for a CU is based on RD cost selection, as done for normal merge candidates. That is, the two matching modes (bilateral matching and template matching) are both checked for a CU by using RD cost selection. The one leading to the minimal cost is further compared to other CU modes. If a FRUC matching mode is the most efficient one, the FRUC flag is set to true for the CU and the related matching mode is used.
- The motion derivation process in FRUC merge mode has two steps.
- A CU-level motion search is first performed, followed by sub-CU level motion refinement.
- At the CU level, an initial motion vector is derived for the whole CU based on bilateral matching or template matching.
- First, a list of MV candidates is generated, and the candidate which leads to the minimum matching cost is selected as the starting point for further CU-level refinement.
- Then a local search based on bilateral matching or template matching around the starting point is performed, and the MV that results in the minimum matching cost is taken as the MV for the whole CU.
- Subsequently, the motion information is further refined at the sub-CU level with the derived CU motion vectors as the starting points.
- For example, the following derivation process is performed for W×H CU motion information derivation.
- In the first stage, the MV for the whole W×H CU is derived.
- In the second stage, the CU is further split into M×M sub-CUs.
- The value of M is calculated as in Equation (3), where D is a predefined splitting depth which is set to 3 by default in the JEM. Then the MV for each sub-CU is derived.
- As shown in FIG. 18, bilateral matching is used to derive motion information of the current CU by finding the closest match between two blocks along the motion trajectory of the current CU in two different reference pictures.
- Under the assumption of a continuous motion trajectory, the motion vectors MV0 and MV1 pointing to the two reference blocks shall be proportional to the temporal distances, i.e., TD0 and TD1, between the current picture and the two reference pictures.
- As a special case, when the current picture is temporally between the two reference pictures and the temporal distances from the current picture to the two reference pictures are the same, the bilateral matching becomes a mirror-based bi-directional MV.
- As shown in FIG. 19, template matching is used to derive motion information of the current CU by finding the closest match between a template (top and/or left neighbouring blocks of the current CU) in the current picture and a block (same size as the template) in a reference picture. Apart from the aforementioned FRUC merge mode, template matching is also applied to AMVP mode.
- In the JEM, as in HEVC, AMVP has two candidates.
- With the template matching method, a new candidate is derived. If the newly derived candidate by template matching is different from the first existing AMVP candidate, it is inserted at the very beginning of the AMVP candidate list, and then the list size is set to two (meaning the second existing AMVP candidate is removed).
- When applied to AMVP mode, only the CU-level search is applied.
- The MV candidate set at the CU level consists of the following.
- When the CU is in bilateral matching merge mode, each valid MV of a merge candidate is used as an input to generate an MV pair with the assumption of bilateral matching.
- For example, one valid MV of a merge candidate is (MVa, refa) at reference list A.
- Then the reference picture refb of its paired bilateral MV is found in the other reference list B, so that refa and refb are temporally on different sides of the current picture. If such a refb is not available in reference list B, refb is determined as a reference which is different from refa and whose temporal distance to the current picture is the minimal one in list B.
- After refb is determined, MVb is derived by scaling MVa based on the temporal distances between the current picture and refa and refb.
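- A floating-point sketch of this mirroring (hypothetical helper; assumes the current picture and refa have different POCs):

```python
def mirror_mv(mva, poc_cur, poc_refa, poc_refb):
    """Derive the paired bilateral MV by scaling MVa with the ratio of
    temporal distances, so that MVb points to refb on the other side of
    the current picture."""
    tda = poc_cur - poc_refa
    tdb = poc_cur - poc_refb
    scale = tdb / tda
    return (round(mva[0] * scale), round(mva[1] * scale))

# refa one picture before, refb one picture after: MVb mirrors MVa.
mvb = mirror_mv((6, -2), poc_cur=8, poc_refa=7, poc_refb=9)   # -> (-6, 2)
```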
- MVs from the interpolated MV field are also added to the CU-level candidate list. More specifically, the interpolated MVs at the positions (0, 0), (W/2, 0), (0, H/2) and (W/2, H/2) of the current CU are added.
- When FRUC is applied in AMVP mode, the original AMVP candidates are also added to the CU-level MV candidate set.
- The MV candidate set at the sub-CU level consists of the following.
- The scaled MVs from reference pictures are derived as follows. All the reference pictures in both lists are traversed, and the MVs at a collocated position of the sub-CU in a reference picture are scaled to the reference of the starting CU-level MV.
- The ATMVP and STMVP candidates are limited to the first four.
- Before coding a frame, an interpolated motion field is generated for the whole picture based on unilateral ME. The motion field may then be used later as CU-level or sub-CU-level MV candidates.
- First, the motion field of each reference picture in both reference lists is traversed at the 4×4 block level.
- For each 4×4 block, if the motion associated with the block passes through a 4×4 block in the current picture and the block has not been assigned any interpolated motion, the motion of the reference block is scaled to the current picture according to the temporal distances TD0 and TD1 (in the same way as the MV scaling of TMVP in HEVC), and the scaled motion is assigned to the block in the current frame. If no scaled MV is assigned to a 4×4 block, the block's motion is marked as unavailable in the interpolated motion field.
- The matching cost is computed a bit differently at different steps.
- When selecting the candidate at the CU level, the matching cost is the sum of absolute differences (SAD) of bilateral matching or template matching.
- After the starting MV is determined, the matching cost C of bilateral matching at the sub-CU level search is calculated as follows: C = SAD + w · (|MVx − MVx_s| + |MVy − MVy_s|), where w is a weighting factor which is empirically set to 4, and MV = (MVx, MVy) and MV_s = (MVx_s, MVy_s) indicate the current MV and the starting MV, respectively.
- SAD is still used as the matching cost of template matching at the sub-CU level search.
- In FRUC mode, the MV is derived by using the luma samples only. The derived motion will be used for both luma and chroma for MC inter prediction. After the MV is decided, final MC is performed using an 8-tap interpolation filter for luma and a 4-tap interpolation filter for chroma.
- MV refinement is a pattern-based MV search with the criterion of bilateral matching cost or template matching cost.
- In the JEM, two search patterns are supported: an unrestricted center-biased diamond search (UCBDS) and an adaptive cross search, for MV refinement at the CU level and the sub-CU level, respectively.
- For both CU-level and sub-CU-level MV refinement, the MV is directly searched at quarter luma sample MV accuracy, and this is followed by one-eighth luma sample MV refinement.
- The search range of MV refinement for the CU and sub-CU steps is set equal to 8 luma samples.
- In the template matching merge mode, the encoder can choose among uni-prediction from list 0, uni-prediction from list 1, or bi-prediction for a CU. The selection is based on a template matching cost as follows: if costBi <= factor × min(cost0, cost1), bi-prediction is used; otherwise, if cost0 <= cost1, uni-prediction from list 0 is used; otherwise, uni-prediction from list 1 is used.
- Here, cost0 is the SAD of list 0 template matching, cost1 is the SAD of list 1 template matching, and costBi is the SAD of bi-prediction template matching.
- The value of factor is equal to 1.25, which means that the selection process is biased toward bi-prediction.
- The inter prediction direction selection is only applied to the CU-level template matching process.
- In conventional bi-prediction, the final predictor is P_TraditionalBiPred = (P_L0 + P_L1 + RoundingOffset) >> shiftNum, where P_L0 and P_L1 are the predictors from L0 and L1, respectively, and RoundingOffset and shiftNum are used to normalize the final predictor.
- In generalized bi-prediction (GBi), the final predictor is P_GBi = ((1 − w1) · P_L0 + w1 · P_L1 + RoundingOffset_GBi) >> shiftNum_GBi, where P_GBi is the final predictor of GBi, (1 − w1) and w1 are the selected GBi weights applied to the predictors of L0 and L1, respectively, and RoundingOffset_GBi and shiftNum_GBi are used to normalize the final predictor in GBi.
- The supported values of w1 are {−1/4, 3/8, 1/2, 5/8, 5/4}.
- One equal-weight set and four unequal-weight sets are supported.
- For the equal-weight set, the process to generate the final predictor is exactly the same as that in the conventional bi-prediction mode.
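- In floating-point form (the rounding offset and shift of the integer implementation omitted), the GBi predictor can be sketched as:

```python
import numpy as np

GBI_W1 = [-1 / 4, 3 / 8, 1 / 2, 5 / 8, 5 / 4]   # one equal, four unequal weights

def gbi_predict(p_l0: np.ndarray, p_l1: np.ndarray, w1: float) -> np.ndarray:
    """P_GBi = (1 - w1) * P_L0 + w1 * P_L1, applied sample-wise."""
    return (1.0 - w1) * p_l0 + w1 * p_l1

# w1 = 1/2 reduces to conventional (equal-weight) bi-prediction.
```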
- The number of candidate weight sets is reduced to three.
- In AMVP mode, the weight selection in GBi is explicitly signaled at the CU level if the CU is coded with bi-prediction.
- In merge mode, the weight selection is inherited from the merge candidate.
- In BMS-1.0, GBi supports DMVR to generate the weighted average of the template as well as the final predictor.
- In the multi-hypothesis inter prediction mode, one or more additional prediction signals are signaled, in addition to the conventional uni/bi prediction signal.
- The resulting overall prediction signal is obtained by sample-wise weighted superposition.
- With the uni/bi prediction signal p_uni/bi and the first additional inter prediction signal/hypothesis h3, the resulting prediction signal p3 is obtained as p3 = (1 − α) · p_uni/bi + α · h3, where the weighting factor α is specified by the syntax element add_hyp_weight_idx according to a specified mapping.
- For inter prediction blocks using this mode, the concept of prediction list 0/list 1 is abolished, and instead one combined list is used.
- This combined list is generated by alternatingly inserting reference frames from list 0 and list 1 with increasing reference index, omitting reference frames which have already been inserted, such that double entries are avoided.
- more than one additional prediction signal can be used.
- the resulting overall prediction signal is accumulated iteratively with each additional prediction signal.
- the resulting overall prediction signal is obtained as the last pn (i.e., the pn having the largest index n).
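- The iterative accumulation above can be sketched per sample as follows; float weights are used for illustration, whereas the actual design maps add_hyp_weight_idx to fixed weighting factors:

```python
# A hedged sketch of p_{n+1} = (1 - alpha_{n+1}) * p_n + alpha_{n+1} * h_{n+1}:
# each additional hypothesis is blended into the running prediction, and the
# last p_n is the overall prediction signal.
def accumulate_hypotheses(p_base: float, extra: list) -> float:
    p = p_base
    for alpha, h in extra:      # extra: list of (weighting factor, hypothesis)
        p = (1.0 - alpha) * p + alpha * h
    return p
```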
- for inter prediction blocks using MERGE mode (but not SKIP mode), additional inter prediction signals can be specified.
- in MERGE mode, not only the uni/bi prediction parameters but also the additional prediction parameters of the selected merging candidate can be used for the current block.
- Multi-hypothesis intra and inter prediction mode is also known as Combined Inter and Intra Prediction (CIIP) mode.
- CIIP Combined Inter and Intra Prediction
- multi-hypothesis prediction when the multi-hypothesis prediction is applied to improve uni-prediction of AMVP mode, one flag is signaled to enable or disable multi-hypothesis prediction for inter_dir equal to 1 or 2, where 1, 2, and 3 represent list 0 , list 1 , and bi-prediction, respectively. Moreover, one more merge index is signaled when the flag is true. In this way, multi-hypothesis prediction turns uni-prediction into bi-prediction, where one motion is acquired using the original syntax elements in AMVP mode while the other is acquired using the merge scheme. The final prediction uses 1:1 weights to combine these two predictions as in bi-prediction.
- the merge candidate list is first derived from merge mode with sub-CU candidates (e.g., affine, alternative temporal motion vector prediction (ATMVP)) excluded. Next, it is separated into two individual lists, one for list 0 (L 0 ) containing all L 0 motions from the candidates, and the other for list 1 (L 1 ) containing all L 1 motions. After removing redundancy and filling vacancy, two merge lists are generated for L 0 and L 1 respectively.
- ATMVP alternative temporal motion vector prediction
- each candidate of multi-hypothesis prediction implies a pair of merge candidates, containing one for the 1st merge indexed prediction and the other for the 2nd merge indexed prediction.
- the merge candidate for the 2nd merge indexed prediction is implicitly derived as the succeeding merge candidate (i.e., the already signaled merge index plus one) without signaling any additional merge index. After removing redundancy by excluding pairs that contain similar merge candidates, and after filling vacancies, the candidate list for multi-hypothesis prediction is formed.
- a merge or skip CU with multi-hypothesis prediction enabled can save the motion information of the additional hypotheses for reference of the following neighboring CUs in addition to the motion information of the existing hypotheses.
- sub-CU candidates e.g., affine, ATMVP
- multi-hypothesis prediction is not applied to skip mode.
- the worst-case bandwidth (required access samples per sample) for each merge or skip CU with multi-hypothesis prediction enabled is calculated in Table 1 and each number is less than half of the worst-case bandwidth for each 4 ⁇ 4 CU with multi-hypothesis prediction disabled.
- UMVE ultimate motion vector expression
- MMVD Motion Vector Difference
- UMVE re-uses the same merge candidates as those used in VVC.
- a candidate can be selected, and is further expanded by the proposed motion vector expression method.
- UMVE provides a new motion vector expression with simplified signaling.
- the expression method includes starting point, motion magnitude, and motion direction.
- FIG. 21 shows an example of a UMVE Search Process
- FIG. 22 shows an example of UMVE Search Points.
- This proposed technique uses the merge candidate list as it is, but only candidates of the default merge type (MRG_TYPE_DEFAULT_N) are considered for UMVE's expansion.
- Base candidate index defines the starting point.
- Base candidate index indicates the best candidate among candidates in the list as follows.
Base candidate IDX | 0 | 1 | 2 | 3
---|---|---|---|---
Nth MVP | 1st MVP | 2nd MVP | 3rd MVP | 4th MVP
- if the number of base candidates is equal to 1, Base candidate IDX is not signaled.
- Distance index is motion magnitude information.
- Distance index indicates the pre-defined distance from the starting point information. Pre-defined distance is as follows:
- Direction index represents the direction of the MVD relative to the starting point.
- the direction index can represent one of the four directions, as shown below.
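- A sketch of reconstructing the UMVE MVD from the signaled indices is given below; the distance table (in luma samples) and the four axis-aligned directions are assumed from the UMVE proposal, since the tables themselves are not reproduced in this text:

```python
# A hedged sketch of the UMVE motion vector expression: starting point (base
# candidate) plus a signaled magnitude and direction. Table values are
# assumptions from the UMVE proposal.
DISTANCES = [0.25, 0.5, 1, 2, 4, 8, 16, 32]       # distance IDX 0..7
DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # direction IDX 0..3

def umve_mvd(distance_idx: int, direction_idx: int) -> tuple:
    sx, sy = DIRECTIONS[direction_idx]
    d = DISTANCES[distance_idx]
    return (sx * d, sy * d)
```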
- the UMVE flag is signaled right after sending a skip flag and merge flag. If the skip and merge flag is true, the UMVE flag is parsed. If the UMVE flag is equal to 1, UMVE syntax elements are parsed; if not, the AFFINE flag is parsed. If the AFFINE flag is equal to 1, AFFINE mode is used; if not, the skip/merge index is parsed for VTM's skip/merge mode.
- UMVE is extended to affine merge mode; this is hereafter referred to as UMVE affine mode.
- the proposed method selects the first available affine merge candidate as a base predictor. Then it applies a motion vector offset to each control point's motion vector value from the base predictor. If there's no affine merge candidate available, this proposed method will not be used.
- the selected base predictor's inter prediction direction, and the reference index of each direction is used without change.
- since the current block's affine model is assumed to be a 4-parameter model, only 2 control points need to be derived. Thus, only the first 2 control points of the base predictor will be used as control point predictors.
- a zero_MVD flag is used to indicate whether the control point of the current block has the same MV value as the corresponding control point predictor. If the zero_MVD flag is true, no other signaling is needed for the control point. Otherwise, a distance index and an offset direction index are signaled for the control point.
- a distance offset table with size of 5 is used as shown in the table below.
- Distance index is signaled to indicate which distance offset to use.
- the mapping of distance index and distance offset values is shown in FIG. 23 .
- the direction index can represent four directions as shown below, where the MV difference may be in either the x or the y direction, but not in both.
- the signaled distance offset is applied on the offset direction for each control point predictor. Results will be the MV value of each control point.
- MV(vx, vy) = MVP(vpx, vpy) + MV(x-dir-factor × distance-offset, y-dir-factor × distance-offset);
- the signaled distance offset is applied on the signaled offset direction for control point predictor's L 0 motion vector; and the same distance offset with opposite direction is applied for control point predictor's L 1 motion vector. Results will be the MV values of each control point, on each inter prediction direction.
- MV_L0(v0x, v0y) = MVP_L0(v0px, v0py) + MV(x-dir-factor × distance-offset, y-dir-factor × distance-offset);
- MV_L1(v1x, v1y) = MVP_L1(v1px, v1py) + MV(−x-dir-factor × distance-offset, −y-dir-factor × distance-offset);
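- The bi-prediction update above can be sketched as follows; dir_factor is the (x, y) sign pair of the signaled offset direction, and all names are illustrative:

```python
# A minimal sketch of the control-point MV update: the signaled offset is
# added to the L0 control-point predictor and the mirrored offset to the L1
# predictor.
def apply_cp_offset(mvp_l0: tuple, mvp_l1: tuple,
                    dir_factor: tuple, distance_offset: float):
    dx = dir_factor[0] * distance_offset
    dy = dir_factor[1] * distance_offset
    mv_l0 = (mvp_l0[0] + dx, mvp_l0[1] + dy)
    mv_l1 = (mvp_l1[0] - dx, mvp_l1[1] - dy)   # opposite direction for L1
    return mv_l0, mv_l1
```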
- in BIO, motion compensation is first performed to generate the first predictions (in each prediction direction) of the current block.
- the first predictions are used to derive the spatial gradient, the temporal gradient and the optical flow of each subblock/pixel within the block, which are then used to generate the second prediction, i.e., the final prediction of the subblock/pixel.
- the details are described as follows.
- Bi-directional Optical flow (BIO) is a sample-wise motion refinement that is performed on top of the block-wise motion compensation for bi-prediction.
- the sample-level motion refinement does not use signaling.
- FIG. 24 shows an example of an optical flow trajectory
- ∂I(k)/∂x and ∂I(k)/∂y are the horizontal and vertical components of the I(k) gradient, respectively.
- the motion vector field (vx, vy) is given by an equation as follows:
- predBIO = 1/2·(I(0) + I(1) + vx/2·(τ1·∂I(1)/∂x − τ0·∂I(0)/∂x) + vy/2·(τ1·∂I(1)/∂y − τ0·∂I(0)/∂y)).
- τ0 and τ1 denote the distances to the reference frames, as shown in FIG. 24.
- the motion vector field (vx, vy) is determined by minimizing the difference Δ between the values at points A and B (the intersection of the motion trajectory and the reference frame planes in FIG. 24).
- the model uses only the first linear term of a local Taylor expansion for Δ:
- all values in Equation 5 depend on the sample location (i′, j′), which has been omitted from the notation so far. Assuming the motion is consistent in the local surrounding area, Δ is minimized inside the (2M+1)×(2M+1) square window Ω centered on the currently predicted point (i, j), where M is equal to 2:
- the JEM uses a simplified approach, making a minimization first in the vertical direction and then in the horizontal direction. This results in:
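- Putting the pieces above together, the final BIO prediction of one sample can be sketched as follows, assuming the refinement (vx, vy), the gradients of I(0) and I(1), and the temporal distances τ0, τ1 have already been derived:

```python
# A per-sample sketch of pred_BIO = 1/2 * (I0 + I1
#   + vx/2 * (tau1*gx1 - tau0*gx0) + vy/2 * (tau1*gy1 - tau0*gy0)).
def bio_sample(i0, i1, gx0, gy0, gx1, gy1, vx, vy, tau0, tau1):
    return 0.5 * (i0 + i1
                  + vx / 2 * (tau1 * gx1 - tau0 * gx0)
                  + vy / 2 * (tau1 * gy1 - tau0 * gy0))
```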
- d bit depth of the video samples.
- I(k), ∂I(k)/∂x and ∂I(k)/∂y are calculated only for positions inside the current block.
- a (2M+1)×(2M+1) square window Ω centered on a currently predicted point on the boundary of the predicted block needs to access positions outside of the block (as shown in FIG. 25A).
- values of I(k), ∂I(k)/∂x and ∂I(k)/∂y outside of the block are set equal to the nearest available value inside the block. For example, this can be implemented as padding, as shown in FIG. 25B.
- FIG. 25 shows BIO without block extension: a) access positions outside of the block; b) padding is used in order to avoid extra memory access and calculation.
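- The padding rule above can be sketched as a coordinate clamp; block is a 2-D list of samples (or gradients), and the names are illustrative:

```python
# A minimal sketch: positions outside the current block are clamped to the
# nearest available position inside it, so the (2M+1)x(2M+1) window never
# requires extra memory accesses.
def sample_with_padding(block, x, y):
    h, w = len(block), len(block[0])
    return block[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]
```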
- with BIO, it is possible that the motion field is refined for each sample.
- to reduce complexity, a block-based design of BIO is used in the JEM; the motion refinement is calculated on a 4×4 block basis.
- the values of sn in Equation 9 are aggregated over all samples in a 4×4 block, and then the aggregated values of sn are used to derive the BIO motion vector offsets for the 4×4 block. More specifically, the following formula is used for block-based BIO derivation:
- sn in Equations 7 and 8 is replaced by ((sn,bk)>>4) to derive the associated motion vector offsets.
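- The 4×4 aggregation above can be sketched as follows, assuming integer sn values and block dimensions that are multiples of 4:

```python
# A hedged sketch of block-based BIO: the per-sample terms s_n are summed
# over each 4x4 block and de-scaled by >>4, giving one motion refinement per
# 4x4 block instead of one per sample.
def aggregate_s_4x4(s):
    h, w = len(s), len(s[0])
    return {(by, bx): sum(s[y][x]
                          for y in range(by, by + 4)
                          for x in range(bx, bx + 4)) >> 4
            for by in range(0, h, 4)
            for bx in range(0, w, 4)}
```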
- the MV refinement of BIO might be unreliable due to noise or irregular motion. Therefore, in BIO, the magnitude of the MV refinement is clipped to a threshold value thBIO.
- the threshold value is determined based on whether the reference pictures of the current picture are all from one direction. If all the reference pictures of the current picture are from one direction, the value of the threshold is set to 12×2^(14−d); otherwise, it is set to 12×2^(13−d).
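- The clipping above can be sketched as follows:

```python
# A minimal sketch of the thBIO clipping: the threshold depends on the bit
# depth d and on whether all reference pictures of the current picture come
# from one direction.
def clip_bio_refinement(v: float, d: int, same_direction: bool) -> float:
    th = 12 * 2 ** ((14 - d) if same_direction else (13 - d))
    return max(-th, min(th, v))
```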
- Gradients for BIO are calculated at the same time as the motion compensation interpolation, using operations consistent with the HEVC motion compensation process (2D separable FIR).
- the input for this 2D separable FIR is the same reference frame samples as for the motion compensation process, and the fractional position (fracX, fracY) according to the fractional part of the block motion vector.
- the gradient filter BIOfilterG is applied in the horizontal direction corresponding to the fractional position fracX with de-scaling shift 18−d.
- BIOfilterG is applied vertically, corresponding to the fractional position fracY, with de-scaling shift d−8.
- signal displacement is performed using BIOfilterS in the horizontal direction corresponding to the fractional position fracX with de-scaling shift 18−d.
- the length of the interpolation filters for gradient calculation (BIOfilterG) and signal displacement (BIOfilterS) is shorter (6-tap) in order to maintain reasonable complexity.
- Table 2 shows the filters used for gradients calculation for different fractional positions of block motion vector in BIO.
- Table 3 shows the interpolation filters used for prediction signal generation in BIO.
- BIO is applied to all bi-predicted blocks when the two predictions are from different reference pictures.
- BIO is disabled.
- BIO is not applied during the OBMC process. This means that BIO is only applied in the MC process for a block when using its own MV and is not applied in the MC process when the MV of a neighboring block is used during the OBMC process.
- a reference block (or a prediction block) may be modified firstly, and the calculation of temporal gradient is based on the modified reference block.
- mean is removed for all reference blocks.
- mean is defined as the average of selected samples in the reference block.
- all pixels in a reference block X or a sub-block of the reference block X are used to calculate MeanX.
- only partial pixels in a reference block X or a sub-block of the reference block are used to calculate MeanX. For example, only pixels in every second row/column are used.
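- The partial-pixel mean above can be sketched as follows, reading "every second row/column" as sampling every second row and, within each kept row, every second column; other subsampling patterns are equally possible:

```python
# A hedged sketch of MeanX computed from partial pixels of reference block X.
def mean_partial(block):
    samples = [v for row in block[::2] for v in row[::2]]
    return sum(samples) / len(samples)
```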
- in the bi-prediction operation, for the prediction of one block region, two prediction blocks, formed using a motion vector (MV) of list 0 and an MV of list 1, respectively, are combined to form a single prediction signal.
- MV motion vector
- the two motion vectors of the bi-prediction are further refined by a bilateral template matching process.
- bilateral template matching is applied in the decoder to perform a distortion-based search between a bilateral template and the reconstruction samples in the reference pictures, in order to obtain a refined MV without transmission of additional motion information.
- a bilateral template is generated as the weighted combination (i.e., average) of the two prediction blocks, from the initial MV0 of list 0 and MV1 of list 1, respectively, as shown in FIG. 26.
- the template matching operation consists of calculating cost measures between the generated template and the sample region (around the initial prediction block) in the reference picture. For each of the two reference pictures, the MV that yields the minimum template cost is considered as the updated MV of that list to replace the original one.
- nine MV candidates are searched for each list. The nine MV candidates include the original MV and 8 surrounding MVs with one luma sample offset to the original MV in either the horizontal or vertical direction, or both.
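- The candidate set above can be sketched as follows:

```python
# A minimal sketch of the nine DMVR candidates: the original MV plus the
# eight neighbours offset by one luma sample horizontally, vertically, or
# both.
def dmvr_candidates(mv: tuple) -> list:
    mvx, mvy = mv
    return [(mvx + dx, mvy + dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
```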
- the two new MVs, i.e., MV0′ and MV1′ as shown in FIG. 26, are used for generating the final bi-prediction results.
- a sum of absolute differences (SAD) is used as the cost measure.
- SAD sum of absolute differences
- DMVR is applied for the merge mode of bi-prediction with one MV from a reference picture in the past and another from a reference picture in the future, without the transmission of additional syntax elements.
- in the JEM, when LIC, affine motion, FRUC, or a sub-CU merge candidate is enabled for a CU, DMVR is not applied.
- FIG. 26 shows an example of a DMVR based on bilateral template matching
- an MV update method and a two-step inter prediction method are proposed.
- the derived MV between reference block 0 and reference block 1 in BIO are scaled and added to the original motion vector of list 0 and list 1 .
- the updated MV is used to perform motion compensation and a second inter prediction is generated as the final prediction.
- the temporal gradient is modified by removing the mean difference between reference block 0 and reference block 1 .
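- The temporal gradient modification above can be sketched as follows; the sign convention of the difference is illustrative:

```python
# A hedged sketch: the mean difference between the two reference blocks is
# removed before the sample-wise temporal gradient is formed.
def modified_temporal_gradient(ref0, ref1):
    n = len(ref0) * len(ref0[0])
    mean_diff = sum(a - b for r0, r1 in zip(ref0, ref1)
                    for a, b in zip(r0, r1)) / n
    return [[(a - b) - mean_diff for a, b in zip(r0, r1)]
            for r0, r1 in zip(ref0, ref1)]
```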
- A sub-block based prediction method is proposed.
- Interweaved prediction is proposed for sub-block motion compensation.
- a block is divided into sub-blocks with more than one dividing pattern.
- a dividing pattern is defined as the way to divide a block into sub-blocks, including the size of sub-blocks and the position of sub-blocks.
- a corresponding prediction block may be generated by deriving the motion information of each sub-block based on the dividing pattern. Therefore, even for one prediction direction, multiple prediction blocks may be generated by multiple dividing patterns. Alternatively, for each prediction direction, only one dividing pattern may be applied.
- X prediction blocks of the current block, denoted as P0, P1, . . . , PX−1, are generated by sub-block based prediction with the X dividing patterns.
- the final prediction of the current block, denoted as P, can be generated as:
- FIG. 27 shows an example of interweaved prediction with two dividing patterns.
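- Interweaved prediction can be sketched as follows; equal weights across the X prediction blocks are assumed here, whereas the actual design may weight each sample by its position within a sub-block:

```python
# A hedged sketch: the block is predicted once per dividing pattern and the
# final prediction P is a weighted average of the X prediction blocks.
def interweave(preds):                  # preds: list of 2-D prediction blocks
    x = len(preds)
    h, w = len(preds[0]), len(preds[0][0])
    return [[sum(p[i][j] for p in preds) / x for j in range(w)]
            for i in range(h)]
```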
- a two-step inter prediction method is proposed; however, such a method can be performed multiple times to obtain more accurate motion information, such that higher coding gains may be expected.
- motion information e.g., motion vectors
- DMVR decoder-side motion refinement process
- BIO some intermediate motion information different from the final motion information used for motion compensation
- motion information of a block/a sub-block within a coded block may be refined once or multiple times and the refined motion information may be used for motion vector prediction of blocks to be coded afterwards, and/or filtering process.
- DMVD is used to represent DMVR or BIO or another decoder-side motion vector refinement method or pixel refinement method.
- SATD sum of absolute transformed differences
- MRSATD mean removed sum of absolute transformed differences
- SSE sum of squares error
- MRSSE mean removed sum of squares error
- FIG. 29 is a block diagram illustrating an example of the architecture for a computer system or other control device 2600 that can be utilized to implement various portions of the presently disclosed technology.
- the computer system 2600 includes one or more processors 2605 and memory 2610 connected via an interconnect 2625 .
- the interconnect 2625 may represent any one or more separate physical buses, point to point connections, or both, connected by appropriate bridges, adapters, or controllers.
- the interconnect 2625 may include, for example, a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), IIC (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus, sometimes referred to as "Firewire."
- PCI Peripheral Component Interconnect
- ISA HyperTransport or industry standard architecture
- SCSI small computer system interface
- USB universal serial bus
- I2C IIC
- IEEE Institute of Electrical and Electronics Engineers
- the processor(s) 2605 may include central processing units (CPUs) to control the overall operation of, for example, the host computer. In certain embodiments, the processor(s) 2605 accomplish this by executing software or firmware stored in memory 2610 .
- the processor(s) 2605 may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.
- the memory 2610 can be or include the main memory of the computer system.
- the memory 2610 represents any suitable form of random access memory (RAM), read-only memory (ROM), flash memory, or the like, or a combination of such devices.
- RAM random access memory
- ROM read-only memory
- the memory 2610 may contain, among other things, a set of machine instructions which, when executed by processor 2605 , causes the processor 2605 to perform operations to implement embodiments of the presently disclosed technology.
- the network adapter 2615 provides the computer system 2600 with the ability to communicate with remote devices, such as the storage clients, and/or other storage servers, and may be, for example, an Ethernet adapter or Fiber Channel adapter.
- FIG. 30 shows a block diagram of an example embodiment of a device 2700 that can be utilized to implement various portions of the presently disclosed technology.
- the mobile device 2700 can be a laptop, a smartphone, a tablet, a camcorder, or other types of devices that are capable of processing videos.
- the mobile device 2700 includes a processor or controller 2701 to process data, and memory 2702 in communication with the processor 2701 to store and/or buffer data.
- the processor 2701 can include a central processing unit (CPU) or a microcontroller unit (MCU).
- the processor 2701 can include a field-programmable gate-array (FPGA).
- FPGA field-programmable gate-array
- the mobile device 2700 includes or is in communication with a graphics processing unit (GPU), video processing unit (VPU) and/or wireless communications unit for various visual and/or communications data processing functions of the smartphone device.
- the memory 2702 can include and store processor-executable code, which when executed by the processor 2701 , configures the mobile device 2700 to perform various operations, e.g., such as receiving information, commands, and/or data, processing information and data, and transmitting or providing processed information/data to another device, such as an actuator or external display.
- the memory 2702 can store information and data, such as instructions, software, values, images, and other data processed or referenced by the processor 2701 .
- the mobile device 2700 includes an input/output (I/O) unit 2703 to interface the processor 2701 and/or memory 2702 to other modules, units or devices.
- the I/O unit 2703 can interface the processor 2701 and memory 2702 to utilize various types of wireless interfaces compatible with typical data communication standards, e.g., between the one or more computers in the cloud and the user device.
- the mobile device 2700 can interface with other devices using a wired connection via the I/O unit 2703 .
- the mobile device 2700 can also interface with other external interfaces, such as data storage, and/or visual or audio display devices 2704 , to retrieve and transfer data and information that can be processed by the processor, stored in the memory, or exhibited on an output unit of a display device 2704 or an external device.
- the display device 2704 can display a video frame modified based on the MVPs in accordance with the disclosed technology.
- FIG. 31 is a flowchart for a method 3100 of video processing.
- the method 3100 includes generating ( 3102 ), using a multi-step refinement process, multiple refinement values of motion vector information based on decoded motion information from a bitstream representation of a current video block, and reconstructing ( 3104 ) the current video block or decoding other video blocks based on multiple refinement values.
- Another video processing method includes performing, for a conversion between a current block and a bitstream representation of the current block, a multi-step refinement process for a first sub-block of the current block and a temporal gradient modification process for a second sub-block of the current block, wherein the multi-step refinement process generates multiple refinement values of motion vector information signaled in the bitstream representation of the current video block, and performing the conversion between the current block and the bitstream representation using a selected one of the multiple refinement values.
- Another video processing method includes determining, using a multi-step decoder-side motion vector refinement process for a current video block, a final motion vector, and performing conversion between the current block and the bitstream representation using the final motion vector.
- Another video processing method includes applying, during conversion between a current video block and a bitstream representation of the current video block, multiple different motion vector refinement processes to different sub-blocks of the current video block, and performing conversion between the current block and the bitstream representation using a final motion vector for the current video block generated from the multiple different motion vector refinement processes.
- the bitstream representation of a current block of video may include bits of a bitstream (compressed representation of a video) that may be non-contiguous and may depend on header information, as is known in the art of video compression.
- a current block may include samples representative of one or more of luma and chroma components, or rotational variations thereof (e.g., YCrCb or YUV, and so on).
- a method of video processing comprising: generating, using a multi-step refinement process, multiple refinement values of motion vector information based on decoded motion information from a bitstream representation of a current video block, and reconstructing the current video block or decoding other video blocks based on multiple refinement values.
- the refinement operation may include an averaging operation.
- refined values of one step of refinement process are used to reconstruct the current video block.
- refined values of one step of refinement process are used for decoding other video blocks.
- a step of the multi-step refinement process includes splitting the current block into multiple sub-blocks and performing an additional multi-step refinement process for at least some of the multiple sub-blocks.
- splitting the current block includes splitting the current block in a step-dependent manner such that, in at least two steps, sub-blocks of different sizes are used for the additional multi-step refinement process.
- the characteristic of the current block includes a size of the current block or a coding mode of the current block or a value of motion vector difference of the current block or a quantization parameter.
- a method of video processing comprising:
- each step of the multi-step refinement process uses partial pixels of the current block or a sub-block of the current block.
- a method of video processing comprising: determining, using a multi-step decoder-side motion vector refinement process for a current video block, a final motion vector; and performing conversion between the current block and the bitstream representation using the final motion vector.
- a method of video processing comprising: applying, during conversion between a current video block and a bitstream representation of the current video block, multiple different motion vector refinement processes to different sub-blocks of the current video block; and performing conversion between the current block and the bitstream representation using a final motion vector for the current video block generated from the multiple different motion vector refinement processes.
- a method of video processing comprising:
- a method of video processing comprising:
- splitting the current block into multiple sub-blocks includes determining a size of the multiple sub-blocks based on a size of the current video block.
- a method of video processing comprising determining, in an early termination stage of a bi-directional optical flow (BIO) technique or a decoder-side motion vector refinement (DMVR) technique, differences between reference video blocks associated with a current video block. Further processing of the current video block based on the differences can be performed.
- BIO bi-directional optical flow
- DMVR decoder-side motion vector refinement
- the reference video blocks include a first reference video block and a second reference video block, the differences based on a summation of the differences between the first reference video block and the second reference video block.
- the differences include one or more of: sum of absolute differences (SAD), sum of absolute transformed differences (SATD), sum of squares error (SSE), mean removed sum of absolute differences (MRSAD), mean removed sum of absolute transformed differences (MRSATD), or mean removed sum of squares error (MRSSE).
- a video processing apparatus comprising a processor configured to implement a method recited in any one or more of clauses 1 to 56.
- a computer program product stored on a non-transitory computer readable media including program code for carrying out the method in any one of clauses 1 to 56.
- FIG. 32 is a flowchart for a method 3200 of video processing.
- the method 3200 includes calculating ( 3202 ), during a conversion between a current block of video and a bitstream representation of the current block, differences between two reference blocks associated with the current block or differences between two reference sub-blocks associated with a sub-block within the current block based on representative positions of the reference blocks or representative positions of the reference sub-blocks, and performing ( 3204 ) the conversion based on the differences.
- calculating the differences comprises calculating differences of interlaced positions of the two reference blocks and/or two reference sub-blocks.
- calculating the differences comprises calculating differences of even rows of the two reference blocks and/or two reference sub-blocks.
- calculating the differences comprises calculating differences of four corner samples of the two reference blocks and/or two reference sub-blocks.
- calculating the differences comprises calculating differences between two reference blocks based on representative sub-blocks within the reference blocks.
- the representative positions are selected by using a predetermined strategy.
- the performing the conversion based on the differences comprises: summing up the differences calculated for the representative positions of the reference sub-blocks to obtain the difference for the sub-block.
- the performing the conversion based on the differences comprises: summing up the differences calculated for the representative positions of the reference blocks to obtain the difference for the current block.
- the performing the conversion based on the differences comprises: summing up the differences calculated for the representative positions to obtain the difference for the current block; determining whether a motion vector refinement processing or a prediction refinement processing relying on difference calculation is enabled or disabled for the current block based on the difference of the current block.
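- The early-termination flow above can be sketched as follows; the choice of even rows as the representative positions and the threshold are illustrative:

```python
# A hedged sketch: the difference is accumulated only over representative
# positions (even rows here), and a refinement tool such as DMVR or BIO is
# applied only when the accumulated difference is large enough.
def refinement_enabled(ref0, ref1, threshold: int) -> bool:
    diff = sum(abs(a - b)
               for r0, r1 in zip(ref0[::2], ref1[::2])   # even rows only
               for a, b in zip(r0, r1))
    return diff >= threshold
```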
- the prediction refinement processing includes a bi-directional optical flow (BIO) technique
- the motion vector refinement processing includes a decoder-side motion vector refinement (DMVR) technique or a frame-rate up conversion (FRUC) technique.
- BIO bi-directional optical flow
- DMVR decoder-side motion vector refinement
- FRUC frame-rate up conversion
- the differences include one or more of: sum of absolute differences (SAD), sum of absolute transformed differences (SATD), sum of squares error (SSE), mean removed sum of absolute differences (MRSAD), mean removed sum of absolute transformed differences (MRSATD), or mean removed sum of squares error (MRSSE).
- FIG. 33 is a flowchart for a method 3300 of video processing.
- the method 3300 includes: making ( 3302 ) a decision, based on a determination that a current block of a video is coded using a specific coding mode, regarding a selective enablement of a decoder side motion vector derivation (DMVD) tool for the current block, wherein the DMVD tool derives a refinement of motion information signaled in a bitstream representation of the video; and performing ( 3304 ), based on the decision, a conversion between the current block and the bitstream representation.
- DMVD decoder side motion vector derivation
- the DMVD tool is disabled upon a determination that prediction signal of the current block is generated at least based on an intra prediction signal and an inter prediction signal.
- the DMVD tool is enabled upon a determination that prediction signal of the current block is generated at least based on an intra prediction signal and an inter prediction signal.
- the current block is coded in a Combined Inter and Intra Prediction (CIIP) mode.
- CIIP Combined Inter and Intra Prediction
- the DMVD tool is disabled upon a determination that the current block is coded with a Merge mode and motion vector differences.
- the DMVD tool is enabled upon a determination that the current block is coded with a Merge mode and Motion Vector Differences.
- the current block is coded in a Merge mode with Motion Vector Difference (MMVD) mode.
- MMVD Motion Vector Difference
- the DMVD tool is disabled upon a determination that the current block is coded with multiple sub-regions and at least one of them is non-rectangular.
- the DMVD tool is enabled upon a determination that the current block is coded with multiple sub-regions and at least one of them is non-rectangular.
- the current block is coded with the triangular prediction mode.
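- One of the enablement alternatives above can be sketched as a simple mode check; the opposite alternative (enabling DMVD in these modes) is equally covered by the text, and the mode names are illustrative:

```python
# A minimal sketch of the alternative that disables DMVD tools (e.g., DMVR,
# BDOF, FRUC) for blocks coded with CIIP, MMVD, or a non-rectangular
# (triangular) partition.
def dmvd_enabled(coding_mode: str) -> bool:
    return coding_mode not in {"CIIP", "MMVD", "TRIANGLE"}
```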
- the DMVD tool comprises a decoder side motion vector refinement (DMVR) tool.
- DMVR decoder side motion vector refinement
- the DMVD tool comprises a bi-directional optical flow (BDOF) tool.
- BDOF bi-directional optical flow
- the DMVD tool comprises a frame-rate up conversion (FRUC) tool or another decoder-side motion vector refinement method or sample refinement method.
- FRUC frame-rate up conversion
- the conversion generates the current block from the bitstream representation.
- the conversion generates the bitstream representation from the current block.
- the disclosed and other embodiments, modules and the functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them.
- the disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus.
- the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
- data processing apparatus encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
- the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
- a propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
- a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
- a computer program does not necessarily correspond to a file in a file system.
- a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
- a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
- the processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
- the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
- processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
- a processor will receive instructions and data from a read only memory or a random-access memory or both.
- the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
- a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
- a computer need not have such devices.
- Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks.
- the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/534,968 US11956465B2 (en) | 2018-11-20 | 2021-11-24 | Difference calculation based on partial position |
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2018116371 | 2018-11-20 | ||
CNPCT/CN2018/116371 | 2018-11-20 | ||
CNPCT/CN2019/070062 | 2019-01-02 | ||
CN2019070062 | 2019-01-02 | ||
CNPCT/CN2019/072060 | 2019-01-16 | ||
CN2019072060 | 2019-01-16 | ||
PCT/CN2019/119634 WO2020103852A1 (en) | 2018-11-20 | 2019-11-20 | Difference calculation based on patial position |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/119634 Continuation WO2020103852A1 (en) | 2018-11-20 | 2019-11-20 | Difference calculation based on patial position |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/534,968 Continuation US11956465B2 (en) | 2018-11-20 | 2021-11-24 | Difference calculation based on partial position |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210144400A1 true US20210144400A1 (en) | 2021-05-13 |
Family
ID=70774606
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/154,485 Abandoned US20210144400A1 (en) | 2018-11-20 | 2021-01-21 | Difference calculation based on partial position |
US17/534,968 Active US11956465B2 (en) | 2018-11-20 | 2021-11-24 | Difference calculation based on partial position |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/534,968 Active US11956465B2 (en) | 2018-11-20 | 2021-11-24 | Difference calculation based on partial position |
Country Status (6)
Country | Link |
---|---|
US (2) | US20210144400A1 (ja) |
EP (1) | EP3861742A4 (ja) |
JP (1) | JP7241870B2 (ja) |
KR (1) | KR20210091161A (ja) |
CN (2) | CN113056914B (ja) |
WO (1) | WO2020103852A1 (ja) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210306656A1 (en) * | 2020-03-26 | 2021-09-30 | Alibaba Group Holding Limited | Method and apparatus for encoding or decoding video |
US20220086481A1 (en) * | 2018-11-20 | 2022-03-17 | Beijing Bytedance Network Technology Co., Ltd. | Difference calculation based on partial position |
US20220174314A1 (en) * | 2019-09-20 | 2022-06-02 | Kddi Corporation | Image decoding device, image decoding method, and program |
US20220264146A1 (en) * | 2019-07-01 | 2022-08-18 | Interdigital Vc Holdings France, Sas | Bi-prediction refinement in affine with optical flow |
US20220295089A1 (en) * | 2021-03-12 | 2022-09-15 | Lemon Inc. | Motion candidate derivation |
CN115190299A (zh) * | 2022-07-11 | 2022-10-14 | 杭州电子科技大学 | Vvc仿射运动估计快速算法 |
US11509929B2 (en) | 2018-10-22 | 2022-11-22 | Beijing Byedance Network Technology Co., Ltd. | Multi-iteration motion vector refinement method for video processing |
US20220417522A1 (en) * | 2021-06-29 | 2022-12-29 | Qualcomm Incorporated | Adaptive bilateral matching for decoder side motion vector refinement |
US11553201B2 (en) | 2019-04-02 | 2023-01-10 | Beijing Bytedance Network Technology Co., Ltd. | Decoder side motion vector derivation |
US11558634B2 (en) | 2018-11-20 | 2023-01-17 | Beijing Bytedance Network Technology Co., Ltd. | Prediction refinement for combined inter intra prediction mode |
US11641467B2 (en) | 2018-10-22 | 2023-05-02 | Beijing Bytedance Network Technology Co., Ltd. | Sub-block based prediction |
US11671616B2 (en) | 2021-03-12 | 2023-06-06 | Lemon Inc. | Motion candidate derivation |
US11843725B2 (en) | 2018-11-12 | 2023-12-12 | Beijing Bytedance Network Technology Co., Ltd | Using combined inter intra prediction in video processing |
US11930165B2 (en) | 2019-03-06 | 2024-03-12 | Beijing Bytedance Network Technology Co., Ltd | Size dependent inter coding |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022221013A1 (en) * | 2021-04-12 | 2022-10-20 | Qualcomm Incorporated | Template matching refinement in inter-prediction modes |
US11917165B2 (en) * | 2021-08-16 | 2024-02-27 | Tencent America LLC | MMVD signaling improvement |
Family Cites Families (274)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6005627A (en) | 1991-05-31 | 1999-12-21 | Kabushiki Kaisha Toshiba | Video coding apparatus |
KR100203281B1 (ko) | 1996-10-29 | 1999-06-15 | 윤종용 | 강제적 한방향 운동보상에 근거한 동화상 복호화장치 |
US6480615B1 (en) | 1999-06-15 | 2002-11-12 | University Of Washington | Motion estimation within a sequence of data frames using optical flow with adaptive gradients |
US6829303B1 (en) | 1999-11-17 | 2004-12-07 | Hitachi America, Ltd. | Methods and apparatus for decoding images using dedicated hardware circuitry and a programmable processor |
ES2745044T3 (es) | 2002-04-19 | 2020-02-27 | Panasonic Ip Corp America | Método de cálculo de vectores de movimiento |
AU2003264804A1 (en) | 2002-10-16 | 2004-05-04 | Koninklijke Philips Electronics N.V. | Fully scalable 3-d overcomplete wavelet video coding using adaptive motion compensated temporal filtering |
BRPI0413647A (pt) | 2003-08-26 | 2006-10-17 | Thomson Licensing | método e aparelho para codificar blocos intra-inter codificados hìbridos |
US7627037B2 (en) | 2004-02-27 | 2009-12-01 | Microsoft Corporation | Barbell lifting for multi-layer wavelet coding |
US20050201468A1 (en) | 2004-03-11 | 2005-09-15 | National Chiao Tung University | Method and apparatus for interframe wavelet video coding |
US8085846B2 (en) | 2004-08-24 | 2011-12-27 | Thomson Licensing | Method and apparatus for decoding hybrid intra-inter coded blocks |
CN1319383C (zh) | 2005-04-07 | 2007-05-30 | 西安交通大学 | 高性能空域可伸缩的运动估计与运动矢量编码实现方法 |
US8023041B2 (en) | 2006-01-30 | 2011-09-20 | Lsi Corporation | Detection of moving interlaced text for film mode decision |
US20080086050A1 (en) | 2006-10-09 | 2008-04-10 | Medrad, Inc. | Mri hyperthermia treatment systems, methods and devices, endorectal coil |
EP2082585A2 (en) | 2006-10-18 | 2009-07-29 | Thomson Licensing | Method and apparatus for video coding using prediction data refinement |
JP5197630B2 (ja) | 2008-01-09 | 2013-05-15 | 三菱電機株式会社 | 画像符号化装置、画像復号装置、画像符号化方法、および画像復号方法 |
KR101596829B1 (ko) | 2008-05-07 | 2016-02-23 | 엘지전자 주식회사 | 비디오 신호의 디코딩 방법 및 장치 |
JP2010016806A (ja) | 2008-06-04 | 2010-01-21 | Panasonic Corp | フレーム符号化とフィールド符号化の判定方法、画像符号化方法、画像符号化装置およびプログラム |
TW201041404A (en) | 2009-03-06 | 2010-11-16 | Sony Corp | Image processing device and method |
CN101877785A (zh) | 2009-04-29 | 2010-11-03 | 祝志怡 | 一种基于混合预测的视频编码方法 |
US8462852B2 (en) | 2009-10-20 | 2013-06-11 | Intel Corporation | Methods and apparatus for adaptively choosing a search range for motion estimation |
US9654792B2 (en) | 2009-07-03 | 2017-05-16 | Intel Corporation | Methods and systems for motion vector derivation at a video decoder |
WO2011003326A1 (en) | 2009-07-06 | 2011-01-13 | Mediatek Singapore Pte. Ltd. | Single pass adaptive interpolation filter |
US9549190B2 (en) | 2009-10-01 | 2017-01-17 | Sk Telecom Co., Ltd. | Method and apparatus for encoding/decoding image using variable-size macroblocks |
WO2011050641A1 (en) | 2009-10-28 | 2011-05-05 | Mediatek Singapore Pte. Ltd. | Video coding methods and video encoders and decoders with localized weighted prediction |
US20110176611A1 (en) | 2010-01-15 | 2011-07-21 | Yu-Wen Huang | Methods for decoder-side motion vector derivation |
KR101682147B1 (ko) | 2010-04-05 | 2016-12-05 | 삼성전자주식회사 | 변환 및 역변환에 기초한 보간 방법 및 장치 |
CN102934444A (zh) | 2010-04-06 | 2013-02-13 | 三星电子株式会社 | 用于对视频进行编码的方法和设备以及用于对视频进行解码的方法和设备 |
US9172968B2 (en) | 2010-07-09 | 2015-10-27 | Qualcomm Incorporated | Video coding using directional transforms |
KR101484281B1 (ko) | 2010-07-09 | 2015-01-21 | 삼성전자주식회사 | 블록 병합을 이용한 비디오 부호화 방법 및 그 장치, 블록 병합을 이용한 비디오 복호화 방법 및 그 장치 |
CN105163118B (zh) | 2010-07-20 | 2019-11-26 | Sk电信有限公司 | 用于解码视频信号的解码方法 |
US10327008B2 (en) | 2010-10-13 | 2019-06-18 | Qualcomm Incorporated | Adaptive motion vector resolution signaling for video coding |
EP2656610A4 (en) | 2010-12-21 | 2015-05-20 | Intel Corp | SYSTEM AND METHOD FOR EXTENDED DMVD PROCESSING |
JP2012142702A (ja) | 2010-12-28 | 2012-07-26 | Sony Corp | 画像処理装置および方法、並びにプログラム |
GB2487200A (en) | 2011-01-12 | 2012-07-18 | Canon Kk | Video encoding and decoding with improved error resilience |
US9049452B2 (en) | 2011-01-25 | 2015-06-02 | Mediatek Singapore Pte. Ltd. | Method and apparatus for compressing coding unit in high efficiency video coding |
JP2012191298A (ja) | 2011-03-09 | 2012-10-04 | Fujitsu Ltd | 動画像復号装置、動画像符号化装置、動画像復号方法、動画像符号化方法、動画像復号プログラム及び動画像符号化プログラム |
US9143795B2 (en) | 2011-04-11 | 2015-09-22 | Texas Instruments Incorporated | Parallel motion estimation in video coding |
CN102811346B (zh) | 2011-05-31 | 2015-09-02 | 富士通株式会社 | 编码模式选择方法和系统 |
JP2013034163A (ja) | 2011-06-03 | 2013-02-14 | Sony Corp | 画像処理装置及び画像処理方法 |
CN102857764B (zh) | 2011-07-01 | 2016-03-09 | 华为技术有限公司 | 帧内预测模式处理的方法和装置 |
US20130051467A1 (en) | 2011-08-31 | 2013-02-28 | Apple Inc. | Hybrid inter/intra prediction in video coding systems |
CN107968945B (zh) | 2011-09-14 | 2021-09-14 | 三星电子株式会社 | 对视频进行解码的方法和对视频进行编码的方法 |
US9699457B2 (en) | 2011-10-11 | 2017-07-04 | Qualcomm Incorporated | Most probable transform for intra prediction coding |
KR102211673B1 (ko) | 2011-12-16 | 2021-02-03 | 벨로스 미디어 인터내셔널 리미티드 | 동화상 부호화 방법, 동화상 부호화 장치, 동화상 복호 방법, 동화상 복호 장치, 및 동화상 부호화 복호장치 |
US9503716B2 (en) | 2011-12-19 | 2016-11-22 | Broadcom Corporation | Block size dependent filter selection for motion compensation |
EP2800368B1 (en) | 2011-12-28 | 2021-07-28 | Sharp Kabushiki Kaisha | Arithmetic decoding device, arithmetic decoding method, and arithmetic coding device |
US20130195188A1 (en) | 2012-01-26 | 2013-08-01 | Panasonic Corporation | Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus |
US9451277B2 (en) | 2012-02-08 | 2016-09-20 | Qualcomm Incorporated | Restriction of prediction units in B slices to uni-directional inter prediction |
EP2642755B1 (en) | 2012-03-20 | 2018-01-03 | Dolby Laboratories Licensing Corporation | Complexity scalable multilayer video coding |
JP5987767B2 (ja) | 2012-04-16 | 2016-09-07 | 株式会社Jvcケンウッド | 動画像復号装置、動画像復号方法、動画像復号プログラム、受信装置、受信方法及び受信プログラム |
US9591312B2 (en) | 2012-04-17 | 2017-03-07 | Texas Instruments Incorporated | Memory bandwidth reduction for motion compensation in video coding |
EP2849441B1 (en) | 2012-05-10 | 2019-08-21 | LG Electronics Inc. | Method and apparatus for processing video signals |
EP3193506A1 (en) | 2012-06-27 | 2017-07-19 | Kabushiki Kaisha Toshiba | Decoding method and decoding device |
US20140002594A1 (en) | 2012-06-29 | 2014-01-02 | Hong Kong Applied Science and Technology Research Institute Company Limited | Hybrid skip mode for depth map coding and decoding |
US9549182B2 (en) | 2012-07-11 | 2017-01-17 | Qualcomm Incorporated | Repositioning of prediction residual blocks in video coding |
EP3588958A1 (en) | 2012-08-29 | 2020-01-01 | Vid Scale, Inc. | Method and apparatus of motion vector prediction for scalable video coding |
US9906786B2 (en) | 2012-09-07 | 2018-02-27 | Qualcomm Incorporated | Weighted prediction mode for scalable video coding |
CN104541506A (zh) | 2012-09-28 | 2015-04-22 | 英特尔公司 | 层间像素样本预测 |
WO2014047877A1 (en) | 2012-09-28 | 2014-04-03 | Intel Corporation | Inter-layer residual prediction |
WO2014082680A1 (en) | 2012-11-30 | 2014-06-05 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Compressed data stream transmission using rate control |
US20140177706A1 (en) | 2012-12-21 | 2014-06-26 | Samsung Electronics Co., Ltd | Method and system for providing super-resolution of quantized images and video |
US9294777B2 (en) | 2012-12-30 | 2016-03-22 | Qualcomm Incorporated | Progressive refinement with temporal scalability support in video coding |
EP2951999A4 (en) | 2013-01-30 | 2016-07-20 | Intel Corp | CONTENT PARAMETRIC TRANSFORMATIONS FOR CODING VIDEOS OF THE NEXT GENERATION |
US9900576B2 (en) | 2013-03-18 | 2018-02-20 | Qualcomm Incorporated | Simplifications on disparity vector derivation and motion vector prediction in 3D video coding |
US9521425B2 (en) | 2013-03-19 | 2016-12-13 | Qualcomm Incorporated | Disparity vector derivation in 3D video coding for skip and direct modes |
US9491460B2 (en) | 2013-03-29 | 2016-11-08 | Qualcomm Incorporated | Bandwidth reduction for video coding prediction |
WO2014165555A1 (en) | 2013-04-02 | 2014-10-09 | Vid Scale, Inc. | Enhanced temporal motion vector prediction for scalable video coding |
WO2014166063A1 (en) | 2013-04-09 | 2014-10-16 | Mediatek Inc. | Default vector for disparity vector derivation for 3d video coding |
US9961347B2 (en) | 2013-04-10 | 2018-05-01 | Hfi Innovation Inc. | Method and apparatus for bi-prediction of illumination compensation |
US9374578B1 (en) | 2013-05-23 | 2016-06-21 | Google Inc. | Video coding using combined inter and intra predictors |
WO2015003383A1 (en) | 2013-07-12 | 2015-01-15 | Mediatek Singapore Pte. Ltd. | Methods for inter-view motion prediction |
US9628795B2 (en) | 2013-07-17 | 2017-04-18 | Qualcomm Incorporated | Block identification using disparity vector in video coding |
US9774879B2 (en) | 2013-08-16 | 2017-09-26 | Sony Corporation | Intra-block copying enhancements for HEVC in-range-extension (RExt) |
US9503715B2 (en) | 2013-08-30 | 2016-11-22 | Qualcomm Incorporated | Constrained intra prediction in video coding |
CN111179946B (zh) | 2013-09-13 | 2023-10-13 | 三星电子株式会社 | 无损编码方法和无损解码方法 |
US10244253B2 (en) | 2013-09-13 | 2019-03-26 | Qualcomm Incorporated | Video coding techniques using asymmetric motion partitioning |
US9554150B2 (en) | 2013-09-20 | 2017-01-24 | Qualcomm Incorporated | Combined bi-predictive merging candidates for 3D video coding |
US9762927B2 (en) | 2013-09-26 | 2017-09-12 | Qualcomm Incorporated | Sub-prediction unit (PU) based temporal motion vector prediction in HEVC and sub-PU design in 3D-HEVC |
US9667996B2 (en) | 2013-09-26 | 2017-05-30 | Qualcomm Incorporated | Sub-prediction unit (PU) based temporal motion vector prediction in HEVC and sub-PU design in 3D-HEVC |
US9906813B2 (en) | 2013-10-08 | 2018-02-27 | Hfi Innovation Inc. | Method of view synthesis prediction in 3D video coding |
CN105519119B (zh) | 2013-10-10 | 2019-12-17 | 夏普株式会社 | 图像解码装置 |
CN105637872B (zh) | 2013-10-16 | 2019-01-01 | 夏普株式会社 | 图像解码装置、图像编码装置 |
WO2015062002A1 (en) | 2013-10-31 | 2015-05-07 | Mediatek Singapore Pte. Ltd. | Methods for sub-pu level prediction |
WO2015085575A1 (en) | 2013-12-13 | 2015-06-18 | Mediatek Singapore Pte. Ltd. | Methods for background residual prediction |
US9774881B2 (en) | 2014-01-08 | 2017-09-26 | Microsoft Technology Licensing, Llc | Representing motion vectors in an encoded bitstream |
US9264728B2 (en) | 2014-01-10 | 2016-02-16 | Sony Corporation | Intra-plane and inter-plane predictive method for Bayer image coding |
US10057590B2 (en) | 2014-01-13 | 2018-08-21 | Mediatek Inc. | Method and apparatus using software engine and hardware engine collaborated with each other to achieve hybrid video encoding |
WO2015109598A1 (en) | 2014-01-27 | 2015-07-30 | Mediatek Singapore Pte. Ltd. | Methods for motion parameter hole filling |
US9906790B2 (en) | 2014-03-14 | 2018-02-27 | Qualcomm Incorporated | Deblock filtering using pixel distance |
US9860559B2 (en) | 2014-03-17 | 2018-01-02 | Mediatek Singapore Pte. Ltd. | Method of video coding using symmetric intra block copy |
EP3139605A4 (en) | 2014-04-28 | 2017-05-17 | Panasonic Intellectual Property Corporation of America | Encoding method, decoding method, encoding apparatus, and decoding apparatus |
WO2015180014A1 (en) | 2014-05-26 | 2015-12-03 | Mediatek Singapore Pte. Ltd. | An improved merge candidate list construction method for intra block copy |
US10327001B2 (en) | 2014-06-19 | 2019-06-18 | Qualcomm Incorporated | Systems and methods for intra-block copy |
CN105493505B (zh) | 2014-06-19 | 2019-08-06 | 微软技术许可有限责任公司 | 统一的帧内块复制和帧间预测模式 |
US9930341B2 (en) | 2014-06-20 | 2018-03-27 | Qualcomm Incorporated | Block vector coding for intra block copying |
GB2531003A (en) | 2014-10-06 | 2016-04-13 | Canon Kk | Method and apparatus for vector encoding in video coding and decoding |
US9918105B2 (en) | 2014-10-07 | 2018-03-13 | Qualcomm Incorporated | Intra BC and inter unification |
WO2016054765A1 (en) | 2014-10-08 | 2016-04-14 | Microsoft Technology Licensing, Llc | Adjustments to encoding and decoding when switching color spaces |
CN104301724B (zh) | 2014-10-17 | 2017-12-01 | 华为技术有限公司 | 视频处理方法、编码设备和解码设备 |
KR20170078682A (ko) | 2014-11-04 | 2017-07-07 | 삼성전자주식회사 | 에지 타입의 오프셋을 적용하는 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치 |
EP3217663A4 (en) | 2014-11-06 | 2018-02-14 | Samsung Electronics Co., Ltd. | Video encoding method and apparatus, and video decoding method and apparatus |
SG11201703454XA (en) | 2014-11-18 | 2017-06-29 | Mediatek Inc | Method of bi-prediction video coding based on motion vectors from uni-prediction and merge candidate |
US10382795B2 (en) | 2014-12-10 | 2019-08-13 | Mediatek Singapore Pte. Ltd. | Method of video coding using binary tree block partitioning |
CN107113443B (zh) | 2014-12-26 | 2020-04-28 | 索尼公司 | 影像处理设备和影像处理方法 |
JP6501532B2 (ja) | 2015-01-23 | 2019-04-17 | キヤノン株式会社 | 画像符号化装置、画像符号化方法及びプログラム |
US10230980B2 (en) | 2015-01-26 | 2019-03-12 | Qualcomm Incorporated | Overlapped motion compensation for video coding |
US11477477B2 (en) | 2015-01-26 | 2022-10-18 | Qualcomm Incorporated | Sub-prediction unit based advanced temporal motion vector prediction |
US10070130B2 (en) | 2015-01-30 | 2018-09-04 | Qualcomm Incorporated | Flexible partitioning of prediction units |
CN104702957B (zh) | 2015-02-28 | 2018-10-16 | 北京大学 | 运动矢量压缩方法和装置 |
US10958927B2 (en) | 2015-03-27 | 2021-03-23 | Qualcomm Incorporated | Motion information derivation mode determination in video coding |
CA2981916C (en) | 2015-04-13 | 2021-08-31 | Mediatek, Inc. | Methods of constrained intra block copy for reducing worst case bandwidth in video coding |
US10200713B2 (en) | 2015-05-11 | 2019-02-05 | Qualcomm Incorporated | Search region determination for inter coding within a particular picture of video data |
CN115086652A (zh) | 2015-06-05 | 2022-09-20 | 杜比实验室特许公司 | 图像编码和解码方法和图像解码设备 |
TWI816224B (zh) | 2015-06-08 | 2023-09-21 | 美商Vid衡器股份有限公司 | 視訊解碼或編碼方法及裝置 |
US20160360205A1 (en) | 2015-06-08 | 2016-12-08 | Industrial Technology Research Institute | Video encoding methods and systems using adaptive color transform |
US10887597B2 (en) | 2015-06-09 | 2021-01-05 | Qualcomm Incorporated | Systems and methods of determining illumination compensation parameters for video coding |
WO2016200100A1 (ko) | 2015-06-10 | 2016-12-15 | 삼성전자 주식회사 | 적응적 가중치 예측을 위한 신택스 시그널링을 이용하여 영상을 부호화 또는 복호화하는 방법 및 장치 |
WO2017008263A1 (en) | 2015-07-15 | 2017-01-19 | Mediatek Singapore Pte. Ltd. | Conditional binary tree block partitioning structure |
US10404992B2 (en) | 2015-07-27 | 2019-09-03 | Qualcomm Incorporated | Methods and systems of restricting bi-prediction in video coding |
AU2016299036B2 (en) | 2015-07-27 | 2019-11-21 | Hfi Innovation Inc. | Method of system for video coding using intra block copy mode |
CN105163116B (zh) | 2015-08-29 | 2018-07-31 | Huawei Technologies Co., Ltd. | Image prediction method and device |
EP3332551A4 (en) | 2015-09-02 | 2019-01-16 | MediaTek Inc. | METHOD AND APPARATUS OF MOTION COMPENSATION FOR VIDEO CODING BASED ON BI-DIRECTIONAL OPTICAL FLOW TECHNIQUES |
WO2017035831A1 (en) | 2015-09-06 | 2017-03-09 | Mediatek Inc. | Adaptive inter prediction |
KR20180041211A (ko) | 2015-09-10 | 2018-04-23 | LG Electronics Inc. | Inter-intra merge prediction mode-based image processing method and apparatus therefor |
US10375413B2 (en) | 2015-09-28 | 2019-08-06 | Qualcomm Incorporated | Bi-directional optical flow for video coding |
WO2017069590A1 (ko) | 2015-10-22 | 2017-04-27 | LG Electronics Inc. | Modeling-based image decoding method and apparatus in image coding system |
US10412407B2 (en) | 2015-11-05 | 2019-09-10 | Mediatek Inc. | Method and apparatus of inter prediction using average motion vector for video coding |
WO2017082670A1 (ko) | 2015-11-12 | 2017-05-18 | LG Electronics Inc. | Coefficient-induced intra prediction method and apparatus in image coding system |
KR20170058838A (ko) | 2015-11-19 | 2017-05-29 | Electronics and Telecommunications Research Institute | Encoding/decoding method and apparatus for improving inter-picture prediction |
CN108293131B (zh) | 2015-11-20 | 2021-08-31 | MediaTek Inc. | Method and apparatus for priority-based motion vector predictor derivation |
WO2017088093A1 (en) | 2015-11-23 | 2017-06-01 | Mediatek Singapore Pte. Ltd. | On the smallest allowed block size in video coding |
US10268901B2 (en) | 2015-12-04 | 2019-04-23 | Texas Instruments Incorporated | Quasi-parametric optical flow estimation |
CN105578198B (zh) | 2015-12-14 | 2019-01-11 | Shanghai Jiao Tong University | Homologous video copy-move detection method based on time-offset features |
US9955186B2 (en) | 2016-01-11 | 2018-04-24 | Qualcomm Incorporated | Block size decision for video coding |
US10887619B2 (en) | 2016-02-03 | 2021-01-05 | Sharp Kabushiki Kaisha | Moving image decoding device, moving image coding device, and prediction image generation device |
CN108781294B (zh) | 2016-02-05 | 2021-08-31 | MediaTek Inc. | Motion compensation method and apparatus for video data |
EP3414906A4 (en) | 2016-02-08 | 2019-10-02 | Sharp Kabushiki Kaisha | SYSTEMS AND METHODS FOR INTRA PREDICTION CODING |
US11405611B2 (en) | 2016-02-15 | 2022-08-02 | Qualcomm Incorporated | Predicting filter coefficients from fixed filters for video coding |
WO2017139937A1 (en) | 2016-02-18 | 2017-08-24 | Mediatek Singapore Pte. Ltd. | Advanced linear model prediction for chroma coding |
WO2017143467A1 (en) | 2016-02-22 | 2017-08-31 | Mediatek Singapore Pte. Ltd. | Localized luma mode prediction inheritance for chroma coding |
US11032550B2 (en) | 2016-02-25 | 2021-06-08 | Mediatek Inc. | Method and apparatus of video coding |
WO2017156669A1 (en) | 2016-03-14 | 2017-09-21 | Mediatek Singapore Pte. Ltd. | Methods for motion vector storage in video coding |
EP3456049B1 (en) | 2016-05-13 | 2022-05-04 | VID SCALE, Inc. | Systems and methods for generalized multi-hypothesis prediction for video coding |
US10560718B2 (en) | 2016-05-13 | 2020-02-11 | Qualcomm Incorporated | Merge candidates for motion vector prediction for video coding |
US10560712B2 (en) | 2016-05-16 | 2020-02-11 | Qualcomm Incorporated | Affine motion prediction for video coding |
US20170339405A1 (en) | 2016-05-20 | 2017-11-23 | Arris Enterprises Llc | System and method for intra coding |
KR20180040319A (ko) | 2016-10-12 | 2018-04-20 | Kaonmedia Co., Ltd. | Image processing method, and image decoding and encoding method using the same |
CN106028026B (zh) | 2016-05-27 | 2017-09-05 | Ningbo University | Efficient objective video quality assessment method based on spatio-temporal domain structure |
EP3264769A1 (en) | 2016-06-30 | 2018-01-03 | Thomson Licensing | Method and apparatus for video coding with automatic motion information refinement |
EP3264768A1 (en) | 2016-06-30 | 2018-01-03 | Thomson Licensing | Method and apparatus for video coding with adaptive motion information refinement |
US11638027B2 (en) | 2016-08-08 | 2023-04-25 | Hfi Innovation, Inc. | Pattern-based motion vector derivation for video coding |
US10659802B2 (en) | 2016-08-15 | 2020-05-19 | Nokia Technologies Oy | Video encoding and decoding |
US10609423B2 (en) | 2016-09-07 | 2020-03-31 | Qualcomm Incorporated | Tree-type coding for video coding |
US10728572B2 (en) | 2016-09-11 | 2020-07-28 | Lg Electronics Inc. | Method and apparatus for processing video signal by using improved optical flow motion vector |
WO2018062892A1 (ko) | 2016-09-28 | 2018-04-05 | LG Electronics Inc. | Method and apparatus for performing optimal prediction based on weight index |
US11356693B2 (en) | 2016-09-29 | 2022-06-07 | Qualcomm Incorporated | Motion vector coding for video coding |
EP3301920A1 (en) | 2016-09-30 | 2018-04-04 | Thomson Licensing | Method and apparatus for coding/decoding omnidirectional video |
EP3301918A1 (en) | 2016-10-03 | 2018-04-04 | Thomson Licensing | Method and apparatus for encoding and decoding motion information |
US10448010B2 (en) | 2016-10-05 | 2019-10-15 | Qualcomm Incorporated | Motion vector prediction for affine motion models in video coding |
WO2018092869A1 (ja) | 2016-11-21 | 2018-05-24 | Panasonic Intellectual Property Corporation of America | Encoding device, decoding device, encoding method, and decoding method |
US10674165B2 (en) | 2016-12-21 | 2020-06-02 | Arris Enterprises Llc | Constrained position dependent intra prediction combination (PDPC) |
US10666937B2 (en) * | 2016-12-21 | 2020-05-26 | Qualcomm Incorporated | Low-complexity sign prediction for video coding |
CN110115032B (zh) | 2016-12-22 | 2021-07-20 | MediaTek Inc. | Method and apparatus of motion refinement for video coding |
US10911761B2 (en) | 2016-12-27 | 2021-02-02 | Mediatek Inc. | Method and apparatus of bilateral template MV refinement for video coding |
US20190387234A1 (en) | 2016-12-29 | 2019-12-19 | Peking University Shenzhen Graduate School | Encoding method, decoding method, encoder, and decoder |
US10931969B2 (en) | 2017-01-04 | 2021-02-23 | Qualcomm Incorporated | Motion vector reconstructions for bi-directional optical flow (BIO) |
EP3565245A4 (en) | 2017-01-04 | 2019-11-27 | Samsung Electronics Co., Ltd. | VIDEO ENCODING METHOD AND DEVICE, AND VIDEO DECODING METHOD AND DEVICE |
US20180199057A1 (en) | 2017-01-12 | 2018-07-12 | Mediatek Inc. | Method and Apparatus of Candidate Skipping for Predictor Refinement in Video Coding |
US10701366B2 (en) | 2017-02-21 | 2020-06-30 | Qualcomm Incorporated | Deriving motion vector information at a video decoder |
US10523964B2 (en) | 2017-03-13 | 2019-12-31 | Qualcomm Incorporated | Inter prediction refinement based on bi-directional optical flow (BIO) |
BR112019018922A8 (pt) | 2017-03-16 | 2023-02-07 | Mediatek Inc | Method and apparatus of motion refinement based on bi-directional optical flow for video coding |
US11277635B2 (en) | 2017-03-17 | 2022-03-15 | Vid Scale, Inc. | Predictive coding for 360-degree video based on geometry padding |
US10595035B2 (en) | 2017-03-22 | 2020-03-17 | Qualcomm Incorporated | Constraining motion vector information derived by decoder-side motion vector derivation |
US10491917B2 (en) | 2017-03-22 | 2019-11-26 | Qualcomm Incorporated | Decoder-side motion vector derivation |
US11496747B2 (en) | 2017-03-22 | 2022-11-08 | Qualcomm Incorporated | Intra-prediction mode propagation |
CN117255197A (zh) | 2017-03-22 | 2023-12-19 | Electronics and Telecommunications Research Institute | Block form-based prediction method and apparatus |
TW201902223A (zh) | 2017-03-24 | 2019-01-01 | MediaTek Inc. | Method and apparatus of bi-directional optical flow for overlapped block motion compensation in video coding |
US10805650B2 (en) | 2017-03-27 | 2020-10-13 | Qualcomm Incorporated | Signaling important video information in network video streaming using mime type parameters |
EP3383045A1 (en) | 2017-03-27 | 2018-10-03 | Thomson Licensing | Multiple splits prioritizing for fast encoding |
WO2018193968A1 (ja) | 2017-04-19 | 2018-10-25 | Panasonic Intellectual Property Corporation of America | Encoding device, decoding device, encoding method, and decoding method |
US20180310017A1 (en) | 2017-04-21 | 2018-10-25 | Mediatek Inc. | Sub-prediction unit temporal motion vector prediction (sub-pu tmvp) for video coding |
US10711644B2 (en) | 2017-04-24 | 2020-07-14 | Raytheon Technologies Corporation | Method and system to ensure full oil tubes after gas turbine engine shutdown |
KR102409430B1 (ko) | 2017-04-24 | 2022-06-15 | SK Telecom Co., Ltd. | Optical flow estimation method and apparatus for motion compensation |
US10805630B2 (en) | 2017-04-28 | 2020-10-13 | Qualcomm Incorporated | Gradient based matching for motion search and derivation |
US10638126B2 (en) | 2017-05-05 | 2020-04-28 | Qualcomm Incorporated | Intra reference filter for video coding |
CN110574377B (zh) | 2017-05-10 | 2021-12-28 | MediaTek Inc. | Method and apparatus of reordering motion vector prediction candidate set for video coding |
KR102351029B1 (ko) | 2017-05-16 | 2022-01-13 | LG Electronics Inc. | Intra prediction mode-based image processing method and apparatus therefor |
CN117241044A (zh) | 2017-05-17 | 2023-12-15 | KT Corporation | Method of decoding video and method of encoding video |
WO2018210315A1 (en) | 2017-05-18 | 2018-11-22 | Mediatek Inc. | Method and apparatus of motion vector constraint for video coding |
US10523934B2 (en) | 2017-05-31 | 2019-12-31 | Mediatek Inc. | Split based motion vector operation reduction |
US11190762B2 (en) | 2017-06-21 | 2021-11-30 | Lg Electronics, Inc. | Intra-prediction mode-based image processing method and apparatus therefor |
US20180376148A1 (en) | 2017-06-23 | 2018-12-27 | Qualcomm Incorporated | Combination of inter-prediction and intra-prediction in video coding |
US10904565B2 (en) | 2017-06-23 | 2021-01-26 | Qualcomm Incorporated | Memory-bandwidth-efficient design for bi-directional optical flow (BIO) |
US10477237B2 (en) | 2017-06-28 | 2019-11-12 | Futurewei Technologies, Inc. | Decoder side motion vector refinement in video coding |
CA3068393A1 (en) | 2017-06-30 | 2019-01-03 | Sharp Kabushiki Kaisha | Systems and methods for geometry-adaptive block partitioning of a picture into video blocks for video coding |
WO2019001741A1 (en) | 2017-06-30 | 2019-01-03 | Huawei Technologies Co., Ltd. | MOTION VECTOR REFINEMENT FOR MULTI-REFERENCE PREDICTION |
EP3649780A1 (en) | 2017-07-03 | 2020-05-13 | Vid Scale, Inc. | Motion-compensation prediction based on bi-directional optical flow |
EP3657789A4 (en) | 2017-07-17 | 2020-12-16 | Industry - University Cooperation Foundation Hanyang University | METHOD AND DEVICE FOR CODING / DECODING AN IMAGE |
CN107360419B (zh) | 2017-07-18 | 2019-09-24 | 成都图必优科技有限公司 | Perspective-model-based inter-frame prediction coding method for forward-looking motion video |
CN117499682A (zh) | 2017-09-20 | 2024-02-02 | Electronics and Telecommunications Research Institute | Method and apparatus for encoding/decoding an image |
US10785494B2 (en) | 2017-10-11 | 2020-09-22 | Qualcomm Incorporated | Low-complexity design for FRUC |
US10986360B2 (en) | 2017-10-16 | 2021-04-20 | Qualcomm Incorporated | Various improvements to FRUC template matching |
CN117176958A (zh) | 2017-11-28 | 2023-12-05 | LX Semicon Co., Ltd. | Image encoding/decoding method, image data transmission method, and storage medium |
CN107896330B (zh) | 2017-11-29 | 2019-08-13 | Peking University Shenzhen Graduate School | Filtering method for intra and inter prediction |
US11057640B2 (en) | 2017-11-30 | 2021-07-06 | Lg Electronics Inc. | Image decoding method and apparatus based on inter-prediction in image coding system |
CN107995489A (zh) | 2017-12-20 | 2018-05-04 | Peking University Shenzhen Graduate School | Combined intra-inter prediction method for P frames or B frames |
US11277609B2 (en) | 2017-12-29 | 2022-03-15 | Sharp Kabushiki Kaisha | Systems and methods for partitioning video blocks for video coding |
KR20200096550A (ko) * | 2018-01-02 | 2020-08-12 | Samsung Electronics Co., Ltd. | Video decoding method and apparatus, and video encoding method and apparatus |
US11172229B2 (en) | 2018-01-12 | 2021-11-09 | Qualcomm Incorporated | Affine motion compensation with low bandwidth |
EP3741115A1 (en) | 2018-01-16 | 2020-11-25 | Vid Scale, Inc. | Motion compensated bi-prediction based on local illumination compensation |
US11265551B2 (en) | 2018-01-18 | 2022-03-01 | Qualcomm Incorporated | Decoder-side motion vector derivation |
US11310526B2 (en) | 2018-01-26 | 2022-04-19 | Mediatek Inc. | Hardware friendly constrained motion vector refinement |
WO2019151257A1 (en) | 2018-01-30 | 2019-08-08 | Sharp Kabushiki Kaisha | Systems and methods for deriving quantization parameters for video blocks in video coding |
KR102551363B1 (ko) | 2018-02-28 | 2023-07-04 | Samsung Electronics Co., Ltd. | Encoding method and apparatus therefor, and decoding method and apparatus therefor |
US20190306502A1 (en) | 2018-04-02 | 2019-10-03 | Qualcomm Incorporated | System and method for improved adaptive loop filtering |
US11381834B2 (en) | 2018-04-02 | 2022-07-05 | Hfi Innovation Inc. | Video processing methods and apparatuses for sub-block motion compensation in video coding systems |
US10779002B2 (en) | 2018-04-17 | 2020-09-15 | Qualcomm Incorporated | Limitation of the MVP derivation based on decoder-side motion vector derivation |
WO2019229683A1 (en) | 2018-05-31 | 2019-12-05 | Beijing Bytedance Network Technology Co., Ltd. | Concept of interweaved prediction |
WO2019234613A1 (en) | 2018-06-05 | 2019-12-12 | Beijing Bytedance Network Technology Co., Ltd. | Partition tree with partition into 3 sub-blocks by horizontal and vertical splits |
WO2019234676A1 (en) | 2018-06-07 | 2019-12-12 | Beijing Bytedance Network Technology Co., Ltd. | Mv precision refine |
TWI725456B (zh) | 2018-06-21 | 2021-04-21 | Beijing Bytedance Network Technology Co., Ltd. | Automatic partitioning of interleaved blocks |
TWI739120B (zh) | 2018-06-21 | 2021-09-11 | Beijing Bytedance Network Technology Co., Ltd. | Unified constraint for merge affine mode and non-merge affine mode |
CN113115046A (zh) | 2018-06-21 | 2021-07-13 | Beijing Bytedance Network Technology Co., Ltd. | Component-dependent sub-block dividing |
AU2019295574B2 (en) | 2018-06-27 | 2023-01-12 | Vid Scale, Inc. | Methods and apparatus for reducing the coding latency of decoder-side motion refinement |
KR102648120B1 (ko) | 2018-06-29 | 2024-03-18 | Beijing Bytedance Network Technology Co., Ltd. | Resetting of look-up table per slice/tile/LCU row |
TWI731363B (zh) | 2018-07-01 | 2021-06-21 | Beijing Bytedance Network Technology Co., Ltd. | Efficient affine merge motion vector derivation |
TWI719519B (zh) | 2018-07-02 | 2021-02-21 | Beijing Bytedance Network Technology Co., Ltd. | Block size restrictions for DMVR |
US10911768B2 (en) | 2018-07-11 | 2021-02-02 | Tencent America LLC | Constraint for template matching in decoder side motion derivation and refinement |
CN117294852A (zh) | 2018-07-15 | 2023-12-26 | Beijing Bytedance Network Technology Co., Ltd. | Cross-component coding order derivation |
US11516490B2 (en) | 2018-07-16 | 2022-11-29 | Lg Electronics Inc. | Method and device for inter predicting on basis of DMVR |
US10911751B2 (en) | 2018-09-14 | 2021-02-02 | Tencent America LLC | Method and apparatus for video coding |
IL280611B1 (en) * | 2018-09-17 | 2024-04-01 | Samsung Electronics Co Ltd | A method for encoding and decoding motion information and a device for encoding and decoding motion information |
CN110944193B (zh) | 2018-09-24 | 2023-08-11 | Beijing Bytedance Network Technology Co., Ltd. | Weighted bi-prediction in video encoding and decoding |
WO2020070612A1 (en) | 2018-10-06 | 2020-04-09 | Beijing Bytedance Network Technology Co., Ltd. | Improvement for temporal gradient calculating in bio |
CN112889289A (zh) * | 2018-10-10 | 2021-06-01 | Samsung Electronics Co., Ltd. | Method for encoding and decoding video by using motion vector difference values, and apparatus for encoding and decoding motion information |
US11516507B2 (en) | 2018-10-12 | 2022-11-29 | Intellectual Discovery Co., Ltd. | Image encoding/decoding methods and apparatuses |
WO2020084476A1 (en) | 2018-10-22 | 2020-04-30 | Beijing Bytedance Network Technology Co., Ltd. | Sub-block based prediction |
CN112913240A (zh) | 2018-10-22 | 2021-06-04 | 北京字节跳动网络技术有限公司 | 解码器侧运动矢量推导和其他编解码工具之间的并置 |
WO2020084461A1 (en) | 2018-10-22 | 2020-04-30 | Beijing Bytedance Network Technology Co., Ltd. | Restrictions on decoder side motion vector derivation based on coding information |
WO2020084474A1 (en) | 2018-10-22 | 2020-04-30 | Beijing Bytedance Network Technology Co., Ltd. | Gradient computation in bi-directional optical flow |
CN109191514B (zh) | 2018-10-23 | 2020-11-24 | Beijing Bytedance Network Technology Co., Ltd. | Method and apparatus for generating depth detection model |
JP7231727B2 (ja) | 2018-11-05 | 2023-03-01 | Beijing Bytedance Network Technology Co., Ltd. | Interpolation for inter prediction with refinement |
WO2020094049A1 (en) | 2018-11-06 | 2020-05-14 | Beijing Bytedance Network Technology Co., Ltd. | Extensions of inter prediction with geometric partitioning |
CN113329224B (zh) | 2018-11-08 | 2022-12-16 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Video signal encoding/decoding method and device therefor |
EP3881541A1 (en) | 2018-11-12 | 2021-09-22 | InterDigital VC Holdings, Inc. | Virtual pipeline for video encoding and decoding |
WO2020098647A1 (en) * | 2018-11-12 | 2020-05-22 | Beijing Bytedance Network Technology Co., Ltd. | Bandwidth control methods for affine prediction |
KR20210089149A (ko) | 2018-11-16 | 2021-07-15 | Beijing Bytedance Network Technology Co., Ltd. | Weights in combined inter and intra prediction mode |
WO2020098810A1 (en) * | 2018-11-17 | 2020-05-22 | Beijing Bytedance Network Technology Co., Ltd. | Merge with motion vector difference in video processing |
CN113170097B (zh) | 2018-11-20 | 2024-04-09 | Beijing Bytedance Network Technology Co., Ltd. | Coding and decoding of video coding modes |
CN113056914B (zh) * | 2018-11-20 | 2024-03-01 | Beijing Bytedance Network Technology Co., Ltd. | Difference calculation based on partial position |
EP3657794A1 (en) * | 2018-11-21 | 2020-05-27 | InterDigital VC Holdings, Inc. | Method and device for picture encoding and decoding |
CN111294590A (zh) | 2018-12-06 | 2020-06-16 | Huawei Technologies Co., Ltd. | Weighted prediction method and apparatus for multi-hypothesis coding |
EP3871415A4 (en) | 2018-12-07 | 2022-04-13 | Beijing Bytedance Network Technology Co., Ltd. | CONTEXT-BASED INTRA PREDICTION |
CN111010581B (zh) | 2018-12-07 | 2022-08-12 | Beijing Dajia Internet Information Technology Co., Ltd. | Motion vector information processing method and apparatus, electronic device, and storage medium |
US11546632B2 (en) | 2018-12-19 | 2023-01-03 | Lg Electronics Inc. | Method and device for processing video signal by using intra-prediction |
US10855992B2 (en) | 2018-12-20 | 2020-12-01 | Alibaba Group Holding Limited | On block level bi-prediction with weighted averaging |
JPWO2020137920A1 (ja) | 2018-12-27 | 2021-11-18 | Sharp Kabushiki Kaisha | Prediction image generation device, moving image decoding device, moving image encoding device, and prediction image generation method |
WO2020140874A1 (en) | 2019-01-02 | 2020-07-09 | Huawei Technologies Co., Ltd. | A hardware and software friendly system and method for decoder-side motion vector refinement with decoder-side bi-predictive optical flow based per-pixel correction to bi-predictive motion compensation |
CN113613019B (zh) | 2019-01-06 | 2022-06-07 | Beijing Dajia Internet Information Technology Co., Ltd. | Video decoding method, computing device, and medium |
MX2021008449A (es) | 2019-01-15 | 2021-11-03 | Rosedale Dynamics Llc | Image coding method and device using transform skip flag |
CN113302918A (zh) * | 2019-01-15 | 2021-08-24 | Beijing Bytedance Network Technology Co., Ltd. | Weighted prediction in video coding |
US10958904B2 (en) | 2019-02-01 | 2021-03-23 | Tencent America LLC | Method and apparatus for video coding |
WO2020167097A1 (ko) | 2019-02-15 | 2020-08-20 | 엘지전자 주식회사 | 영상 코딩 시스템에서 인터 예측을 위한 인터 예측 타입 도출 |
US11178414B2 (en) * | 2019-02-27 | 2021-11-16 | Mediatek Inc. | Classification for multiple merge tools |
JP2022521554A (ja) | 2019-03-06 | 2022-04-08 | Beijing Bytedance Network Technology Co., Ltd. | Utilization of converted uni-prediction candidates |
EP3939266A4 (en) | 2019-03-12 | 2022-12-21 | Beijing Dajia Internet Information Technology Co., Ltd. | CONSTRAINED AND ADJUSTED APPLICATIONS OF COMBINED INTER- AND INTRA-PREDICTION MODE |
CN113632484A (zh) | 2019-03-15 | 2021-11-09 | Beijing Dajia Internet Information Technology Co., Ltd. | Method and device for bit-width control of bi-directional optical flow |
JP7307192B2 (ja) | 2019-04-02 | 2023-07-11 | Beijing Bytedance Network Technology Co., Ltd. | Decoder-side motion vector derivation |
AU2020298425A1 (en) | 2019-06-21 | 2021-12-23 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US11330287B2 (en) * | 2019-06-28 | 2022-05-10 | Tencent America LLC | Method and apparatus for video coding |
US11272203B2 (en) * | 2019-07-23 | 2022-03-08 | Tencent America LLC | Method and apparatus for video coding |
CN110267045B (zh) | 2019-08-07 | 2021-09-24 | 杭州微帧信息科技有限公司 | Video processing and encoding method, apparatus, and readable storage medium |
WO2021058033A1 (en) | 2019-09-29 | 2021-04-01 | Mediatek Inc. | Method and apparatus of combined inter and intra prediction with different chroma formats for video coding |
US11405628B2 (en) * | 2020-04-06 | 2022-08-02 | Tencent America LLC | Method and apparatus for video coding |
2019
- 2019-11-20 CN CN201980075985.5A patent/CN113056914B/zh active Active
- 2019-11-20 KR KR1020217014345A patent/KR20210091161A/ko active Search and Examination
- 2019-11-20 WO PCT/CN2019/119634 patent/WO2020103852A1/en unknown
- 2019-11-20 CN CN202311518864.8A patent/CN117319644A/zh active Pending
- 2019-11-20 EP EP19887639.3A patent/EP3861742A4/en active Pending
- 2019-11-20 JP JP2021525770A patent/JP7241870B2/ja active Active
2021
- 2021-01-21 US US17/154,485 patent/US20210144400A1/en not_active Abandoned
- 2021-11-24 US US17/534,968 patent/US11956465B2/en active Active
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11509929B2 (en) | 2018-10-22 | 2022-11-22 | Beijing Bytedance Network Technology Co., Ltd. | Multi-iteration motion vector refinement method for video processing |
US11889108B2 (en) | 2018-10-22 | 2024-01-30 | Beijing Bytedance Network Technology Co., Ltd | Gradient computation in bi-directional optical flow |
US11838539B2 (en) | 2018-10-22 | 2023-12-05 | Beijing Bytedance Network Technology Co., Ltd | Utilization of refined motion vector |
US11641467B2 (en) | 2018-10-22 | 2023-05-02 | Beijing Bytedance Network Technology Co., Ltd. | Sub-block based prediction |
US11956449B2 (en) | 2018-11-12 | 2024-04-09 | Beijing Bytedance Network Technology Co., Ltd. | Simplification of combined inter-intra prediction |
US11843725B2 (en) | 2018-11-12 | 2023-12-12 | Beijing Bytedance Network Technology Co., Ltd | Using combined inter intra prediction in video processing |
US11632566B2 (en) | 2018-11-20 | 2023-04-18 | Beijing Bytedance Network Technology Co., Ltd. | Inter prediction with refinement in video processing |
US11558634B2 (en) | 2018-11-20 | 2023-01-17 | Beijing Bytedance Network Technology Co., Ltd. | Prediction refinement for combined inter intra prediction mode |
US20220086481A1 (en) * | 2018-11-20 | 2022-03-17 | Beijing Bytedance Network Technology Co., Ltd. | Difference calculation based on partial position |
US11956465B2 (en) * | 2018-11-20 | 2024-04-09 | Beijing Bytedance Network Technology Co., Ltd | Difference calculation based on partial position |
US11930165B2 (en) | 2019-03-06 | 2024-03-12 | Beijing Bytedance Network Technology Co., Ltd | Size dependent inter coding |
US11553201B2 (en) | 2019-04-02 | 2023-01-10 | Beijing Bytedance Network Technology Co., Ltd. | Decoder side motion vector derivation |
US20220264146A1 (en) * | 2019-07-01 | 2022-08-18 | InterDigital VC Holdings France, SAS | Bi-prediction refinement in affine with optical flow |
US12010339B2 (en) * | 2019-09-20 | 2024-06-11 | Kddi Corporation | Image decoding device, image decoding method, and program |
US20220174314A1 (en) * | 2019-09-20 | 2022-06-02 | Kddi Corporation | Image decoding device, image decoding method, and program |
US20210306656A1 (en) * | 2020-03-26 | 2021-09-30 | Alibaba Group Holding Limited | Method and apparatus for encoding or decoding video |
US11706439B2 (en) * | 2020-03-26 | 2023-07-18 | Alibaba Group Holding Limited | Method and apparatus for encoding or decoding video |
US11936899B2 (en) * | 2021-03-12 | 2024-03-19 | Lemon Inc. | Methods and systems for motion candidate derivation |
US20220295089A1 (en) * | 2021-03-12 | 2022-09-15 | Lemon Inc. | Motion candidate derivation |
US11671616B2 (en) | 2021-03-12 | 2023-06-06 | Lemon Inc. | Motion candidate derivation |
US11895302B2 (en) * | 2021-06-29 | 2024-02-06 | Qualcomm Incorporated | Adaptive bilateral matching for decoder side motion vector refinement |
US20220417522A1 (en) * | 2021-06-29 | 2022-12-29 | Qualcomm Incorporated | Adaptive bilateral matching for decoder side motion vector refinement |
CN115190299A (zh) * | 2022-07-11 | 2022-10-14 | Hangzhou Dianzi University | Fast algorithm for VVC affine motion estimation |
Also Published As
Publication number | Publication date |
---|---|
US20220086481A1 (en) | 2022-03-17 |
CN113056914B (zh) | 2024-03-01 |
CN117319644A (zh) | 2023-12-29 |
EP3861742A1 (en) | 2021-08-11 |
CN113056914A (zh) | 2021-06-29 |
JP2022507281A (ja) | 2022-01-18 |
JP7241870B2 (ja) | 2023-03-17 |
EP3861742A4 (en) | 2022-04-13 |
KR20210091161A (ko) | 2021-07-21 |
US11956465B2 (en) | 2024-04-09 |
WO2020103852A1 (en) | 2020-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11889108B2 (en) | Gradient computation in bi-directional optical flow | |
US11956465B2 (en) | Difference calculation based on partial position | |
US11659162B2 (en) | Video processing using local illumination compensation | |
US20220150508A1 (en) | Restrictions on decoder side motion vector derivation based on coding information | |
US11509927B2 (en) | Weighted prediction in video coding | |
US11405607B2 (en) | Harmonization between local illumination compensation and inter prediction coding | |
US11483550B2 (en) | Use of virtual candidate prediction and weighted prediction in video processing | |
US11641467B2 (en) | Sub-block based prediction | |
US20210344909A1 (en) | Motion candidate lists that use local illumination compensation | |
US11729377B2 (en) | Affine mode in video coding and decoding | |
WO2020233600A1 (en) | Simplified local illumination compensation | |
WO2020211755A1 (en) | Motion vector and prediction sample refinement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BYTEDANCE INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHANG, LI;ZHANG, KAI;XU, JIZHENG;REEL/FRAME:054995/0922
Effective date: 20191108
Owner name: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD., CHINA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIU, HONGBIN;WANG, YUE;REEL/FRAME:054995/0978
Effective date: 20191108
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |