CN110719464B - Motion vector prediction based on lookup table with temporal information extension - Google Patents

Motion vector prediction based on lookup table with temporal information extension

Info

Publication number
CN110719464B
Authority
CN
China
Prior art keywords
motion
candidates
video
block
video block
Prior art date
Legal status
Active
Application number
CN201910637484.3A
Other languages
Chinese (zh)
Other versions
CN110719464A
Inventor
张莉
张凯
刘鸿彬
王悦
Current Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Original Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd and ByteDance Inc
Publication of CN110719464A
Application granted
Publication of CN110719464B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
    • H04N19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/52 Processing of motion vectors by encoding by predictive encoding
    • H04N19/70 Characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/96 Tree coding, e.g. quad-tree coding

Abstract

A video processing method, comprising: maintaining a set of tables, wherein each table includes motion candidates, and each motion candidate is associated with corresponding motion information; performing a conversion between a first video block and a bitstream representation of a video comprising the first video block; and updating the one or more tables by selectively pruning existing motion candidates in the one or more tables based on the encoding/decoding mode of the first video block. Another video processing method includes: maintaining a set of tables, wherein each table includes motion candidates, and each motion candidate is associated with corresponding motion information; performing a conversion between a first video block and a bitstream representation of a video comprising the first video block; and updating one or more tables to include motion information from one or more temporally adjacent blocks of the first video block as new motion candidates.

Description

Motion vector prediction based on lookup table with temporal information extension
Cross Reference to Related Applications
The present application timely claims the priority and benefit of International Patent Application No. PCT/CN2018/095719, filed on July 15, 2018, in accordance with applicable patent laws and/or regulations of the Paris Convention. The entire disclosure of International Patent Application No. PCT/CN2018/095719 is incorporated herein by reference as part of the present disclosure.
Technical Field
The present application relates to video coding techniques, devices, and systems.
Background
Despite advances in video compression, digital video still accounts for the largest share of bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, the bandwidth requirements for digital video usage are expected to continue to grow.
Disclosure of Invention
This document discloses methods, systems, and devices for encoding and decoding digital video using a Merge list of motion vectors.
In one exemplary aspect, a video processing method is disclosed. The video processing method comprises the following steps: determining a new candidate for video processing by averaging two or more selected motion candidates; adding the new candidate to a candidate list; and performing a conversion between a first video block of the video and a bitstream representation of the video using the determined new candidate in the candidate list.
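As an illustration of the candidate-averaging aspect above, the following Python sketch forms a new candidate by averaging the motion vectors of two selected candidates that share a reference picture; the class name MotionCandidate, its fields, the rounding, and the same-reference restriction are illustrative assumptions rather than a normative definition of the method.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MotionCandidate:
    mv: tuple      # (mv_x, mv_y), e.g. in quarter-luma-sample units (assumed)
    ref_idx: int   # reference picture index

def average_candidates(cand_a, cand_b):
    """Form a new candidate by averaging two selected motion candidates.

    Sketch only: it assumes both candidates refer to the same reference
    picture; averaging across different references would need MV scaling."""
    if cand_a.ref_idx != cand_b.ref_idx:
        return None
    avg_mv = ((cand_a.mv[0] + cand_b.mv[0] + 1) >> 1,
              (cand_a.mv[1] + cand_b.mv[1] + 1) >> 1)
    return MotionCandidate(avg_mv, cand_a.ref_idx)

# The averaged candidate, if formed, is appended to the candidate list and can
# then be used for the conversion of the first video block.
candidate_list = [MotionCandidate((4, -8), 0), MotionCandidate((12, 0), 0)]
new_candidate = average_candidates(candidate_list[0], candidate_list[1])
if new_candidate is not None:
    candidate_list.append(new_candidate)
```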
In one exemplary aspect, a video processing method is disclosed. The video processing method comprises the following steps: determining a new motion candidate for video processing by using one or more motion candidates from one or more tables, wherein each table includes one or more motion candidates and each motion candidate is associated with motion information; and performing a conversion between the video block and the encoded representation of the video block based on the new candidate.
In one exemplary aspect, a video processing method is disclosed. The video processing method comprises the following steps: determining a new candidate for video processing by always using motion information from more than one spatially neighboring block of a first video block in a current picture and not using motion information from a temporal block in a picture different from the current picture; and performing a conversion between the first video block in the current picture of the video and a bitstream representation of the video by using the determined new candidate.
In one exemplary aspect, a video processing method is disclosed. The video processing method comprises the following steps: determining a new candidate for video processing by using motion information from at least one spatially non-immediately-neighboring block of the first video block in the current picture and other candidates derived from or not derived from the spatially non-immediately-neighboring block of the first video block; and performing a conversion between the first video block of the video and a bitstream representation of the video by using the determined new candidate.
In one exemplary aspect, a video processing method is disclosed. The video processing method comprises the following steps: determining a new candidate for video processing by using motion information from one or more tables of a first video block in a current picture and motion information from a temporal block in a picture other than the current picture; and performing a conversion between the first video block in the current picture of the video and a bitstream representation of the video by using the determined new candidate.
In one exemplary aspect, a video processing method is disclosed. The video processing method comprises the following steps: determining a new candidate for video processing by using motion information from one or more tables of the first video block and motion information from one or more spatially neighboring blocks of the first video block; and performing a conversion between the first video block in a current picture of the video and a bitstream representation of the video by using the determined new candidate.
In one exemplary aspect, a video processing method is disclosed. The video processing method comprises the following steps: maintaining a set of tables, wherein each table includes motion candidates, and each motion candidate is associated with corresponding motion information; performing a conversion between a first video block and a bitstream representation of a video comprising the first video block; and updating the one or more tables by selectively pruning existing motion candidates in the one or more tables based on the encoding/decoding mode of the first video block.
In one exemplary aspect, a video processing method is disclosed. The video processing method comprises the following steps: maintaining a set of tables, wherein each table includes motion candidates, and each motion candidate is associated with corresponding motion information; performing a conversion between a first video block and a bitstream representation of a video comprising the first video block; and updating one or more tables to include motion information from one or more temporally adjacent blocks of the first video block as new motion candidates.
In one exemplary aspect, a method of updating a motion candidate table is disclosed. The method comprises the following steps: based on the encoding/decoding mode of the video block being processed, selectively pruning existing motion candidates in the table, each motion candidate being associated with corresponding motion information; and updating the table to include motion information of the video block as a new motion candidate.
In one exemplary aspect, a method of updating a motion candidate table is disclosed. The method comprises the following steps: maintaining a motion candidate table, each motion candidate being associated with corresponding motion information; and updating the table to include motion information from one or more temporally adjacent blocks of the video block being processed as new motion candidates.
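A minimal sketch of how such a motion candidate table could be maintained and updated is given below; the table size, the first-in-first-out removal, and the choice of when to prune are assumptions made purely for illustration and are not mandated by the methods described above.

```python
class MotionCandidateTable:
    """Illustrative table of motion candidates; each entry carries motion
    information, modeled here as a hashable (mv, ref_idx) tuple (sketch)."""

    def __init__(self, max_size=16):   # max_size is an assumed, not normative, limit
        self.max_size = max_size
        self.candidates = []

    def update(self, new_motion, prune=True):
        """Add motion information of a just-converted block, or of one of its
        temporally adjacent blocks, as a new candidate.  Whether existing
        duplicates are pruned first may depend on the coding mode of the
        block, as described in the aspects above."""
        if prune:
            self.candidates = [c for c in self.candidates if c != new_motion]
        self.candidates.append(new_motion)
        if len(self.candidates) > self.max_size:
            self.candidates.pop(0)     # drop the oldest entry (FIFO assumption)

# Example usage after converting a video block:
table = MotionCandidateTable()
table.update(((4, -8), 0))                 # motion information of the block itself
table.update(((0, 16), 1), prune=False)    # e.g. motion of a temporally adjacent block
```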
In one exemplary aspect, a video processing method is disclosed. The video processing method comprises the following steps: determining new motion candidates for video processing by using one or more motion candidates from one or more tables, wherein the tables include the one or more motion candidates and each motion candidate is associated with motion information; and performing a conversion between the video block and the encoded representation of the video block based on the new candidate.
In one exemplary aspect, an apparatus in a video system is disclosed. The apparatus includes a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the various methods described herein.
The various techniques described herein may be implemented as a computer program product stored on a non-transitory computer readable medium. The computer program product comprises program code for performing the methods described herein.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
Drawings
Fig. 1 is a block diagram illustrating an example of a video encoder implementation.
Fig. 2 illustrates macroblock partitioning in the h.264 video coding standard.
Fig. 3 illustrates an example of dividing a Coded Block (CB) into Prediction Blocks (PB).
Fig. 4 illustrates an example implementation of subdivision of a Coding Tree Block (CTB) into CBs and Transform Blocks (TBs). The solid lines represent CB boundaries and the dashed lines represent TB boundaries, including an example CTB with its segmentation and the corresponding quadtree.
Fig. 5 shows an example of a quadtree binary tree (QTBT) structure for segmenting video data.
Fig. 6 shows an example of video block segmentation.
Fig. 7 shows an example of quadtree splitting.
Fig. 8 shows an example of tree signaling.
Fig. 9 shows an example of the derivation process of the Merge candidate list construction.
Fig. 10 shows an example location of a spatial Merge candidate.
Fig. 11 shows an example of a candidate pair of redundancy check in consideration of a spatial Merge candidate.
Fig. 12 shows an example of the location of the second PU of the Nx2N and 2NxN partition.
Fig. 13 illustrates an example motion vector scaling of a temporal Merge candidate.
Fig. 14 shows candidate locations of temporal Merge candidates and their collocated pictures.
Fig. 15 shows an example of combining bi-prediction Merge candidates.
Fig. 16 shows an example of a derivation process of motion vector prediction candidates.
Fig. 17 shows an example of motion vector scaling of spatial motion vector candidates.
Fig. 18 shows an example Alternative Temporal Motion Vector Prediction (ATMVP) for motion prediction of a Coding Unit (CU).
Fig. 19 graphically depicts an example of identification of source blocks and source pictures.
Fig. 20 shows an example of one CU having four sub-blocks and neighboring blocks.
Fig. 21 illustrates an example of bilateral matching.
Fig. 22 illustrates an example of template matching.
Fig. 23 depicts an example of single-sided Motion Estimation (ME) in Frame Rate Up Conversion (FRUC).
Fig. 24 shows an example of decoder-side motion vector refinement (DMVR) based on bilateral template matching.
Fig. 25 shows an example of spatially neighboring blocks used to derive illumination compensation (IC) parameters.
Fig. 26 shows an example of a spatial neighboring block used to derive a spatial Merge candidate.
Fig. 27 shows an example of using neighboring inter prediction blocks.
Fig. 28 shows an example of a planar motion vector prediction process.
Fig. 29 shows an example of the position next to the current Coding Unit (CU) line.
Fig. 30 is a block diagram illustrating an example of a structure of a computer system or other control device that may be used to implement various portions of the disclosed technology.
FIG. 31 illustrates a block diagram of an example embodiment of a mobile device that can be used to implement various portions of the disclosed technology.
Fig. 32 shows a flowchart of an example method for video processing in accordance with the presently disclosed technology.
Fig. 33 shows a flowchart of an example method for video processing in accordance with the presently disclosed technology.
Fig. 34 illustrates a flow chart of an example method for video processing in accordance with the presently disclosed technology.
Fig. 35 shows a flowchart of an example method for video processing in accordance with the presently disclosed technology.
Fig. 36 shows a flowchart of an example method for video processing in accordance with the presently disclosed technology.
Fig. 37 shows a flowchart of an example method for video processing in accordance with the presently disclosed technology.
Fig. 38 shows a flowchart of an example method for video processing in accordance with the presently disclosed technology.
Fig. 39 shows a flowchart of an example method for video processing in accordance with the presently disclosed technology.
Fig. 40 shows a flowchart of an example method for updating a motion candidate table in accordance with the presently disclosed technology.
Fig. 41 shows a flowchart of an example method for updating a motion candidate table in accordance with the presently disclosed technology.
Fig. 42 shows a flowchart of an example method for video processing in accordance with the presently disclosed technology.
Detailed Description
To increase the compression ratio of video, researchers are continually looking for new techniques to encode video.
1. Introduction to the invention
This document relates to video coding techniques. Specifically, it relates to motion information coding (e.g., Merge mode, AMVP mode) in video coding. It may be applied to the existing video coding standard HEVC, to the Versatile Video Coding (VVC) standard to be finalized, or to future video coding standards or video codecs.
2. Brief discussion
Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. ITU-T developed h.261 and h.263, ISO/IEC developed MPEG-1 and MPEG-4 Visual, and the two organizations jointly developed the h.262/MPEG-2 video, h.264/MPEG-4 Advanced Video Coding (AVC), and h.265/HEVC standards. Since h.262, video coding standards have been based on a hybrid video coding structure in which temporal prediction plus transform coding is employed. An example of a typical HEVC encoder framework is shown in fig. 1.
2.1 Partition structure
2.1.1 Partition tree structure in H.264/AVC
The core of the coding layer in the previous standard is a macroblock containing 16x16 blocks of luma samples and, in the case of conventional 4:2:0 color sampling, two corresponding 8x8 blocks of chroma samples.
The intra-coded block uses spatial prediction to explore spatial correlation between pixels. Two partitions are defined: 16x16 and 4x4.
Inter-coded blocks use temporal prediction by estimating motion between pictures, rather than spatial prediction. The motion of a 16x16 macroblock or any sub-macroblock partition thereof can be estimated separately: 16x8, 8x16, 8x8, 8x4, 4x8, 4x4 (see fig. 2). Only one Motion Vector (MV) is allowed per sub-macroblock partition.
2.1.2 Partition tree structure in HEVC
In HEVC, various local characteristics are accommodated by dividing CTUs into CUs using a quadtree structure (denoted as coding tree). At the CU level it is decided whether to encode the picture region using inter (temporal) prediction or intra (spatial) prediction. Each CU may be further divided into one, two, or four PUs, depending on the partition type of the PU. In one PU, the same prediction process is applied and the relevant information is transmitted to the decoder on a PU basis. After obtaining the residual block by applying a prediction process based on the PU partition type, the CU may be partitioned into Transform Units (TUs) according to another quadtree structure similar to the coding tree of the CU. An important feature of the HEVC structure is that it has multiple partitioning concepts, including CUs, PUs, and TUs.
Hereinafter, various features involved in hybrid video coding using HEVC are highlighted as follows.
1) Coding tree unit and Coding Tree Block (CTB) structure: the analogous structure in HEVC is the Coding Tree Unit (CTU), which has a size selected by the encoder and can be larger than a conventional macroblock. A CTU consists of a luminance CTB, the corresponding chrominance CTBs, and syntax elements. The size L×L of the luminance CTB can be selected as L = 16, 32, or 64 samples, and a larger size can generally achieve better compression. HEVC then supports the partitioning of CTBs into smaller blocks using a tree structure and quadtree signaling.
2) Coding Unit (CU) and Coding Block (CB): the quadtree syntax of the CTU specifies the size and location of its luma and chroma CBs. The root of the quadtree is associated with the CTU. Thus, the size of the luminance CTB is the maximum size supported by the luminance CB. The division of the luminance and chrominance CBs of a CTU is jointly signaled. One luma CB and typically two chroma CBs together with the associated syntax form a Coding Unit (CU). CTBs may contain only one CU or may be partitioned into multiple CUs, each with an associated partition into a Prediction Unit (PU) and a transform unit Tree (TU).
3) Prediction Unit (PU) and Prediction Block (PB): at the CU level it is decided whether to encode the picture region using inter-prediction or intra-prediction. The root of the PU partition structure is located at the CU level. Depending on the basic prediction type decision, the luminance and chrominance CBs may be further divided in size and predicted from the luminance and chrominance Prediction Block (PB). HEVC supports variable PB sizes from 64 x 64 to 4 x 4 samples. Fig. 3 shows an allowed PB example of an MxM CU.
4) Transform Unit (TU) and Transform Block (TB): the prediction residual is encoded using a block transform. The root of the TU tree structure is at the CU level. The luminance CB residual may be the same as the luminance TB or may be further divided into smaller luminance transform blocks TB. The same applies to chroma TB. Integer basis functions similar to Discrete Cosine Transforms (DCTs) are defined for square TBs of 4×4, 8×8, 16×16, and 32×32. For a 4 x 4 transform of the luma intra prediction residual, an integer transform derived from a Discrete Sine Transform (DST) form may also be specified.
Fig. 4 shows an example of subdivision of a CTB into CBs (and transform blocks (TBs)). Solid lines indicate CB boundaries and dashed lines indicate TB boundaries. (a) CTB with its partitioning. (b) The corresponding quadtree.
2.1.2.1 Tree-structure partitioning into transform blocks and units
For residual coding, a CB may be recursively partitioned into Transform Blocks (TBs). The partitioning is signaled by a residual quadtree. Only square CB and TB partitioning is specified, where a block may be recursively divided into quadrants, as illustrated in fig. 4. For a given luminance CB of size M×M, a flag indicates whether it is divided into four blocks of size M/2×M/2. If further splitting is possible, as signaled by the maximum depth of the residual quadtree indicated in the Sequence Parameter Set (SPS), each quadrant is assigned a flag indicating whether it is divided into four quadrants. The leaf node blocks resulting from the residual quadtree are the transform blocks that are further processed by transform coding. The encoder indicates the maximum and minimum luminance TB sizes that it will use. Splitting is implied when the CB size is larger than the maximum TB size. Not splitting is implied when splitting would result in a luminance TB size smaller than the indicated minimum. The chrominance TB size is half the luminance TB size in each dimension, except when the luminance TB size is 4×4, in which case a single 4×4 chrominance TB is used for the region covered by four 4×4 luminance TBs. In the case of intra-predicted CUs, the decoded samples of the nearest-neighboring TBs (within or outside the CB) are used as reference data for intra prediction.
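The implied-split rules described above can be summarized in a short sketch; the parameter values below (maximum TB size, minimum TB size, maximum residual quadtree depth) are placeholders standing in for the SPS-signaled limits, not fixed values.

```python
def is_tb_split(size, depth, max_tb_size=32, min_tb_size=4, max_depth=2, split_flag=False):
    """Decide whether a luma block of the residual quadtree is split into four
    quadrants, following the rules described above (illustrative sketch)."""
    if size > max_tb_size:
        return True                    # splitting implied: block larger than the maximum TB size
    if size // 2 < min_tb_size or depth >= max_depth:
        return False                   # not splitting implied: child would fall below the minimum
    return split_flag                  # otherwise the split flag in the bitstream decides

print(is_tb_split(64, 0))                    # True  (with the assumed 32-sample maximum)
print(is_tb_split(8, 1, split_flag=True))    # True  (explicitly signaled split)
print(is_tb_split(4, 2))                     # False (minimum TB size reached)
```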
Unlike previous standards, for inter-predicted CUs, the HEVC design allows TBs to span multiple PB to maximize the potential coding efficiency of TB partitioning that benefits from the quadtree structure.
2.1.2.2 parent node and child node
The CTBs are partitioned according to a quadtree structure, with nodes being coding units. The plurality of nodes in the quadtree structure includes leaf nodes and non-leaf nodes. The leaf node has no child nodes in the tree structure (i.e., the leaf node is not further partitioned). The non-leaf nodes include root nodes of the tree structure. The root node corresponds to an initial video block (e.g., CTB) of video data. For each respective non-root node of the plurality of nodes, the respective non-root node corresponds to a video block, which is a child block of the video block corresponding to a parent node in the tree structure of the respective non-root node. Each respective non-leaf node of the plurality of non-leaf nodes has one or more child nodes in the tree structure.
2.1.3 Quadtree plus binary tree block structure with larger CTUs in the Joint Exploration Model (JEM)
To explore future video coding techniques beyond HEVC, VCEG and MPEG have jointly established a joint video exploration team (jfet) in 2015. Since then, jfet has adopted many new approaches and applied it to reference software known as the Joint Exploration Model (JEM).
2.1.3.1 QTBT block partition structure
Unlike HEVC, the QTBT structure eliminates the concept of multiple partition types, i.e., it removes the separation of the CU, PU, and TU concepts and supports more flexibility in CU partition shapes. In the QTBT block structure, a CU may be square or rectangular. As shown in fig. 5, a Coding Tree Unit (CTU) is first partitioned with a quadtree structure. The quadtree leaf nodes are further partitioned by a binary tree structure. There are two partition types in binary tree partitioning: symmetric horizontal partitioning and symmetric vertical partitioning. The binary tree leaf nodes are called Coding Units (CUs), and this partitioning is used for the prediction and transform processing without any further partitioning. This means that in the QTBT coding block structure the CU, PU, and TU have the same block size. In JEM, a CU sometimes consists of Coding Blocks (CBs) of different color components, e.g., one CU contains one luma CB and two chroma CBs in P slices and B slices of the 4:2:0 chroma format, and a CU sometimes consists of a CB of a single component, e.g., in the case of an I slice, one CU contains only one luma CB or only two chroma CBs.
The following parameters are defined for the QTBT partitioning scheme.
- CTU size: the root node size of the quadtree, the same concept as in HEVC
- MinQTSize: the minimum allowed quadtree leaf node size
- MaxBTSize: the maximum allowed binary tree root node size
- MaxBTDepth: the maximum allowed binary tree depth
- MinBTSize: the minimum allowed binary tree leaf node size
In one example of the QTBT partitioning structure, the CTU size is set to 128×128 luma samples with two corresponding 64×64 blocks of chroma samples, MinQTSize is set to 16×16, MaxBTSize is set to 64×64, MinBTSize (for both width and height) is set to 4, and MaxBTDepth is set to 4. The quadtree partitioning is first applied to the CTU to generate quadtree leaf nodes. The quadtree leaf nodes may have sizes ranging from 16×16 (i.e., MinQTSize) to 128×128 (i.e., the CTU size). If a leaf quadtree node is 128×128, it is not further split by the binary tree because it exceeds MaxBTSize (i.e., 64×64). Otherwise, the leaf quadtree node can be further partitioned by the binary tree. Therefore, the quadtree leaf node is also the root node of the binary tree, and its binary tree depth is 0. When the binary tree depth reaches MaxBTDepth (i.e., 4), no further splitting is considered. When the width of a binary tree node is equal to MinBTSize (i.e., 4), no further horizontal splitting is considered. Likewise, when the height of a binary tree node is equal to MinBTSize, no further vertical splitting is considered. The leaf nodes of the binary tree are further processed by the prediction and transform processing without any further partitioning. In JEM, the maximum CTU size is 256×256 luma samples.
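The interplay of these parameters can be illustrated with the short check below, which uses the example settings given above; the function name and the returned split list are illustrative and not taken from the JEM software.

```python
def allowed_bt_splits(width, height, bt_depth,
                      max_bt_size=64, max_bt_depth=4, min_bt_size=4):
    """Which binary-tree splits are still considered for a QTBT node, following
    the rules and example parameter values described above (sketch only)."""
    if max(width, height) > max_bt_size:    # e.g. a 128x128 quadtree leaf exceeds MaxBTSize
        return []
    if bt_depth >= max_bt_depth:            # MaxBTDepth reached
        return []
    splits = []
    if width > min_bt_size:                 # width equal to MinBTSize: no further horizontal split
        splits.append("horizontal")
    if height > min_bt_size:                # height equal to MinBTSize: no further vertical split
        splits.append("vertical")
    return splits

print(allowed_bt_splits(128, 128, 0))       # [] -> only quadtree splitting is possible
print(allowed_bt_splits(64, 64, 0))         # ['horizontal', 'vertical']
print(allowed_bt_splits(64, 64, 4))         # [] -> MaxBTDepth reached
```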
Fig. 5 (left side) illustrates an example of block segmentation by using QTBT, and fig. 5 (right side) illustrates a corresponding tree representation. The solid line represents a quadtree split and the dashed line represents a binary tree split. In each partition (i.e., non-leaf) node of the binary tree, a flag would be signaled to indicate which partition type (i.e., horizontal or vertical) to use, where 0 represents a horizontal partition and 1 represents a vertical partition. For quadtree partitioning, the partition type need not be specified, because quadtree partitioning always divides one block horizontally and vertically to generate 4 sub-blocks of the same size.
In addition, QTBT schemes support the ability of luminance and chrominance to have separate QTBT structures. Currently, for P-stripes and B-stripes, the luminance and chrominance CTBs in one CTU share the same QTBT structure. However, for the I-slice, the luminance CTB is partitioned into CUs with a QTBT structure, and the chrominance CTB is partitioned into chrominance CUs with another QTBT structure. This means that a CU in an I slice consists of coding blocks of a luma component or coding blocks of two chroma components, and a CU in a P slice or B slice consists of coding blocks of all three color components.
In HEVC, inter prediction of small blocks is restricted so that 4 x 8 and 8 x 4 blocks do not support bi-prediction and 4 x 4 blocks do not support inter prediction in order to reduce motion compensated memory access. In JEM QTBT, these restrictions are removed.
2.1.4 Ternary tree for Versatile Video Coding (VVC)
Tree types other than quadtrees and binary trees are also supported, as proposed in JVET-D0117. In an implementation, two additional ternary tree (TT) partitions are introduced, namely horizontal and vertical center-side ternary trees, as shown in fig. 6 (d) and (e).
Fig. 6 shows: (a) quadtree partitioning, (b) vertical binary tree partitioning, (c) horizontal binary tree partitioning, (d) vertical center-side ternary tree partitioning, and (e) horizontal center-side ternary tree partitioning.
In some implementations, there are two levels of trees: the region tree (quadtree) and the prediction tree (binary tree or ternary tree). A CTU is first partitioned with a Region Tree (RT). An RT leaf may be further partitioned with a Prediction Tree (PT). A PT leaf can also be further partitioned with the PT until the maximum PT depth is reached. A PT leaf is the basic coding unit; for convenience, it is still referred to as a CU. A CU cannot be further divided. Both prediction and transform are applied to the CU in the same way as in JEM. The whole partition structure is called a "multi-type tree".
2.1.5 Partition structure in JVET-J0021
The tree structure known as the Multi-Type Tree (MTT) is a generalization of the QTBT. In QTBT, as shown in fig. 5, a Coding Tree Unit (CTU) is first partitioned with a quadtree structure. The quadtree leaf nodes are then further partitioned by a binary tree structure.
The basic structure of MTT consists of two types of tree nodes, Region Tree (RT) and Prediction Tree (PT), supporting nine types of partitioning, as shown in fig. 7.
Fig. 7 illustrates: (a) quadtree partitioning, (b) vertical binary tree partitioning, (c) horizontal binary tree partitioning, (d) vertical ternary tree partitioning, (e) horizontal ternary tree partitioning, (f) horizontal-up asymmetric binary tree partitioning, (g) horizontal-down asymmetric binary tree partitioning, (h) vertical-left asymmetric binary tree partitioning, and (i) vertical-right asymmetric binary tree partitioning.
The region tree may recursively divide a CTU into square blocks down to region leaf nodes of size 4x4. At each node of the region tree, a prediction tree may be formed from one of three tree types: Binary Tree (BT), Ternary Tree (TT), and Asymmetric Binary Tree (ABT). In the PT partitioning, quadtree splitting is prohibited in branches of the prediction tree. As in JEM, the luma tree and the chroma tree are separated in I slices. The signaling methods for RT and PT are shown in fig. 8.
2.2 inter prediction in HEVC/H.265
Each inter-predicted PU has motion parameters for one or two reference picture lists. The motion parameters include a motion vector and a reference picture index. The use of one of the two reference picture lists may also be signaled using inter_pred_idc. Motion vectors may be explicitly coded as deltas relative to predictors, and this coding mode is referred to as Advanced Motion Vector Prediction (AMVP) mode.
When a CU is encoded in skip mode, one PU is associated with the CU and has no significant residual coefficients, no encoded motion vector delta, or reference picture index. A Merge mode is specified by which the motion parameters of the current PU can be obtained from neighboring PUs (including spatial and temporal candidates). The Merge mode may be applied to any inter-predicted PU, not just skip mode. Another option for the Merge mode is explicit transmission of motion parameters, where motion vectors, reference picture indices for each reference picture list, and the use of reference picture lists are all explicitly signaled in each PU.
When the signaling indicates that one of the two reference picture lists is to be used, a PU is generated from one sample block. This is called "unidirectional prediction". Unidirectional prediction is available for both P-stripes and B-stripes.
When the signaling indicates that two reference picture lists are to be used, a PU is generated from two sample blocks. This is called "bi-prediction". Bi-directional prediction is only available for B-stripes.
Details of inter prediction modes specified in HEVC are provided below. The description will start from the Merge mode.
2.2.1 Merge mode
2.2.1.1 derivation of candidates for Merge mode
When a PU is predicted using the Merge mode, an index to an entry in the Merge candidate list is parsed from the bitstream and used to retrieve motion information. The structure of this list is specified in the HEVC standard and can be summarized in the following order of steps:
step 1: initial candidate derivation
Step 1.1: spatial candidate derivation
Step 1.2: redundancy check of airspace candidates
Step 1.3: time domain candidate derivation
Step 2: additional candidate inserts
Step 2.1: creation of bi-prediction candidates
Step 2.2: insertion of zero motion candidates
These steps are also schematically depicted in fig. 9. For spatial Merge candidate derivation, a maximum of four Merge candidates are selected among candidates located at five different positions. For temporal Merge candidate derivation, at most one Merge candidate is selected from two candidates. Since the number of candidates per PU is assumed to be constant at the decoder, additional candidates are generated when the number of candidates does not reach the maximum number of Merge candidates signaled in the slice header (MaxNumMergeCand). Since the number of candidates is constant, the index of the best Merge candidate is encoded using truncated unary binarization (TU). If the size of a CU is equal to 8, all PUs of the current CU share a single Merge candidate list, which is identical to the Merge candidate list of the 2Nx2N prediction unit.
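A high-level sketch of this ordered list construction is shown below; the spatial and temporal candidates are assumed to have been derived already by the steps detailed in the following subsections, and candidates are modeled as simple (mv, ref_idx) tuples for illustration.

```python
def build_merge_list(spatial_cands, temporal_cand, max_num_merge_cand=5):
    """Assemble a Merge candidate list following the step order above (sketch).

    Step 1: up to four non-redundant spatial candidates, then at most one
    temporal candidate.  Step 2: additional candidates (combined bi-predictive
    candidates for B slices are omitted here) and zero-motion candidates until
    MaxNumMergeCand entries are reached."""
    merge_list = []
    for cand in spatial_cands:
        if cand not in merge_list and len(merge_list) < 4:   # redundancy check
            merge_list.append(cand)
    if temporal_cand is not None and len(merge_list) < max_num_merge_cand:
        merge_list.append(temporal_cand)
    ref_idx = 0
    while len(merge_list) < max_num_merge_cand:
        merge_list.append(((0, 0), ref_idx))   # zero motion, increasing reference index
        ref_idx += 1
    return merge_list

# Example with two identical spatial candidates (one is pruned) and no temporal one.
print(build_merge_list([((4, 0), 0), ((4, 0), 0), ((-8, 2), 1)], None))
```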
The operations associated with the above steps are described in detail below.
2.2.1.2 spatial candidate derivation
In the derivation of the spatial Merge candidates, a maximum of four Merge candidates are selected among candidates located at the positions shown in fig. 10. The order of derivation is A1, B1, B0, A0, and B2. Position B2 is considered only when any PU of positions A1, B1, B0, A0 is not available (e.g., because it belongs to another slice or tile) or is intra coded. After the candidate at position A1 is added, the addition of the remaining candidates is subject to a redundancy check, which ensures that candidates with the same motion information are excluded from the list, thereby improving coding efficiency. To reduce computational complexity, not all possible candidate pairs are considered in the mentioned redundancy check. Instead, only the pairs linked with an arrow in fig. 11 are considered, and a candidate is added to the list only if the corresponding candidate used for the redundancy check does not have the same motion information. Another source of duplicate motion information is the "second PU" associated with partitions different from 2Nx2N. As an example, fig. 12 depicts the second PU for the cases of Nx2N and 2NxN, respectively. When the current PU is partitioned as Nx2N, the candidate at position A1 is not considered for list construction. In some embodiments, adding this candidate could lead to two prediction units with the same motion information, which is redundant for having only one PU in a coding unit. Likewise, position B1 is not considered when the current PU is partitioned as 2NxN.
2.2.1.3 time-domain candidate derivation
In this step, only one candidate is added to the list. In particular, in the derivation of this temporal Merge candidate, a scaled motion vector is derived based on the collocated PU belonging to the picture that has the smallest picture order count (POC) difference from the current picture within the given reference picture list. The reference picture list used to derive the collocated PU is explicitly signaled in the slice header. The dashed line in fig. 13 illustrates how the scaled motion vector of the temporal Merge candidate is obtained: it is scaled from the motion vector of the collocated PU using the POC distances tb and td, where tb is defined as the POC difference between the reference picture of the current picture and the current picture, and td is defined as the POC difference between the reference picture of the collocated picture and the collocated picture. The reference picture index of the temporal Merge candidate is set equal to zero. A practical realization of the scaling process is described in the HEVC specification. For a B slice, two motion vectors (one for reference picture list 0 and the other for reference picture list 1) are derived and combined to form the bi-predictive Merge candidate. Fig. 13 illustrates the motion vector scaling for the temporal Merge candidate.
In the collocated PU (Y) belonging to the reference frame, the position for the temporal candidate is selected between candidates C0 and C1, as shown in fig. 14. If the PU at position C0 is not available, is intra coded, or is outside the current CTU, position C1 is used. Otherwise, position C0 is used in the derivation of the temporal Merge candidate.
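The POC-distance scaling described above can be sketched as below; this is a simplified floating-point illustration of the idea, not the bit-exact fixed-point procedure defined in the HEVC specification.

```python
def scale_mv(mv, tb, td):
    """Scale a collocated motion vector by the ratio of POC distances tb/td
    (simplified sketch of the temporal scaling described above).

    tb: POC difference between the current picture and its reference picture.
    td: POC difference between the collocated picture and its reference picture."""
    if td == 0:
        return mv
    return (round(mv[0] * tb / td), round(mv[1] * tb / td))

# A collocated MV of (16, -8) with tb = 2 and td = 4 scales to (8, -4).
print(scale_mv((16, -8), tb=2, td=4))
```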
2.2.1.4 additional candidate insertions
Besides spatial and temporal Merge candidates, there are two additional types of Merge candidates: combined bi-predictive Merge candidates and zero Merge candidates. Combined bi-predictive Merge candidates are generated by utilizing the spatial and temporal Merge candidates, and they are used for B slices only. A combined bi-predictive candidate is generated by combining the first-reference-picture-list motion parameters of an initial candidate with the second-reference-picture-list motion parameters of another candidate. If these two tuples provide different motion hypotheses, they form a new bi-predictive candidate. As an example, fig. 15 depicts the case where two candidates in the original list (on the left), which have mvL0 and refIdxL0 or mvL1 and refIdxL1, are used to create a combined bi-predictive Merge candidate added to the final list (on the right). There are numerous rules regarding the combinations that are considered to generate these additional Merge candidates.
Zero motion candidates are inserted to fill the remaining entries in the Merge candidate list and therefore reach the MaxNumMergeCand capacity. These candidates have zero spatial displacement and a reference picture index that starts from zero and increases every time a new zero motion candidate is added to the list. The number of reference frames used by these candidates is one and two for uni-directional and bi-directional prediction, respectively. Finally, no redundancy check is performed on these candidates.
2.2.1.5 motion estimation regions processed in parallel
In order to speed up the encoding process, motion estimation may be performed in parallel, thereby deriving motion vectors for all prediction units within a given region at the same time. Deriving Merge candidates from a spatial neighborhood may interfere with parallel processing because one prediction unit cannot derive motion parameters from neighboring PUs before the relevant motion estimation is completed. In order to mitigate the trade-off between coding efficiency and processing delay, HEVC defines a Motion Estimation Region (MER). The syntax element "log2_parallel_merge_level_minus2" may be used to signal the size of the MER in the picture parameter set as described below. When defining MERs, the Merge candidates that fall into the same region are marked as unavailable and are therefore not considered in list construction.
Picture parameter set primitive byte sequence payload (RBSP) syntax
Universal picture parameter set RBSP syntax
pic_parameter_set_rbsp( ) {                              Descriptor
    pps_pic_parameter_set_id                             ue(v)
    pps_seq_parameter_set_id                             ue(v)
    dependent_slice_segments_enabled_flag                u(1)
    pps_scaling_list_data_present_flag                   u(1)
    if( pps_scaling_list_data_present_flag )
        scaling_list_data( )
    lists_modification_present_flag                      u(1)
    log2_parallel_merge_level_minus2                     ue(v)
    slice_segment_header_extension_present_flag          u(1)
    pps_extension_present_flag                           u(1)
    rbsp_trailing_bits( )
}
The value of log2_parallel_merge_level_minus2 plus 2 specifies the value of the variable Log2ParMrgLevel, which is used in the derivation process for luma motion vectors for the Merge mode specified in clause 8.5.3.2.2 and in the derivation process for spatial Merge candidates specified in clause 8.5.3.2.3. The value of log2_parallel_merge_level_minus2 shall be in the range of 0 to CtbLog2SizeY - 2, inclusive.
The variable Log2ParMrgLevel is derived as follows:
Log2ParMrgLevel = log2_parallel_merge_level_minus2 + 2    (7-37)
NOTE 3: The value of Log2ParMrgLevel indicates the built-in capability of parallel derivation of the Merge candidate lists. For example, when Log2ParMrgLevel is equal to 6, the Merge candidate lists for all Prediction Units (PUs) and Coding Units (CUs) contained in a 64×64 block can be derived in parallel.
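The note above can be restated as a simple region check; the sketch below assumes block positions are given as (x, y) luma-sample coordinates and is illustrative only.

```python
def in_same_mer(cur_xy, neigh_xy, log2_par_mrg_level):
    """True if a neighboring position falls in the same Motion Estimation Region
    (MER) as the current block, in which case the neighboring Merge candidate is
    treated as unavailable during list construction (sketch of the rule above)."""
    return (cur_xy[0] >> log2_par_mrg_level == neigh_xy[0] >> log2_par_mrg_level and
            cur_xy[1] >> log2_par_mrg_level == neigh_xy[1] >> log2_par_mrg_level)

# With Log2ParMrgLevel = 6, each 64x64 region is processed in parallel.
print(in_same_mer((70, 70), (65, 66), 6))   # True:  neighbor marked unavailable
print(in_same_mer((70, 70), (63, 66), 6))   # False: neighbor may be used
```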
2.2.2 Motion vector prediction in AMVP mode
Motion vector prediction exploits the spatial-temporal correlation of motion vectors with neighboring PUs, and it is used for explicit transmission of motion parameters. A motion vector candidate list is first constructed by checking the availability of the left and above spatially neighboring PU positions and of the temporally neighboring PU positions, removing redundant candidates, and adding zero vectors to make the candidate list a constant length. The encoder can then select the best predictor from the candidate list and transmit the corresponding index indicating the chosen candidate. Similar to Merge index signaling, the index of the best motion vector candidate is encoded using a truncated unary code; the maximum value to be encoded in this case is 2. In the following sections, the derivation process of the motion vector prediction candidates is described in detail.
2.2.2.1 derivation of motion vector prediction candidates
Fig. 16 summarizes the derivation of motion vector prediction candidates.
In motion vector prediction, two types of motion vector candidates are considered: spatial motion vector candidates and temporal motion vector candidates. For spatial motion vector candidate derivation, two motion vector candidates are derived based on the motion vector of each PU located at five different positions shown in fig. 11.
For the derivation of temporal motion vector candidates, one motion vector candidate is selected from two candidates, which are derived based on two different collocated positions. After the first list of spatial-temporal candidates is made, duplicated motion vector candidates in the list are removed. If the number of potential candidates is larger than two, motion vector candidates whose reference picture index within the associated reference picture list is larger than 1 are removed from the list. If the number of spatial-temporal motion vector candidates is smaller than two, additional zero motion vector candidates are added to the list.
2.2.2.2 spatial motion vector candidates
In the derivation of spatial motion vector candidates, a maximum of two candidates are considered among five potential candidates, which are derived from PUs located at the positions depicted in fig. 11, those positions being the same as those of motion Merge. The order of derivation for the left side of the current PU is defined as A0, A1, scaled A0, scaled A1. The order of derivation for the above side of the current PU is defined as B0, B1, B2, scaled B0, scaled B1, scaled B2. For each side there are therefore four cases that can be used as motion vector candidates, two cases not requiring spatial scaling and two cases where spatial scaling is used. The four different cases are summarized as follows:
-no spatial scaling
(1) Identical reference picture list and identical reference picture index (identical POC)
(2) Different reference picture lists, but the same reference picture (same POC)
-spatial scaling
(3) The same reference picture list, but different reference pictures (different POCs)
(4) Different reference picture lists, and different reference pictures (different POCs)
The no-spatial-scaling cases are checked first, followed by the spatial-scaling cases. Spatial scaling is considered when the POC differs between the reference picture of the neighboring PU and that of the current PU, regardless of the reference picture list. If all PUs of the left candidates are not available or are intra coded, scaling of the above motion vector is allowed to help the parallel derivation of the left and above MV candidates. Otherwise, spatial scaling is not allowed for the above motion vector.
In a spatial scaling process, the motion vector of the neighboring PU is scaled in a similar manner as for temporal scaling, as depicted in fig. 17. The main difference is that the reference picture list and index of the current PU are given as input; the actual scaling process is the same as that of temporal scaling.
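The four cases listed above reduce to a check of whether the neighboring candidate points to the same reference picture as the current PU; the small sketch below makes that classification explicit using a simplified, not bit-exact, scaling, with illustrative parameter names.

```python
def spatial_amvp_candidate(neigh_mv, neigh_ref_poc, cur_ref_poc, cur_poc):
    """Return the spatial AMVP candidate MV per the four cases above (sketch).

    Cases (1)/(2): the neighboring PU refers to the same picture (same POC), so
    its MV is reused directly, whichever reference picture list it comes from.
    Cases (3)/(4): different reference pictures (different POC), so the MV is
    scaled by the ratio of POC distances (simplified, not bit-exact scaling)."""
    if neigh_ref_poc == cur_ref_poc:
        return neigh_mv
    tb = cur_poc - cur_ref_poc      # POC distance from the current picture to its reference
    td = cur_poc - neigh_ref_poc    # POC distance from the current picture to the neighbor's reference
    return (round(neigh_mv[0] * tb / td), round(neigh_mv[1] * tb / td))

print(spatial_amvp_candidate((16, -8), neigh_ref_poc=8, cur_ref_poc=8, cur_poc=10))  # (16, -8)
print(spatial_amvp_candidate((16, -8), neigh_ref_poc=6, cur_ref_poc=8, cur_poc=10))  # (8, -4)
```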
2.2.2.3 temporal motion vector candidates
All derivation processes of temporal Merge candidates are the same as those of spatial motion vector candidates except for derivation of reference picture indexes (see, e.g., fig. 6). The reference picture index is signaled to the decoder.
2.2.2.4 Signaling of AMVP information
For AMVP mode, four parts may be signaled in the bitstream, including a prediction direction, a reference index, an MVD, and an MV prediction candidate index.
Syntax table:
motion vector difference syntax
mvd_coding( x0, y0, refList ) {                          Descriptor
    abs_mvd_greater0_flag[ 0 ]                           ae(v)
    abs_mvd_greater0_flag[ 1 ]                           ae(v)
    if( abs_mvd_greater0_flag[ 0 ] )
        abs_mvd_greater1_flag[ 0 ]                       ae(v)
    if( abs_mvd_greater0_flag[ 1 ] )
        abs_mvd_greater1_flag[ 1 ]                       ae(v)
    if( abs_mvd_greater0_flag[ 0 ] ) {
        if( abs_mvd_greater1_flag[ 0 ] )
            abs_mvd_minus2[ 0 ]                          ae(v)
        mvd_sign_flag[ 0 ]                               ae(v)
    }
    if( abs_mvd_greater0_flag[ 1 ] ) {
        if( abs_mvd_greater1_flag[ 1 ] )
            abs_mvd_minus2[ 1 ]                          ae(v)
        mvd_sign_flag[ 1 ]                               ae(v)
    }
}
2.3 New inter prediction method in Joint Exploration Model (JEM)
2.3.1 sub-CU based motion vector prediction
In the JEM with QTBT, each CU can have at most one set of motion parameters for each prediction direction. Two sub-CU level motion vector prediction methods are considered in the encoder by splitting a large CU into sub-CUs and deriving motion information for all the sub-CUs of the large CU. The Alternative Temporal Motion Vector Prediction (ATMVP) method allows each CU to fetch multiple sets of motion information from multiple blocks smaller than the current CU in the collocated reference picture. In the Spatial-Temporal Motion Vector Prediction (STMVP) method, motion vectors of the sub-CUs are derived recursively by using the temporal motion vector predictor and spatial neighboring motion vectors.
In order to maintain a more accurate motion field for sub-CU motion prediction, motion compression of the reference frame is currently disabled.
2.3.1.1 Alternative temporal motion vector prediction
In the Alternative Temporal Motion Vector Prediction (ATMVP) method, the temporal motion vector prediction (TMVP) is modified by fetching multiple sets of motion information (including motion vectors and reference indices) from blocks smaller than the current CU. As shown in fig. 18, the sub-CUs are square N×N blocks (N is set to 4 by default).
ATMVP predicts the motion vectors of sub-CUs within a CU in two steps. The first step is to identify the corresponding block in the reference picture with a so-called time domain vector. The reference picture is called a motion source picture. The second step is to divide the current CU into sub-CUs and acquire a motion vector and a reference index of each sub-CU from a block corresponding to each sub-CU, as shown in fig. 18.
In the first step, a reference picture and the corresponding block are determined from the motion information of the spatial neighboring blocks of the current CU. To avoid a repetitive scanning process of the neighboring blocks, the first Merge candidate in the Merge candidate list of the current CU is used. The first available motion vector and its associated reference index are set to be the temporal vector and the index of the motion source picture. This way, in ATMVP, the corresponding block may be more accurately identified than in TMVP, wherein the corresponding block (sometimes called a collocated block) is always in a bottom-right or center position relative to the current CU. In one example, if the first Merge candidate is from the left neighboring block (i.e., A1 in fig. 19), the associated MV and reference picture are used to identify the source block and the source picture.
Fig. 19 shows an example of identification of source blocks and source pictures.
In the second step, a corresponding block of each sub-CU is identified by the temporal vector in the motion source picture, by adding the temporal vector to the coordinates of the current CU. For each sub-CU, the motion information of its corresponding block (the smallest motion grid covering the center sample) is used to derive the motion information for the sub-CU. After the motion information of a corresponding N×N block is identified, it is converted to the motion vectors and reference indices of the current sub-CU, in the same way as the TMVP of HEVC, wherein motion scaling and other procedures apply. For example, the decoder checks whether the low-delay condition is fulfilled (i.e., the POCs of all reference pictures of the current picture are smaller than the POC of the current picture) and possibly uses the motion vector MVx (the motion vector corresponding to reference picture list X) to predict the motion vector MVy (with X equal to 0 or 1 and Y equal to 1-X) for each sub-CU.
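The two ATMVP steps can be outlined in the following sketch; the motion_source callable stands in for access to the motion field of the motion source picture and is an assumption for illustration, not a decoder API.

```python
def atmvp_sub_cu_motion(cu_x, cu_y, cu_w, cu_h, temporal_vector, motion_source, n=4):
    """Outline of ATMVP as described above (illustrative sketch).

    Step 1 (done by the caller): temporal_vector and the motion source picture
    are taken from the first Merge candidate of the current CU.
    Step 2: for each NxN sub-CU, fetch the motion information stored at the
    displaced center position in the motion source picture."""
    motion = {}
    for dy in range(0, cu_h, n):
        for dx in range(0, cu_w, n):
            cx = cu_x + dx + n // 2 + temporal_vector[0]
            cy = cu_y + dy + n // 2 + temporal_vector[1]
            motion[(dx, dy)] = motion_source(cx, cy)   # (mv, ref_idx) of the covering block
    return motion

# Toy motion field: every position returns the same motion information.
dummy_source = lambda x, y: ((2, -2), 0)
print(atmvp_sub_cu_motion(64, 32, 8, 8, temporal_vector=(5, 3), motion_source=dummy_source))
```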
2.3.1.2 space-time motion vector prediction
In this approach, the motion vectors of the sub-CUs are recursively derived in raster scan order. Fig. 20 illustrates this concept. Consider an 8 x 8 CU that contains four 4 x 4 sub-CUs a, B, C, and D. Adjacent 4 x 4 blocks in the current frame are labeled a, b, c, and d.
The motion derivation for sub-CU A starts by identifying its two spatial neighbors. The first neighbor is the N×N block above sub-CU A (block c). If this block c is not available or is intra coded, the other N×N blocks above sub-CU A are checked (from left to right, starting at block c). The second neighbor is the block to the left of sub-CU A (block b). If block b is not available or is intra coded, the other blocks to the left of sub-CU A are checked (from top to bottom, starting at block b). The motion information obtained from the neighboring blocks for each list is scaled to the first reference frame of the given list. Next, the Temporal Motion Vector Predictor (TMVP) of sub-block A is derived by following the same procedure as the TMVP derivation specified in HEVC. The motion information of the collocated block at position D is fetched and scaled accordingly. Finally, after retrieving and scaling the motion information, all available motion vectors (up to 3) are averaged separately for each reference list. The averaged motion vector is assigned as the motion vector of the current sub-CU.
Fig. 20 shows an example of one CU with four sub-blocks (a-D) and their neighboring blocks (a-D).
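A per-sub-CU sketch of the averaging described above is given below; availability checks and scaling to the first reference frame of the list are assumed to have been done already, and the function name is illustrative.

```python
def stmvp_sub_cu_mv(above_mv, left_mv, tmvp_mv):
    """Average the up-to-three available motion vectors for one sub-CU (above
    spatial neighbor, left spatial neighbor, temporal predictor), as described
    above; unavailable inputs are passed as None (illustrative sketch)."""
    available = [mv for mv in (above_mv, left_mv, tmvp_mv) if mv is not None]
    if not available:
        return None
    return (sum(mv[0] for mv in available) / len(available),
            sum(mv[1] for mv in available) / len(available))

# For sub-CU A: above neighbor c, left neighbor b (unavailable here), and TMVP at D.
print(stmvp_sub_cu_mv((4, 0), None, (8, 4)))   # -> (6.0, 2.0)
```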
2.3.1.3 sub-CU motion prediction mode signaling
The sub-CU modes are enabled as additional Merge candidates and no additional syntax element is needed to signal these modes. Two additional Merge candidates are added to the Merge candidate list of each CU to represent the ATMVP mode and the STMVP mode. Up to seven Merge candidates are used if the sequence parameter set indicates that ATMVP and STMVP are enabled. The encoding logic of the additional Merge candidates is the same as that of the Merge candidates in the HM, which means that, for each CU in a P slice or B slice, two more RD checks are needed for the two additional Merge candidates.
In JEM, all bins of the Merge index are context coded by CABAC, whereas in HEVC only the first bin is context coded and the remaining bins are context bypass coded.
2.3.2 adaptive motion vector difference resolution
In HEVC, Motion Vector Differences (MVDs) (between the motion vector of a PU and the predicted motion vector) are signaled in units of quarter luma samples when use_integer_mv_flag is equal to 0 in the slice header. In JEM, a Locally Adaptive Motion Vector Resolution (LAMVR) is introduced. In JEM, an MVD can be coded in units of quarter luma samples, integer luma samples, or four luma samples. The MVD resolution is controlled at the Coding Unit (CU) level, and MVD resolution flags are conditionally signaled for each CU that has at least one non-zero MVD component.
For a CU with at least one non-zero MVD component, the first flag will signal to indicate whether quarter-luma sample MV precision is used in the CU. When the first flag (equal to 1) indicates that quarter-luma sample MV precision is not used, the other flag is signaled to indicate whether integer-luma sample MV precision or four-luma sample MV precision is used.
When the first MVD resolution flag of a CU is zero or not encoded for a CU (meaning that all MVDs in the CU are zero), the CU uses quarter luma sample MV resolution. When a CU uses integer-luminance sample MV precision or four-luminance sample MV precision, the MVPs in the AMVP candidate list of the CU will be rounded to the corresponding precision.
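To make the rounding step concrete, the following is a minimal sketch of how an AMVP predictor might be rounded to the MV precision selected by the LAMVR flags. It assumes that MVs are stored in quarter-luma-sample units and that rounding is to the nearest multiple (away from zero on ties); the exact rounding convention of the codec may differ.

```cpp
#include <cstdlib>

// MVs are assumed to be stored in quarter-luma-sample units (1 unit = 1/4 sample).
struct Mv { int x; int y; };

// Round one component to a multiple of 'step' quarter-sample units
// (step = 4 for integer-sample precision, step = 16 for four-sample precision).
static int roundToStep(int v, int step) {
    int sign = (v >= 0) ? 1 : -1;
    return sign * ((std::abs(v) + step / 2) / step) * step;
}

// Round an AMVP predictor to the precision selected by the LAMVR flags.
Mv roundMvpToMvdResolution(Mv mvp, bool integerPel, bool fourPel) {
    if (!integerPel && !fourPel) return mvp;   // quarter-sample precision: unchanged
    int step = fourPel ? 16 : 4;               // in quarter-sample units
    return { roundToStep(mvp.x, step), roundToStep(mvp.y, step) };
}
```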
In the encoder, CU-level RD checks are used to determine which MVD resolution is to be used for a CU. That is, the CU-level RD check is performed three times, once for each MVD resolution. To speed up the encoder, the following encoding schemes are applied in JEM.
During the RD check of a CU with normal quarter luma sample MVD resolution, the motion information of the current CU (at integer luma sample accuracy) is stored. When the RD check is performed for the same CU with integer luma sample and 4 luma sample MVD resolution, the stored (rounded) motion information is used as the starting point for further small-range motion vector refinement, so that the time-consuming motion estimation process is not repeated three times.
The RD check of the CU with 4 luma sample MVD resolution is conditionally invoked. For a CU, when the RD checking cost of the integer-luminance sample MVD resolution is much greater than the RD checking cost of the quarter-luminance sample MVD resolution, the RD checking of the 4-luminance sample MVD resolution of the CU will be skipped.
2.3.3 Pattern matching motion vector derivation
The Pattern Matching Motion Vector Derivation (PMMVD) mode is a special Merge mode based on Frame Rate Up Conversion (FRUC) techniques. In this mode, the motion information of a block is not signaled but is derived at the decoder side.
For a CU, the FRUC flag is signaled when its Merge flag is true. When the FRUC flag is false, a Merge index is signaled and the regular Merge mode is used. When the FRUC flag is true, an additional FRUC mode flag is signaled to indicate which method (bilateral matching or template matching) is to be used to derive the motion information of the block.
At the encoder side, the decision on whether to use FRUC Merge mode for a CU is based on RD cost selection, as is done for the normal Merge candidate. That is, the two matching modes (bilateral matching and template matching) are both checked for the CU by using RD cost selection. The mode leading to the lowest cost is further compared to the other CU modes. If the FRUC matching mode is the most efficient one, the FRUC flag is set to true for the CU and the related matching mode is used.
The motion derivation process in FRUC Merge mode has two steps: a CU-level motion search is performed first, followed by sub-CU level motion refinement. At the CU level, an initial motion vector is derived for the whole CU based on bilateral matching or template matching. First, a list of MV candidates is generated, and the candidate that leads to the lowest matching cost is selected as the starting point for further CU-level refinement. Then a local search based on bilateral matching or template matching is performed around the starting point, and the MV resulting in the minimum matching cost is taken as the MV for the whole CU. Subsequently, the motion information is further refined at the sub-CU level, with the derived CU motion vector as the starting point.
For example, the following derivation process is performed for the motion information derivation of a W×H CU. In the first stage, the MV of the whole W×H CU is derived. In the second stage, the CU is further split into M×M sub-CUs. The value of M is calculated as shown below, where D is a predefined splitting depth that is set to 3 by default in JEM. Then the MV of each sub-CU is derived.
M = max{4, min{W, H} / 2^D}
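A minimal sketch of the sub-CU size computation implied by the equation above; the default splitting depth D = 3 follows the text, and the function name is illustrative.

```cpp
#include <algorithm>

// Sub-CU size for FRUC motion refinement: M = max(4, min(W, H) >> D),
// with the predefined splitting depth D = 3 as the JEM default per the text above.
int subCuSize(int width, int height, int depth = 3) {
    return std::max(4, std::min(width, height) >> depth);
}
// For example, a 64x32 CU gives M = max(4, 32 >> 3) = 4, i.e. 4x4 sub-CUs.
```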
As shown in fig. 21, bilateral matching is used to derive the motion information of the current CU by finding the closest match between two blocks along the motion trajectory of the current CU in two different reference pictures. Under the assumption of a continuous motion trajectory, the motion vectors MV0 and MV1 pointing to the two reference blocks are proportional to the temporal distances between the current picture and the two reference pictures (i.e., TD0 and TD1). As a special case, when the current picture is temporally located between the two reference pictures and the temporal distances from the current picture to the two reference pictures are the same, the bilateral matching becomes a mirror-based bi-directional MV.
As shown in fig. 22, the template matching is used to derive the motion information of the current CU by finding the closest match between the template in the current picture (top and/or left neighboring block of the current CU) and the block in the reference picture (same size as the template). Template matching is also applied to AMVP mode in addition to FRUC Merge mode described above. In JEM, as in HEVC, AMVP has two candidates. Using the template matching method, new candidates are derived. If the candidate newly derived from the template matching is different from the first existing AMVP candidate, it is inserted at the very beginning of the AMVP candidate list and then the list size is set to 2 (i.e., the second existing AMVP candidate is removed). When applied to AMVP mode, only CU-level search is applied.
2.3.3.1 CU level MV candidate set
The MV candidate set at CU level includes:
(i) The original AMVP candidates, if the current CU is in AMVP mode,
(ii) All Merge candidates,
(iii) Several MVs from the interpolated MV field (described in 2.3.3.3), and
(iv) Top and left neighboring motion vectors.
When bilateral matching is used, each valid MV of a Merge candidate is used as an input to generate an MV pair under the assumption of bilateral matching. For example, one valid MV of a Merge candidate in reference list A is (MVa, refa). Then the reference picture refb of its paired bilateral MV is found in the other reference list B, such that refa and refb are temporally located on different sides of the current picture. If such a reference refb is not available in reference list B, refb is determined as a reference that is different from refa and whose temporal distance to the current picture is the minimal one in list B. After refb is determined, MVb is derived by scaling MVa based on the temporal distance between the current picture and refa and the temporal distance between the current picture and refb.
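Once refb is determined, the scaling of MVa into MVb can be sketched as follows, assuming signed POC distances and plain integer scaling; the codec would use its fixed-point TMVP-style scaling with clipping.

```cpp
struct Mv { int x; int y; };

// Derive the paired bilateral MVb by scaling MVa along the assumed linear motion
// trajectory. tdA = POC(current) - POC(refa) and tdB = POC(current) - POC(refb),
// both signed and assumed non-zero, so references on opposite temporal sides of
// the current picture yield an MVb pointing in the opposite direction.
Mv deriveBilateralPair(Mv mvA, int tdA, int tdB) {
    return { mvA.x * tdB / tdA, mvA.y * tdB / tdA };
}
```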
Four MVs from the interpolated MV field are also added to the CU-level candidate list. More specifically, MVs interpolated at the locations (0, 0), (W/2, 0), (0, H/2) and (W/2, H/2) of the current CU are added.
When FRUC is applied in AMVP mode, the original AMVP candidates are also added to the MV candidate set at the CU level.
At the CU level, at most 15 MVs of the AMVP CU and at most 13 MVs of the Merge CU may be added to the candidate list.
2.3.3.2 sub-CU level MV candidate set
The MV candidate set at the sub-CU level includes:
(i) The MV determined from the CU-level search,
(ii) Top, left side, upper left and upper right adjacent MVs,
(iii) Scaled versions of collocated MVs from reference pictures,
(iv) A maximum of 4 ATMVP candidates,
(v) Up to 4 STMVP candidates.
The scaled MV from the reference picture is derived as follows. All reference pictures in both lists are traversed. The MVs at the collocated positions of the sub-CUs in the reference picture are scaled to the reference of the starting CU level MVs.
ATMVP and STMVP candidates are limited to the first four. At the sub-CU level, a maximum of 17 MVs are added to the candidate list.
2.3.3.3 Generation of interpolated MV field
Before encoding a frame, an interpolated motion field is generated for the whole picture based on unilateral ME. The motion field may then be used later as a CU-level or sub-CU level MV candidate.
First, the motion field of each reference picture in both reference lists is traversed at the 4 x 4 block level. For each 4 x 4 block, if the motion associated with the block passes through a 4 x 4 block in the current picture (as shown in fig. 23) and that block has not been assigned any interpolated motion, the motion of the reference block is scaled to the current picture according to the temporal distances TD0 and TD1 (in the same way as the MV scaling of TMVP in HEVC) and the scaled motion is assigned to the block in the current frame. If no scaled MV is assigned to a 4 x 4 block, the block's motion is marked as unavailable in the interpolated motion field.
2.3.3.4 Interpolation and matching cost
When the motion vector points to a fractional sample position, motion compensated interpolation is required. To reduce complexity, bilinear interpolation is used for both bilateral and template matching instead of conventional 8-tap HEVC interpolation.
The calculation of the matching cost differs somewhat at the different steps. When a candidate is selected from the candidate set at the CU level, the matching cost is the sum of absolute differences (SAD) of bilateral matching or template matching. After the starting MV is determined, the matching cost C of the bilateral matching search at the sub-CU level is calculated as follows:
C = SAD + w × (|MVx - MVsx| + |MVy - MVsy|)
Here, w is a weighting factor that is empirically set to 4, and MV = (MVx, MVy) and MVs = (MVsx, MVsy) denote the current MV and the starting MV, respectively. SAD is still used as the matching cost of template matching at the sub-CU level search.
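The sub-CU level bilateral matching cost above can be sketched as a small helper; the SAD term is assumed to be computed elsewhere between the two matched blocks, and the default weight w = 4 follows the text.

```cpp
#include <cstdlib>

// C = SAD + w * (|MVx - MVsx| + |MVy - MVsy|), with w empirically set to 4.
int bilateralMatchCost(int sad, int mvX, int mvY, int startMvX, int startMvY, int w = 4) {
    return sad + w * (std::abs(mvX - startMvX) + std::abs(mvY - startMvY));
}
```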
In FRUC mode, the MV is derived by using luma samples only. The derived motion will be used for both luma and chroma in MC inter prediction. After the MV is determined, the final MC is performed using an 8-tap interpolation filter for luma and a 4-tap interpolation filter for chroma.
2.3.3.5 MV refinement
MV refinement is a pattern-based MV search, with the bilateral matching cost or the template matching cost as the criterion. In JEM, two search patterns are supported, the Unrestricted Center-Biased Diamond Search (UCBDS) and the adaptive cross search, for MV refinement at the CU level and the sub-CU level, respectively. For MV refinement at both the CU level and the sub-CU level, the MV is searched directly at quarter luma sample accuracy, followed by one-eighth luma sample MV refinement. The search range of the MV refinement for the CU step and the sub-CU step is set to 8 luma samples.
2.3.3.6 Selection of prediction direction in template matching FRUC Merge mode
In the bilateral matching Merge mode, bi-prediction is always applied, since the motion information of a CU is derived based on the closest match between two blocks along the motion trajectory of the current CU in two different reference pictures. There is no such limitation for the template matching Merge mode. In the template matching Merge mode, the encoder can choose among uni-prediction from list 0, uni-prediction from list 1, and bi-prediction for a CU. The selection is based on the template matching cost as follows:
If costBi <= factor × min(cost0, cost1),
then bi-prediction is used;
Otherwise, if cost0 <= cost1,
uni-prediction from list 0 is used;
Otherwise,
uni-prediction from list 1 is used;
where cost0 is the SAD of the list 0 template matching, cost1 is the SAD of the list 1 template matching, and costBi is the SAD of the bi-prediction template matching. The value of factor is equal to 1.25, which means that the selection process is biased toward bi-prediction. The inter prediction direction selection is only applied to the CU-level template matching process.
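The selection rule above can be written compactly as below; this is a sketch in which the factor of 1.25 is realized as an integer 5/4 comparison, which is an implementation assumption rather than the codec's exact arithmetic.

```cpp
// Template-matching-cost based selection of the inter prediction direction.
enum PredDir { BI_PRED, UNI_PRED_L0, UNI_PRED_L1 };

PredDir selectPredDir(int cost0, int cost1, int costBi) {
    // costBi <= 1.25 * min(cost0, cost1)  <=>  4 * costBi <= 5 * min(cost0, cost1)
    int minUni = (cost0 < cost1) ? cost0 : cost1;
    if (4 * costBi <= 5 * minUni) return BI_PRED;
    return (cost0 <= cost1) ? UNI_PRED_L0 : UNI_PRED_L1;
}
```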
2.3.4 decoder side motion vector refinement
In the bi-prediction operation, for prediction of one block region, two prediction blocks respectively formed of a Motion Vector (MV) of list 0 and a MV of list 1 are combined to form a single prediction signal. In the decoder-side motion vector refinement (DMVR) method, two motion vectors of bi-prediction are further refined by a bilateral template matching process. Bilateral template matching applied in the decoder is used to perform a distortion-based search between the bilateral template and reconstructed samples in the reference picture in order to obtain refined MVs without transmitting additional motion information.
In DMVR, the bilateral template is generated as a weighted combination (i.e., average) of two prediction blocks from list 0's initial MV0 and list 1's MV1, respectively. The template matching operation includes calculating a cost metric between the generated template and a sample region (around the initial prediction block) in the reference picture. For each of the two reference pictures, the MV that yields the smallest template cost is considered the updated MV of the list to replace the original MV. In JEM, nine MV candidates are searched for each list. Nine MV candidates include an original MV and 8 surrounding MVs, which have an offset of one luminance sample from the original MV in the horizontal or vertical direction or both. Finally, two new MVs (i.e., MV0 'and MV 1') shown in fig. 24 are used to generate the final bi-prediction result. The Sum of Absolute Differences (SAD) is used as the cost metric.
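The per-list candidate search just described can be sketched as follows: the initial MV and its eight one-luma-sample neighbors are evaluated against the bilateral template and the candidate with the smallest SAD is kept. The callback type and the quarter-sample MV unit are assumptions.

```cpp
#include <functional>

struct Mv { int x; int y; };

// Refine one list's MV by checking the nine DMVR candidates (the initial MV and
// its eight one-luma-sample offsets; 4 = one luma sample in quarter-sample units).
// 'sadVsTemplate' is an assumed callback that fetches the prediction at the given
// MV and returns its SAD against the bilateral template.
Mv dmvrRefineOneList(Mv initMv, const std::function<int(Mv)>& sadVsTemplate) {
    Mv best = initMv;
    int bestCost = sadVsTemplate(initMv);
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            if (dx == 0 && dy == 0) continue;
            Mv cand { initMv.x + 4 * dx, initMv.y + 4 * dy };
            int cost = sadVsTemplate(cand);
            if (cost < bestCost) { bestCost = cost; best = cand; }
        }
    }
    return best;
}
```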
DMVR is applied to the bi-predictive Merge mode without transmitting additional syntax elements, where one MV comes from a past reference picture and another MV comes from a future reference picture. In JEM, DMVR is not applied when LIC, affine motion, FRUC, or sub-CU Merge candidates are enabled for the CU.
2.3.5 local illumination Compensation
Local Illumination Compensation (LIC) is based on a linear model for illumination changes, using a scaling factor a and an offset b. It is enabled or disabled adaptively for each inter-mode coded Coding Unit (CU).
When IC is applied to a CU, the least squares error method is employed to derive parameters a and b by using neighboring samples of the current CU and their corresponding reference samples. More specifically, as shown in fig. 25, sub-sampled (2:1 sub-sampled) neighboring samples of the CU and corresponding samples (identified by motion information of the current CU or sub-CU) in the reference picture are used. IC parameters are derived and applied separately for each prediction direction.
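As an illustration of the least squares derivation, the scale a and offset b of the model cur ≈ a·ref + b can be fitted from the sub-sampled neighboring sample pairs as sketched below; the floating-point arithmetic is purely illustrative, since the codec derives the parameters with integer operations.

```cpp
#include <vector>

// Least-squares fit of the linear illumination model cur = a * ref + b from pairs
// of (reference neighboring sample, current neighboring sample).
void deriveIcParams(const std::vector<int>& refSamples,
                    const std::vector<int>& curSamples,
                    double& a, double& b) {
    const int n = static_cast<int>(refSamples.size());
    if (n == 0) { a = 1.0; b = 0.0; return; }
    double sumX = 0, sumY = 0, sumXX = 0, sumXY = 0;
    for (int i = 0; i < n; ++i) {
        sumX  += refSamples[i];
        sumY  += curSamples[i];
        sumXX += static_cast<double>(refSamples[i]) * refSamples[i];
        sumXY += static_cast<double>(refSamples[i]) * curSamples[i];
    }
    const double denom = n * sumXX - sumX * sumX;
    a = (denom != 0.0) ? (n * sumXY - sumX * sumY) / denom : 1.0;
    b = (sumY - a * sumX) / n;
}
```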
When a CU is encoded in the Merge mode, IC flags are copied from neighboring blocks in a similar manner to the motion information copy in Merge mode; otherwise, an IC flag is signaled to the CU to indicate whether LIC is applicable.
When IC is enabled for a picture, an additional CU-level RD check is needed to determine whether LIC is applied to a CU. When IC is enabled for a CU, the mean-removed sum of absolute differences (MR-SAD) and the mean-removed sum of absolute Hadamard-transformed differences (MR-SATD) are used, instead of SAD and SATD, for the integer-pel motion search and the fractional-pel motion search, respectively.
To reduce the encoding complexity, the following encoding scheme is applied in JEM: IC is disabled for the whole picture when there is no obvious illumination change between the current picture and its reference pictures. To identify this situation, histograms of the current picture and of every reference picture of the current picture are calculated at the encoder. If the histogram difference between the current picture and every reference picture of the current picture is smaller than a given threshold, IC is disabled for the current picture; otherwise, IC is enabled for the current picture.
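The picture-level decision described above can be sketched as follows; the sum of absolute bin differences is used here as the histogram difference measure, which is an assumption, as is the threshold parameter.

```cpp
#include <cstddef>
#include <vector>

// IC is disabled for the current picture when the histogram of every reference
// picture differs from the current picture's histogram by less than a threshold.
bool enableIcForPicture(const std::vector<long long>& curHist,
                        const std::vector<std::vector<long long>>& refHists,
                        long long threshold) {
    for (const auto& refHist : refHists) {
        long long diff = 0;
        for (std::size_t i = 0; i < curHist.size() && i < refHist.size(); ++i) {
            long long d = curHist[i] - refHist[i];
            diff += (d < 0) ? -d : d;
        }
        if (diff >= threshold) return true;   // significant illumination change found
    }
    return false;                             // no significant change: disable IC
}
```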
2.3.6 Merge/skip mode with bi-directional matching refinement
The Merge candidate list is first constructed by inserting the motion vectors and reference indices of the spatially neighboring blocks and the temporally neighboring blocks into the candidate list with a redundancy check, until the number of available candidates reaches the maximum candidate size of 19. The Merge candidate list for the Merge/skip mode is constructed by inserting spatial candidates (fig. 26), temporal candidates, affine candidates, Advanced Temporal MVP (ATMVP) candidates, Spatial-Temporal MVP (STMVP) candidates, and the additional candidates used in HEVC (combined candidates and zero candidates) according to a predefined insertion order:
(1) Spatial candidates for blocks 1-4
(2) Extrapolated affine candidates for blocks 1-4
(3)ATMVP
(4)STMVP
(5) Virtual affine candidates
(6) Spatial candidates (block 5) (only used when the number of available candidates is less than 6)
(7) Extrapolated affine candidate (block 5)
(8) Temporal candidate (derived as in HEVC)
(9) Non-contiguous spatial candidates followed by extrapolated affine candidates (blocks 6 to 49)
(10) Combined candidates
(11) Zero candidates
Note that the IC flag is also inherited from the Merge candidates, except for STMVP and affine candidates. Moreover, among the first four spatial candidates, the ones with bi-prediction are inserted before the ones with uni-prediction.
2.3.7 JVET-K0161
In this proposal, a non-sub-block STMVP is proposed as a spatial-temporal Merge mode. The proposed method uses a collocated block, which is the same as in HEVC/JEM (only one picture, and no temporal vector here). The proposed method also checks upper and left spatial positions, which are adjusted in this proposal. Specifically, in order to check neighboring inter prediction information, at most two positions are checked for each of the above and left sides. The exact positions are shown in fig. 27.
Afar: (nPbW×5/2, -1), Amid: (nPbW/2, -1) (note: offsets of the above spatial blocks relative to the current block)
Lfar: (-1, nPbH×5/2), Lmid: (-1, nPbH/2) (note: offsets of the left spatial blocks relative to the current block)
If three reference inter prediction blocks are available, the average of the motion vectors of the above block, the left block, and the temporal block is calculated in the same way as in the BMS software implementation:
mvLX[0]=((mvLX_A[0]+mvLX_L[0]+mvLX_C[0])*43)/128
mvLX[1]=((mvLX_A[1]+mvLX_L[1]+mvLX_C[1])*43)/128
If only two or one of the inter prediction blocks are available, the average of the two MVs, or the single MV, is used.
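Following the formulas above, the averaging of the available MVs can be sketched as below: three available MVs use the ×43/128 approximation of a division by three, two MVs use a division by two, and a single MV is taken as-is. The container type and the handling of negative sums are assumptions.

```cpp
#include <vector>

struct Mv { int x; int y; };

// Average the available MVs (above, left, temporal) for one reference list.
Mv averageStmvp(const std::vector<Mv>& avail) {
    Mv sum { 0, 0 };
    for (const Mv& mv : avail) { sum.x += mv.x; sum.y += mv.y; }
    if (avail.size() == 3) { sum.x = sum.x * 43 / 128; sum.y = sum.y * 43 / 128; }
    else if (avail.size() == 2) { sum.x /= 2; sum.y /= 2; }
    return sum;   // with one MV (or none) the sum is returned unchanged
}
```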
2.3.8 JVET-K0135
In order to generate a smooth fine granularity motion field, fig. 28 gives a brief description of the planar motion vector prediction process.
Planar motion vector prediction is achieved by averaging horizontal and vertical linear interpolation on a 4x4 block basis as follows.
P(x,y) = (H×P_h(x,y) + W×P_v(x,y) + H×W) / (2×H×W)
W and H represent the width and height of the block. (x, y) is the coordinates of the current sub-block relative to the upper left sub-block. All distances are represented by the pixel distance divided by 4. P (x, y) is the motion vector of the current sub-block.
The horizontal prediction P_h(x,y) and the vertical prediction P_v(x,y) of position (x,y) are calculated as follows:
P_h(x,y) = (W-1-x)×L(-1,y) + (x+1)×R(W,y)
P_v(x,y) = (H-1-y)×A(x,-1) + (y+1)×B(x,H)
where L (-1, y) and R (W, y) are motion vectors of the 4x4 block to the left and right of the current block. A (x, -1) and B (x, H) are motion vectors of 4x4 blocks above and below the current block.
Reference motion information for left column and upper row neighboring blocks is derived from the spatial neighboring blocks of the current block.
The reference motion information for the right column and bottom row neighboring blocks is derived as follows.
Deriving motion information for lower right temporally neighboring 4 x 4 blocks
Motion vectors of the right column neighboring 4×4 blocks are calculated using the derived motion information of the lower right neighboring 4×4 blocks and the motion information of the upper right neighboring 4×4 blocks, as described in formula K1.
Motion vectors of the bottom row neighboring 4 x 4 blocks are calculated using the derived motion information of the bottom right neighboring 4 x 4 blocks and the motion information of the bottom left neighboring 4 x 4 blocks, as described in formula K2.
R(W,y) = ((H-y-1)×AR + (y+1)×BR) / H    (Formula K1)
B(x,H) = ((W-x-1)×BL + (x+1)×BR) / W    (Formula K2)
Where AR is the motion vector of the upper right spatially neighboring 4 x 4 block, BR is the motion vector of the lower right temporally neighboring 4 x 4 block, and BL is the motion vector of the lower left spatially neighboring 4 x 4 block.
Motion information obtained from neighboring blocks for each list is scaled to the first reference picture of the given list.
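The derivations above can be combined into a small per-sub-block sketch. Coordinates and distances are in 4x4-block units as stated; the left and above neighbor MVs L and A are assumed to be fetched from spatial neighbors by the caller, AR, BL and BR are the above-right spatial, below-left spatial and temporally derived below-right MVs, and integer division is used in place of the codec's exact arithmetic.

```cpp
struct Mv { int x; int y; };

// Formula K1: right-column neighbor MV from the above-right MV (AR) and the
// temporally derived below-right MV (BR).
Mv deriveRight(int y, int H, Mv AR, Mv BR) {
    return { ((H - y - 1) * AR.x + (y + 1) * BR.x) / H,
             ((H - y - 1) * AR.y + (y + 1) * BR.y) / H };
}

// Formula K2: bottom-row neighbor MV from the below-left MV (BL) and BR.
Mv deriveBottom(int x, int W, Mv BL, Mv BR) {
    return { ((W - x - 1) * BL.x + (x + 1) * BR.x) / W,
             ((W - x - 1) * BL.y + (x + 1) * BR.y) / W };
}

// Planar MV of the 4x4 sub-block at (x, y) in a W x H block, combining the
// horizontal and vertical linear interpolations as in the equations above.
Mv planarMv(int x, int y, int W, int H, Mv L, Mv R, Mv A, Mv B) {
    Mv Ph { (W - 1 - x) * L.x + (x + 1) * R.x,
            (W - 1 - x) * L.y + (x + 1) * R.y };
    Mv Pv { (H - 1 - y) * A.x + (y + 1) * B.x,
            (H - 1 - y) * A.y + (y + 1) * B.y };
    return { (H * Ph.x + W * Pv.x + H * W) / (2 * H * W),
             (H * Ph.y + W * Pv.y + H * W) / (2 * H * W) };
}
```

A caller would first derive BR from the temporal neighbor, then R(W, y) and B(x, H) for the needed column and row positions, and finally planarMv for each 4x4 sub-block.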
3. Examples of problems addressed by the embodiments disclosed herein
The inventors have previously proposed a look-up table based motion vector prediction technique that uses one or more look-up tables storing at least one motion candidate to predict motion information for a block, which may be implemented in various embodiments to provide video coding with higher coding efficiency. Each LUT may include one or more motion candidates, each motion candidate associated with corresponding motion information. Motion information of a motion candidate may include a prediction direction, a reference index/picture, a motion vector, an LIC flag, an affine flag, a Motion Vector Difference (MVD) precision, and/or a MVD value. The motion information may also include block location information to indicate where the motion information came from.
LUT-based motion vector prediction based on the disclosed techniques, which may enhance existing and future video coding standards, is elaborated in the examples described below for various implementations. Because LUTs allow the encoding/decoding process to be performed based on historical data (e.g., blocks that have already been processed), LUT-based motion vector prediction may also be referred to as a History-based Motion Vector Prediction (HMVP) method. In LUT-based motion vector prediction methods, one or more tables with motion information from previously coded blocks are maintained during the encoding/decoding process. The motion candidates stored in the LUTs are named HMVP candidates. During the encoding/decoding of one block, the associated motion information in the LUT may be added to a motion candidate list (e.g., a Merge/AMVP candidate list), and the LUT may be updated after encoding/decoding the block. The updated LUT is then used to code subsequent blocks. That is, the updating of motion candidates in the LUT is based on the encoding/decoding order of blocks. The following examples should be considered as examples explaining the general concept. These examples should not be interpreted in a narrow way. Furthermore, these examples may be combined in any manner.
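A minimal sketch of the table maintenance just described: the LUT holds HMVP candidates from previously coded blocks and is updated after each block in coding order. The redundancy-removal-then-append policy and the maximum table size used here are assumptions about one possible embodiment, not the only one covered by the examples below.

```cpp
#include <cstddef>
#include <deque>

struct MotionInfo {
    int refIdx[2];            // reference index per list (-1 if the list is unused)
    int mvX[2], mvY[2];       // motion vector per list
    bool operator==(const MotionInfo& o) const {
        for (int l = 0; l < 2; ++l)
            if (refIdx[l] != o.refIdx[l] || mvX[l] != o.mvX[l] || mvY[l] != o.mvY[l])
                return false;
        return true;
    }
};

// History-based LUT of motion candidates, updated after each coded block: an
// identical existing entry is removed first, the new candidate is appended as the
// most recent entry, and the oldest entry is dropped when the table is full.
class HmvpTable {
public:
    explicit HmvpTable(std::size_t maxSize = 6) : maxSize_(maxSize) {}

    void update(const MotionInfo& cand) {
        for (auto it = table_.begin(); it != table_.end(); ++it) {
            if (*it == cand) { table_.erase(it); break; }   // prune the duplicate
        }
        if (table_.size() == maxSize_) table_.pop_front();  // drop the oldest entry
        table_.push_back(cand);                             // newest entry at the back
    }

    const std::deque<MotionInfo>& candidates() const { return table_; }

private:
    std::size_t maxSize_;
    std::deque<MotionInfo> table_;
};
```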
Some embodiments may use one or more look-up tables storing at least one motion candidate to predict motion information for a block. Embodiments may use motion candidates to indicate a set of motion information stored in a look-up table. For the conventional AMVP or Merge mode, embodiments may use AMVP or Merge candidates to store motion information.
Although current LUT-based motion vector prediction techniques overcome the drawbacks of HEVC by using historical data, only information from spatially neighboring blocks is considered.
When a motion candidate from the LUT is used in the AMVP or Merge list construction process, it is directly inherited without any change.
The design of JVET-K0161 is beneficial to coding performance. However, it requires an additional TMVP derivation, which increases the computational complexity and the memory bandwidth.
4. Some examples
The following examples should be considered as examples explaining the general concept. These examples should not be interpreted in a narrow way. Furthermore, these examples may be combined in any manner.
Some embodiments using the presently disclosed techniques may use motion candidates from the LUT in combination with motion information from temporally neighboring blocks. In addition, a complexity reduction of JVET-K0161 is also proposed.
Using motion candidates from LUT
1. It is proposed to construct new AMVP/Merge candidates by using motion candidates from the LUT.
a. In one example, a new candidate may be derived by adding/subtracting an offset (or offsets) to the motion vector of the motion candidate from the LUT.
b. In one example, new candidates may be derived by averaging the motion vectors of the selected motion candidates from the LUT.
i. In one embodiment, the averaging may be approximated without a division operation. For example, MVa, MVb, and MVc may be averaged as
(MVa + MVb + MVc) × ⌊2^N / 3⌋ / 2^N
or
(MVa + MVb + MVc) × ⌊(2^N + 1) / 3⌋ / 2^N.
For example, when N = 7, the average value is (MVa + MVb + MVc) × 42/128 or (MVa + MVb + MVc) × 43/128. Note that ⌊2^N / 3⌋ or ⌊(2^N + 1) / 3⌋ is calculated in advance and stored in a look-up table.
in one example, only motion vectors with the same reference picture (in both prediction directions) are selected.
in one example, the reference picture in each prediction direction is predetermined, and if necessary, the motion vector is scaled to the predetermined reference picture.
1. In one example, a first entry (x=0 or 1) in the reference picture list X is selected as the reference picture.
2. Alternatively, for each prediction direction, the most frequently used reference picture in the LUT is selected as the reference picture.
c. In one example, for each prediction direction, motion vectors having the same reference picture as the predetermined reference picture are selected first, and then other motion vectors are selected.
2. It is proposed to construct a new AMVP/Merge candidate by a function of one or more motion candidates from the LUT and motion information from the temporal neighboring block.
a. In one example, similar to STMVP or JVET-K0161, new candidates can be derived by averaging the motion candidates from LUTs and TMVP.
b. In one example, the above blocks (e.g., Amid and Afar in fig. 27) may be replaced with one or more candidates from the LUT. Alternatively, in addition, other processes may remain unchanged, as implemented in JVET-K0161.
3. It is proposed to construct a new AMVP/Merge candidate by a function of one or more motion candidates from the LUT, AMVP/Merge candidates from spatially neighboring blocks and/or spatially non-immediately neighboring blocks, and motion information from temporal blocks.
a. In one example, one or more of the above blocks (e.g., Amid and Afar in fig. 27) may be replaced with candidates from the LUT. Alternatively, in addition, other processes may remain unchanged, as implemented in JVET-K0161.
b. In one example, one or more of the left blocks (e.g., Lmid and Lfar in fig. 27) may be replaced with candidates from the LUT. Alternatively, in addition, other processes may remain unchanged, as implemented in JVET-K0161.
4. It is proposed that when inserting motion information of a block into a LUT, whether existing entries in the LUT are pruned may depend on the coding mode of the block.
a. In one example, if the block is encoded in Merge mode, pruning is not performed.
b. In one example, if the block is encoded in AMVP mode, pruning is not performed.
c. In one example, if the block is encoded in AMVP/Merge mode, only the latest M entries of the LUT are pruned.
d. In one example, pruning is always disabled when a block is encoded in a sub-block mode (e.g., affine or ATMVP).
5. It is proposed to add motion information from the temporal block to the LUT.
a. In one example, the motion information may be from co-located blocks.
b. In one example, the motion information may be from one or more blocks from different reference pictures.
Related to STMVP
1. It is proposed to always derive new Merge candidates using spatial Merge candidates, regardless of TMVP candidates.
a. In one example, an average of two motion Merge candidates may be utilized.
b. In one example, the spatial Merge candidate and the motion candidate from the LUT may be used in combination to derive a new candidate.
2. It is proposed that the STMVP candidates can be derived using non-immediately adjacent blocks (which are not right or left adjacent blocks).
a. In one example, the upper block used for STMVP candidate derivation remains unchanged, while the left block used changes from a neighboring block to a non-immediately neighboring block.
b. In one example, the left block used for STMVP candidate derivation remains unchanged, while the upper block used changes from a neighboring block to a non-immediately neighboring block.
c. In one example, candidates for non-immediately adjacent blocks and motion candidates from the LUT may be used in combination to derive new candidates.
3. It is proposed to always derive new Merge candidates using spatial Merge candidates, regardless of TMVP candidates.
a. In one example, an average of two motion Merge candidates may be utilized.
b. Alternatively, an average of two, three or more MVs from different positions adjacent or not adjacent to the current block may be utilized.
i. In one embodiment, the MVs may only be acquired from locations in the current LCU (also referred to as CTUs).
in one embodiment, the MVs can only be retrieved from the locations in the current LCU row.
in one embodiment, the MVs may only be acquired from locations in or next to the current LCU row. An example is shown in fig. 29. Blocks A, B, C, D, E and F are next to the current LCU row.
in one embodiment, the MVs may only be acquired from locations in or next to the current LCU row but not to the left of the upper left corner neighboring block. An example is shown in fig. 29. Block T is the upper left corner neighboring block. Blocks B, C, D, E and F are next to the current LCU row but not to the left of the upper left corner neighboring block.
c. In one embodiment, the spatial Merge candidate and the motion candidate from the LUT may be used in combination to derive a new candidate.
4. It is proposed that the MV of the BR block for planar motion prediction in fig. 28 is not acquired from temporal MV prediction, but from one entry of the LUT.
5. It is proposed that motion candidates from the LUT may be used in combination with other types of Merge/AMVP candidates (e.g. spatial Merge/AMVP candidates, temporal Merge/AMVP candidates, default motion candidates) to derive new candidates.
In various embodiments of this example and other examples disclosed in this patent document, pruning may include: a) comparing the motion information with existing entries for uniqueness, and b) if unique, adding the motion information to the list, or c) if not unique, either c1) not adding the motion information, or c2) adding the motion information and deleting the matching existing entry. In some implementations, the pruning operation is not invoked when a motion candidate is added to the candidate list from the table.
Fig. 30 is a schematic diagram illustrating an example of a structure of a computer system or other control device 3000 that may be used to implement various portions of the disclosed technology. In fig. 30, computer system 3000 includes one or more processors 3005 and memory 3010 connected by an interconnect 3025. Interconnect 3025 may represent any one or more separate physical buses, point-to-point connections, or both, connected by appropriate bridges, adapters, or controllers. Accordingly, the interconnect 3025 may include, for example, a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or Industry Standard Architecture (ISA) bus, a Small Computer System Interface (SCSI) bus, a Universal Serial Bus (USB), an IIC (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (sometimes referred to as "Firewire").
The processor 3005 may include a Central Processing Unit (CPU) to control, for example, the overall operation of the host. In some embodiments, the processor 3005 achieves this by executing software or firmware stored in the memory 3010. The processor 3005 may be or include one or more programmable general purpose or special purpose microprocessors, digital Signal Processors (DSPs), programmable controllers, application Specific Integrated Circuits (ASICs), programmable Logic Devices (PLDs), etc., or a combination of such devices.
Memory 3010 may be or include a main memory of a computer system. Memory 3010 represents any suitable form of Random Access Memory (RAM), read Only Memory (ROM), flash memory, or the like, or a combination of such devices. In use, memory 3010 may contain, among other things, a set of machine instructions that, when executed by processor 3005, cause processor 3005 to perform operations to implement embodiments of the disclosed technology.
Also connected to the processor 3005 by way of the interconnect 3025 is a (optional) network adapter 3015. The network adapter 3015 provides the ability for the computer system 3000 to communicate with remote devices, such as storage clients and/or other storage servers, and may be, for example, an ethernet adapter or a fibre channel adapter.
Fig. 31 illustrates a block diagram of an example embodiment of a mobile device 3100 that can be used to implement various portions of the disclosed technology. The mobile device 3100 may be a notebook, smart phone, tablet, video camera, or other device capable of processing video. The mobile device 3100 includes a processor or controller 3101 to process data, and a memory 3102 in communication with the processor 3101 to store and/or buffer data. For example, the processor 3101 may include a Central Processing Unit (CPU) or a microcontroller unit (MCU). In some implementations, the processor 3101 may include a Field Programmable Gate Array (FPGA). In some implementations, the mobile device 3100 includes or communicates with a Graphics Processing Unit (GPU), a Video Processing Unit (VPU), and/or a wireless communication unit to implement various visual and/or communication data processing functions of the smartphone device. For example, the memory 3102 may include and store processor-executable code that, when executed by the processor 3101, configures the mobile device 3100 to perform various operations, such as receiving information, commands, and/or data, processing the information and data, and transmitting or providing the processed information/data to another data device, such as an executor or an external display. To support various functions of the mobile device 3100, the memory 3102 can store information and data, such as instructions, software, values, images, and other data processed or referenced by the processor 3101. For example, the storage function of the memory 3102 may be implemented using various types of Random Access Memory (RAM) devices, read Only Memory (ROM) devices, flash memory devices, and other suitable storage media. In some implementations, the mobile device 3100 includes an input/output (I/O) unit 3103 to interface the processor 3101 and/or memory 3102 with other modules, units, or devices. For example, the I/O unit 3103 may interface with the processor 3101 and memory 3102 to utilize various wireless interfaces compatible with typical data communication standards, e.g., between one or more computers and user devices in the cloud. In some implementations, the mobile device 3100 may interface with other devices using wired connections through the I/O unit 3103. The mobile device 3100 can also be connected to other external interfaces (e.g., data storage) and/or visual or audio display devices 3104 to retrieve and transmit data and information that can be processed by the processor, stored by the memory, or displayed on the display device 3104 or an output unit of the external device. For example, the display device 3104 may display a video frame including blocks (CUs, PUs, or TUs) to which intra block copying is applied based on whether the block is encoded using a motion compensation algorithm in accordance with the disclosed techniques.
In some embodiments, a video decoder device that may implement a method of sub-block based prediction as described herein may be used for video decoding.
In some embodiments, the video decoding method may be implemented using a decoding apparatus implemented on a hardware platform as described in fig. 30 and 31.
Various embodiments and techniques disclosed in this document may be described in the following list of examples.
Fig. 32 is a flow chart of an example method 3200 for video processing in accordance with the presently disclosed technology. The method 3200 includes, at operation 3202, determining a new candidate for video processing by averaging two or more selected motion candidates. The method 3200 includes, at operation 3204, adding the new candidate to a candidate list. The method 3200 includes, at operation 3206, performing a transition between a first video block of a video and a bitstream representation of the video by using the determined new candidate in the candidate list.
In some embodiments, the candidate list is a Merge candidate list, and the determined new candidate is a Merge candidate.
In some embodiments, the Merge candidate list is an inter-prediction Merge candidate list or an intra block copy prediction Merge candidate list.
In some embodiments, the one or more tables include motion candidates derived from previously processed video blocks of the video data that were processed prior to the first video block.
In some embodiments, there are no available spatial candidates and temporal candidates in the candidate list.
In some embodiments, the selected motion candidates are from one or more tables.
In some embodiments, the averaging is achieved without a division operation.
In some embodiments, the averaging is achieved by multiplication of the sum of the motion vectors of the selected motion candidates with a scaling factor.
In some embodiments, the horizontal components of the motion vectors of the selected motion candidates are averaged to derive the horizontal components of the new candidates.
In some embodiments, the vertical components of the motion vectors of the selected motion candidates are averaged to derive the vertical component of the new candidate.
In some embodiments, the scaling factor is pre-computed and stored in a look-up table.
In some embodiments, only motion vectors with the same reference picture are selected.
In some embodiments, only motion vectors with the same reference picture in both prediction directions are selected in both prediction directions.
In some embodiments, the target reference picture in each prediction direction is predetermined, and the motion vector is scaled to the predetermined reference picture.
In some embodiments, a first entry in reference picture list X is selected as the target reference picture for the reference picture list, X being 0 or 1.
In some embodiments, for each prediction direction, the most commonly used reference picture in the table is selected as the target reference picture.
In some embodiments, for each prediction direction, a motion vector having the same reference picture as the predetermined target reference picture is first selected, and then other motion vectors are selected.
In some embodiments, the motion candidates from the table are associated with motion information including at least one of: prediction direction, reference picture index, motion vector value, intensity compensation flag, affine flag, motion vector difference precision, or motion vector difference value.
In some embodiments, method 3200 further comprises updating one or more tables based on the conversion.
In some embodiments, updating the one or more tables includes updating the one or more tables based on motion information of a first video block of the video after performing the conversion.
In some embodiments, the method 3200 further comprises performing a transition between a subsequent video block of the video and the bitstream representation of the video based on the updated table.
In some embodiments, the conversion includes an encoding process and/or a decoding process.
In some embodiments, the video encoding device may perform method 2900 and other methods described herein during reconstruction of video for subsequent video.
In some embodiments, an apparatus in a video system may include a processor configured to perform the methods described herein.
In some embodiments, the described methods may be embodied as computer executable code stored on a computer readable program medium.
Fig. 33 is a flow chart of an example method 3300 for video processing in accordance with the presently disclosed technology. The method 3300 includes, at operation 3302, determining new motion candidates for video processing by using one or more motion candidates from one or more tables, wherein the tables include the one or more motion candidates and each motion candidate is associated with motion information. Method 3300 includes, at operation 3304, performing conversion between a video block and an encoded representation of the video block based on the new candidate.
In some embodiments, the new motion candidate is derived by adding or subtracting an offset to the motion vector associated with the motion candidate from the one or more tables.
In some embodiments, determining the new motion candidate comprises: a new motion candidate is determined as a function of one or more motion candidates from one or more tables and motion information from a temporal neighboring block.
In some embodiments, determining the new motion candidate comprises: the motion candidates from one or more tables and the temporal motion vector predictor are averaged.
In some embodiments, averaging the selected motion candidates includes a weighted average or averaging of motion vectors associated with the selected motion candidates.
In some embodiments, the averaging is achieved without a division operation.
In some embodiments, the averaging is achieved by multiplication of the sum of motion vectors of motion candidates from the one or more tables with the temporal motion vector predictor and a scaling factor.
In some embodiments, the horizontal component of the motion vector of the motion candidate from the one or more tables is averaged with a temporal motion vector predictor to derive a horizontal component of a new motion candidate.
In some embodiments, averaging the selected horizontal components includes a weighted average or averaging of horizontal components associated with the selected motion candidates.
In some embodiments, averaging the selected vertical components includes a weighted average or averaging of the vertical components associated with the selected motion candidates.
In some embodiments, the new motion candidate is determined as a function of one or more motion candidates from one or more tables, Merge candidates from spatially neighboring blocks and/or spatially non-immediately neighboring blocks, and motion information from temporally neighboring blocks.
In some embodiments, determining the new candidate includes: the new motion candidates are determined as a function of one or more motion candidates from one or more tables, advanced Motion Vector Prediction (AMVP) candidates from spatially neighboring blocks and/or spatially non-immediately neighboring blocks, and motion information from temporally neighboring blocks.
In some embodiments, determining the new candidate includes: the new motion candidate is determined as a function of one or more motion candidates from one or more tables, and an Advanced Motion Vector Prediction (AMVP) candidate in an AMVP candidate list or a Merge candidate in a Merge candidate list.
In some embodiments, the new motion candidate is added to the Merge candidate list.
In some embodiments, the new motion candidate is added to the AMVP candidate list.
In some embodiments, each of the one or more tables includes a set of motion candidates, wherein each motion candidate is associated with corresponding motion information.
In some embodiments, the motion candidate is associated with motion information comprising at least one of: prediction direction, reference picture index, motion vector value, intensity compensation flag, affine flag, motion vector difference precision, or motion vector difference value.
In some embodiments, the method further comprises updating one or more tables based on the translation.
In some embodiments, updating the one or more tables includes updating the one or more tables based on the motion information of the first video block after performing the conversion.
In some embodiments, the method further comprises performing a transition between a subsequent video block of the video and the bitstream representation of the video based on the updated table.
Fig. 34 is a flow chart of an example method 3400 for video processing in accordance with the presently disclosed technology. The method 3400 includes determining new candidates for video processing by always using motion information of more than one spatially neighboring block from a first video block in a current picture and not using motion information from a temporal block in a picture different from the current picture in operation 3402. The method 3400 includes performing a transition between a first video block in a current picture of a video and a bitstream representation of the video by using the determined new candidate at operation 3404.
In some embodiments, the determined new candidate is added to a candidate list, including a Merge candidate list or an Advanced Motion Vector Prediction (AMVP) candidate list.
In some embodiments, the motion information from more than one spatial neighboring block is a candidate derived from a predefined spatial neighboring block relative to the first video block in the current picture, or a motion candidate from one or more tables.
In some embodiments, the one or more tables include motion candidates derived from previously processed video blocks processed prior to a first video block in the video data.
In some embodiments, the candidate derived from the predefined spatial neighboring block relative to the first video block in the current picture is a spatial Merge candidate.
In some embodiments, the new candidate is derived by averaging at least two spatial Merge candidates.
In some embodiments, the new candidate is derived by jointly using the spatial Merge candidate and the motion candidate from one or more tables.
In some embodiments, the new candidate is derived by averaging at least two motion vectors associated with candidates derived from different locations.
In some embodiments, the different locations are adjacent to the first video block.
In some embodiments, the motion vector is obtained only from a location in the current largest coding unit to which the first video block belongs.
In some embodiments, the motion vector is obtained only from a position in the current largest coding unit row.
In some embodiments, the motion vector is obtained only from a position in or next to the current largest coding unit row.
In some embodiments, the motion vector is obtained only from a position in or next to the current maximum coding unit row but not to the left of the upper left corner neighboring block.
In some embodiments, the motion vector of the lower right block for planar motion prediction is not obtained from the temporal motion vector prediction candidates, but from one entry of the table.
In some embodiments, the new candidate is derived by using motion candidates from one or more tables in combination with other kinds of Merge/AMVP candidates.
In some embodiments, the motion candidates in the one or more tables are associated with motion information including at least one of: prediction direction, reference picture index, motion vector value, intensity compensation flag, affine flag, motion vector difference precision, or motion vector difference value.
In some embodiments, the method further comprises updating one or more tables based on the translation.
In some embodiments, updating the one or more tables includes updating the one or more tables based on motion information of the first video block after performing the conversion.
In some embodiments, the method further comprises performing a transition between a subsequent video block of the video and the bitstream representation of the video based on the updated table.
In some embodiments, the conversion includes an encoding process and/or a decoding process.
Fig. 35 is a flow chart of an example method 3500 for video processing in accordance with the presently disclosed technology. The method 3500 includes determining new candidates for video processing by using motion information from at least one spatially non-immediately adjacent block of the first video block in the current picture, and other candidates derived from or not from the spatially non-immediately adjacent block of the first video block, at operation 3502. Method 3500 includes performing a transition between the first video block of the video and the bitstream representation of the video using the determined new candidate at operation 3504.
In some embodiments, the determined new candidate is added to a candidate list, including a Merge or Advanced Motion Vector Prediction (AMVP) candidate list.
In some embodiments, motion information from more than one spatially non-immediately adjacent block is a candidate derived from a predefined spatially non-immediately adjacent block relative to a first video block in a current picture.
In some embodiments, the candidate derived from a predefined spatially non-immediately adjacent block relative to the first video block in the current picture is a spatio-temporal motion vector prediction (STMVP) candidate.
In some embodiments, the non-immediately adjacent block of the video block is not a right or left adjacent block of the first video block.
In some embodiments, the upper block of the first video block used for STMVP candidate derivation remains unchanged, while the left block used changes from a neighboring block to a non-immediately neighboring block.
In some embodiments, the left block of the first video block used for STMVP candidate derivation remains unchanged, while the upper block used changes from a neighboring block to a non-immediately neighboring block.
Fig. 36 is a flow chart of an example method 3600 for video processing in accordance with the presently disclosed technology. The method 3600 includes determining, at operation 3602, a new candidate for video processing by using motion information from one or more tables of a first video block in a current picture and motion information from a temporal block in a picture other than the current picture. The method 3600 includes performing, at operation 3604, a transition between a first video block in a current picture of a video and a bitstream representation of the video by using the determined new candidate.
In some embodiments, the determined new candidate is added to a candidate list comprising a Merge or AMVP candidate list.
In some embodiments, motion information from one or more tables in the current picture is associated with one or more Historical Motion Vector Prediction (HMVP) candidates selected from the one or more tables, and motion information from a temporal block in a picture other than the current picture is a temporal motion candidate.
In some embodiments, the new candidate is derived by averaging one or more HMVP candidates with one or more temporal motion candidates.
In some embodiments, the one or more tables include motion candidates derived from previously processed video blocks processed prior to a first video block in the video data.
Fig. 37 is a flow chart of an example method 3700 for video processing in accordance with the presently disclosed technology. The method 3700 includes determining a new candidate for video processing by using motion information from one or more tables of a first video block and motion information from one or more spatially neighboring blocks of the first video block at operation 3702. The method 3700 includes performing a transition between a first video block in a current picture of a video and a bitstream representation of the video by using the determined new candidate at operation 3704.
In some embodiments, the determined new candidate is added to a candidate list comprising a Merge or AMVP candidate list.
In some embodiments, the motion information from the one or more tables of the first video block is associated with one or more Historical Motion Vector Prediction (HMVP) candidates selected from the one or more tables, and the motion information from the one or more spatially neighboring blocks of the first video block is a candidate derived from a predefined spatial block relative to the first video block.
In some embodiments, the candidate derived from the predefined spatial block relative to the first video block is a spatial Merge candidate.
In some embodiments, the new candidate is derived by averaging one or more HMVP candidates and one or more spatial Merge candidates.
In some embodiments, the one or more tables include motion candidates derived from previously processed video blocks processed prior to a first video block in the video data.
In some embodiments, the motion candidates from the table are associated with motion information including at least one of: prediction direction, reference picture index, motion vector value, intensity compensation flag, affine flag, motion vector difference precision, or motion vector difference value.
In some embodiments, the method further comprises updating one or more tables based on the translation.
In some embodiments, updating the one or more tables includes updating the one or more tables based on motion information of the current video block after performing the conversion.
In some embodiments, the method further comprises performing a conversion between a subsequent video block of the video data and a bitstream representation of the video data based on the updated table.
Fig. 38 is a flow chart of an example method 3800 for video processing in accordance with the presently disclosed technology. The method 3800 includes, at operation 3802, maintaining a set of tables, wherein each table includes motion candidates and each motion candidate is associated with corresponding motion information; at operation 3804, performing a transition between a first video block and a bitstream representation of a video including the first video block; and at operation 3806, updating the one or more tables by selectively pruning existing motion candidates in the one or more tables based on the encoding/decoding mode of the first video block.
In some embodiments, the conversion between the first video block and the bitstream representation of the video comprising the first video block is performed based on one or more tables of the set of tables.
In some embodiments, pruning is omitted in the case of encoding/decoding the first video block in Merge mode.
In some embodiments, pruning is omitted in the case of encoding/decoding the first video block in the advanced motion vector prediction mode.
In some embodiments, where the first video block is encoded/decoded in Merge mode or advanced motion vector prediction mode, the latest M entries of the table are pruned, where M is a pre-specified integer.
In some embodiments, pruning is disabled in the case of encoding/decoding the first video block in sub-block mode.
In some embodiments, the sub-block modes include an affine mode and an alternative temporal motion vector prediction mode.
In some embodiments, the pruning includes checking whether there are redundant existing motion candidates in the table.
In some embodiments, the trimming further comprises: if there are redundant existing motion candidates in the table, motion information associated with the first video block is inserted into the table and the redundant existing motion candidates in the table are deleted.
In some embodiments, if there are redundant existing motion candidates in the table, the table is not updated with motion information associated with the first video block.
In some embodiments, the method further comprises performing a transition between a subsequent video block of the video and the bitstream representation of the video based on the updated table.
Fig. 39 is a flowchart of an example method 3900 for video processing in accordance with the presently disclosed technology. The method 3900 includes, at operation 3902, maintaining a set of tables, wherein each table includes motion candidates and each motion candidate is associated with corresponding motion information; in operation 3904, performing a transition between a first video block and a bitstream representation of a video including the first video block; and updating one or more tables to include motion information from one or more temporally adjacent blocks of the first video block as new motion candidates in operation 3906.
In some embodiments, the conversion between the first video block and the bitstream representation of the video comprising the first video block is performed based on one or more tables of the set of tables.
In some embodiments, the one or more temporally adjacent blocks are co-located blocks.
In some embodiments, the one or more temporally adjacent blocks include one or more blocks from different reference pictures.
In some embodiments, the method further comprises performing a conversion between a subsequent video block of the video and a bitstream representation of the video based on the updated table.
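A minimal sketch of the temporal extension in operation 3906 follows: motion information fetched from temporally adjacent (e.g. co-located) blocks in one or more reference pictures is appended to the table as new candidates. BlockPos and MotionFetcher are illustrative placeholders for whatever motion-buffer access a real codec provides, and UpdateTable is the earlier sketch; none of these names come from this document.

```cpp
#include <functional>
#include <optional>
#include <vector>

struct BlockPos { int x; int y; };

// Placeholder callback: look up the stored motion field of a reference picture.
using MotionFetcher =
    std::function<std::optional<MotionCandidate>(int refPicIdx, BlockPos pos)>;

void ExtendTableWithTemporalCandidates(std::deque<MotionCandidate>& table,
                                       const std::vector<int>& refPicIndices,
                                       BlockPos currentBlockPos,
                                       CodingMode mode,
                                       const MotionFetcher& fetchColocatedMotion) {
  // Candidates may come from several different reference pictures.
  for (int refIdx : refPicIndices) {
    if (auto colMotion = fetchColocatedMotion(refIdx, currentBlockPos)) {
      UpdateTable(table, *colMotion, mode);  // reuse the mode-aware update above
    }
  }
}
```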
Fig. 40 is a flowchart of an example method 4000 for updating a motion candidate table in accordance with the presently disclosed technology. The method 4000 includes, at operation 4002, selectively pruning existing motion candidates in a table, each motion candidate being associated with corresponding motion information, based on an encoding/decoding mode of a video block being processed; and at operation 4004, updating the table to include motion information of the video block as a new motion candidate.
In some embodiments, where the video block is encoded/decoded in Merge mode or advanced motion vector prediction mode, the latest M entries of the table are pruned, where M is a pre-specified integer.
In some embodiments, pruning is disabled in the case of encoding/decoding the video block in sub-block mode.
In some embodiments, the sub-block modes include an affine mode and an alternative temporal motion vector prediction mode.
In some embodiments, the pruning includes checking whether there are redundant existing motion candidates in the table.
In some embodiments, the pruning further comprises: if there are redundant motion candidates in the table, motion information associated with the video block being processed is inserted into the table and the redundant motion candidates in the table are deleted.
In some embodiments, if there are redundant existing motion candidates in the table, the table is not updated with motion information associated with the video block being processed.
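The two alternative redundancy-handling embodiments just described can be contrasted in one function, sketched below for illustration only. RedundancyPolicy is an assumed name, and the helpers (SameMotion, kMaxTableSize) are those from the earlier sketches.

```cpp
#include <algorithm>
#include <deque>

enum class RedundancyPolicy { kReplaceRedundant, kSkipUpdate };

void UpdateTableWithPolicy(std::deque<MotionCandidate>& table,
                           const MotionCandidate& cand,
                           RedundancyPolicy policy) {
  auto it = std::find_if(table.begin(), table.end(),
                         [&cand](const MotionCandidate& c) { return SameMotion(c, cand); });
  if (it != table.end()) {
    if (policy == RedundancyPolicy::kSkipUpdate) {
      return;           // a redundant candidate exists: leave the table unchanged
    }
    table.erase(it);    // delete the redundant existing candidate ...
  }
  table.push_back(cand);  // ... and insert the motion information of the block
  if (table.size() > kMaxTableSize) table.pop_front();
}
```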
Fig. 41 is a flowchart of an example method 4100 for updating a motion candidate table in accordance with the presently disclosed technology. The method 4100 includes, at operation 4102, maintaining a table of motion candidates, each motion candidate being associated with corresponding motion information; and at operation 4104, updating the table to include motion information from one or more temporally adjacent blocks of the video block being processed as new motion candidates.
In some embodiments, the one or more temporally adjacent blocks are co-located blocks.
In some embodiments, the one or more temporally adjacent blocks include one or more blocks from different reference pictures.
In some embodiments, the motion candidate is associated with motion information comprising at least one of: prediction direction, reference picture index, motion vector value, intensity compensation flag, affine flag, motion vector difference precision, or motion vector difference value.
In some embodiments, the motion candidates correspond to candidates that include an intra prediction mode used for intra mode coding.
In some embodiments, the motion candidates correspond to candidates that include illumination compensation (IC) parameters used for IC parameter coding.
Fig. 42 is a flowchart of an example method 4200 for video processing in accordance with the presently disclosed technology. The method 4200 includes, at operation 4202, determining new motion candidates for video processing by using one or more motion candidates from one or more tables, wherein the tables include the one or more motion candidates and each motion candidate is associated with motion information; and at operation 4204, performing a conversion between the video block and the encoded representation of the video block based on the new candidate.
In some embodiments, the determined new candidate is added to a candidate list, such as a Merge or Advanced Motion Vector Prediction (AMVP) candidate list.
In some embodiments, determining the new candidate comprises determining the new motion candidate as a function of one or more motion candidates from the one or more tables and an AMVP candidate in an AMVP candidate list or a Merge candidate in a Merge candidate list.
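The text above only states that the new candidate is "a function of" a table candidate and an existing AMVP/Merge list candidate. The sketch below uses simple motion vector averaging purely as one illustrative choice of that function, not as a definition taken from this document; it reuses the MotionCandidate struct sketched earlier.

```cpp
#include <cstdint>
#include <optional>

std::optional<MotionCandidate> DeriveNewCandidate(const MotionCandidate& fromTable,
                                                  const MotionCandidate& fromList) {
  // Only combine candidates that share the same prediction direction and
  // reference pictures, so averaging their motion vectors is meaningful.
  if (fromTable.predictionDirection != fromList.predictionDirection ||
      fromTable.refIdx[0] != fromList.refIdx[0] ||
      fromTable.refIdx[1] != fromList.refIdx[1]) {
    return std::nullopt;
  }
  MotionCandidate out = fromTable;
  for (int l = 0; l < 2; ++l) {
    out.mv[l].x = static_cast<int16_t>((fromTable.mv[l].x + fromList.mv[l].x) / 2);
    out.mv[l].y = static_cast<int16_t>((fromTable.mv[l].y + fromList.mv[l].y) / 2);
  }
  return out;  // if valid, it can be appended to the Merge or AMVP candidate list
}
```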
From the foregoing, it will be appreciated that specific embodiments of the disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the presently disclosed technology is not limited, except as by the appended claims.
The embodiments, modules, and functional operations disclosed herein and otherwise described may be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed herein and structural equivalents thereof, or in combinations of one or more of them. The disclosed embodiments and other embodiments may be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer-readable medium, for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus may include, in addition to hardware, code that creates an execution environment for a computer program, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. The computer program does not necessarily correspond to a file in a file system. A program may be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described herein can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Typically, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, such as internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
It is intended that the specification, together with the accompanying drawings, be considered exemplary only, where exemplary means an example. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In addition, the use of "or" is intended to include "and/or" unless the context clearly indicates otherwise.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.
Also, although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Furthermore, the separation of various system components in the embodiments described herein should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described, and other implementations, enhancements, and variations may be made based on what is described and illustrated in this patent document.

Claims (23)

1. A video processing method, comprising:
maintaining a set of tables, wherein each table includes motion candidates, and each motion candidate is associated with corresponding motion information;
performing a conversion between a first video block and a bitstream of video comprising the first video block; and
updating one or more tables by selectively pruning existing motion candidates in the one or more tables based on an encoding/decoding mode of the first video block;
wherein pruning is omitted in the case of encoding/decoding the first video block in Merge mode; or
pruning is omitted in the case of encoding/decoding the first video block in an advanced motion vector prediction mode; or
pruning is disabled in the case of encoding/decoding the first video block in sub-block mode.
2. The method of claim 1, wherein performing the conversion further comprises:
performing the conversion between the first video block and the bitstream of the video including the first video block based on one or more tables in the set of tables.
3. The method of claim 1, wherein the sub-block mode comprises an affine mode or an alternative temporal motion vector prediction mode.
4. A method according to any of claims 1-3, wherein the pruning comprises checking whether there are redundant existing motion candidates in the table.
5. The method of claim 4, wherein the pruning further comprises: if there are redundant existing motion candidates in the table, motion information associated with the first video block is inserted into the table and the redundant existing motion candidates in the table are deleted.
6. The method of claim 4, wherein if there are redundant existing motion candidates in the table, the table is not updated with motion information associated with the first video block.
7. A video processing method, comprising:
maintaining a set of tables, wherein each table includes motion candidates, and each motion candidate is associated with corresponding motion information;
performing a conversion between a first video block and a bitstream of video comprising the first video block; and
updating one or more tables to include motion information from one or more temporally adjacent blocks of the first video block as new motion candidates;
wherein updating the one or more tables comprises selectively pruning existing motion candidates in the one or more tables based on an encoding/decoding mode of the first video block;
wherein pruning is omitted in the case of encoding/decoding the first video block in Merge mode; or
pruning is omitted in the case of encoding/decoding the first video block in an advanced motion vector prediction mode; or
pruning is disabled in the case of encoding/decoding the first video block in sub-block mode.
8. The method of claim 7, wherein performing the conversion further comprises:
performing the conversion between the first video block and the bitstream of the video including the first video block based on one or more tables in the set of tables.
9. The method of claim 7, wherein the one or more temporally adjacent blocks are co-located blocks.
10. The method of claim 7, wherein the one or more temporally adjacent blocks comprise one or more blocks from different reference pictures.
11. A method of updating a motion candidate table, comprising:
based on the encoding/decoding mode of the video block being processed, selectively pruning existing motion candidates in the table, each motion candidate being associated with corresponding motion information; and
updating the table to include motion information of the video block as a new motion candidate;
wherein pruning is omitted in the case of encoding/decoding the video block in Merge mode; or
pruning is omitted in the case of encoding/decoding the video block in an advanced motion vector prediction mode; or
pruning is disabled in the case of encoding/decoding the video block in sub-block mode.
12. The method of claim 11, wherein the sub-block mode comprises an affine mode or an alternative temporal motion vector prediction mode.
13. The method of claim 11 or 12, wherein the pruning comprises checking whether there are redundant existing motion candidates in the table.
14. The method of claim 13, wherein the pruning further comprises: if there are redundant motion candidates in the table, motion information associated with the video block being processed is inserted into the table and the redundant motion candidates in the table are deleted.
15. The method of claim 13, wherein if there are redundant existing motion candidates in the table, the table is not updated with motion information associated with the video block being processed.
16. A method of updating a motion candidate table, comprising:
maintaining a motion candidate table, each motion candidate being associated with corresponding motion information; and
updating the table to include motion information from one or more temporally adjacent blocks of the video block being processed as new motion candidates;
wherein updating the table comprises selectively pruning existing motion candidates in the table based on an encoding/decoding mode of the video block;
wherein pruning is omitted in the case of encoding/decoding the video block in Merge mode; or
pruning is omitted in the case of encoding/decoding the video block in an advanced motion vector prediction mode; or
pruning is disabled in the case of encoding/decoding the video block in sub-block mode.
17. The method of claim 16, wherein the one or more temporally adjacent blocks are co-located blocks.
18. The method of claim 16, wherein the one or more temporally adjacent blocks comprise one or more blocks from different reference pictures.
19. The method of any of claims 1-3, 7-12, 16-18, wherein the motion candidate is associated with motion information comprising at least one of: prediction direction, reference picture index, motion vector value, intensity compensation flag, affine flag, motion vector difference precision, or motion vector difference value.
20. The method of any of claims 1-3, 7-12, 16-18, wherein the motion candidate corresponds to a candidate including an intra prediction mode used for intra mode coding.
21. The method of any of claims 1-3, 7-12, 16-18, wherein the motion candidate corresponds to a motion candidate comprising illumination compensation parameters for IC parameter encoding.
22. The method of any one of claims 1-3, 7-10, further comprising:
Based on the updated table, a conversion between a subsequent video block of the video and a bitstream of the video is performed.
23. An apparatus in a video system, the apparatus comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the method of any of claims 1-22.
CN201910637484.3A 2018-07-15 2019-07-15 Motion vector prediction based on lookup table with temporal information extension Active CN110719464B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2018095719 2018-07-15
CNPCT/CN2018/095719 2018-07-15

Publications (2)

Publication Number Publication Date
CN110719464A CN110719464A (en) 2020-01-21
CN110719464B true CN110719464B (en) 2023-05-30

Family

ID=67998524

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910637484.3A Active CN110719464B (en) 2018-07-15 2019-07-15 Motion vector prediction based on lookup table with temporal information extension

Country Status (3)

Country Link
CN (1) CN110719464B (en)
TW (1) TWI819030B (en)
WO (1) WO2020016743A2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021207423A1 (en) * 2020-04-08 2021-10-14 Beijing Dajia Internet Information Technology Co., Ltd. Methods and apparatuses for signaling of syntax elements in video coding
CN113709498B (en) * 2020-05-20 2023-06-02 Oppo广东移动通信有限公司 Inter prediction method, encoder, decoder, and computer storage medium
WO2022262818A1 (en) * 2021-06-18 2022-12-22 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus, and medium for video processing


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107105286B (en) * 2011-03-14 2020-01-21 寰发股份有限公司 Method and apparatus for deriving motion vector predictor
US20130329007A1 (en) * 2012-06-06 2013-12-12 Qualcomm Incorporated Redundancy removal for advanced motion vector prediction (amvp) in three-dimensional (3d) video coding

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014089475A1 (en) * 2012-12-07 2014-06-12 Qualcomm Incorporated Advanced merge/skip mode and advanced motion vector prediction (amvp) mode for 3d video
WO2018052986A1 (en) * 2016-09-16 2018-03-22 Qualcomm Incorporated Offset vector identification of temporal motion vector predictor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Zhang, Kai Zhang, Hongbin Liu; "CE4-related: History-based Motion Vector Prediction"; JVET, document JVET-K0104-v1; 2018-07-11; abstract, sections 1-4 *
Li Zhang, Kai Zhang, Hongbin Liu. "CE4-related: History-based Motion Vector Prediction". JVET, document JVET-K0104-v1, 2018 *

Also Published As

Publication number Publication date
WO2020016743A3 (en) 2020-04-16
WO2020016743A2 (en) 2020-01-23
TW202021359A (en) 2020-06-01
TWI819030B (en) 2023-10-21
CN110719464A (en) 2020-01-21

Similar Documents

Publication Publication Date Title
CN111064959B (en) How many HMVP candidates to examine
CN110944196B (en) Simplified history-based motion vector prediction
CN110662058B (en) Conditions of use of the lookup table
CN113383554B (en) Interaction between LUTs and shared Merge lists
CN110662075B (en) Improved temporal motion vector prediction derivation
CN110662043B (en) Method, apparatus and computer readable medium for processing video data
KR20210024502A (en) Partial/full pruning when adding HMVP candidates to merge/AMVP
CN110662063A (en) Reset of look-up table per stripe/slice/LCU row
CN113273186A (en) Invocation of LUT update
CN110719463B (en) Extending look-up table based motion vector prediction with temporal information
CN110662030B (en) Video processing method and device
CN110719464B (en) Motion vector prediction based on lookup table with temporal information extension
CN110719465B (en) Extending look-up table based motion vector prediction with temporal information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant