CN110677674B - Method, apparatus and non-transitory computer-readable medium for video processing - Google Patents


Info

Publication number
CN110677674B
Authority
CN
China
Prior art keywords
block
prediction
sub
current video
blocks
Prior art date
Legal status
Active
Application number
CN201910586571.0A
Other languages
Chinese (zh)
Other versions
CN110677674A (en)
Inventor
张凯
张莉
刘鸿彬
王悦
Current Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Original Assignee
Beijing ByteDance Network Technology Co Ltd
ByteDance Inc
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd and ByteDance Inc
Publication of CN110677674A
Application granted
Publication of CN110677674B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H04N19/537 Motion estimation other than block-based
    • H04N19/55 Motion estimation with spatial constraints, e.g. at image or region borders
    • H04N19/573 Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • H04N19/577 Motion compensation with bidirectional frame interpolation, i.e. using B-pictures

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Apparatus, systems, and methods for video processing are described. In a representative aspect, there is provided a video processing method comprising: determining, based on a component type of a current video block, whether an interleaved prediction mode is applicable for a conversion between the current video block and a bitstream representation of the current video block; and in response to determining that the interleaved prediction mode applies to the current video block, performing the conversion by applying the interleaved prediction mode, wherein applying interleaved prediction comprises subdividing a portion of the current video block into at least one sub-block using more than one subdivision pattern, and generating a predictor for the current video block as a weighted average of the predictors determined for each of the more than one subdivision patterns.

Description

Method, apparatus and non-transitory computer-readable medium for video processing
Cross Reference to Related Applications
This application claims priority to and the benefit of international patent application No. PCT/CN2018/093943, filed on July 1, 2018, in accordance with applicable patent law and/or the rules of the Paris Convention. The entire disclosure of international patent application No. PCT/CN2018/093943 is incorporated by reference as part of the disclosure of the present application.
Technical Field
This patent document relates to video encoding and decoding techniques, apparatus and systems.
Background
Despite advances in video compression, digital video still accounts for the largest bandwidth usage on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
Disclosure of Invention
This document discloses techniques that can be used in video encoding and decoding embodiments to improve the performance of sub-block based coding, in particular when an affine motion coding mode is used.
In one exemplary aspect, there is provided a video processing method comprising: determining, based on a component type of a current video block, whether an interleaved prediction mode is applicable for a conversion between the current video block and a bitstream representation of the current video block; and in response to determining that the interleaved prediction mode applies to the current video block, performing the conversion by applying the interleaved prediction mode, wherein applying interleaved prediction comprises subdividing a portion of the current video block into at least one sub-block using more than one subdivision pattern, and generating a predictor for the current video block as a weighted average of the predictors determined for each of the more than one subdivision patterns.
In another exemplary aspect, there is provided a video processing method including: determining, based on a prediction direction of a current video block, whether an interleaved prediction mode is applicable for a conversion between the current video block and a bitstream representation of the current video block; and in response to determining that the interleaved prediction mode applies to the current video block, performing the conversion by applying the interleaved prediction mode, wherein applying interleaved prediction comprises subdividing a portion of the current video block into at least one sub-block using more than one subdivision pattern, and generating a predictor for the current video block as a weighted average of the predictors determined for each of the more than one subdivision patterns.
In another exemplary aspect, there is provided a video processing method including: determining, based on a low delay mode of a current picture, whether an interleaved prediction mode is applicable for a conversion between a current video block in the current picture and a bitstream representation of the current video block; and in response to determining that the interleaved prediction mode applies to the current video block, performing the conversion by applying the interleaved prediction mode, wherein applying interleaved prediction comprises subdividing a portion of the current video block into at least one sub-block using more than one subdivision pattern, and generating a predictor for the current video block as a weighted average of the predictors determined for each of the more than one subdivision patterns.
In another exemplary aspect, there is provided a video processing method including: determining, based on use of a current picture comprising a current video block as a reference, whether an interleaved prediction mode is applicable for a conversion between the current video block and a bitstream representation of the current video block; and in response to determining that the interleaved prediction mode applies to the current video block, performing the conversion by applying the interleaved prediction mode, wherein applying interleaved prediction comprises subdividing a portion of the current video block into at least one sub-block using more than one subdivision pattern, and generating a predictor for the current video block as a weighted average of the predictors determined for each of the more than one subdivision patterns.
In another exemplary aspect, there is provided a video processing method including: selectively performing, based on a video condition, interleaved prediction-based encoding of one or more components of the video from among a luma component, a first chroma component, and a second chroma component of a video frame, wherein performing interleaved prediction comprises determining a prediction block for a current block of a component of the video by: selecting a set of pixels of the component of the video frame to form a block; partitioning the block into a first set of sub-blocks according to a first pattern; generating a first intermediate prediction block based on the first set of sub-blocks; partitioning the block into a second set of sub-blocks according to a second pattern, wherein at least one sub-block in the second set is not in the first set; generating a second intermediate prediction block based on the second set of sub-blocks; and determining the prediction block based on the first intermediate prediction block and the second intermediate prediction block.
In yet another exemplary aspect, a video encoder apparatus implementing the video encoding methods described herein is disclosed.
In yet another representative aspect, the various techniques described herein are implemented as a computer program product stored on a non-transitory computer readable medium. The computer program product comprises program code for performing the methods described herein.
In yet another representative aspect, a video decoder device may implement the methods described herein.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
Drawings
Fig. 1 shows an example of sub-block based prediction.
FIG. 2 illustrates an example of a simplified affine motion model.
Fig. 3 shows an example of an affine Motion Vector Field (MVF) for each sub-block.
Fig. 4 shows an example of Motion Vector Prediction (MVP) in the AF_INTER mode.
Fig. 5A and 5B illustrate examples of candidates for the AF_MERGE coding mode.
Fig. 6 illustrates an exemplary process of Advanced Temporal Motion Vector Predictor (ATMVP) motion prediction for a Coding Unit (CU).
Fig. 7 shows an example of one CU with four sub-blocks (A-D) and its neighboring blocks (a-d).
FIG. 8 shows an example of optical flow trajectories in video coding.
Fig. 9A and 9B illustrate examples of the bi-directional optical flow (BIO) coding technique without block extension. Fig. 9A shows an example of access positions outside of a block, and fig. 9B shows an example of padding used to avoid additional memory accesses and computations.
FIG. 10 illustrates an example of bilateral matching.
Fig. 11 shows an example of template matching.
Fig. 12 shows an example of one-sided Motion Estimation (ME) in Frame Rate Up Conversion (FRUC).
Fig. 13 illustrates an exemplary implementation of interleaved prediction.
Fig. 14A to 14C show examples of partial interleaving prediction. The dashed lines represent the first subdivision pattern; the solid line represents the second subdivision pattern; the bold lines represent the regions where the interleaved prediction is applied. Outside this region, no interleaved prediction is applied.
Fig. 15 shows an example of weight values in sub-blocks. Exemplary weight values {Wa, Wb} are {3, 1}, {7, 1}, {5, 3}, {13, 3}, etc.
Fig. 16 illustrates an example of interleaved prediction with two subdivision patterns in accordance with the techniques of this disclosure.
Fig. 17A illustrates an example subdivision pattern in which a block is subdivided into 4×4 sub-blocks in accordance with the techniques of this disclosure.
Fig. 17B illustrates an example subdivision pattern in which a block is subdivided into 8×8 sub-blocks in accordance with the techniques of this disclosure.
Fig. 17C illustrates an example subdivision pattern in which a block is subdivided into 4×8 sub-blocks in accordance with the techniques of this disclosure.
Fig. 17D illustrates an example subdivision pattern in which a block is subdivided into 8×4 sub-blocks in accordance with the techniques of this disclosure.
Fig. 17E illustrates an example subdivision pattern in which a block is subdivided into non-uniform sub-blocks in accordance with the techniques of this disclosure.
Fig. 17F illustrates another example subdivision pattern in which a block is subdivided into non-uniform sub-blocks in accordance with the techniques of this disclosure.
Fig. 17G illustrates yet another example subdivision pattern in which a block is subdivided into non-uniform sub-blocks in accordance with the techniques of this disclosure.
Fig. 18 is a block diagram of an example of a hardware platform for implementing the video processing method described in this document.
Fig. 19 is a flow diagram of an exemplary method of video processing described in this document.
Fig. 20 is a flow diagram of another exemplary method of video processing described in this document.
Fig. 21 is a flow diagram of another exemplary method of video processing described in this document.
Fig. 22 is a flow diagram of another exemplary method of video processing described in this document.
Fig. 23 is a flow diagram of another exemplary method of video processing described in this document.
Detailed Description
Section headings are used in this document to improve readability, and the techniques and embodiments described in a section are not limited to only that section.
To improve the compression ratio of video, researchers are continually seeking new techniques for encoding video.
1. Introduction
The present invention relates to video/image coding technologies. In particular, it relates to sub-block based prediction in video/image coding. It may be applied to existing video coding standards such as HEVC, or to standards to be finalized (Versatile Video Coding, VVC).
2. Brief discussion
Sub-block based prediction was originally introduced into the video coding standard by HEVC annex I (3D-HEVC). With sub-block based prediction, a block, such as a Coding Unit (CU) or a Prediction Unit (PU), is subdivided into several non-overlapping sub-blocks. Different sub-blocks may be allocated different motion information such as reference indices or Motion Vectors (MVs) and each sub-block is individually Motion Compensated (MC). Fig. 1 illustrates the concept of sub-block based prediction.
In order to explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015. Since then, JVET has adopted many new methods and incorporated them into reference software named the Joint Exploration Model (JEM).
In JEM, sub-block based prediction is adopted in several coding tools, such as affine prediction, alternative temporal motion vector prediction (ATMVP), spatial-temporal motion vector prediction (STMVP), bi-directional optical flow (BIO), and frame rate up-conversion (FRUC).
2.1 Affine prediction
In HEVC, only the translational motion model is applied for Motion Compensated Prediction (MCP). In the real world, however, there are many kinds of motions such as zoom-in/zoom-out, rotation, perspective motion, and other irregular motions. In JEM, a simplified affine transform motion compensated prediction is applied. As shown in fig. 2, the affine motion field of a block is described by two control point motion vectors.
The Motion Vector Field (MVF) of a block is described by the following equation:
$$\begin{cases} v_x = \dfrac{(v_{1x}-v_{0x})}{w}\,x - \dfrac{(v_{1y}-v_{0y})}{w}\,y + v_{0x} \\[3mm] v_y = \dfrac{(v_{1y}-v_{0y})}{w}\,x + \dfrac{(v_{1x}-v_{0x})}{w}\,y + v_{0y} \end{cases} \tag{1}$$
where (v0x, v0y) is the motion vector of the top-left corner control point and (v1x, v1y) is the motion vector of the top-right corner control point.
To further simplify motion compensated prediction, sub-block based affine transform prediction is applied. The sub-block size M×N is derived as in equation (2), where MvPre is the motion vector fractional accuracy (1/16 in JEM) and (v2x, v2y) is the motion vector of the bottom-left control point, calculated according to equation (1).

$$\begin{cases} M = \mathrm{clip3}\left(4,\; w,\; \dfrac{w \times MvPre}{\max\left(\lvert v_{1x}-v_{0x}\rvert,\; \lvert v_{1y}-v_{0y}\rvert\right)}\right) \\[3mm] N = \mathrm{clip3}\left(4,\; h,\; \dfrac{h \times MvPre}{\max\left(\lvert v_{2x}-v_{0x}\rvert,\; \lvert v_{2y}-v_{0y}\rvert\right)}\right) \end{cases} \tag{2}$$
After being derived from equation (2), M and N should be adjusted downward, if necessary, to be divisors of w and h, respectively.
As shown in fig. 3, to derive the motion vector of each M × N sub-block, the motion vector of the center sample of each sub-block is calculated according to equation (1) and rounded to 1/16 fractional precision. Then, a motion compensated interpolation filter is applied to generate a prediction for each sub-block using the derived motion vectors.
After MCP, the high precision motion vector of each sub-block is rounded and saved with the same precision as the normal motion vector.
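As an illustration of how equation (1) is evaluated at the sub-block centers, the following Python sketch derives per-sub-block motion vectors from the two control-point motion vectors. The function name, the floating-point MV representation, and the dictionary output are assumptions made for illustration only; they are not part of the JEM reference software.

```python
def affine_subblock_mvs(v0, v1, w, h, sub_w, sub_h):
    """Sketch (not JEM source code): evaluate the 4-parameter affine model of
    equation (1) at the center sample of each sub-block and round the result
    to 1/16 fractional precision, as described above."""
    v0x, v0y = v0  # top-left control-point MV
    v1x, v1y = v1  # top-right control-point MV
    mvs = {}
    for j in range(h // sub_h):
        for i in range(w // sub_w):
            # center sample of the (i, j)-th sub-block
            x = i * sub_w + sub_w / 2.0
            y = j * sub_h + sub_h / 2.0
            vx = (v1x - v0x) / w * x - (v1y - v0y) / w * y + v0x
            vy = (v1y - v0y) / w * x + (v1x - v0x) / w * y + v0y
            mvs[(i, j)] = (round(vx * 16) / 16.0, round(vy * 16) / 16.0)
    return mvs
```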
In JEM, there are two affine motion modes: AF_INTER mode and AF_MERGE mode. For CUs with both width and height larger than 8, the AF_INTER mode can be applied. An affine flag at the CU level is signaled in the bitstream to indicate whether AF_INTER mode is used. In this mode, a candidate list with motion vector pairs {(v0, v1) | v0 = {vA, vB, vC}, v1 = {vD, vE}} is constructed using the neighboring blocks. As shown in fig. 4, v0 is selected from the motion vectors of block A, block B, or block C. The motion vector from the neighboring block is scaled according to the reference list and according to the relationship among the POC of the reference for the neighboring block, the POC of the reference for the current CU, and the POC of the current CU. The approach to selecting v1 from neighboring blocks D and E is similar. If the number of candidates in the candidate list is smaller than 2, the list is padded with motion vector pairs composed by duplicating each of the AMVP candidates. When the candidate list is larger than 2, the candidates are first sorted according to the consistency of the neighboring motion vectors (similarity of the two motion vectors in a candidate pair), and only the first two candidates are kept. An RD cost check is used to determine which motion vector pair candidate is selected as the control point motion vector predictor (CPMVP) of the current CU, and an index indicating the position of the CPMVP in the candidate list is signaled in the bitstream. After the CPMVP of the current affine CU is determined, affine motion estimation is applied and the control point motion vectors (CPMVs) are found. The differences between the CPMVs and the CPMVP are then signaled in the bitstream.
When a CU is coded in AF_MERGE mode, it gets the first block coded in affine mode from the valid neighboring reconstructed blocks, and the selection order for the candidate blocks is from left, above, above-right, below-left to above-left, as shown in fig. 5A. If the neighboring below-left block A is coded in affine mode, as shown in fig. 5B, the motion vectors v2, v3 and v4 of the top-left corner, above-right corner and below-left corner of the CU containing block A are derived. The motion vector v0 of the top-left corner of the current CU is calculated according to v2, v3 and v4. Then the motion vector v1 of the above-right corner of the current CU is calculated.
After the CPMVs v0 and v1 of the current CU are derived, the MVF of the current CU is generated according to the simplified affine motion model in equation (1). In order to identify whether the current CU is coded with AF_MERGE mode, an affine flag is signaled in the bitstream when there is at least one neighboring block coded in affine mode.
2.2 ATMVP
In the alternative temporal motion vector prediction (ATMVP) method, temporal motion vector prediction (TMVP) is modified by fetching multiple sets of motion information (including motion vectors and reference indices) from blocks smaller than the current CU. As shown in fig. 6, the sub-CUs are square N×N blocks (N is set to 4 by default).
ATMVP predicts the motion vectors of sub-CUs within a CU in two steps. The first step is to identify the corresponding block in the reference picture with a so-called temporal vector. The reference picture is also referred to as a motion source picture. The second step is to divide the current CU into sub-CUs and obtain the motion vector and reference index of each sub-CU from the corresponding block of each sub-CU, as shown in fig. 6.
In the first step, a reference picture and the corresponding block are determined from the motion information of the spatially neighboring blocks of the current CU. To avoid the repetitive scanning process of neighboring blocks, the first merge candidate in the merge candidate list of the current CU is used. The first available motion vector as well as its associated reference index are set to be the temporal vector and the index of the motion source picture. This way, the corresponding block can be identified more accurately in ATMVP than in TMVP, where the corresponding block (sometimes called a collocated block) is always in the bottom-right or center position relative to the current CU.
In the second step, a corresponding block of each sub-CU is identified by the temporal vector in the motion source picture, by adding the temporal vector to the coordinates of the current CU. For each sub-CU, the motion information of its corresponding block (the smallest motion grid that covers the center sample) is used to derive the motion information for the sub-CU. After the motion information of a corresponding N×N block is identified, it is converted to the motion vectors and reference indices of the current sub-CU in the same way as TMVP in HEVC, where motion scaling and other procedures apply. For example, if the low-delay condition is fulfilled (i.e., the POCs of all reference pictures of the current picture are smaller than the POC of the current picture), the decoder may use motion vector MVx (the motion vector corresponding to reference picture list X) to predict motion vector MVy (with X being equal to 0 or 1 and Y being equal to 1−X) for each sub-CU.
2.3 STMVP
In this method, the motion vectors of the sub-CUs are derived recursively in raster scan order. Fig. 7 illustrates this concept. Consider an 8×8 CU 700 that contains four 4×4 sub-CUs A, B, C, and D. The neighboring 4×4 blocks in the current frame are labeled a, b, c, and d.
The motion derivation for sub-CU A starts by identifying its two spatial neighbors. The first neighbor is the N×N block above sub-CU A (block c). If block c is not available or is intra coded, the other N×N blocks above sub-CU A are checked (from left to right, starting at block c). The second neighbor is the block to the left of sub-CU A (block b). If block b is not available or is intra coded, the other blocks to the left of sub-CU A are checked (from top to bottom, starting at block b). The motion information obtained from the neighboring blocks for each list is scaled to the first reference frame of the given list. Next, the temporal motion vector prediction (TMVP) of sub-block A is derived following the same procedure as the TMVP specified in HEVC: the motion information of the collocated block at location D is fetched and scaled accordingly. Finally, after retrieving and scaling the motion information, all available motion vectors (up to 3) are averaged separately for each reference list. The averaged motion vector is assigned as the motion vector of the current sub-CU.
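A minimal sketch of the final averaging step for one sub-CU and one reference list is given below. Obtaining and scaling the spatial and temporal candidates is assumed to have been done by the caller, and the function name and argument layout are assumptions for illustration only.

```python
def stmvp_average(candidate_mvs):
    """Average all available motion vectors (up to 3: above neighbor, left
    neighbor, and the TMVP at position D) for one reference list, as described
    above. candidate_mvs: list of (mvx, mvy) tuples, or None when a candidate
    is unavailable or intra coded."""
    available = [mv for mv in candidate_mvs if mv is not None]
    if not available:
        return None  # no motion information available for this list
    avg_x = sum(mv[0] for mv in available) / len(available)
    avg_y = sum(mv[1] for mv in available) / len(available)
    return (avg_x, avg_y)
```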
2.4 BIO
Bi-directional optical flow (BIO) is a sample-wise motion refinement to bi-directional prediction over block motion compensation. The motion refinement at sample level does not use signaling.
Let I^(k) be the luma value from reference k (k = 0, 1) after block motion compensation, and let ∂I^(k)/∂x and ∂I^(k)/∂y denote the horizontal and vertical components of the I^(k) gradient, respectively. Assuming the optical flow is valid, the motion vector field (vx, vy) is given by the equation:

$$\frac{\partial I^{(k)}}{\partial t} + v_x\,\frac{\partial I^{(k)}}{\partial x} + v_y\,\frac{\partial I^{(k)}}{\partial y} = 0. \tag{3}$$
Combining this optical flow equation with Hermite interpolation of the motion trajectory of each sample yields a unique third-order polynomial that matches both the function values I^(k) and the derivatives ∂I^(k)/∂x, ∂I^(k)/∂y at the ends. The value of this polynomial at t = 0 is the BIO prediction:

$$pred_{BIO} = \tfrac{1}{2}\Big(I^{(0)} + I^{(1)} + \tfrac{v_x}{2}\big(\tau_1\,\partial I^{(1)}/\partial x - \tau_0\,\partial I^{(0)}/\partial x\big) + \tfrac{v_y}{2}\big(\tau_1\,\partial I^{(1)}/\partial y - \tau_0\,\partial I^{(0)}/\partial y\big)\Big). \tag{4}$$
Here, τ0 and τ1 denote the distances to the reference frames, as shown in fig. 8, and are calculated based on the POC of Ref0 and Ref1: τ0 = POC(current) − POC(Ref0), τ1 = POC(Ref1) − POC(current). If both predictions come from the same temporal direction (either both from the past or both from the future), the signs are different (i.e., τ0·τ1 < 0). In this case, BIO is applied only if the predictions are not from the same time moment (i.e., τ0 ≠ τ1), both referenced regions have non-zero motion (MVx0, MVy0, MVx1, MVy1 ≠ 0), and the block motion vectors are proportional to the temporal distances (MVx0/MVx1 = MVy0/MVy1 = −τ0/τ1).
The motion vector field (vx, vy) is determined by minimizing the difference Δ between the values at points A and B (the intersections of the motion trajectory with the reference frame planes in figs. 9A and 9B). The model uses only the first linear term of a local Taylor expansion for Δ:

$$\Delta = I^{(0)} - I^{(1)} + v_x\big(\tau_1\,\partial I^{(1)}/\partial x + \tau_0\,\partial I^{(0)}/\partial x\big) + v_y\big(\tau_1\,\partial I^{(1)}/\partial y + \tau_0\,\partial I^{(0)}/\partial y\big). \tag{5}$$

All values in equation (5) depend on the sample location (i′, j′), which has been omitted from the notation so far. Assuming the motion is consistent in the local surrounding area, Δ is minimized inside a (2M+1)×(2M+1) square window Ω centered on the current predicted point (i, j), where M is equal to 2:

$$(v_x, v_y) = \arg\min_{v_x, v_y} \sum_{[i', j'] \in \Omega} \Delta^2[i', j']. \tag{6}$$
for this optimization problem, JEM uses a simplified approach, first minimizing in the vertical direction, and then minimizing in the horizontal direction. The results are as follows:
Figure GDA0003797479320000091
Figure GDA0003797479320000092
wherein the content of the first and second substances,
Figure GDA0003797479320000093
Figure GDA0003797479320000094
Figure GDA0003797479320000095
to avoid division by zero or a small value, regularization parameters r and m are introduced in equations (7) and (8).
r=500·4 d-8 (10)
m=700·4 d-8 (11)
Where d is the bit depth of the video sample.
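The following Python sketch shows how equations (7), (8), (10) and (11) combine to produce the refinement (vx, vy). The variable names and the floating-point arithmetic are illustrative assumptions rather than the fixed-point implementation used in the JEM.

```python
def clip3(lo, hi, x):
    """Clamp x into the range [lo, hi]."""
    return max(lo, min(hi, x))

def bio_refinement(s1, s2, s3, s5, s6, d, th_bio):
    """Sketch of equations (7) and (8): derive the motion refinement (vx, vy)
    from the gradient correlation sums s1..s6; d is the sample bit depth and
    th_bio is the clipping threshold thBIO."""
    r = 500 * 4 ** (d - 8)  # regularization, equation (10)
    m = 700 * 4 ** (d - 8)  # regularization, equation (11)
    vx = clip3(-th_bio, th_bio, -s3 / (s1 + r)) if (s1 + r) > m else 0.0
    vy = clip3(-th_bio, th_bio, -(s6 - vx * s2 / 2) / (s5 + r)) if (s5 + r) > m else 0.0
    return vx, vy
```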
To keep the memory access for BIO the same as for regular bi-predictive motion compensation, all prediction and gradient values I^(k), ∂I^(k)/∂x, ∂I^(k)/∂y are calculated only for positions inside the current block. In equation (9), a (2M+1)×(2M+1) square window Ω centered on a current prediction point on the boundary of a prediction block needs to access positions outside the block (as shown in fig. 9A). In the JEM, the values of I^(k), ∂I^(k)/∂x, ∂I^(k)/∂y outside the block are set equal to the nearest available value inside the block. This can be implemented as padding, for example, as shown in fig. 9B.
With BIO, it is possible to refine the motion field for each sample. To reduce the computational complexity, a block-based design of BIO is used in the JEM. The motion refinement is calculated based on 4×4 blocks. In the block-based BIO, the values of s_n in equation (9) are aggregated over all samples in a 4×4 block, and the aggregated values of s_n are then used to derive the BIO motion vector offset for the 4×4 block. More specifically, the following equations are used for block-based BIO derivation:

$$\begin{aligned}
s_{1,b_k} &= \sum_{(x,y) \in b_k} \sum_{[i',j'] \in \Omega(x,y)} \big(\tau_1\,\partial I^{(1)}/\partial x + \tau_0\,\partial I^{(0)}/\partial x\big)^2, \\
s_{3,b_k} &= \sum_{(x,y) \in b_k} \sum_{[i',j'] \in \Omega(x,y)} \big(I^{(1)} - I^{(0)}\big)\big(\tau_1\,\partial I^{(1)}/\partial x + \tau_0\,\partial I^{(0)}/\partial x\big), \\
s_{2,b_k} &= \sum_{(x,y) \in b_k} \sum_{[i',j'] \in \Omega(x,y)} \big(\tau_1\,\partial I^{(1)}/\partial x + \tau_0\,\partial I^{(0)}/\partial x\big)\big(\tau_1\,\partial I^{(1)}/\partial y + \tau_0\,\partial I^{(0)}/\partial y\big), \\
s_{5,b_k} &= \sum_{(x,y) \in b_k} \sum_{[i',j'] \in \Omega(x,y)} \big(\tau_1\,\partial I^{(1)}/\partial y + \tau_0\,\partial I^{(0)}/\partial y\big)^2, \\
s_{6,b_k} &= \sum_{(x,y) \in b_k} \sum_{[i',j'] \in \Omega(x,y)} \big(I^{(1)} - I^{(0)}\big)\big(\tau_1\,\partial I^{(1)}/\partial y + \tau_0\,\partial I^{(0)}/\partial y\big),
\end{aligned}$$

where b_k denotes the set of samples belonging to the k-th 4×4 block of the prediction block. The s_n in equations (7) and (8) are replaced by ((s_{n,b_k}) >> 4) to derive the associated motion vector offsets.
In some cases, the MV refinement of BIO may be unreliable due to noise or irregular motion. Therefore, in BIO, the magnitude of the MV refinement is clipped to a threshold thBIO. The threshold value is determined based on whether all the reference pictures of the current picture are from one direction. If all the reference pictures of the current picture are from one direction, the value of the threshold is set to 12 × 2^(14−d); otherwise, it is set to 12 × 2^(13−d).
Gradients for BIO are calculated at the same time as the motion compensated interpolation, using operations consistent with the HEVC motion compensation process (2D separable FIR). The input for this 2D separable FIR is the same reference frame sample as for the motion compensation process, and the fractional position (fracX, fracY) according to the fractional part of the block motion vector. For the horizontal gradient ∂I/∂x, the signal is first interpolated vertically using BIOfilterS corresponding to the fractional position fracY with de-scaling shift d−8; the gradient filter BIOfilterG is then applied in the horizontal direction corresponding to the fractional position fracX with de-scaling shift 18−d. For the vertical gradient ∂I/∂y, the gradient filter is first applied vertically using BIOfilterG corresponding to the fractional position fracY with de-scaling shift d−8; the signal displacement is then performed using BIOfilterS in the horizontal direction corresponding to the fractional position fracX with de-scaling shift 18−d. The length of the interpolation filter for gradient calculation (BIOfilterG) and for signal displacement (BIOfilterS) is shorter (6-tap) in order to keep reasonable complexity. Table 1 shows the filters used for gradient calculation for different fractional positions of the block motion vector in BIO. Table 2 shows the interpolation filters used for prediction signal generation in BIO.
Table 1: Filters used for gradient calculation in BIO
Fractional pel position    Gradient interpolation filter (BIOfilterG)
0 {8,-39,-3,46,-17,5}
1/16 {8,-32,-13,50,-18,5}
1/8 {7,-27,-20,54,-19,5}
3/16 {6,-21,-29,57,-18,5}
1/4 {4,-17,-36,60,-15,4}
5/16 {3,-9,-44,61,-15,4}
3/8 {1,-4,-48,61,-13,3}
7/16 {0,1,-54,60,-9,2}
1/2 {-1,4,-57,57,-4,1}
Table 2: Interpolation filters used for prediction signal generation in BIO
In JEM, when the two predictions are from different reference pictures, the BIO is applied to all bi-prediction blocks. When LIC is enabled for a CU, BIO is disabled.
In the JEM, OBMC is applied to a block after the normal MC process. To reduce the computational complexity, BIO is not applied during the OBMC process. This means that BIO is applied in the MC process of a block only when its own MV is used, and is not applied in the MC process when the MV of a neighboring block is used during the OBMC process.
2.5 FRUC
When a CU's merge flag is true, the FRUC flag is signaled to that CU. When the FRUC flag is false, the Merge index is signaled and normal Merge mode is used. When the FRUC flag is true, an additional FRUC mode flag is signaled to indicate which method (bilateral matching or template matching) will be used to derive the motion information for the block.
At the encoder side, the decision on whether to use FRUC merge mode for a CU is based on RD cost selection as is done for normal merge candidates. In other words, two matching patterns (bilateral matching and template matching) for a CU are verified by using RD cost selection. The matching pattern that results in the smallest cost is further compared to other CU patterns. If the FRUC matching pattern is the most efficient pattern, the FRUC flag is set to true for the CU and the relevant matching pattern is used.
The motion derivation process in FRUC merge mode has two steps. CU-level motion search is performed first, followed by sub-CU-level motion refinement. At the CU level, an initial motion vector is derived for the entire CU based on bilateral matching or template matching. First, a list of MV candidates is generated and the candidate that results in the smallest matching cost is selected as the starting point for further CU-level refinement. Then, local search based on bilateral matching or template matching is performed around the starting point, and the MV that results in the minimum matching cost is taken as the MV of the entire CU. Subsequently, the motion information is further refined at the sub-CU level, with the derived CU motion vector as a starting point.
For example, the following derivation process is performed for W×H CU motion information derivation. At the first stage, the MV for the whole W×H CU is derived. At the second stage, the CU is further split into M×M sub-CUs. The value of M is calculated as in equation (16), where D is a predefined splitting depth which is set to 3 by default in the JEM. Then the MV for each sub-CU is derived.

$$M = \max\left\{4,\ \min\left\{\frac{W}{2^{D}},\ \frac{H}{2^{D}}\right\}\right\} \tag{16}$$
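A one-line sketch of equation (16) follows; the function name and the use of integer right shifts are assumptions made for illustration.

```python
def fruc_sub_cu_size(width, height, depth=3):
    """Equation (16): sub-CU size M for FRUC sub-CU level refinement,
    with the default splitting depth D = 3."""
    return max(4, min(width >> depth, height >> depth))
```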
As shown in fig. 10, bilateral matching is used to derive motion information of a current CU by finding the closest match between two blocks along the motion trajectory of the current CU in two different reference images. Under the assumption of a continuous motion trajectory, the motion vectors MV0 and MV1 pointing to the two reference blocks should be proportional to the temporal distance between the current picture and the two reference pictures, i.e., TD0 and TD 1. As a special case, the bilateral matching becomes a mirror-based bidirectional MV when the current picture is temporally between two reference pictures and the temporal distance from the current picture to the two reference pictures is the same.
As shown in fig. 11, template matching is used to derive motion information of the current CU by finding the closest match between a template (the top and/or left neighboring blocks of the current CU) in the current picture and a block (of the same size as the template) in a reference picture. In addition to the FRUC merge mode described above, template matching is also applied to AMVP mode. In the JEM, as in HEVC, AMVP has two candidates. With the template matching method, a new candidate is derived. If the newly derived candidate by template matching is different from the first existing AMVP candidate, it is inserted at the very beginning of the AMVP candidate list, and the list size is then set to two (meaning the second existing AMVP candidate is removed). When applied to AMVP mode, only the CU-level search is applied.
CU-LEVEL MV candidate set
The CU-level MV candidate set consists of:
(i) the original AMVP candidates if the current CU is in AMVP mode,
(ii) all merge candidates,
(iii) several MVs from the interpolated MV field (described later), and
(iv) top and left neighboring motion vectors.
When using bilateral matching, each valid MV of a merge candidate is used as an input to generate an MV pair with the assumption of bilateral matching. For example, one valid MV of a merge candidate is (MVa, refa) in reference list A. Then the reference picture refb of its paired bilateral MV is found in the other reference list B, so that refa and refb are temporally on different sides of the current picture. If such a refb is not available in reference list B, refb is determined as a reference different from refa whose temporal distance to the current picture is the minimal one in list B. After refb is determined, MVb is derived by scaling MVa based on the temporal distances between the current picture and refa and refb.
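A sketch of the MV scaling just described is shown below; the helper name and the simple linear scaling (without the integer scaling and clipping used in HEVC-style MV scaling) are assumptions for illustration.

```python
def bilateral_mv_pair(mva, poc_cur, poc_refa, poc_refb):
    """Derive the paired bilateral MV by scaling MVa according to the ratio of
    temporal distances from the current picture to refa and refb, as described
    above. mva is an (mvx, mvy) tuple; POC values are assumed inputs."""
    td_a = poc_cur - poc_refa  # signed temporal distance to refa
    td_b = poc_cur - poc_refb  # signed temporal distance to refb
    scale = td_b / td_a        # negative when refa and refb lie on different sides
    return (mva[0] * scale, mva[1] * scale)
```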
Four MVs from the interpolated MV field are also added to the CU level candidate list. More specifically, the interpolated MVs at positions (0, 0), (W/2, 0), (0, h/2), and (W/2, h/2) of the current CU are added.
When FRUC is applied to AMVP mode, the original AMVP candidate is also added to the CU level MV candidate set.
At the CU level, a maximum of 15 MVs for AMVP CUs and a maximum of 13 MVs for merging CUs are added to the candidate list.
sub-CU level MV candidate set
The sub-CU level MV candidate set consists of:
(i) the MV determined from the CU-level search,
(ii) top, left, top-left and top-right neighboring MVs,
(iii) scaled versions of collocated MVs from reference pictures,
(iv) up to 4 ATMVP candidates, and
(v) up to 4 STMVP candidates.
The scaled MV from the reference image is derived as follows. All reference pictures in both lists are traversed. The MVs at the collocated positions of the sub-CUs in the reference picture are scaled to the reference of the starting CU level MV.
ATMVP and STMVP candidates are limited to the first four.
At the sub-CU level, a maximum of 17 MVs are added to the candidate list.
Generation of the interpolated MV field
Before encoding a frame, an interpolated motion field is generated for the entire image based on one-sided ME. The motion field may then be used later as a CU-level or sub-CU-level MV candidate.
First, the motion domain of each reference image in the two reference lists is traversed at the 4 × 4 block level. For each 4 x4 block, if the motion associated with the block passes through a 4 x4 block in the current image (as shown in fig. 12) and the block has not been assigned any interpolated motion, the motion of the reference block is scaled to the current image according to temporal distances TD0 and TD1 (in the same way as the MV scaling of the TMVP in HEVC) and the scaled motion is assigned to the block in the current frame. If no scaled MVs are assigned to a 4 x4 block, the motion of the block is marked as unavailable in the interpolated motion domain.
Interpolation and matching costs
When the motion vector points to a fractional sample position, motion compensated interpolation is required. To reduce complexity, both the bilateral matching and the template matching use bilinear interpolation instead of the conventional 8-tap HEVC interpolation.
The matching cost is calculated somewhat differently at different steps. When selecting a candidate from the CU-level candidate set, the matching cost is the Sum of Absolute Differences (SAD) of the bilateral matching or template matching. After determining the starting MV, the matching cost C of the bilateral matching of the sub-CU level search is calculated as follows:
$$C = SAD + w \cdot \big(\lvert MV_x - MV_x^{s}\rvert + \lvert MV_y - MV_y^{s}\rvert\big)$$

where w is a weighting factor which is empirically set to 4, and MV and MV^s indicate the current MV and the starting MV, respectively. SAD is still used as the matching cost of template matching for the sub-CU level search.
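A direct transcription of this sub-CU level bilateral matching cost is sketched below; the function name is an assumption for illustration.

```python
def sub_cu_matching_cost(sad, mv, start_mv, w=4):
    """C = SAD + w * (|MVx - MVx_s| + |MVy - MVy_s|), with the weighting
    factor w empirically set to 4 as described above. mv and start_mv are
    (mvx, mvy) tuples."""
    return sad + w * (abs(mv[0] - start_mv[0]) + abs(mv[1] - start_mv[1]))
```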
In FRUC mode, MVs are derived by using only luma samples. The derived motion will be used for the luminance and chrominance of the MC inter prediction. After the MV is determined, the final MC is performed using an 8-tap interpolation filter for luminance and a 4-tap interpolation filter for chrominance.
MV refinement
MV refinement is a pattern-based MV search with a criterion of bilateral matching cost or template matching cost. In the JEM, two search patterns are supported: the unrestricted center-biased diamond search (UCBDS) and the adaptive cross search, for MV refinement at the CU level and the sub-CU level, respectively. For both CU-level and sub-CU-level MV refinement, the MV is directly searched at quarter-luma-sample MV accuracy, followed by one-eighth-luma-sample MV refinement. The search range of MV refinement for the CU step and the sub-CU step is set equal to 8 luma samples.
Selection of prediction direction in template matching FRUC merge mode
In the bilateral matching Merge mode, bi-prediction is always applied, since the motion information of a CU is derived based on the closest match between two blocks along the motion trajectory of the current CU in two different reference images. There is no such restriction on template matching Merge patterns. In the template matching Merge mode, the encoder may select among unidirectional prediction from list 0, unidirectional prediction from list 1, or bi-directional prediction for a CU. The selection is based on the template matching cost, as follows:
if costBi < = factor x min (cost 0, cost 1)
Using bi-directional prediction;
otherwise, if cost0< = cost1
Using one-way prediction from list 0;
Otherwise,
using unidirectional prediction from list 1;
where cost0 is the SAD for the list 0 template match, cost1 is the SAD for the list 1 template match, and costBi is the SAD for the bi-predictive template match. The value of factor is equal to 1.25, which means that the selection process is biased towards bi-directional prediction.
Inter prediction direction selection is only applied to the CU level template matching process.
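The selection rule above can be summarized by the following sketch (the function name and return strings are assumptions for illustration):

```python
def select_prediction_direction(cost0, cost1, cost_bi, factor=1.25):
    """Template matching FRUC merge mode: choose bi-prediction, uni-prediction
    from list 0, or uni-prediction from list 1 based on the matching costs.
    The factor of 1.25 biases the selection toward bi-prediction."""
    if cost_bi <= factor * min(cost0, cost1):
        return "bi-prediction"
    return "uni-prediction from list 0" if cost0 <= cost1 else "uni-prediction from list 1"
```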
2.6 Interleaved prediction
With interleaved prediction, a block is subdivided into sub-blocks with more than one subdivision pattern. A subdivision pattern is defined as the way a block is subdivided into sub-blocks, including the size of the sub-blocks and the positions of the sub-blocks. For each subdivision pattern, a corresponding prediction block can be generated by deriving the motion information of each sub-block based on that subdivision pattern. Therefore, even for one prediction direction, multiple prediction blocks can be generated with multiple subdivision patterns. Alternatively, for each prediction direction, only one subdivision pattern may be applied.
Suppose there are X subdivision patterns, and X prediction blocks of the current block, denoted P_0, P_1, ..., P_{X−1}, are generated by sub-block based prediction with the X subdivision patterns. The final prediction of the current block, denoted P, can be generated as

$$P(x, y) = \frac{\sum_{i=0}^{X-1} w_i(x, y)\, P_i(x, y)}{\sum_{i=0}^{X-1} w_i(x, y)},$$

where (x, y) are the coordinates of a pixel in the block and w_i(x, y) is the weight value of P_i. Without loss of generality, it is assumed that

$$\sum_{i=0}^{X-1} w_i(x, y) = 2^{N},$$

where N is a non-negative value. Fig. 13 shows an example of interleaved prediction with two subdivision patterns.
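A minimal sketch of this weighted combination is shown below, assuming the per-pattern predictions and per-sample weights are already available as NumPy arrays and that the weights at every position sum to 2^N; the names and array layout are assumptions for illustration.

```python
import numpy as np

def interleave_combine(preds, weights, n):
    """Combine X per-pattern prediction blocks with per-sample weights.
    preds, weights: lists of arrays of identical shape; the weights at each
    position are assumed to sum to 2**n, so a right shift replaces division."""
    acc = np.zeros_like(preds[0], dtype=np.int64)
    for p, w in zip(preds, weights):
        acc += w.astype(np.int64) * p.astype(np.int64)
    return acc >> n
```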
3. Exemplary problems addressed by the described embodiments
The affine merge MV derivation process has two potential drawbacks, as shown in fig. 5A and 5B.
First, the position of the top-left corner of a CU and the size of the CU must be stored by each 4×4 block belonging to the CU. This information does not need to be stored in HEVC.
Second, the decoder must access the MVs of the 4 × 4 blocks that are not adjacent to the current CU. In HEVC, the decoder only needs to access MVs of 4 × 4 blocks that neighbor the current CU.
4. Examples of the embodiments
We propose several methods to further improve sub-block based prediction, including interleaved prediction and the affine merge MV derivation process.
The following detailed description is to be considered as an example to explain the general concept. These inventions should not be construed in a narrow manner. Further, these inventions may be combined in any manner. Combinations between the present invention and other inventions are also applicable.
Use of interleaved prediction
1. In one embodiment, whether and how the interleaved prediction is applied may depend on the color component.
a. For example, interleaved prediction is applied only on the luminance component, and not on the chrominance component;
b. for example, the subdivision patterns are different for different color components;
c. for example, the weight values are different for different color components.
2. In one embodiment, whether and how the interleaved prediction is applied may depend on the inter prediction direction and/or on whether the reference pictures are the same.
a. For example, interleaved prediction may be used only for unidirectional prediction, not for bidirectional prediction.
b. For example, interleaved prediction may be used only for bi-directional prediction in which the two reference pictures of the two reference picture lists are the same.
c. In one example, interleaved prediction is disabled for Low Delay P (LDP) cases.
d. In one example, when predicting a current block from a current picture, interleaved prediction is also enabled.
Partially interleaved prediction
3. In one embodiment, interleaved prediction may be applied to portions of the entire block.
a. The second subdivision pattern may cover only parts of the entire block. The samples outside this section are not affected by the interleaved prediction.
b. The portion may exclude samples located at block boundaries, e.g., the first/last n rows or the first/last m columns.
c. The portion may exclude samples located at sub-blocks having a size different from a majority of sub-block sizes in the second subdivision pattern within the block.
d. Figs. 14A to 14C illustrate some examples of partial interleaved affine prediction. The first subdivision pattern is the same as in JEM, i.e., the top-left vertex of the (i, j)-th sub-block is at (i×w, j×h), and the MV of this sub-block is calculated from equation (1) with (x, y) = (i×w + w/2, j×h + h/2). The sub-block size is w×h for both subdivision patterns, for example w = h = 4 or w = h = 8.
i. In fig. 14A, the top-left vertex of the (i, j)-th sub-block of the second subdivision pattern is at (i×w + w/2, j×h), and the MV of this sub-block is calculated from equation (1) with (x, y) = (i×w + w, j×h + h/2).
ii. In fig. 14B, the top-left vertex of the (i, j)-th sub-block of the second subdivision pattern is at (i×w, j×h + h/2), and the MV of this sub-block is calculated from equation (1) with (x, y) = (i×w + w/2, j×h + h).
iii. In fig. 14C, the top-left vertex of the (i, j)-th sub-block of the second subdivision pattern is at (i×w + w/2, j×h + h/2), and the MV of this sub-block is calculated from equation (1) with (x, y) = (i×w + w, j×h + h).
Fig. 14A-14C illustrate examples of partial interleaved prediction. The dashed lines represent the first subdivision pattern; the solid line represents the second subdivision pattern; the bold lines indicate the regions where the interleaved prediction is to be applied. Outside this region, no interleaved prediction is applied.
Weight values in interleaved prediction
4. In one embodiment, there are two possible weight values Wa and Wb, satisfying Wa + Wb = 2^N. Exemplary weight values {Wa, Wb} are {3, 1}, {7, 1}, {5, 3}, {13, 3}, etc.
a. If the weight value w1 associated with the prediction sample P1 generated by the first subdivision pattern is the same as the weight value w2 associated with the prediction sample P2 generated by the second subdivision pattern (both equal to Wa or both equal to Wb), the final prediction P for this sample is calculated as P = (P1 + P2) >> 1 or P = (P1 + P2 + 1) >> 1.
b. If the weight value w1 associated with the prediction sample P1 generated by the first subdivision pattern is different from the weight value w2 associated with the prediction sample P2 generated by the second subdivision pattern ({w1, w2} = {Wa, Wb} or {w1, w2} = {Wb, Wa}), the final prediction P for this sample is calculated as P = (w1 × P1 + w2 × P2 + offset) >> N, where the offset may be 1 << (N−1) or 0.
c. It may be similarly extended to the case when there are more than 2 subdivision patterns.
5. In one embodiment, if sample A is closer than sample B to the position used to derive the MV of the sub-block, the weight value of sample A in the sub-block is larger than the weight value of sample B in the sub-block. Exemplary weight values for 4×4 sub-blocks, 4×2 sub-blocks, 2×4 sub-blocks, or 2×2 sub-blocks are shown in fig. 15.
Fig. 15 shows an example of weight values in sub-blocks. Exemplary weight values {Wa, Wb} are {3, 1}, {7, 1}, {5, 3}, {13, 3}, etc.
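A sketch of items 4.a and 4.b above for the two-pattern case is given below; the function name is an assumption, and Wa + Wb = 2^N is assumed to hold for the weights supplied by the caller.

```python
def combine_two_samples(p1, w1, p2, w2, n):
    """Final prediction of one sample from two subdivision patterns.
    Equal weights reduce to an average (item 4.a); otherwise the weighted sum
    with a rounding offset is used (item 4.b)."""
    if w1 == w2:
        return (p1 + p2 + 1) >> 1
    offset = 1 << (n - 1)
    return (w1 * p1 + w2 * p2 + offset) >> n
```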
Examples of interleaved prediction
Fig. 16 illustrates an example of interleaved prediction with two subdivision patterns in accordance with the disclosed technique. The current block 1300 can be subdivided with multiple patterns. For example, as shown in fig. 16, the current block is subdivided with pattern 0 (1301) and pattern 1 (1302). Two prediction blocks, P_0 (1303) and P_1 (1304), are generated. A final prediction block P (1305) of the current block 1300 can be generated by computing a weighted sum of P_0 (1303) and P_1 (1304).
In general, given X subdivision patterns, X prediction blocks of the current block, denoted P_0, P_1, ..., P_{X−1}, can be generated by sub-block based prediction with the X subdivision patterns. The final prediction of the current block, denoted P, can be generated as:

$$P(x, y) = \frac{\sum_{i=0}^{X-1} w_i(x, y)\, P_i(x, y)}{\sum_{i=0}^{X-1} w_i(x, y)} \tag{15}$$

Here, (x, y) are the coordinates of a pixel in the block, and w_i(x, y) is the weight coefficient of P_i. By way of example, and not limitation, the weights can be constrained as:

$$\sum_{i=0}^{X-1} w_i(x, y) = 2^{N} \tag{16}$$

where N is a non-negative value. Alternatively, using a shift operation, the weighted combination can also be expressed as:

$$P(x, y) = \left(\sum_{i=0}^{X-1} w_i(x, y)\, P_i(x, y)\right) \gg N.$$

Because the sum of the weights is a power of 2, the weighted sum P can be calculated more efficiently by performing a shift operation instead of a floating-point division.
The subdivision patterns can have different shapes, sizes, or positions of sub-blocks. In some embodiments, a subdivision pattern may include irregular sub-block sizes. Figs. 17A-17G show examples of several subdivision patterns for a 16×16 block. In fig. 17A, a block is subdivided into 4×4 sub-blocks in accordance with the disclosed technique. This pattern is also used in JEM. Fig. 17B illustrates an example of a subdivision pattern in which a block is subdivided into 8×8 sub-blocks in accordance with the disclosed technique. Fig. 17C illustrates an example of a subdivision pattern in which a block is subdivided into 8×4 sub-blocks in accordance with the disclosed technique. Fig. 17D illustrates an example of a subdivision pattern in which a block is subdivided into 4×8 sub-blocks in accordance with the disclosed technique. In fig. 17E, a portion of a block is subdivided into 4×4 sub-blocks in accordance with the disclosed technique, and the pixels at the block boundaries are subdivided into smaller sub-blocks with sizes such as 2×4, 4×2, or 2×2. Some sub-blocks may be merged to form larger sub-blocks. Fig. 17F shows an example in which adjacent sub-blocks (e.g., 4×4 sub-blocks and 2×4 sub-blocks) are merged to form larger sub-blocks with sizes 6×4, 4×6, or 6×6. In fig. 17G, a portion of a block is subdivided into 8×8 sub-blocks, while the pixels at the block boundaries are subdivided into smaller sub-blocks with sizes such as 8×4, 4×8, or 4×4.
In sub-block based prediction, the shape and size of the sub-blocks may be determined based on the shape and/or size of the coding block and/or coded block information. For example, in some embodiments, when the size of the current block is M×N, the size of the sub-blocks is 4×N (or 8×N, etc.), i.e., the sub-blocks have the same height as the current block. In some embodiments, when the size of the current block is M×N, the size of the sub-blocks is M×4 (or M×8, etc.), i.e., the sub-blocks have the same width as the current block. In some embodiments, when the size of the current block is M×N (where M > N), the size of the sub-blocks is A×B with A > B (e.g., 8×4). Alternatively, the size of the sub-blocks is B×A (e.g., 4×8).
In some embodiments, the size of the current block is M×N. The size of the sub-blocks is A×B when M×N <= T (or min(M, N) <= T, or max(M, N) <= T, etc.), and the size of the sub-blocks is C×D when M×N > T (or min(M, N) > T, or max(M, N) > T, etc.), where A <= C and B <= D. For example, if M×N <= 256, the size of the sub-blocks can be 4×4. In some implementations, the size of the sub-blocks is 8×8.
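One of the threshold rules above could be realized as in the following sketch; the threshold T = 256 and the 4×4/8×8 choices follow the example in the text, while the function name is an assumption for illustration.

```python
def choose_subblock_size(block_w, block_h, t=256):
    """Pick a sub-block size based on the current block area: 4x4 for small
    blocks (W*H <= T) and 8x8 otherwise, per the example above."""
    return (4, 4) if block_w * block_h <= t else (8, 8)
```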
In some embodiments, whether to apply interleaved prediction may be determined based on the inter prediction direction. For example, in some embodiments, interleaved prediction may be applied to bi-directional prediction but not to uni-directional prediction. As another example, when multiple-hypothesis prediction is applied and there is more than one reference block for one prediction direction, interleaved prediction may be applied to that prediction direction.
In some embodiments, how interleaved prediction is applied may be determined based on the inter prediction direction. In some embodiments, a bi-directionally predicted block with sub-block based prediction is subdivided into sub-blocks with two different subdivision patterns for the two different reference lists. For example, the bi-directionally predicted block is subdivided into 4×8 sub-blocks when predicted from reference list 0 (L0), as shown in fig. 17D, and the same block is subdivided into 8×4 sub-blocks when predicted from reference list 1 (L1), as shown in fig. 17C. The final prediction P is calculated as

$$P(x, y) = \big(w_0(x, y)\, P_0(x, y) + w_1(x, y)\, P_1(x, y)\big) \gg N.$$

Here, P_0 and P_1 are predictions from L0 and L1, respectively, and w_0 and w_1 are the weight values for L0 and L1, respectively. As shown in equation (16), the weight values can be determined as w_0(x, y) + w_1(x, y) = 1 << N (where N is a non-negative integer value). Because fewer sub-blocks are used for prediction in each direction (e.g., 4×8 sub-blocks as opposed to 4×4 sub-blocks), the computation requires less bandwidth as compared to existing sub-block based methods. By using larger sub-blocks, the prediction results are also less susceptible to noise interference.
In some embodiments, a uni-directionally predicted block with sub-block based prediction is subdivided into sub-blocks with two or more different subdivision patterns for the same reference list. For example, the prediction for list L (L = 0 or 1), denoted P^L, is calculated as

$$P^{L}(x, y) = \left(\sum_{i=0}^{X_L-1} w_i^{L}(x, y)\, P_i^{L}(x, y)\right) \gg N.$$

Here, X_L is the number of subdivision patterns for list L, P_i^L(x, y) is the prediction generated with the i-th subdivision pattern, and w_i^L(x, y) is the weight value of P_i^L(x, y). For example, when X_L is 2, two subdivision patterns are applied to list L: the block is subdivided into 4×8 sub-blocks in the first subdivision pattern, as shown in fig. 17D, and into 8×4 sub-blocks in the second subdivision pattern, as shown in fig. 17C.
In some embodiments, a bi-directionally predicted block with sub-block based prediction is regarded as a combination of two uni-directionally predicted blocks, from L0 and L1 respectively. The prediction from each list can be derived as described in the examples above. The final prediction P can be calculated as

$$P(x, y) = \big(a \cdot P_0(x, y) + b \cdot P_1(x, y)\big) \gg 1.$$

Here, the parameters a and b are two additional weights applied to the two intermediate prediction blocks. In this specific example, both a and b can be set to 1. Similar to the example above, because fewer sub-blocks are used for prediction in each direction (e.g., 4×8 sub-blocks as opposed to 4×4 sub-blocks), the bandwidth usage is better than, or on par with, existing sub-block based methods. At the same time, the prediction results can be improved by using larger sub-blocks.
In some embodiments, a single non-uniform pattern may be used in each uni-directionally predicted block. For example, for each list L (e.g., L0 or L1), the block is subdivided according to a different pattern (e.g., as shown in fig. 17E or 17F). Using a smaller number of sub-blocks reduces the bandwidth requirements. The non-uniformity of the sub-blocks also increases the robustness of the prediction results.
In some embodiments, for multi-hypothesis coded blocks, there may be more than one prediction block generated by different subdivision patterns for each prediction direction (or reference picture list). Multiple prediction blocks may be used to generate the final prediction with additional weights applied. For example, the additional weight may be set to 1/M, where M is the total number of generated prediction blocks.
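For the multi-hypothesis case just described, a minimal sketch of averaging M per-pattern prediction blocks with the additional weight 1/M (the simple arithmetic mean here is an assumption for illustration):

```python
import numpy as np

pred_blocks = [np.full((8, 8), v, dtype=np.float64) for v in (96, 100, 104)]
M = len(pred_blocks)
final = sum(pred_blocks) / M                 # additional weight 1/M per hypothesis
print(final[0, 0])                           # 100.0
```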
In some embodiments, the encoder may determine whether and how to apply interleaved prediction. The encoder may then transmit information corresponding to the determination to the decoder at a sequence level, a picture level, a view level, a slice level, a Coding Tree Unit (CTU) (also referred to as a Largest Coding Unit (LCU)) level, a CU level, a PU level, a Transform Unit (TU) level, or a region level, which may contain a plurality of CUs/PUs/TUs/LCUs. The information may be signaled in a Sequence Parameter Set (SPS), a View Parameter Set (VPS), a Picture Parameter Set (PPS), a Slice Header (SH), a CTU/LCU, a CU, a PU, a TU, or a first block of a region.
In some implementations, the interleaved prediction is applied to existing sub-block methods like affine prediction, ATMVP, STMVP, FRUC, or BIO. In such a case, no additional signaling cost is required. In some implementations, new sub-block merge candidates generated by interleaved prediction may be inserted into the merge list, e.g., interleaved prediction + ATMVP, interleaved prediction + STMVP, interleaved prediction + FRUC, etc.
In some embodiments, the subdivision pattern to be used by the current block may be derived based on information from spatially and/or temporally neighboring blocks. For example, instead of relying on the encoder to signal the relevant information, both the encoder and the decoder may employ a set of predetermined rules to obtain the subdivision patterns based on temporal adjacency (e.g., the previously used subdivision patterns of the same block) or spatial adjacency (e.g., the subdivision patterns used by neighboring blocks).
In some embodiments, the weight values w may be fixed. For example, all the subdivision patterns may be equally weighted: w_i(x, y) = 1. In some embodiments, the weight values may be determined based on the locations (x, y) and the subdivision patterns used. For example, w_i(x, y) may be different for different (x, y). In some embodiments, the weight values may further depend on the sub-block prediction based coding technique (e.g., affine or ATMVP) and/or other coding information (e.g., skip or non-skip mode, and/or MV information).
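One way to realize position-dependent weights is sketched below: samples in the interior of a sub-block of a given pattern receive a larger weight than samples on its boundary. The concrete 3-vs-1 weighting and the complementary second map (so the two patterns' weights sum to 4 = 1 << 2 at every sample) are assumptions made only for this example.

```python
import numpy as np

def pattern_weight(h, w, sub_h, sub_w, inner=3, outer=1):
    """Per-sample weight map: 'inner' inside each sub-block, 'outer' on its border."""
    wmap = np.full((h, w), inner, dtype=np.int32)
    wmap[::sub_h, :] = outer                 # top row of every sub-block
    wmap[sub_h - 1::sub_h, :] = outer        # bottom row
    wmap[:, ::sub_w] = outer                 # left column
    wmap[:, sub_w - 1::sub_w] = outer        # right column
    return wmap

w0 = pattern_weight(8, 8, 4, 4)              # weights for pattern 0
w1 = 4 - w0                                  # complementary weights for pattern 1
print(w0[0, 0], w1[0, 0])                    # 1 3: a boundary sample of pattern 0 leans on pattern 1
```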
In some embodiments, the encoder may determine the weight values and send the values to the decoder at a sequence level, picture level, slice level, CTU/LCU level, CU level, PU level, or area level (which may contain multiple CUs/PUs/TUs/LCUs). The weight value may be signaled in a Sequence Parameter Set (SPS), a Picture Parameter Set (PPS), a Slice Header (SH), a CTU/LCU, a CU, a PU, or a first block of a region. In some embodiments, the weight values may be derived from weight values of spatially and/or temporally neighboring blocks.
Note that the interleaved prediction techniques disclosed herein may be applied to one, some, or all of the coding techniques for sub-block based prediction. For example, the interleaved prediction technique may be applied to affine prediction, while other coding techniques based on prediction of sub-blocks (e.g., ATMVP, STMVP, FRUC, or BIO) do not use interleaved prediction. As another example, all of affine, ATMVP, and STMVP apply the interleaved prediction techniques disclosed herein.
Fig. 18 is a block diagram of an exemplary video processing device 1800. The device 1800 may be used to implement one or more of the methodologies described herein. The device 1800 may be implemented as a smartphone, tablet, computer, internet of things (IoT) receiver, and so on. The device 1800 may include one or more processors 1802, one or more memories 1804, and video processing hardware 1806. The processor(s) 1802 can be configured to implement one or more methods described in this document. Memory (es) 1804 may be used to store data and code for implementing the methods and techniques described herein. Video processing hardware 1806 may be used to implement some of the techniques described in this document in hardware circuitry.
Fig. 19 shows a flow diagram of an exemplary method 1900 of video processing. Method 1900 includes, at step 1902, determining whether an interleaved prediction mode is applicable for conversion between the current video block and a bitstream representation of the current video block based on a component type of the current video block. The method 1900 further includes, at step 1904, in response to determining that the interleaved prediction mode applies to the current video block, performing a transform by applying the interleaved prediction mode, wherein applying the interleaved prediction comprises subdividing a portion of the current video block into at least one sub-block using more than one subdivision pattern, and generating a predictor for the current video block that is a weighted average of the predictors determined for each of the more than one subdivision patterns.
Fig. 20 shows a flow diagram of an exemplary method 2000 of video processing. Method 2000 includes, at step 2002, determining whether an interleaved prediction mode is applicable for a transition between a current video block and a bitstream representation of the current video block based on a prediction direction of the current video block. The method 2000 further comprises, at step 2004, in response to determining that the interleaved prediction mode applies to the current video block, converting by applying the interleaved prediction mode, and wherein applying the interleaved prediction comprises subdividing part of the current video block into at least one sub-block using more than one subdivision pattern, and generating a predictor for the current video block that is a weighted average of the predictors determined for each of the more than one subdivision patterns.
Fig. 21 shows a flow diagram of an exemplary method 2100 of video processing. Method 2100 includes, at step 2102, determining whether an interleaved prediction mode is applicable for conversion between a current video block in a current picture and a bitstream representation of the current video block based on a low delay mode of the current picture. Method 2100 further comprises, at step 2104, in response to determining that the interleaved prediction mode applies to the current video block, converting by applying the interleaved prediction mode, and wherein applying the interleaved prediction comprises subdividing a portion of the current video block into at least one sub-block using more than one subdivision pattern, and generating a predictor for the current video block that is a weighted average of the predictors determined for each of the more than one subdivision patterns.
Fig. 22 shows a flow diagram of an exemplary method 2200 of video processing. Method 2200 includes, at step 2202, determining whether an interleaved prediction mode is applicable for conversion between the current video block and a bitstream representation of the current video block based on using a current picture comprising the current video block as a reference. Method 2200 further comprises, at step 2204, in response to determining that the interleaved prediction mode applies to the current video block, performing the conversion by applying the interleaved prediction mode, and wherein applying the interleaved prediction comprises subdividing part of the current video block into at least one sub-block using more than one subdivision pattern, and generating a predictor for the current video block that is a weighted average of the predictors determined for each of the more than one subdivision patterns.
Another exemplary method of video processing is provided. The method includes selectively performing (2300) interleaved prediction based encoding of one or more components of the video, from among a luma component, a first chroma component, and a second chroma component of a video frame, based on a video condition. Performing the interleaved prediction includes determining a prediction block for a current block of a component of the video by: selecting (2302) a set of pixels of the component of the video frame to form a block; partitioning (2304) the block into a first set of sub-blocks according to a first pattern; generating (2306) a first intermediate prediction block based on the first set of sub-blocks; partitioning (2308) the block into a second set of sub-blocks according to a second pattern, wherein at least one sub-block in the second set is not in the first set; generating (2310) a second intermediate prediction block based on the second set of sub-blocks; and determining (2312) a prediction block based on the first intermediate prediction block and the second intermediate prediction block.
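The steps of method 2300 can be sketched end to end as follows. The toy per-sub-block "motion compensation" that fills each sub-block with a constant, the two patterns (an aligned 4 × 4 grid and a grid offset by 2 samples), and the equal-weight average are all assumptions used only to illustrate the flow.

```python
import numpy as np

H, W, S = 8, 8, 4

def subblocks(h, w, size, offset):
    """Yield (y0, y1, x0, x1) rectangles of one interleaved partition pattern."""
    ys = [0] + list(range(offset, h, size)) + [h] if offset else list(range(0, h + 1, size))
    xs = [0] + list(range(offset, w, size)) + [w] if offset else list(range(0, w + 1, size))
    for y0, y1 in zip(ys[:-1], ys[1:]):
        for x0, x1 in zip(xs[:-1], xs[1:]):
            yield y0, y1, x0, x1

def predict(pattern_offset):
    """Toy sub-block prediction: one value per sub-block stands in for motion compensation."""
    pred = np.zeros((H, W), dtype=np.int32)
    for y0, y1, x0, x1 in subblocks(H, W, S, pattern_offset):
        # a real codec would derive a motion vector for this sub-block and interpolate here
        pred[y0:y1, x0:x1] = 10 * y0 + x0
    return pred

P1 = predict(0)            # first pattern: aligned 4x4 sub-blocks
P2 = predict(2)            # second pattern: sub-blocks offset by 2 samples
P = (P1 + P2 + 1) >> 1     # final prediction as an equal-weight average
print(P)
```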
Additional features and embodiments of the above-described methods/techniques are described below using clause-based formats.
1. A method of video processing, comprising:
determining whether an interleaved prediction mode is applicable for conversion between the current video block and a bitstream representation of the current video block based on a component type of the current video block; and
responsive to determining that the interleaved prediction mode is applicable to the current video block, performing the conversion by applying the interleaved prediction mode,
wherein applying interleaved prediction comprises subdividing part of the current video block into at least one sub-block using more than one subdivision pattern and generating a predictor for the current video block that is a weighted average of the predictors determined for each of the more than one subdivision patterns.
2. The method of clause 1, wherein the interleaved prediction is applied in response to the component type being equal to the luma component.
3. The method of clause 1, wherein the more than one subdivision pattern for a first color component of the video is different from the other more than one subdivision pattern for a second color component of the video.
4. The method of clause 1, wherein the weighted average uses weights, the value of which depends on the component type.
5. A method of video processing, comprising:
determining whether an interleaved prediction mode is applicable for conversion between the current video block and a bitstream representation of the current video block based on a prediction direction of the current video block; and
responsive to determining that the interleaved prediction mode is applicable to the current video block, performing the conversion by applying the interleaved prediction mode, and
wherein applying interleaved prediction comprises subdividing part of the current video block into at least one subblock using more than one subdivision pattern, and generating a predictor for the current video block that is a weighted average of the predictors determined for each of the more than one subdivision patterns.
6. The method of clause 5, wherein the interleaved prediction is applied in response to the prediction direction being equal to the unidirectional prediction.
7. The method of clause 5, wherein the interleaved prediction is applied in response to the prediction direction being equal to bi-directional.
8. A method of video processing, comprising:
determining whether an interleaved prediction mode is applicable for conversion between a current video block in a current picture and a bitstream representation of the current video block based on a low latency mode of the current picture; and
responsive to determining that the interleaved prediction mode is applicable to the current video block, performing the conversion by applying the interleaved prediction mode, and
wherein applying interleaved prediction comprises subdividing part of the current video block into at least one sub-block using more than one subdivision pattern and generating a predictor for the current video block that is a weighted average of the predictors determined for each of the more than one subdivision patterns.
9. The method of clause 8, wherein the low-latency mode for the current picture disables interleaved prediction.
10. A method of video processing, comprising:
determining whether an interleaved prediction mode is applicable for conversion between the current video block and a bitstream representation of the current video block based on using a current picture comprising the current video block as a reference; and
responsive to determining that the interleaved prediction mode is applicable to the current video block, performing the conversion by applying the interleaved prediction mode, and
wherein applying interleaved prediction comprises subdividing part of the current video block into at least one sub-block using more than one subdivision pattern and generating a predictor for the current video block that is a weighted average of the predictors determined for each of the more than one subdivision patterns.
11. The method of clause 10, wherein when predicting the current video block from the current picture, interleaved prediction is enabled.
12. The method of clause 1, 5, 8, or 10, wherein the portion of the current video block includes less than all of the current video block.
13. The method of any of clauses 1-12, wherein the interleaved prediction mode is applied to a portion of the current video block.
14. The method of clause 13, wherein at least one of the subdivision patterns covers only a portion of the current video block.
15. The method of clause 14, wherein the portion excludes samples located at the boundary of the current video block.
16. The method of clause 14, wherein the portion excludes samples located in sub-blocks having a size different from the majority of sub-block sizes within the current video block.
17. The method of clause 1, 5, 8, 10, or 12, wherein subdividing a portion of a current video block further comprises:
partitioning a current video block into a first set of sub-blocks according to a first pattern of subdivision patterns;
generating a first intermediate prediction block based on the first set of sub-blocks;
partitioning the current video block into a second set of sub-blocks according to a second pattern of the subdivision patterns, wherein at least one sub-block in the second set is not in the first set; and
generating a second intermediate prediction block based on the second set of sub-blocks.
18. The method of clause 17, wherein the weight value w1 associated with the prediction samples generated by the first pattern is the same as the weight value w2 associated with the prediction samples generated by the second pattern, and the final prediction P is calculated as P = (P1 + P2) >> 1 or P = (P1 + P2 + 1) >> 1.
19. The method of clause 17, wherein the weight value w1 associated with the prediction samples generated by the first pattern and the weight value w2 associated with the prediction samples generated by the second pattern are different, and the final prediction P is calculated as P = (w1 × P1 + w2 × P2 + offset) >> N, where the offset is 1 << (N-1) or 0.
20. The method of clause 17, wherein the first weight Wa of the first intermediate prediction block and the second weight Wb of the second intermediate prediction block satisfy the condition Wa + Wb = 2^N, wherein N is an integer.
21. The method of clause 17, wherein the weight value of the first sample in the sub-block is greater than the weight value of the second sample in the sub-block if the first sample is closer to the position from which the motion vector of the sub-block is derived than the second sample.
22. A method of video processing, comprising:
selectively based on video conditions, performing an interleaved prediction-based encoding of one or more components of the video from among a luma component, a first chroma component, and a second chroma component of a video frame, wherein performing interleaved prediction comprises determining a prediction block for a current block of the components of the video by:
selecting a set of pixels of a component of a video frame to form a block;
partitioning a block into a first set of sub-blocks according to a first pattern;
generating a first intermediate prediction block based on the first set of sub-blocks;
partitioning the block into a second set of sub-blocks according to a second pattern, wherein at least one sub-block in the second set is not in the first set;
generating a second intermediate prediction block based on the second set of sub-blocks; and
determining a prediction block based on the first intermediate prediction block and the second intermediate prediction block.
23. The method of clause 22, wherein the prediction block is formed using interleaved prediction only for the luma component.
24. The method of clause 22, wherein different components of the video are segmented using different first patterns or second patterns.
25. The method of any of clauses 22-24, wherein the video conditions include a direction of prediction, and wherein the interleaved prediction is performed only for one of the unidirectional prediction or the bidirectional prediction, and not for the other of the unidirectional prediction and the bidirectional prediction.
26. The method of clause 22, wherein the video conditions include using a low-delay P coding mode, and wherein, in the case of using the low-delay P coding mode, the method includes refraining from performing the interleaved prediction.
27. The method of clause 22, wherein the video condition comprises using a current picture including the current block as a reference for prediction.
28. The method of any of clauses 1-27, wherein the interleaving-based predictive encoding comprises using a first set of sub-blocks and a second set of sub-blocks from only a portion of the current block.
29. The method of clause 28, wherein the smaller portion of the current block excludes samples in the boundary region of the current block.
30. The method of clause 28, wherein the interleaving-based predictive encoding using the partial current block includes affine prediction using the partial current block.
31. The method of clause 22, wherein determining the prediction block includes determining the prediction block using a weighted average of the first intermediate prediction block and the second intermediate prediction block.
32. The method of clause 31, wherein the first weight Wa of the first intermediate prediction block and the second weight Wb of the second intermediate prediction block satisfy the condition Wa + Wb = 2^N, wherein N is an integer.
33. The method of clause 32, wherein Wa =3 and Wb =1.
34. The method of clause 22, wherein the weight value of the first sample in the sub-block is greater than the weight value of the second sample in the sub-block when the first sample is closer than the second sample to a location at which the motion vector of the sub-block is derived.
35. An apparatus comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the method in one or more of clauses 1-34.
36. A computer program product, stored on a non-transitory computer readable medium, comprising program code for performing the method of one or more of clauses 1-34.
From the foregoing, it will be appreciated that specific embodiments of the disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the techniques of this disclosure are not limited except as by the appended claims.
The disclosure and other embodiments, modules, and functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments may be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer-readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a combination of substances that affect a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language file), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such a device. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, such as internal hard disks or removable disks; magneto-optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples have been described, and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims (49)

1. A method of video processing, comprising:
determining whether an interleaved prediction mode is applicable for a conversion between the current video block and a bitstream of the current video block based on a component type of the current video block; and
in response to determining that the interleaved prediction mode applies to the current video block, performing the conversion by applying the interleaved prediction mode;
wherein applying the interleaved prediction comprises subdividing a portion of the current video block into at least one sub-block using more than one subdivision pattern and generating a prediction of the current video block, the prediction of the current video block being a weighted average of the predictions determined for each of the more than one subdivision patterns.
2. The method of claim 1, wherein the interleaved prediction is applied in response to the component type being equal to a luma component.
3. The method of claim 1, wherein the more than one subdivision patterns for a first color component of video are different from the other more than one subdivision patterns for a second color component of video.
4. The method of claim 1, wherein the weighted average uses weights whose values depend on the component type.
5. The method of claim 1, wherein the portion of the current video block comprises less than all of the current video block.
6. The method of claim 1, wherein the interleaved prediction mode is applied to a portion of the current video block.
7. The method of claim 1, wherein the interleaving-based predictive encoding comprises using a first set of the sub-blocks and a second set of the sub-blocks from only a portion of the current video block.
8. The method of claim 1, wherein the subdividing a portion of a current video block further comprises:
partitioning the current video block into a first set of sub-blocks according to a first pattern of the subdivision patterns;
generating a first intermediate prediction block based on the first set of sub-blocks;
partitioning the current video block into a second set of sub-blocks according to a second pattern of the subdivision patterns, wherein at least one sub-block in the second set is not in the first set; and
generating a second intermediate prediction block based on the second set of sub-blocks.
9. A method of video processing, comprising:
determining whether an interleaved prediction mode is applicable for a conversion between the current video block and a bitstream of the current video block based on a prediction direction of the current video block; and
responsive to determining that the interleaved prediction mode applies to the current video block, the converting is performed by applying the interleaved prediction mode, and
wherein applying the interleaved prediction comprises subdividing a portion of the current video block into at least one sub-block using more than one subdivision pattern and generating a prediction of the current video block, the prediction of the current video block being a weighted average of the predictions determined for each of the more than one subdivision patterns.
10. The method of claim 9, wherein the interleaved prediction is applied in response to the prediction direction being equal to unidirectional prediction.
11. The method of claim 9, wherein the interleaved prediction is applied in response to the prediction direction being equal to bi-directional.
12. The method of claim 9, wherein the portion of the current video block comprises less than all of the current video block.
13. The method of claim 9, wherein the interleaved prediction mode is applied to a portion of the current video block.
14. The method of claim 9, wherein the interleaving-based predictive encoding comprises using a first set of the sub-blocks and a second set of the sub-blocks from only a portion of the current video block.
15. The method of claim 9, wherein the subdividing a portion of a current video block further comprises:
partitioning the current video block into a first set of sub-blocks according to a first pattern of the subdivision patterns;
generating a first intermediate prediction block based on the first set of sub-blocks;
partitioning the current video block into a second set of sub-blocks according to a second pattern of the subdivision patterns, wherein at least one sub-block in the second set is not in the first set; and
generating a second intermediate prediction block based on the second set of sub-blocks.
16. A method of video processing, comprising:
determining whether an interleaved prediction mode is applicable for a conversion between a current video block in a current picture and a bitstream of the current video block based on a low delay mode of the current picture; and
responsive to determining that the interleaved prediction mode applies to the current video block, the converting is performed by applying the interleaved prediction mode, and
wherein applying the interleaved prediction comprises subdividing a portion of the current video block into at least one sub-block using more than one subdivision pattern and generating a prediction of the current video block, the prediction of the current video block being a weighted average of the predictions determined for each of the more than one subdivision patterns.
17. The method of claim 16, wherein the interleaved prediction is disabled for a low delay mode of the current picture.
18. The method of claim 16, wherein the portion of the current video block includes less than all of the current video block.
19. The method of claim 16, wherein the interleaved prediction mode is applied to a portion of the current video block.
20. The method of claim 16, wherein the interleaving-based predictive encoding comprises using a first set of the sub-blocks and a second set of the sub-blocks from only a portion of the current video block.
21. The method of claim 16, wherein the subdividing a portion of a current video block further comprises:
partitioning the current video block into a first set of sub-blocks according to a first pattern of the subdivision patterns;
generating a first intermediate prediction block based on the first set of sub-blocks;
partitioning the current video block into a second set of sub-blocks according to a second pattern of the subdivision patterns, wherein at least one sub-block in the second set is not in the first set; and
generating a second intermediate prediction block based on the second set of sub-blocks.
22. A method of video processing, comprising:
determining whether an interleaved prediction mode is applicable for conversion between the current video block and a bitstream of the current video block based on using a current picture comprising the current video block as a reference; and
responsive to determining that the interleaved prediction mode applies to the current video block, the converting is performed by applying the interleaved prediction mode, and
wherein applying the interleaved prediction comprises subdividing a portion of the current video block into at least one sub-block using more than one subdivision pattern and generating a prediction of the current video block, the prediction of the current video block being a weighted average of the predictions determined for each of the more than one subdivision patterns.
23. The method of claim 22, wherein the interleaved prediction is enabled when predicting the current video block from the current picture.
24. The method of claim 22, wherein the portion of the current video block includes less than all of the current video block.
25. The method of claim 22, wherein the interleaved prediction mode is applied to a portion of the current video block.
26. The method of claim 25, wherein at least one of the subdivision patterns covers only a portion of the current video block.
27. The method of claim 26, wherein the portion excludes samples located at boundaries of the current video block.
28. The method of claim 26, wherein the portion excludes samples located in sub-blocks that have a size different from the majority of sub-block sizes within the current video block.
29. The method of claim 22, wherein the subdividing a portion of a current video block further comprises:
partitioning the current video block into a first set of sub-blocks according to a first pattern of the subdivision patterns;
generating a first intermediate prediction block based on the first set of sub-blocks;
partitioning the current video block into a second set of sub-blocks according to a second pattern of the subdivision patterns, wherein at least one sub-block in the second set is not in the first set; and
generating a second intermediate prediction block based on the second set of sub-blocks.
30. The method of claim 29, wherein a weight value w1 associated with a prediction sample generated by the first pattern is the same as a weight value w2 associated with a prediction sample generated by the second pattern, and a final prediction P is calculated as P = (P1 + P2) >> 1 or P = (P1 + P2 + 1) >> 1, wherein P1 and P2 represent a prediction generated by the first pattern and a prediction generated by the second pattern, respectively, and wherein >> represents a right shift operation, N being an integer greater than 0.
31. The method of claim 29, wherein a weight value w1 associated with prediction samples generated by the first pattern is different from a weight value w2 associated with prediction samples generated by the second pattern, and a final prediction P is calculated as P = (w1 × P1 + w2 × P2 + offset) >> N, where offset is 1 << (N-1) or 0, where P1 and P2 represent the prediction generated by the first pattern and the prediction generated by the second pattern, respectively, and where >> represents a right shift operation and << represents a left shift operation, N being an integer greater than 0.
32. The method of claim 29, wherein a first weight Wa of the first intermediate prediction block and a second weight Wb of the second intermediate prediction block satisfy a condition Wa + Wb = 2^N, wherein N is an integer.
33. The method of claim 29, wherein a weight value for a first sample in a sub-block is greater than a weight value for a second sample in a sub-block if the first sample is closer than the second sample to a location at which a motion vector for the sub-block is derived.
34. The method of claim 22, wherein the interleaving-based predictive encoding comprises using a first set of the sub-blocks and a second set of the sub-blocks from only a portion of the current video block.
35. A method of video processing, comprising:
selectively based on video conditions, performing an interleaved prediction-based encoding of one or more components of the video from among a luma component, a first chroma component, and a second chroma component of a video frame, wherein performing the interleaved prediction comprises determining a prediction block for a current video block of a component of the video by:
selecting a set of pixels of the component of the video frame to form a block;
partitioning the block into a first set of sub-blocks according to a first pattern;
generating a first intermediate prediction block based on the first set of sub-blocks;
partitioning the block into a second set of sub-blocks according to a second pattern, wherein at least one sub-block in the second set is not in the first set;
generating a second intermediate prediction block based on the second set of sub-blocks; and
determining a prediction block based on a weighted average of the first intermediate prediction block and the second intermediate prediction block.
36. The method of claim 35, wherein a prediction block is formed using interleaved prediction only for the luma component.
37. The method of claim 35, wherein different components of the video are segmented using different first or second patterns.
38. The method of any of claims 35-37, wherein the video conditions comprise a direction of prediction, and wherein the interleaved prediction is performed only for one of unidirectional prediction or bidirectional prediction and not for the other of unidirectional prediction and bidirectional prediction.
39. The method of claim 35, wherein the video condition comprises using a low-latency P coding mode, and wherein, if the low-latency P coding mode is used, the method includes refraining from performing the interleaved prediction.
40. The method of claim 35, wherein the video condition comprises using a current picture that includes the current video block as a reference for the prediction.
41. The method of claim 35, wherein the interleaving-based predictive encoding comprises using a first set of the sub-blocks and a second set of the sub-blocks from only a portion of the current video block.
42. The method of claim 41, wherein a smaller portion of the current video block excludes samples in boundary regions of the current video block.
43. The method of claim 41, wherein interleaving-based predictive encoding that uses a portion of the current video block comprises affine prediction using the portion of the current video block.
44. The method of claim 35, wherein determining the prediction block includes determining the prediction block using a weighted average of the first intermediate prediction block and the second intermediate prediction block.
45. The method of claim 44, wherein a first weight Wa of the first intermediate prediction block and a second weight Wb of the second intermediate prediction block satisfy a condition Wa + Wb = 2^N, wherein N is an integer.
46. The method of claim 45, wherein Wa =3 and Wb =1.
47. The method of claim 35, wherein a weight value for a first sample in a sub-block is greater than a weight value for a second sample in a sub-block when the first sample is closer than the second sample to a location at which a motion vector for the sub-block is derived.
48. An apparatus for video processing comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the method of any of claims 1 to 47.
49. A non-transitory computer readable medium having stored thereon instructions that, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 47.
CN201910586571.0A 2018-07-01 2019-07-01 Method, apparatus and non-transitory computer-readable medium for video processing Active CN110677674B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2018093943 2018-07-01
CNPCT/CN2018/093943 2018-07-01

Publications (2)

Publication Number Publication Date
CN110677674A CN110677674A (en) 2020-01-10
CN110677674B true CN110677674B (en) 2023-03-31

Family

ID=67297214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910586571.0A Active CN110677674B (en) 2018-07-01 2019-07-01 Method, apparatus and non-transitory computer-readable medium for video processing

Country Status (3)

Country Link
CN (1) CN110677674B (en)
TW (1) TWI705696B (en)
WO (1) WO2020008325A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019229683A1 (en) 2018-05-31 2019-12-05 Beijing Bytedance Network Technology Co., Ltd. Concept of interweaved prediction
WO2020140948A1 (en) 2019-01-02 2020-07-09 Beijing Bytedance Network Technology Co., Ltd. Motion vector derivation between dividing patterns
US11025951B2 (en) * 2019-01-13 2021-06-01 Tencent America LLC Method and apparatus for video coding
WO2020143826A1 (en) * 2019-01-13 2020-07-16 Beijing Bytedance Network Technology Co., Ltd. Interaction between interweaved prediction and other coding tools
CN115665409A (en) * 2020-06-03 2023-01-31 北京达佳互联信息技术有限公司 Method and apparatus for encoding video data
CN117296324A (en) * 2021-05-17 2023-12-26 抖音视界有限公司 Video processing method, apparatus and medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140003527A1 (en) * 2011-03-10 2014-01-02 Dolby Laboratories Licensing Corporation Bitdepth and Color Scalable Video Coding
EP3078196B1 (en) * 2013-12-06 2023-04-05 MediaTek Inc. Method and apparatus for motion boundary processing
US10230980B2 (en) * 2015-01-26 2019-03-12 Qualcomm Incorporated Overlapped motion compensation for video coding

Also Published As

Publication number Publication date
TW202007154A (en) 2020-02-01
TWI705696B (en) 2020-09-21
WO2020008325A1 (en) 2020-01-09
CN110677674A (en) 2020-01-10

Similar Documents

Publication Publication Date Title
US20240098295A1 (en) Efficient affine merge motion vector derivation
CN110581999B (en) Chroma decoder side motion vector refinement
CN112868240B (en) Collocated local illumination compensation and modified inter prediction codec
US11889108B2 (en) Gradient computation in bi-directional optical flow
CN110620932B (en) Mode dependent motion vector difference accuracy set
CN112868239B (en) Collocated local illumination compensation and intra block copy codec
US11956465B2 (en) Difference calculation based on partial position
CN110677674B (en) Method, apparatus and non-transitory computer-readable medium for video processing
US20210235083A1 (en) Sub-block based prediction
CN113302918A (en) Weighted prediction in video coding and decoding
CN110740321B (en) Motion prediction based on updated motion vectors
CN113841396B (en) Simplified local illumination compensation
CN110876064B (en) Partially interleaved prediction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant