CN113727107A - Video decoding method and apparatus, and video encoding method and apparatus - Google Patents


Info

Publication number
CN113727107A
Authority
CN
China
Prior art keywords
current
bio
sub
dmvr
prediction
Prior art date
Legal status
Pending
Application number
CN202110580687.0A
Other languages
Chinese (zh)
Inventor
修晓宇
陈伟
郭哲瑋
陈漪纹
马宗全
朱弘正
王祥林
于冰
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Publication of CN113727107A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/577Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present disclosure relates to a video decoding method and apparatus and a video encoding method and apparatus. The video decoding method includes: obtaining, from a bitstream, a plurality of coding units (CUs) into which a video image is divided; in a case where a current CU among the plurality of CUs is a bi-directionally predicted CU, obtaining information of bi-directional prediction samples of the current CU; and selectively skipping, based on the information of the bi-directional prediction samples of the current CU, at least one of a bi-directional optical flow (BIO) process and a decoder-side motion vector refinement (DMVR) process associated with the current CU. Embodiments of the present disclosure improve and optimize video encoding and decoding techniques.

Description

Video decoding method and apparatus, and video encoding method and apparatus
Technical Field
The present disclosure relates to the field of video encoding and decoding, and more particularly, to a video decoding method and apparatus and a video encoding method and apparatus.
Background
Various video coding techniques may be used to compress video data. Video coding is performed according to one or more video coding standards. For example, some well-known video coding standards include Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC, also known as H.265 or MPEG-H Part 2), and Advanced Video Coding (AVC, also known as H.264 or MPEG-4 Part 10), developed jointly by ISO/IEC MPEG and ITU-T VCEG. AOMedia Video 1 (AV1) was developed by the Alliance for Open Media (AOM) as a successor to its previous standard VP9. The Audio Video coding Standard (AVS), which refers to a series of digital audio and digital video compression standards, is another video compression standard family established by the Audio and Video coding Standard Workgroup of China.
Most existing video coding standards build on the well-known hybrid video coding framework, i.e., they use block-based prediction methods (e.g., inter prediction, intra prediction) to reduce the redundancy present in a video image or sequence, and use transform coding to compact the energy of the prediction error. An important goal of video coding techniques is to compress video data into a form that uses a lower bit rate while avoiding or minimizing degradation of video quality. However, the various current video codec standards and techniques still require further optimization and improvement in several respects.
Disclosure of Invention
The present disclosure provides a video decoding method and apparatus and a video encoding method and apparatus that improve and optimize video encoding and decoding techniques, so as to solve at least the technical problems of the above standards or techniques, and potentially other technical problems.
According to a first aspect of the embodiments of the present disclosure, there is provided a video decoding method, including: obtaining, from a bitstream, a plurality of coding units (CUs) into which a video image is divided; in a case where a current CU among the plurality of CUs is a bi-directionally predicted CU, obtaining information of bi-directional prediction samples of the current CU; and selectively skipping, based on the information of the bi-directional prediction samples of the current CU, at least one of a bi-directional optical flow (BIO) process and a decoder-side motion vector refinement (DMVR) process associated with the current CU.
According to a second aspect of the embodiments of the present disclosure, there is provided a video encoding method, including: dividing a video image into a plurality of coding units (CUs); in a case where a current CU among the plurality of CUs is a bi-directionally predicted CU, obtaining information of bi-directional prediction samples of the current CU; and selectively skipping, based on the information of the bi-directional prediction samples of the current CU, at least one of a bi-directional optical flow (BIO) process and a decoder-side motion vector refinement (DMVR) process associated with the current CU.
According to a third aspect of the embodiments of the present disclosure, there is provided a video decoding apparatus, including: a first acquisition unit configured to obtain, from a bitstream, a plurality of coding units (CUs) into which a video image is divided; a second acquisition unit configured to obtain, in a case where a current CU among the plurality of CUs is a bi-directionally predicted CU, information of bi-directional prediction samples of the current CU; and an execution unit configured to selectively skip, based on the information of the bi-directional prediction samples of the current CU, at least one of a bi-directional optical flow (BIO) process and a decoder-side motion vector refinement (DMVR) process associated with the current CU.

According to a fourth aspect of the embodiments of the present disclosure, there is provided a video encoding apparatus, including: a dividing unit configured to divide a video image into a plurality of coding units (CUs); an acquisition unit configured to obtain, in a case where a current CU among the plurality of CUs is a bi-directionally predicted CU, information of bi-directional prediction samples of the current CU; and an execution unit configured to selectively skip, based on the information of the bi-directional prediction samples of the current CU, at least one of a bi-directional optical flow (BIO) process and a decoder-side motion vector refinement (DMVR) process associated with the current CU.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform a video decoding method or a video encoding method according to the present disclosure.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by at least one processor, cause the at least one processor to perform a video decoding method or a video encoding method according to the present disclosure.
According to an eighth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer instructions which, when executed by at least one processor, implement a video decoding method or a video encoding method according to the present disclosure. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a block diagram illustrating a general block-based video coding system.
Fig. 2 is a diagram showing block division in the AVS3 standard.
Fig. 3 is a block diagram illustrating a general block-based video decoding system.
Fig. 4 is a schematic diagram showing the BIO mode.
Fig. 5 is a diagram illustrating a DMVR mode.
Fig. 6 is a diagram illustrating integer search candidates for DMVR.
Fig. 7 is a flow chart illustrating a motion compensation process with DMVR and BIO.
Fig. 8 is a flowchart illustrating a video decoding method according to an exemplary embodiment of the present disclosure.
Fig. 9 is a flowchart illustrating adaptively skipping the BIO and DMVR at the CU level or sub-block level according to a first exemplary embodiment of the present disclosure.
Fig. 10 is a flowchart illustrating adaptively skipping the BIO process and the DMVR process at a sub-block level according to a second exemplary embodiment of the present disclosure.
Fig. 11 is a flowchart illustrating a video encoding method according to an exemplary embodiment of the present disclosure.
Fig. 12 is a block diagram illustrating a video decoding apparatus according to an exemplary embodiment of the present disclosure.
Fig. 13 is a block diagram illustrating a video encoding apparatus according to an exemplary embodiment of the present disclosure.
Fig. 14 is a block diagram of an electronic device according to an example embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The embodiments described in the following examples do not represent all embodiments consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Herein, the expression "at least one of the items" in the present disclosure covers the following three parallel cases: "any one of the items", "a combination of any plural ones of the items", and "all of the items". For example, "including at least one of A and B" covers the following three parallel cases: (1) including A; (2) including B; (3) including A and B. For another example, "performing at least one of step one and step two" covers the following three parallel cases: (1) performing step one; (2) performing step two; (3) performing step one and step two.
The first generation of the AVS standard includes the Chinese national standard "Information Technology, Advanced Audio Video Coding, Part 2: Video" (referred to as AVS1) and "Information Technology, Advanced Audio Video Coding, Part 16: Broadcast Television Video" (referred to as AVS+). It can provide a bit rate saving of about 50% compared with the MPEG-2 standard at the same perceptual quality. The video part of the AVS1 standard was released as a Chinese national standard in February 2006. The second generation of the AVS standard includes the Chinese national standard "Information Technology, High Efficiency Multimedia Coding" (AVS2) series, which is primarily directed to the transmission of high-definition television programs. The coding efficiency of AVS2 is twice that of AVS+. In May 2016, AVS2 was released as a Chinese national standard. Meanwhile, the video part of the AVS2 standard was submitted to the Institute of Electrical and Electronics Engineers (IEEE) as an international standard for applications. The AVS3 standard is a new generation of video coding standard for UHD video applications, aimed at surpassing the coding efficiency of the latest international standard HEVC. In March 2019, at the 68th AVS meeting, the AVS3-P2 baseline was completed, which provides a bit rate saving of about 30% over the HEVC standard. Currently, a reference software called the High Performance Model (HPM) is maintained by the AVS group to demonstrate a reference implementation of the AVS3 standard.
As with HEVC, the AVS3 standard builds on the block-based hybrid video coding framework. Fig. 1 is a block diagram illustrating a general block-based video coding system. The input video signal is processed block by block (each block is referred to as a coding unit (CU)). Unlike HEVC, which partitions blocks based on quadtrees only, in AVS3 one coding tree unit (CTU) is partitioned into CUs based on a quadtree/binary tree/extended quadtree structure to adapt to varying local characteristics. In addition, the concept of multiple partition unit types in HEVC is removed, i.e., there is no separation of CU, prediction unit (PU), and transform unit (TU) in AVS3. Instead, each CU is always used as the basic unit for both prediction and transform without further partitioning. In the tree-partitioning structure of AVS3, one CTU is first partitioned based on the quadtree structure. Each quadtree leaf node may then be further partitioned based on the binary tree and the extended quadtree structure.
Fig. 2 is a diagram showing block partitioning in the AVS3 standard. As shown in fig. 2, there are five partition types: quadtree partitioning, horizontal binary partitioning, vertical binary partitioning, horizontal extended quadtree partitioning, and vertical extended quadtree partitioning.
Referring back to fig. 1, in performing video encoding, spatial prediction and/or temporal prediction may be performed. Spatial prediction (or "intra prediction") uses samples (called reference samples) from already-coded neighboring blocks in the same video picture/slice to predict the current video block. Spatial prediction reduces the spatial redundancy inherent in video signals. Temporal prediction (also referred to as "inter prediction" or "motion compensated prediction") uses reconstructed pixels from already-encoded video pictures to predict the current video block. Temporal prediction reduces the temporal redundancy inherent in video signals. The temporal prediction signal of a given CU is typically signaled by one or more motion vectors (MVs) indicating the amount and direction of motion between the current CU and its temporal reference. Furthermore, if multiple reference pictures are supported, one reference picture index is additionally transmitted to identify from which reference picture in the reference picture memory the temporal prediction signal comes.
After spatial and/or temporal prediction, an intra/inter mode decision block in the encoder selects the best prediction mode, e.g., based on a rate-distortion optimization method. The prediction block is then subtracted from the current video block to obtain a prediction residual, and the prediction residual is decorrelated using a transform and then quantized. The quantized residual coefficients are inverse quantized and inverse transformed to form a reconstructed residual, which is then added back to the prediction block to form the reconstructed signal for the CU. Further loop filtering, such as deblocking filters, Sample Adaptive Offset (SAO), and Adaptive Loop Filters (ALF), may be applied to the reconstructed CU before the reconstructed CU is placed in a reference picture memory and used as a reference for encoding future video blocks. To form the output video bitstream, the coding mode (inter or intra), prediction mode information, motion information and quantized residual coefficients are all sent to an entropy coding unit for further compression and packing.
As mentioned above, in the VVC standard and the AVS3 standard, when performing motion compensation for bi-directional prediction, a bi-directional optical flow (BIO) tool (also abbreviated as BDOF in VVC) and decoder-side motion vector refinement (DMVR) may be used to improve the efficiency of motion compensation. Specifically, in inter mode, motion estimation and motion compensation may be performed on a CU using a corresponding prediction block in a reference picture of the CU. When the CU is a bi-directionally predicted CU, motion estimation and motion compensation may be performed on the CU using two prediction blocks from the two reference picture lists L0 and L1. When bi-directional prediction is performed, BIO and DMVR may be performed on the CU to improve the efficiency of motion compensation.
Fig. 3 is a block diagram illustrating a general block-based video decoding system. The video bitstream is first entropy decoded in an entropy decoding unit. The coding mode and prediction information are sent to a spatial prediction unit (if intra-coded) or a temporal prediction unit (if inter-coded) to form a prediction block. The residual transform coefficients are sent to an inverse quantization unit and an inverse transform unit to reconstruct the residual block. The prediction block and the residual block are then added together. The reconstructed block may further undergo loop filtering before it is stored in the reference picture memory. The reconstructed video in the reference picture store is then sent out for display and used to predict future video blocks.
In inter mode, motion compensation may be performed on a CU using a corresponding prediction block in a reference picture of the CU. When the CU is a bi-directionally predicted CU, motion compensation may be performed on the CU using two prediction blocks from the two reference picture lists L0 and L1. In bi-directional prediction mode, BIO and DMVR may be performed on the CU to improve the efficiency of motion compensation.
In the following, the existing BIO and DMVR design in the AVS3 standard will be described with reference to fig. 4 to 6 as an example to explain the main design principles of the two coding tools BIO and DMVR.
Bi-directional optical flow (BIO)
Conventional bi-directional prediction in video coding is a simple combination of two temporal prediction blocks obtained from the reference pictures. However, due to the trade-off between the signaling cost and the accuracy of motion vectors, the motion vectors received at the decoder side may be less accurate. As a result, small residual motion may still be observed between the two prediction blocks, which may reduce the efficiency of motion compensated prediction. To address this problem, the BIO tool is used in both the VVC and AVS3 standards to compensate for this motion of each sample (a sample may also be referred to as a pixel) within a block.
Fig. 4 is a schematic diagram showing the BIO mode.
Referring to fig. 4, the BIO is a sample-wise motion refinement performed on top of block-based motion compensated prediction when bi-directional prediction is used. In existing BIO designs, a refined motion vector for each sample in a block is derived based on the classical optical flow model. Let I^(k)(x, y) be the sample value at coordinates (x, y) of the prediction block derived from reference picture list k (k = 0, 1; the lists may also be referred to as L0 and L1), and let ∂I^(k)(x, y)/∂x and ∂I^(k)(x, y)/∂y be the horizontal and vertical gradients of the sample. Assuming the optical flow model is valid, the motion refinement (v_x, v_y) at (x, y) can be derived by the following formula:

∂I^(k)(x, y)/∂t + v_x · ∂I^(k)(x, y)/∂x + v_y · ∂I^(k)(x, y)/∂y = 0    (1)
By combining the optical flow equation (1) with the interpolation of the prediction blocks along the motion trajectory (as shown in fig. 4), the BIO prediction of the current block (curblk) can be obtained as:

pred_BIO(x, y) = 1/2 · [I^(0)(x, y) + I^(1)(x, y) + v_x/2 · (∂I^(1)(x, y)/∂x − ∂I^(0)(x, y)/∂x) + v_y/2 · (∂I^(1)(x, y)/∂y − ∂I^(0)(x, y)/∂y)]    (2)
in FIG. 4, (MV)x0,MVy0) And (MV)x1,MVy1) Indicating for generating two prediction blocks I(0)And I(1)Block level motion vectors. Further, the motion correction (v) at the sampling position (x, y) is calculated by minimizing the difference Δ between the sampling values (i.e., a and B in fig. 4) after the motion correction compensationx,vy) The difference Δ is as follows:
Figure BDA0003085992190000071
In addition, to ensure the regularity of the derived motion refinement, it is assumed that the motion refinement is consistent within a local surrounding area centered at (x, y). Therefore, in the current BIO design in AVS3, the values of (v_x, v_y) are derived by minimizing Δ within a 4 × 4 window Ω around the current sample at (x, y):

(v_x, v_y) = argmin_{(v_x, v_y)} Σ_{(i, j) ∈ Ω} Δ²(i, j)    (4)

In formula (4), (i, j) are the coordinates of the samples around the current sample.
As shown in equations (2) and (4), in addition to the block-level MC, the horizontal and vertical gradients of each motion-compensated block (i.e., I^(0) and I^(1)) also need to be computed in the BIO in order to derive the local motion refinement and to generate the final prediction at each sample position. In AVS3, the gradients are computed by a 2D separable finite impulse response (FIR) filtering process, which defines a set of 8-tap filters and applies different filters according to the fractional positions of the block-level motion vectors (e.g., (MV_x0, MV_y0) and (MV_x1, MV_y1) in fig. 4) to derive the horizontal and vertical gradients. Table 1 below shows the gradient filter coefficients used by the BIO.
[ TABLE 1 ]
Fractional position | Gradient filter
0    | {-4, 11, -39, -1, 41, -14, 8, -2}
1/4  | {-2, 6, -19, -31, 53, -12, 7, -2}
1/2  | {0, -1, 0, -50, 50, 0, 1, 0}
3/4  | {2, -7, 12, -53, 31, 19, -6, 2}
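The sketch below shows one way the 8-tap gradient filters of Table 1 could be applied along a row of reference samples; the dictionary keys, the function name, and the omission of the normalization right-shift are illustrative assumptions rather than the exact AVS3 filtering pipeline.

```python
import numpy as np

# Gradient filter taps from Table 1, keyed by the fractional sample position.
GRAD_FILTERS = {
    "0":   [-4, 11, -39, -1, 41, -14, 8, -2],
    "1/4": [-2, 6, -19, -31, 53, -12, 7, -2],
    "1/2": [0, -1, 0, -50, 50, 0, 1, 0],
    "3/4": [2, -7, 12, -53, 31, 19, -6, 2],
}

def horizontal_gradient(row, frac):
    """Apply the 8-tap gradient filter of Table 1 along one row of samples.

    `row` is a 1-D array of reference samples that already contains the 7
    extra margin samples an 8-tap filter needs; `frac` selects the filter
    according to the fractional part of the horizontal motion vector.
    The rounding/normalization right-shift used by the codec is omitted.
    """
    taps = np.asarray(GRAD_FILTERS[frac], dtype=np.int64)
    return np.array([int(np.dot(taps, row[i:i + 8])) for i in range(len(row) - 7)])

# Example: a linear ramp yields a constant (scaled) horizontal gradient.
print(horizontal_gradient(np.arange(12, dtype=np.int64), "0"))
```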
Finally, BIO is only applied to bi-predicted blocks predicted by two reference blocks from temporally neighboring pictures. In addition, BIO is enabled without sending additional information from the encoder to the decoder. In particular, the BIO is applied to all bi-prediction blocks having both forward and backward prediction signals.
Decoder-side motion vector refinement (DMVR)
Similar to the VVC standard, to improve the accuracy of the MVs of the conventional merge mode, decoder-side motion vector refinement (DMVR) based on bilateral matching is applied in AVS3. In the bi-directional prediction operation, a refined MV is searched around the initial MVs in reference picture list L0 and reference picture list L1. The method calculates the distortion between two candidate blocks in reference picture list L0 and reference picture list L1.
Fig. 5 is a diagram illustrating a DMVR mode.
Referring to fig. 5, the sum of absolute differences (SAD) between the candidate blocks (shown as unfilled blocks in fig. 5) is calculated for each MV candidate around the initial MV. The MV candidate with the lowest SAD becomes the refined MV and is used to generate the bi-directional prediction signal.
In DMVR, the search points surround the integer sample pointed to by the initial MV, and the MV offset is assumed to follow the mirroring rule. In other words, any MV refinement checked by the DMVR should satisfy the following two equations:
MV0′=MV0+MV_offset (5)
MV1′=MV1-MV_offset (6)
where MV_offset represents the refinement offset between the initial MV and the refined MV in one of the reference pictures. The refinement search range is two integer luma samples from the initial MV. The search comprises an integer sample search stage and a fractional sample refinement stage.
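To illustrate the mirroring rule of equations (5) and (6) inside a bilateral-matching search, the following sketch evaluates a set of integer offset candidates and returns the refined MV pair with the lowest SAD. The function name and argument layout are hypothetical assumptions; the sketch also assumes that every candidate keeps both patches inside the reference pictures, whereas a real implementation operates on prediction samples with bit-depth handling and early-termination rules.

```python
import numpy as np

def dmvr_integer_search(ref0, ref1, mv0, mv1, block, offsets):
    """Bilateral-matching integer search around the initial MV pair (sketch).

    Each candidate offset is applied with the mirroring rule of equations (5)
    and (6): MV0' = MV0 + offset, MV1' = MV1 - offset. `ref0`/`ref1` are
    reference pictures (2-D arrays), `mv0`/`mv1` are integer MVs (dx, dy),
    `block` is (x, y, w, h) of the current sub-block, and `offsets` is the
    list of integer candidates (e.g. the 21 positions of fig. 6, which must
    include (0, 0) so the initial MV pair is also evaluated).
    """
    x, y, w, h = block

    def patch(ref, mv):
        return ref[y + mv[1]: y + mv[1] + h, x + mv[0]: x + mv[0] + w]

    def sad(off):
        p0 = patch(ref0, (mv0[0] + off[0], mv0[1] + off[1]))
        p1 = patch(ref1, (mv1[0] - off[0], mv1[1] - off[1]))
        return int(np.abs(p0.astype(np.int64) - p1.astype(np.int64)).sum())

    best = min(offsets, key=sad)
    return ((mv0[0] + best[0], mv0[1] + best[1]),
            (mv1[0] - best[0], mv1[1] - best[1]))
```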
Fig. 6 is a diagram illustrating integer search candidates for DMVR.
Referring to fig. 6, in the integer search stage, the SAD of the 21 integer sample positions shown (including the integer sample position corresponding to the initial MV) is checked. The SAD of the initial MV pair is calculated first. The integer offset that minimizes the SAD value is selected as the integer sample offset of the integer search stage. The integer sample search is followed by fractional sample refinement. To save computational complexity, a parametric error surface method is used to derive the fractional sample refinement, rather than an additional search using SAD comparison. In the parametric-error-surface-based sub-pixel offset estimation, the cost of the center position (shown in grey) and the costs of the four positions neighboring the center are used to fit the following two-dimensional parabolic error surface equation:
E(x, y) = A(x − x_min)² + B(y − y_min)² + C    (7)
where (x, y) corresponds to a fractional position, (x_min, y_min) corresponds to the fractional position with the smallest cost, A and B are model parameters, and C corresponds to the minimum cost value. By solving the above equation using the cost values of the five search points, (x_min, y_min) is calculated as follows:
x_min = (E(−1, 0) − E(1, 0)) / (2(E(−1, 0) + E(1, 0) − 2E(0, 0)))    (8)
y_min = (E(0, −1) − E(0, 1)) / (2(E(0, −1) + E(0, 1) − 2E(0, 0)))    (9)
where x_min and y_min are automatically constrained between −8 and 8, because all cost values are positive and the smallest value is E(0, 0). The calculated fractional offset (x_min, y_min) is added to the integer-distance refinement MV to obtain the sub-pixel precision refinement ΔMV.
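The fractional refinement of equations (7)-(9) reduces to two independent one-dimensional evaluations, as in the sketch below; the cost values in the example are made up purely for illustration, and the result is expressed in floating-point sample units rather than the fixed-point representation used by the codec.

```python
def dmvr_fractional_offset(e):
    """Sub-pel offset from the parametric error surface of equations (7)-(9).

    `e` maps an integer offset (dx, dy) to its matching cost and must contain
    the centre (0, 0) and its four neighbours (-1, 0), (1, 0), (0, -1), (0, 1).
    """
    def axis_min(neg, pos, centre):
        denom = 2 * (neg + pos - 2 * centre)
        return (neg - pos) / denom if denom != 0 else 0.0

    x_min = axis_min(e[(-1, 0)], e[(1, 0)], e[(0, 0)])   # equation (8)
    y_min = axis_min(e[(0, -1)], e[(0, 1)], e[(0, 0)])   # equation (9)
    return x_min, y_min

# Example: costs from a hypothetical search; the returned fractional offset is
# added to the integer-distance refinement MV.
costs = {(0, 0): 100, (-1, 0): 130, (1, 0): 150, (0, -1): 110, (0, 1): 140}
print(dmvr_fractional_offset(costs))   # (-0.125, -0.3)
```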
While BIO and DMVR can effectively improve the efficiency of motion compensation, they also introduce significant complexity increases in terms of hardware and software to encoder and decoder designs. Specifically, the following complexity issues exist in the existing BIO and DMVR designs:
(1) as described above, in the conventional motion compensation phase, DMVR and BIO are always enabled for bi-prediction blocks with both forward and backward prediction signals. Such a design may not be practical for certain video applications (e.g., video streaming on mobile devices) that cannot afford too heavy a calculation due to their limited power. For example, BIO requires derivation of gradient values at each sample location, which requires multiple multiply and add operations due to 2D FIR filtering, and DMVR requires computation of multiple SAD values during the bilateral matching process. All of these operations require intensive computations. This complexity increase may become even worse when applying the BIO and DMVR jointly to one bi-directionally predicted CU. Fig. 7 is a flow chart illustrating a motion compensation process with DMVR and BIO. Referring to fig. 7, first, in step 701, a CU needs to be further split into multiple sub-blocks for DMVR, and each sub-block may present one unique motion vector. For example, the size may be min (16, CUWidth) x min (16, CUHeight), where CUWidth and CUHeight may be the width and height of the CU, respectively. DMVR is performed on the ith sub-block in step 702, and motion compensation is performed on the ith sub-block using the modified motion vector in step 703. Subsequently, in step 704, the BIO is performed on the ith sub-block. Subsequently, in step 705, it is determined whether the ith sub-block is the last sub-block in the CU, and if not, steps 702 to 704 are continued for the (i + 1) th sub-block. That is, when applying the BIO and DMVR jointly to one CU, all BIO-related operations need to be performed separately for each sub-block. This may result in significant computational complexity and memory bandwidth costs, and may potentially complicate pipeline design and/or parallel processing in hardware and software.
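For illustration, the sketch below builds the sub-block grid described above (each sub-block is at most min(16, CUWidth) × min(16, CUHeight)); the function name is hypothetical and only the partitioning step of fig. 7 is modeled.

```python
def split_into_subblocks(cu_width, cu_height):
    """Sub-block grid used by the joint DMVR + BIO motion compensation of fig. 7.

    Each sub-block is at most min(16, CUWidth) x min(16, CUHeight), matching
    the example given in the text. Returns (x, y, w, h) tuples in raster order.
    """
    sw, sh = min(16, cu_width), min(16, cu_height)
    return [(x, y, min(sw, cu_width - x), min(sh, cu_height - y))
            for y in range(0, cu_height, sh)
            for x in range(0, cu_width, sw)]

# A 32x24 CU is processed as four sub-blocks (two 16x16 and two 16x8), each of
# which goes through DMVR, motion compensation with the refined MV, and BIO
# (steps 702-704 of fig. 7).
print(split_into_subblocks(32, 24))
```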
(2) As described above, the current BIO uses a 2D separable FIR filter to calculate the horizontal and vertical gradient values. Specifically, a low-pass 8-tap interpolation filter (which is used for interpolation at conventional motion compensation) and a high-pass 8-tap gradient filter (as shown in table 1) are applied, and the filters are selected based on the fractional positions of the corresponding motion vectors. Assuming that both L0 and L1 MV point to reference samples at fractional sample positions in the horizontal and vertical directions, the number of multiplications and additions used to calculate the horizontal and vertical gradients in L0 and L1 would be (W × (H +7) + W × H) × 2 × 2, where W and H are the width and height of the prediction block, respectively. However, due to the high-pass nature of gradient computation, using a gradient filter with more filter coefficients may not always be beneficial to accurately extract useful gradient information from neighboring samples. Therefore, it is highly desirable to further reduce the filter length of the gradient filter for BIO, which not only can potentially improve the accuracy of the gradient derivation, but can also reduce the computational complexity of BIO.
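As a quick sanity check of the operation count quoted above, the following snippet evaluates the expression (W × (H + 7) + W × H) × 2 × 2 for an assumed 16 × 16 prediction block.

```python
# Worked example of the gradient-computation cost estimate quoted above, for a
# hypothetical 16x16 prediction block whose L0 and L1 MVs both point to
# fractional sample positions in the horizontal and vertical directions.
W, H = 16, 16
mult_add = (W * (H + 7) + W * H) * 2 * 2   # (horizontal + vertical) x (L0 + L1)
print(mult_add)                            # 2496 multiply/add operations
```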
In order to solve the above-described problems, the present disclosure proposes a video encoding method and apparatus and a video decoding method and apparatus that reduce the complexity of the BIO tool and the DMVR tool used in both the VVC and AVS3 standards. Specifically, according to the present disclosure, the DMVR and/or BIO processes may be conditionally skipped when performing conventional motion compensation, thereby reducing the complexity of motion compensation and increasing coding efficiency. Furthermore, according to the present disclosure, when certain conditions are met, the BIO and/or DMVR may be skipped entirely at the CU level or sub-block level, thereby achieving a better tradeoff between improving codec performance and reducing codec complexity. In addition, according to the present disclosure, the existing 8-tap gradient filter in the existing BIO design can be replaced with a gradient filter with fewer coefficients, thereby reducing the computational complexity of the BIO and improving the coding and decoding efficiency. Although the existing BIO and DMVR designs in the AVS3 standard are used as the basis herein, it will be apparent to those skilled in the art of video coding that the video encoding methods and apparatus and video decoding methods and apparatus presented in this disclosure may also be applied to other BIO and DMVR designs or other coding tools having the same or similar design styles.
Hereinafter, a video decoding method and apparatus and a video encoding method and apparatus according to the present disclosure will be described in detail with reference to fig. 8 to 14. In addition, fig. 1 and 3 may be implemented as scene diagrams of video encoding methods and apparatuses and video decoding methods and apparatuses according to the present disclosure.
Fig. 8 is a flowchart illustrating a video decoding method according to an exemplary embodiment of the present disclosure.
Referring to fig. 8, in step 801, a plurality of coding units (CUs) into which a video image is divided may be obtained from a bitstream. Here, an image may also be referred to as a picture or a frame. As described above, an input video image needs to be encoded block by block; the encoded video image forms a bitstream that is transmitted to the decoder side, and the decoder side receives and parses the bitstream to perform block-by-block decoding. Here, a "block" that is a basic unit of coding may be referred to as a CU. Therefore, when encoding a video image, the video image needs to be divided into a plurality of CUs. The division is described above and thus will not be repeated here. In step 802, in a case where a current CU among the plurality of CUs is a bi-directionally predicted CU, information of bi-directional prediction samples of the current CU may be obtained. Also, in step 803, at least one of a BIO process and a DMVR process associated with the current CU may be selectively skipped based on the information of the bi-directional prediction samples of the current CU.
According to an exemplary embodiment of the present disclosure, at least one of a BIO process and a DMVR process associated with a current CU may be selectively skipped at least one of a CU level and a sub-block level, wherein the sub-blocks are obtained by dividing the CU. Selectively skipping at least one of the BIO process and the DMVR process at the CU level refers to selectively skipping at least one of the BIO process and the DMVR process for each CU individually, i.e., determining whether to skip the BIO process and/or the DMVR process is performed for each CU individually; selectively skipping at least one of the BIO process and the DMVR process at the sub-block level means selectively skipping at least one of the BIO process and the DMVR process individually for each sub-block, i.e., determining whether to skip the BIO process and/or the DMVR process is performed individually for each sub-block. Whether to skip the BIO process and/or the DMVR process may be determined only at the CU level, or only at the sub-block level, or adaptively at the CU level and the sub-block level, as needed.
According to an exemplary embodiment of the present disclosure, the bidirectional prediction samples of the current CU may be prediction samples in two prediction blocks (may also be referred to as a forward prediction block and a backward prediction block) from two reference pictures in the reference picture list L0 and the reference picture list L1, respectively, of the current CU, and may also be referred to as L0 prediction samples and L1 prediction samples for short. The information of the bidirectional predicted samples of the current CU may include at least one of similarity and gradient information of the bidirectional predicted samples of the current CU. The similarity of the bidirectional prediction samples of the current CU refers to the similarity between prediction samples in two prediction blocks of the current CU, and the gradient information of the bidirectional prediction samples of the current CU refers to the gradient information of the prediction samples in the two prediction blocks of the current CU.
According to an example embodiment of the present disclosure, a BIO process and/or a DMVR process may be selectively skipped based on similarity of bidirectional predicted samples of a current CU; or may selectively skip the BIO process based on gradient information of bi-directionally predicted samples of the current CU; or may selectively skip the BIO process based on both the similarity and gradient information of the bi-directionally predicted samples of the current CU, as will be described in detail below.
Skipping BIO procedures and/or DMVR procedures based on similarity
As described above, the BIO and DMVR use the L0 and L1 prediction samples to derive local motion refinements at different granularity levels, e.g., the BIO derives a motion refinement for each sample and the DMVR calculates a motion refinement for each sub-block. When the difference between the L0 and L1 prediction signals is small, it is reasonable to consider the two prediction blocks as highly correlated, so that the DMVR and BIO processes can be safely skipped without incurring a large coding loss.
Next, specific exemplary embodiments of the manner of calculating the similarity according to the present disclosure will be described.
According to an exemplary embodiment of the present disclosure, the similarity between prediction samples in two prediction blocks of the current CU may be found based on a difference between prediction samples in the two prediction blocks of the current CU (i.e., L0 and L1 prediction samples).
According to another exemplary embodiment of the present disclosure, since the initial motion vectors may point to reference samples at fractional sample positions, the generation of L0 and L1 predicted samples may invoke an interpolation process that requires non-negligible complexity and causes some delay to make the decision. Thus, instead of directly comparing prediction samples, another simple way of measuring the correlation between two prediction blocks may be employed, namely calculating the difference between their integer reference samples in the reference pictures from which the two prediction blocks were generated. That is, the similarity of bi-directional prediction samples of the current CU may be derived based on a difference between corresponding integer reference samples in reference pictures in which two prediction blocks of the current CU are located. For example, the difference between corresponding integer reference samples in the reference pictures where the two prediction blocks of the current CU are located may be calculated by the following formula (10).
Diff = (1/N) · Σ_{(x, y) ∈ B, (x′, y′) ∈ B′} D(L^(0)(x, y), L^(1)(x′, y′))    (10)

where L^(0)(x, y) and L^(1)(x′, y′) are the sample values at integer sample coordinates (x, y) and (x′, y′) in the forward and backward reference pictures, respectively; B and B′ are the sets of integer sample coordinates used to generate the L0 and L1 prediction samples of the current block; N is the number of samples in the current block; and D is a distortion measure, to which different metrics may be applied, such as the sum of squared error (SSE), the sum of absolute differences (SAD), the sum of absolute transformed differences (SATD), and so on.
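A minimal sketch of how the distortion measure of equation (10) could be evaluated with SAD or SSE is given below; the function name is hypothetical and the SATD variant is omitted for brevity.

```python
import numpy as np

def prediction_similarity(ref_l0, ref_l1, metric="sad"):
    """Per-sample distortion Diff in the spirit of equation (10).

    `ref_l0` and `ref_l1` are co-located 2-D arrays of the integer reference
    samples used to generate the L0 and L1 predictions (the sets B and B');
    the result is normalized by the number of samples N.
    """
    d = np.asarray(ref_l0, dtype=np.int64) - np.asarray(ref_l1, dtype=np.int64)
    n = d.size
    if metric == "sad":
        return float(np.abs(d).sum()) / n
    if metric == "sse":
        return float((d * d).sum()) / n
    raise ValueError("unsupported metric")
```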
Of course, the similarity between prediction samples in two prediction blocks of the current CU according to the present disclosure is not limited to the above-described method, and the similarity between prediction samples in two prediction blocks of the current CU may also be obtained according to any other reasonable method.
In the following, a specific exemplary embodiment of selectively skipping at least one of the BIO process and the DMVR process based on the similarity according to the present disclosure will be described.
According to an exemplary embodiment of the present disclosure, after the similarity between the prediction samples in the two prediction blocks of the current CU is obtained, at least one of the BIO process and the DMVR process may be selectively skipped based on a result of comparing a value representing the similarity with a predetermined threshold (which may be referred to as, e.g., a first predetermined threshold). For example, in a case where the value representing the similarity is not greater than the first predetermined threshold (e.g., Diff ≤ D_thres), at least one of the BIO process and the DMVR process is skipped, i.e., at least one of the BIO process and the DMVR process does not need to be performed during the motion compensation phase. In a case where the value representing the similarity is greater than the first predetermined threshold (e.g., Diff > D_thres), at least one of the BIO process and the DMVR process is performed, i.e., at least one of the BIO process and the DMVR process still needs to be performed during the motion compensation phase.
Further, in the above-described exemplary embodiment, at least one of the BIO process and the DMVR process may be selectively skipped at the CU level. In this case, the similarity may be the similarity between all prediction samples in the two prediction blocks of the current CU. Alternatively, at least one of the BIO process and the DMVR process may be selectively skipped at a sub-block level. In this case, the similarity may be a similarity between prediction samples in corresponding sub-blocks in two prediction blocks of the current CU.
According to another exemplary embodiment of the present disclosure, at least one of the BIO process and the DMVR process may be selectively skipped based on whether the DMVR process is allowed for the current CU and a result of comparing the value representing the similarity with a predetermined threshold. For example, in a case where the DMVR process is disabled for the current CU, the BIO process is selectively skipped at the CU level or the sub-block level; in a case where the DMVR process is enabled for the current CU, the BIO process and the DMVR process are selectively skipped at the sub-block level; wherein the BIO process is performed in a case where the value representing the similarity is greater than a second predetermined threshold (e.g., Diff > thresBIO), and the BIO process is skipped in a case where the value representing the similarity is not greater than the second predetermined threshold (e.g., Diff ≤ thresBIO); and wherein the DMVR process is performed in a case where the value representing the similarity is greater than a third predetermined threshold (e.g., Diff > thresDMVR), and the DMVR process is skipped in a case where the value representing the similarity is not greater than the third predetermined threshold (e.g., Diff ≤ thresDMVR).
According to a first exemplary embodiment, a method of selectively skipping at least one of the BIO process and the DMVR process at the CU level or the sub-block level (which may also be referred to as an early termination method for the BIO and/or the DMVR) may potentially provide various tradeoffs between coding performance and complexity reduction. On the one hand, sub-block-level early termination can better maintain the coding gain of the BIO and the DMVR due to the finer granularity of control over the BIO and the DMVR. On the other hand, when the DMVR is applied, the motion field of the current CU is derived at the sub-block level. Thus, it is highly likely that the prediction samples of the various sub-blocks inside the current CU become more diverse after applying the DMVR. In this case, it is more reasonable to determine whether to skip (or bypass) at least one of the BIO process and the DMVR process on a sub-block basis. Otherwise (i.e., when the DMVR is not enabled (or is disabled)), the CU-level distortion measure may be relied upon to determine whether the BIO process is to be bypassed for the entire CU. In view of this, the first exemplary embodiment proposes a multi-stage early termination method: the BIO and DMVR processes are adaptively skipped at the CU level or the sub-block level depending on whether the DMVR is allowed for the current CU. That is, in a case where the DMVR process is disabled for the current CU, the BIO process is selectively skipped at the CU level; in a case where the DMVR process is enabled for the current CU, the BIO process and the DMVR process are selectively skipped at the sub-block level. The method can be summarized as follows: (1) When the DMVR process is disabled for the current CU, a decision is made at the CU level as to whether to bypass the BIO process. Specifically, if the distortion measure of the reference samples of the CU is not greater than a predefined threshold (i.e., the second predetermined threshold thresBIO), the BIO process is disabled entirely for the whole CU; otherwise, the BIO is still applied to the CU. (2) When the DMVR process is enabled (i.e., allowed) for the current CU, a decision is made at the sub-block level as to whether to bypass the BIO process and the DMVR process.
Next, the procedure of the above-described multi-stage early termination method will be described in detail with reference to fig. 9. Fig. 9 is a flowchart illustrating adaptively skipping the BIO and DMVR at the CU level or sub-block level according to a first exemplary embodiment of the present disclosure.
Referring to FIG. 9, in step 901, it may be determined whether the DMVR process is enabled for the current CU. When it is determined that the DMVR process is disabled for the current CU, which means that the DMVR process will not be performed on the current CU, it is only necessary to determine whether to skip the BIO process at the CU level, that is, the BIO process is selectively skipped at the CU level. When it is determined that the DMVR process is enabled for the current CU, it may be determined at the sub-block level whether to skip the DMVR process and the BIO process, i.e., the DMVR process and the BIO process are selectively skipped at the sub-block level.
In step 902, in case the DMVR procedure is disabled by the current CU, a comparison result of a value Diff representing the similarity with a second predetermined threshold thresBIO may be obtained, i.e. it may be determined whether the value representing the similarity is not greater than the second predetermined threshold thresBIO, wherein the similarity may be the similarity between all predicted samples in the two predicted blocks of the current CU.
In step 903, in case the value representing the similarity is larger than a second predetermined threshold thresBIO, a BIO process may be performed on the current CU. Otherwise, the execution of the BIO process on the current CU may be skipped (i.e., not executed).
In step 904, where the current CU enables DMVR procedures, the current CU may be divided into a plurality of sub-blocks and the determination of whether to skip DMVR and BIO procedures is performed for each sub-block. For example, but not limiting of, the size of a subblock may be min (16, CUWidth) x min (16, CUHeight), where CUWidth and CUHeight may be the width and height of a CU, respectively.
In step 905, a comparison result between the value representing the similarity of the ith sub-block and the third predetermined threshold thresDMVR may be obtained, that is, it may be determined whether the value representing the similarity is not greater than the third predetermined threshold thresDMVR, where the similarity may be the similarity between the prediction samples in the ith sub-blocks of the two prediction blocks of the current CU.

In step 906, in a case where the value representing the similarity of the ith sub-block is greater than the third predetermined threshold thresDMVR, the DMVR process may be performed on the ith sub-block. Otherwise, the DMVR process performed on the ith sub-block may be skipped (i.e., not performed).

In step 907, a comparison result between the value representing the similarity of the ith sub-block and the second predetermined threshold thresBIO may be obtained, i.e., it may be determined whether the value representing the similarity is not greater than the second predetermined threshold thresBIO.

In step 908, in a case where the value representing the similarity of the ith sub-block is greater than the second predetermined threshold thresBIO, the BIO process may be performed on the ith sub-block. Otherwise, the BIO process performed on the ith sub-block may be skipped (i.e., not performed).
At step 909, it is determined whether the ith sub-block is the last sub-block of the current CU. If it is determined that the ith sub-block is not the last sub-block of the current CU, steps 905 through 908 are performed on the next sub-block (i.e., the (i + 1) th sub-block). Of course, steps 905 and 906 described above may be performed sequentially, in reverse order, or synchronously with steps 907 and 908.
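The decision logic of FIG. 9 can be summarized by the following sketch, which assumes the CU-level and sub-block-level similarity values (Diff) have already been computed as in equation (10); the function name and dictionary keys are hypothetical placeholders, and the actual DMVR/BIO execution and motion compensation are not modeled.

```python
def early_termination_decisions(dmvr_enabled, cu_similarity, subblock_similarities,
                                thres_bio, thres_dmvr):
    """Multi-stage early termination of FIG. 9 (decision logic only).

    `cu_similarity` is the CU-level Diff value and `subblock_similarities`
    are the per-sub-block Diff values; the thresholds correspond to
    thresBIO and thresDMVR in the text.
    """
    if not dmvr_enabled:
        # CU-level decision (steps 902-903): bypass BIO for the whole CU
        # when the two prediction signals are already similar enough.
        return {"cu_bio": cu_similarity > thres_bio, "sub_blocks": None}

    # Sub-block-level decisions (steps 904-909).
    decisions = []
    for diff in subblock_similarities:
        decisions.append({"dmvr": diff > thres_dmvr,   # steps 905-906
                          "bio":  diff > thres_bio})   # steps 907-908
    return {"cu_bio": None, "sub_blocks": decisions}

# Example: DMVR enabled, three sub-blocks with different Diff values.
print(early_termination_decisions(True, None, [10, 40, 90],
                                  thres_bio=64, thres_dmvr=32))
```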
Furthermore, in the first exemplary embodiment, in the case where the DMVR process is disabled for a CU, it is determined at the CU level whether to skip the BIO, i.e., the distortion measure needs to be calculated by considering all L0 and L1 reference samples of the CU. This may increase the on-chip memory size to store those reference samples. For example, assuming a maximum CU size of 128 × 128, the memory size required for SAD calculation is equal to 2 × 128 × 128, which is very expensive for practical hardware codec implementations.
Therefore, according to the second exemplary embodiment, in order to reduce the memory size for the distortion calculation, another method is proposed in which the BIO process and the DMVR process are always skipped at the sub-block level. That is, in a case where the DMVR process is disabled for the current CU, the BIO process is selectively skipped at the sub-block level; in a case where the DMVR process is enabled for the current CU, the BIO process and the DMVR process are selectively skipped at the sub-block level. The method can be summarized as follows: (1) When the DMVR process is disabled and BDOF is applied to the current CU, the current CU is first partitioned into sub-blocks of size sub_BIO × sub_BIO, and each sub-block is treated as one independent region for enabling/disabling the BIO process. Specifically, if the distortion measure of the reference samples of a sub-block is not greater than a predefined threshold thresBIO_0^sub, the BIO process is disabled entirely for that sub-block; otherwise, the BIO is still applied. (2) When the DMVR is enabled for the current CU, a decision as to whether to bypass the BIO and the DMVR is made at the sub-block size of the DMVR mode (i.e., sub_DMVR × sub_DMVR). In addition, two thresholds, thresBIO_1^sub and thresDMVR, are used to bypass the BIO and the DMVR of each sub-block, respectively.
Next, the above-described process of always skipping the BIO process and the DMVR process at the sub-block level will be described in detail with reference to fig. 10. Fig. 10 is a flowchart illustrating adaptively skipping the BIO process and the DMVR process at a sub-block level according to a second exemplary embodiment of the present disclosure.
Referring to FIG. 10, in step 1001, it may be determined whether the DMVR process is enabled for the current CU. When it is determined that the DMVR process is disabled for the current CU, which means that the DMVR process will not be performed on the current CU, it is only necessary to determine whether to skip the BIO process at the sub-block level, that is, the BIO process is selectively skipped at the sub-block level. When it is determined that the DMVR process is enabled for the current CU, it may be determined at the sub-block level whether to skip the DMVR process and the BIO process, i.e., the DMVR process and the BIO process are selectively skipped at the sub-block level.
In step 1002, in a case where the DMVR process is disabled for the current CU, the current CU may be divided into a plurality of sub-blocks, and the determination of whether to skip the BIO process is performed for each sub-block. For example, a sub-block may have a preset size sub_BIO × sub_BIO, also referred to as the first size.

In step 1003, a comparison result between a value Diff representing the similarity of the ith first-size sub-block and the second predetermined threshold thresBIO_0^sub may be obtained, that is, it may be determined whether the value representing the similarity is not greater than the second predetermined threshold thresBIO_0^sub, where the similarity may be the similarity between the prediction samples of the ith first-size sub-blocks in the two prediction blocks of the current CU.

In step 1004, in a case where the value representing the similarity is greater than the second predetermined threshold thresBIO_0^sub, the BIO process may be performed on the ith first-size sub-block. Otherwise, the BIO process performed on the ith first-size sub-block may be skipped (i.e., not performed).
At step 1005, it may be determined whether the ith first size sub-block is the last sub-block of the current CU. If it is determined that the ith first-size sub-block is not the last sub-block of the current CU, steps 1003 to 1004 are performed on the next sub-block (i.e., the (i + 1) th first-size sub-block).
In step 1006, in a case where the DMVR process is enabled for the current CU, the current CU may be divided into a plurality of sub-blocks, and the determination of whether to skip the DMVR process and the BIO process is performed for each sub-block. For example, a sub-block may have a preset size sub_DMVR × sub_DMVR, also referred to as the second size.
In step 1007, a comparison result between the value representing the similarity of the ith second-size sub-block and the third predetermined threshold thresDMVR may be obtained, that is, it may be determined whether the value representing the similarity is not greater than the third predetermined threshold thresDMVR, where the similarity may be the similarity between the prediction samples in the ith second-size sub-blocks of the two prediction blocks of the current CU.

In step 1008, in a case where the value representing the similarity of the ith second-size sub-block is greater than the third predetermined threshold thresDMVR, the DMVR process may be performed on the ith second-size sub-block. Otherwise, the DMVR process performed on the ith second-size sub-block may be skipped (i.e., not performed).
In step 1009, a comparison result between the value representing the similarity of the ith second-size sub-block and the second predetermined threshold thresBIO_1^sub may be obtained, i.e., it may be determined whether the value representing the similarity is not greater than the second predetermined threshold thresBIO_1^sub.

In step 1010, in a case where the value representing the similarity of the ith second-size sub-block is greater than the second predetermined threshold thresBIO_1^sub, the BIO process may be performed on the ith second-size sub-block. Otherwise, the BIO process performed on the ith second-size sub-block may be skipped (i.e., not performed).
At step 1011, it is determined whether the ith second size sub-block is the last sub-block of the current CU. If it is determined that the ith second size sub-block is not the last sub-block of the current CU, steps 1007 to 1010 are performed on the next sub-block (i.e., the (i + 1) th second size sub-block). Of course, steps 1007 and 1008 described above may be performed sequentially, in reverse order, or synchronously with steps 1009 and 1010.
Further, in the second exemplary embodiment, two different sub-block sizes may be used to bypass the BIO/DMVR processes depending on whether the DMVR process is enabled, i.e., sub_BIO × sub_BIO may be preset to be unequal to sub_DMVR × sub_DMVR; that is, the first size and the second size may be set to be different. Alternatively, the same sub-block size may be used to bypass the BIO/DMVR processes regardless of whether the DMVR process is applied to the current CU, i.e., sub_BIO × sub_BIO may be preset to be equal to sub_DMVR × sub_DMVR; that is, the first size and the second size may be set to be the same.
Further, in the second exemplary embodiment, two different thresholds may be used to bypass the BIO process depending on whether the DMVR process is enabled, i.e., thresBIO_0^sub may be preset to be unequal to thresBIO_1^sub; that is, the second predetermined threshold thresBIO_0^sub used in a case where the DMVR process is disabled for the current CU and the second predetermined threshold thresBIO_1^sub used in a case where the DMVR process is enabled for the current CU are set differently. For example, thresBIO_0^sub may be preset to be equal to half of thresBIO_1^sub, i.e., the second predetermined threshold thresBIO_0^sub used in a case where the DMVR process is disabled for the current CU is set equal to half of the second predetermined threshold thresBIO_1^sub used in a case where the DMVR process is enabled for the current CU. Alternatively, the same threshold may be used to bypass the BIO process regardless of whether the DMVR process is applied to the current CU, i.e., thresBIO_0^sub may be preset to be equal to thresBIO_1^sub; that is, the second predetermined threshold thresBIO_0^sub used in a case where the DMVR process is disabled for the current CU and the second predetermined threshold thresBIO_1^sub used in a case where the DMVR process is enabled for the current CU are set to be the same.
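For comparison with the first embodiment, the sketch below captures only the decision logic of FIG. 10, in which the bypass decisions are always made per sub-block; the function name and the example threshold values are hypothetical.

```python
def sub_block_only_early_termination(dmvr_enabled, similarities_first_size,
                                     similarities_second_size,
                                     thres_bio0_sub, thres_bio1_sub, thres_dmvr):
    """Second embodiment (FIG. 10): decisions are always made per sub-block.

    `similarities_first_size` are the Diff values of the sub_BIO x sub_BIO
    sub-blocks (used when DMVR is disabled); `similarities_second_size` are
    the Diff values of the sub_DMVR x sub_DMVR sub-blocks (used when DMVR is
    enabled). Threshold names mirror thresBIO_0^sub, thresBIO_1^sub and
    thresDMVR in the text.
    """
    if not dmvr_enabled:
        # Steps 1002-1005: BIO on/off per first-size sub-block.
        return [{"bio": d > thres_bio0_sub} for d in similarities_first_size]
    # Steps 1006-1011: DMVR and BIO on/off per second-size sub-block.
    return [{"dmvr": d > thres_dmvr, "bio": d > thres_bio1_sub}
            for d in similarities_second_size]

# Example where thresBIO_0^sub is half of thresBIO_1^sub (one option mentioned
# in the text); all numbers are illustrative only.
print(sub_block_only_early_termination(False, [20, 50], None,
                                       thres_bio0_sub=32, thres_bio1_sub=64,
                                       thres_dmvr=16))
```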
Skipping BIO procedures based on gradient information
The BIO is designed to improve the accuracy of motion-compensated prediction by providing a sample-wise motion correction that is computed based on the local gradients at each sample position in a motion-compensated block. The gradient values derived using the gradient filter in Table 1 tend to be small for blocks within a region that contains little high-frequency detail (e.g., a flat region); for such blocks, the BIO cannot provide an effective correction of the predicted samples. This can be seen from equation (2): when the local gradients ∂I^(k)/∂x and ∂I^(k)/∂y are close to zero, the final prediction signal obtained from the BIO is approximately equal to the prediction signal generated by conventional bi-prediction, i.e., pred_BIO(x, y) ≈ (I^(0)(x, y) + I^(1)(x, y)) / 2.
In view of this, the present disclosure proposes to apply the BIO only to the predicted samples of a coded block that contains sufficient high-frequency information. Whether the prediction signal of a video block contains sufficient high-frequency information may be determined based on various criteria. In one example, the average magnitude of the gradients of the samples within the block may be used. If the average gradient magnitude is less than a threshold, the block is classified as a flat region and the BIO should not be applied; otherwise, the block is considered to contain sufficient high-frequency detail and the BIO is still applied.
According to an exemplary embodiment of the present disclosure, the BIO process may be selectively skipped based on a result of comparing the average gradient value of the prediction samples in the two prediction blocks of the current CU with a predetermined gradient threshold (e.g., a fourth predetermined threshold). For example, in the case where the average gradient value of the prediction samples in the two prediction blocks of the current CU is greater than the fourth predetermined threshold, the BIO process associated with the current CU is performed; in the case where the average gradient value of the prediction samples in the two prediction blocks of the current CU is not greater than the fourth predetermined threshold, the BIO process associated with the current CU is skipped.
Furthermore, a method of skipping the BIO process based on gradient information may also be applied at the CU level or the sub-block level. When the method is applied at the CU level, the gradient values of all predicted samples inside the CU are used to determine whether to bypass the BIO process. Otherwise, when the method is applied at the sub-block level, a decision is made independently for each sub-block as to whether to skip the BIO process by comparing the average gradient values of the predicted samples within the corresponding sub-block.
According to an example embodiment of the present disclosure, a BIO process associated with a current CU may be selectively skipped at the CU level. In this case, the average gradient value may be an average gradient value of all prediction samples in the two prediction blocks of the current CU. According to another exemplary embodiment of the present disclosure, a BIO process associated with the current CU may be selectively skipped at a sub-block level, in which case the average gradient value may be an average gradient value of prediction samples in corresponding sub-blocks of two prediction blocks of the current CU.
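The CU-level variant of this gradient-based check can be sketched as follows. The per-sample gradient arrays, the use of |horizontal gradient| + |vertical gradient| as the magnitude measure, and the variable thresh_grad (standing in for the fourth predetermined threshold) are assumptions of the sketch, not a normative definition.

```python
def should_apply_bio(grad_x_l0, grad_y_l0, grad_x_l1, grad_y_l1, thresh_grad):
    """Each argument except thresh_grad is a 2-D list of per-sample gradient values."""
    magnitudes = []
    for grad_x, grad_y in ((grad_x_l0, grad_y_l0), (grad_x_l1, grad_y_l1)):
        for row_x, row_y in zip(grad_x, grad_y):
            # One possible magnitude measure: |horizontal gradient| + |vertical gradient|.
            magnitudes.extend(abs(gx) + abs(gy) for gx, gy in zip(row_x, row_y))
    average_gradient = sum(magnitudes) / len(magnitudes)
    # Flat region (small average gradient): skip the BIO; otherwise apply it.
    return average_gradient > thresh_grad
```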
Skipping BIO based on similarity and gradient information
The present disclosure also proposes to check a combined condition on both the similarity and the gradient information to determine whether to skip the BIO. In this case, the similarity and the gradient information may be checked jointly or separately. In the joint case, the BIO process is applied when both the similarity value and the gradient value are determined to be large (e.g., by threshold comparison); otherwise, the BIO process is skipped. In the separate case, the BIO process is skipped when either the similarity value or the gradient value is small (e.g., by threshold comparison).
According to an exemplary embodiment of the present disclosure, in the case where the BIO process is determined to be performed based on the similarity (by the above-described method of skipping the BIO process based on the similarity) and the BIO process is determined to be performed based on the gradient information (by the above-described method of skipping the BIO process based on the gradient information), the BIO process is determined to be performed, and otherwise the BIO process is determined to be skipped. According to another exemplary embodiment of the present disclosure, in case that it is determined to perform the BIO process based on the similarity information (by the above-described method of skipping the BIO process based on the similarity information) or to perform the BIO process based on the gradient information (by the above-described method of skipping the BIO process based on the gradient information), it is determined to perform the BIO process, and otherwise, it is determined to skip the BIO process.
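The two combination rules just described reduce to a simple boolean check. The sketch below is illustrative only; require_both selects between the joint (AND) embodiment and the alternative (OR) embodiment.

```python
def decide_bio(perform_by_similarity: bool, perform_by_gradient: bool,
               require_both: bool) -> bool:
    """Return True if the BIO process should be performed, False if it should be skipped."""
    if require_both:
        # First embodiment: perform the BIO only when both the similarity check
        # and the gradient check indicate that the BIO should be performed.
        return perform_by_similarity and perform_by_gradient
    # Second embodiment: perform the BIO when either check indicates it should be performed.
    return perform_by_similarity or perform_by_gradient
```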
Furthermore, in the current BIO design, 2-D separable 8-tap FIR filters are used to compute the horizontal and vertical gradient values, i.e., an 8-tap interpolation filter and an 8-tap gradient filter. As previously mentioned, using an 8-tap gradient filter may not always be effective at accurately extracting gradient information from the reference samples, while it leads to a non-negligible increase in computational complexity. To solve the above problem, the present disclosure proposes a gradient filter with fewer coefficients to calculate the gradient information used by the BIO.
Specifically, the inputs to the gradient derivation process are the same reference samples as used for motion compensation and the fractional components (fracX, fracY) of the input motion vector (MVx, MVy) of the current block. In addition, the order in which the gradient filter h_G and the interpolation filter h_L are applied differs depending on the direction of the derived gradient. Specifically, in the case of deriving a horizontal gradient, the gradient filter h_G is first applied in the horizontal direction to derive horizontal gradient values at the horizontal fractional sample position fracX; then, the interpolation filter h_L is applied vertically to interpolate the gradient values at the vertical fractional sample position fracY. Conversely, in the case of deriving a vertical gradient, the interpolation filter h_L is first applied horizontally to interpolate intermediate samples at the horizontal fractional sample position fracX, and then the gradient filter h_G is applied in the vertical direction to derive the vertical gradient at the vertical fractional sample position fracY from the intermediate interpolated samples.
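The 2-D separable ordering described in the preceding paragraph can be sketched as follows. The 1-D filtering helpers, the representation of h_G and h_L as mappings from fractional position to a list of taps, and the omission of boundary padding, rounding, and bit-depth handling are simplifications of this sketch, not part of the BIO specification.

```python
def filter_rows(samples, taps):
    """Apply a 1-D filter along each row (horizontal direction); no boundary padding."""
    n = len(taps)
    return [[sum(t * row[x + k] for k, t in enumerate(taps))
             for x in range(len(row) - n + 1)]
            for row in samples]

def filter_cols(samples, taps):
    """Apply a 1-D filter along each column (vertical direction); no boundary padding."""
    n = len(taps)
    return [[sum(t * samples[y + k][x] for k, t in enumerate(taps))
             for x in range(len(samples[0]))]
            for y in range(len(samples) - n + 1)]

def derive_gradients(ref, h_g, h_l, frac_x, frac_y):
    """h_g / h_l map a fractional sample position to a list of filter taps."""
    # Horizontal gradient: gradient filter h_G horizontally at fracX,
    # then interpolation filter h_L vertically at fracY.
    grad_x = filter_cols(filter_rows(ref, h_g[frac_x]), h_l[frac_y])
    # Vertical gradient: interpolation filter h_L horizontally at fracX,
    # then gradient filter h_G vertically at fracY.
    grad_y = filter_cols(filter_rows(ref, h_l[frac_x]), h_g[frac_y])
    return grad_x, grad_y
```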
According to an exemplary embodiment of the present disclosure, when performing the BIO, the gradient information for the BIO process may be obtained using a gradient filter with fewer taps than the 8-tap gradient filter. For example, a 4-tap gradient filter or a 6-tap gradient filter may be used.
In accordance with one embodiment of the present disclosure, in the case of using a 4-tap gradient filter, the gradient filter coefficients are as shown in Table 2 below:
[Table 2: 4-tap gradient filter coefficients f_grad[p][index] for each fractional sample position p]
where the index in f_grad[p][index] represents the coefficient index of the gradient filter at the fractional sample position p, and in the case of a 4-tap gradient filter, the index takes values in the range [0, 1, 2, 3].
According to another embodiment of the present disclosure, in the case of using a 6-tap gradient filter, the gradient filter coefficients are as shown in Table 3 below:
[Table 3: 6-tap gradient filter coefficients f_grad[p][index] for each fractional sample position p]
where the index in f_grad[p][index] represents the coefficient index of the gradient filter at the fractional sample position p, and in the case of a 6-tap gradient filter, the index takes values in the range [0, 1, 2, 3, 4, 5].
Fig. 11 is a flowchart illustrating a video encoding method according to an exemplary embodiment of the present disclosure.
Referring to fig. 11, in step 1101, a video image may be divided into a plurality of coding units CU. Here, an image may also be referred to as a picture or a frame. As described above, an input video image needs to be encoded block by block, and here, a "block" as an encoded basic unit may be referred to as a CU. Therefore, when encoding a video image, the video image needs to be divided into a plurality of CUs. The division is described above and thus will not be described herein.
In step 1102, in the case that a current CU among the plurality of CUs is a bidirectional predicted CU, information of a bidirectional predicted sample of the current CU may be acquired. Also, at step 1103, at least one of a BIO process and a DMVR process associated with the current CU may be selectively skipped based on the information of the bidirectional predicted samples of the current CU.
According to an exemplary embodiment of the present disclosure, at least one of a BIO process and a DMVR process associated with the current CU may be selectively skipped at at least one of a CU level and a sub-block level, wherein the sub-blocks are obtained by dividing the CU. Selectively skipping at least one of the BIO process and the DMVR process at the CU level means selectively skipping at least one of the BIO process and the DMVR process for each CU individually, i.e., the determination of whether to skip the BIO process and/or the DMVR process is performed for each CU individually; selectively skipping at least one of the BIO process and the DMVR process at the sub-block level means selectively skipping at least one of the BIO process and the DMVR process for each sub-block individually, i.e., the determination of whether to skip the BIO process and/or the DMVR process is performed for each sub-block individually. Whether to skip the BIO process and/or the DMVR process may be determined only at the CU level, only at the sub-block level, or adaptively at both the CU level and the sub-block level, as needed.
According to an exemplary embodiment of the present disclosure, the bidirectional prediction samples of the current CU may be prediction samples in two prediction blocks (may also be referred to as a forward prediction block and a backward prediction block) from two reference pictures in the reference picture list L0 and the reference picture list L1, respectively, of the current CU, and may also be referred to as L0 prediction samples and L1 prediction samples for short. The information of the bidirectional predicted samples of the current CU may include at least one of similarity and gradient information of the bidirectional predicted samples of the current CU. The similarity of the bidirectional prediction samples of the current CU refers to the similarity between prediction samples in two prediction blocks of the current CU, and the gradient information of the bidirectional prediction samples of the current CU refers to the gradient information of the prediction samples in the two prediction blocks of the current CU.
According to an example embodiment of the present disclosure, a BIO process and/or a DMVR process may be selectively skipped based on similarity of bidirectional predicted samples of a current CU; or may selectively skip the BIO process based on gradient information of bi-directionally predicted samples of the current CU; or may selectively skip the BIO process based on both the similarity information and the gradient information of the bi-directionally predicted samples of the current CU, as will be described in detail below.
According to an exemplary embodiment of the present disclosure, the similarity between prediction samples in two prediction blocks of the current CU may be found based on a difference between prediction samples in the two prediction blocks of the current CU (i.e., L0 and L1 prediction samples).
According to another exemplary embodiment of the present disclosure, the similarity of the bidirectional prediction samples of the current CU may be obtained based on a difference between corresponding integer reference samples in reference pictures in which two prediction blocks of the current CU are located. Of course, the similarity information between prediction samples in two prediction blocks of the current CU according to the present disclosure is not limited to the above-described method, and the similarity between prediction samples in two prediction blocks of the current CU may also be obtained according to any other reasonable method.
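As a non-normative illustration, one simple way to obtain the value representing the similarity (the Diff used in the comparisons below) from the difference between the L0 and L1 prediction samples is the sum of absolute differences (SAD); as stated above, the disclosure does not restrict the measure to this choice.

```python
def prediction_similarity(pred_l0, pred_l1):
    """pred_l0 / pred_l1: equally sized 2-D lists of L0 / L1 prediction samples."""
    diff = sum(abs(s0 - s1)
               for row0, row1 in zip(pred_l0, pred_l1)
               for s0, s1 in zip(row0, row1))
    # A small Diff means the two prediction blocks are already very similar,
    # in which case the BIO and/or DMVR bring little benefit and may be skipped.
    return diff
```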
According to an exemplary embodiment of the present disclosure, after obtaining the similarity between prediction samples in the two prediction blocks of the current CU, at least one of the BIO process and the DMVR process may be selectively skipped based on a comparison result of the value representing the similarity with a predetermined threshold (which may be referred to as a first predetermined threshold). For example, in the case where the value representing the similarity is not greater than the first predetermined threshold (e.g., Diff ≦ D_thres), at least one of the BIO process and the DMVR process is skipped, i.e., at least one of the BIO process and the DMVR process does not need to be performed during the motion compensation stage. In the case where the value representing the similarity is greater than the first predetermined threshold (e.g., Diff > D_thres), at least one of the BIO process and the DMVR process is performed, i.e., at least one of the BIO process and the DMVR process still needs to be performed during the motion compensation stage.
Further, in the above-described exemplary embodiment, at least one of the BIO process and the DMVR process may be selectively skipped at the CU level. In this case, the similarity may be the similarity between all prediction samples in the two prediction blocks of the current CU. Alternatively, at least one of the BIO process and the DMVR process may be selectively skipped at a sub-block level. In this case, the similarity may be a similarity between prediction samples in corresponding sub-blocks in two prediction blocks of the current CU.
According to another exemplary embodiment of the present disclosure, at least one of the BIO process and the DMVR process may be selectively skipped based on whether the current CU is allowed to perform the DMVR process and a result of comparing the value representing the similarity with a predetermined threshold. For example, in case the current CU is disabled for DMVR procedures, the BIO procedure is selectively skipped at CU level or sub-block level; selectively skipping BIO and DMVR processes at a sub-block level if the current CU is enabled with DMVR processes; wherein the BIO process is performed in a case where the value representing the similarity is greater than a second predetermined threshold (e.g., Diff > thresBIO), and the BIO process is skipped in a case where the value representing the similarity is not greater than the second predetermined threshold (e.g., Diff ≦ thresBIO); wherein the DMVR procedure is performed if the value representing the similarity is greater than a third predetermined threshold (e.g., Diff > threshDMVR), and the DMVR procedure is skipped if the value representing the similarity is not greater than the third predetermined threshold (e.g., Diff ≦ threshDMVR).
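A minimal sketch of this branching is given below, assuming the thresholds are configured elsewhere and the similarity value diff has already been computed for the CU or sub-block in question; the function and variable names are illustrative only.

```python
def decide_bio_dmvr(diff, dmvr_enabled_for_cu, thres_bio, thresh_dmvr):
    """Return (perform_bio, perform_dmvr) for one CU or sub-block."""
    perform_bio = diff > thres_bio            # BIO performed only if Diff > thresBIO
    if not dmvr_enabled_for_cu:
        return perform_bio, False             # DMVR disabled: only the BIO decision applies
    perform_dmvr = diff > thresh_dmvr         # DMVR performed only if Diff > threshDMVR
    return perform_bio, perform_dmvr
```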
According to a first exemplary embodiment, the BIO process is selectively skipped at the CU level in case the current CU is disabled for the DMVR process. In the case where the current CU is enabled with the DMVR process, the BIO process and the DMVR process are selectively skipped at a sub-block level.
According to a second exemplary embodiment, the BIO process is selectively skipped at a sub-block level in case the current CU is disabled for the DMVR process. In the case where the current CU is enabled with the DMVR process, the BIO process and the DMVR process are selectively skipped at a sub-block level.
Further, in the second exemplary embodiment, two different sub-block sizes may be used to bypass the BIO/DMVR processes depending on whether the DMVR process is enabled, i.e., sub_BIO×sub_BIO can be preset to be not equal to sub_DMVR×sub_DMVR; that is, the first size and the second size may be set to be different. Alternatively, the same sub-block size may be used to bypass the BIO/DMVR processes regardless of whether the DMVR process applies to the current CU, i.e., sub_BIO×sub_BIO can be preset equal to sub_DMVR×sub_DMVR; that is, the first size and the second size may be set to be the same.
Further, in the second exemplary embodiment, two different thresholds may be used to bypass the BIO process depending on whether the DMVR process is enabled, i.e., thresBIO_0^sub can be preset to be not equal to thresBIO_1^sub; that is, the second predetermined threshold thresBIO_0^sub used in the case where the DMVR process is disabled for the current CU and the second predetermined threshold thresBIO_1^sub used in the case where the DMVR process is enabled for the current CU are set differently. For example, thresBIO_0^sub may be preset to be equal to half of thresBIO_1^sub, i.e., the second predetermined threshold thresBIO_0^sub in the case where the DMVR process is disabled for the current CU is set equal to half of the second predetermined threshold thresBIO_1^sub in the case where the DMVR process is enabled for the current CU. Alternatively, the same threshold may be used to bypass the BIO process regardless of whether the DMVR process is applied to the current CU, i.e., thresBIO_0^sub can be preset equal to thresBIO_1^sub; that is, the second predetermined threshold thresBIO_0^sub in the case where the DMVR process is disabled for the current CU and the second predetermined threshold thresBIO_1^sub in the case where the DMVR process is enabled for the current CU are set to be the same.
According to an exemplary embodiment of the present disclosure, the BIO process may be selectively skipped based on a result of comparing the average gradient value of the prediction samples in the two prediction blocks of the current CU with a predetermined gradient threshold (e.g., a fourth predetermined threshold). For example, in the case where the average gradient value of the prediction samples in the two prediction blocks of the current CU is greater than the fourth predetermined threshold, the BIO process associated with the current CU is performed; in the case where the average gradient value of the prediction samples in the two prediction blocks of the current CU is not greater than the fourth predetermined threshold, the BIO process associated with the current CU is skipped.
According to an example embodiment of the present disclosure, a BIO process associated with a current CU may be selectively skipped at the CU level. In this case, the average gradient value may be an average gradient value of all prediction samples in the two prediction blocks of the current CU. According to another exemplary embodiment of the present disclosure, a BIO process associated with the current CU may be selectively skipped at a sub-block level, in which case the average gradient value may be an average gradient value of prediction samples in corresponding sub-blocks of two prediction blocks of the current CU.
According to an exemplary embodiment of the present disclosure, in the case where the BIO process is determined to be performed based on the similarity (by the above-described method of skipping the BIO process based on the similarity) and the BIO process is determined to be performed based on the gradient information (by the above-described method of skipping the BIO process based on the gradient information), the BIO process is performed, and otherwise the BIO process is skipped. According to another exemplary embodiment of the present disclosure, in case that it is determined to perform the BIO process based on the similarity (by the above-described method of skipping the BIO process based on the similarity information) or it is determined to perform the BIO process based on the gradient information (by the above-described method of skipping the BIO process based on the gradient information), the BIO process is performed, and otherwise, the BIO process is skipped.
According to an exemplary embodiment of the present disclosure, when performing the BIO, the gradient information for the BIO process may be obtained using a gradient filter with fewer taps than the 8-tap gradient filter. For example, a 4-tap gradient filter or a 6-tap gradient filter may be used. For example, in the case of using a 4-tap gradient filter, the gradient filter coefficients are as shown in Table 2 above. For another example, in the case of using a 6-tap gradient filter, the gradient filter coefficients are as shown in Table 3 above.
Fig. 12 is a block diagram illustrating a video decoding apparatus according to an exemplary embodiment of the present disclosure.
Referring to fig. 12, a video decoding apparatus 1200 according to an exemplary embodiment of the present disclosure may include a first acquisition unit 1201, a second acquisition unit 1202, and an execution unit 1203.
The first acquisition unit 1201 may acquire, from a bitstream, a plurality of coding units CU into which a video image is divided. Here, an image may also be referred to as a picture or a frame. As described above, an input video image needs to be encoded block by block, the video image after encoding forms a bitstream and is transmitted to a decoding end, and the decoding end receives and parses the bitstream to perform block by block decoding. Here, a "block" that is a basic unit of encoding may be referred to as a CU. Therefore, when encoding a video image, the video image needs to be divided into a plurality of CUs. The division is described above and thus will not be described herein.
The second acquisition unit 1202 may acquire information of the bidirectional prediction samples of the current CU in the case where the current CU among the plurality of CUs is a bidirectional prediction CU. Also, the execution unit 1203 may selectively skip at least one of a BIO process and a DMVR process associated with the current CU based on the information of the bidirectional prediction samples of the current CU.
According to an example embodiment of the present disclosure, the execution unit 1203 may selectively skip at least one of a BIO process and a DMVR process associated with the current CU at at least one of a CU level and a sub-block level, where the sub-blocks are obtained by dividing the CU. Selectively skipping at least one of the BIO process and the DMVR process at the CU level means selectively skipping at least one of the BIO process and the DMVR process for each CU individually, i.e., the determination of whether to skip the BIO process and/or the DMVR process is performed for each CU individually; selectively skipping at least one of the BIO process and the DMVR process at the sub-block level means selectively skipping at least one of the BIO process and the DMVR process for each sub-block individually, i.e., the determination of whether to skip the BIO process and/or the DMVR process is performed for each sub-block individually. Whether to skip the BIO process and/or the DMVR process may be determined only at the CU level, only at the sub-block level, or adaptively at both the CU level and the sub-block level, as needed.
According to an exemplary embodiment of the present disclosure, the bidirectional prediction samples of the current CU may be prediction samples in two prediction blocks (may also be referred to as a forward prediction block and a backward prediction block) from two reference pictures in the reference picture list L0 and the reference picture list L1, respectively, of the current CU, and may also be referred to as L0 prediction samples and L1 prediction samples for short. The information of the bidirectional predicted samples of the current CU may include at least one of similarity and gradient information of the bidirectional predicted samples of the current CU. The similarity of the bidirectional prediction samples of the current CU refers to the similarity between prediction samples in two prediction blocks of the current CU, and the gradient information of the bidirectional prediction samples of the current CU refers to the gradient information of the prediction samples in the two prediction blocks of the current CU.
According to an example embodiment of the present disclosure, the execution unit 1203 may selectively skip the BIO process and/or the DMVR process based on the similarity of the bidirectional predicted samples of the current CU; or may selectively skip the BIO process based on gradient information of bi-directionally predicted samples of the current CU; or may selectively skip the BIO process based on both the similarity information and the gradient information of the bi-directionally predicted samples of the current CU, as will be described in detail below.
According to an exemplary embodiment of the present disclosure, the similarity between prediction samples in two prediction blocks of the current CU may be found based on a difference between prediction samples in the two prediction blocks of the current CU (i.e., L0 and L1 prediction samples).
According to another exemplary embodiment of the present disclosure, the similarity of the bidirectional prediction samples of the current CU may be obtained based on a difference between corresponding integer reference samples in reference pictures in which two prediction blocks of the current CU are located. Of course, the similarity between prediction samples in two prediction blocks of the current CU according to the present disclosure is not limited to the above-described method, and the similarity between prediction samples in two prediction blocks of the current CU may also be obtained according to any other reasonable method.
According to an exemplary embodiment of the present disclosure, after obtaining the similarity between prediction samples in the two prediction blocks of the current CU, the execution unit 1203 may selectively skip at least one of the BIO process and the DMVR process based on a comparison result of the value representing the similarity with a predetermined threshold (which may be referred to as a first predetermined threshold). For example, in the case where the value representing the similarity is not greater than the first predetermined threshold (e.g., Diff ≦ D_thres), at least one of the BIO process and the DMVR process is skipped, i.e., at least one of the BIO process and the DMVR process does not need to be performed during the motion compensation stage. In the case where the value representing the similarity is greater than the first predetermined threshold (e.g., Diff > D_thres), at least one of the BIO process and the DMVR process is performed, i.e., at least one of the BIO process and the DMVR process still needs to be performed during the motion compensation stage.
Further, in the above-described exemplary embodiment, the execution unit 1203 may selectively skip at least one of the BIO process and the DMVR process at the CU level. In this case, the similarity may be the similarity between all prediction samples in the two prediction blocks of the current CU. Alternatively, the execution unit 1203 may selectively skip at least one of the BIO process and the DMVR process at a sub-block level. In this case, the similarity may be a similarity between prediction samples in corresponding sub-blocks in two prediction blocks of the current CU.
According to another exemplary embodiment of the present disclosure, the performing unit 1203 may selectively skip at least one of the BIO process and the DMVR process based on whether the current CU is allowed to perform the DMVR process and a result of comparing the value representing the similarity with a predetermined threshold. For example, in the case where the current CU is disabled for DMVR procedures, the execution unit 1203 selectively skips the BIO procedures at the CU level or sub-block level; in the case that the current CU is enabled with the DMVR process, the execution unit 1203 selectively skips the BIO process and the DMVR process at a sub-block level; wherein the execution unit 1203 executes the BIO process in a case where the value representing the similarity is larger than a second predetermined threshold (for example, Diff > thresBIO), and the execution unit 1203 skips the BIO process in a case where the value representing the similarity is not larger than the second predetermined threshold (for example, Diff ≦ thresBIO); wherein the execution unit 1203 executes the DMVR procedure in a case where the value representing the similarity is larger than a third predetermined threshold (for example, Diff > threshdmvr), and the execution unit 1203 skips the DMVR procedure in a case where the value representing the similarity is not larger than the third predetermined threshold (for example, Diff ≦ threshdmvr).
According to the first exemplary embodiment, in the case where the current CU is disabled from the DMVR process, the execution unit 1203 selectively skips the BIO process at the CU level. In the case where the current CU is enabled with the DMVR process, the execution unit 1203 selectively skips the BIO process and the DMVR process at a sub-block level.
According to the second exemplary embodiment, the execution unit 1203 selectively skips the BIO process at a sub-block level in a case where the current CU is disabled from the DMVR process. In the case where the current CU is enabled with the DMVR process, the execution unit 1203 selectively skips the BIO and DMVR processes at a sub-block level.
Further, in the second exemplary embodiment, two different sub-block sizes may be used to bypass the BIO/DMVR processes depending on whether the DMVR process is enabled, i.e., sub_BIO×sub_BIO can be preset to be not equal to sub_DMVR×sub_DMVR; that is, the first size and the second size may be set to be different. Alternatively, the same sub-block size may be used to bypass the BIO/DMVR processes regardless of whether the DMVR process applies to the current CU, i.e., sub_BIO×sub_BIO can be preset equal to sub_DMVR×sub_DMVR; that is, the first size and the second size may be set to be the same.
Further, in the second exemplary embodiment, two different thresholds may be used to bypass the BIO process depending on whether the DMVR process is enabled, i.e., thresBIO_0^sub can be preset to be not equal to thresBIO_1^sub; that is, the second predetermined threshold thresBIO_0^sub used in the case where the DMVR process is disabled for the current CU and the second predetermined threshold thresBIO_1^sub used in the case where the DMVR process is enabled for the current CU are set differently. For example, thresBIO_0^sub may be preset to be equal to half of thresBIO_1^sub, i.e., the second predetermined threshold thresBIO_0^sub in the case where the DMVR process is disabled for the current CU is set equal to half of the second predetermined threshold thresBIO_1^sub in the case where the DMVR process is enabled for the current CU. Alternatively, the same threshold may be used to bypass the BIO process regardless of whether the DMVR process is applied to the current CU, i.e., thresBIO_0^sub can be preset equal to thresBIO_1^sub; that is, the second predetermined threshold thresBIO_0^sub in the case where the DMVR process is disabled for the current CU and the second predetermined threshold thresBIO_1^sub in the case where the DMVR process is enabled for the current CU are set to be the same.
According to an exemplary embodiment of the present disclosure, the performing unit 1203 may selectively skip the BIO process based on a comparison result of the average gradient values of the prediction samples in the two prediction blocks of the current CU and a predetermined gradient threshold (e.g., a fourth predetermined threshold). For example, in case the average gradient values of prediction samples in two prediction blocks of the current CU are greater than a fourth predetermined threshold, the execution unit 1203 performs a BIO process associated with the current CU; in case the average gradient values of the prediction samples in the two prediction blocks of the current CU are not larger than a fourth predetermined threshold, the execution unit 1203 skips the BIO process associated with the current CU.
According to an example embodiment of the present disclosure, the execution unit 1203 may selectively skip the BIO process associated with the current CU at the CU level. In this case, the average gradient value may be an average gradient value of all prediction samples in the two prediction blocks of the current CU. According to another example embodiment of the present disclosure, execution unit 1203 may selectively skip a BIO process associated with the current CU at a sub-block level, in which case the average gradient value may be an average gradient value of prediction samples in corresponding sub-blocks of two prediction blocks of the current CU.
According to an exemplary embodiment of the present disclosure, the execution unit 1203 executes the BIO process in a case where it is determined to execute the BIO process based on the similarity (by the above-described method of skipping the BIO process based on the similarity) and it is determined to execute the BIO process based on the gradient information (by the above-described method of skipping the BIO process based on the gradient information), and otherwise skips the BIO process. According to another exemplary embodiment of the present disclosure, the performing unit 1203 performs the BIO process in a case where it is determined to perform the BIO process based on the similarity (by the above-described method of skipping the BIO process based on the similarity) or it is determined to perform the BIO process based on the gradient information (by the above-described method of skipping the BIO process based on the gradient information), and otherwise skips the BIO process.
According to an exemplary embodiment of the present disclosure, the video decoding apparatus 1200 may further include a gradient information obtaining unit (not shown). When the BIO process is performed, the gradient information obtaining unit may obtain the gradient information for the BIO process using a gradient filter with fewer taps than the 8-tap gradient filter. For example, the gradient information obtaining unit may use a 4-tap gradient filter or a 6-tap gradient filter. For example, in the case of using a 4-tap gradient filter, the gradient filter coefficients are as shown in Table 2 above. For another example, in the case of using a 6-tap gradient filter, the gradient filter coefficients are as shown in Table 3 above.
Fig. 13 is a block diagram illustrating a video encoding apparatus according to an exemplary embodiment of the present disclosure.
Referring to fig. 13, a video encoding apparatus 1300 according to an exemplary embodiment of the present disclosure includes a dividing unit 1301, an acquiring unit 1302, and an execution unit 1303.
The division unit 1301 may divide a video image into a plurality of coding units CU. Here, an image may also be referred to as a picture or a frame. As described above, an input video image needs to be encoded block by block, and here, a "block" as an encoded basic unit may be referred to as a CU. Therefore, when encoding a video image, the video image needs to be divided into a plurality of CUs. The division is described above and thus will not be described herein.
The acquisition unit 1302 may acquire information of a bidirectional prediction sample point of a current CU in a case where the current CU among the plurality of CUs is a bidirectional prediction CU. Also, the execution unit 1303 may selectively skip at least one of a BIO process and a DMVR process associated with the current CU based on the information of the bidirectional predicted samples of the current CU.
According to an example embodiment of the present disclosure, the execution unit 1303 may selectively skip at least one of a BIO process and a DMVR process associated with the current CU at at least one of a CU level and a sub-block level, where the sub-blocks are obtained by dividing the CU. Selectively skipping at least one of the BIO process and the DMVR process at the CU level means selectively skipping at least one of the BIO process and the DMVR process for each CU individually, i.e., the determination of whether to skip the BIO process and/or the DMVR process is performed for each CU individually; selectively skipping at least one of the BIO process and the DMVR process at the sub-block level means selectively skipping at least one of the BIO process and the DMVR process for each sub-block individually, i.e., the determination of whether to skip the BIO process and/or the DMVR process is performed for each sub-block individually. Whether to skip the BIO process and/or the DMVR process may be determined only at the CU level, only at the sub-block level, or adaptively at both the CU level and the sub-block level, as needed.
According to an exemplary embodiment of the present disclosure, the bidirectional prediction samples of the current CU may be prediction samples in two prediction blocks (may also be referred to as a forward prediction block and a backward prediction block) from two reference pictures in the reference picture list L0 and the reference picture list L1, respectively, of the current CU, and may also be referred to as L0 prediction samples and L1 prediction samples for short. The information of the bidirectional predicted samples of the current CU may include at least one of similarity and gradient information of the bidirectional predicted samples of the current CU. The similarity of the bidirectional prediction samples of the current CU refers to the similarity between prediction samples in two prediction blocks of the current CU, and the gradient information of the bidirectional prediction samples of the current CU refers to the gradient information of the prediction samples in the two prediction blocks of the current CU.
According to an exemplary embodiment of the present disclosure, the execution unit 1303 may selectively skip the BIO process and/or the DMVR process based on the similarity of the bidirectional predicted samples of the current CU; or may selectively skip the BIO process based on gradient information of bi-directionally predicted samples of the current CU; or may selectively skip the BIO process based on both the similarity information and the gradient information of the bi-directionally predicted samples of the current CU, as will be described in detail below.
According to an exemplary embodiment of the present disclosure, the similarity between prediction samples in two prediction blocks of the current CU may be found based on a difference between prediction samples in the two prediction blocks of the current CU (i.e., L0 and L1 prediction samples).
According to another exemplary embodiment of the present disclosure, the similarity of the bidirectional prediction samples of the current CU may be obtained based on a difference between corresponding integer reference samples in reference pictures in which two prediction blocks of the current CU are located. Of course, the similarity between prediction samples in two prediction blocks of the current CU according to the present disclosure is not limited to the above-described method, and the similarity between prediction samples in two prediction blocks of the current CU may also be obtained according to any other reasonable method.
According to an exemplary embodiment of the present disclosure, after obtaining the similarity between prediction samples in the two prediction blocks of the current CU, the execution unit 1303 may selectively skip at least one of the BIO process and the DMVR process based on a comparison result of the value representing the similarity with a predetermined threshold (which may be referred to as a first predetermined threshold). For example, in the case where the value representing the similarity is not greater than the first predetermined threshold (e.g., Diff ≦ D_thres), at least one of the BIO process and the DMVR process is skipped, i.e., at least one of the BIO process and the DMVR process does not need to be performed during the motion compensation stage. In the case where the value representing the similarity is greater than the first predetermined threshold (e.g., Diff > D_thres), at least one of the BIO process and the DMVR process is performed, i.e., at least one of the BIO process and the DMVR process still needs to be performed during the motion compensation stage.
Further, in the above-described exemplary embodiment, the execution unit 1303 may selectively skip at least one of the BIO process and the DMVR process at the CU level. In this case, the similarity may be the similarity between all prediction samples in the two prediction blocks of the current CU. Alternatively, the execution unit 1303 may selectively skip at least one of the BIO process and the DMVR process at a sub-block level. In this case, the similarity may be a similarity between prediction samples in corresponding sub-blocks in two prediction blocks of the current CU.
According to another exemplary embodiment of the present disclosure, the performing unit 1303 may selectively skip at least one of the BIO process and the DMVR process based on whether the current CU is allowed to perform the DMVR process and a result of comparing the value representing the similarity with a predetermined threshold. For example, in the case where the current CU is disabled for the DMVR process, the execution unit 1303 selectively skips the BIO process at the CU level or the sub-block level; in the case where the current CU is enabled with the DMVR procedure, the execution unit 1303 selectively skips the BIO procedure and the DMVR procedure at a sub-block level; wherein the execution unit 1303 executes the BIO process in a case where the value indicating the similarity is larger than a second predetermined threshold (for example, Diff > thresBIO), and the execution unit 1303 skips the BIO process in a case where the value indicating the similarity is not larger than the second predetermined threshold (for example, Diff ≦ thresBIO); wherein the execution unit 1303 executes the DMVR procedure in a case where the value indicating the similarity is larger than a third predetermined threshold (for example, Diff > threshdmvr), and the execution unit 1303 skips the DMVR procedure in a case where the value indicating the similarity is not larger than the third predetermined threshold (for example, Diff ≦ threshdmvr).
According to the first exemplary embodiment, the execution unit 1303 selectively skips the BIO process at the CU level in the case where the current CU is disabled from the DMVR process. In the case where the current CU is enabled with the DMVR process, the execution unit 1303 selectively skips the BIO process and the DMVR process at a sub-block level.
According to the second exemplary embodiment, the execution unit 1303 selectively skips the BIO process at a sub-block level in a case where the current CU is disabled from the DMVR process. In the case where the current CU is enabled with the DMVR process, the execution unit 1303 selectively skips the BIO process and the DMVR process at a sub-block level.
Further, in the second exemplary embodiment, two different sub-block sizes may be used to bypass the BIO/DMVR processes depending on whether the DMVR process is enabled, i.e., sub_BIO×sub_BIO can be preset to be not equal to sub_DMVR×sub_DMVR; that is, the first size and the second size may be set to be different. Alternatively, the same sub-block size may be used to bypass the BIO/DMVR processes regardless of whether the DMVR process applies to the current CU, i.e., sub_BIO×sub_BIO can be preset equal to sub_DMVR×sub_DMVR; that is, the first size and the second size may be set to be the same.
Further, in the second exemplary embodiment, two different thresholds may be used to bypass the BIO process depending on whether the DMVR process is enabled, i.e., thresBIO_0^sub can be preset to be not equal to thresBIO_1^sub; that is, the second predetermined threshold thresBIO_0^sub used in the case where the DMVR process is disabled for the current CU and the second predetermined threshold thresBIO_1^sub used in the case where the DMVR process is enabled for the current CU are set differently. For example, thresBIO_0^sub may be preset to be equal to half of thresBIO_1^sub, i.e., the second predetermined threshold thresBIO_0^sub in the case where the DMVR process is disabled for the current CU is set equal to half of the second predetermined threshold thresBIO_1^sub in the case where the DMVR process is enabled for the current CU. Alternatively, the same threshold may be used to bypass the BIO process regardless of whether the DMVR process is applied to the current CU, i.e., thresBIO_0^sub can be preset equal to thresBIO_1^sub; that is, the second predetermined threshold thresBIO_0^sub in the case where the DMVR process is disabled for the current CU and the second predetermined threshold thresBIO_1^sub in the case where the DMVR process is enabled for the current CU are set to be the same.
According to an exemplary embodiment of the present disclosure, the performing unit 1303 may selectively skip the BIO process based on a comparison result of average gradient values of prediction samples in two prediction blocks of the current CU with a predetermined gradient threshold (e.g., a fourth predetermined threshold). For example, in a case where the average gradient values of prediction samples in two prediction blocks of the current CU are greater than a fourth predetermined threshold, the execution unit 1303 performs a BIO process associated with the current CU; in case the average gradient values of the prediction samples in the two prediction blocks of the current CU are not greater than the fourth predetermined threshold, execution unit 1303 skips the BIO process associated with the current CU.
According to an example embodiment of the present disclosure, the execution unit 1303 may selectively skip a BIO process associated with the current CU at the CU level. In this case, the average gradient value may be an average gradient value of all prediction samples in the two prediction blocks of the current CU. According to another exemplary embodiment of the present disclosure, the performing unit 1303 may selectively skip a BIO process associated with the current CU at a sub-block level, in which case the average gradient value may be an average gradient value of prediction samples in corresponding sub-blocks of two prediction blocks of the current CU.
According to an exemplary embodiment of the present disclosure, the execution unit 1303 executes the BIO process in a case where it is determined to execute the BIO process based on the similarity (by the above-described method of skipping the BIO process based on the similarity) and it is determined to execute the BIO process based on the gradient information (by the above-described method of skipping the BIO process based on the gradient information), and otherwise skips the BIO process. According to another exemplary embodiment of the present disclosure, the execution unit 1303 executes the BIO process in a case where it is determined to execute the BIO process based on the similarity (by the above-described method of skipping the BIO process based on the similarity) or it is determined to execute the BIO process based on the gradient information (by the above-described method of skipping the BIO process based on the gradient information), and otherwise skips the BIO process.
According to an exemplary embodiment of the present disclosure, the video encoding apparatus 1300 may further include a gradient information obtaining unit (not shown). When the BIO process is performed, the gradient information obtaining unit may calculate the gradient information for the BIO process using a gradient filter with fewer taps than the 8-tap gradient filter. For example, the gradient information obtaining unit may use a 4-tap gradient filter or a 6-tap gradient filter. For example, in the case of using a 4-tap gradient filter, the gradient filter coefficients are as shown in Table 2 above. For another example, in the case of using a 6-tap gradient filter, the gradient filter coefficients are as shown in Table 3 above.
Fig. 14 is a block diagram of an electronic device according to an example embodiment of the present disclosure.
Referring to fig. 14, an electronic device 1400 includes at least one memory 1401 and at least one processor 1402, the at least one memory 1401 having stored therein a set of computer-executable instructions that, when executed by the at least one processor 1402, perform a video encoding method or a video decoding method according to exemplary embodiments of the present disclosure.
By way of example, the electronic device 1400 may be a PC, a tablet device, a personal digital assistant, a smartphone, or another device capable of executing the above set of instructions. Here, the electronic device 1400 does not have to be a single electronic device, but can be any collection of devices or circuits capable of executing the above instructions (or instruction sets) individually or jointly. The electronic device 1400 may also be part of an integrated control system or a system manager, or may be configured as a portable electronic device that interfaces locally or remotely (e.g., via wireless transmission).
In the electronic device 1400, the processor 1402 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a programmable logic device, a special-purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processors may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
The processor 1402 may execute instructions or code stored in the memory 1401, wherein the memory 1401 may also store data. The instructions and data may also be transmitted or received over a network via a network interface device, which may employ any known transmission protocol.
The memory 1401 may be integrated with the processor 1402, e.g. by arranging a RAM or flash memory within an integrated circuit microprocessor or the like. In addition, memory 1401 may comprise a stand-alone device, such as an external disk drive, storage array, or any other storage device usable by a database system. The memory 1401 and the processor 1402 may be operatively coupled or may communicate with each other, e.g. via I/O ports, network connections, etc., such that the processor 1402 can read files stored in the memory.
In addition, the electronic device 1400 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the electronic device 1400 may be connected to each other via a bus and/or a network.
According to an exemplary embodiment of the present disclosure, there may also be provided a computer-readable storage medium storing instructions which, when executed by at least one processor, cause the at least one processor to perform a video encoding method or a video decoding method according to the present disclosure. Examples of the computer-readable storage medium herein include: read-only memory (ROM), random-access programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or optical disc storage, hard disk drive (HDD), solid-state drive (SSD), card-type memory (such as a multimedia card, a Secure Digital (SD) card, or an eXtreme Digital (XD) card), magnetic tape, a floppy disk, a magneto-optical data storage device, an optical data storage device, a hard disk, a solid-state disk, and any other device configured to store a computer program and any associated data, data files, and data structures in a non-transitory manner and to provide them to a processor or computer so that the processor or computer can execute the computer program. The computer program in the above computer-readable storage medium can run in an environment deployed in computer equipment such as a client, a host, a proxy device, or a server. Further, in one example, the computer program and any associated data, data files, and data structures are distributed across a networked computer system such that they are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
According to an exemplary embodiment of the present disclosure, there may also be provided a computer program product, in which instructions are executable by a processor of a computer device to perform a video encoding method or a video decoding method according to an exemplary embodiment of the present disclosure.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
According to the video encoding method or the video decoding method of the present disclosure, the DMVR and/or BIO process can be conditionally skipped when conventional motion compensation is performed, thereby reducing the complexity of motion compensation and improving coding and decoding efficiency. Furthermore, the BIO and/or DMVR may be skipped entirely at the CU level or the sub-block level, resulting in a better trade-off between improving codec performance and reducing codec complexity. In addition, a gradient filter with fewer coefficients can be used to replace the existing 8-tap gradient filter in the current BIO design, thereby reducing the computational complexity of the BIO and improving coding and decoding efficiency.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
According to embodiments of the present disclosure, the following are provided:
1. A video decoding method, comprising: acquiring, from a bitstream, a plurality of coding units (CUs) into which a video image is divided; in the case where a current CU among the plurality of CUs is a bidirectional prediction CU, acquiring information of bidirectional prediction samples of the current CU; and selectively skipping at least one of a bidirectional optical flow (BIO) process and a decoder-side motion vector refinement (DMVR) process associated with the current CU based on the information of the bidirectional prediction samples of the current CU.
2. The video decoding method of example 1, wherein selectively skipping at least one of the BIO process and the DMVR process comprises: selectively skipping the at least one process at least one of a CU level and a sub-block level, wherein the sub-blocks are obtained by dividing the CU.
3. The video decoding method of example 1, wherein the information of bi-directionally predicted samples of the current CU comprises at least one of: similarity between prediction samples in two prediction blocks of the current CU; and gradient information associated with prediction samples in the two prediction blocks, wherein the two prediction blocks are a forward prediction block and a backward prediction block of the current CU.
4. The video decoding method of example 3, wherein in the case where the at least one process is selectively skipped based on the similarity, the at least one process is selectively skipped based on a comparison result of a value representing the similarity with a first predetermined threshold.
5. The video decoding method of example 4, wherein selectively skipping the at least one process based on the comparison comprises: performing the at least one process if the value is greater than a first predetermined threshold; in the event that the value is not greater than a first predetermined threshold, skipping the at least one process.
6. The video decoding method of example 4, wherein selectively skipping the at least one process based on the comparison comprises: selectively skipping, at a CU level, the at least one process, wherein the similarity is a similarity between all prediction samples in two prediction blocks of a current CU; or selectively skipping the at least one process at a sub-block level, wherein the sub-blocks are obtained by dividing the CU, and the similarity is a similarity between prediction samples in corresponding sub-blocks of two prediction blocks of the current CU.
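(Illustration, not part of the enumerated embodiments.) As a minimal sketch of the CU-level / sub-block-level decision of examples 4 to 6, the C++ fragment below assumes that the value representing the similarity is a sum of absolute differences (SAD) between the two prediction blocks, so a larger value means less similar samples and the BIO/DMVR process is performed only when the value exceeds the first predetermined threshold. All function and variable names (computeSad, performRefinementCuLevel, and so on) are hypothetical and are not taken from the patent text.

    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // SAD between the L0 and L1 prediction samples of one region (the whole CU
    // or one sub-block), stored row by row with a common stride.
    static uint64_t computeSad(const int16_t* pred0, const int16_t* pred1,
                               int width, int height, int stride) {
        uint64_t sad = 0;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                sad += std::abs(pred0[y * stride + x] - pred1[y * stride + x]);
        return sad;
    }

    // CU-level decision (example 5): perform BIO/DMVR only when the two
    // prediction blocks differ enough, i.e. the SAD exceeds the first threshold.
    static bool performRefinementCuLevel(const int16_t* pred0, const int16_t* pred1,
                                         int cuWidth, int cuHeight, int stride,
                                         uint64_t firstThreshold) {
        return computeSad(pred0, pred1, cuWidth, cuHeight, stride) > firstThreshold;
    }

    // Sub-block-level decision (example 6): the same test repeated per sub-block,
    // so the process may be skipped for some sub-blocks and kept for others.
    static std::vector<bool> performRefinementSubBlockLevel(
            const int16_t* pred0, const int16_t* pred1,
            int cuWidth, int cuHeight, int stride,
            int sbWidth, int sbHeight, uint64_t firstThreshold) {
        std::vector<bool> perform;
        for (int y = 0; y < cuHeight; y += sbHeight)
            for (int x = 0; x < cuWidth; x += sbWidth)
                perform.push_back(computeSad(pred0 + y * stride + x,
                                             pred1 + y * stride + x,
                                             sbWidth, sbHeight, stride) > firstThreshold);
        return perform;
    }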
7. The video decoding method of example 3, wherein in the case that the at least one process is selectively skipped based on the similarity, the at least one process is selectively skipped based on whether the current CU is allowed to perform the DMVR process and a result of comparing a value representing the similarity with a predetermined threshold.
8. The video decoding method of example 7, wherein selectively skipping the at least one process comprises: selectively skipping, at a CU level or a sub-block level, a BIO process associated with the current CU if the DMVR process is disabled for the current CU; selectively skipping, at a sub-block level, a BIO process and a DMVR process associated with the current CU if the DMVR process is enabled for the current CU; wherein the BIO process associated with the current CU is performed if the value is greater than a second predetermined threshold, and the BIO process associated with the current CU is skipped if the value is not greater than the second predetermined threshold; wherein the DMVR process associated with the current CU is performed if the value is greater than a third predetermined threshold, and the DMVR process associated with the current CU is skipped if the value is not greater than the third predetermined threshold; wherein the sub-blocks are obtained by dividing the CU.
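(Illustration, not part of the enumerated embodiments.) A minimal sketch of the combined decision of examples 7 and 8, under the same assumption that the value is a SAD-style dissimilarity measure per region; the second and third thresholds, like the function and struct names, are placeholders.

    #include <cstdint>

    struct RefinementDecision {
        bool performBio;
        bool performDmvr;
    };

    // Example 8: when DMVR is enabled for the CU, both tools are tested at the
    // sub-block level, each gated by its own threshold on the same value; when
    // DMVR is disabled, only the BIO test applies.
    static RefinementDecision decideForRegion(uint64_t regionValue,
                                              bool dmvrEnabledForCu,
                                              uint64_t secondThreshold,
                                              uint64_t thirdThreshold) {
        RefinementDecision d{false, false};
        d.performBio  = regionValue > secondThreshold;                     // otherwise skip BIO
        d.performDmvr = dmvrEnabledForCu && regionValue > thirdThreshold;  // otherwise skip DMVR
        return d;
    }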
9. The video decoding method of example 8, wherein: in a case where the DMVR process is disabled for the current CU and the BIO process associated with the current CU is selectively skipped at the CU level, the similarity is a similarity between all prediction samples in the two prediction blocks of the current CU; in a case where the DMVR process is disabled for the current CU and the BIO process associated with the current CU is selectively skipped at the sub-block level, the similarity is a similarity between prediction samples in corresponding sub-blocks of the two prediction blocks of the current CU, and the size of the sub-blocks is a first size; in a case where the DMVR process is enabled for the current CU, the similarity is a similarity between prediction samples in corresponding sub-blocks of the two prediction blocks of the current CU, and the size of the sub-blocks is a second size.
10. The video decoding method of example 8, wherein the second predetermined threshold in the case where the DMVR process is disabled for the current CU is set to be different from the second predetermined threshold in the case where the DMVR process is enabled for the current CU.
11. The video decoding method of example 8, wherein the second predetermined threshold in the case where the DMVR process is disabled for the current CU is set to half the second predetermined threshold in the case where the DMVR process is enabled for the current CU.
12. The video decoding method of example 8, wherein the second predetermined threshold in the case where the DMVR process is disabled for the current CU is set to be the same as the second predetermined threshold in the case where the DMVR process is enabled for the current CU.
13. The video decoding method of example 9, wherein the first size and the second size are set to be different.
14. The video decoding method of example 9, wherein the first size and the second size are set to be the same.
15. The video decoding method of example 3, wherein in the case that the at least one process is selectively skipped based on the gradient information, a BIO process associated with the current CU is selectively skipped based on a comparison of average gradient values of prediction samples in two prediction blocks of the current CU with a fourth predetermined threshold.
16. The video decoding method of example 15, wherein selectively skipping a BIO process associated with the current CU based on the comparison comprises: performing a BIO procedure associated with a current CU if the average gradient value is greater than a fourth predetermined threshold; skipping a BIO process associated with the current CU if the average gradient value is not greater than a fourth predetermined threshold.
17. The video decoding method of example 15, wherein selectively skipping a BIO process associated with the current CU based on the comparison comprises: selectively skipping, at a CU level, a BIO process associated with the current CU, wherein the average gradient value is an average gradient value of all prediction samples in two prediction blocks of the current CU; or selectively skipping, at a sub-block level, a BIO process associated with the current CU, wherein the sub-blocks are obtained by dividing the CU, and the average gradient value is an average gradient value of prediction samples in corresponding sub-blocks of the two prediction blocks of the current CU.
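(Illustration, not part of the enumerated embodiments.) A sketch of the gradient-based test of examples 15 to 17, assuming the average gradient value is the mean absolute horizontal-plus-vertical gradient over the prediction samples of the region being tested; how those per-sample gradients are derived (for instance with the shorter gradient filters of examples 20 to 23) is left abstract, and the fourth threshold and all names are hypothetical.

    #include <cstdint>
    #include <cstdlib>

    // gradX/gradY hold per-sample gradients over both prediction blocks of the
    // region (the whole CU or one sub-block), numSamples entries each.
    static bool performBioByGradient(const int32_t* gradX, const int32_t* gradY,
                                     int numSamples, uint64_t fourthThreshold) {
        uint64_t sum = 0;
        for (int i = 0; i < numSamples; ++i)
            sum += static_cast<uint64_t>(std::abs(gradX[i])) + std::abs(gradY[i]);
        const uint64_t avgGradient = (numSamples > 0) ? sum / numSamples : 0;
        // Example 16: perform BIO only when the average gradient exceeds the
        // fourth predetermined threshold; otherwise BIO is skipped for the region.
        return avgGradient > fourthThreshold;
    }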
18. The video decoding method of example 3, wherein, in a case where the at least one process is selectively skipped based on both the similarity and the gradient information: the BIO process associated with the current CU is performed if both the similarity-based determination and the gradient-based determination indicate that the BIO process is to be performed, and is otherwise skipped; or the BIO process associated with the current CU is performed if either the similarity-based determination or the gradient-based determination indicates that the BIO process is to be performed, and is otherwise skipped.
19. The video decoding method of example 3, wherein the similarity between prediction samples in two prediction blocks of the current CU is derived based on at least one of: a difference between prediction samples in two prediction blocks of a current CU; or the difference between corresponding integer reference samples in reference pictures where two prediction blocks of the current CU are located.
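(Illustration, not part of the enumerated embodiments.) Example 19's second option can be read as measuring the difference on integer-position reference samples, so that the test can run before any interpolation of the two prediction blocks. The sketch below works under that reading; it assumes motion vectors stored in 1/16-sample units (as in VVC), ignores picture-boundary handling, and every name in it is hypothetical.

    #include <cstdint>
    #include <cstdlib>

    // SAD between the two reference regions taken at integer positions: each MV
    // is rounded from 1/16-sample units to full-sample units and the samples are
    // read directly from the reference pictures (no interpolation filter).
    static uint64_t sadOnIntegerReferenceSamples(
            const int16_t* refPic0, const int16_t* refPic1, int refStride,
            int blockX, int blockY, int width, int height,
            int mv0x, int mv0y, int mv1x, int mv1y /* 1/16-sample units */) {
        const int ix0 = blockX + ((mv0x + 8) >> 4), iy0 = blockY + ((mv0y + 8) >> 4);
        const int ix1 = blockX + ((mv1x + 8) >> 4), iy1 = blockY + ((mv1y + 8) >> 4);
        uint64_t sad = 0;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                sad += std::abs(refPic0[(iy0 + y) * refStride + ix0 + x]
                              - refPic1[(iy1 + y) * refStride + ix1 + x]);
        return sad;
    }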
20. The video decoding method of example 1, further comprising: when performing the BIO process, obtaining gradient information for the BIO process using a gradient filter having fewer filter coefficients (taps) than an 8-tap gradient filter.
21. The video decoding method of example 20, wherein the gradient filter is a 4-tap gradient filter or a 6-tap gradient filter.
22. The video decoding method of example 21, wherein in the case of using a 4-tap gradient filter, the gradient filter coefficients are as follows:
[Table of 4-tap gradient filter coefficients f_grad[p][index] for each fractional sample position p; the table appears only as an image (BDA0003085992190000371) in the original publication, so its values are not reproduced here.]
wherein index in f_grad[p][index] represents the coefficient index of the gradient filter at the fractional sample position p, and, in the case of a 4-tap gradient filter, index takes values in the range [0, 1, 2, 3].
23. The video decoding method of example 21, wherein in the case of using a 6-tap gradient filter, the gradient filter coefficients are as follows:
[Table of 6-tap gradient filter coefficients f_grad[p][index] for each fractional sample position p; the table appears only as images (BDA0003085992190000372 and BDA0003085992190000381) in the original publication, so its values are not reproduced here.]
wherein index in f_grad[p][index] represents the coefficient index of the gradient filter at the fractional sample position p, and, in the case of a 6-tap gradient filter, index takes values in the range [0, 1, 2, 3, 4, 5].
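(Illustration, not part of the enumerated embodiments.) The coefficient tables of examples 22 and 23 appear only as images in the original publication, so their values are not reproduced here. The sketch below therefore only shows how a 4-tap gradient filter of the kind described would be applied in the horizontal direction; the coefficient values in the array are placeholders (not the patent's values), 1/16-sample precision (16 fractional positions) is assumed, and all names are hypothetical.

    #include <cstdint>

    // Placeholder 4-tap gradient filter coefficients f_grad[p][index] for 16
    // fractional positions p. These are NOT the patent's coefficients; they only
    // make the example self-contained. Unlisted rows default to zero.
    static const int kGradFilter4[16][4] = {
        { -1, -3, 3, 1 },  // p = 0 (placeholder values)
        { -1, -3, 3, 1 },  // p = 1 (placeholder values)
        // ... the remaining rows would come from the table of example 22 ...
    };

    // Horizontal gradient at integer position x of one prediction row, using the
    // 4-tap filter selected by the fractional x-position p of the motion vector.
    // A 4-tap filter touches only four neighbouring samples, i.e. fewer than the
    // 8-tap gradient filter it replaces, which is where the complexity saving comes from.
    static int32_t horizontalGradient4Tap(const int16_t* row, int x, int p) {
        const int* c = kGradFilter4[p & 15];
        return c[0] * row[x - 1] + c[1] * row[x] + c[2] * row[x + 1] + c[3] * row[x + 2];
    }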
24. A video encoding method, comprising: dividing a video image into a plurality of coding units (CUs); in a case where a current CU among the plurality of CUs is a bi-directionally predicted CU, acquiring information of bi-directionally predicted samples of the current CU; and selectively skipping, based on the information of the bi-directionally predicted samples of the current CU, at least one of a bi-directional optical flow (BIO) process and a decoder-side motion vector refinement (DMVR) process associated with the current CU.
25. The video encoding method of example 24, wherein selectively skipping at least one of the BIO process and the DMVR process comprises: selectively skipping the at least one process at least one of a CU level and a sub-block level, wherein the sub-blocks are obtained by dividing the CU.
26. The video encoding method of example 24, wherein the information of bi-directionally predicted samples for the current CU comprises at least one of: similarity between prediction samples in two prediction blocks of the current CU; and gradient information associated with prediction samples in the two prediction blocks, wherein the two prediction blocks are a forward prediction block and a backward prediction block of the current CU.
27. The video encoding method of example 26, wherein in the event that the at least one process is selectively skipped based on the similarity, the at least one process is selectively skipped based on a comparison of a value representing the similarity to a first predetermined threshold.
28. The video encoding method of example 27, wherein selectively skipping the at least one process based on the comparison comprises: performing the at least one process if the value is greater than a first predetermined threshold; in the event that the value is not greater than a first predetermined threshold, skipping the at least one process.
29. The video encoding method of example 27, wherein selectively skipping the at least one process based on the comparison comprises: selectively skipping, at a CU level, the at least one process, wherein the similarity is a similarity between all prediction samples in two prediction blocks of a current CU; or selectively skipping the at least one process at a sub-block level, wherein the sub-blocks are obtained by dividing the CU, and the similarity is a similarity between prediction samples in corresponding sub-blocks of two prediction blocks of the current CU.
30. The video encoding method of example 26, wherein the at least one process is selectively skipped based on whether the current CU is allowed to perform a DMVR process and a comparison of a value representing the similarity to a predetermined threshold, if the at least one process is selectively skipped based on the similarity.
31. The video encoding method of example 30, wherein selectively skipping the at least one process comprises: selectively skipping, at a CU level or a sub-block level, a BIO process associated with the current CU if the DMVR process is disabled for the current CU; selectively skipping, at a sub-block level, a BIO process and a DMVR process associated with the current CU if the DMVR process is enabled for the current CU; wherein the BIO process associated with the current CU is performed if the value is greater than a second predetermined threshold, and the BIO process associated with the current CU is skipped if the value is not greater than the second predetermined threshold; wherein the DMVR process associated with the current CU is performed if the value is greater than a third predetermined threshold, and the DMVR process associated with the current CU is skipped if the value is not greater than the third predetermined threshold; wherein the sub-blocks are obtained by dividing the CU.
32. The video encoding method of example 31, wherein: in a case where the DMVR process is disabled for the current CU and the BIO process associated with the current CU is selectively skipped at the CU level, the similarity is a similarity between all prediction samples in the two prediction blocks of the current CU; in a case where the DMVR process is disabled for the current CU and the BIO process associated with the current CU is selectively skipped at the sub-block level, the similarity is a similarity between prediction samples in corresponding sub-blocks of the two prediction blocks of the current CU, and the size of the sub-blocks is a first size; in a case where the DMVR process is enabled for the current CU, the similarity is a similarity between prediction samples in corresponding sub-blocks of the two prediction blocks of the current CU, and the size of the sub-blocks is a second size.
33. The video encoding method of example 31, wherein the second predetermined threshold is set to be different if the DMVR process is disabled for the current CU than if the DMVR process is enabled for the current CU.
34. The video encoding method of example 31, wherein the second predetermined threshold in the case where the current CU is disabled for DMVR procedures is set to half the second predetermined threshold in the case where the current CU is enabled for DMVR procedures.
35. The video encoding method of example 31, wherein the second predetermined threshold if the current CU is disabled for DMVR procedures is set the same as the second predetermined threshold if the current CU is enabled for DMVR procedures.
36. The video encoding method of example 32, wherein the first size and the second size are set to be different.
37. The video encoding method of example 32, wherein the first size and the second size are set to be the same.
38. The video encoding method of example 26, wherein, in the case that the at least one process is selectively skipped based on the gradient information, a BIO process associated with the current CU is selectively skipped based on a comparison of average gradient values of prediction samples in two prediction blocks of the current CU with a fourth predetermined threshold.
39. The video encoding method of example 38, wherein selectively skipping a BIO process associated with the current CU based on the comparison comprises: performing a BIO procedure associated with a current CU if the average gradient value is greater than a fourth predetermined threshold; skipping a BIO process associated with the current CU if the average gradient value is not greater than a fourth predetermined threshold.
40. The video encoding method of example 38, wherein selectively skipping a BIO process associated with the current CU based on the comparison comprises: selectively skipping, at a CU level, a BIO process associated with the current CU, wherein the average gradient value is an average gradient value of all prediction samples in two prediction blocks of the current CU; or selectively skipping, at a sub-block level, a BIO process associated with the current CU, wherein the sub-blocks are obtained by dividing the CU, and the average gradient value is an average gradient value of prediction samples in corresponding sub-blocks of the two prediction blocks of the current CU.
41. The video encoding method of example 26, wherein, in a case where the at least one process is selectively skipped based on both the similarity and the gradient information: the BIO process associated with the current CU is performed if both the similarity-based determination and the gradient-based determination indicate that the BIO process is to be performed, and is otherwise skipped; or the BIO process associated with the current CU is performed if either the similarity-based determination or the gradient-based determination indicates that the BIO process is to be performed, and is otherwise skipped.
42. The video encoding method of example 26, wherein the similarity between prediction samples in two prediction blocks of the current CU is derived based on at least one of: a difference between prediction samples in two prediction blocks of a current CU; or the difference between corresponding integer reference samples in reference pictures where two prediction blocks of the current CU are located.
43. The video encoding method of example 24, further comprising: when performing the BIO process, obtaining gradient information for the BIO process using a gradient filter having fewer filter coefficients (taps) than an 8-tap gradient filter.
44. The video encoding method of example 43, wherein the gradient filter is a 4-tap gradient filter or a 6-tap gradient filter.
45. The video encoding method of example 44, wherein in the case of using a 4-tap gradient filter, the gradient filter coefficients are as follows:
[Table of 4-tap gradient filter coefficients f_grad[p][index] for each fractional sample position p; the table appears only as an image (BDA0003085992190000411) in the original publication, so its values are not reproduced here.]
wherein index in f_grad[p][index] represents the coefficient index of the gradient filter at the fractional sample position p, and, in the case of a 4-tap gradient filter, index takes values in the range [0, 1, 2, 3].
46. The video encoding method of example 44, wherein in the case of using a 6-tap gradient filter, the gradient filter coefficients are as follows:
[Table of 6-tap gradient filter coefficients f_grad[p][index] for each fractional sample position p; the table appears only as an image (BDA0003085992190000412) in the original publication, so its values are not reproduced here.]
wherein index in f_grad[p][index] represents the coefficient index of the gradient filter at the fractional sample position p, and, in the case of a 6-tap gradient filter, index takes values in the range [0, 1, 2, 3, 4, 5].
47. A video decoding apparatus, comprising: a first acquisition unit configured to: acquiring, from a bitstream, a plurality of coding units (CUs) into which a video image is divided; a second acquisition unit configured to: in a case where a current CU among the plurality of CUs is a bi-directionally predicted CU, acquiring information of bi-directionally predicted samples of the current CU; an execution unit configured to: selectively skipping, based on the information of the bi-directionally predicted samples of the current CU, at least one of a bi-directional optical flow (BIO) process and a decoder-side motion vector refinement (DMVR) process associated with the current CU.
48. The video decoding apparatus of example 47, wherein the execution unit is configured to: selectively skipping the at least one process at least one of a CU level and a sub-block level, wherein the sub-blocks are obtained by dividing the CU.
49. The video decoding apparatus of example 47, wherein the information of bi-directionally predicted samples for the current CU comprises at least one of: similarity between prediction samples in two prediction blocks of the current CU; and gradient information associated with prediction samples in the two prediction blocks, wherein the two prediction blocks are a forward prediction block and a backward prediction block of the current CU.
50. The video decoding apparatus of example 49, wherein the execution unit is configured to: selectively skipping the at least one process based on a comparison of a value representing the similarity with a first predetermined threshold in the case where the at least one process is selectively skipped based on the similarity.
51. The video decoding apparatus of example 50, wherein the execution unit is configured to: performing the at least one process if the value is greater than a first predetermined threshold; in the event that the value is not greater than a first predetermined threshold, skipping the at least one process.
52. The video decoding apparatus of example 50, wherein the execution unit is configured to: selectively skipping, at a CU level, the at least one process, wherein the similarity is a similarity between all prediction samples in two prediction blocks of a current CU; or selectively skipping the at least one process at a sub-block level, wherein the sub-blocks are obtained by dividing the CU, and the similarity is a similarity between prediction samples in corresponding sub-blocks of two prediction blocks of the current CU.
53. The video decoding apparatus of example 49, wherein the execution unit is configured to: selectively skipping the at least one process based on whether the current CU is allowed to perform the DMVR process and a comparison of a value representing the similarity to a predetermined threshold, in the event that the at least one process is selectively skipped based on the similarity.
54. The video decoding apparatus of example 53, wherein the execution unit is configured to: selectively skipping, at a CU level or a sub-block level, a BIO process associated with the current CU if the DMVR process is disabled for the current CU; selectively skipping, at a sub-block level, a BIO process and a DMVR process associated with the current CU if the DMVR process is enabled for the current CU; wherein the BIO process associated with the current CU is performed if the value is greater than a second predetermined threshold, and the BIO process associated with the current CU is skipped if the value is not greater than the second predetermined threshold; wherein the DMVR process associated with the current CU is performed if the value is greater than a third predetermined threshold, and the DMVR process associated with the current CU is skipped if the value is not greater than the third predetermined threshold; wherein the sub-blocks are obtained by dividing the CU.
55. The video decoding apparatus of example 54, wherein: in a case where the DMVR process is disabled for the current CU and the BIO process associated with the current CU is selectively skipped at the CU level, the similarity is a similarity between all prediction samples in the two prediction blocks of the current CU; in a case where the DMVR process is disabled for the current CU and the BIO process associated with the current CU is selectively skipped at the sub-block level, the similarity is a similarity between prediction samples in corresponding sub-blocks of the two prediction blocks of the current CU, and the size of the sub-blocks is a first size; in a case where the DMVR process is enabled for the current CU, the similarity is a similarity between prediction samples in corresponding sub-blocks of the two prediction blocks of the current CU, and the size of the sub-blocks is a second size.
56. The video decoding apparatus of example 54, wherein the second predetermined threshold if the current CU is disabled for the DMVR process is set to be different from the second predetermined threshold if the current CU is enabled for the DMVR process.
57. The video decoding apparatus of example 54, wherein the second predetermined threshold if the current CU is disabled for the DMVR process is set to half the second predetermined threshold if the current CU is enabled for the DMVR process.
58. The video decoding apparatus of example 54, wherein the second predetermined threshold in the case where the DMVR process is disabled for the current CU is set the same as the second predetermined threshold in the case where the DMVR process is enabled for the current CU.
59. The video decoding apparatus of example 55, wherein the first size and the second size are set to be different.
60. The video decoding apparatus of example 55, wherein the first size and the second size are set to be the same.
61. The video decoding apparatus of example 49, wherein the execution unit is configured to: in the case where the at least one process is selectively skipped based on the gradient information, a BIO process associated with the current CU is selectively skipped based on a comparison result of average gradient values of prediction samples in two prediction blocks of the current CU with a fourth predetermined threshold.
62. The video decoding apparatus of example 61, wherein the execution unit is configured to: performing a BIO procedure associated with a current CU if the average gradient value is greater than a fourth predetermined threshold; skipping a BIO process associated with the current CU if the average gradient value is not greater than a fourth predetermined threshold.
63. The video decoding apparatus of example 61, wherein the execution unit is configured to: selectively skipping, at a CU level, a BIO process associated with the current CU, wherein the average gradient value is an average gradient value of all prediction samples in two prediction blocks of the current CU; or selectively skipping, at a sub-block level, a BIO process associated with the current CU, wherein the sub-blocks are obtained by dividing the CU, and the average gradient value is an average gradient value of prediction samples in corresponding sub-blocks of the two prediction blocks of the current CU.
64. The video decoding apparatus of example 49, wherein the execution unit is configured to, in a case where the at least one process is selectively skipped based on both the similarity and the gradient information: perform the BIO process associated with the current CU if both the similarity-based determination and the gradient-based determination indicate that the BIO process is to be performed, and otherwise skip the BIO process associated with the current CU; or perform the BIO process associated with the current CU if either the similarity-based determination or the gradient-based determination indicates that the BIO process is to be performed, and otherwise skip the BIO process associated with the current CU.
65. The video decoding apparatus of example 49, wherein the similarity between prediction samples in two prediction blocks of the current CU is based on at least one of: a difference between prediction samples in two prediction blocks of a current CU; or the difference between corresponding integer reference samples in reference pictures where two prediction blocks of the current CU are located.
66. The video decoding apparatus of example 47, further comprising: a gradient information obtaining unit configured to: obtaining, when the BIO process is performed, gradient information for the BIO process using a gradient filter having fewer filter coefficients (taps) than an 8-tap gradient filter.
67. The video decoding apparatus of example 66, wherein the gradient filter is a 4-tap gradient filter or a 6-tap gradient filter.
68. The video decoding apparatus of example 67, wherein in the case of using a 4-tap gradient filter, the gradient filter coefficients are as follows:
[Table of 4-tap gradient filter coefficients f_grad[p][index] for each fractional sample position p; the table appears only as an image (BDA0003085992190000441) in the original publication, so its values are not reproduced here.]
wherein index in f_grad[p][index] represents the coefficient index of the gradient filter at the fractional sample position p, and, in the case of a 4-tap gradient filter, index takes values in the range [0, 1, 2, 3].
69. The video decoding apparatus of example 67, wherein in the case of using a 6-tap gradient filter, the gradient filter coefficients are as follows:
[Table of 6-tap gradient filter coefficients f_grad[p][index] for each fractional sample position p; the table appears only as images (BDA0003085992190000442 and BDA0003085992190000451) in the original publication, so its values are not reproduced here.]
wherein index in f_grad[p][index] represents the coefficient index of the gradient filter at the fractional sample position p, and, in the case of a 6-tap gradient filter, index takes values in the range [0, 1, 2, 3, 4, 5].
70. A video encoding apparatus, comprising: a dividing unit configured to: dividing a video image into a plurality of coding units (CUs); an acquisition unit configured to: in a case where a current CU among the plurality of CUs is a bi-directionally predicted CU, acquiring information of bi-directionally predicted samples of the current CU; an execution unit configured to: selectively skipping, based on the information of the bi-directionally predicted samples of the current CU, at least one of a bi-directional optical flow (BIO) process and a decoder-side motion vector refinement (DMVR) process associated with the current CU.
71. The video encoding apparatus of example 70, wherein the execution unit is configured to: selectively skipping the at least one process at least one of a CU level and a sub-block level, wherein the sub-blocks are obtained by dividing the CU.
72. The video encoding apparatus of example 70, wherein the information of bi-directionally predicted samples for the current CU comprises at least one of: similarity between prediction samples in two prediction blocks of the current CU; and gradient information associated with prediction samples in the two prediction blocks, wherein the two prediction blocks are a forward prediction block and a backward prediction block of the current CU.
73. The video encoding apparatus of example 72, wherein the execution unit is configured to: selectively skipping the at least one process based on a comparison of a value representing the similarity with a first predetermined threshold in the case where the at least one process is selectively skipped based on the similarity.
74. The video encoding apparatus of example 73, wherein the execution unit is configured to: performing the at least one process if the value is greater than a first predetermined threshold; in the event that the value is not greater than a first predetermined threshold, skipping the at least one process.
75. The video encoding apparatus of example 73, wherein the execution unit is configured to: selectively skipping, at a CU level, the at least one process, wherein the similarity is a similarity between all prediction samples in two prediction blocks of a current CU; or selectively skipping the at least one process at a sub-block level, wherein the sub-blocks are obtained by dividing the CU, and the similarity is a similarity between prediction samples in corresponding sub-blocks of two prediction blocks of the current CU.
76. The video encoding apparatus of example 72, wherein the execution unit is configured to: selectively skipping the at least one process based on whether the current CU is allowed to perform the DMVR process and a comparison of a value representing the similarity to a predetermined threshold, in the event that the at least one process is selectively skipped based on the similarity.
77. The video encoding apparatus of example 76, wherein the execution unit is configured to: selectively skipping, at a CU level or a sub-block level, a BIO process associated with the current CU if the DMVR process is disabled for the current CU; selectively skipping, at a sub-block level, a BIO process and a DMVR process associated with the current CU if the DMVR process is enabled for the current CU; wherein the BIO process associated with the current CU is performed if the value is greater than a second predetermined threshold, and the BIO process associated with the current CU is skipped if the value is not greater than the second predetermined threshold; wherein the DMVR process associated with the current CU is performed if the value is greater than a third predetermined threshold, and the DMVR process associated with the current CU is skipped if the value is not greater than the third predetermined threshold; wherein the sub-blocks are obtained by dividing the CU.
78. The video encoding apparatus of example 77, wherein: in a case where the DMVR process is disabled for the current CU and the BIO process associated with the current CU is selectively skipped at the CU level, the similarity is a similarity between all prediction samples in the two prediction blocks of the current CU; in a case where the DMVR process is disabled for the current CU and the BIO process associated with the current CU is selectively skipped at the sub-block level, the similarity is a similarity between prediction samples in corresponding sub-blocks of the two prediction blocks of the current CU, and the size of the sub-blocks is a first size; in a case where the DMVR process is enabled for the current CU, the similarity is a similarity between prediction samples in corresponding sub-blocks of the two prediction blocks of the current CU, and the size of the sub-blocks is a second size.
79. The video encoding device of example 77, wherein the second predetermined threshold if the current CU is disabled for DMVR procedures is set to be different than the second predetermined threshold if the current CU is enabled for DMVR procedures.
80. The video encoding device of example 77, wherein the second predetermined threshold in the case that the DMVR process is disabled for the current CU is set to half the second predetermined threshold in the case that the DMVR process is enabled for the current CU.
81. The video encoding device of example 77, wherein the second predetermined threshold if the current CU is disabled for the DMVR process is set the same as the second predetermined threshold if the current CU is enabled for the DMVR process.
82. The video encoding apparatus of example 78, wherein the first size and the second size are set to be different.
83. The video encoding apparatus of example 78, wherein the first size and the second size are set to be the same.
84. The video encoding apparatus of example 72, wherein the execution unit is configured to: in the case where the at least one process is selectively skipped based on the gradient information, a BIO process associated with the current CU is selectively skipped based on a comparison result of average gradient values of prediction samples in two prediction blocks of the current CU with a fourth predetermined threshold.
85. The video encoding apparatus of example 84, wherein the execution unit is configured to: performing a BIO procedure associated with a current CU if the average gradient value is greater than a fourth predetermined threshold; skipping a BIO process associated with the current CU if the average gradient value is not greater than a fourth predetermined threshold.
86. The video encoding apparatus of example 84, wherein the execution unit is configured to: selectively skipping, at a CU level, a BIO process associated with the current CU, wherein the average gradient value is an average gradient value of all prediction samples in two prediction blocks of the current CU; or selectively skipping, at a sub-block level, a BIO process associated with the current CU, wherein the sub-blocks are obtained by dividing the CU, and the average gradient value is an average gradient value of prediction samples in corresponding sub-blocks of the two prediction blocks of the current CU.
87. The video encoding apparatus of example 72, wherein the execution unit is configured to, in a case where the at least one process is selectively skipped based on both the similarity and the gradient information: perform the BIO process associated with the current CU if both the similarity-based determination and the gradient-based determination indicate that the BIO process is to be performed, and otherwise skip the BIO process associated with the current CU; or perform the BIO process associated with the current CU if either the similarity-based determination or the gradient-based determination indicates that the BIO process is to be performed, and otherwise skip the BIO process associated with the current CU.
88. The video coding device of example 72, wherein the similarity between prediction samples in two prediction blocks of the current CU is based on at least one of: a difference between prediction samples in two prediction blocks of a current CU; or the difference between corresponding integer reference samples in reference pictures where two prediction blocks of the current CU are located.
89. The video encoding apparatus of example 70, further comprising: a gradient information obtaining unit configured to: obtaining, when the BIO process is performed, gradient information for the BIO process using a gradient filter having fewer filter coefficients (taps) than an 8-tap gradient filter.
90. The video encoding apparatus of example 89, wherein the gradient filter is a 4-tap gradient filter or a 6-tap gradient filter.
91. The video encoding apparatus of example 90, wherein in the case of using a 4-tap gradient filter, the gradient filter coefficients are as follows:
[Table of 4-tap gradient filter coefficients f_grad[p][index] for each fractional sample position p; the table appears only as an image (BDA0003085992190000481) in the original publication, so its values are not reproduced here.]
wherein index in f_grad[p][index] represents the coefficient index of the gradient filter at the fractional sample position p, and, in the case of a 4-tap gradient filter, index takes values in the range [0, 1, 2, 3].
92. The video encoding apparatus of example 90, wherein in the case of using a 6-tap gradient filter, the gradient filter coefficients are as follows:
[Table of 6-tap gradient filter coefficients f_grad[p][index] for each fractional sample position p; the table appears only as an image (BDA0003085992190000482) in the original publication, so its values are not reproduced here.]
wherein index in f_grad[p][index] represents the coefficient index of the gradient filter at the fractional sample position p, and, in the case of a 6-tap gradient filter, index takes values in the range [0, 1, 2, 3, 4, 5].
93. An electronic device, comprising: at least one processor; at least one memory storing computer-executable instructions that, when executed by the at least one processor, cause the at least one processor to perform a video decoding method as in any one of examples 1 to 23 or a video encoding method as in any one of examples 24 to 46.
94. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by at least one processor, cause the at least one processor to perform a video decoding method as in any one of examples 1 to 23 or a video encoding method as in any one of examples 24 to 46.
95. A computer program product comprising computer instructions, wherein the computer instructions, when executed by at least one processor, implement a video decoding method as in any of examples 1 to 23 or a video encoding method as in any of examples 24 to 46.

Claims (10)

1. A video decoding method, comprising:
acquiring, from a bitstream, a plurality of coding units (CUs) into which a video image is divided;
in a case where a current CU among the plurality of CUs is a bi-directionally predicted CU, acquiring information of bi-directionally predicted samples of the current CU;
selectively skipping, based on the information of the bi-directionally predicted samples of the current CU, at least one of a bi-directional optical flow (BIO) process and a decoder-side motion vector refinement (DMVR) process associated with the current CU.
2. The video decoding method of claim 1, wherein selectively skipping at least one of the BIO process and the DMVR process comprises:
selectively skipping the at least one process at least one of a CU level and a sub-block level, wherein the sub-blocks are obtained by dividing the CU.
3. A video decoding method as defined in claim 1 wherein the information of bi-directionally predicted samples of the current CU comprises at least one of:
similarity between prediction samples in two prediction blocks of the current CU; and
gradient information associated with prediction samples in the two prediction blocks,
wherein the two prediction blocks are a forward prediction block and a backward prediction block of the current CU.
4. A video encoding method, comprising:
dividing a video image into a plurality of coding units (CUs);
in a case where a current CU among the plurality of CUs is a bi-directionally predicted CU, acquiring information of bi-directionally predicted samples of the current CU;
selectively skipping, based on the information of the bi-directionally predicted samples of the current CU, at least one of a bi-directional optical flow (BIO) process and a decoder-side motion vector refinement (DMVR) process associated with the current CU.
5. The video coding method of claim 4, wherein selectively skipping at least one of the BIO process and the DMVR process comprises:
selectively skipping the at least one process at least one of a CU level and a sub-block level, wherein the sub-blocks are obtained by dividing the CU.
6. A video decoding apparatus, comprising:
a first acquisition unit configured to: acquiring, from a bitstream, a plurality of coding units (CUs) into which a video image is divided;
a second acquisition unit configured to: in a case where a current CU among the plurality of CUs is a bi-directionally predicted CU, acquiring information of bi-directionally predicted samples of the current CU;
an execution unit configured to: selectively skipping, based on the information of the bi-directionally predicted samples of the current CU, at least one of a bi-directional optical flow (BIO) process and a decoder-side motion vector refinement (DMVR) process associated with the current CU.
7. A video encoding device, comprising:
a dividing unit configured to: dividing a video image into a plurality of coding units (CUs);
an acquisition unit configured to: in a case where a current CU among the plurality of CUs is a bi-directionally predicted CU, acquiring information of bi-directionally predicted samples of the current CU;
an execution unit configured to: selectively skipping, based on the information of the bi-directionally predicted samples of the current CU, at least one of a bi-directional optical flow (BIO) process and a decoder-side motion vector refinement (DMVR) process associated with the current CU.
8. An electronic device, comprising:
at least one processor;
at least one memory storing computer-executable instructions,
wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the video decoding method of any one of claims 1 to 3 or the video encoding method of any one of claims 4 to 5.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by at least one processor, cause the at least one processor to perform the video decoding method of any one of claims 1 to 3 or the video encoding method of any one of claims 4 to 5.
10. A computer program product comprising computer instructions, characterized in that the computer instructions, when executed by at least one processor, implement the video decoding method of any of claims 1 to 3 or the video encoding method of any of claims 4 to 5.
CN202110580687.0A 2020-05-26 2021-05-26 Video decoding method and apparatus, and video encoding method and apparatus Pending CN113727107A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063030284P 2020-05-26 2020-05-26
US63/030,284 2020-05-26

Publications (1)

Publication Number Publication Date
CN113727107A true CN113727107A (en) 2021-11-30

Family

ID=78672800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110580687.0A Pending CN113727107A (en) 2020-05-26 2021-05-26 Video decoding method and apparatus, and video encoding method and apparatus

Country Status (1)

Country Link
CN (1) CN113727107A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination